Search Results

Search found 13404 results on 537 pages for 'george host'.

  • DNS resolution Windows 7 & browsing to locally hosted web site

    - by Aidan Whitehall
    We host two intranet sites, http://intranet/ and http://sales.intranet/, both on the same server on the LAN. Local DNS (a Windows 2003 server) was updated so that both hostnames are CNAMEs pointing to the FQDN of the server that hosts them. On the LAN, Windows XP Professional clients can browse to both sites. Windows 7 Professional clients, however, can browse to the main Intranet site but not to the Sales Intranet (in neither Firefox 3 nor Internet Explorer 8). Using nslookup on the command line on the Windows 7 boxes, intranet and sales.intranet both correctly resolve as CNAMEs of the server hosting them, and that in turn correctly resolves to the host's IP address. So the question is: can anyone think why this might be, or what test to try next? Thank you for any suggestions!
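
    For reference, the zone setup described above amounts to something like the following records (the server's real FQDN is not given in the question, so the name below is an assumption):

        ; illustrative DNS records, BIND-style notation
        intranet        IN  CNAME  webserver.corp.example.
        sales.intranet  IN  CNAME  webserver.corp.example.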

  • Snapshot/rollback for libvirt+KVM?

    - by jtimberman
    I've recently begun using KVM for my development/test environment on a Linux host with 8 GB of memory. Previously I was using VMware Fusion for my virtual environment, but my MacBook only has 2 GB. I tried VMware Server and ESX on the host instead of KVM, but the web UI doesn't run in Firefox on Mac OS X, and we're going to be doing more with KVM anyway. The main VMware feature I miss is robust snapshot/rollback. I understand the snapshot command, but it shuts down the guest OS when it completes, and then copying the disk image to preserve its state is cumbersome. Is this really the best way to manage snapshots on KVM?
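
    If the guest disks are qcow2 images, qemu-img can manage named internal snapshots while the guest is shut down; a minimal sketch (the image path is an assumption):

        # create, list, and roll back to a named snapshot stored inside the image
        qemu-img snapshot -c before-upgrade /var/lib/libvirt/images/devbox.qcow2
        qemu-img snapshot -l /var/lib/libvirt/images/devbox.qcow2
        qemu-img snapshot -a before-upgrade /var/lib/libvirt/images/devbox.qcow2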

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application that allows users to send image/text data to my web server via a POST request. POSTs succeed when no image data is included, but any time I POST with image data the server returns a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck:

        Options +Indexes FollowSymLinks +ExecCGI
        Order allow,deny
        Allow from all

    Web browsers and Android devices can successfully POST with image data to the script; the only device that cannot is the iPhone. POSTing with data to other hosting providers works as expected; it is just this host (ipowerweb.com). I noticed that a POST with data to ANY script on this server returns a 403 Forbidden, yet I can successfully post to another server hosted by ipowerweb; mine just can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host, as moving would be a pain; I will change hosts only as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application, and how can I resolve it? Any advice would be greatly appreciated.

    Edit: as requested, here are the response headers:

        Connection = close;
        "Content-Length" = 217;
        "Content-Type" = "text/html; charset=iso-8859-1";
        Date = "Wed, 12 Jan 2011 19:11:19 GMT";
        Server = "Apache/2";

    Edit: as requested, here are the request headers (oops):

        "Accept-Encoding" = gzip;
        "Content-Length" = 5781;
        "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY";
        "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)";
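
    One way to tell whether the multipart body or the iPhone's headers trigger the block is to replay the request from a desktop machine; a hedged sketch with curl (the URL and form field name are assumptions):

        # mimic the app's User-Agent and multipart upload against the same script
        curl -v -H "User-Agent: YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)" \
             -F "photo=@test.jpg" http://example.com/script.php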

  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    Here is the Nginx log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error, and how can I fix it?
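
    The "directory index ... is forbidden" error means Nginx tried to serve the directory itself instead of handing the request to Passenger. Passenger expects root to point at the Rails application's public/ directory; a hedged sketch, assuming /www/public/redmine is the Redmine application root:

        server {
            listen 80;
            server_name some.another.ru;
            root /www/public/redmine/public;  # the app's public/ dir, not the app root
            passenger_enabled on;
            rails_env development;
        }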

  • Help with NAT and a virtual server

    - by Thanh Tran
    I have a dedicated server running Linux CentOS 5.3 with two IP addresses. I've installed a virtual machine using VMware Server; the host and the guest have a host-only network. Now I want to map the second IP address to the virtual machine so that it can run as a second dedicated server for me. Here is what I do:

        modprobe iptable_nat
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t filter -A FORWARD -s 192.168.78.128 -d 64.85.164.184 -j ACCEPT
        iptables -t nat -A PREROUTING -d 64.85.164.184 -i eth0 -j DNAT --to-destination 192.168.78.128
        iptables -t nat -A POSTROUTING -s 192.168.78.128 -o eth0 -j SNAT --to-source 64.85.164.184

    But it is not working as intended. What is the matter?
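
    Note that DNAT happens in PREROUTING, so by the time inbound packets reach the FORWARD chain their destination is already 192.168.78.128; a FORWARD rule matching -d 64.85.164.184 will never fire. A hedged sketch of rules that match the post-DNAT addresses instead:

        # accept forwarded traffic to and from the guest (post-DNAT addresses)
        iptables -t filter -A FORWARD -d 192.168.78.128 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -t filter -A FORWARD -s 192.168.78.128 -j ACCEPT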

  • Testing Tomcat with Virtual Hosts

    - by Marty Pitt
    I'm trying to test Tomcat virtual hosts on my dev machine (Windows 7 / Tomcat 6). I'd like requests for localhost, test1.localhost and test2.localhost to all route through to the same Tomcat instance. I've edited my hosts file to look as follows:

        127.0.0.1 localhost
        ::1 localhost
        127.0.0.1 test1.localhost
        127.0.0.1 test2.localhost

    and modified the Engine in server.xml as follows:

        <Engine defaultHost="localhost" name="Catalina">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase" />
            <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true"
                  xmlNamespaceAware="false" xmlValidation="false">
                <Alias>test1.localhost</Alias>
                <Alias>test2.localhost</Alias>
            </Host>
        </Engine>

    However, I'm getting a 404 when hitting test1.localhost:8080/myWebApp, although localhost:8080/myWebApp works fine. I can ping test1.localhost fine. What have I missed?
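
    A quick way to separate name resolution from Tomcat's Host routing is to force the Host header while connecting to the loopback address directly; a hedged diagnostic:

        # if this returns the app, the Alias routing works and the problem is name resolution
        curl -v -H "Host: test1.localhost" http://127.0.0.1:8080/myWebApp/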

  • Spam issues while using Postfix as a two-way relay

    - by BenGC
    I want to use a Postfix box to do two things:

    1. Relay mail from any host on the internet addressed to one of my domains to my Zimbra server.
    2. Relay mail from my Zimbra server to any address on the internet.

    To try and accomplish this I have configured Postfix thusly:

        mynetworks = 127.0.0.0/8, zimbra_ip/32
        myorigin = zimbra_server
        mydestination = localhost, zimbra_server
        relay_domains = example.com example.org
        transport_maps = hash:/etc/postfix/transport_map
        local_transport = error:no mailboxes on this host

    transport_map looks like this:

        example.com smtp:[zimbra_server]
        example.org smtp:[zimbra_server]

    Now, this works and passes the Open Relay tests. However, I am seeing in the maillog that the server is relaying spam that has a From: address of <> to domains that are not mine. How do I stop this behavior?
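
    Whether mail may be relayed should depend only on the recipient domain, not the sender; the null sender <> is normal for bounces and cannot simply be blocked. A hedged sketch of the standard anti-relay restriction, which rejects recipients that are in neither mydestination nor relay_domains:

        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination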

  • Can't create a valid symlink under VMWare HGFS

    - by Alexander Gladysh
    Host: OS X 10.6.5
    VMware Fusion: 3.1.2
    Guest: Ubuntu x86 10.10

        $ uname -a
        Linux ubuntu 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux

    I can not create a symlink readable from the guest OS anywhere in the directory mounted with hgfs:

        /mnt/hgfs/projects/tmp$ touch aaa
        /mnt/hgfs/projects/tmp$ ln -s aaa bbb
        /mnt/hgfs/projects/tmp$ less bbb
        bbb: No such file or directory
        /mnt/hgfs/projects/tmp$ ls -la
        total 6
        drwxr-xr-x 1 501 users  136 2010-12-28 18:12 .
        drwxr-xr-x 1 501 users 8602 2010-12-28 18:12 ..
        -rw-r--r-- 1 501 users    0 2010-12-28 18:12 aaa
        lrwxr-xr-x 1 501 users    3 2010-12-28 18:12 bbb -> aaa
        /mnt/hgfs/projects/tmp$ readlink bbb
        aaa

    The same symlink is perfectly accessible on the OS X host. Is there a workaround for this?

  • Trying to install wordpress inside rails app with nginx and fastcgi

    - by pinouchon
    I have a Rails app (let's call it myapp) running at www.myapp.com. I want to add a WordPress blog at www.myapp.com/blog. The web server for the Rails app is Thin (see the upstream block); WordPress runs with php-fastcgi. The Rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

        2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

        upstream myapp {
            server unix:/tmp/thin_myapp.0.sock;
            server unix:/tmp/thin_myapp.1.sock;
            server unix:/tmp/thin_myapp2.sock;
        }
        server {
            listen 80;
            server_name www.myapp.com;
            client_max_body_size 20M;
            access_log /home/myapp/myapp/log/access.log;
            error_log /home/myapp/myapp/log/error.log error;
            root /home/myapp/myapp/public;
            index index.html;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                # Index HTML Files
                if (-f $document_root/cache/$uri/index.html) {
                    rewrite (.*) /cache/$1/index.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://myapp;
                    break;
                }
                # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
            }
            location /blog/ {
                root /var/www/wordpress;
                fastcgi_index index.php;
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /blog/index.php?q=$1 last;
                }
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
                fastcgi_pass localhost:9000; # port to FastCGI
            }
        }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly?

    Edit: I can't even reach port 9000 with telnet, so nothing is listening there:

        $> telnet 127.0.0.1 9000
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused
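
    The telnet test confirms that no PHP FastCGI process is listening on 127.0.0.1:9000, so nginx has nothing to pass the request to. A hedged sketch of starting one with spawn-fcgi (package and binary names are Debian-era assumptions):

        # install a CGI-capable PHP binary plus spawn-fcgi, then start a listener on port 9000
        sudo apt-get install php5-cgi spawn-fcgi
        spawn-fcgi -a 127.0.0.1 -p 9000 -- /usr/bin/php5-cgi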

  • How to use instances with s3 load balancing?

    - by Slay
    I have some questions about instances and load balancing in Amazon S3. I can configure an instance in S3, but I do not understand how to deal with many instances. Currently, my instance is loaded with MySQL, PHP, etc. (all in one). How do I ensure my instances scale? E.g., if I have a site that is supposed to be handled by 3 instances and Amazon RDS, do I need to host my code base on all 3 instances? How do people normally do this? Facebook, for example, has 1000+ servers; do they host their code base on all 1000+ servers? Thanks

  • Difference between *:80 and _default_:80 in Apache2

    - by Johannes Ernst
    I'm trying to understand the difference between the following two terms in the Apache configuration file: *:80 and _default_:80. The documentation here is unclear to me, and the only mailing list conversation that I could find here does not shed any (comprehensible, to me) light on the matter either. I have a bunch of name-based virtual hosts declared like this:

        <VirtualHost *:80>
            ServerName example.com
            ...

    and I'd like to have an entry that fires when none of those match, i.e. when a request comes in without a virtual host name, or with a virtual host name that has not been declared. Should I use *:80 or _default_:80?
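
    For context, with name-based virtual hosting the catch-all is simply the first vhost defined for the matching address and port; requests whose Host header matches no ServerName or ServerAlias fall through to it. A hedged sketch (the name and path are assumptions):

        # listed first among the *:80 vhosts, this one catches unmatched Host headers
        <VirtualHost *:80>
            ServerName catchall.example.com
            DocumentRoot /var/www/default
        </VirtualHost>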

  • Email notification and mail server

    - by Jerr Wu
    I am building a web application, hosted at http://www.linode.com/, with email notifications much like Facebook's. When user A comments on a post, the poster will get an email notification from '[email protected]' containing the comment message written by user A. (Not spam.) I really like Google Apps, but they have a sending limit of 2,000 messages per day (http://support.google.com/a/bin/answer.py?hl=en&answer=166852), which does not suit my case because I cannot have sending limits; there will be many email notifications. I also need company email accounts for team members' use, for which I prefer Google Apps. Since my web application will be hosted on Linode, I am considering "Amazon Simple Notification Service" for the email notifications. My questions are:

    1. Can you recommend any other email service provider that suits my case?
    2. Can I bind company email accounts (ex: [email protected]) to Google Apps and bind [email protected] to another email service provider?

  • How to get Virtual PC to recognize MIDI devices?

    - by bparker
    Hey all. I have an XP Pro virtual machine running inside Virtual PC 2007. My host machine is x64 Windows 7. I have a MIDI keyboard hooked up to my machine via a Turtle Beach USB-to-MIDI 1x1 cable. I have installed the driver and software on my host machine and ran a sound check, and everything appears to be working fine: playback is sent to the MIDI device with no problems. However, when I attempt to install the driver and run a sound check in my XP virtual machine, the device is not found. Other USB devices (mouse, keyboard, flash drives) work fine in the virtual machine, but not the MIDI keyboard. I'm not sure what steps to take to troubleshoot this and get the VM to start recognizing the MIDI keyboard. Any help or suggestions would be greatly appreciated. Thanks.

  • Resolving CloudFlare DNS related mail delivery problems

    - by Andy Castles
    I recently started using CloudFlare and am having a few teething problems. Our domain is netlanguages.com, and while we have a lot of sub-domains listed, we are currently only trialling a few of the servers through the CloudFlare CDN (for example, www.netlanguages.com is enabled for the CDN, netlanguages.com is not). The actual CDN service seems to be reliable, but the problem we are having is with DNS, and specifically with mail delivery. The background is that we have contact forms on our web site which use PHP mail() to send the details to end-users' email addresses, with the "from" address of the messages being [email protected], which is a valid address on our mail server. Most of the mails arrive correctly, but a few specific people are not receiving them. The web server uses qmail to deliver the messages, and the qmail log files show us some of the errors that the receiving mail servers return when they reject the delivery attempt. Two examples:

        Connected to 94.100.176.20 but sender was rejected./Remote host said: 421 DNS problem (interdominios.netlanguages.com). Try again later
        Connected to 213.186.33.29 but sender was rejected./Remote host said: 451 DNS temporary failure (#4.3.0)

    From what I can tell, the receiving SMTP server is doing a DNS lookup of some description on either the host of the "from" email address (netlanguages.com) or the server name given in the EHLO command of the SMTP conversation (in the first example above, interdominios.netlanguages.com), both of which should resolve to non-CloudFlare IP addresses. I've read that the CloudFlare DNS service is very reliable and fast, but both of the problems above seem to point to remote servers being unable to do DNS lookups. I should also point out that we changed our DNS to CloudFlare on 6th Feb, and since then started experiencing these mail delivery problems. On 22nd Feb we moved our DNS away from CloudFlare to see if the issues were related to CloudFlare, and after a few hours delivery began to work. Then on 26th Feb I moved the DNS back to CloudFlare and the delivery problems started again. The issue definitely seems to be related to DNS, but I don't know if it's a configuration issue or something else. Finally, I should say that our two DNS MX records point to non-CDN A-record IP addresses, and interdominios.netlanguages.com (the web and qmail server) also points to a non-CDN A-record IP address. Does anyone know what the problem could be here? Any light you can shed on this will be most appreciated. Many thanks, Andy
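
    To see what the receiving MTAs see, it can help to query the relevant records through an outside resolver; a hedged sketch with dig (record names taken from the question):

        # what remote servers look up when validating the sender domain and EHLO name
        dig +short A netlanguages.com @8.8.8.8
        dig +short A interdominios.netlanguages.com @8.8.8.8
        dig +short MX netlanguages.com @8.8.8.8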

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system, which will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this period of maintenance, but I would like to avoid having to issue an unmount on 200 systems in order to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof on Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
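
    For reference, the server-side unshare/remount cycle would look something like this (the pool and dataset names are assumptions):

        # stop NFS sharing, unmount, then remount and reshare the dataset
        zfs unshare tank/home
        zfs unmount tank/home
        zfs mount tank/home
        zfs share tank/home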

  • CheckPoint SecuRemote / SecureClient on Vista 64

    - by cliff.meyers
    According to this page, CheckPoint's SecuRemote client is not supported on Vista 64: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit%5FdoGoviewsolutiondetails=&solutionid=sk36681

    Unfortunately, in working with the systems team, they will not confirm if the other two clients (SSL Network Extender or Endpoint Connect) are supported by their environment. Does anyone know if it would be possible to do the following?

    1. Install VMware Workstation on my Vista 64 system (host)
    2. Install a Vista 32-bit OS in a virtual machine (guest)
    3. Install the SecuRemote VPN client within the guest (Vista 32)
    4. Get my Vista 64 machine (host) to use the VPN connection from the guest

    Any other ideas are more than welcome.

  • Ubuntu web site hosting & free .tk domain

    - by user5819
    Hello, I am sort of new to web hosting, so sorry if I ask bad questions. I have a PC that runs Ubuntu; I installed Apache, and now I host a web site, but I needed a domain name, so I found out that .tk domains are free. The site works when typing 192.168.1.x in the browser (x = a number), but when I register the IP at dot.tk it wants one that looks like 79.117.x.x, and that's where I get stuck. I think I managed to make my IP address static, but it still looks like 192.168.1.x, and I can't enter that because the site says: "This IP address is not valid". Why must it be the address that looks like 79.117.x.x, why won't it work with the internal static one, and what can I do to host my site with a .tk domain name? PS: I'm using a Cisco router that's connected to the computer via a cable.

  • Set up Glassfish connection pool to talk to a database on a Ubuntu VPS

    - by Harry Pham
    On my Ubuntu VPS I have a MySQL server and a GlassFish 3.0.1 application server running, and I am having a hard time getting GlassFish to successfully ping the database. Here is my GlassFish setup (assume x.y.z.t is the IP of my VPS):

        Resource Type: javax.sql.ConnectionPoolDataSource
        User: root
        DatabaseName: scholar
        Url: jdbc:mysql://x.y.z.t:3306/scholar
        URL: jdbc:mysql://x.y.z.t:3306/scholar
        Password: xxxx
        PortNumber: 3306
        ServerName: x.y.z.t

    Inside glassfish3/glassfish/lib I have mysql-connector-java-5.1.13-bin.jar. Inside the database, in the mysql table, here is the result of the query select User, Host from user;

        +------------------+-----------+
        | User             | Host      |
        +------------------+-----------+
        | root             | 127.0.0.1 |
        | debian-sys-maint | localhost |
        | root             | localhost |
        | root             | yunaeyes  |
        +------------------+-----------+

    Now, from my machine, if I try to connect to this db via MySQL browser (MySQL client software), well, I can't. From the table above it seems it only allows localhost connections to this db. Keep in mind that both my db and my GlassFish are on the same VPS. Please help.
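
    The user table above is the likely culprit: root may only connect from local hostnames, so a TCP connection addressed to x.y.z.t is rejected. A hedged sketch of granting access for that address (a dedicated user is safer than remote root; the credentials are the question's placeholders):

        -- run in the mysql client on the VPS
        CREATE USER 'scholar'@'x.y.z.t' IDENTIFIED BY 'xxxx';
        GRANT ALL PRIVILEGES ON scholar.* TO 'scholar'@'x.y.z.t';
        FLUSH PRIVILEGES;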

  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on Red Hat, and service varnish start fails after I added a vcl_hash section. I started varnishd and then loaded the VCL using vcl.load default default.vcl:

        Message from VCC-compiler:
        Unknown variable 'req.hash'
        At: ('input' Line 24 Pos 9)
                set req.hash += req.url;
        --------########------------
        Running VCC-compiler failed, exit 1

    cat default.vcl:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }
        sub vcl_recv {
            if (req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$") {
                unset req.http.cookie;
            }
        }
        sub vcl_hash {
            set req.hash += req.url;
            set req.hash += req.http.host;
            if (req.httpCookie == "JSESSIONID") {
                set req.http.X-Varnish-Hashed-On = regsub(req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1");
                set req.hash += req.http.X-Varnish-Hashed-On;
            }
            return(hash);
        }

    What could be wrong?
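
    For what it's worth, req.hash was removed in Varnish 3.x; vcl_hash adds items with hash_data() instead, which is why the compiler reports an unknown variable. A sketch of the 3.x equivalent of the section above (the JSESSIONID handling keeps the question's regsub and is untested):

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            if (req.http.Cookie ~ "JSESSIONID") {
                hash_data(regsub(req.http.Cookie, "^.*?JSESSIONID=([a-zA-Z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1"));
            }
            return (hash);
        }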

  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    I've already set up my local network configuration to be monitored by Nagios3, and found a problem: Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which has an individual virtual host and a mass virtual host for application development. I get this status message:

        HTTP WARNING: HTTP/1.1 404 Not Found

    I used the Nagios plugins to check by hand (servername is the ServerName of the vhost from the Apache configuration):

        /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52

    receiving this status message:

        HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0

    But when I check like this:

        /usr/lib/nagios/plugins/check_http -I 192.168.1.52

    I get the same status message as the warning, so I assume I don't have Nagios completely set up: it doesn't pass the vhost name for that server. Where should I look to fix this warning?
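
    The gap between the two manual checks suggests the service definition never passes -H, so check_http hits the default vhost. A hedged sketch of a command/service pair that forwards the vhost name (object names are assumptions; stock check_http command definitions vary by distribution):

        # a command that takes the vhost as an argument
        define command {
            command_name check_http_vhost
            command_line /usr/lib/nagios/plugins/check_http -H '$ARG1$' -I '$HOSTADDRESS$'
        }
        define service {
            use                  generic-service
            host_name            debian-server
            service_description  HTTP vhost
            check_command        check_http_vhost!servername
        }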

  • Rewrite all .html extensions and remove index.html in Nginx

    - by Pardoner
    How would I remove all .html extensions, as well as any occurrence of index.html, from a URL string in Nginx?

        http://www.mysite/index.html           to  http://www.mysite
        http://www.mysite/articles/index.html  to  http://www.mysite/articles
        http://www.mysite/contact.html         to  http://www.mysite/contact
        http://www.mysite/foo/bar/index.html   to  http://www.mysite/foo/bar

    EDIT: Here is my conf file:

        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.com;
            index index.html index.htm;
            access_log /var/log/nginx/staging.mysite.com.log spiegle;
            #error_page 404 /404.html;
            #error_page 500 503 /500.html;
            rewrite ^(.*/)index\.html$ $1;
            rewrite ^(/.+)\.html$ $1;
            rewrite ^(.*/)index\.html$ $scheme://$host$1 permanent;
            rewrite ^(/.+)\.html$ $scheme://$host$1 permanent;
            location / {
                rewrite ^/about-us /about permanent;
                rewrite ^/contact-us /contact permanent;
                try_files $uri.html $uri/ /index.html;
            }
        }
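
    A hedged sketch of one common approach: redirect the .html forms once, then serve the extensionless URIs internally via try_files (patterns and ordering are assumptions, untested):

        # permanent redirects: strip index.html and .html from requested URLs
        rewrite ^(.*/)index\.html$ $1 permanent;
        rewrite ^(/.+)\.html$ $1 permanent;

        location / {
            # serve /contact from contact.html, directories from their index
            try_files $uri $uri.html $uri/ =404;
        }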

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 (10.0.3.1), with an LXC guest running at 10.0.3.10. This is my firehol config:

        version 5

        trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
        trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`
        blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

        interface lxcbr0 virtual
            policy return
            server "dhcp dns" accept

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            route all accept

        interface any world
            protection strong
            # Outgoing, these protocols are allowed to everywhere
            client "smtp pop3 dns ntp mysql icmp" accept
            # These (incoming) services are available to everyone
            server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
            # Outgoing, these protocols are only allowed to known servers
            client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"

    On my host I can connect to port 80 only on the "trusted servers"; in my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add or change so that my guest(s) inherit the rules of the eth0 interface?
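
    FireHOL interface blocks match only traffic addressed to or from the host itself; the guest's forwarded traffic is governed by the virtual2internet router, and route all accept lets everything through. A hedged, untested sketch of restricting the router the same way as the eth0 client rules:

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            route "smtp pop3 dns ntp mysql icmp" accept
            route "http https webcache ftp ssh" accept dst "${trusted_servers}"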

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder config, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via an additional resolv.conf-style file, which dnsmasq supports. But I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine run the local DNS forwarder? Also, dnsmasq seems to want root permissions when running in an OpenVZ container; does anyone know of any workarounds for that?
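
    For reference, a minimal dnsmasq.conf for the role described above might look like this (addresses, paths and lease times are assumptions):

        # forward unknown names to upstream servers listed in a separate file
        resolv-file=/etc/resolv.dnsmasq.conf
        # serve local office names from an extra hosts file
        addn-hosts=/etc/hosts.office
        # act as a backup DHCP server for the office LAN
        dhcp-range=192.168.1.100,192.168.1.200,12h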

  • Allow image upload - most efficient way?

    - by K-P
    Hey everyone, in my site I currently only allow users to import images from other sites rather than uploading them themselves. The main reason for this is that I don't have much storage space on my host (relatively speaking), and the host charges quite a bit for additional space. What are the alternatives for hosting the images users upload (max 1 MB each)? Would it be a good idea to purchase separate cheap hosting with "unlimited space" (I know that's not true, but I'm guessing it's more than 1 GB)? Or are there caveats with this approach (e.g. security, since the site should not be browsable but accessed via another server)? Are there alternative ideas I could employ? Thanks for any suggestions.
