Search Results

Search found 12836 results on 514 pages for 'host mechanic'.

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried this config (taken from the docs), but it didn't work:

        # /etc/nginx/nginx.conf
        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

        # /etc/nginx/sites-available/website
        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives I get a HIT, but I don't want those "dumb" settings; I need to control caching from within my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 seconds. Instead, nginx will store the response for 1h because of those directives. I was wondering whether I should try proxy_cache instead, but I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this:

        The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48,
        Cache-Control: private and no-store only since 0.7.66, though.
        Vary handling is not implemented.

    nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy offered by Symfony2 (which I turn off in order to use nginx's). That proxy interprets the headers correctly, so I think my backend is doing it right.
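    For reference, the backend side of this is purely a matter of response headers. A minimal PHP sketch of a page that should expire from the cache after 10 seconds, assuming the fastcgi_cache_valid overrides above stay commented out:

        <?php
        // Sent from the php-fpm backend; per the docs quoted above, the
        // fastcgi cache honors Cache-Control/Expires since 0.7.48.
        header('Cache-Control: public, s-maxage=10');
        echo 'cacheable for 10 seconds';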

    Read the article

  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

        +-- public
        |   +-- components
        |   +-- css
        |   +-- img
        +-- routes
        +-- views

    Essentially, I have the root set to public. I want all requests destined for

        /components/
        /css/
        /img/

    to check whether their appropriate destinations exist on disk. However, I don't want requests to other directories to even run an IO operation:

        /foo/asdf
        /bar
        /baz/index.html

    None of those should result in the disk being touched. I have a stanza that does the proxy to node.js:

        location @proxy {
            internal;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

        location /components/ { try_files $uri @proxy; }
        location /css/ { try_files $uri @proxy; }
        location /img/ { try_files $uri @proxy; }

    However, there is nothing I can find that will give me

        location / { try_files @proxy; }

    How do I get the effect I want?
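    One arrangement that avoids disk access for everything else is to collapse the three prefixes into a single regex location and have the catch-all proxy directly, since matching regex locations take precedence over the bare "/" prefix. A sketch, untested, reusing the directives from the stanza above:

        # only these prefixes ever stat() the filesystem
        location ~ ^/(components|css|img)/ {
            try_files $uri @proxy;
        }

        # everything else goes straight to node.js -- no IO
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://localhost:3030;
            proxy_redirect off;
        }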

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using 1 multi-domain or wildcard certificate. I wish to have an ASP.NET site running on IIS with 2 SSL domains sharing 1 web application but using separate certificates. Assuming I have 2 certificates, this can be solved on IIS7 as follows:

        Web Application1:
            Binding 1: http, 80, IP Address *, Host Name *
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
            Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is:

        Web Application1:
            Binding 1: http, 80, IPADDRESS1
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
        Web Application2:
            Binding 1: http, 80, IPADDRESS2
            Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates, 2 IP addresses, and 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?
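    If the sites are scripted rather than clicked together, the IIS7 arrangement can be expressed with appcmd plus netsh. A sketch using the question's placeholder names (the thumbprints and GUIDs are made up):

        REM IIS7: add both https bindings to the single site
        appcmd set site /site.name:"WebApplication1" /+bindings.[protocol='https',bindingInformation='IPADDRESS1:443:']
        appcmd set site /site.name:"WebApplication1" /+bindings.[protocol='https',bindingInformation='IPADDRESS2:443:']

        REM bind each certificate (by thumbprint) to its IP:port
        netsh http add sslcert ipport=IPADDRESS1:443 certhash=THUMBPRINT1 appid={00000000-0000-0000-0000-000000000001}
        netsh http add sslcert ipport=IPADDRESS2:443 certhash=THUMBPRINT2 appid={00000000-0000-0000-0000-000000000002}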

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch, which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 through 15 on the private network have no problem connecting to the server, but when user 16 tries to connect, the request times out and that user cannot reach the server at all. It's not just the PHP application: it affects every service on the server. While the 15 users are using the application, the server doesn't even answer pings. I haven't set any special limit in Apache's config or MySQL, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card's configuration files that might be causing this? Or should I suspect the router's or switch's configuration?

    UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which caps the backlog of pending connections to the server, and kernel.shmmax, which sets the maximum size of a shared memory segment.
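    For reference, the planned test boils down to something like this (the values are illustrative, not recommendations):

        # /etc/sysctl.conf
        net.core.somaxconn = 1024
        kernel.shmmax = 268435456

        # apply without rebooting:
        #   sudo sysctl -p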

    Read the article

  • Wireless driver activation issue on a Compaq C700 in Ubuntu 9.04

    - by Fazil
    I am using Ubuntu 9.04 and can't get my wireless working. I activated madwifi under Administration > Hardware Drivers, but the wireless still could not be activated. When I type lspci I get the following:

        00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 04)
        00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 04)
        00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 04)
        00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 04)
        00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 04)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f4)
        00:1f.0 ISA bridge: Intel Corporation 82801HEM (ICH8M) LPC Interface Controller (rev 04)
        00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 04)
        01:00.0 Ethernet controller: Atheros Communications Inc. AR242x 802.11abg Wireless PCI Express Adapter (rev 01)
        02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    But when I tried in Windows, I found that the driver for my laptop is:

        atheros AR5007 802.11b/g WiFi Adapter

    What can I do to solve this problem?
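    The AR242x in the lspci output is usually handled by the in-kernel ath5k driver rather than madwifi on this kernel generation. A quick check, assuming the module isn't blacklisted:

        sudo modprobe ath5k    # load the in-kernel Atheros driver
        lsmod | grep ath       # confirm it loaded
        iwconfig               # see whether a wireless interface appeared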

    Read the article

  • Uninstalled Server 2008, now router won't handle DHCP

    - by john
    My setup is this: a server behind a router; the router has the server and a switch connected to it, with multiple computers behind the switch. The router used to serve DHCP and DNS. A couple of days ago I installed AD, DNS, and DHCP on the server, and the server gave out IPs. For various reasons we had to uninstall the domain on our server, so I removed AD, DHCP, and DNS from the roles and set the router back to serving DHCP and DNS. Now I can't get computers on the network. I reset my router back to factory defaults, and if I plug a computer directly into the router I can get an IP address, but the computers behind the switch can't get an IP address and can't see the router. All my computers say "unidentified network", and if I ping the router it says the host is unreachable. On the other hand, my wireless devices are fine and connect with no problem. But for the desktops, ipconfig /release doesn't release anything and /renew can't find a server to renew from. My router log shows several FIN scans, but they are from innocuous websites (Google, Netgear), and it shows a couple of smurf attacks, but they are all from my external IP. Any ideas? The server isn't even connected to the router right now, and all the computers are set to dynamic IP addresses. I don't know what else to try. Any help?
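    A few standard checks from one of the affected desktops may narrow it down (plain Windows commands, nothing specific to this setup):

        :: a 169.254.x.x address here means DHCP discovery itself is failing
        ipconfig /all

        :: clear the resolver cache and reset the network stack, then reboot
        ipconfig /flushdns
        netsh winsock reset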

    Read the article

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today I got a lot of calls from a single IP. The IP:

        - tried to access login pages with POST and GET methods
        - tried to use nginx as a proxy (GET http://...)
        - searched the images, js, and css folders
        - tried to inject -d allow_url_fopen=1 and something similar

    Most of the calls ended with 404. So I updated my nginx config like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }
            }
        }

    I was getting approximately 50 requests per second from that IP. Will the config above avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx, but I am confused by nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the settings above before restarting fail2ban and got thousands of IPs matched; when I removed php, nothing matched. Do I need \.php| in nginx-noscript.conf? Does using mod_security and fail2ban together cause any problems? While searching today I learned that mod_security is available for nginx too, so I am planning to use it as well.
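    Since this box itself serves PHP, keeping \.php| in the filter will match (and eventually ban) legitimate visitors. A sketch of the filter without it, on the assumption that nothing legitimate on the site ends in the remaining extensions:

        # nginx-noscript.conf -- match probes for script types this server never serves
        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =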

    Read the article

  • Seeing traffic destined for other people's servers in wireshark

    - by user350325
    I rent a dedicated server from a hosting provider. I ran Wireshark on my server so that I could see incoming HTTP traffic destined for my server. Once I ran Wireshark and filtered for HTTP, I noticed a load of traffic, but most of it was not for anything hosted on my server and had destination IP addresses that were not mine, with various source IP addresses. My immediate reaction was to think that somebody was tunnelling their HTTP traffic through my server somehow. However, when I looked closer I noticed that all of this traffic was going to hosts on the same subnet, and all of those IP addresses belonged to the same hosting provider I was using. So it appears that Wireshark was capturing traffic destined for other customers whose servers are attached to the same part of the network as mine. Now, I always assumed that on a switched network this should not happen, since the switch will only send data to the required host and not to every attached box. I assume in this case that other customers would also be able to see data going to my server. As well as the potential privacy concerns, this would surely make ARP poisoning easy and allow others to steal IP addresses (and therefore domains and websites)? It would seem odd for a network provider to configure the network in such a way. Is there a more rational explanation here?
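    One quick experiment is to capture with promiscuous mode disabled; if the foreign traffic disappears, the frames were reaching your port (flooded or mirrored by the provider's switch) but were never actually addressed to your NIC:

        # -p disables promiscuous mode; replace eth0 with your interface
        sudo tcpdump -p -i eth0 port 80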

    Read the article

  • Hyper-V VSS writer not making current copies [migrated]

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of three scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to back up a VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System drives need to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on

        begin backup
            add volume D: alias VirtualDisk
            add volume C: alias SystemDrive

            # verify the "Microsoft Hyper-V VSS Writer" will be included in the snapshot
            # NOTE: the writer GUID is exclusive to this install/machine; it must be changed on other machines!
            writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
            create
        end backup

        # Backup is exposed as drive X:; make sure drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the script above, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. While the Hyper-V writer is active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies: they are missing files, users, and updates that exist on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
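    Before digging into the scripts, it may be worth confirming that the Hyper-V writer reports a healthy state between runs, since a writer left in a failed state after a previous snapshot is a common source of odd backup behavior:

        REM from an elevated prompt on the host
        vssadmin list writers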

    Read the article

  • How can I rapidly switch hosts?

    - by EAMann
    I'm in the process of migrating a forum setup from one version of the software on one machine (older shared Windows host) to a new VPS (Windows Server 2008). To install the software, I used my hosts file to temporarily point the domain at the new IP address. To see the old site, I obviously re-edit the hosts file to remove the reference. But this leaves me constantly adding/removing a # from my hosts file just so I can switch back and forth between the two servers. Is there a way to do this more rapidly? I've found a handful of toggling batch scripts, but all they do is automate the addition/removal of the # character ... so there's still a noticeable lag where I have to repeatedly hit F5 to force my system to detect the new settings. Ideally, I could view both servers at the same time on the same machine. Maybe one through a regular browser session and one through some kind of a proxy. Unfortunately, I don't have the first idea how to set that up. Ideas?
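    One small note on the toggling scripts: the lag after a hosts-file change is usually the Windows DNS resolver cache rather than the file itself, so flushing the cache from the same batch script should make the switch take effect immediately:

        @echo off
        REM after the script rewrites the hosts file, flush the resolver cache
        ipconfig /flushdns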

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess and a <Location> is present

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication and authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into an .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are then visible, but the remaining ones from <Directory> are not. I don't want to move all the directives into each .htaccess. When I add a <Location> in addition to the <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?

    Read the article

  • How to get the IP address of a PPP (Point-to-Point Protocol) network interface?

    - by Xsmael
    I have a Linux machine with two network interfaces, and I'd like to get the IP address of the PPP interface w1g1, but it doesn't show up in ifconfig. There is a public IP on the PPP interface, but there is no internet connection; I'm trying to troubleshoot, but I need the IP address of the interface and can't get it. ifconfig:

        eth0      Link encap:Ethernet  HWaddr 00:30:48:8D:F0:2C
                  inet addr:192.168.2.254  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::230:48ff:fe8d:f02c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:9970 errors:0 dropped:567 overruns:0 frame:0
                  TX packets:4338 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:1441024 (1.3 MiB)  TX bytes:915814 (894.3 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:675 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:675 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:50659 (49.4 KiB)  TX bytes:50659 (49.4 KiB)

        w1g1      Link encap:Point-to-Point Protocol
                  UP POINTOPOINT RUNNING NOARP  MTU:240  Metric:1
                  RX packets:748994 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:748992 errors:0 dropped:0 overruns:0 carrier:3
                  collisions:0 txqueuelen:100
                  RX bytes:179758560 (171.4 MiB)  TX bytes:179758080 (171.4 MiB)
                  Interrupt:177 Memory:f881c400-f881e3ff

    w1g1 is connected to a modem by an RJ45-to-serial cable, and the modem is connected to the phone line. The modem is a Nokia DNT2Mi (you can see it here). Routing table:

        192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.254
        169.254.0.0/16 dev eth0  scope link
        default via 192.168.2.180 dev eth0
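    A couple of standard iproute2 commands will often report an address even when ifconfig's summary omits it (nothing here is specific to this hardware):

        ip addr show w1g1       # point-to-point links show "inet <local> peer <remote>"
        ip route show dev w1g1  # a route bound to the interface reveals the local src address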

    Read the article

  • What are these weird IP address connections in Resource Monitor?

    - by bill
    I decided to check out Resource Monitor (on the Performance tab in Task Manager, Windows 7) and noticed in the Network section that the 'System' image name kept making a bunch (~5 at a time) of connections to random IP addresses, showing anywhere from 1-500 bytes/sec sent. They would stay connected for 1-2 minutes. All web browsers were closed. So, the first thing I did was run a trace from network-tools.com on some of these IP addresses: 8 of 10 were outside the US and did not resolve to any host name. Of the 10 IP addresses I traced, 2 were in the US, 4 showed origins in China, and one each in Algeria, Russia, Pakistan, and Korea (!). The next thing I did was turn off my wireless card, watch the connections disappear, then turn the card back on, and within 30 seconds more random connections were created by System, with different IP addresses from the first time. Next I opened Task Manager, chose Show Processes From All Users, and killed just about everything that wasn't (what appeared to be) a Windows process. I turned on wi-fi, and again within 30 seconds random IP addresses connected for about a minute at a time, new ones coming and going. I occasionally use BitTorrent on this machine, but there was definitely no process that seemed related to it running after I went through Task Manager, and BitTorrent wasn't open to begin with. So, any ideas on what these connections might be for? I have been using Ad-Aware Free and AVG Free on this computer for a while now, always up to date.
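    To pin down who actually owns those sockets, netstat from an elevated prompt can confirm whether they belong to the kernel itself (PID 4, which Resource Monitor labels 'System') or to a user-mode process:

        :: -n numeric addresses, -o owning PID, -b owning executable (needs admin)
        netstat -n -o -b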

    Read the article

  • How to automatically start a VM created by virt-manager?

    - by Jeff Shattock
    I have created a virtual machine with virt-manager that runs on kvm/qemu. The machine works well when started through virt-manager. However, I would like to be able to start and stop the VM through a script in init.d, so that it comes up and down along with the host. I need to have virt-manager show that the machine is running, and to be able to connect to its console through there. When I use the command line that is produced by running ps -eaf | grep kvm after starting the VM through virt-manager, I get some console messages about redirected character devices, but the machine does start and runs properly. However, I do not get any indication from virt-manager that it has started. How can I modify the command line to get virt-manager to pick up the running VM? Is there anything else about the command line that should change when starting outside of virt-manager? The command line is (slightly reformatted for readability):

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name BORON \
          -uuid fa7e5fbd-7d8e-43c4-ebd9-1504a4383eb1 \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/BORON.monitor,server,nowait \
          -monitor chardev:monitor -localtime -boot c \
          -drive file=/dev/FS1/BORON,if=ide,index=0,boot=on,format=raw \
          -net nic,macaddr=52:54:00:20:0b:fd,vlan=0,name=nic.0 \
          -net tap,fd=41,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 \
          -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:1 -k en-us -vga cirrus
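    A caveat worth noting: a guest launched by invoking kvm directly is invisible to libvirt, which is what virt-manager actually talks to, so no amount of massaging the raw command line will make it show up. The usual alternative is to drive libvirt itself from the init script; a sketch, with the domain name taken from the -name flag above:

        # start the libvirt-defined domain; virt-manager will see it immediately
        virsh start BORON

        # or let libvirt bring it up automatically whenever the host boots
        virsh autostart BORON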

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes", but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients would connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available redirection services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kind of defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

    Read the article

  • "Server not found" errors all over after Wordpress installation

    - by picardo
    I uploaded my WordPress blog from my local machine to Slicehost and then pointed the domain name to the IP address. Then I installed the blog as normal. Once I went to wp-login.php to log in, though, I started getting "Server not found" errors. That was strange, because the server process was still running; I checked many times. I can't see anything wrong in the error log, or in the access log either. This doesn't only affect WordPress: I can't access phpMyAdmin either now, which was mapped to a subdirectory of the same domain address. What is going on? Can anyone help?

    Edit: the blog is located on a subdomain. It's still accessible by IP address. The virtual host config has ServerName and ServerAlias both set to blog.mysite.com. When I changed those and restarted Apache, phpMyAdmin came back.

    Edit: also, it's not a propagation issue, because I installed the blog from the domain name. It was only when I tried to log in to the admin section that I started getting these errors.
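    For reference, a sketch of the vhost being described (the DocumentRoot path is an assumption; the names are from the question):

        <VirtualHost *:80>
            ServerName blog.mysite.com
            # ServerAlias is for *additional* names; repeating the ServerName does nothing
            ServerAlias www.blog.mysite.com
            DocumentRoot /var/www/blog
        </VirtualHost>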

    Read the article

  • Localhost stops resolving/serving local site after a few clicks in IIS 7.5

    - by Jo-Pierre
    I previously searched for and tried the answer from an earlier question, but I'm not sure it was ever resolved, and I couldn't comment on it to find out more, so I decided to post a new question. The previous question: http://serverfault.com/questions/314333/localhost-stop-resolving-after-a-few-minutes-iis-7-5

    So I set up a new website on Windows 7 with IIS 7.5. I give it a host header, add an entry for it against 127.0.0.1 in the hosts file, and browse the site. After about the second or third click around the local site, it starts hanging: it looks like it's trying to load, but eventually Firefox comes back with "The connection was reset" (this takes about 30-50 seconds). I then used a program like CurrPorts to view the listening ports, and for the initial request all seems good. But once the site is hanging, I don't see the hit coming through anymore; it's as if the browser loses touch with IIS or something. If I open a different browser, it works fine for about two clicks, then the same problem. Does anyone know what could be causing this, or how to resolve it?
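    For reference, the override being described is just a hosts-file line plus a matching IIS binding; a hypothetical pair (site name and host header are made up):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    mysite.local

        REM matching IIS 7.5 binding via appcmd (run elevated)
        appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:mysite.local']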

    Read the article

  • Cannot connect to MySQL on RDS (Amazon Web Services) from my laptop

    - by Bruno Reis
    I'm having some trouble connecting to a MySQL 5.1 server on an RDS instance on AWS from my laptop. The detailed description of the problem is here: https://forums.aws.amazon.com/thread.jspa?messageID=323397

    In short: I have two MySQL servers, both with the same DB configuration and firewall (security group) configuration. One of them works fine: I can connect to it from my EC2 instances (i.e., from inside the AWS cloud) and from my laptop. The other one doesn't: I can connect from my EC2 instances but not from my laptop. The symptom: a connection attempt from my laptop just hangs and then times out, as if there were a firewall blocking me (i.e., silently dropping my SYN packets). I must say that everything had been working fine for a very long time; this problem began suddenly, three days ago, without any modifications to DB parameters or the security groups. My current analysis of the situation:

        - The firewall (i.e., security group) cannot be the problem: both MySQL servers share the same firewall configuration, and I can connect to one of them but not to the other. Later on, I even added a rule to allow inbound connections from 0.0.0.0/0 (i.e., I turned off the firewall): nothing. I also created a new, fresh security group and changed this instance's SG to the new one (to which I first added my IP address, and then 0.0.0.0/0), but still nothing.
        - The credentials cannot be the problem: I use the same ones from my laptop and from my EC2 instances, and the user (what Amazon calls the master user) has a host of '%' in the database.
        - MySQL is not blocking my IP due to, say, too many failed connection attempts: I've run FLUSH HOSTS on the database, and I also tried to connect from many different source IP addresses, even from all around the world through a VPN proxy service.

    What could I be missing? I'm asking here because it's been about 36 hours since I posted on the AWS forums and got no answer at all over there... someone here might have a solution! Any input is really appreciated; I'm out of ideas. Thanks!
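    When a connection hangs instead of being refused, it is usually dying at the network layer before MySQL ever answers. A quick check from the laptop that separates the two cases (the endpoint below is a placeholder):

        # -v verbose, -z only test whether the TCP handshake completes
        nc -vz myinstance.abc123.us-east-1.rds.amazonaws.com 3306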

    Read the article

  • Uploading to another domain gives HTTP code 405

    - by dragon112
    I'm trying to upload a file (which can be quite large) from the website on one server to the backend of another server using plupload. Let's say:

        domain 1 = http://www.websitedomain.com/uploadform
        domain 2 = http://www.backenddomain.com/uploadhandler

    Trying to upload, the browser sends the following:

        OPTIONS /main/uploadnetwork.php HTTP/1.1
        Host: backenddomain.com
        Connection: keep-alive
        Access-Control-Request-Method: POST
        Origin: http://www.websitedomain.com
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
        Access-Control-Request-Headers: origin, content-type
        Accept: */*
        Referer: http://www.websitedomain.com/uploadform
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: nl-NL,nl;q=0.8,en-US;q=0.6,en;q=0.4
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        DNT: 1

    But when I try to start the upload, the server returns the following:

        HTTP/1.1 405 Method Not Allowed
        Allow: GET, HEAD, OPTIONS, TRACE
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        X-Powered-By-Plesk: PleskWin
        Date: Mon, 01 Oct 2012 12:41:57 GMT
        Content-Length: 999

    After doing some research, I found out that a browser does this to check whether the server will accept the intended message. It looks like my server doesn't feel like accepting a simple POST call, even though I use POST all the time. The Google Chrome console gives the following error:

        XMLHttpRequest cannot load http://www.backenddomain.com/uploadhandler.
        Origin http://www.websitedomain.com is not allowed by Access-Control-Allow-Origin.

    Does anyone know how to stop the browser from checking, or how I can tell my server to just accept the POST?
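    The preflight can't be disabled for a cross-origin POST with these headers, but the backend can be told to answer it. A sketch for IIS 7.5, as a web.config fragment on backenddomain.com (the values mirror the request above):

        <system.webServer>
          <httpProtocol>
            <customHeaders>
              <add name="Access-Control-Allow-Origin" value="http://www.websitedomain.com" />
              <add name="Access-Control-Allow-Methods" value="POST, OPTIONS" />
              <add name="Access-Control-Allow-Headers" value="origin, content-type" />
            </customHeaders>
          </httpProtocol>
        </system.webServer>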

    Read the article

  • How to add a user to the wheel group?

    - by Natasha Thapa
    I am trying to add a user to the wheel group on an Ubuntu server:

        sudo usermod -aG wheel john

    I get:

        usermod: group 'wheel' does not exist

    In my /etc/sudoers I have this:

        # sudoers file.
        #
        # This file MUST be edited with the 'visudo' command as root.
        #
        # See the sudoers man page for the details on how to write a sudoers file.
        #

        # Host alias specification

        # User alias specification

        # Cmnd alias specification

        # Defaults specification

        # User privilege specification
        root ALL=(ALL) ALL
        %root ALL=(ALL) NOPASSWD: ALL

        %wheel ALL=(ALL) NOPASSWD: ALL

    Do I have to do a groupadd for wheel?
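    Yes: wheel is a BSD/Red Hat convention and does not exist on Ubuntu by default, so the group has to be created before usermod can add anyone to it:

        sudo groupadd wheel
        sudo usermod -aG wheel john

    Alternatively, Ubuntu's own equivalent is the sudo group (historically admin), which the distribution's stock sudoers already references:

        sudo usermod -aG sudo john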

    Read the article

  • What are the best configuration settings for WordPress and MySQL on a Win2008 + IIS7 stack?

    - by holiveira
    I currently have four blogs that use WordPress, running at a shared hosting company. These blogs have a considerable number of visits, and I'm constantly receiving warnings from the hosting company saying that I'm consuming too much server CPU. Considering that I have a dedicated server at another company with plenty of idle resources (a quad-core 2.5GHz Xeon with 8GB of RAM, running Win2008), I'm planning to move the blogs to this server in order to have some more freedom. I currently use this server to host some web applications using ASP.NET and SQL Express. I've installed a blog to test and it worked fine, but some issues appeared and raised some questions in my mind:

        - How do I properly set the permissions on the folders used by WordPress plugins? That is, what permissions should I set for the IIS user in certain folders so that the plugins work correctly?
        - What's the best caching plugin to use, considering this is a Windows server? At the previous hosting company I used WP Super Cache, but that was a Linux stack. Or should I skip the caching plugins and use IIS7's dynamic caching feature?
        - How can I optimize the MySQL server running on this machine (especially the settings regarding memory and caching)?
        - How can I protect the admin folders against attacks?

    I know some people will advise me not to run WordPress on a Windows stack, but that's my only choice. I don't even know where to start managing a LAMP stack, and I have neither the time to do so nor the money to rent another server.
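    On the MySQL point, a conservative starting point for my.ini on a box that also runs IIS and SQL Express might look like this (the values are illustrative, not tuned recommendations):

        [mysqld]
        # give InnoDB a fixed, modest slice of the 8 GB so IIS/SQL Express keep theirs
        innodb_buffer_pool_size = 1G
        # cache repeated identical SELECTs, typical of blog front pages (MySQL 5.x)
        query_cache_type = 1
        query_cache_size = 64M
        max_connections = 150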

    Read the article

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:

        WIFI (en1): used for general traffic. Gets an IP in 192.168.19.* via DHCP.
        LAN (en0): used for specific traffic. Configured as 192.168.2.10, a static IP. Does not connect to a router, only to a switch for a direct connection.

    I have four IP addresses I need to reach on the LAN:

        192.168.2.1
        192.168.2.21
        192.168.2.20
        192.168.2.30

    The rest of the traffic needs to go over WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:

        sudo route add -host 192.168.2.30 -interface en0

    This command killed my ability to use ping: it told me that ping could not allocate memory (is that even possible?). It also killed my wifi access. Logging out and back in fixed the issue. I really don't mind making this solution permanent, but for now I am fine with temporary routing.
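    In principle, per-host routes for all four addresses would look like this (same BSD route syntax as the command above; untested on this particular setup):

        sudo route -n add -host 192.168.2.1 -interface en0
        sudo route -n add -host 192.168.2.20 -interface en0
        sudo route -n add -host 192.168.2.21 -interface en0
        sudo route -n add -host 192.168.2.30 -interface en0

        # verify which interface a destination will actually use
        route -n get 192.168.2.30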

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect via the FTP protocol to the server and download the directory in question. I have searched around the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following:

        ftp username is: serveradmin@s12345.gridserver.com
        ftp host is: ftp.s12345.gridserver.com
        ftp password is: somepassword

    I have tried to write the command in the following ways:

        wget -r ftp://serveradmin@s12345.gridserver.com:somepassword@ftp.s12345.gridserver.com/path/to/desired/folder/
        wget -r ftp://serveradmin:somepassword@s12345.gridserver.com/path/to/desired/folder/

    When I try the first way, I get this error:

        Bad port number.

    When I try the second way, I get a little further, but then this error:

        Resolving s12345.gridserver.com... 71.46.226.79
        Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
        Logging in as serveradmin ...
        Login incorrect.

    What could I be doing wrong?
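    The "Bad port number" error is a parsing clue: with an unescaped @ inside the user name, wget splits the URL at the first @ and reads what follows the next colon as a port. Percent-encoding the embedded @ as %40 keeps the URL parseable; a sketch using the names from the question:

        wget -r 'ftp://serveradmin%40s12345.gridserver.com:somepassword@ftp.s12345.gridserver.com/path/to/desired/folder/'

        # or sidestep URL escaping entirely:
        wget -r --ftp-user='serveradmin@s12345.gridserver.com' --ftp-password='somepassword' \
             ftp://ftp.s12345.gridserver.com/path/to/desired/folder/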

    Read the article

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
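    Since the visible differences between the two clients are the User-Agent and whatever cookies or session state the browser already holds, a first experiment is to make wget look more like the browser (standard wget flags; the token stays whatever it was):

        wget -d --user-agent='Mozilla/5.0' \
             --header='Accept-Language: de' \
             'https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>'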

    Read the article

  • What to do after a fresh Linux install on a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL5 on it, but I would like to know: if you got a new production server, what are the first five things you would do after a fresh Linux install (configuration/security/reliability wise)?

    EDIT: Added more information regarding the server environment and server roles:

        - The server will be inside my company's intranet/firewall.
        - The server will receive files (GBs) in binary code from another internal server. The application installed on this server is in charge of "translating" all that binary into human-readable input. The server will get queried for this information.
        - Only 2-3 (max) users will be logging in.
        - (2) 145GB HDs in RAID1 for the OS and (2) 600GB HDs in RAID1 for data.

    I mean, I know I may not get the perfect guideline, but at least something that's better than leaving everything at the defaults.
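    Not a complete checklist, but a typical first pass on RHEL5 might look like this (the commands are standard for that release; the user name is made up):

        yum update -y                          # patch everything before exposing services
        useradd appadmin && passwd appadmin    # day-to-day work happens as a normal user
        # in /etc/ssh/sshd_config set "PermitRootLogin no", then:
        service sshd restart
        chkconfig --list | grep ':on'          # review and disable services you don't need
        service iptables status                # confirm the firewall baseline, even behind the corporate one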

    Read the article
