Search Results

Search found 5793 results on 232 pages for 'requests'.


  • IIS 7.5 website application pool with 'full control' permissions hackable?

    - by Caroline Beltran
    Although I would never set this permission, I would like to know how a static HTML website with the permission mentioned in the title could be compromised. In my humble opinion, this would pose no threat since a web visitor has no way to upload/edit/delete anything. What if the site was a simple PHP website that merely displayed ‘hello world’? What if this PHP site had a contact-us form that was properly sanitized? Thank you. EDIT: I should mention that I am restricting IIS to GET and POST requests only; otherwise anybody could delete and upload content.
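
    One way to enforce that GET/POST-only restriction on IIS 7.5 is request filtering in web.config; a minimal sketch (which verbs to allow is of course site-specific):

      <configuration>
        <system.webServer>
          <security>
            <requestFiltering>
              <!-- Deny every verb that is not explicitly listed below -->
              <verbs allowUnlisted="false">
                <add verb="GET" allowed="true" />
                <add verb="POST" allowed="true" />
              </verbs>
            </requestFiltering>
          </security>
        </system.webServer>
      </configuration>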

    Read the article

  • Slow HTTP traffic between VMware guest and host

    - by toluju
    I have a web application running as an HTTP server inside the VMware guest OS, and I'm trying to access the content from the host OS. The guest is running Ubuntu, and the host is running Windows XP. The problem is, when I try to access the application from a browser in the host OS, the content takes a very long time to load (up to a minute for a single page). A browser in the guest OS can access the application with no problems. I've tried using both NAT and bridged networking, but the results are the same. The Windows firewall is turned off. The connection itself appears fine, as ping requests from guest to host as well as host to guest complete without errors or delays. Both guest and host can access the external Internet connection without a problem. I'm using VMware Player. Any ideas?
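
    A quick way to narrow down where the time goes (name resolution vs. connect vs. transfer) is curl's timing output, run from the host against the guest; a sketch assuming the guest answers on a placeholder address and port:

      curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" http://192.168.56.101:8080/

    If name lookup dominates, the delay is in DNS/NetBIOS resolution rather than the VMware network path; if the transfer itself crawls, checksum/segmentation offload settings on the virtual NIC are a frequently reported culprit.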

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered in existing questions. Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a Squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from Squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them to the remote server only N at a time, delaying (but not dropping) the others.

    Read the article

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network, I've some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a Squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine the configuration was lost. I remember having a separate Perl program for handling this rewrite. How do I set up such a Squid proxy that rewrites the host *.example.com to www.example.com and caches the result of the latter?
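
    For reference, a minimal rewrite helper plus the squid.conf lines to hook it up might look roughly like this (written in Python here as a stand-in for the original Perl script, and assuming the classic one-URL-per-line helper protocol of Squid 2.x/3.1; newer Squid versions expect an OK/ERR style reply instead):

      #!/usr/bin/env python
      # Hypothetical url_rewrite_program helper: send archive.ubuntu.com and
      # *.archive.ubuntu.com to nl.archive.ubuntu.com. Assumes the classic
      # protocol: one request per line, first field is the URL, reply with
      # the (possibly unchanged) URL on a line of its own.
      import re
      import sys

      pattern = re.compile(r'^(http://)(?:[a-z0-9-]+\.)*archive\.ubuntu\.com(/.*|)$',
                           re.IGNORECASE)

      for line in sys.stdin:
          fields = line.split()
          if not fields:
              continue
          rewritten = pattern.sub(r'\1nl.archive.ubuntu.com\2', fields[0])
          sys.stdout.write(rewritten + '\n')
          sys.stdout.flush()

      # squid.conf (path and child count are placeholders)
      url_rewrite_program /usr/local/bin/ubuntu-mirror-rewrite.py
      url_rewrite_children 5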

    Read the article

  • Lost sudo/su on Amazon EC2 instance

    - by barrycarter
    I have an Amazon EC2 instance. I can login just fine, but neither "su" nor "sudo" work now (they worked fine previously): "su" requests a password, but I login using ssh keys, and I don't think the root user even has a password. "sudo <anything>" does this:

      sudo: /etc/sudoers is owned by uid 222, should be 0
      sudo: no valid sudoers sources found, quitting

    I probably did "chown ec2-user /etc/sudoers" (or, more likely "chown -R ec2-user /etc" because I was sick of rsync failing), so this is my fault. How do I recover? I stopped the instance and tried the "View/Change User Data" option on the AWS EC2 console, but this didn't help. EDIT: I realize I could kill this instance and create a new one, but was hoping to avoid something that extreme.
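
    One recovery path that avoids rebuilding the instance, sketched under the assumption that the root EBS volume shows up as /dev/xvdf on a second, healthy instance (device names and mount point are placeholders):

      # On the helper instance, after stopping the broken instance and
      # attaching its root volume:
      sudo mkdir -p /mnt/broken
      sudo mount /dev/xvdf1 /mnt/broken        # partition name may differ
      sudo chown root:root /mnt/broken/etc/sudoers
      sudo chmod 0440 /mnt/broken/etc/sudoers
      # If all of /etc was chowned, hand it back to root as well -- but note
      # this also resets group ownership (e.g. /etc/shadow is normally
      # root:shadow), so re-check special files afterwards.
      sudo chown -R root:root /mnt/broken/etc
      sudo umount /mnt/broken
      # Detach the volume, re-attach it to the original instance as its root
      # device, and start the instance again.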

    Read the article

  • ARP replies contain wrong MAC address

    - by Jayen
    I've got a robot running Linux with wired and wireless adapters. When I boot up, it connects to the wireless fine. When I assign an IP to the wired interface (either statically or with DHCP), it looks like it works: ifconfig shows a proper IP and route shows proper routes. However, when I do an ARP request for the wired IP, the ARP reply contains the wireless MAC. There's no bridge running on the robot, so why don't I get the wired MAC? Even when the wire is disconnected, the wired IP still replies to ping. Why is the robot answering requests for the wired IP over the wireless interface?
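
    This looks like Linux's default "ARP flux" behaviour, where the kernel will answer an ARP request for any local address on any interface. A sketch of the usual sysctl counter-measure (eth0 here stands in for the wired interface name):

      # /etc/sysctl.conf -- reply to ARP only for addresses configured on the
      # interface the request arrived on, and prefer that interface's own
      # address when sending ARP.
      net.ipv4.conf.all.arp_ignore = 1
      net.ipv4.conf.all.arp_announce = 2
      net.ipv4.conf.eth0.arp_ignore = 1
      net.ipv4.conf.eth0.arp_announce = 2

      # apply without a reboot
      sysctl -p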

    Read the article

  • How can I access Windows XP Remote Desktop on a private IP from the Internet?

    - by Jennie
    The machine is behind a DSL router on a private IP, so it cannot receive inbound requests. I want to know: Is there any way to set up port forwarding in the router's NAT (I highly doubt it supports one-to-one port mapping) without disturbing other users on the same router? I have another machine on the Internet which has a public IP and no firewall. Can I use this machine as a relay server, so that the XP machine initiates the connection with an outbound request, the relay server patches my connection through, and I can then reach my machine on its private IP without any problem? Please advise.
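
    The relay idea in the second point is workable with a reverse SSH tunnel, sketched here under the assumption that the public machine runs an SSH server with GatewayPorts enabled and that the XP box has an SSH client such as plink (hostnames and ports are placeholders):

      REM On the XP machine (behind NAT): publish the local Remote Desktop
      REM service (3389) on the relay's port 3390.
      plink -N -R 3390:localhost:3389 user@relay.example.com

      REM From anywhere on the Internet, point the RDP client at the relay:
      mstsc /v:relay.example.com:3390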

    Read the article

  • How to block/redirect hosts and subdomains of a host using htaccess?

    - by Sven
    I want to block several host domains and their subdomains, as well as IP addresses, using .htaccess. So far I have added this to my .htaccess file:

      # block domains and all subdomains
      Deny from .example.org
      # block domain range: 1.2.3.[1-255]
      Deny from 1.2.3.
      # Block single IP
      Deny from 2.3.4.5

    but I still had problems with spam from e.g. server1.example.org. What is wrong with my script? Is it also possible to redirect all requests from certain hosts/IPs to a document (say info.html)?
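
    Worth noting: Deny from .example.org only matches when the client's address passes a double-reverse DNS check back to that domain, which may be why server1.example.org still gets through. For the redirect part, a mod_rewrite sketch along these lines could send matching clients to info.html (the host pattern and IP range are examples):

      RewriteEngine On
      # Match either the resolved client host name (REMOTE_HOST may just hold
      # the IP unless Apache is doing hostname lookups) or an address range ...
      RewriteCond %{REMOTE_HOST} \.example\.org$ [NC,OR]
      RewriteCond %{REMOTE_ADDR} ^1\.2\.3\.
      # ... and send everything except info.html itself there.
      RewriteCond %{REQUEST_URI} !^/info\.html$
      RewriteRule .* /info.html [R=302,L]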

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
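
    One common cause is resources served without explicit freshness information (Expires or Cache-Control: max-age). On a reload the browser revalidates what it already holds (hence the If-Modified-Since requests and 304s), while on ordinary navigation it only reuses copies it considers fresh, and with no freshness headers it may simply refetch everything. A sketch of adding such headers with Apache's mod_expires (types and lifetimes are arbitrary examples, and mod_expires is assumed to be enabled):

      ExpiresActive On
      ExpiresByType text/css         "access plus 1 week"
      ExpiresByType text/javascript  "access plus 1 week"
      ExpiresByType image/png        "access plus 1 month"
      ExpiresByType image/jpeg       "access plus 1 month"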

    Read the article

  • SSH connection dropping right after login

    - by kappa
    I've set up a connection with autossh that creates some tunnels at system startup, but if I try to connect, the connection drops right after a successful login (with RSA key). Here is a trace:

      debug1: Authentication succeeded (publickey).
      debug1: Remote connections from LOCALHOST:5006 forwarded to local address localhost:22
      debug1: Remote connections from LOCALHOST:6006 forwarded to local address localhost:80
      debug1: channel 0: new [client-session]
      debug1: Requesting no-more-sessions@openssh.com
      debug1: Entering interactive session.
      debug1: remote forward success for: listen 5006, connect localhost:22
      debug1: remote forward success for: listen 6006, connect localhost:80
      debug1: All remote forwarding requests processed
      debug1: Sending environment.
      debug1: Sending env LANG = it_IT.UTF-8
      debug1: Sending env LC_CTYPE = en_US.UTF-8
      debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
      debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
      debug1: channel 0: free: client-session, nchannels 1
      Transferred: sent 2400, received 2312 bytes, in 1.3 seconds
      Bytes per second: sent 1904.2, received 1834.4
      debug1: Exit status 1

    What can the problem be? All this is managed by a script already running on another machine (creating reverse tunnels on the same machine but with different ports).
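
    One detail stands out in that trace: the session authenticates, sets up the forwards, then runs a remote command that exits with status 1, and the tunnels go down with it. If the connection exists only to carry the tunnels, something along these lines (ports copied from the trace; host, user and monitoring port are placeholders) keeps ssh from running any remote command and lets autossh restart the link when it drops:

      autossh -M 20000 -N \
          -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
          -R 5006:localhost:22 \
          -R 6006:localhost:80 \
          user@remote.example.com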

    Read the article

  • Block IPs if they access a resource

    - by Victor Oliva
    I own a server that is constantly being attacked by scripts (which try to access phpMyAdmin's setup files and the like). I've heard that many people get these kinds of attacks, but I'm starting to worry since they are getting more common (last month I got 2 attacks, and this November there are already 3 attempts: on the 1st, 4th and 6th). I'm not really concerned, since I don't have any database and all the info I have on that server is absolutely public, but I'm worried about the increase in the attack rate. So I thought I could temporarily block the IPs those attacks come from, or set up something that makes my server ignore requests that ask for phpMyAdmin, pma, xampp, etc. Is there something like that? My server is Linux + Apache + PHP.
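
    One widely used way to get exactly that temporary blocking is fail2ban watching the Apache access log; a rough sketch (regex, paths, thresholds and ban time are assumptions to adapt):

      # /etc/fail2ban/filter.d/apache-phpmyadmin.conf
      [Definition]
      failregex = ^<HOST> .*"(GET|POST) /(phpmyadmin|phpMyAdmin|pma|xampp)
      ignoreregex =

      # /etc/fail2ban/jail.local
      [apache-phpmyadmin]
      enabled  = true
      port     = http,https
      filter   = apache-phpmyadmin
      logpath  = /var/log/apache2/access.log
      maxretry = 2
      bantime  = 86400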

    Read the article

  • Apache mod_proxy to another server

    - by trobrock
    I am using the proxy_balancer in Apache2 to proxy requests for a Rails application to the Rails server on the port the application is running on. This is how it's set up:

    Rails server: Mongrel running on port 8000; when accessing the URL directly at http://rails_server:8000 the site loads fine.

    Apache server, conf file for the site:

      <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName myserver.com
        ServerAlias application.myserver.com
        <Proxy balancer://application_cluster>
          Allow from localhost
          BalancerMember http://ip.to.server:8000 retry=10
        </Proxy>
        ProxyPass / balancer://application_cluster
      </VirtualHost>

    The problem I am having: going to http://rails_server:8000 works fine, but going to http://application.myserver.com loads the right content yet displays all the HTML as plain text instead of rendering it.
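
    Since the markup arrives but is rendered as plain text, the first thing worth comparing is the Content-Type header on the two paths; a quick check from the command line (host names as in the question):

      # Headers straight from Mongrel
      curl -sI http://rails_server:8000/ | grep -i '^Content-Type'
      # Headers after passing through the Apache balancer
      curl -sI http://application.myserver.com/ | grep -i '^Content-Type'

    If the proxied response comes back as text/plain (or with no Content-Type at all), the problem lies in what the balancer forwards rather than in the browser.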

    Read the article

  • DHCP clients fail after successful import of server to new machine

    - by Tathagata
    I transferred the configuration of a DHCP server from one server to another, both running Windows Server 2003 R2, following http://support.microsoft.com/kb/325473. The new server has a statically configured IP (outside the scope), like the old one. I stopped the service on the old server and started it up on the new server (authorized, too) - but when I do ipconfig /renew from a client, its network interface fails with all 0.0.0.0 (or a 169.254.* APIPA address). I read somewhere that I need to reconcile the scope to sync the new registry values (I'll try this tomorrow). What other troubleshooting steps can I take besides these (which didn't help)? Things work fine when the old server resurrects and the new one is taken down. The new server showed that there were no requests for offers.
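
    For the reconcile step, either the DHCP console (right-click the scope, Reconcile) or netsh can be used; a sketch of the netsh form from memory, to be checked against the Server 2003 netsh dhcp documentation (server name and scope are placeholders):

      rem Reconcile the imported scope on the new server
      netsh dhcp server \\NEWDHCP scope 192.168.1.0 initiate reconcile

      rem Sanity-check that the scopes and address pools came across
      netsh dhcp server \\NEWDHCP show scope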

    Read the article

  • How to write a ProxyPass rule to go from HTTPS to HTTP in IIRF

    - by Keith Nicholas
    I have a server which is running a web app that serves itself over HTTP. I want to use IIS 6 (on the same server) to provide an HTTPS layer in front of this web app. From what I can tell, a reverse proxy will let me do this, and IIRF seems like the tool for the job. There are no domain names involved; it's all IP addresses. So I think I want https://<ipnumber>:5001 to forward all its requests to the same server on a different port over plain HTTP (not exposed to the net), i.e. http://<ipnumber>:5000, but I'm not sure how to go about it with IIRF, and I'm not entirely sure how to write the rules. I think I need to make a virtual web app on 5001 using HTTPS and then add a rules file?
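
    A rough shape of the IIRF side, with the strong caveat that the file name and directive syntax below are from memory and should be checked against the IIRF 2.x documentation: create an IIS site bound to HTTPS on port 5001 with a certificate, install the IIRF filter in it, and give it a rules file along these lines:

      # Iirf.ini for the site listening on https://<ipnumber>:5001
      # (ProxyPass regex form as assumed from IIRF 2.x -- verify before use)
      # Hand every request to the backend app over plain HTTP on port 5000.
      ProxyPass ^/(.*)$ http://127.0.0.1:5000/$1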

    Read the article

  • lighttpd silently stops logging

    - by Max Cantor
    I'm on a Slicehost 256MB VPS with Ubuntu 9.04 (Jaunty). lighttpd is the only web server process running; it listens on port 80. My lighttpd.conf can be found here. I'm using Ubuntu's default logrotate setup for lighty. At seemingly random times, lighttpd will stop logging. It is not correlated with log rotation--that is, the errors do not occur when logrotate kicks in. What happens is, I will verify that the server is serving files by hitting a URL with my browser, and I will verify that it is not logging by checking access.log and seeing that the GET request I just made is not there. Using init.d to restart the process starts logging again, without truncating or rotating the log file. That is, new requests will be logged at the end of the existing access.log file. There are no cron jobs running on this box. Any ideas?

    Read the article

  • How to use SSL on AWS EC2

    - by Aubada Taljo
    Hello, I have an AWS EC2 account and I am running an instance that serves as a web host for my PHP website. This is a private website that has no UI, only URLs that are requested by my other software to get responses from the server. I want the requests I send to the server to be secured, so I want to use HTTPS instead of HTTP. What should I do to achieve that? PS: I found this link while searching, but I don't know how useful it is in my situation: http://matt-darby.com/posts/690-aws-ec2-and-ssl. Thanks in advance.
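
    The short version: open port 443 in the instance's security group, install a certificate (a self-signed one is fine for private, software-to-software traffic, as long as the client is told to trust it), and add an SSL vhost. A sketch assuming Apache with mod_ssl and placeholder names/paths:

      # Generate a self-signed certificate, valid for one year
      openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
          -keyout /etc/ssl/private/myapi.key \
          -out /etc/ssl/certs/myapi.crt

      # Apache virtual host (requires mod_ssl to be enabled)
      <VirtualHost *:443>
          ServerName api.example.com
          DocumentRoot /var/www/html

          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/myapi.crt
          SSLCertificateKeyFile /etc/ssl/private/myapi.key
      </VirtualHost>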

    Read the article

  • Have OS X send wake on lan before printing to shared printer

    - by Dean Hill
    I have a MacBook that prints to a shared Windows 7 printer. Sometimes the Windows machine is asleep, and the Mac will just queue up its print requests. I recently created a script to send a wake-on-lan packet to the Windows 7 machine. This wakes up the Windows machine and printing starts. Great, but I think this can be automated. Is it possible to have the MacBook run the wake-on-lan script every time something is printed? Stated more generally, can I have the OS X print subsystem execute a script every time something is printed?
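
    Hooking the print subsystem itself would mean wrapping a CUPS backend or filter; a much cruder but simple alternative is to poll the queue and fire the existing wake-on-lan script whenever jobs are waiting. A sketch (the script path and polling interval are placeholders):

      #!/bin/sh
      # Poll the CUPS queue; if anything is waiting, run the wake-on-lan script.
      while true; do
          if lpstat -o | grep -q .; then
              /usr/local/bin/wake-windows-printer.sh   # the existing WOL script
          fi
          sleep 30
      done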

    Read the article

  • How can I add config options for a specific hostname outside <VirtualHost>?

    - by Boldewyn
    I'm using Apache 2.2 and let it serve the domains foo.example.com and bar.example.com with <VirtualHost> statements:

      <VirtualHost 127.0.0.1:80>
        ServerName foo.example.com
      </VirtualHost>

      <VirtualHost 127.0.0.1:80>
        ServerName bar.example.com
      </VirtualHost>

    My problem is that I need to add configuration options that are only targeted at foo.example.com, in a separate file (let's say /etc/apache/sites-enabled/foo.conf). This file will be included before the VirtualHost statement is issued, but it can't be embedded inside it. Can I (and if yes, how) target configuration settings at foo.example.com requests only, outside the VirtualHost container?
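
    With Apache 2.2 there is no general per-hostname block outside <VirtualHost>, but for the subset of directives that accept an env= condition (CustomLog, Header, Allow/Deny and the like) a server-level file can key off the Host header. A limited sketch of that approach (the directive examples are placeholders):

      # /etc/apache/sites-enabled/foo.conf
      SetEnvIfNoCase Host ^foo\.example\.com$ is_foo

      # Examples of directives that honour the flag:
      CustomLog /var/log/apache2/foo_access.log combined env=is_foo
      Header set X-Served-For "foo" env=is_foo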

    Read the article

  • Windows Server 2003: Remapping external domain

    - by Chuck Harmston
    We're playing a going-away prank on a coworker, and would like to use a rule in our internal DNS server to redirect techcrunch.com to point at one of our internal development servers. Basically, I'd like to accomplish the same thing as adding a line to a Linux /etc/hosts file, only for the entire network. I have access to our DNS server. How would you go about doing this? I created an entry in the reverse lookup subnet with the 'Host Name' of techcrunch.com and the 'Host IP' of our development server, a Linux box running Debian on which I've created a virtualhost to handle requests to techcrunch.com. It doesn't appear to be working, however, and my expertise has reached its limit. Thanks!
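
    The usual way to do this is a new forward lookup zone named techcrunch.com on the internal DNS server, with a blank (same-as-parent-folder) A record pointing at the development box; an entry in a reverse lookup zone only affects PTR lookups, not name resolution. A command-line sketch using dnscmd from the Support Tools (the IP is a placeholder):

      rem Create an internal zone that shadows the real domain
      dnscmd /ZoneAdd techcrunch.com /Primary /file techcrunch.com.dns

      rem Point the zone apex and www at the internal dev server
      dnscmd /RecordAdd techcrunch.com @   A 10.0.0.50
      dnscmd /RecordAdd techcrunch.com www A 10.0.0.50

    Clients will pick up the change once their cached records for techcrunch.com expire (ipconfig /flushdns speeds that up for testing).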

    Read the article

  • How to troubleshoot a slow PowerConnect 62xx management interface

    - by Hannes
    Our Dell PowerConnect 62xx switches have very high packet loss on the management interface. I presume this is caused by a new appliance which uses multicast for communication, but I am not sure. Our network setup is the following:

      servers a - Dell PC6248 |
      servers b - Dell PC6248 |- Juniper core router
      servers c - Dell PC6248 |

    What we see is that the multicast traffic arrives at all servers (but only the b servers use the multicast), and I fear that this multicast traffic floods the switch management interface. The switches' management interfaces are reachable via vlan101; all other traffic is sent over other VLANs. When I tcpdump on one of the two servers with a vlan101 IP address, I only see a few ARP requests and almost nothing else. When I try to ping between these two servers, it works like a charm. I would like to know a good way to troubleshoot this problem, and maybe get help understanding what is going wrong on that subnet.

    Read the article

  • What are the best settings for running lighttpd on 8 GB of RAM?

    - by user39639
    I am running an 8 GB RAM, 8 x Xeon 3361 system. What are the best settings for handling simultaneous connections, and what is the maximum? Are settings like these correct?

      server.max-keep-alive-requests = 0
      server.max-keep-alive-idle = 10
      server.max-read-idle = 60
      server.max-write-idle = 60
      server.event-handler = "linux-sysepoll"
      server.max-fds = 2048
      fastcgi.server = ( ".php" =>
        ( "localhost" =>
          ( "socket" => "/tmp/php-fastcgi.socket",
            "bin-path" => "/usr/bin/php-cgi",
            "max-procs" => 20,
            "bin-environment" => ( "PHP_FCGI_CHILDREN" => "40",
                                   "PHP_FCGI_MAX_REQUESTS" => "800" ),
            "broken-scriptfilename" => "enable" ) ) )

    Please help me!

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:

      <Directory "/var/www/folder">
        <Files "index.php">
          order deny,allow
          Allow from all
          <LimitExcept POST>
            Deny from all
          </LimitExcept>
        </Files>
      </Directory>

    I visit my server at 127.0.0.1/folder, but I can GET and POST the file just like normal. I've also tried reversing the order (order allow,deny), as well as Require, LimitExcept and Limit. How can I allow only POST requests to be processed by one file in a folder?
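
    For what it's worth, with Order deny,allow the later Allow from all wins over any Deny, which would explain why GET still succeeds. One possible reworking (an untested sketch in Apache 2.2 syntax):

      <Directory "/var/www/folder">
        <Files "index.php">
          <LimitExcept POST>
            Order allow,deny
            Deny from all
          </LimitExcept>
        </Files>
      </Directory>

    With Order allow,deny and no Allow directive inside the LimitExcept, every method other than POST is denied, while POST itself is left unrestricted.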

    Read the article

  • scp to remote servers stalls, unable to isolate cause

    - by Rolf
    When I copy a large file (100+ MB) to a remote server using scp, it slows down from 2.7 MB/s to 100 KB/s and below, and then stalls. The problem is that I can't seem to isolate the cause. I've tried 2 different remote servers, using 2 local machines (1 OS X, 1 Windows/Cygwin), on 2 different networks/ISPs, and with 2 different scp clients. All combinations show the problem except when I copy between the two remote servers themselves (with scp). Using Wireshark I could not detect any traffic volume that would congest the network (although there are about 7 packets/sec of NBNS requests from the OS X machine). What in the world could be going on? Given the combinations I've tried, there doesn't seem to be any single common factor that could be causing the trouble.

    Read the article

  • nginx conditional Accept header

    - by manu_v
    Some mobile devices send the following incorrect requests to our servers:

      GET / HTTP/1.0
      Accept:
      User-Agent: xxx

    The empty Accept header causes our Ruby on Rails server to throw back a 500 error. In Apache, the following directive allows us to rewrite the header before sending it to the RoR application server, in order to cope with the broken devices:

      RequestHeader edit Accept ^$ "*/*" early

    We're currently setting up nginx, but achieving the same work-around is proving difficult. We are able to set:

      proxy_set_header Accept */*;

    However, this seems to have to be done unconditionally. When trying to do:

      if ($http_accept !~ ".") {
          proxy_set_header Accept */*;
      }

    it complains with the message: "proxy_set_header" directive is not allowed here. So, using nginx, how can we set the HTTP Accept header to */* when it is empty, before sending the request to the application server?
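
    Since proxy_set_header cannot live inside an if block, the usual nginx idiom is a map in the http context that substitutes a default when the incoming header is empty, plus an unconditional proxy_set_header that uses the mapped variable. A sketch (the variable name and the rails_backend upstream are placeholders):

      http {
          # */* when the client sent an empty (or no) Accept header,
          # otherwise pass the original value through untouched.
          map $http_accept $proxy_accept {
              default $http_accept;
              ""      "*/*";
          }

          server {
              location / {
                  proxy_set_header Accept $proxy_accept;
                  proxy_pass http://rails_backend;
              }
          }
      }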

    Read the article

  • How to have Jetty redirect http to https

    - by Noel Kennedy
    I want to redirect all requests for HTTP to HTTPS using Jetty (6.1.24). For some reason (my ignorance) this is eluding me. This is what I have:

      <New id="redirect" class="org.mortbay.jetty.handler.rewrite.RedirectPatternRule">
        <Set name="pattern">http://foobar.com/*</Set>
        <Set name="location">https://foobar.com</Set>
      </New>

    In response I get 200 OK, and the body is the page over HTTP, i.e. the redirect doesn't occur.
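
    For what it's worth, the pattern in Jetty's rewrite rules is matched against the request path rather than the absolute URL, so a pattern of the form http://foobar.com/* will never match. A hedged sketch of the same rule with a path pattern (note that, wired in naively, it would also catch requests that already arrived over HTTPS):

      <New id="redirect" class="org.mortbay.jetty.handler.rewrite.RedirectPatternRule">
        <Set name="pattern">/*</Set>
        <Set name="location">https://foobar.com</Set>
      </New>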

    Read the article
