Search Results

Search found 76098 results on 3044 pages for 'http gdata youtube com'.


  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running Apache 2 on an EC2 micro instance with ~600 MB of RAM. The instance ran for almost a year without problems, but in the last few weeks it keeps crashing because the server reaches MaxClients. The server runs only a few websites: one WordPress blog (not often used), the company website (most used) and two small internal sites. The database for the blog runs on RDS, so there is no MySQL running on this web server.

    When I came to the company the server was already set up and runs apache + mod_php + prefork. We want to migrate to nginx + php-fpm in the future, but that still needs testing, so for now I have to stick with the old setup. I also use CloudFlare DDoS protection in front of the server, because it was attacked a couple of times in the last weeks. My company doesn't want to pay for a better web server at this point, so I have to stick with the micro instance as well. Additionally, the code for the website is really bad and slow, and a single page load can sometimes take up to 15 seconds. The whole site is dynamic and written in PHP (a customized search for users), so caching isn't really an option. I've already turned off KeepAlive, which improved performance a little. My prefork config looks like the following:

      StartServers          2
      MinSpareServers       2
      MaxSpareServers       5
      ServerLimit           10
      MaxClients            10
      MaxRequestsPerChild   100

    The server just becomes unresponsive after running for a while. I ran the following command to see how many connections there are:

      netstat | grep http | wc -l
      75

    Restarting Apache helps for a short moment, but after a while the Apache processes become unresponsive again. I have the following modules enabled (output of apache2ctl -M):

      Loaded Modules:
       core_module (static)
       log_config_module (static)
       logio_module (static)
       version_module (static)
       mpm_prefork_module (static)
       http_module (static)
       so_module (static)
       alias_module (shared)
       authz_host_module (shared)
       deflate_module (shared)
       dir_module (shared)
       expires_module (shared)
       mime_module (shared)
       negotiation_module (shared)
       php5_module (shared)
       rewrite_module (shared)
       setenvif_module (shared)
       ssl_module (shared)
       status_module (shared)
      Syntax OK

    apache2.conf:

      # Security
      ServerTokens OS
      ServerSignature On
      TraceEnable On
      ServerName "web.example.com"
      ServerRoot "/etc/apache2"
      PidFile ${APACHE_PID_FILE}
      Timeout 30
      KeepAlive off
      User www-data
      Group www-data
      AccessFileName .htaccess
      <Files ~ "^\.ht">
          Order allow,deny
          Deny from all
          Satisfy all
      </Files>
      <Directory />
          Options FollowSymLinks
          AllowOverride None
      </Directory>
      DefaultType none
      HostnameLookups Off
      ErrorLog /var/log/apache2/error.log
      LogLevel warn
      EnableSendfile On
      #Listen 80
      Include /etc/apache2/mods-enabled/*.load
      Include /etc/apache2/mods-enabled/*.conf
      Include /etc/apache2/ports.conf
      LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
      LogFormat "%h %l %u %t \"%r\" %>s %b" common
      LogFormat "%{Referer}i -> %U" referer
      LogFormat "%{User-agent}i" agent
      Include /etc/apache2/conf.d/*.conf
      Include /etc/apache2/sites-enabled/*.conf

    Vhost of the main site:

      <VirtualHost *:80>
          ServerName www.example.com

          ## Vhost docroot
          DocumentRoot /srv/www/jenkins/Web

          ## Directories, there should at least be a declaration for /srv/www/jenkins/Web
          <Directory /srv/www/jenkins/Web>
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>

          ## Load additional static includes

          ## Logging
          ErrorLog /var/log/apache2/www.example.com.error.log
          LogLevel warn
          ServerSignature Off
          CustomLog /var/log/apache2/www.example.com.access.log combined

          ## Rewrite rules
          RewriteEngine On
          RewriteCond %{HTTP_HOST} !^www.example.com$
          RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L]

          ## Server aliases
          ServerAlias www.example.invalid
          ServerAlias example.com

          ## Custom fragment
          <Location /srv/www/jenkins/Web/library>
              Order Deny,Allow
              Deny from all
          </Location>
          <Files ~ "^\.(.+)">
              Order deny,allow
              deny from all
          </Files>
      </VirtualHost>
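
    A hedged aside, not from the original question: with prefork + mod_php the practical ceiling for MaxClients is simply available RAM divided by the real per-child footprint, so it is worth measuring that footprint before tuning further. The numbers below are illustrative only.

      # Average resident size of the Apache children (column 8 of "ps -ly" output is RSS in KB)
      ps -ylC apache2 --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum += $8; n++} END {printf "avg RSS: %.1f MB over %d procs\n", sum/n/1024, n}'

      # With ~600 MB total and, say, ~150 MB kept back for the OS, a safer ceiling is
      # roughly (600 - 150) / avg-RSS rather than a fixed guess; enabling mod_status
      # (apache2ctl status) also shows what the busy workers are actually doing.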

  • Multiple SSL vhosts using wildcard certificate in nginx

    - by vvanscherpenseel
    I have two hostnames sharing the same domain name which I want to serve over HTTPS. I've got a wildcard SSL certificate and created two vhost configs.

    Host A:

      listen 127.0.0.1:443 ssl;
      server_name a.example.com;
      root /data/httpd/a.example.com;
      ssl_certificate /etc/ssl/wildcard.cer;
      ssl_certificate_key /etc/ssl/wildcard.key;

    Host B:

      listen 127.0.0.1:443 ssl;
      server_name b.example.com;
      root /data/httpd/b.example.com;
      ssl_certificate /etc/ssl/wildcard.cer;
      ssl_certificate_key /etc/ssl/wildcard.key;

    However, I get the same vhost served for either hostname.
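
    A hedged aside, not part of the post: with identical listen sockets nginx picks the server block by the SNI/Host name and silently falls back to the default (first) block when a name doesn't match or a vhost file never got loaded, so confirming both files are actually included is a cheap first check. The sites-enabled path below assumes a Debian-style layout.

      nginx -t                                            # config parses; shows which nginx.conf is in use
      grep -R "server_name" /etc/nginx/sites-enabled/     # both a.example.com and b.example.com should appear
      # Marking the intended fallback explicitly, e.g. "listen 127.0.0.1:443 ssl default_server;",
      # also makes it obvious when the wrong block is answering.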

  • Nginx server 301 Moved permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81 I received this result:

      About to connect() to site-wordpress.com port 81 (#0)
      Trying ip... connected
      Connected to site-wordpress.com (ip) port 81 (#0)
      GET / HTTP/1.1
      User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
      Host: site-wordpress.com:81
      Accept: */*
      < HTTP/1.1 301 Moved Permanently
      < Server: nginx/1.2.4
      < Date: Fri, 16 Nov 2012 16:28:19 GMT
      < Content-Type: text/html; charset=UTF-8
      < Transfer-Encoding: chunked
      < Connection: keep-alive
      < X-Pingback: [the URL above]/xmlrpc.php
      < Location: [the URL above]

    It seems like this line in my fastcgi_params is causing grief:

      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but a blank page. This is my config:

      server {
          listen 81;
          server_name site-wordpress.com;
          root /var/www/html/site;
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;
          index index.php;

          if (!-e $request_filename){
              rewrite ^(.*)$ /index.php break;
          }

          location ~ \.php$ {
              fastcgi_pass 127.0.0.1:9000;   # port where FastCGI processes were spawned
              fastcgi_index index.php;
              include /etc/nginx/fastcgi_params;
              include /etc/nginx/mime.types;
          }

          location ~ \.css {
              add_header Content-Type text/css;
          }

          location ~ \.js {
              add_header Content-Type application/x-javascript;
          }
      }

    This config works with an IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
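
    A hedged aside rather than a confirmed diagnosis: a 301 whose Location points back at the site, combined with the X-Pingback header, looks like WordPress itself redirecting to its canonical siteurl/home URL, which will not include :81 unless the stored options do. One way to check (the wp_ table prefix is an assumption):

      SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home');
      -- If these read http://site-wordpress.com without the :81 suffix, WordPress will
      -- keep 301-redirecting every request to that canonical URL.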

  • Why doesn't postfix use my smtp_generic_maps?

    - by RichardTheKiwi
    What have I set up incorrectly?

      > postconf -n
      ....
      smtp_generic_maps = regexp:/etc/postfix/rewrite
      ....

      > cat /etc/postfix/rewrite
      /.*/ removed+postfix@gmail.com

      > echo "test" | mail -s "test" tester2@mailinator.com

      > tail -f /var/log/mail.log
      Dec 8 05:56:01 xxxxxxxxxxxx postfix/pickup[20227]: E9272709284: uid=501 from=<yyyy>
      Dec 8 05:56:01 xxxxxxxxxxxx postfix/cleanup[20270]: E9272709284: message-id=<[email protected]>
      Dec 8 05:56:01 xxxxxxxxxxxx postfix/qmgr[20228]: E9272709284: from=<[email protected]>, size=331, nrcpt=1 (queue active)
      Dec 8 05:56:03 xxxxxxxxxxxx postfix/smtp[20272]: E9272709284: to=<[email protected]>, relay=mailinator.com[72.51.33.80]:25, delay=1.1, delays=0.02/0.01/0.48/0.58, dsn=2.0.0, status=sent (250 Ok)

    FYI, I have reloaded postfix many times:

      sudo postfix reload

    Note: This is on OSX 10.7.5
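
    A hedged aside, not from the post: the lookup table itself can be queried directly, which separates "the map is wrong" from "the map isn't being applied".

      # Ask Postfix what the regexp map would return for an arbitrary sender address:
      postmap -q "someuser@example.org" regexp:/etc/postfix/rewrite
      # Expected output for the /.*/ pattern above: removed+postfix@gmail.com
      # "postconf -n | grep generic" confirms the running instance actually has the setting.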

  • How to configure a static wildcard subdomain with dnsmasq.

    - by Prody
    I have a network behind a NAT with a few machines. The machines are:

      router  - NAT, dnsmasq, forwarding - directly connected to the inet
      server  - which runs ssh, www and some other stuff
      clients - which do stuff on server

    I also have mydomain.com. server.mydomain.com points to my connection's IP (a single IP), which is the router, which forwards ports to server. The server has an httpd running, which serves different sites based on vhosts, so I have site1.server.mydomain.com, site2... The problem is that all the traffic goes through the router, and when I check logs I always see the router's IP for everything (so it's hard to see who is running the script with the while(1)). I would just ServerAlias site1.server.local, but most of the sites have a root URL saved somewhere on top of which other URLs are built, so I can't do that. The solution for me would be telling dnsmasq somehow to answer to *.mydomain.com with the server's IP. Is this possible somehow?
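
    A hedged sketch of the usual dnsmasq answer (the 192.168.1.10 address is made up; substitute the server's LAN IP): the address= option matches a domain and every name under it, overriding the public record for LAN clients.

      # /etc/dnsmasq.conf on the router
      address=/server.mydomain.com/192.168.1.10
      # Every query for server.mydomain.com and anything under it (site1.server.mydomain.com,
      # site2.server.mydomain.com, ...) is then answered with the server's internal address,
      # so the web server's logs show the real client IPs instead of the router's.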

  • Exclude regular expression from virtual host

    - by Joao Trindade
    I have a virtual host in Apache which is redirecting requests to another web server:

      <VirtualHost *:80>
          DocumentRoot /var/www
          ServerName another.host
          ProxyPass / http://another.host2:8081/
          ProxyPassReverse / http://another.host2:8081/
      </VirtualHost>

    I need to exclude a URL pattern from being caught by this virtual host. Basically I don't want requests with the URL http://another.host:8081/~username to be forwarded to the other server. Can this be done?
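
    A hedged sketch, not a verified config: mod_proxy supports exclusions by giving "!" as the ProxyPass target, and exclusions have to appear before the catch-all rule.

      <VirtualHost *:80>
          DocumentRoot /var/www
          ServerName another.host
          ProxyPass /~ !
          ProxyPass / http://another.host2:8081/
          ProxyPassReverse / http://another.host2:8081/
      </VirtualHost>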

  • Apache mod_proxy

    - by mhouston100
    Uggh, I'm spewing that I can't figure this out, I'm so frustrated:

      <VirtualHost *:80>
          servername domain1.com.au
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/html
          ErrorLog ${APACHE_LOG_DIR}/error.log
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          <Proxy *>
              Order Allow,Deny
              Allow from all
          </Proxy>
          RewriteEngine on
          ReWriteCond %{SERVER_PORT} !^443$
          RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
      </VirtualHost>

      <VirtualHost *:443>
          servername domain1.com.au
          SSLEngine on
          SSLCertificateFile /etc/apache2/ssl/owncloud.pem
          SSLCertificateKeyFile /etc/apache2/ssl/owncloud.key
          DocumentRoot /var/www/html
      </VirtualHost>

      <VirtualHost *:*>
          Servername domain2.com.au
          ProxyRequests Off
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          ProxyPass / https://192.168.1.12/
          ProxyPassReverse / https://192.168.1.12/
      </VirtualHost>

    Not sure if it's clear what I'm trying to do, but I've read and read and READ, and I still can't figure it out. Basically I have a working Apache server with a rewrite to force HTTPS, as seen in the first two VirtualHost entries. I now have a webmail service I set up on another server, under another domain name, but I only have one incoming public IP address. So I'm trying to have any incoming requests for the second domain proxied to the other server to reach the webmail, whether they come in on port 80 or 443. IMAP and POP3 are no problem; I can just forward those ports directly to the correct server. The result of the above configuration is that requests to domain2.com.au (port 80 or 443) are forwarded to https://domain1.com.au. Am I headed in the right direction?
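
    A hedged sketch only (certificate paths are placeholders): a <VirtualHost *:*> block mixed with *:80/*:443 blocks rarely matches the way you expect, and for HTTPS the proxy vhost has to terminate TLS itself before it can proxy, so domain2 normally gets its own 443 vhost, its own certificate, and SSLProxyEngine for the https backend.

      <VirtualHost *:443>
          ServerName domain2.com.au
          SSLEngine on
          SSLCertificateFile    /etc/apache2/ssl/domain2.pem
          SSLCertificateKeyFile /etc/apache2/ssl/domain2.key
          SSLProxyEngine on
          ProxyPass        / https://192.168.1.12/
          ProxyPassReverse / https://192.168.1.12/
      </VirtualHost>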

  • Websocket handshake response not forwarded from TCP to client

    - by Saharsh
    I am trying to create a websocket server. I can see the websocket client's opening handshake, and my response to it is received by the client laptop (I can see this in Wireshark), so the TCP connection has been established. But the client (a Chrome websocket client extension) does not receive the handshake packet. What could be a possible reason for TCP to not forward the handshake to the client, or for the client to not be able to read the TCP message?

    Client handshake:

      GET HTTP/1.1
      Upgrade: websocket
      Connection:Upgrade
      Cache-Control:no-cache
      Host:192.168.0.101
      Origin:http://www.websocket.org
      Pragma:no-cache
      Sec-WebSocket-Extensions:permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
      Sec-WebSocket-Key: qrmw/m+BoZije6h9HYKmVw==
      Sec-WebSocket-Version:13
      Upgrade:websocket

    Server response:

      HTTP/1.1 101 Switching Protocols
      Upgrade: websocket
      Connection: Upgrade
      Sec-WebSocket-Accept: jj1g5Io57m9ks8cme3jkbyo2asc=
      Access-Control-Allow-Origin: http://www.websocket.org
      Server: xyz
      Sec-WebSocket-Extensions:

    Thanks!
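
    A hedged aside, not something the post confirms: Chrome drops a 101 response whose Sec-WebSocket-Accept does not equal base64(SHA1(key + the fixed RFC 6455 GUID)), or whose header block is not terminated with CRLF CRLF, even though the packet visibly arrives. The accept value can be recomputed from the captured key:

      printf '%s' 'qrmw/m+BoZije6h9HYKmVw==258EAFA5-E914-47DA-95CA-C5AB0DC85B11' \
        | openssl sha1 -binary | base64
      # If this does not print the Sec-WebSocket-Accept value the server sent above,
      # the client will fail the handshake silently.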

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried multiple options, but I can't get everything to work the way I want it to. I want to be able to add domains without having to restart Apache each time. I tried using mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need to be able to use mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php.

    Current httpd.conf:

      UseCanonicalName Off
      VirtualDocumentRoot /var/www/%0

    Any thoughts?
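
    A hedged, untested sketch: mod_vhost_alias can build the docroot from the last two parts of the hostname, which would let www.domain.com and domain.com land in the same folder; whether this plays nicely with the rest of the setup would need testing.

      UseCanonicalName Off
      # %-2 = penultimate name part, %-1 = last part, so both www.domain.com and
      # domain.com are intended to map to /var/www/domain.com
      VirtualDocumentRoot /var/www/%-2.0.%-1.0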

  • Why is the page automatically redirecting to some other sites?

    - by raj
    In my browser (Firefox 10.0.7) the page automatically redirects to some other site without my clicking any link. If I enter the superuser.com URL and press Enter, it redirects to some other site; sometimes just refreshing the page redirects it as well. It's redirecting to sites like these:

      http://result.seenfind.com/ncp/Default.aspx?term=gatlinburg%20cabin&u=1000670913
      http://search.cpvee.com/search.php?q=gatlinburg+cabin&y=&f=2168&s=
      http://www.insidecelebritygossip.com/

    I cleared all history and everything, but I still have the same problem. I am using CentOS 6.3.

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

      # path to my_application (owned by web1)
      /var/www/clients/client1/web1/web/my_application/

      # sym-link to my_application from the my_site.com web root (owned by web5)
      /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

      # sym-link to my_application from my_site.net (owned by web4)
      /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works.

    After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way there would be only one user who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?
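
    A hedged sketch outside ISPConfig (paths, users and certificate names are illustrative): name-based SSL vhosts can share one DocumentRoot and one SuexecUserGroup, so each domain keeps its own certificate while all PHP filesystem writes happen as a single user.

      <VirtualHost *:443>
          ServerName my_site.net
          DocumentRoot /var/www/clients/client1/web1/web/my_application
          SuexecUserGroup web1 client1
          SSLEngine on
          SSLCertificateFile    /etc/ssl/my_site.net.crt
          SSLCertificateKeyFile /etc/ssl/my_site.net.key
      </VirtualHost>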

  • What to have in sources.list on an Ubuntu LTS server (production)?

    - by nbr
    I have several Ubuntu 10.04 LTS servers in production and I'm using apticron to check that my software is up to date, security-wise. However, by default, Ubuntu has the lucid-updates repository enabled. This means lots of low-priority updates (such as this) that I don't need and thus extra work for me. Is it okay to just remove the lucid-updates line(s) in sources.list? I still get security updates via lucid-security, right? So, this is what my sources.list would look like:

      deb http://se.archive.ubuntu.com/ubuntu/ lucid main restricted
      deb http://se.archive.ubuntu.com/ubuntu/ lucid universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted
      deb http://security.ubuntu.com/ubuntu lucid-security universe

  • SVN Connection Not Successful

    - by user66850
    I am getting the following message when attempting to connect to our company's SVN repository - the same error occurs whether I try from the OS X command line or Eclipse. Any ideas on where to troubleshoot? I can access the repository from other similar computers, and others in my team do not have any problem; this issue started on my MacBook Pro yesterday afternoon (no known changes were made to the OS prior to the problem starting).

      $ svn co http://example.ca/cwl/tags/app
      svn: OPTIONS of 'http://example.ca/cwl/tags/app': Could not read status line: connection was closed by server (http://example.ca)
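
    A hedged debugging step, not part of the original question: reproducing the failing OPTIONS request outside of svn separates a broken client config from a network problem.

      curl -v -X OPTIONS http://example.ca/cwl/tags/app
      # If curl also sees the connection closed, the culprit is at the network/proxy level
      # (a stale http-proxy entry in ~/.subversion/servers is a common one) rather than svn itself.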

  • Serving static content with Apache web server and Tomcat

    - by Hunter
    I've configured Apache web server and Tomcat like this: I created a new file in apache2/sites-available, named it "myDomain", with this content:

      <VirtualHost *:80>
          ServerAdmin admin@myDomain.com
          ServerName myDomain.com
          ServerAlias www.myDomain.com
          ProxyPass / ajp://localhost:8009
          <Proxy *>
              AllowOverride AuthConfig
              Order allow,deny
              Allow from all
              Options -Indexes
          </Proxy>
      </VirtualHost>

    Then I enabled mod_proxy and the myDomain site:

      a2enmod proxy_ajp
      a2ensite myDomain

    And edited Tomcat's server.xml (inside the Engine tag):

      <Host name="myDomain.com" appBase="webapps/myApp">
          <Context path="" docBase="."/>
      </Host>
      <Host name="www.myDomain.com" appBase="webapps/myApp">
          <Context path="" docBase="."/>
      </Host>

    This works great. But I don't want to put static files (html, images, videos etc.) into the {tomcat home}/webapps/myApp subfolders; instead I'd like to put them in subdirectories of the Apache web server's root WWW directory, and I'd like Apache to serve those files alone. How could I do this, so that all incoming requests are forwarded to Tomcat except those that ask for a static file?
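
    A hedged sketch (the /static prefix and directory are made up): exclusions must precede the catch-all ProxyPass, and an Alias then lets Apache serve those files itself while everything else still goes to Tomcat.

      Alias /static /var/www/myDomain-static
      <Directory /var/www/myDomain-static>
          Order allow,deny
          Allow from all
      </Directory>
      ProxyPass /static !
      ProxyPass / ajp://localhost:8009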

  • Is email forwarding to the sender's address usually blocked in Mail servers / MTA?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send the email from the address to which it is being forwarded. This happens with both GMail and Fasthosts mail servers. E.g. I send an email to info@mail.com from myaddress@mail.com, info@mail.com is set to forward to myaddress@mail.com, and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know if this is standard for all MTA software, so I can avoid confusing myself.

  • Can Sphider be used as a search engine for an intranet?

    - by garcon1986
    Hello. Sphinx is a kind of search engine, but it has to be installed on the server, and I can't install it on the server, so I have to find another solution. I have tested Sphider on a small site, but when I try to integrate it into my intranet, it doesn't work. The error output:

      1. Retrieving: http://localhost/XXX at 16:44:20.
         Updated Link To http://localhost/XXX
         Size of page: 1.54kb. Starting indexing at 16:44:20.
         Page contains less than 10 words
         Links found: 1. New links: 1
      2. Retrieving: http://localhost/XXX.php at 16:44:20.
         Unreachable: http 404
         Links found: 0. New links: 0

    Does anyone have ideas? Thanks

  • Set Thunderbird "from" address by incoming "to" address

    - by user293698
    I have configured my email server to catch all email to my mailbox, so x@mydomain.com and y@mydomain.com both go to one mailbox. Every forum, registration and person gets their own address for sending me email, so I can deliver it to /dev/null if anyone starts spamming. That's the working setup. Now the problem: if I reply to a message, Thunderbird always sets my default identity as the sender. I know I can add additional identities, but I don't want to add every address. How can I configure it so that when an email was sent to x@mydomain.com, I answer with x@mydomain.com?

  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

      Dim msg As New MailMessage("email1", "email2")
      msg.Subject = "Subject"
      msg.IsBodyHtml = True
      msg.Body = "Click <a href='site'>here</a>."
      Dim client As SmtpClient = New SmtpClient()
      client.Host = "My-Server"
      client.Port = 25
      client.DeliveryMethod = SmtpDeliveryMethod.Network
      client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now, I have AT&T U-verse at home and a few devices connected to the gateway, including one we'll call "My-Server". When I run SmtpDiag from, say, datc@... to datc@hotmail.com, the SOA serial number match passes, the Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests are *not* passed, and ultimately: "Connecting to the server failed. Error: 10060. Failed to submit mail to mx2.hotmail.com." When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, I get the SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth.

    Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
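
    A hedged aside: error 10060 is a plain TCP connect timeout, and residential ISPs commonly block outbound port 25 (whether U-verse does here would need confirming), which produces exactly this symptom. A quick check from the server:

      telnet mx2.hotmail.com 25
      REM or, where a newer PowerShell is available:
      REM   Test-NetConnection mx2.hotmail.com -Port 25
      REM If neither can open the connection, no local SMTP setting will fix delivery;
      REM relaying through the ISP's or another authenticated smarthost is the usual workaround.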

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest version of OpenLDAP on CentOS 5.3, installed using yum.

    From my /etc/openldap/slapd.conf:

      database bdb
      suffix "dc=company,dc=com"
      rootdn "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

      BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck. I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a username and password - using Manager and my rootpw, which is in clear text just for testing - every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com (I also tried cn=Manager,dc=company,dc=com):

      Error opening connection: [LDAP: error code 34 - invalid DN]

    I have tried multiple clients and all of them connect as anonymous; none lets me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw - is that correct?
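
    A hedged aside, not from the post: LDAP error 34 (invalid DN) on bind usually means the client was handed a bare username such as "Manager" where it expects a full DN; most GUI clients want cn=Manager,dc=company,dc=com in the user/bind-DN field and dc=company,dc=com as the base. The same bind can be verified from the laptop without any GUI (the hostname below is a placeholder):

      ldapsearch -x -H ldap://ldap.company.com -D "cn=Manager,dc=company,dc=com" -W \
                 -b "dc=company,dc=com" "(objectClass=*)"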

  • Samba / smbd on Centos 6.5

    - by Satalink
    I've installed Samba 4 and have the smb.conf file as follows:

      [global]
      workgroup = WORKGROUP
      server string = Samba Server
      realm = REXIALO.COM
      netbios name = REXIALO.COM
      security = user
      map to guest = Bad Password
      bind interfaces only = no
      interfaces = lo venet0
      log file = /var/log/samba/samba.log
      max log size = 1000

      [webroot]
      path = /usr/local/apache/htdocs
      comment = Example.com webroot directory
      read only = No

    I can connect from the same server with smbclient. Localhost:

      # smbclient -L localhost -U root
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]
              Sharename       Type      Comment
              ---------       ----      -------
              webroot         Disk      RexiAlo webroot directory
              IPC$            IPC       IPC Service (RexiAlo Samba Server)
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]
              Server               Comment
              ---------            -------
              Workgroup            Master
              ---------            -------
      Enter root's password:

    Network:

      # smbclient -L rexialo.com -U
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]
              Sharename       Type      Comment
              ---------       ----      -------
              webroot         Disk      RexiAlo webroot directory
              IPC$            IPC       IPC Service (RexiAlo Samba Server)
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]
              Server               Comment
              ---------            -------
              Workgroup            Master
              ---------            -------

    The problem is when I try to map to the smb webroot from Windows 7: it asks for user/pass but just times out and then prompts for credentials again. The samba.log file does not show any activity other than the startup of the smbd process. Any help would be appreciated.
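
    Two hedged follow-ups, not taken from the post: with security = user the connecting Windows account must also exist as a Samba user, and the Windows 7 timeout symptom is often just ports 139/445 being filtered between the client and the server.

      smbpasswd -a someuser     # add/enable a Samba password for an existing Unix user
      testparm -s               # confirm the smb.conf that Samba actually loaded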

  • SSH not working through Double NAT

    - by d_inevitable
    I am trying to set up port forwarding for SSH through two NATs. The first router translates my internet IP to my outer network (10.1.7.0). In the outer network there's a second router that does NAT to my inner network (192.168.1.0). The target server is connected to both the outer network and the inner network. I cannot change the port forwarding options on the outer router; it is currently configured to forward the SSH and HTTP ports to the router of the inner network.

      [ASCII diagram: Internet -> Outer Router -> Inner Router. The outer router forwards
       SSH and HTTP to the inner router; the SSH server is attached to both the outer
       network (10.1.7.0) and the inner network (192.168.1.0); an outer workstation and
       an inner workstation sit on their respective networks.]

    When connecting from an outer workstation to the address of the inner router, both SSH and HTTP work fine. When connecting from the internet to my public IP with HTTP, the connection works fine as well. However, SSH just times out - most likely because the reply is not routed back properly. I suspect it's either because of SSH itself, or because the server is connected to both the inner and outer networks. Any ideas how I could resolve this issue? The routes on the server are currently:

      ip route show
      default via 10.1.7.254 dev eth0 metric 100
      10.1.7.0/24 dev eth0 proto kernel scope link src 10.1.7.1
      192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.2

    Do I have to change this? If so, how?
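
    A hedged sketch of the usual fix for this kind of asymmetric routing (the table name and the inner router's 192.168.1.1 address are assumptions, not taken from the post): policy routing makes replies to connections that arrived on eth1 leave via eth1 again instead of following the default route out eth0.

      echo "200 innernet" >> /etc/iproute2/rt_tables
      ip route add default via 192.168.1.1 dev eth1 table innernet
      ip rule  add from 192.168.1.2 table innernet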

  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content off that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using:

      wget --load--cookies=FILE URL

      -----------------------------------------------
      DEBUG output created by Wget 1.12 on linux-gnu.

      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
      --2011-01-14 13:57:02--  www.x.org/download.php?id=397003
      Resolving www.x.org... 1.1.1.1
      Caching www.x.org => 1.1.1.1
      Connecting to www.x.org|1.1.1.1|:80... connected.
      Created socket 5.
      Releasing 0x0943ef20 (new refcount 1).

      ---request begin---
      GET /download.php?id=397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 302 Found
      Date: Fri, 14 Jan 2011 11:26:19 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Vary: Accept-Encoding
      Content-Length: 0
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      302 Found
      Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
      Registered socket 5 for persistent reuse.
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
      Skipping 0 bytes of body: [] done.
      --2011-01-14 13:57:02--  www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Reusing existing connection to www.x.org:80.
      Reusing fd 5.

      ---request begin---
      GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Date: Fri, 14 Jan 2011 11:26:20 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Vary: Accept-Encoding
      Content-Length: 2171
      Keep-Alive: timeout=15, max=99
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      200 OK
      Length: 2171 (2.1K) [text/html]
      Saving to: `x.out'
       0K ..   100% 18.7M=0s
      2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
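
    Two hedged observations on the debug output above, not a confirmed diagnosis: the c_secure_* cookies were stored with an expiry of 1901-12-13 (an overflowed/negative timestamp, i.e. already expired), and the actual request only carries the freshly issued PHPSESSID cookie, so the exported login cookies are never sent; also note the long option is spelled --load-cookies.

      # Re-export the cookies with sane expiry times (or fix the expiry column), then:
      wget --load-cookies=cookies.txt --keep-session-cookies --save-cookies=cookies.txt URL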

  • Limit access on Apache 2.4 to ldap group

    - by jakobbg
    I've upgraded from Ubuntu 12.04 LTS to 14.04 LTS, and suddenly my Apache 2.4 (previously Apache 2.2) lets everybody into my virtual host, which is unfortunate :-). What am I doing wrong? Anything with the Order/Allow lines? Any help is greatly appreciated! Here's my current config:

      <VirtualHost *:443>
          DavLockDB /etc/apache2/var/DavLock
          ServerAdmin admin@mydomain.com
          ServerName foo.mydomain.com
          DocumentRoot /srv/www/foo
          Include ssl-vhosts.conf

          <Directory /srv/www/foo>
              Order allow,deny
              Allow from all
              Dav On
              Options FollowSymLinks Indexes
              AllowOverride None
              AuthBasicProvider ldap
              AuthType Basic
              AuthName "Domain foo"
              AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
              AuthLDAPBindDN "cn=searchUser, dc=mydomain, dc=com"
              AuthLDAPBindPassword "ThisIsThePwd"
              require ldap-group cn=users,dc=mydomain,dc=com

              <FilesMatch '^\.[Dd][Ss]_[Ss]'>
                  Order allow,deny
                  Deny from all
              </FilesMatch>
              <FilesMatch '\.[Dd][Bb]'>
                  Order allow,deny
                  Deny from all
              </FilesMatch>
          </Directory>

          ErrorLog /var/log/apache2/error-foo.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog /var/log/apache2/access-foo.log combined
      </VirtualHost>
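
    A hedged sketch of the usual 2.2-to-2.4 fix: the Order/Allow lines are handled by mod_access_compat in Apache 2.4 and, mixed with Require, can end up authorizing everyone on their own, so after the upgrade authorization is normally expressed with Require alone.

      <Directory /srv/www/foo>
          Dav On
          Options FollowSymLinks Indexes
          AllowOverride None
          AuthBasicProvider ldap
          AuthType Basic
          AuthName "Domain foo"
          AuthLDAPURL "ldap://localhost:389/dc=mydomain,dc=com?uid" NONE
          AuthLDAPBindDN "cn=searchUser,dc=mydomain,dc=com"
          AuthLDAPBindPassword "ThisIsThePwd"
          Require ldap-group cn=users,dc=mydomain,dc=com
      </Directory>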

  • Forcing exact hostname match in IIS

    - by iis_newbie
    I am looking for how to force an exact hostname match in IIS when using HTTPS. For instance, I want https://works.mysite.com/resource to be OK, but https://noworks.mysite.com/resource to return 404 (assuming they both resolve to the same IP). IIUC, the default behavior of IIS when going to https://noworks.mysite.com/resource is to show a cert warning; if the user presses continue, the user is able to access the URL. I was able to do this by generating a *.mysite.com SSL cert and then specifying the hostname within the bindings in IIS, but without the * at the beginning the hostname field is disabled and blank. Am I missing something simple here?

  • How to redirect or rewrite IIS site with port in URL to URL without port?

    - by user2573690
    I'm not 100% sure if this is the right part of Stack Overflow to post this, but to me it made the most sense. Sorry if it's not! Currently I have a site in IIS configured on HTTPS with port 7500. I can access this site by using the URL https://portal.company.com:7500. What I would like to do is remove the port number at the end of the URL so users can access the site using https://portal.company.com. I am a complete beginner with IIS. What I have tried is the HTTP Redirect, which, if used on this IIS site, would redirect a user that hits portal.company.com:7500 to some other site, which is not what I need. Another thing I have thought about is creating another IIS site which serves the purpose of being at the URL portal.company.com and, when it's hit, redirects to portal.company.com:7500, but I don't know if this is the best approach. So my question is: what are my options for achieving the behavior mentioned above, and what is the best/recommended approach? I haven't played with URL rewriting before, but I will look into it while I wait for a reply. Thanks! Using IIS Manager on a Windows Server 2008 machine.
