Search Results

Search found 8284 results on 332 pages for 'trusted sites'.


  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content -- no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know whether the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and since nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing a web server for static content (including streaming video). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux options here). So, which to choose? I know that: (1) one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd); (2) Lighttpd's development has evidently gotten slower than it was a while ago; (3) the documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.). Noting points (2) and (3), if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify this: is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.) If the answer to that question is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee as it has a graphical admin user interface plus really good documentation. Yes? I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.


  • Bouncing between a 502 and 503 error

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client who had purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
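
    A quick way to check both halves of that setup (a sketch; clientdomain.com and YOUR.SERVER.IP are placeholders for the client's domain and your server's address) is to confirm what the A record resolves to, and then request the site with a forced Host header, which bypasses DNS entirely and shows whether the name-based virtual host itself answers:

      dig +short clientdomain.com A                               # should print your server's IP
      curl -I -H "Host: clientdomain.com" http://YOUR.SERVER.IP/  # should return your site's headers, not a 502/503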


  • Mystery 0xc0000142 error when starting Java from a service as a different user

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: Manager service (which I don't have control over) running as admin user X starts my executable, which then starts Java as user Y using the standard c# StartInfo.Username/Password controls. Now, from a basic (not elevated or anything, just admin) command prompt I can run that executable, and Java pops up and works fine, running perfectly under the user it should be. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer: Service starts "Application popup: java.exe - Application Error : The application was unable to start correctly (0xc0000142). Click OK to close the application. " (googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh) Service stops (the java shutdown propagated, which is supposed to happen) And here's what process explorer has for the failure: As you can see, everything shows as a success. Now, I think this might have something to do with the permissions (the user java.exe is running under has traverse permission for the entire drive and full permissions to Directory A, which is where the .jar is), but I just can't fathom how something that works fine from the command line (and, this is an upgrade, the previous system without the user-switching aspect works fine from the service) can fail with such a cryptic message and little showing up in logs.
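
    For reference, the user-switching part of the executable is presumably something along these lines (a minimal C# sketch of the standard ProcessStartInfo credential properties; the path, arguments and account name are made up):

      var psi = new System.Diagnostics.ProcessStartInfo
      {
          FileName = @"C:\Java\bin\java.exe",   // hypothetical path
          Arguments = "-jar app.jar",           // hypothetical arguments
          UserName = "Y",
          Domain = ".",                         // local account
          UseShellExecute = false,              // required when supplying credentials
          LoadUserProfile = true                // often relevant when a process won't start under another account
      };
      psi.Password = new System.Security.SecureString();
      foreach (var c in "password") psi.Password.AppendChar(c);
      System.Diagnostics.Process.Start(psi);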


  • When I try to access a website without www I get access denied.

    - by madphp
    I have an Apache web server on a Debian machine. I'm using Virtualmin to administer virtual hosts. I have two sites on this server right now; when I try to access one site without the www in the URL I get an access denied. The other site is fine. The site with the problem is a CakePHP app and has the following .htaccess file in the public_html folder:

      <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteRule ^$ app/webroot/ [L]
          RewriteRule (.*) app/webroot/$1 [L]
      </IfModule>

    Below are the directives for the problem domain:

      SuexecUserGroup "#1001" "#1001"
      ServerName mydomain.net
      ServerAlias www.mydomain.net
      ServerAlias webmail.mydomain.net
      ServerAlias admin.mydomain.net
      DocumentRoot /home/mydomain/public_html
      ErrorLog /var/log/virtualmin/mydomain.net_error_log
      CustomLog /var/log/virtualmin/mydomain.net_access_log combined
      ScriptAlias /cgi-bin/ /home/mydomain/cgi-bin/
      ScriptAlias /awstats/ /home/mydomain/cgi-bin/
      DirectoryIndex index.html index.htm index.php index.php4 index.php5
      <Directory /home/mydomain/public_html>
          Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
          allow from all
          AllowOverride All
          AddHandler fcgid-script .php
          AddHandler fcgid-script .php5
          FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php
          FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php5
      </Directory>
      <Directory /home/mydomain/cgi-bin>
          allow from all
      </Directory>
      RewriteEngine on
      RewriteCond %{HTTP_HOST} =webmail.mydomain.net
      RewriteRule ^(.*) https://mydomain.net:20000/ [R]
      RewriteCond %{HTTP_HOST} =admin.mydomain.net
      RewriteRule ^(.*) https://mydomain.net:10000/ [R]
      RemoveHandler .php
      RemoveHandler .php5
      IPCCommTimeout 31
      <Files awstats.pl>
          AuthName "mydomain.net statistics"
          AuthType Basic
          AuthUserFile /home/mydomain/.awstats-htpasswd
          require valid-user
      </Files>
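
    Two checks that usually narrow down which vhost is actually answering (a sketch; run them on the server itself): apache2ctl -S prints how Apache has mapped server names to virtual hosts, and curl with a forced Host header shows what each name returns regardless of DNS:

      apache2ctl -S
      curl -I -H "Host: mydomain.net" http://127.0.0.1/
      curl -I -H "Host: www.mydomain.net" http://127.0.0.1/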


  • Setting up httpd-vhosts.conf for multiple virtual hosts

    - by Chris Sobolewski
    I have a simple test setup using XAMPP at home, and I am getting really weird behavior when I attempt to set up multiple virtual hosts on this box. Here is my vhosts file:

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName foo
          DocumentRoot "D:\wamp\xampp\htdocs\foo"
          ErrorLog logs/foo-error_log
          CustomLog logs/foo-access_log common
          <Directory "D:\wamp\xampp\htdocs\foo">
              Options Indexes FollowSymLinks Includes ExecCGI
              AllowOverride All
              Order Allow,Deny
              Allow From All
          </Directory>
      </VirtualHost>
      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName bar
          DocumentRoot "D:\wamp\xampp\htdocs\bar"
          ErrorLog logs/bar-error_log
          CustomLog logs/bar-access_log common
          <Directory "D:\wamp\xampp\htdocs\bar">
              Options Indexes FollowSymLinks Includes ExecCGI
              AllowOverride All
              Order Allow,Deny
              Allow From All
          </Directory>
      </VirtualHost>

    When I attempt to visit the first site, it works as expected. When I attempt to visit the second site, I get a weird hybrid mishmash of both sites. It's the weirdest thing.
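
    For what it's worth, with name-based vhosts like these the ServerName values also have to resolve to the local machine, which on a Windows/XAMPP test box usually means hosts-file entries (a sketch, assuming both names should point at this machine); httpd -S, run from the Apache bin directory, then shows how the vhost sections were parsed:

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1    foo
      127.0.0.1    bar

      httpd -S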


  • Recommended offline on-demand virus scanners

    - by ashh
    I have never run full anti-virus on my Windows XP systems. Instead I use various anti-malware tools to manually perform scans every few weeks. This approach, combined with Windows updates and general care about what web-sites I visit and what files I download has kept me 99% free of problems. The remaining 1% has occurred when I download files that I know may contain malware, but still decide the risk is worth it. When on 2 occasions in 10 years I did get caught doing this, I realised that being able to easily scan them would most likely have avoided getting infected. I don't need, or want, to run a "stay resident" anti-virus. Also, the online scanners such as Kaspersky etc limit uploads to small files, so these are not always useful. In summary I would like to simply be able to download a file and then manually initiate an on demand anti-virus scan, on the downloaded file only. I'm sure some/most Anti-Virus do both, however once again I don't really want to pay for or need the stay resident part. Any recommendations (commercial or free)? UPDATE: This is not an exact duplicate, nor a possible duplicate. I searched for and read other questions on anti-virus here at SuperUser and found none that answered my question. I am specifically asking about anti-virus scanners that run ON-DEMAND locally on the computer, not online scanners.


  • Server 2003 and XP client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router, a Windows 2003 R2 server router with all the latest updates, will drop packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved, only fully public IP addresses. No firewalls are running either; all have been disabled. No packet filters on any interfaces anywhere either. I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 x64 R2 system (running Virtual Server 2005, as I don't have an Intel VT compatible chip yet). The edge router can access any external HTTP site just fine, no issues. However, the Windows XP machine is only able to access certain sites. These work: www.google.com www.txstate.edu www.workintexas.com www.thedailywtf.com. These don't: www.yahoo.com www.utexas.edu en.wikipedia.org slashdot.org www.bing.com. I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n, and that connection replicates the issue as well. The network setup:

      My statically assigned IP block: x.x.x.168/29
      DSL Modem ----PPPoE Connection---- x.x.x.169 [EdgeRouter]
      [EdgeRouter] x.x.x.170 ----Virtual Ethernet---- x.x.x.174 [Test2]

    Test2's default gateway is x.x.x.170 and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box) everything works just fine... I'm at my wit's end; I have NO IDEA what's causing this.
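
    For reference, the netcat test above can be reproduced like this (a sketch; the original request elides the host name, so substitute one of the failing sites, and finish the request by pressing Enter twice after the Host line):

      nc www.yahoo.com 80
      GET / HTTP/1.1
      Host: www.yahoo.com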


  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

      http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

      wget --reject=js,txt,gif,jpeg,jpg \
           --accept=html \
           --user-agent=My-Browser \
           --recursive --level=2 \
           www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the search result list (and of course the sites they link to)?
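
    One thing worth ruling out first: in most shells an unquoted & ends the command and puts it in the background, so the query string in that URL never reaches wget intact. Quoting the URL keeps the original options and passes the full search URL through (a sketch):

      wget --reject=js,txt,gif,jpeg,jpg \
           --accept=html \
           --user-agent=My-Browser \
           --recursive --level=2 \
           "http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt"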


  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions that seem exactly like this out there. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: what do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

      http {
          include /etc/nginx/mime.types;
          access_log /var/log/nginx/access.log;
          sendfile on;
          keepalive_timeout 65;
          tcp_nodelay on;
          gzip on;
          gzip_disable "msie6";
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }

    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers indicate Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx, so I've exhausted everything I can think of or have read. Thanks.
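
    Two checks that are safe to run on a live box (a sketch): nginx -V prints the configure arguments, so you can confirm the gzip module was not compiled out, and a plain GET with an Accept-Encoding header shows whether the response actually comes back compressed. It is also worth remembering that by default nginx only compresses text/html unless gzip_types is extended:

      nginx -V 2>&1 | grep -o 'without-http_gzip_module'    # no output means the gzip module is compiled in
      curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost/ | grep -i content-encoding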


  • IIS / Virtual Directory authentication.

    - by Chris L
    I have an IIS (v6)/Windows 2003/.NET 3.5 (app code, libraries etc.) server hosting a website at www.mywebsite.com, mapped to E:\Inetpub\wwwroot\mywebsite. We also have a virtual directory (VirtDir) mapped out to E:\Inetpub\wwwroot\mywebsite\files (although in theory this could be in a different directory or on a separate machine) where we store a customer's files (a bunch of .pdf & .xls). Currently, to access a file you can enter into the URL something like www.mywebsite.com/VirtDir/Customer/myFile.pdf and get access to the file. The problem is the user doesn't have to log into www.mywebsite.com to get access to the file; we would prefer them to log in first. We would like the user to log in via mywebsite and, if valid, let them download files from the virtual directory. www.mywebsite.com and VirtDir are separate sites on the same farm. Anonymous access and Integrated Windows Authentication are both enabled. I'm more of a developer and less of a sysadmin, but hopefully I'm in the right spot; any help would be appreciated.
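
    One common pattern for this is an ASP.NET authorization rule in a web.config inside the virtual directory, so anonymous users are denied before the files are served (a sketch; note the assumption that requests for .pdf/.xls are actually routed through ASP.NET, which on IIS 6 requires a wildcard script mapping):

      <configuration>
        <system.web>
          <authorization>
            <deny users="?" />    <!-- deny anonymous users -->
            <allow users="*" />
          </authorization>
        </system.web>
      </configuration>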


  • How to deploy website in IIS with a host name?

    - by Jayakumar
    I am trying to host my application in IIS. Below are the steps that I follow: (1) publish the code and place it in a path; (2) open IIS, right-click on "Sites" and select "Add Website"; (3) in that dialog I gave the site name and selected the app pool created for the application; (4) I selected the physical path of the published code; (5) I left the IP and port in the binding section without changes and, finally, gave the host name as fus.km.com. When I try to browse the application, the page does not load: "Internet Explorer cannot display the page". The machine domain is km.com. UPDATE: I tried adding the host name to the hosts file and flushing the DNS. The application then asked for user credentials (I use Windows Authentication in the application), but it did not log in. On repeated tries it throws the error: HTTP Error 401.1 - Unauthorized: You do not have permission to view this directory or page using the credentials that you supplied. I tried logging in with a different user but I get the same result.
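
    For testing from the server itself, the hosts-file entry would look something like this (a sketch, assuming the site is being browsed on the same machine that hosts it):

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1    fus.km.com

    Separately, browsing a site under a custom host name from the server itself with Integrated Windows Authentication enabled is also the classic scenario in which Windows' loopback check returns 401.1, which may be worth reading up on before suspecting the credentials themselves.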


  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: we've taken on a new client whose web hosting was previously on their in-house server, which still has their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but believe that I just need to stop the SSL certificate on our server being returned when requested at a particular domain:

      "Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that server has Port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use autodiscover to find the mailbox settings it find your SSL (because 443 is open) and flags it as an error. The solution is to close 443 so its not discovered, Autodiscover will then proceed to mail.{newclient}.com via the MX / ServiceRecords and discover the correct SSL."

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

      <VirtualHost *:443>
          ServerName {newclient}.com
          ServerAlias www.{newclient}.com
          SSLEngine Off
          SSLCertificateFile {NONE}
          SSLCertificateKeyFile {NONE}
      </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!


  • Home-made HTTP proxy server [closed]

    - by Martin Dimitrov
    I wanted to help a friend who has some restrictions at work to visit certain sites. Locally, on a Windows 7 machine, I run an Apache server, and I decided to make it a proxy just for my friend's IP. So I added the following to the configuration file:

      ProxyRequests On
      ProxyVia On
      <Proxy *>
          Order deny,allow
          Deny from all
          Allow from <his.ip>
      </Proxy>

    It worked fine. But shortly afterwards the proxy started to receive many requests of the form:

      66.249.66.242 - - [22/Sep/2012:11:01:12 +0300] "GET /search?hl=en&lr=lang_en&as_qdr=all&ie=UTF-8&q=related:www.aarp.org/aarp-foundation/+allinurl:+foundation&tbo=1&sa=X&ei=BSy2T9L_L8PitQapwtHtBw&ved=0COQBEB8wPw HTTP/1.1" 403 208
      66.249.71.36 - - [22/Sep/2012:11:01:49 +0300] "GET /search?hl=en&lr=lang_en&as_qdr=all&ie=UTF-8&q=related:www.aarp.org/aarp-foundation/+allinurl:+foundation&tbo=1&sa=X&ei=BOCzT-_WK8_0sgbki5XCDA&ved=0COABEB8wPg HTTP/1.1" 403 208

    These are Google IPs. The requests come every 30 seconds or so. My friend is not at work today. In apache_error.log I see:

      [Sat Sep 22 11:09:20 2012] [error] [client 66.249.66.242] client denied by server configuration: C:/wamp/www/aclk
      [Sat Sep 22 11:09:47 2012] [error] [client 66.249.71.36] client denied by server configuration: C:/wamp/www/search

    What the hell is going on? Please, help.


  • How do you make Google's interface always be in your chosen language?

    - by Michael Wolf
    Google's interface and search results don't always appear in my preferred language, English. I'm located in Mexico City and, although I generally have no problem with Spanish, I would prefer search results in English most of the time. (The exception is when I'm using search terms in Spanish.) I'd also prefer the interface to be in English, but that's far less important to me than search results. Google looks at your IP to decide where you're coming from and thus what language to present results in. So, when I type www.google.com into the URL bar, it redirects me to www.google.com.mx. Is there a way to force Google to use one language all the time? Here are some things I've done and tried:

      0) I have a Google account, and I've configured it such that it should know that English is my preferred language. I don't often explicitly log out of Google, so generally Google knows I'm me and my preferences when I access its services.
      1) I've configured my browser to ask for pages in English. Very few sites support this feature at all; Google isn't one of them.
      2) From www.google.com.mx, I can click on "Google.com in English". This works until, I think, I close the browser.
      2a) From www.google.com.mx, I can go into account configuration, which is in English. From then on, everything's in English.
      3) I can append &hl=en (Human Language = English) to the end of the URLs of result pages.

    2, 2a, and 3 all "work", but they're all mildly annoying. I'd rather avoid them if I could. (At the risk of stating the obvious, English and Spanish are the languages I'm dealing with, but I imagine that, say, a francophone using Google from Korea would run into basically the same issue.)


  • How can I safely close this window and forever avoid seeing similar pop-ups from Mackeeper Zeobit's malware and spyware?

    - by Michael Prescott
    The attached image shows a window that just popped up and the only button available is the OK button. I could Force quit Safari, but I've got several sites open right now and don't want to try and find my place again. Besides, I've seen similar hacks in the past and I'd like to learn how to handle them in a way better than just a brute force-quit. I've never heard of MacKeeper or Zeobit, so I opened Firefox and did a few searches while Safari is obviously still stuck, waiting for me to click the sneaky OK button in the dialog window. Anyhow, at least the first few pages of most search results contain lots of blabbering from questionable witnesses about how MacKeeper saved them from some malware or spyware. However, any company that is hacking the browser to maliciously install their product is itself the criminal and not providing a true security application. So, there are three questions here: How can I close this window? Can I do something to Safari to avoid these hacks in the future? (Just curious) Is MacKeeper or Zeobit somehow loading the search results so that no information about their application being malware or spyware is listed (I can't be the only person in the world that is offended by their tactics, even though it appears I am)?


  • OpenVZ vs KVM for Linux VMs

    - by Eliasdx
    Hardware: Intel® Core™ i7-920, 12 GB DDR3 RAM, 2 x 1500 GB SATA-II HDD (no software RAID, because the Proxmox developers don't support it and are sure you'll run into problems). Software: Proxmox VE with KVM and OpenVZ support, and Debian everywhere. I want to run multiple Linux VMs on this server: one for a firewall (I want to try pfSense), one for MySQL, one VM for nginx (my stuff) and ~2 VMs with nginx for other people's web sites. I don't think that pfSense will run in an OpenVZ environment, but it should run in KVM. The question is whether I should set up the other VMs using KVM or OpenVZ. In OpenVZ they should have less overhead for the OS itself, but I don't know about the performance. I heard that KVM is more stable but needs more RAM and CPU. I found this diagram showing an OpenVZ setup on the same hardware I'm using. This guy uses a separate VM for each and every website running on his server. I can't think of any advantage to using so many VMs.


  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1). I'm looking for a way to allow a group of users to use dangerous html tags. There are some ways I see, but I don't know how to implement those. One would be changing safe_html for the pages editors own (1). Another would be to allow those tags on some subtree (2). And yet another finding an equivalent of "static text portlet" that would display in the middle panel (3). We could then use some of the composite products (I stumbled upon Collage and CMFContentPanels), to include the unsafe content on other sites. My site has been ridden by advert bots, so I don't want to remove the filtering all together. I don't have an easy (no false positives) way of checking which users are bots, so deploying captcha now wouldn't help either. The question is: How to implement any of those solutions? (I already asked that on plone mailing list without an answer, so I thought I would give it another try here.)


  • What could cause a 101 Error in WAMP under Windows 7?

    - by Brayn
    Hey, I've been using WAMP for local development for quite a while now, but lately I've been getting an Error 101 message when I browse localhost sites. It's possible this appeared after the last WAMP update, but I'm not 100% sure. If I try again and again, after several page refreshes it works, but it's really annoying! The exact error message is: Error 101 (net::ERR_CONNECTION_RESET): Unknown error. This is my configuration: OS: Windows 7, Apache: 2.2.11, PHP: 5.2.9-2, WAMP: 2.0. Also, the local scripts connect to a remote MySQL server; they don't use the local MySQL (I don't know if it matters, just thought I'd let you know). I've been looking into the Apache logs and found the following. It seems that the Apache server keeps restarting and I can't figure out why:

      [Wed Oct 14 13:52:30 2009] [notice] Parent: child process exited with status 255 -- Restarting.
      [Wed Oct 14 13:52:30 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
      [Wed Oct 14 13:52:30 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Oct 14 13:52:30 2009] [notice] Parent: Created child process 6784
      [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Child process is running
      [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Acquired the start mutex.
      [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting 64 worker threads.
      [Wed Oct 14 13:52:31 2009] [notice] Child 6784: Starting thread to listen on port 80.
      [Wed Oct 14 13:52:32 2009] [notice] Parent: child process exited with status 255 -- Restarting.
      [Wed Oct 14 13:52:33 2009] [notice] Apache/2.2.11 (Win32) PHP/5.2.9-2 configured -- resuming normal operations
      [Wed Oct 14 13:52:33 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Oct 14 13:52:33 2009] [notice] Parent: Created child process 3572
      [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Child process is running
      [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Acquired the start mutex.
      [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting 64 worker threads.
      [Wed Oct 14 13:52:33 2009] [notice] Child 3572: Starting thread to listen on port 80.

    Also, I've checked Windows Firewall and disabled any other protection that I have on this computer, with no improvement. Thanks!


  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for, but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails:

      A total of XX possible successful probes were detected (the following URLs contain strings that match one or more of a listing of strings that indicate a possible exploit):

      /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
      /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
      /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301
      /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
      //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200

    This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few in to see what they get, but those all appear to just return the site's home page. This server is just hosting a few Joomla! sites... but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure whatever it is I've got it covered (or not installed). The escalation of these entries has me a bit concerned.


  • Default /server-status location not inheriting in Apache

    - by rmalayter
    I'm having a problem getting /server-status to work with Apache 2.2.14 on Ubuntu Server 10.04.1. The default symlinks for status.load and status.conf are present in /etc/apache2/mods-enabled, and status.conf does include the /server-status Location and the appropriate allow/deny directives. However, the only vhost I have in sites-enabled looks like this:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
          <Proxy balancer://tomcatCluster>
              BalancerMember ajp://qa-app1:8009 route=1
              BalancerMember ajp://qa-app2:8009 route=2
              ProxySet stickysession=ROUTEID
          </Proxy>
          <ProxyMatch "^/(mytomcatappA|mytomcatappB)/(.*)">
              ProxyPassMatch balancer://tomcatCluster/$1/$2
          </ProxyMatch>
          # proxy anything that's not a Tomcat URL to IIS on port 80
          <Proxy />
              ProxyPass http://qa-web1/
          </Proxy>
      </VirtualHost>

    The idea is to proxy anything with a Tomcat URL to a cluster of Tomcats, and anything else to an IIS box. However, this seems to result in requests to /server-status being sent to IIS as well. Copying the /server-status Location explicitly into the vhost configuration doesn't seem to help, no matter what order I use. Is it possible to make /server-status work within a vhost configuration that has a "default" proxy rule like this?
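
    One approach worth sketching here (untested against this exact config, so treat it as an assumption): Apache lets a ProxyPass rule with ! as the target exempt a path from proxying, so /server-status can be excluded before the catch-all and handed to the status handler inside the same vhost:

      ProxyPass /server-status !
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>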


  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": Country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated Software/Media; no binary on my site, except "profile icon". Hello! I want to start publishing "social" websites / apps, and I found that the biggest initial problem is this: Any and all services I have to depend on, including Domain Registrar, DNS provider, Server/Cloud Provider, CDN Provider, ... even my Insurance Agent, basically say that they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating because you just can't fully control what users post on your "social website", and you so you basically have to expect when you go to bed that your site is going to be gone when you wake up. I've heard a lot of horror stories about this. Since the "Terms Of Service" of all those providers are foremost to protect themselves from legal actions, and those legal actions depend on the country where they are located, it seems like the first step is to find which country is the "safest" to locate a site. "Safest" being defined as, where I am least likely to get in legal trouble with the local authorities, if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe", if my users can't get to my sites, or the latency is unacceptable. I am targeting the English speaking people in any country as my future users.


  • High latency due to non-presence of a transit provider in my country

    - by nixnotwin
    My ISP, a state-owned incumbent, buys bandwidth from different transit providers. Whenever it buys transit it announces only a specific prefix (in most cases a hitherto unused one) through the new transit AS. For example, if it runs out of bandwidth, it buys bandwidth from a new transit provider and announces a new prefix through it, while the same prefix is not announced (or is announced with the lowest metrics, so that the routes are very rarely used) via the old transits, which continue to provide bandwidth to it. I am a business customer, so I have a fiber-based link to the ISP and a tiny subnet is given to me. The subnet provided to me is part of a prefix announced by the AS of a transit provider who, it seems, does not have a presence in my country. So when I trace the packets, once they leave my ISP's AS they take about 275ms to reach the transit provider's core router, which is located in the USA (half the world away). Also, for upstream traffic my ISP uses a transit provider (tier 1) who has a presence in my country, but the return path is always through the transit in the USA. So average latency is 400ms. All the users of other ISPs in my country discover my subnet via the USA. Even the traffic from neighboring countries and from Europe (which is much nearer) follows the path via the USA. Sites using CDNs also resolve to IPs in the USA. I have informed the ISP NOC about the issue and asked them to provide an IP subnet belonging to a prefix announced by a local transit (preferably a tier 1 transit provider), and I am waiting for a reply. My question: is this a serious issue that I must follow up to get resolved? When I compared the latency of other providers in my country, it is, in most cases, less than half of my ISP's latency. Why doesn't my ISP announce all its prefixes to all of its transit providers, so that packets can take efficient and nearer routes to reach prefixes that originate within its network?
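
    When following this up with the NOC, a path report from both directions makes the asymmetric routing easy to demonstrate (a sketch; mtr has to be run once from your side and once from a host on the far side, and the destination name here is just a placeholder):

      mtr --report --report-wide --report-cycles 50 destination.example.net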


  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc. I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this: (1) install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located; (2) have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS. Option one is more costly to set up and has a higher operational cost. It also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can be made to the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues. I'd like to know why (if at all) operating a remote network of devices located in various out-of-band locations in this way is a bad idea. Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.
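
    The hourly auto-reconnect mentioned above can be done with a scheduled task that calls rasdial against the configured VPN phonebook entry (a sketch; the connection name and credentials are placeholders):

      schtasks /create /tn "Redial estate VPN" /sc hourly /ru SYSTEM /tr "rasdial EstateVPN vpnuser vpnpassword"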


  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which was free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so for them it was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring this one. My old host says there will be a charge of $15 to them. Namecheap's page seems to imply you don't need the current registrar to do anything, but it also seems to be based on sending an email to the address listed in whois. Of course, my old host has WhoisGuard on the domain, so the only email on it is [email protected] (and not a unique [email protected], just [email protected]), which doesn't go to me. Again, there doesn't seem to be an option to disable this. So, is it a case of paying my old host's fee and paying again for the domain from Namecheap, or is there some other way to transfer my domain? (I'm not really sure which of the trilogy sites this is best for.)


  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on StackOverflow and I was suggested that I should ask it here: We have a self hosted git server (Gitolite) on a VPS account (CPU:2.68GHz RAM:1824MB). This same VPS is also used to publish our underdevelopment web apps for client demos. (Very little traffic). so the main use of the server is as a Git Server Only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day when 6-7 people are trying to access the server (sometimes same repo) we get frequent error message: ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number fatal: The remote end hung up unexpectedly After trying for 10-15 minutes it generally succeeds. During early mornings and late nights when there are only 1-2 people, git commands work with 100% success rate. Also I would like to note that if I access the other file hosted on the server through HTTP it works fine. I found a couple of questions on StackOverflow and on other sites regarding this. But most of the people point towards SSH key set up or conflicts between Msysgit and Cygns SSH. However I don't think this is the problem in our case as we get this behavior on Windows (using msysgit only) as well as Mac Machines. Also if it was SSH configuration issue then it shouldn't work at all. But in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to same server (or same repo) or something like that. Does there exists a setting or a conf file that needs to modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.

