Search Results

Search found 8284 results on 332 pages for 'trusted sites'.


  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". I did some research, and the answer I got from the coder is below. I did all of that, but the one thing I could not do is turn off what he refers to as the performance cache, talked about in the last paragraph. I am on CentOS.

    "Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching, which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file does not actually hit the disk until the upload is complete, making it worthless for reporting progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file.

    1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or into some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).

        # Turn off mod_security filtering.
        SecFilterEngine Off
        # The below probably isn't needed,
        # but better safe than sorry.
        SecFilterScanPOST Off

    If the above method does not work, try putting the following lines into the file:

        SetEnvIfNoCase Content-Type \
        "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        mod_gzip_on No

    2. "Performance Cache" enabled on OS X Server. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching". Apparently, if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites."
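    Putting the two suggestions together, a single .htaccess might look like the sketch below. This assumes the host really is running the mod_security 1.x directives quoted above plus mod_gzip, which is worth confirming with the host before relying on it:

        # Turn off mod_security filtering (mod_security 1.x syntax).
        SecFilterEngine Off
        # Probably not needed, but better safe than sorry.
        SecFilterScanPOST Off
        # Ask mod_security not to buffer multipart file uploads.
        SetEnvIfNoCase Content-Type "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        # Disable mod_gzip for this directory.
        mod_gzip_on No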

    Read the article

  • Linux/Apache performance very slow even on local network

    - by klausch
    I have an Ubuntu server machine running Apache and MySQL. System and version info is as follows:

        Linux kernel 3.0.0-12
        Apache/2.2.20
        MySQL Ver 14.14 Distrib 5.1.58

    I am running a few websites on this server, some HTML only, some PHP/MySQL. The problem is that response time is very slow, on the static as well as the dynamic sites. Sometimes it takes more than 10 seconds before a response is given, which makes the sites very slow and almost unusable. The problem occurs even when requesting from the local network. I have added the involved subdomains to my /etc/hosts file, and above all the problem is not solved by using IP numbers instead of URLs, so there is no DNS lookup issue. I have modified the log format to show response times, and sometimes a file takes 12 seconds to be served; see the jquery~.js file in the example screenshot. I have no explanation for this extremely long response time, and it is not even the only issue here: some other files take a long time to be served too, but do not show a long response time in the log file. So probably different issues are involved here. I cannot find a solution so far; any suggestions? Thanks in advance, Klaas

    Link to screenshot picture from the access logfile.

    Some extra configuration info. apache2.conf (comments removed):

        LockFile ${APACHE_LOCK_DIR}/accept.lock
        PidFile ${APACHE_PID_FILE}
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        <IfModule mpm_prefork_module>
            StartServers 5
            MinSpareServers 5
            MaxSpareServers 10
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_worker_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        <IfModule mpm_event_module>
            StartServers 2
            MinSpareThreads 25
            MaxSpareThreads 75
            ThreadLimit 64
            ThreadsPerChild 25
            MaxClients 150
            MaxRequestsPerChild 0
        </IfModule>
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}
        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
        DefaultType text/plain
        HostnameLookups Off
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        Include mods-enabled/*.load
        Include mods-enabled/*.conf
        Include httpd.conf
        Include ports.conf
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %T/%D" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent
        Include conf.d/
        Include sites-enabled/

    And the virtual host file for one of the slow sites; in fact it is pretty straightforward:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerSignature EMail
            ServerName toenjoy.drsklaus.nl
            DocumentRoot /var/www/toenjoy.drsklaus.nl
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/toenjoy.drsklaus.nl/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride AuthConfig
                AuthType Basic
                AuthName "To Enjoy"
                AuthUserFile /etc/.htpasswd
                Require user petraaa
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    And the output of free -m:

        klaas@ubuntu-server:/etc/apache2$ free -m
                     total       used       free     shared    buffers     cached
        Mem:          1997       1401        595          0        144       1017
        -/+ buffers/cache:        238       1758
        Swap:         2035          0       2035

    I have no indication that swapping occurs at the moments the site is slow. I have run top and it does not appear to be a CPU issue. I have the impression that the spawning of an Apache thread could be the bottleneck, but that is just a guess. Maybe this gives some extra information! EDIT: The problem seemed to be gone for some time but occurs again! And not only with Apache: connecting using SSH also takes a tremendous time, sometimes up to 15 seconds before the passphrase is asked for. scp also works very slowly. The behaviour is really unpredictable and makes the server very hard to use. Any ideas?
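    One way to narrow a problem like this down (a sketch, not a diagnosis) is to time the individual phases of a request from another machine on the LAN with curl. If the first-byte time is large while the connect time is small, the delay is inside Apache rather than on the network; if SSH shows the same slow-start pattern, suspect something host-wide such as reverse DNS lookups or disk waits rather than Apache itself:

        # Substitute any of the slow vhosts for the URL below.
        curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s first byte: %{time_starttransfer}s total: %{time_total}s\n' \
            http://toenjoy.drsklaus.nl/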

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are sysadmins' thoughts on mitigating the 'Firesheep' attack for the servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites. It is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and interferes with site indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on Stack Overflow; however, it would be interesting to see what the systems engineers think.
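    For context, the usual application-layer mitigation is to force HTTPS and mark session cookies Secure, so they are never sent over plain HTTP where Firesheep can see them. A minimal Apache sketch (assuming mod_rewrite and mod_headers are available; adapt to whatever actually serves the site):

        # Redirect all plain-HTTP traffic to HTTPS.
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        # Ensure cookies set by the application carry the Secure and HttpOnly flags.
        Header edit Set-Cookie ^(.*)$ "$1; Secure; HttpOnly"

    SSL offload on the firewalls only changes where the encryption terminates; the cookie flags are still needed, or the browser will happily leak the session cookie on the first http:// link it follows.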

    Read the article

  • Can you have a staging and production slot in Azure Websites

    - by Barry King
    I'm looking at hosting 3 websites (they will all use the same linked database resource, but I think I have to use 3 separate Websites within Azure for this): www.website.com, provider.website.com and admin.website.com. Using Windows Azure Websites, can you have a Staging and a Production slot? I think this feature is only available to Azure Cloud Services, but there is little documentation on this. If it's not possible, other than spinning up 3 more sites to act as the staging sites, is there another way? I want the ability to "swap" from staging to production.

    Read the article

  • Building intranet search

    - by gmkv
    At work, we have lots of information squirreled away in many different sites -- wikis, product docs, ticketing system, etc -- many of which require authentication. I'm very interested in having a single way to search all our various silos, and in my spare time have looked at Nutch, Grub, Django + Haystack, etc. None of these is a complete solution a la Google Mini or Google Search Appliance. Has anybody built a basic intranet search engine out of a mixture of these tools? Would you have recommendations about how to go about it? I like Django, and Haystack seems to be a mildly popular search solution for it, but I'd need to wire up a crawler that can support crawling authenticated sites to it.
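    As a rough illustration of the authenticated-crawl piece (a sketch only; the URL and credentials are placeholders, and silos using form-based or SSO logins will need more than HTTP Basic auth), wget can mirror a protected site into a directory that a separate indexer then processes:

        # Fetch an authenticated wiki two levels deep for offline indexing.
        wget --recursive --level=2 --no-parent \
             --user=searchbot --password=changeme \
             --directory-prefix=/var/spool/intranet-crawl \
             https://wiki.internal.example.com/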

    Read the article

  • Getting IIS redirects proper for second HTTP site

    - by Gotenks
    2x IIS sites on one host. I have mainsite.domain.com and secondsite.seconddomain.com. Both sites point to the same IP in public DNS. Nothing is wrong with mainsite.domain.com; it redirects and goes to its own HTTPS site without issue. Going to secondsite.seconddomain.com without HTTPS, it redirects me to the HTTPS mainsite.domain.com when I want it to go to its own secured site. Oddly, HTTPS secondsite.seconddomain.com still works as expected. Is there any way to make HTTP requests to secondsite.seconddomain.com redirect to its own HTTPS entry?
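    Assuming the URL Rewrite module is installed on the IIS box, one approach (a sketch) is a per-site rule in secondsite's own web.config, so its redirect stays independent of whatever mainsite or the default site does:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="SecondsiteHttpsRedirect" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Because the rule lives under secondsite's site root and uses {HTTP_HOST}, it cannot send visitors over to mainsite the way a shared HTTP-to-HTTPS redirect configured on the wrong site can.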

    Read the article

  • Nginx Virtual Host upstream error

    - by TenJack
    I'm trying to add another virtual host and it keeps giving me this: host not found in upstream "domain1" error, even though I have changed the upstream from domain1 to something else in my sites-enabled file. It used to be domain1, but it's almost as if nginx is caching this value somewhere. This is what my sites-available/mysite.com file looks like:

        upstream mysite {
            server 127.0.0.1:5000;
        }

        server {
            listen 80;
            server_name www.mysite.com;
            rewrite ^/(.*)$ http://mysite.com/$1 permanent;
        }

    And my thin server is running on port 5000 for this.
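    For comparison, a sketch of a server block that actually references the upstream; the block above only rewrites www to the bare domain, so nothing in it should ever mention "domain1", which suggests another file in sites-enabled (or a dangling symlink to an old version) still does. Running nginx -t before reloading will print the offending file and line:

        server {
            listen 80;
            server_name mysite.com;
            location / {
                # Hand requests to the thin server defined in the upstream block.
                proxy_pass http://mysite;
                proxy_set_header Host $host;
            }
        }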

    Read the article

  • Seeing DNS changes takes too long on my PC, can it be my router misconfiguration?

    - by Borek
    I administer a few sites and need to update their DNS entries from time to time, e.g. adding an A record pointing a certain subdomain to a certain IP. When I check sites like http://www.opendns.com/support/cache/, I can clearly see the DNS change taking effect throughout the world - is it just my PC that can't see this change? (ping newsubdomain.example.org says it cannot resolve the host name.) The network "map" is like this: My PC -> my router -> my ISP's router -> internet. On my PC the DNS is set automatically, which means that if I run ipconfig /all, my router is returned as the DNS server (192.168.1.1). On my router, the DNS is set to what my ISP provided me with. Is this correct? What can I do to see new hostnames resolved more quickly?
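    A quick way to tell where the stale answer is coming from (a sketch; newsubdomain.example.org stands in for the real record) is to flush the PC's own cache and then ask a public resolver directly, bypassing the router:

        ipconfig /flushdns
        nslookup newsubdomain.example.org 8.8.8.8

    If the direct query resolves while the normal one still fails, the old (or negative) answer is being cached by the router or the ISP's resolver, and the record's TTL determines how long that lasts.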

    Read the article

  • OpenVPN: Single certificate authority, multiple VPNs

    - by darwish
    The company I work for has a single site (I'll refer to it as "Site A"). There are several private networks within Site A. We have a running instance of OpenVPN which allows some employees to connect to one of the private networks in Site A. We're planning to extend our facilities to another site (which I'll refer to as "Site B") and we wish to connect both sites using OpenVPN. The VPN that will connect sites A and B will be a trunk link, meaning it will have access to all networks. If we use the same certificate authority for both VPN servers, this will allow the employees, who can only connect to one of the private networks within Site A, to connect to the site-to-site link, which would give them access to all networks. Of course this is undesirable. Using 2 different certificate authorities seems like the obvious solution, but it doesn't feel right. I wonder if there's a way to maintain permission control within a single certificate authority.
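    One commonly used approach within a single CA (a sketch, not a full design) is to have the site-to-site server accept only the certificate issued to Site B's gateway, so employee certificates signed by the same CA are rejected on the trunk link. In that server's config, something like the following, where the common name "siteB-gateway" is hypothetical (OpenVPN 2.3+ syntax; older releases used the since-deprecated tls-remote directive instead):

        # Only accept a client certificate whose CN is exactly "siteB-gateway".
        verify-x509-name "siteB-gateway" name

    The employee-facing server keeps its own restrictions, so both servers trust the one CA but admit different certificates.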

    Read the article

  • Creating a fallback error page for nginx when root directory does not exist

    - by Ruirize
    I have set up an any-domain config on my nginx server, to reduce the amount of work needed when I open a new site/domain. This config allows me to simply create a folder in /usr/share/nginx/sites/ with the name of the domain/subdomain and then it just works.™

        server {
            # Catch all domains starting with only "www." and boot them to the non-"www." domain.
            listen 80;
            server_name ~^www\.(.*)$;
            return 301 $scheme://$1$request_uri;
        }

        server {
            # Catch all domains that do not start with "www."
            listen 80;
            server_name ~^(?!www\.).+;
            client_max_body_size 20M;

            # Send all requests to the appropriate host
            root /usr/share/nginx/sites/$host;
            index index.html index.htm index.php;

            location / {
                try_files $uri $uri/ =404;
            }

            recursive_error_pages on;
            error_page 400 /errorpages/error.php?e=400&u=$uri&h=$host&s=$scheme;
            error_page 401 /errorpages/error.php?e=401&u=$uri&h=$host&s=$scheme;
            error_page 403 /errorpages/error.php?e=403&u=$uri&h=$host&s=$scheme;
            error_page 404 /errorpages/error.php?e=404&u=$uri&h=$host&s=$scheme;
            error_page 418 /errorpages/error.php?e=418&u=$uri&h=$host&s=$scheme;
            error_page 500 /errorpages/error.php?e=500&u=$uri&h=$host&s=$scheme;
            error_page 501 /errorpages/error.php?e=501&u=$uri&h=$host&s=$scheme;
            error_page 503 /errorpages/error.php?e=503&u=$uri&h=$host&s=$scheme;
            error_page 504 /errorpages/error.php?e=504&u=$uri&h=$host&s=$scheme;

            location ~ \.(php|html) {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_intercept_errors on;
            }
        }

    However, there is one issue I'd like to resolve: when a domain doesn't have a folder in the sites directory, nginx throws its internal 500 error page, because it cannot redirect to /errorpages/error.php as that file doesn't exist. How can I create a fallback error page that will catch these failed requests?
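    One possible fallback (a sketch; the paths mirror the config above) is to bail out before the error_page machinery ever looks inside the missing root, by testing for the per-host folder at the top of the catch-all server block:

        # If no folder exists for this host, serve a plain response instead of
        # recursing into error.php under a root that is not there.
        if (!-d /usr/share/nginx/sites/$host) {
            return 404 "No such site is configured on this server.";
        }

    Another option is to serve /errorpages/ from a fixed shared directory with its own root directive, so the error handler exists regardless of which $host was requested.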

    Read the article

  • Windows 7 / Internet Explorer 8

    - by Rene
    I am a shop owner at zazzle.com. About six weeks ago, when my computer was running Windows XP/IE7, my sites, as well as Zazzle's homepages, went out on me: I can only see part of each page. Since that time I have a new computer running Windows 7/IE8, thinking that would solve the issue. It did not. Zazzle's emails told me to download Firefox and/or Internet Explorer 7. I tried Firefox and got a different problem at the Zazzle site: now I was getting only the 'view source' pages on Zazzle's homepages and my own shop sites as well. Question: Can I download IE7 onto my IE8 computer? Can this be done without loading that compilation of Internet Explorer 1 through 8? What do you think is the best solution to this problem?

    Read the article

  • Mozilla nonsense. Page changes size by itself

    - by Browser Madness
    I have never intentionally changed the size of the font in the latest Mozilla browser install on my Windows machine. For example, the Google site is now at 200% size, and I did nothing to make this happen. What's worse is that it does not change back but remembers this! Similarly, other sites are too small and it remembers this per site. What is going on here? I mean, what nonsense! How can I undo this? And for extra points, who came up with this absurd behavior at Mozilla? Not making this up, folks: 15.0.1. It is not at all clear why it changes size or how to go back to the default size for these sites. Actually it just happened again while editing this entry: the icon changes and then the font size is too small.

    Read the article

  • Web Content Filtering for Windows Clients

    - by djoyce
    I'm working with a small business to solve a bunch of problems. One is that their Windows 7 POS registers need to have web access restricted to only three remote support sites, but the back-office machine needs an unfiltered connection. I'd like something I can install and configure on the few registers to block all but those few sites. In a perfect world this would restrict the normal register user, but the admin user would not be filtered. Free is best, if it works, but a small fee would be alright too. Microsoft's Family Safety filter is close, but requires a Windows Live account, which isn't ideal but may be alright. Has anyone used this in a small business environment? I'd prefer something easily managed at the local machines. K9 Web Protection is interesting and I'm going to look into it more. Are there other options? It seems like someone would have made something simple like this as an open source project, but maybe not.
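    One lightweight, software-free trick worth mentioning (a sketch; the three support domains are placeholders) is a proxy auto-config (PAC) file set in the registers' Internet Options: the allowed sites go direct and everything else is pointed at a proxy that doesn't exist, so the request simply fails. The back-office machine just doesn't use the PAC file, and an admin account can be exempted by giving it different proxy settings.

        // allow-only.pac
        function FindProxyForURL(url, host) {
            if (dnsDomainIs(host, "support1.example.com") ||
                dnsDomainIs(host, "support2.example.com") ||
                dnsDomainIs(host, "support3.example.com")) {
                return "DIRECT";
            }
            // Nothing listens here, so every other request dead-ends.
            return "PROXY 127.0.0.1:1";
        }

    It isn't tamper-proof against a determined user, but for a locked-down register account it is often enough.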

    Read the article

  • IPv6 feature in Network Adaptor is Slowing Internet

    - by Teknophilia
    The past few days, my internet browsing has become very poor. It's not a matter of speed, as a speed test will give at least 15 Mbps; it seems as if my laptop has a hard time actually connecting to the sites. I've found a possible culprit, but don't know why it would affect anything: going to the adapter settings and disabling IPv6, but leaving IPv4, my browsing is back to normal. Re-enabling IPv6 brings back the issue. This is strange, though, because I have always had IPv6 enabled. Moreover, using sites to test IPv6 compatibility, I fail with IPv6 enabled on my adapter and pass when it's disabled. Any ideas about why this is happening, and how to fix it?
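    If the tests point at broken IPv6 connectivity (often a router or ISP that advertises IPv6 it can't actually route), a middle ground between enabled and disabled is to tell Windows to prefer IPv4. This sketch relies on Microsoft's documented DisabledComponents bitmask (0x20 = prefer IPv4 over IPv6); run it from an elevated prompt and reboot afterwards:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisabledComponents /t REG_DWORD /d 0x20 /f

    That keeps IPv6 available while stopping the system from trying, and timing out on, IPv6 addresses first.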

    Read the article

  • Advice for an EC2 Architecture and Deployment Strategy

    - by Mark
    My company is currently migrating several websites and PHP web applications (standard LAMP stack) from three in-house servers to Amazon EC2. Because we had only three servers, we clustered several low-traffic websites with perhaps one high-traffic web application and served them from the same server. The server admin has pretty much copied the previous architecture wholesale onto the EC2 instances, simply upping the instance size to account for the highest-traffic client that occupies each particular instance. This architecture might be okay if it weren't for deployment. Any time one of these sites/apps changes, it means redeploying the entire instance, along with the 30 sites/apps it hosts, instead of just updating one. How can we architect our cloud in a more modular fashion? Should each app get its own appropriately sized instance? What is the best strategy for deployment in this type of situation?
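    As a small illustration of the per-app alternative (a sketch; host names and paths are placeholders), the deployment step shrinks to pushing one application's code to one target rather than re-baking an instance image that carries 30 sites:

        # Deploy a single app to its own (or a shared) instance and reload Apache.
        rsync -az --delete ./app1/ deploy@app1.example.com:/var/www/app1/
        ssh deploy@app1.example.com 'sudo service apache2 reload'

    Whether each app then gets its own small instance or several small sites share one is mostly a cost question; the key change is that deployment units and instances stop being the same thing.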

    Read the article

  • Very frequent "Server not found" messages

    - by Village
    Recently, while browsing and clicking or typing an address, I get "Server not found", "Connection was reset", or a half-loaded page. Usually the page loads instantly after clicking reload, but this means I must reload on every page. Images on one site but stored at a different server don't load until reloading a second time. Clicking "submit" on many sites frequently doesn't work unless I reload many times. Sometimes sites load but without colors and formatting, appearing as they would in Lynx. This seems to happen with every website. My Internet service claims everything on their end is fine. This happened a day after running an update in aptitude. I have not updated any hardware. I have tried clearing Iceweasel's cache. I do not have any router or other equipment. What could be going on? How can I troubleshoot this? PPPoE connection, Iceweasel 3.5.16, Debian 6.
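    Given the PPPoE connection, one common and easily tested culprit is an MTU mismatch, which produces exactly this kind of intermittent, reload-fixes-it behaviour. A sketch of the check on Debian (1464 bytes of payload + 28 bytes of headers = 1492, the usual PPPoE MTU; example.com stands in for any reachable host):

        ping -M do -s 1464 example.com

    If that size fails while smaller sizes succeed, lowering the interface MTU (or enabling MSS clamping on the PPPoE link) is the usual fix.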

    Read the article

  • Should I be using www. when setting up virtual hosts on apache?

    - by MAZUMA
    Does it matter whether or not I include the www. sub-domain when creating new virtual hosts on Apache? So is this: /etc/apache2/sites-available/www.example.com better than this: /etc/apache2/sites-available/example.com? I would assume I'd need to a2ensite either www.example.com or example.com, depending on which method is used. This might be a fairly basic question, but I have no one else to ask and want to do it right.
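    It makes no functional difference: the file name under sites-available is only a label that a2ensite symlinks into sites-enabled, and Apache matches requests against the ServerName/ServerAlias directives inside the file, not against the file name. A sketch (example.com is a placeholder):

        # /etc/apache2/sites-available/example.com  -- the name here is arbitrary
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example.com
        </VirtualHost>

    Then sudo a2ensite example.com && sudo service apache2 reload. Most people name the file after the bare domain simply because it sorts and reads more cleanly.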

    Read the article

  • Personal Browsing Monitor Software [closed]

    - by jmadden93
    Does anyone know of any personal browsing monitor software? I'd like to be able to monitor my own browsing habits and the time I spend on entertainment vs. work vs. educational sites, something that offers more than simply looking at the history feature built into browsers. It would be nice if it gave you a breakdown of how much time you spend on certain categories of sites, like social media vs. video vs. news, productivity, etc. I think it would be useful to know how one spends one's time.

    Read the article

  • Unable to install some applications on windows 7 64 bit (These files can't be opened error)

    - by rzlines
    I get the following error when I try to install some applications on my Windows 7 64-bit system. How do I turn this off, given that I know the applications I'm installing are trusted? I have turned off Windows Defender and tried to tweak Internet Explorer security settings according to the first few Google results, but I still get the same error. (I also created a new user account and tried importing new registry keys, but nothing helped even then.) How can I solve this?

    Read the article

  • How to block a website completely?

    - by user37076
    I want to block some sites (e.g. YouTube, news sites), because I have a problem with procrastination and these websites hurt my productivity very much. I used to block them by adding them to the HOSTS file. However, every time I want to take a break, I gradually end up opening the hosts file and commenting out my block again. Is there any way I can block the websites so that I cannot unblock them, or at least make unblocking a little bit harder (e.g. so that I have to reboot my PC)? I have no access to the router or any firewall besides the ones on my computer. I just want to FORCE myself to work without any chance to procrastinate.
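    For reference, a HOSTS-file sketch looks like the following (the domains are whatever you want to block; news.example.com is a placeholder). The harder part is adding friction, for example by removing your own write permission on the file afterwards (on Windows an elevated icacls deny on your account, on Linux chattr +i), so undoing the block takes a deliberate extra step:

        # C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux)
        127.0.0.1 youtube.com
        127.0.0.1 www.youtube.com
        127.0.0.1 news.example.com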

    Read the article

  • Why am I having trouble viewing HTTPS websites only using Chrome only on my employer's network?

    - by user1742777
    I'm using Google Chrome on the new MacBook Pro that has been provided to me by my employer. Many of the HTTPS sites I visit do not work when I visit them using Chrome while I am connected to my employer's network. Example: www.facebook.com. These same sites work perfectly fine if I use a different browser (like Safari), or even with Chrome when my MacBook is connected to my home WiFi network. Chrome reports the error: "The certificate was signed by an unknown authority". See the attached screenshots. How can I resolve this problem? I really want to use Chrome, but not having access to numerous important work and outside websites is unacceptable.
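    One common cause on a work network is an SSL-inspecting proxy that re-signs certificates with the company's own CA, which Safari may already trust via the system keychain while Chrome's validation is still failing. A quick way to check (a sketch, run from Terminal while on the work network) is to look at who actually issued the certificate being served:

        openssl s_client -connect www.facebook.com:443 -servername www.facebook.com </dev/null 2>/dev/null \
            | openssl x509 -noout -issuer -subject

    If the issuer is the employer's proxy rather than a public CA, asking IT for their root certificate and adding it to the macOS System keychain as trusted normally clears the Chrome errors.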

    Read the article

  • How can I restore stored passwords in Firefox 15.0.1 when deleted by mistake?

    - by Bob Legringe
    By mistake, I deleted my stored passwords using the "Wise Disc Cleaner 7" program. As I saw on another thread, the passwords are stored in two files: signons.sqlite and the encryption key file key3.db. When opening the file signons.sqlite with a text editor, I can see that the web addresses of the sites belonging to the passwords are still there. They have not been deleted by the "Wise Disc Cleaner 7" program, and adding a stored password in Firefox just modifies the file. However, Firefox will not display my old stored passwords, nor their respective sites. Is there any way to "undelete" the passwords?
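    Before anything else, it is worth checking whether the login rows themselves are still in the database or only profile metadata survived. A sketch using the sqlite3 command-line tool (the table and column names follow the old signons.sqlite schema; adjust if they differ in your version):

        sqlite3 signons.sqlite "SELECT hostname, encryptedUsername FROM moz_logins;"

    If rows come back, the entries exist and the question becomes whether key3.db (the decryption key) is intact; if the query returns nothing, the cleaner really did remove the rows, and only a file-level undelete or a backup of the profile folder will bring them back.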

    Read the article

  • Why are my favorite websites becoming slower, over months?

    - by Wolfpack'08
    I spend a lot of my time at sites for watching online videos: YouTube, gorillavid, thedailyshow.com, etc. I used to watch the videos in full-screen mode, and then that became very laggy. So I started watching them with full-browser zooming. Then that became laggy. Recently, I've had to actually zoom out; otherwise the video will lag so much that my PC locks up. Could this be a symptom of my processor, RAM, or motherboard going bad? Or does it perhaps have something to do with software like Chrome, or the players the sites are using, being updated?

    Read the article

  • Remote additional domain controllers

    - by user125248
    Is it possible to set up several additional domain controllers (ADCs) at remote locations, connected via medium-bandwidth DSL (2-10 Mbit) WAN connections, for a single domain (intranet.example.com)? And would it be a good idea? We have five sites and would like to have extremely high availability if any of the sites were to lose its Internet connection. However, each site is very small, and all are within a fairly small geographical area in the same region, so it would seem strange to have a PDC for each of the sites. If it were possible to have an ADC for each site, would the clients use the ADC, or just use the PDC if it's available to them?
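    For what it's worth, once an additional DC exists at a site (and a subnet is defined for that site in AD Sites and Services), clients normally locate and prefer a DC in their own site automatically. A quick check from a client (a sketch, using the domain name from the question):

        nltest /dsgetdc:intranet.example.com

    shows which DC the machine is actually talking to; if the local one is down, the locator falls back to a DC at another site over the WAN.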

    Read the article
