Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to update when the emperor detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server (knocking dozens of sites out at once for a few seconds). Is there a way to force uWSGI to just reload one of its sites?
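    One workaround that is often suggested for this situation is to move the broken vassal's ini file out of the emperor's watch directory and back again, so the emperor destroys and re-spawns only that one application. A minimal sketch (the paths below are examples, not taken from the question):

      # force the emperor to drop and re-create a single vassal
      mv /etc/uwsgi/vassals/site.ini /tmp/
      sleep 3
      mv /tmp/site.ini /etc/uwsgi/vassals/

    Touching the file afterwards should behave normally again, since the emperor treats the re-appearing config as a brand new vassal.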

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

      User-Agent: *
      Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it?

    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages.

    Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that, and how to make sure that none of the disallowed pages will show up in their search results.

    PS. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing whether other search engines do this too, and whether the same solutions work for them.
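    One widely documented explanation fits this exactly: robots.txt only blocks crawling, not indexing, so Google can still list a disallowed URL based on links pointing at it. To keep a URL out of the index entirely it has to carry a noindex signal that Googlebot is allowed to fetch, for example an X-Robots-Tag response header. A rough sketch, assuming the redirector is an Apache-served script whose filename starts with "email" and that mod_headers is available (the Disallow rule would then have to be relaxed so Googlebot can actually see the header):

      <FilesMatch "^email">
          Header set X-Robots-Tag "noindex, nofollow"
      </FilesMatch>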

    Read the article

  • Combine several locations with regex in nginx

    - by AlexAtNet
    I have a dynamic number of Joomla installations in subfolders of the domain. For example:

      http://site/joomla_1/
      http://site/joomla_2/
      http://site/joomla_3/
      ...

    Currently I have the following config that works:

      index index.php;
      location / {
          index index.php index.html index.htm;
      }
      location /joomla_1/ {
          try_files $uri $uri/ /joomla_1/index.php?q=$uri&$args;
      }
      location /joomla_2/ {
          try_files $uri $uri/ /joomla_2/index.php?q=$uri&$args;
      }
      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
          ...
      }

    I'm trying to combine the joomla_N rules into one:

      location ~ ^/(joomla_[^/]+)/ {
          try_files $uri $uri/ /$1/index.php?q=$uri&$args;
      }

    but the server starts to return index.php as is (it does not call php-fpm). It looks like nginx stops processing the regex rules after the first match. Is there any way to combine these rules with a regex?
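    The symptom matches how nginx handles regex locations: they are checked in the order they appear and the first match wins, so after try_files rewrites the request to /joomla_N/index.php, the combined joomla_N location matches again before the \.php$ block ever gets a chance, and the file is served as static content. A sketch of the usual fix, which is simply to declare the PHP location before the combined one (fastcgi details elided, as in the question):

      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php5-fpm/joomla.sock;
          # ... existing fastcgi settings ...
      }

      location ~ ^/(joomla_[^/]+)/ {
          try_files $uri $uri/ /$1/index.php?q=$uri&$args;
      }

    With that order, a request for /joomla_2/some/page first falls through to the joomla location, gets rewritten to /joomla_2/index.php, and on the internal redirect the \.php$ location now wins and hands it to php-fpm.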

    Read the article

  • Summary of usage policies for website integration of various social media networks?

    - by Dallas
    To cut to the chase... I look at Twitter's usage policy and see limitations on what can and can't be done with their logo. I also see examples of websites that use icons that have been integrated with the look and feel of their own site. Given Twitter's policy, for example, it would appear that legal conversations/agreements would need to take place to do this, especially on a commercial site. I believe it is perfectly acceptable to have a plain text button that simply has the word "Tweet" on it, that has the same functionality. My question is if anyone can provide online (or other) references that attempt to summarize what can and can't be done when integrating various social networks into your own work? The answer I will mark as the correct one will be the one which provides the best resource(s) giving the best summaries of what can and can't be done with specific logos/icons, with a secondary factor being that a variety of social networking sites are addressed in your answer. Before people point to specific questions, I am looking for a well-rounded approach that considers a breadth of networks and considerations. Background: I would like to incorporate social media icons and functionality, but would like to consider what type of modifications can be done without needing to involve lawyers. For example, can I bring in a standard Facebook logo, but incorporate my site color into the logo? Would the answer differ if I maintained their color, but add in a few pixels of another color to transition? I am not saying I want to do this, but rather using it as an example.

    Read the article

  • How do I troubleshoot a "page cannot be found" error when configuring IIS6 on Windows Server 2003?

    - by Vinicius Ottoni
    I have configured IIS6 on my Windows Server 2003 machine following this guide: http://www.simongibson.com/intranet/iis6/ After that I created a new web site inside the Web Sites directory. Inside the physical path I created an index.htm that contains:

      <html>
      <body>Test</body>
      </html>

    But I get the following error: "The page cannot be found". When I put the same index file inside the Default Web Site's physical path, it works. I configured the new web site with the guide above using the IP configuration and without a Host Header. What should I do to troubleshoot this, or is there an obvious configuration error?

    Read the article

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as a reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user agent (mobile or not). I've tried something similar to this:

      # mobile users
      if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') {
          set $iphone_request '1';
      }
      if ($iphone_request = '1') {
          proxy_cache mobile;
      }
      if ($iphone_request = '') {
          proxy_cache site;
      }
      proxy_cache_key "$scheme://$host$request_uri";
      proxy_pass http://real-site.tld;

    However, nginx gives an error stating that proxy_cache can't be used inside an if block. Is there another way to serve from a different cache depending on the browser? Thanks, Tuinslak
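    Since proxy_cache cannot be switched inside an if block, a common alternative is to keep one cache zone and make the device class part of the cache key instead, using a map. A sketch (the zone and variable names are illustrative):

      map $http_user_agent $device_class {
          default                                                          desktop;
          ~*(iPhone|iPod|mobile|Android|BlackBerry|IEMobile|DoCoMo|Blazer) mobile;
      }

      server {
          location / {
              proxy_cache      site;
              proxy_cache_key  "$device_class$scheme://$host$request_uri";
              proxy_pass       http://real-site.tld;
          }
      }

    The map block lives at the http level; mobile and desktop variants then end up as separate entries in the same zone.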

    Read the article

  • .htaccess redirect not preserving http_referer header

    - by CodeToaster
    We're merging with another company, and we want to redirect content from their (Apache) website to our (IIS) site. When traffic arrives at our site, we inspect the HTTP_REFERER, and if the visitor was just redirected from the company's site that we just merged with, they'll be presented with a "splash" page announcing the merger. I've added the line... Redirect / http://www.oursite.com/ ...to their .htaccess, which works fine, except that when the browser is redirected it doesn't send the HTTP_REFERER header. I've tried redirecting with redirect codes 301, 302 and 307 (the default, I believe, is 302) and all have the same effect (redirects fine, but no HTTP_REFERER). Can anyone provide some insight into why HTTP_REFERER wouldn't be included?
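    Browsers are not obliged to carry a Referer header across a server-side redirect, and when they do send one it is usually the page that linked to the old site, not the old site itself, so detecting the merger via HTTP_REFERER will stay unreliable. A more robust sketch is to tag the redirect itself with a query parameter the IIS side can look for; mod_rewrite is assumed and the from=oldsite parameter name is made up:

      RewriteEngine On
      # Forward everything to the new site and mark it as coming from the old one
      RewriteRule ^(.*)$ http://www.oursite.com/$1?from=oldsite [R=301,QSA,L]

    The splash page can then trigger on the from=oldsite parameter instead of the Referer header.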

    Read the article

  • Can too many 301 redirects cause a DNS error?

    - by Graham
    For a site (http://imageocd.com) that I just set up, I initially spelled the category "automobiles" as "autimobiles"... I know it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers. So, I set up over 10,000 301 URL redirects to fix the spelling of "automobiles". I just checked my Google Webmasters report and got this error:

      http://www.imageocd.com/: Googlebot can't access your site (Sep 7, 2012)
      Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%.

    Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on? Also, I have "canonical" set up for every URL. Could this be part of the error? Thanks.
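    A DNS lookup happens before any HTTP request is made, so the 301 rules never come into play at that stage; the usual advice is to verify the name servers directly rather than suspect the redirects. A quick check from any shell with dig installed:

      dig imageocd.com +short        # does the domain currently resolve?
      dig imageocd.com NS +short     # which name servers are authoritative?

    If those answers come back slowly or inconsistently, the problem sits with the DNS hosting (often the registrar's name servers), not with the redirect count.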

    Read the article

  • FTP connects but files aren't visible when browsing

    - by YsoL8
    Hello. If this should be on that other site, please don't shoot me, as I can't remember its name or URL. I have an FTP account in Dreamweaver that connects to the remote site and appears to be uploading files as normal. But when I browse to the location I can't see any new files or changes to the index page (I've uploaded index.php and connect.php); I'm getting a 404 page. I suspect the host directory is wrong, but looking at the file tree, I can't see the folder I'm supposed to be using, so I'm uploading to the apparent site root. Any guidance on this?

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are the sysadmins' thoughts on mitigating the 'firesheep' attack for the servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites, and it is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and screws with site-indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require all of the site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on Stack Overflow; however, it would be interesting to see what the systems engineers think.
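    The standard server-side mitigation is the full-site encryption already discussed, combined with flagging session cookies so they are never sent over plain HTTP. A hedged Apache sketch (hostname invented, mod_headers assumed); SSL offload on the firewalls works with the same pattern as long as the Secure flag and HSTS header are still added somewhere in the response path:

      <VirtualHost *:80>
          ServerName www.example.com
          # push everything to HTTPS so cookies never travel in the clear
          Redirect permanent / https://www.example.com/
      </VirtualHost>

      <VirtualHost *:443>
          ServerName www.example.com
          # returning browsers stick to HTTPS for 6 months
          Header always set Strict-Transport-Security "max-age=15768000"
          # mark application cookies Secure/HttpOnly at the edge
          Header edit Set-Cookie ^(.*)$ "$1; Secure; HttpOnly"
      </VirtualHost>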

    Read the article

  • Hide directory contents from showing when accessing the URL directly

    - by SoLoGHoST
    On my site, if you browse to http://example.com/images/ the contents of the entire directory are shown in an automatic directory listing. How can I make it so that this doesn't happen when people browse directly to http://example.com/images/? Can I create an .htaccess file in that directory, or is there a better way? I really don't want people being able to do this for any directory on the site. What can I do to prevent this? I figure it's either something that has to be done in Apache or a global .htaccess file placed in the public_html folder, perhaps? EDIT: I worked around this with an index.php file, but I still feel that security is an issue here; how can I fix this permanently?
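    The simplest fix on Apache is to switch off automatic index generation, either per directory or once for the whole account; a one-line .htaccess placed in public_html covers every sub-directory beneath it:

      # disable auto-generated directory listings
      Options -Indexes

    Requests for a directory with no index file then return 403 instead of a file listing, and the index.php workaround is no longer needed. (This assumes the host allows Options overrides in .htaccess; otherwise the same directive goes in the vhost's <Directory> block.)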

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup: Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Also, the 3 websites are practically identical except that they have 3 unique color schemes, and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
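    Geographic detection is usually done from the visitor's IP address rather than the browser language, for example with a GeoIP lookup at the web server. A rough sketch, assuming Apache with mod_geoip and mod_rewrite enabled (placed in the vhost config):

      GeoIPEnable On

      RewriteEngine On
      # send visitors hitting the bare root to their regional sub-directory
      RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^MX$
      RewriteRule ^/?$ /es-MX/ [L,R=302]
      RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^DO$
      RewriteRule ^/?$ /es-DO/ [L,R=302]
      # everyone else gets the English version
      RewriteRule ^/?$ /en-US/ [L,R=302]

    For the duplicate-content worry, rel="alternate" hreflang annotations are the usual way to tell Google that the es-MX and es-DO pages are regional variants of each other rather than spam.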

    Read the article

  • JSR Updates

    - by heathervc
    JSR 349, Bean Validation 1.1, has published a Public Review. The review closes on 12 November. JSR 331, Constraint Programming API, has published a Maintenance Release. JSR 335, Lambda Expressions for the Java Programming Language, has moved to JCP 2.8!  Check out their java.net project. JSR 107, JCACHE - Java Temporary Caching API, has posted their Early Draft Release.  The review closes on 22 November.

    Read the article

  • Sharepoint 2010 can't find domain users when granting permissions

    - by quani
    I'm trying to grant permissions to other people to view a SharePoint site but when granting permissions it uses "Check Names" and claims any user or group that is part of a domain does not exist. It does this if I try granting permissions to the team site or in central admin BUT if I try to add someone to Farm Administrators in Central admin then all of the sudden it can find all domain users. Why is it finding domain users in that one context but not others? It is supposed to be using NTLM authentication and has Windows configured as the authentication provider (And IIS is configured to use NTLM). What's even more strange is I enabled Anonymous Access for the team site which I thought would allow anyone to view it but others say they can't access it.

    Read the article

  • Remote Desktop Problem on Windows Server 2008 R2

    - by lukiffer
    Revised this question to be more concise, consolidating several revisions.

    Symptoms, from a domain-member Windows 7 client:
      - Domain credentials to a domain controller = success
      - Domain credentials to a member server (by hostname or FQDN) = success
      - Domain credentials to a member server (by IP) = fail
      - Local credentials to a member server (by either) = success

    From a non-domain-member Windows 7 client:
      - Domain credentials to a domain controller = success
      - Domain credentials to a member server = fail
      - Local credentials to a member server = success
      - (Identical behavior from a Mac RDC 2.1 client)

    Server configuration details:
      - Windows 2008 R2 Datacenter w/ SP1
      - The domain in question is a subdomain of a Windows 2008 domain (forest root). The root has DCs in both Site A and Site B; the subdomain only has DCs in Site B.
      - RDP is operating normally on all root member servers and DCs.
      - No remote desktop settings are defined by GPOs.
      - Network level authentication is enabled; all clients are compatible and the certificate exchange/SSL handshake completes successfully.
      - Not catching any errors in the netlogon log.

    Read the article

  • Using a PowerShell Script to delete old files for SQL Server

    Many clients are using custom stored procedures or third party tools to backup databases in production environments instead of using database maintenance plans. One of the things that you need to do is to maintain the number of backup files that exist on disk, so you don't run out of disk space. There are several techniques for deleting old files, but in this tip I show how this can be done using PowerShell.
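    The tip's core idea can be sketched in a few lines of PowerShell; the path, extension and retention window below are placeholders, not values from the article:

      # delete .bak files older than 7 days from a backup folder
      $backupPath = "D:\Backups"
      $cutoff     = (Get-Date).AddDays(-7)
      Get-ChildItem -Path $backupPath -Filter *.bak -Recurse |
          Where-Object { $_.LastWriteTime -lt $cutoff } |
          Remove-Item -WhatIf   # drop -WhatIf once the dry run looks right

    Scheduled through SQL Server Agent or Task Scheduler, this keeps the backup folder from filling the disk between full maintenance cycles.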

    Read the article

  • Where can I find ad networks with single-line ads?

    - by MaX
    I've developed a site that serves pure HTML weather widgets (and they are great looking too). Just after two months I am generating 1.25K hits monthly (Google Analytics). Now I want to generate some money out of it; you can check the service out here. I am looking for an affiliate or ad service that I can hook up with, but there is a twist in the story: I want a single-line text ad in a particular location, otherwise the widgets will look rubbish (there's a snapshot in the original post). Plus I have some unique places on my site to place some banner ads as well. Here is the set of services I've already tried:

      - AdSense: doesn't allow or have such formats.
      - Peefly: provides you with straight links and works best, but I recorded some clicks (through Google Events) that they didn't show me, plus it introduces the overhead of manually going and choosing your links.
      - BidVertise: totally rubbish; opens popups and whatnot, and makes the site look like spam.

    I am new to this ad stuff so I have limited knowledge. Suggestions please? I have one more place in Forecast but I want to start simple. P.S. I also have a MetroUI-like widget coming in the pipeline, but it's not ready yet.

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301, 302 or 307 redirects should be used when serving up pages for requests for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is with this system possibly and inadvertently making some pages unavailable to some users.

    Questions: If the client moves a page from location/URL A to a new location B, users get the redirect for A to B, and then the client moves the page back to A again, how long can I expect any of those users to keep getting their requests for A redirected to B (in this case sending them to my friendly 404 page)? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?

    Read the article

  • Is rsync corrupting my RAR?

    - by Mark Henderson
    We have two qnap devices - one in our datacentre and one off-site. We have hundreds of password protected RAR files stored on the qnap that contain virtual machine image snapshots, with approx 20 of them being created each day. We synchronise the two devices using rsync, and it looks like all the files are being rsynced OK - they come over and have the same file size and all the files are present and accounted for. However, when I try to open the RAR files on the remote site, I get Cannot open \\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar I can open the RAR files on the local site just fine, so I assume that something is getting mangled during the rsync procedure. However, the older files (pre 2011-06-20) work just fine, it's something that's only started happening in the last week. There haven't been (as far as I know) any changes to any of the devices, setup or configuration in that time. Obviously something has changed though. Where should I start investigating?
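    Two quick checks narrow this down (paths and host names below are examples, not taken from the question): compare checksums of the same archive on both devices, and re-run the transfer with whole-file checksum verification so rsync re-sends anything whose content differs even when size and timestamp match:

      # run on each qnap against the same file and compare the output
      md5sum DB001SQL1-20110626.rar

      # re-sync with per-file checksums instead of the size/mtime quick check
      rsync -av --checksum /share/Snapshots/ qnap02:/share/FromDatacentre/Snapshots/

    If the checksums still differ after that, the corruption is happening outside rsync (for example on the source before the sync runs), which would also explain why only files from the last week are affected.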

    Read the article

  • Replicating A Volume Of Large Data via Transactional Replication

    During weekend maintenance, members of the support team executed an UPDATE statement against the database on the OLTP Server. This database was a part of Transactional Replication, and once the UPDATE statement was executed the Replication procedure came to a halt with an error message. Satnam Singh decided to work on this case and try to find an efficient solution to rebuild the procedure without significant downtime.

    Read the article

  • News Applications internal working [on hold]

    - by Vijay
    How do news applications work, other than RSS-feed-based applications? I know some of them take the RSS content from the source site, but sometimes I see those applications show a title, description, date, image, video, etc., even though when I look at the original site's RSS the image and video are not in the feed. So how does one get that to show in their applications? Some applications even show feeds from magazine sites and newspaper sites. How do these applications work? I am creating an application which will link to different news sites' feeds, categorized (like top news, technology, games, articles, etc.). On the front page it will show the website names, then on selection of any news site it will get the feed from that website and show it to the user. So I would like to know: should all the fetching of data be done on user selection, or should data be prefetched? I also want to fetch detailed information from the original site, like that provided in the RSS data. How should I go about it?

    Read the article

  • Linux foxboard network monitor

    - by het.oosten
    I want to use a Foxboard as a simple network monitor for multiple routers (all routers are connected to the internet). The Foxboard is a mini PC with an embedded version of Debian. My idea is to use multiple virtual network devices like this:

      eth0    192.168.2.10
      eth0:1  192.168.3.10
      eth0:2  192.168.4.10

    I found a nice Python script to ping an external host (the solution from Ryan Cox): http://stackoverflow.com/questions/316866/ping-a-site-in-python Is it possible to configure Debian to use eth0 when I ping www.site-a.com and eth0:1 when I ping www.site-b.com?
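    One approach, sketched here with iputils ping, is to bind each probe's source address to the matching alias; note that this alone does not pick the outgoing gateway, so if each router must also be the next hop, policy routing (ip rule with per-source routing tables) is needed on top:

      # source each probe from a different alias address
      ping -c 3 -I 192.168.2.10 www.site-a.com
      ping -c 3 -I 192.168.3.10 www.site-b.com

    If the linked Python script shells out to ping, the same -I flag can be threaded through per destination.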

    Read the article

  • How to edit ASPX, CSHTML and other kinds of files live on an FTP server?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/06/27/how-to-edit-aspx-cshtml-and-other-kind-of-files.aspx

    Many times we just want to make a small change on a site and we don't want to download the whole project again. In this post I will show you some good ways to do it.

    People who have Expression Web 4 can do it. I tried it and it works well with ASPX files. If your site is in ASP.NET and uses the ASPX engine, this is a good option. Expression Web is now free (previously paid software). Another good option is Komodo Edit. You can use Komodo Edit and a few plugins to make FTP editing work for you.

    The problem with these two apps is that they don't have syntax highlighting and support for CSHTML files, which were introduced with MVC 3. For that I suggest you go with WebMatrix. You can use WebMatrix to edit CSHTML files online. Remember that WebMatrix doesn't support compiling MVC projects; you need at least Visual Web Developer Express to compile your project.

    If you are in a hurry, try https://c9.io/: put in your FTP settings and you are ready to make changes on the live site. If you have anything else in mind, share it here.

    Read the article

  • Long domain lookup on .dev domain inside vmware

    - by skelle
    I'm developing on my MacBook and normally I have a locally running web server which just works fine. Now I have to use a VMware image where the web server is running. I set up everything and my dev site is running under site.dev inside VMware. I can connect to the web server but EVERY request takes a very long time. I already read that this is related to IPv6 and the way OS X handles /etc/hosts. I added "192.168.155.42 site.dev" to /etc/hosts and I already tried the fix from "Resolving to virtual host very slow on Mac OS X Lion", but every lookup still takes ~30 seconds. What can I do to fix this issue?

    Read the article
