Search Results

Search found 1969 results on 79 pages for '404'.

Page 51/79 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Django: TypeError: 'str' object is not callable, referer: http://xxx

    - by user705415
    I've been wondering why, when I set DEBUG = False in the settings.py of my Django project 'arvindemo' and deploy it on Apache with mod_wsgi, I get a 500 Internal Server Error.

    Environment: Django 1.4.0, Python 2.7.2, mod_wsgi 2.8, CentOS.

    Here is the recap: I visit the homepage, go to sub page A/B/C/D, fill in some forms, then submit them to the Apache server. As soon as I click the 'submit' button I get the '500 Internal Server Error', and the error_log shows the traceback below:

        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] Traceback (most recent call last):
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] response = self.get_response(request)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/base.py", line 179, in get_response
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/python2.7/lib/python2.7/site-packages/django/core/handlers/base.py", line 224, in handle_uncaught_exception
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] if resolver.urlconf_module is None:
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/python2.7/lib/python2.7/site-packages/django/core/urlresolvers.py", line 323, in urlconf_module
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] self._urlconf_module = import_module(self.urlconf_name)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/python2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] __import__(name)
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] File "/opt/web/django/arvindemo/arvindemo/../arvindemo/urls.py", line 23, in <module>
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] url(r'^submitPage$', name=submitPage),
        [Tue Apr 10 10:07:20 2012] [error] [client 122.198.133.250] TypeError: url() takes at least 2 arguments (2 given)

    When using the Django runserver with arvindemo.settings DEBUG = True, everything is OK. Things only change once I set DEBUG = False.

    Here is my views.py:

        from django.http import HttpResponseRedirect
        from django.http import HttpResponse, HttpResponseServerError
        from django.shortcuts import render_to_response
        import datetime, string
        from user_info.models import *
        from django.template import Context, loader, RequestContext
        import settings

        def hello(request):
            return HttpResponse("hello girl")

        def helpPage(request):
            return render_to_response('kktHelp.html')

        def server_error(request, template_name='500.html'):
            return render_to_response(template_name, context_instance=RequestContext(request))

        def page404(request):
            return render_to_response('404.html')

        def submitPage(request):
            post = request.POST
            Mall = 'goodsName'
            Contest = 'ojs'
            Presentation = 'addr'
            WeatherReport = 'city'
            Habit = 'task'
            if Mall in post:
                return submitMall(request)
            elif Contest in post:
                return submitContest(request)
            elif Presentation in post:
                return submitPresentation(request)
            elif Habit in post:
                return submitHabit(request)
            elif WeatherReport in post:
                return submitWeather(request)
            else:
                return HttpResponse(request.POST)
            return HttpResponseRedirect('404')

        def submitXXX():
            .....

        def xxxx():
            ....

    Here comes the urls.py:

        from django.conf.urls import patterns, include, url
        from views import *
        from django.conf import settings

        handler500 = 'server_error'

        urlpatterns = patterns('',
            url(r'^hello/$', hello),  # hello world
            url(r'^$', homePage),
            url(r'^time/$', getTime),
            url(r'^time/plus/(\d{1,2})/$', hoursAhead),
            url(r'^Ttime/$', templateGetTime),
            url(r'^Mall$', templateMall),
            url(r'^Contest$', templateContest),
            url(r'^Presentation$', templatePresentation),
            url(r'^Habit$', templateHabit),
            url(r'^Weather$', templateWeather),
            url(r'^Help$', helpPage),
            url(r'^404$', page404),
            url(r'^500$', server_error),
            url(r'^submitPage$', submitPage),
            url(r'^submitMall$', submitMall),
            url(r'^submitContest$', submitContest),
            url(r'^submitPresentation$', submitPresentation),
            url(r'^submitHabit$', submitHabit),
            url(r'^submitWeather$', submitWeather),
            url(r'^terms$', terms),
            url(r'^privacy$', privacy),
            url(r'^thanks$', thanks),
            url(r'^about$', about),
            url(r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATICFILES_DIRS}),
        )

    I'm sure there is no syntax error in my Django project, because when I use the Django runserver everything is fine. Can anyone help? Best regards
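    The traceback itself points at the problem: line 23 of urls.py passes url() a pattern and a name keyword but no view, so importing the URLconf fails under mod_wsgi. A minimal sketch of the corrected entry (using the submitPage view already defined in views.py; the name string is illustrative) would be:

        # url() needs at least a pattern and a view callable;
        # name= must be a string, not the view itself.
        url(r'^submitPage$', submitPage, name='submitPage'),

    As a side note, handler500 is usually given as a full dotted path (e.g. 'arvindemo.views.server_error' here) rather than a bare 'server_error', otherwise the custom 500 handler itself may not resolve once DEBUG is off.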

    Read the article

  • Getting 500 Error when trying to access Rails application through Apache2

    - by cojones
    Hey, I'm using Apache2 as proxy and mongrel_cluster as server for my Rails applications. When I try to access it by typing in the url I get a 500 "Internal Server Error" but when try to locally access the website with "lynx http://localhost:8200" it works. This is my config: <Proxy balancer://sportfreundewitold_cluster> BalancerMember http://127.0.0.1:8200 BalancerMember http://127.0.0.1:8201 </Proxy> # httpd [example.org] dmn entry BEGIN. <VirtualHost x.x.x.x:80> <IfModule suexec_module> SuexecUserGroup vu2025 vu2025 </IfModule> ServerAdmin [email protected] DocumentRoot /var/www/virtual/example.org/htdocs/current/public ServerName example.org ServerAlias www.example.org example.org *.example.org vu2025.admin.roughneck-media.de Alias /errors /var/www/virtual/example.org/errors/ RedirectMatch permanent ^/ftp[\/]?$ http://admin.roughneck-media.de/ftp/ RedirectMatch permanent ^/pma[\/]?$ http://admin.roughneck-media.de/pma/ RedirectMatch permanent ^/webmail[\/]?$ http://admin.roughneck-media.de/webmail/ RedirectMatch permanent ^/ispcp[\/]?$ http://admin.roughneck-media.de/ ErrorDocument 401 /errors/401.html ErrorDocument 403 /errors/403.html ErrorDocument 404 /errors/404.html ErrorDocument 500 /errors/500.html ErrorDocument 503 /errors/503.html <IfModule mod_cband.c> CBandUser example.org </IfModule> # httpd awstats support BEGIN. # httpd awstats support END. # httpd dmn entry cgi support BEGIN. ScriptAlias /cgi-bin/ /var/www/virtual/example.org/cgi-bin/ <Directory /var/www/virtual/example.org/cgi-bin> AllowOverride AuthConfig #Options ExecCGI Order allow,deny Allow from all </Directory> # httpd dmn entry cgi support END. <Directory /var/www/virtual/example.org/htdocs/current/public> # httpd dmn entry PHP support BEGIN. # httpd dmn entry PHP support END. Options -Indexes Includes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> # httpd dmn entry PHP2 support BEGIN. <IfModule mod_php5.c> php_admin_value open_basedir "/var/www/virtual/example.org/:/var/www/virtual/example.org/phptmp/:/usr/share/php/" php_admin_value upload_tmp_dir "/var/www/virtual/example.org/phptmp/" php_admin_value session.save_path "/var/www/virtual/example.org/phptmp/" php_admin_value sendmail_path '/usr/sbin/sendmail -f vu2025 -t -i' </IfModule> <IfModule mod_fastcgi.c> ScriptAlias /php5/ /var/www/fcgi/example.org/ <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI -MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> <IfModule mod_fcgid.c> Include /etc/apache2/mods-available/fcgid_ispcp.conf <Directory /var/www/virtual/example.org/htdocs> FCGIWrapper /var/www/fcgi/example.org/php5-fcgi-starter .php Options +ExecCGI </Directory> <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> # httpd dmn entry PHP2 support END. Include /etc/apache2/ispcp/example.org.conf RewriteEngine On # Make sure people go to www.myapp.com, not myapp.com RewriteCond %{HTTP_HOST} ^myapp\.com$ [NC] RewriteRule ^(.*)$ http://www.myapp.com$1 [R=301,L] # Yes, I've read no-www.com, but my site already has much Google-Fu on # www.blah.com. Feel free to comment this out. 
# Uncomment for rewrite debugging #RewriteLog logs/myapp_rewrite_log #RewriteLogLevel 9 # Check for maintenance file and redirect all requests RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f RewriteCond %{SCRIPT_FILENAME} !maintenance.html RewriteRule ^.*$ /system/maintenance.html [L] # Rewrite index to check for static RewriteRule ^/$ /index.html [QSA] # Rewrite to check for Rails cached page RewriteRule ^([^.]+)$ $1.html [QSA] # Redirect all non-static requests to cluster RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L] # Deflate AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \\bMSIE !no-gzip !gzip-only-text/html # Uncomment for deflate debugging #DeflateFilterNote Input input_info #DeflateFilterNote Output output_info #DeflateFilterNote Ratio ratio_info #LogFormat '"%r" %{output_info}n/%{input_info}n (%{ratio_info}n%%)' deflate #CustomLog logs/myapp_deflate_log deflate </VirtualHost> # httpd [example.org] dmn entry END. Does anyone know what could be wrong with it?
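    One detail worth checking in a config like this: the <Proxy> block defines balancer://sportfreundewitold_cluster, but the final proxy rule references balancer://mongrel_cluster. If those names don't match, Apache has no members for the balancer it is told to use and the request ends in a 500 even though the mongrels answer locally. A sketch of the two pieces aligned (names taken from the posted config) would be:

        <Proxy balancer://sportfreundewitold_cluster>
            BalancerMember http://127.0.0.1:8200
            BalancerMember http://127.0.0.1:8201
        </Proxy>

        # ... and the catch-all rule must point at the same balancer name:
        RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ balancer://sportfreundewitold_cluster%{REQUEST_URI} [P,QSA,L]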

    Read the article

  • jqGrid multiplesearch - How do I add and/or column for each row?

    - by jimasp
    I have a jqGrid multiple search dialog as below: (Note the first empty td in each row). What I want is a search grid like this (below), where: The first td is filled in with "and/or" accordingly. The corresponding search filters are also built that way. The empty td is there by default, so I assume that this is a standard jqGrid feature that I can turn on, but I can't seem to find it in the docs. My code looks like this: <script type="text/javascript"> $(document).ready(function () { var lastSel; var pageSize = 10; // the initial pageSize $("#grid").jqGrid({ url: '/Ajax/GetJqGridActivities', editurl: '/Ajax/UpdateJqGridActivities', datatype: "json", colNames: [ 'Id', 'Description', 'Progress', 'Actual Start', 'Actual End', 'Status', 'Scheduled Start', 'Scheduled End' ], colModel: [ { name: 'Id', index: 'Id', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Description', index: 'Description', searchoptions: { sopt: ['eq', 'ne']} }, { name: 'Progress', index: 'Progress', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualStart', index: 'ActualStart', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ActualEnd', index: 'ActualEnd', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'Status', index: 'Status.Name', editable: true, searchoptions: { sopt: ['eq', 'ne']} }, { name: 'ScheduledStart', index: 'ScheduledStart', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }, { name: 'ScheduledEnd', index: 'ScheduledEnd', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} } ], jsonReader: { root: 'rows', id: 'Id', repeatitems: false }, rowNum: pageSize, // this is the pageSize rowList: [pageSize, 50, 100], pager: '#pager', sortname: 'Id', autowidth: true, shrinkToFit: false, viewrecords: true, sortorder: "desc", caption: "JSON Example", onSelectRow: function (id) { if (id && id !== lastSel) { $('#grid').jqGrid('restoreRow', lastSel); $('#grid').jqGrid('editRow', id, true); lastSel = id; } } }); // change the options (called via searchoptions) var updateGroupOpText = function ($form) { $('select.opsel option[value="AND"]', $form[0]).text('and'); $('select.opsel option[value="OR"]', $form[0]).text('or'); $('select.opsel', $form[0]).trigger('change'); }; // $(window).bind('resize', function() { // ("#grid").setGridWidth($(window).width()); //}).trigger('resize'); // paging bar settings (inc. buttons) // and advanced search $("#grid").jqGrid('navGrid', '#pager', { edit: true, add: false, del: false }, // buttons {}, // edit option - these are ordered params! {}, // add options {}, // del options {closeOnEscape: true, multipleSearch: true, closeAfterSearch: true, onInitializeSearch: updateGroupOpText, afterRedraw: updateGroupOpText}, // search options {} ); // TODO: work out how to use global $.ajaxSetup instead $('#grid').ajaxError(function (e, jqxhr, settings, exception) { var str = "Ajax Error!\n\n"; if (jqxhr.status === 0) { str += 'Not connect.\n Verify Network.'; } else if (jqxhr.status == 404) { str += 'Requested page not found. 
[404]'; } else if (jqxhr.status == 500) { str += 'Internal Server Error [500].'; } else if (exception === 'parsererror') { str += 'Requested JSON parse failed.'; } else if (exception === 'timeout') { str += 'Time out error.'; } else if (exception === 'abort') { str += 'Ajax request aborted.'; } else { str += 'Uncaught Error.\n' + jqxhr.responseText; } alert(str); }); $("#grid").jqGrid('bindKeys'); $("#grid").jqGrid('inlineNav', "#grid"); }); </script> <table id="grid"/> <div id="pager"/>

    Read the article

  • How to share a folder using the Ubuntu One Web API

    - by Mario César
    I have successfully implemented OAuth authorization with Ubuntu One in Django; here are my views and models: https://gist.github.com/mariocesar/7102729 Right now I can use the file_storage Ubuntu One API. For example, the following asks whether a path exists, then creates the directory, and then fetches the information on the created path to prove it was created:

        >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/')
        <Response [404]>
        >>> user.oauth_access_token.put_file_storage(volume='/~/Ubuntu One', path='/Websites/', data={"kind": "directory"})
        <Response [200]>
        >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/').json()
        {u'content_path': u'/content/~/Ubuntu One/Websites',
         u'generation': 10784,
         u'generation_created': 10784,
         u'has_children': False,
         u'is_live': True,
         u'key': u'MOQgjSieTb2Wrr5ziRbNtA',
         u'kind': u'directory',
         u'parent_path': u'/~/Ubuntu One',
         u'path': u'/Websites',
         u'resource_path': u'/~/Ubuntu One/Websites',
         u'volume_path': u'/volumes/~/Ubuntu One',
         u'when_changed': u'2013-10-22T15:34:04Z',
         u'when_created': u'2013-10-22T15:34:04Z'}

    So that part works, and I'm happy about it. But I can't share a folder. My question is: how can I share a folder using the API? I found no web API for this; the Ubuntu One SyncDaemon tool is the only thing I found that addresses it: https://one.ubuntu.com/developer/files/store_files/syncdaemontool#ubuntuone.platform.tools.SyncDaemonTool.offer_share But I'm reluctant to maintain D-Bus and a daemon on my server for every Ubuntu One connection I have authorization for. Does anyone have an idea how I can use a web API to programmatically share a folder, ideally using the OAuth authorization tokens that I already have?

    Read the article

  • When the canonical page itself changes url

    - by lulalala
    This is a continuation of the question: How to handle canonical url changes like Stack Overflow. Say I have the canon url: questions/11/car <---canonically-linked-from--- questions/11/ What will happen if I want to change the canon url to questions/11/car-with-sgx Obviously, questions/11/ will point to the new canon url. But how should the old questions/11/car change to the new one? There are two ways: 301 redirect that to new canon url the old canon url canonically link to the new canon url According to this post: [By using canonical link instead of redirect,] OldPage.html’s rankings will drop over time due to fewer internal links, but the canonical tag won’t make it disappear entirely. It could theoretically remain in their index until one of the following occurs: it is redirected permanently via 301 it returns a 404 for an extended period of time (they will keep checking for a while before dropping a URL) a meta robots “noindex” tag is added If this is true, I really need to use redirect from old canon url to the new canon url, which means I need to keep a log of previous old canon urls of this content, so I know when I can redirect. This is a bit of a hassle to do.

    Read the article

  • Clean URLs issue using .htaccess in PHP project

    - by x4ph4r
    I am working on a PHP Laravel project and am currently facing issues with the .htaccess file. I have the following .htaccess:

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>

    When I reload my page it gives me the following error: "404 Not Found - The requested URL /contacts was not found on this server." Then I opened the /etc/apache2/users/username.conf file, which had the following directives:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    In the above I changed AllowOverride None to AllowOverride All. Then I reloaded the page and got: "403 Forbidden - You don't have permission to access /contacts on this server." When I add FollowSymLinks to the Options line of the .htaccess file, like this: Options -MultiViews FollowSymLinks, then sometimes I get a 500 Internal Server Error and sometimes *Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data*. Each time I reload the page I get one of these errors with the FollowSymLinks option. I also uncommented the following lines in /etc/apache2/httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        LoadModule php5_module libexec/apache2/libphp5.so

    and I am still getting the same permission-denied error. Please help; I have been trying to solve this problem for the past 3 days but it is still unresolved.
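    One likely cause of the 500 once FollowSymLinks is added: Apache rejects an Options line that mixes prefixed and unprefixed values, so "Options -MultiViews FollowSymLinks" is itself a syntax error in the .htaccess. If every value carries its own sign the directive is valid; a sketch of the usual Laravel-style file under that assumption (and with AllowOverride All as set above) would be:

        <IfModule mod_rewrite.c>
            # either all values signed, or none of them
            Options -MultiViews +FollowSymLinks
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>

    The earlier 403 is consistent with mod_rewrite needing FollowSymLinks (or SymLinksIfOwnerMatch) enabled for the directory, so keeping +FollowSymLinks here, or adding it to the <Directory> block in username.conf, is worth trying first.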

    Read the article

  • Wubi shows error

    - by Quirk
    I tried installing Ubuntu 12.04 using Wubi and it just doesn't work, every time, without fail. I ran into the following scenarios:

    1. I downloaded only wubi.exe and ran it. The Wubi installer started downloading the amd64 ISO using a torrent, but when there were just about 40 seconds left to download, it showed "error 404: File not found".

    2. I downloaded the ISO file separately and put it in the same folder as wubi.exe. There are two cases here:
       a. Offline: Wubi says it could not download the metalink file and hence cannot download the ISO. So I downloaded the metalink files separately and placed them in the same directory; Wubi shows the same error again.
       b. Online: Wubi behaves the same way as in case 1, and the same problem occurs. In short, Wubi doesn't recognize the already downloaded ISO in the directory at all.

    3. I burned the ISO to a CD and ran it. The same thing occurs as in case 2.

    Just in case it matters: I installed SP3 for Windows XP right before using Wubi. Windows is running fine, but is it possible that it's causing conflicts for Wubi?

    Read the article

  • Simple monitoring of a Raspberry Pi powered screen - Part 2

    - by Chris Houston
    If you have read my previous blog post, Raspberry Pi entrance sign backed by Umbraco - Part 1, which describes how we used a Raspberry Pi to drive an entrance sign for QV Offices, you will have seen I mentioned a follow-up post about monitoring the sign.

    As the sign is mounted in the entrance of the building on the ground floor and the reception is on the 1st floor, any fault showing on the screen was inevitably seen first by one of QV Offices' clients as they walked into the building. Although the QV Offices team were able to check the Umbraco website address that the sign uses, this did not always mean that everything was working as expected. We noticed a couple of times that the sign had WiFi issues (it is now hard wired) and this caused the Chromium browser to render a 404 error when it tried to refresh the screen.

    The simple monitoring solution

    We added the following line to our refresh script, so that after the sign had been refreshed a screenshot of the Raspberry Pi would be taken:

        import -display :0 -window root ~/screenshot.jpg

    Finally, we wrote a small crontab task that runs on a QV Offices Mac, grabs this screenshot and saves it on the desktop. Admittedly the package we used is not especially secure, but in reality this is an internal system that only runs an office sign, so we are not too concerned about it being hacked:

        */5 * * * * /usr/local/bin/sshpass -p 'password' /usr/bin/scp [email protected]:screenshot.jpg Desktop/QVScreenShot.jpg

    As the file icon updates whenever the image changes, this gives a quick visual indication of the status of the sign; if for some reason the icon does not look correct, the QV Offices administrator can just click on the file to see the exact image currently displayed on the sign. Sometimes a quick and easy solution is better than a more complex and expensive one.

    Read the article

  • Showing content from pages at different URL's (masking), possibly with .htaccess

    - by zigojacko
    If I have URL's like:- domain.com/category/widgets/filter/blue domain.com/category/widgets/filter/red And it is pretty difficult to reconstruct them to something like:- domain.com/category/blue-widgets domain.com/category/red-widgets Is there any way at all that I can use URL rewrites or anything else with .htaccess or on the server to display the URL's as the domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URL's but got nowhere and this has been something bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: This is a Magento website and the above process, I am wanting to implement for potentially hundreds of URL's. Edit To respond to @kkugelmann's answer:- I couldn't get your proposed RewriteRule to make a difference at all in the .htaccess file so I started testing a few things in this .htaccess tester:- The proposed RewriteRule didn't work in this tester:- However, the following did:- But adding any of these RewriteRule's into the website's .htaccess file did not rewrite the URL at all... Edit2 By the way, if I add [R=301,L] to the end of the URL rewrite rule, it does actually then rewrite the rule, but of course 301 redirects it as well which is unwanted behaviour. Edit3 I found another question with the same issue... And an accepted answer that solved the problem which seemed to be something to do with using mod_proxy and the [P] tag on the rule (if I try this, the page 404's).

    Read the article

  • Wordpress .htaccess preventing subfolder access

    - by John K.
    This is sort of a goofy setup, but it's not in my power to reconfigure it at this time. I'm running in a shared hosting environment. The domain is example.com. This is an add-on domain on the host side with example.com being redirected to the www/example.com sub-directory. That directory houses a standard Wordpress site which acts as the main site when you visit example.com. The .htaccess file within that directory is: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^wp-admin/profile\.php$ /ssm/welcome [R] </IfModule> I have a subdirectory, at the root level with the /example.com subdirectory that houses a cake php application. That subdirectory is /tracker. My problem is that when I attempt to browse to example.com/tracker, I get a 404 from Wordpress because perma links are on. What I think I need is a rewrite rule in the Wordpress .htaccess file that short circuits the existing rewrite rules and permits example.com/tracker to work independently of the Wordpress install. Or a rewrite rule at the root level that short circuits the redirect to the /example.com directory in the first place. Not sure how well I explained that so here's a summary. The www/ directory structure: example.com/ tracker/ Add on domain of www.example.com redirecting to the /example.com directory with Wordpress and a tracker/ directory running CakePHP which I would like to access via www.example.com/tracker. If you need further info or clarification let me know!

    Read the article

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and "x.php" no longer exists on my site. As happens with major public vulnerabilities, script kiddies are running tools that hit my site looking for "x.php" throughout the entire structure of the site - constantly, 24/7. This is wasted bandwidth, traffic and load that I don't need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks!

    EDIT: I should add that, as part of hardening my site, I've started using ZBBlock: "This php security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It then will send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, then they will be served up a permanently recurring 503 OVERLOAD message with a 24 hour timeout." But ZBBlock doesn't do quite exactly what I want, though it does help with other spam/script/hack blocking.
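    If the aim is an automatic, time-limited ban rather than just a 403, one common approach is to let a log-watching tool such as fail2ban do it at the firewall level instead of inside PHP. A rough sketch (the filter name and log path here are illustrative, not taken from the original setup):

        # /etc/fail2ban/filter.d/apache-xphp.conf  -- hypothetical filter name
        [Definition]
        failregex = ^<HOST> .* "(GET|POST) [^"]*x\.php
        ignoreregex =

        # /etc/fail2ban/jail.local
        [apache-xphp]
        enabled  = true
        filter   = apache-xphp
        logpath  = /var/log/apache2/access.log
        maxretry = 1
        # ban duration is in seconds (one day)
        bantime  = 86400

    Any client whose access-log line matches the filter gets its IP banned for the configured bantime, which keeps PHP (and the 404 handler) out of the picture entirely.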

    Read the article

  • Fake links cause crawl error in Google Webmaster Tools

    - by Itai
    Google reported Crawl Errors last week on my largest site through Webmaster Tools. Here is the message: "Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages." The Crawl Errors list is now full of hundreds of fake links like these, causing 16,519 errors so far. Note that my site does not even have a search.html and is not related to any of the terms shown in the above image. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days. No crawl errors were being reported previously.

    Read the article

  • How long does it take to delete a Google Apps for Domain account and allow recreation?

    - by Wil
    Basically, I set up a Google Apps for domain account for one of my clients but upon trying to upload a CSV of users, only half were created and then I kept getting time outs, 404's and other errors which I never saw when setting up another client. I was not sure how, but thought I may have caused an error or there was an error Google Side when the account was created so I thought it may be best to delete the domain and start from scratch. Little did I know, you had to wait to recreate the domain! As far as I can tell from what I have read, there is five days to delete and allow recreation of similar user names, however, I can't see anything that shows how long you have to wait for the domain account. It has now been 5/6 days and I can't recreate it. The cancellation email I got shows "...If at any time you would like to sign-up for Google Apps for this or another domain, you can do so by visiting..." But it does not mention how long!!! I tried emailing Google directly but got no response. Does anyone know?

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on ubuntu 12.04 server. attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. if I place the test.php in a subdirectory, i.e. /var/www/test/test.php it executes. root.conf; root /var/www; include php-fpm.conf; location ~ /\. { access_log off; log_not_found off; deny all; } php-fpm.conf; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.socket; include fastcgi_params; } fastcgi_params; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_index index.php; fastcgi_param HTTPS on; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • configuration issue with respect to .htaccess file on ubuntu

    - by Registered User
    I am building an application tshirtshop I have following configuration in /etc/apache2/sites-enabled/tshirtshop <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/tshirtshop <Directory /var/www/tshirtshop> Options Indexes FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> and following in .htaccess file in location /var/www/tshirtshop/.htaccess <IfModule mod_rewrite.c> # Enable mod_rewrite RewriteEngine On # Specify the folder in which the application resides. # Use / if the application is in the root. RewriteBase /tshirtshop #RewriteBase / # Rewrite to correct domain to avoid canonicalization problems # RewriteCond %{HTTP_HOST} !^www\.example\.com # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L] # Rewrite URLs ending in /index.php or /index.html to / RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L] # Rewrite category pages RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L] RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L] # Rewrite department pages RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L] RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L] # Rewrite subpages of the home page RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L] # Rewrite product details pages RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L] </IfModule> the site is working on localhost and is working as if there is no .htaccess rule specified i.e. if I were to view a page as http://localhost/tshirtshop/nature-d2 then I get a 404 Error but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. sudo apache2ctl -M Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) deflate_module (shared) dir_module (shared) env_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) reqtimeout_module (shared) rewrite_module (shared) setenvif_module (shared) status_module (shared) Syntax OK What is the mistake if any one can point out in above configuration, or else I need to check any thing else?

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside of the rest of our infrastructure so in case there are issues in our data center we can post updates for our users and our users will be able to access them. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity so we're unable to have a redirect using nginx or apache. We'd like to do this with a short TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I setup the CNAME and go to that URL I get a Google broken robot 404 page. I figure I must need to let Google know it should associate traffic to www.xyz.com and app.xyz.com to the blog served up by status.xyz.com. But I can't see anywhere to do this in Blogger. Does anyone know if this is possible?

    Read the article

  • Should I make up my own HTTP status codes? (a la Twitter 420: Enhance Your Calm)

    - by Max Bucknell
    I'm currently implementing an HTTP API, my first ever. I've been spending a lot of time looking at the Wikipedia page for HTTP status codes, because I'm determined to implement the right codes for the right situations. Listed on that page is a code with number 420, which is a custom code that Twitter used to use for rate limiting. There is already a code for rate limiting, though. It's 429. This led me to wonder why they would set a custom one, when there is already a use case. Is that just being cute? And if so, then which circumstances would make it acceptable to return a different status code, and what, if any problems may clients have with it? I read somewhere that Mozilla doesn't implement the joke 418: I’m a teapot response, which makes me think that clients choose which status codes they implement. If that's true, then I can imagine Twitter's funny little enhance your calm code being problematic. Unless I'm mistaken, and we can appropriate any code number to mean whatever we like, and that only convention dictates that 404 means not found, and 429 means take it easy.

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain inbound links

    - by crnm
    I recently started a new project with a newly registered generic tld domain. As soon as Google started indexing the page, it displayed a "translate this page" in SERP's, which tries to translate the page to the language of a small Eastern European country from the language that the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools...all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools all coming from that small Eastern European country, from sub-pages that are not active anymore (either sending out 404's or 301's to the main page), and also had been written in that other language. So the domain had been registered before and as it looks, it did got a lot of possibly spam links in that language. I can't even ask the sites where those links should have been to remove them as they are not active anymore physically, just in Google Webmaster Tools and/or internal data masses... Now I'm at a loss about what to do? As my site is pretty new, it does not have many links pointing towards it in my targeted language. So those are probably not enough to convince Google of attaching the right language to it as Google ignores all other signals about the page language. I'm also unsure if I should use the "disavow" tool, or a reconsideration request...or what else to do about this miserable state. I never used these tools before so I don't have any experience with them. Somehow I have to convince Google about the right language of the page and also to not count/apply/whatever all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there)

    Read the article

  • Need help: jBPM installation issue

    - by Kouky
    This is Ms Kahina. I'm getting started with jBPM and Java EE. I tried to install jBPM 5.4 by following all the steps stated in the jBPM user guide, as below:

    1. I installed Java (JDK 1.7) and Ant (Apache Ant 1.9.4).
    2. I set both the JAVA_HOME and ANT_HOME environment variables.
    3. I downloaded the jBPM 5.4 full installer and ran the installation script "ant install.demo" and after that the start script "ant start.demo".

    Unfortunately, jBPM 5.4 is not working. When running the installation script I get a success message, but with the following warnings:

        [copy] Warning: Could not find file C:\jbpm-installer\db\task-persistence.xml to copy.
        [copy] Warning: Could not find file C:\jbpm-installer\db\Taskorm.xml to copy.

    The start demo was successful. However, when I try to open any tool provided with jBPM, such as the jBPM console, I get "Address not found" or sometimes an HTTP status 404, even though the welcome page of JBoss AS opens at http://localhost:8080/. I need your assistance to sort out this issue so I can move forward with my project; I have been blocked at the installation stage for more than a week now, and I don't know whether this is related to JBoss AS7 or to something else I haven't found out. Looking forward to hearing from you. Thanks, Kahina

    Read the article

  • What could have caused a large traffic drop from Google in early May?

    - by Scott Schluer
    I have a website (www.equispot.com) that has been indexed for almost 2 years in Google. I managed to get myself on the first page (average position 6-8) on Google for my target keyword of "horses for sale" and held there pretty solidly for months. Suddenly, with no changes to the site, traffic from Google dropped like a rock in early May. I slowly fell in position until now I'm sitting at the bottom of page 4. I have never hired an SEO firm, have not used any "black hat" techniques that Google would have penalized me for in their May update, etc. I'm not familiar enough with SEO to know how to look at link profiles, etc. to tell if there's something wrong. I've run my site through a DNS checker and it came back with no errors. Google Webmaster Tools shows no messages or notices of any kind, just a drop in traffic. GWT also shows only 2 server errors and 1 404. Is there anyone who can tell me by quickly checking my domain if there's an obvious reason that my traffic would have fallen so far, something that I can fix?

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies who are sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels available on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where Javascript is needed for functionality built into the page, but is unable to initialize until a 404 is returned on the external tracking pixel. (Sometimes up to 30 seconds later) I have spent some time attempting to research how other firms deal with this sort of instability with third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites. Question Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
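    One mitigation pattern worth including in any guidelines on this: load third-party pixels asynchronously after the page's own scripts have initialised, so that a slow or failing tracker cannot hold up functionality that depends on JavaScript. A rough, vendor-neutral sketch (the pixel URL is a placeholder, not a real endpoint):

        // Fire a tracking pixel without blocking page initialisation.
        // Failures are swallowed: tracking is best-effort by design.
        function firePixel(src) {
            var img = new Image(1, 1);
            img.onerror = function () { /* ignore 404s and timeouts from the tracker */ };
            img.src = src + (src.indexOf('?') === -1 ? '?' : '&') + 'cb=' + new Date().getTime();
        }

        // Defer until the page has loaded so site functionality is never waiting on a pixel.
        window.addEventListener('load', function () {
            firePixel('https://tracker.example.com/pixel.gif'); // placeholder URL
        });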

    Read the article

  • How does 301 redirection work across the network? & should I use it if there is a chance we made need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons). But nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301 or 302 or 307 redirects should be used when serving up pages to requests for out-of-date URLs. I understand the value of using 301 for search engine optimization. But my concern is with this system possibly inadvertently making some pages unavailable to some users QUESTIONS: That is, if the clients move a page at location/URL A to a new location B, then users get the redirect for A to B, and then the clients move the page back to A again, how long can I expect any of those users to keep getting their requests for A redirected to B -- in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there ?

    Read the article

  • Website migration from WordPress to a static site and doing 301 redirects without access to existing site?

    - by user3114468
    I'm currently working on a project that is hosted on WordPress and is being migrated to a static site. However, I presently do not have access to the existing site, as it's managed by another developer. The concern is not the lack of access to content: the site owner has generated very little content (the reason for the migration) and we were able to copy it manually. Rather, the concern is how to do the 301 redirects. The site will not change domains, but URLs will change, for example from example.com/?page_id=3 to example.com/services. To add, the site is migrating to a new server under the same domain name. I thought maybe this could be done by editing permalinks prior to migration, and WordPress would update automatically if it is configured to write to the server. But if it is not configured that way (as is not always the case), I do not have the .htaccess to fix it if there are suddenly a bunch of 404 errors for every page. I could really use some help on the best procedure to follow in this case. This is the first migration project I've worked on.
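    Because the old WordPress URLs live in the query string (?page_id=3) rather than in the path, plain Redirect lines won't match them. If the new static site ends up on an Apache host, the usual pattern is a RewriteCond on %{QUERY_STRING} in the new server's .htaccess - a sketch for the example mapping given above (repeat the pair per page; the trailing ? drops the old query string):

        RewriteEngine On

        # old WordPress URL /?page_id=3  ->  new static page /services
        RewriteCond %{QUERY_STRING} (^|&)page_id=3(&|$)
        RewriteRule ^/?$ /services? [R=301,L]

    Since this lives entirely on the new server, it does not require any access to the existing WordPress install.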

    Read the article

  • Merging two sites into one, how to redirect from the domain that's going away?

    - by bikeboy389
    I haven't been able to find any existing questions that cover my exact issue, so here goes: My client wants her two sites (domain1.com and domain2.com) rolled into a single, new site under domain1.com. Once the site is ready on domain1.com, DNS for domain2.com would be pointed at the same server as domain1.com. I know how to do an htaccess rewrite rule that would make all domain2.com traffic map to a specific single page or directory within domain1.com. But that's not what the client wants. What she wants is for a bunch of specific pages on domain2.com to map to specific new pages on domain1.com. For example: domain2.com/index.php?pageid=58 GOES TO domain1.com/2011/04/somearticle domain2.com/index.php?pageid=92 GOES TO domain1.com/2011/03/differentname etc. I could put a bunch of 301 redirects in the htaccess on domain1.com, which would work fine. The problem is, the client doesn't want/need specific redirects for ALL the domain2.com pages, and if I just do 301 redirects, anybody who comes looking for a domain2.com page that I haven't built a specific redirect for will get a 404 error. So I need to use 301 redirects for some traffic, and a rewrite rule for any traffic that's not covered in the 301 redirects. How do I do sort of a blending of a rewrite rule and 301 redirects, all in the htaccess file for domain1.com? Is this possible? Is it as simple as putting the 301 redirects in the htaccess file first, then doing the rewrite rule? I'm guessing not.
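    Ordering does most of the work here: mod_rewrite evaluates rules top to bottom, so the specific old-page 301s can come first (each ending in [L]) and a host-wide catch-all for everything else on domain2.com can follow. A sketch in domain1.com's .htaccess using the example mappings from the question (drop R=301 from the last rule if unmatched traffic should be rewritten internally rather than redirected):

        RewriteEngine On

        # Specific domain2.com pages -> their new homes on domain1.com
        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteCond %{QUERY_STRING} (^|&)pageid=58(&|$)
        RewriteRule ^(index\.php)?$ http://domain1.com/2011/04/somearticle? [R=301,L]

        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteCond %{QUERY_STRING} (^|&)pageid=92(&|$)
        RewriteRule ^(index\.php)?$ http://domain1.com/2011/03/differentname? [R=301,L]

        # Anything else arriving for domain2.com goes to the domain1.com home page
        RewriteCond %{HTTP_HOST} ^(www\.)?domain2\.com$ [NC]
        RewriteRule ^ http://domain1.com/ [R=301,L]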

    Read the article

  • mod_rewrite not working after upgrade to 12.10

    - by CrowderSoup
    I'm hoping this is a quick and simple fix and that I just need a fresh set of eyes. However, I'm fearful that it might actually be an error in the latest build of the rewrite module. I have a .htaccess file that turns on the rewrite engine (I've made sure the module is enabled), creates some rewrite conditions, and finally a rewrite rule. Here's my .htaccess file for reference: <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC] </IfModule> Now for the problem: if I go to hostname.com it works fine. If I go to hostname.com/Index it works fine. However, if I go to hostname.com/index it doesn't rewrite the request and I get a 404. I'm not sure what's going on here. I've used a rewrite rule tester and there doesn't appear to be any issues with my rewrite rule itself. Again, this issue didn't manifest until after I upgraded to 12.10, at which point I know that Apache was updated. Any thoughts? Has anyone else here experienced this? I know that two other people besides myself have experienced this here. Thanks in advance for any help you can provide!
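    One thing worth ruling out, given that /Index is rewritten but /index is not: MultiViews. With content negotiation on, a request for /index can be mapped straight to index.php by mod_negotiation before the rewrite rules run, so the front controller never receives its request parameter and can answer with its own 404. This is speculative without seeing the vhost, but disabling it in the same .htaccess is a cheap test (Options in .htaccess needs AllowOverride Options or All):

        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?request=$1 [L,QSA,NC]
        </IfModule>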

    Read the article
