Search Results

Search found 9717 results on 389 pages for 'pro metedor'.

Page 66/389 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Can Campaign URL tags cause a soft 404 error?

    - by user35306
    I was checking the Webmaster Tools account for one of my company's websites to analyze the cause of some soft 404 errors and discovered that a few of the older errors had affiliate mp referral tags listed as the relative URLs. Since these are older problems and I don't see too many of them coming up in the last few months, I don't think it's still an issue. I'm just curious whether it's possible to cause a soft 404 by improperly copying a campaign or referral tag into the URL.

    Read the article

  • Mass emailing bouncebacks - SendBlaster

    - by Matt
    I am currently using a mass emailer called SendBlaster; if anyone has experience using this program for mass emails, any help would be fantastic. The program has a feature that lets you track reads and opens of sent emails; however, the problem I have is with delivery failures/bouncebacks. The "manage bouncebacks" feature is very confusing and appears to be incapable of showing which email addresses have bounced. For some reason the sender address does not receive delivery failures as it does with other mass email programs I've used. If anyone knows a way to efficiently manage delivery failures/bouncebacks using this program, please help! Thanks.

    Read the article

  • Redirect/rewrite dynamic URL to sub-domain and create DNS for subdomain

    - by Abdul Majeed
    I have created an application in PHP, and I would like to redirect the following URL to the corresponding sub-domain. Dynamic URL pattern: http://mydomain.com/mypage.php?user_name=testuser I wish to redirect this to the corresponding sub-domain: http://testuser.mydomain.com/ How do I create a rewrite rule for this purpose? And how do I register DNS for the sub-domain without using cPanel? (I want to activate the sub-domain when a user registers on the system.)
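
    A minimal .htaccess sketch of one way this is commonly wired up; everything beyond mypage.php and user_name is an assumption about the setup (mod_rewrite enabled, the virtual host carrying a ServerAlias *.mydomain.com, and a wildcard DNS record for *.mydomain.com pointing at the server, so no per-user DNS entry is needed at registration time).

        RewriteEngine On

        # send the old dynamic URL to the matching sub-domain
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteCond %{QUERY_STRING} (?:^|&)user_name=([^&]+)
        RewriteRule ^mypage\.php$ http://%1.mydomain.com/? [R=301,L]

        # internally map each sub-domain back onto the PHP script
        RewriteCond %{HTTP_HOST} !^(www\.)?mydomain\.com$ [NC]
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$ [NC]
        RewriteRule ^$ mypage.php?user_name=%1 [L,QSA]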

    Read the article

  • Can't get Heroku site updated on custom domain

    - by Joseph Brown
    I have hosayif.com registered at GoDaddy, and I set up a CNAME for rails.hosayif.com to point to my Heroku app at sharp-meadow-6535.herokuapp.com. I set this up with a previous app, and it worked. I made a new app, renamed the old one, and then renamed the new one so that it is sharp-meadow-6535.herokuapp.com, in hopes of not having to change anything at GoDaddy. In theory, rails.hosayif.com and sharp-meadow-6535.herokuapp.com should be the same site, but they are not. Can someone tell me what I am doing wrong?

    Read the article

  • How do I ask Google not to index certain parts of my page?

    - by Gavin Mannion
    I was searching for an old review on my site today and noticed that Google is indexing the headline text in my latest-articles list on every page where it appears, which I suppose is to be expected. The problem is that if I search for my Dragon's Lair review restricted to my site, like this: http://www.google.co.za/search?sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8&q=site%3Alazygamer.net+dragons+lair+review then it returns a ton of pages that aren't appropriate, as they aren't related to the review at all. The reason I care is that I have a second Dragon's Lair review that was posted years ago and now I can't find it. Is there a way to hint to Google that certain text isn't relevant to the actual content of the page? Or is that a terrible idea?

    Read the article

  • Why does Google skip the page title?

    - by Bob
    I have no idea why this is happening. An example: http://www.londonofficespace.com/ofdj17062004934429t.htm The title tag is: Unfurnished Office Space Wimbledon – Serviced Office on Lombard Road SW19 But it is indexed as: Lombard Road – SW19 - London Office Space If you look in the source code and search for the portion ‘Lombard Road – SW19’, you find that it sits next to an office image with alt=’Lombard Road – SW19’. The only thing I can think of is that the spider somehow ‘skips’ our title tag, grabs this bit instead, and then appends the name of the site (but why?). Is there anything I can do about this, or is this just Google's behaviour?

    Read the article

  • Problem connecting to Magento Connect

    - by amir
    Hi, I'm using Magento 1.4.0, and when I try to get to Magento Connect and download a plugin, the page says: Error: Please check for sufficient write file permissions. Your Magento folder does not have sufficient write permissions, which this web based downloader requires. If you wish to proceed downloading Magento packages online, please set all Magento folders to have writable permission for the web server user (example: apache) and press the "Refresh" button to try again. Does anyone know how I can fix this problem? Thanks. Update: the plugin I'm trying to use is MagentoPycho light box, so I unpacked the folder into app/code/local, but it still doesn't show in the admin area.

    Read the article

  • How to prevent Google Website Optimizer from making Google Analytics spike Direct Traffic and lower Bounce Rate?

    - by Scott
    I am using Google Website Optimizer (GWO) and Google Analytics. Whenever GWO performs a JavaScript redirect, Google Analytics changes the referrer. When the referrer changes, the traffic source becomes your own site and is reclassified as Direct Traffic. For example: a visitor goes to Google, searches for my great service, and clicks the link that goes to the page /home/. At this point, Google Analytics tracks the referrer as Google. However, /home/ has a GWO JavaScript redirect to a battery of A/B tests: /home-1/, /home-2/, or /home-3/. When the redirect from /home/ to /home-1/ occurs, Google Analytics on the /home-1/ page now thinks the referrer is my own site and converts the referrer to Direct Traffic, since Direct Traffic is the bucket for unknown sources. I'm really surprised that GWO and GA behave this way when they both come from Google. How does one prevent GWO from overwriting the referrer?

    Read the article

  • Friendly URLs: is there a max length for search engines?

    - by Olivier Pons
    People from Stack Overflow have been working closely with the Google team to help them make the Panda algorithm more efficient, so I guess they've learned a lot from Google. Thus they may have designed very clever friendly URLs to maximize page rank. From time to time I've seen very long URLs on Stack Overflow (I can't find where), but after a certain number of characters there were only numbers, as if to say: "OK, past this length SEOs will ignore the rest, so let's put only numbers." I've done a lot of work on my framework to generate very friendly URLs, and my website can come up with URLs like: http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/ It's very long, and I'm wondering whether that URL might be confused with, say, this one: http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/

    Read the article

  • Is there plugin directory software like the Firefox Add-ons or WordPress Plugin Directory?

    - by lulalala
    Nowadays browsers and content management systems all have plugin/module/addon/theme systems that users can extend at will, and developers can submit plugins to share them with others. The most famous examples are the Firefox Add-ons site and the WordPress Plugin Directory. I want to ask whether there is an open-source web system specifically designed for this need: hosting plugins. Not just one plugin, but all plugins for one piece of software. Ideally it should allow developers to upload plugins, keep a public version of each update, and let client software check which plugins are available through the directory's API. As an example, a CMS could have one such directory to host/display theme files. Also, is there a better keyword to describe this kind of system?

    Read the article

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still a little confused about the exact details of using one. The biggest problem with a self-signed certificate, is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server. – SSL Shopper So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it, it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all. Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs on the other hand have the ability to revoke a compromised certificate if alerted, which prevents its further use. - Wikipedia Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue that obnoxious security warning that pops up in your browser when directed to the site, or is there more to it? Now in my case, I want to add an SSL certificate to a minuscule WordPress blog I run that I don't expect anyone else will read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about the process of administrating a site (ex. what to do in situations like this one). Whenever I go to the login page and there's an HTTP:// instead of HTTPS://, I cringe a little. Submitting my password feels like I'm shouting my password out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that, simply my username and password to login with. One of the concerns (that I've gathered so far) of a self-signed certificate is that non-technical users might be scared by the security warning, but this would not be an issue in my case. TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my WordPress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or if it adds the potential for more security breaches than leaving it sans-SSL.

    Read the article

  • Googlebot fetches my pages very frequently: rel-nofollow, meta-noindex, or robots.txt-disallow?

    - by trante
    Googlebot fetches pages on my site very frequently, and this slows my website down. I don't want Googlebot to crawl too often. I have already decreased the crawl rate in Google Webmaster Tools, but I'm considering these three tools: adding rel="nofollow" to links to my inner pages, so Googlebot won't crawl and index them; adding a "noindex" meta tag, so Google will remove the page from its index and won't fetch it again; or adding Disallow: /mySomeFolder/ to robots.txt, so Googlebot won't crawl those pages. I'm planning to use these methods for my 56,000 pages, except for the most important 6-7 pages. Which method would you prefer, and what would be the advantages or disadvantages? Or won't any of this change my website's speed at all?
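
    For reference, a minimal sketch of what the second and third options look like (the folder name is the one from the question). Note that combining them is counter-productive: a URL blocked in robots.txt is never fetched, so Google cannot see a noindex tag placed on that page, and rel="nofollow" on internal links does not reliably keep pages out of the index either.

        # robots.txt: keep Googlebot out of one folder
        User-agent: Googlebot
        Disallow: /mySomeFolder/

        <!-- in the <head> of a page that should drop out of the index -->
        <meta name="robots" content="noindex">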

    Read the article

  • Why does the same site show Google Sitelinks in some SERPs and not in others?

    - by frank13
    Is there a way to get sitelinks on a Google SERP when searching on a site's name vs. the site's web address? For example, if you search "twin city kings", the first result is the website for Twin City Kings without sitelinks. But if you search "twincitykings.com", the first result is the website for Twin City Kings with sitelinks. Is it possible to get sitelinks on both SERPs? Thanks for helping or clarifying. Note: this question does not pertain to "how to get a sitelink"; it pertains to why the same site comes up in different SERPs with and without sitelinks.

    Read the article

  • My website directories download instead of opening in the browser

    - by numerical25
    I added some screencasts to show what I am having issues with: http://screencast.com/t/212t3ANINqk http://screencast.com/t/bR44U1wkvNZl http://screencast.com/t/iDS7APYYsa The browser downloads my subdirectories instead of opening them and displaying each directory's index file. Here is the situation: I am trying to get my web service up using MacPorts, and I am just trying to configure all the files. I am using PHP, Apache, etc. The localhost root loads fine, but anything beyond that cannot be found. Edit: I've tried adding the following to httpd.conf within the <IfModule mime_module> block, but no luck:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php3
        AddType application/x-httpd-php .php4
        AddType application/x-httpd-php .html
        AddType application/x-httpd-php-source .phps
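
    For comparison, a hedged sketch of the pieces that normally have to line up in httpd.conf before PHP files and directory indexes are served rather than downloaded; the module name and path are assumptions about a MacPorts PHP install and may differ. If the PHP module is not loaded, the AddType lines merely label the files with a type the browser does not recognise, so it offers them for download instead.

        # load the PHP interpreter (path and name depend on how PHP was installed)
        LoadModule php5_module modules/mod_php5.so

        <IfModule mime_module>
            # hand .php files to the PHP module
            AddType application/x-httpd-php .php
        </IfModule>

        <IfModule dir_module>
            # serve index.php (or index.html) when a directory is requested
            DirectoryIndex index.php index.html
        </IfModule>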

    Read the article

  • Debugging "Premature end of script headers" - WSGI/Django [migrated]

    - by Marcin
    I have recently deployed an app to a shared host (WebFaction), and for no apparent reason my site will not load at all (it worked until today). It is a Django app, but django.log is not even created; the only clue is that in one of the logs I get the error message "Premature end of script headers", identifying my WSGI file as the source. I've tried to add logging to my WSGI file, but I can't find any log created for it. Is there any recommended way to debug this error? I am on the point of tearing my hair out. My WSGI file:

        import os
        import sys
        from django.core.handlers.wsgi import WSGIHandler
        import logging

        logger = logging.getLogger(__name__)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        os.environ['CELERY_LOADER'] = 'django'

        virtenv = os.path.expanduser("~/webapps/django/oneclickcosvirt/")
        activate_this = virtenv + "bin/activate_this.py"
        execfile(activate_this, dict(__file__=activate_this))
        # if 'VIRTUAL_ENV' not in os.environ:
        #     os.environ['VIRTUAL_ENV'] = virtenv

        sys.path.append(os.path.dirname(virtenv + 'oneclickcos/'))

        logger.debug('About to run WSGIHandler')
        try:
            application = WSGIHandler()
        except (Exception,), e:
            logger.debug('Exception starting wsgihandler: %s' % e)
            raise e
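
    One thing worth noting: the module-level logger in that file has no handler or log file configured, so in practice its debug output goes nowhere. A hedged sketch of forcing everything into a known file near the top of the WSGI script (the path is hypothetical) is below; Apache's error log for the site is the other usual place where the cause of a "Premature end of script headers" shows up.

        import logging

        # route all loggers, including this module's, into a readable file
        logging.basicConfig(
            filename='/home/myuser/logs/wsgi_debug.log',  # hypothetical path
            level=logging.DEBUG,
            format='%(asctime)s %(levelname)s %(name)s: %(message)s',
        )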

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian hosting three virtual hosts. The first two hosts are HTTP only (server1 and server2). The last host is HTTPS only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior: any request for http://server3 is redirected to https://server3, and any request for either https://server1 or https://server2 is redirected to http://server1 or http://server2 as appropriate. Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 resolves as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this, and where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
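
    A hedged sketch of the kind of rules that usually end up in each virtual host for this; the hostnames are the ones from the question, everything else is an assumption about the setup. Note that browsers will still raise a certificate warning on https://server1 and https://server2 before following the redirect, since the only certificate served on port 443 belongs to server3.

        # in the port-80 vhosts (or a shared include): force server3 onto HTTPS
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^server3 [NC]
        RewriteRule ^ https://server3%{REQUEST_URI} [R=301,L]

        # in the single port-443 vhost: bounce every other hostname back to HTTP
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]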

    Read the article

  • Help with Apache mod_rewrite rules

    - by Brian Neal
    I want to change some legacy URLs like this: /modules.php?name=News&file=article&sid=600 to this: /news/story/600/ This is what I have tried:

        RewriteEngine on
        RewriteCond %{QUERY_STRING} ^name=News&file=([a-z_]+)&sid=([0-9]+)
        RewriteRule ^modules\.php /news/story/%2/ [R=301,L]

    However, I still get 404s on the old URLs. I do have some other rewrite rules working, so I am pretty sure mod_rewrite is enabled and functioning. Any ideas? Thanks.
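
    For what it's worth, the rule itself looks close; below is a sketch of the same idea with two common gotchas handled, assuming the rules live in the .htaccess at the document root. The trailing "?" stops the legacy query string from being re-appended to the new URL, and if the rules sit in the main server config instead of .htaccess, the pattern needs a leading slash (^/modules\.php).

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^name=News&file=([a-z_]+)&sid=([0-9]+)
        # %2 is the sid captured above; the trailing "?" discards the old query string
        RewriteRule ^modules\.php$ /news/story/%2/? [R=301,L]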

    Read the article

  • 301 redirect www to non-www [duplicate]

    - by Claudiu
    This question already has an answer here: Removing non-www support (4 answers). I understand that it is better to set up a 301 redirect to make sure that all your links are seen the same way by Google. Until now I have always used erasmus-plus.ro without the www for my website. Is it OK to make a redirect from www to non-www? In my searches on Google, everyone spoke about it the other way around. And somewhere I read that redirects are not good for SEO. Is a 301 an exception?
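
    A permanent redirect from www to the bare domain is a standard canonicalisation and is generally considered safe for SEO; a minimal .htaccess sketch, assuming mod_rewrite is available (the domain is the one mentioned above):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.erasmus-plus\.ro$ [NC]
        RewriteRule ^ http://erasmus-plus.ro%{REQUEST_URI} [R=301,L]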

    Read the article

  • Does public domain registration information affect domain authority in SEO?

    - by Rana
    I have seen various places on the internet discussing this, and they say this information does have an impact on SEO, especially on domain authority. Is that so? If so, is there a proper way to fill it in? For example, I am running a site as my blog; what should the organization field be? If I own multiple domains, will having the same contact information on all of them help gain domain authority? As an owner of multiple sites, should the organization name be the same or different, considering I don't own a company yet? Any other suggestions about this?

    Read the article

  • cPanel - Wildcard SSL - How to point *.domain.com to one root and sub.domain.com to another root

    - by Harry Muscle
    I have a wildcard (*.domain.com) SSL certificate installed on my cPanel server. I have domain.com configured to point to /domain.com as its document root and to use this wildcard SSL certificate. I also have sub.domain.com configured to point to /sub.domain.com as its document root. By the way, I have not explicitly configured sub.domain.com to use the wildcard SSL certificate. When I go to "http://sub.domain.com" it goes to the correct document root; however, when I go to "https://sub.domain.com" it goes to the incorrect root, the one configured for the wildcard SSL. I've been trying to find information on how to configure sub.domain.com to use the SSL certificate and go to the correct document root, but so far I haven't found anything concrete. Do I use the same steps that I used for configuring the certificate for domain.com, but reuse the same certificate and specify sub.domain.com as the domain that the certificate is for (instead of *.domain.com)? Or is there something else I should be doing? This is a production server, so I don't want to play around too much, and I'm hoping to find the correct information before proceeding.
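
    Outside of cPanel's own tooling, the usual shape of the fix is a dedicated SSL virtual host for the sub-domain that reuses the wildcard certificate but has its own document root. A hedged sketch of the Apache side is below; paths and file names are assumptions, and on a cPanel box it is generally safer to install the same certificate for sub.domain.com through the SSL/TLS interface than to hand-edit httpd.conf, which cPanel may overwrite.

        <VirtualHost *:443>
            ServerName sub.domain.com
            DocumentRoot /home/user/sub.domain.com      # hypothetical path
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>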

    Read the article

  • How can I get my dynamic site search results content indexed by Google?

    - by Kris
    I have a site that is simply a search box for a cloud-hosted database of .tiff images, so all of my content can only be accessed by entering a search term. For example, you're on the home page www.example.com, you type "search" into the box and hit submit, and it takes you to www.example.com/?q=search, which is a page of all my .tiff images with "search" in the description. How can I get a page like www.example.com/?q=search indexed WITHOUT making a humongous list of search terms that people might type in? I know about mod_rewrite, but it seems like for that you need to know ahead of time which URLs you'll need to convert, which I don't. All of these pages will be dynamically generated by users typing into the search field. Please help!

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the number of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every website known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward to implement and apply, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and ban the offending IPs or IP ranges accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web and DB servers) and proactively deal with these sources of mayhem?

    Read the article

  • What are the options for hosting a small Plone site? [closed]

    - by Tina Russell
    Possible Duplicate: How to find web hosting that meets my requirements? I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Website address points to localhost

    - by munir ahmad
    Today, while surfing the internet in Firefox, I opened a website and it warned, "This site may harm your computer"; to open the website you have to add it to an exception list. I added the website to the exception list and trusted it. Since then, whenever I open this website, it always points to localhost while the internet is connected. I have set up localhost through Apache (XAMPP server). If the internet is not connected, this website does not open anything, but localhost still works as usual. How can I fix this so that the website does not point to localhost? I am using Windows XP SP3. The same problem now appears in all other browsers too.

    Read the article
