  • Can Campaign URL tags cause a soft 404 error?

    - by user35306
    I was checking the Webmaster Tools account for one of my company's websites to analyze the cause of some soft 404 errors, and discovered that a few of the older errors had affiliate mp referral tags listed as the relative URLs. Since these are older problems, and I don't see many of them coming up in the last few months, I don't think it's still a problem. I'm just curious whether it's possible to cause a soft 404 by improperly copying a campaign or referral tag into the URL.
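
    For illustration, a well-formed campaign-tagged URL next to the kind of mangled copy that can end up rendering a soft-404 page (parameter names and values hypothetical):

        http://example.com/page?utm_source=affiliate&utm_campaign=spring    # tags appended after a single "?"
        http://example.com/page?utm_source=affiliate?utm_campaign=spring    # second "?" pasted in by mistake
        http://example.com/pageutm_source=affiliate                         # "?" lost; the tag becomes part of the path

    A URL mangled in either of the last two ways can resolve to a thin error-ish page that still returns 200, which is exactly what Google reports as a soft 404.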

  • DNS hosting question

    - by ArthurD42
    Hi, I'm new to DNS hosting, and I have recently set up Google Apps via the 'mail.' CNAME record. How can I use DNS to display files at the www URL, i.e. a 'www.' CNAME? Or is it not possible to serve files using DNS hosting alone? I also have hosting elsewhere and want to know whether I could point the www CNAME at a subdirectory on that server without exposing the subdirectory URL publicly, so that the address bar always shows the DNS-hosted (original) domain. Thank you!
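
    A minimal zone sketch of this kind of setup (hostnames hypothetical; ghs.google.com was the usual Google Apps CNAME target):

        mail  IN  CNAME  ghs.google.com.
        www   IN  CNAME  webhost.example.net.

    Note that DNS alone only points names at servers; it cannot host files, and it cannot mask a subdirectory path. Keeping the original domain in the address bar while serving content from another server requires the web server itself (e.g. a reverse proxy) or a frame-based redirect, not a DNS record.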

  • clicktale.com alternative that works with HTTPS and AJAX

    - by Alexey Ivanov
    I need to record users' actions on a site for analytics purposes. The way clicktale.com does it is just fine, but unfortunately it has problems working over HTTPS and recording AJAX events. Is there a service, or a script/library I can host, that can do this task? Non-free ones are OK too. Clarification: the ClickTale feature I want to reproduce is the recording of separate user sessions and their replay, so you can watch a video of all the user's interactions with the page: where he clicks first, which links open, etc. Usually such services replay user actions by reproducing them with JavaScript (and here comes the AJAX problem: external sites can't use AJAX because of cross-domain scripting restrictions). So I'm looking for a tool (possibly a script that I host on the site, to allow cross-domain scripting) that can record actions in AJAX blocks.

  • Which ecommerce framework is fast and easy to customize?

    - by Diego
    I'm working on a project where I have to put an ecommerce system online that will require a good amount of custom features. I'm therefore looking for a framework that makes customization easy enough (from an experienced developer's perspective, I mean). The language should be PHP, and time is a constraint; I don't have months to learn. Additionally, the shop will have to handle around 200,000 products from day one, a number that will grow over time, so performance is also important. So far I have examined the following:

    - Magento: complicated and, from what I could read, slow when the database contains many products. It's also resource-intensive, and we can't afford a dedicated VPS from the beginning.
    - OpenCart: rough at best, and the documentation is extremely poor. It's also "free" to start, but each feature is implemented via third-party commercial modules.
    - osCommerce: buggy, inefficient, outdated.
    - Zen Cart: derived from osCommerce, and doesn't seem much better.
    - PrestaShop: it looks like it has many incompatibilities. Also, most of its modules are commercial, which increases the cost.

    In short, I'm still quite undecided, as none of the above seems to satisfy the requirements. I'm open to evaluating closed-source frameworks too, if they are any better, but my knowledge of them is limited, so I'll welcome any suggestion. Thanks for all replies.

  • How to refresh Google Reader cache for a specific domain

    - by Renan
    Brief history: domain.com used to be associated with a Blogspot blog. domain.com then changed to an institutional site, which features a small piece of markup instructing RSS readers which feed to retrieve:

        <link rel="alternate" type="application/rss+xml" href="http://domain.tumblr.com/rss" />

    Google Reader didn't update the RSS to the new blog. domain.com/blog retrieves the posts correctly, because that path was never used by the older blog. How can I force Google Reader to update the cached information? I tried another RSS reader and it worked perfectly with the new domain. However, when I tried to follow domain.com in another Google Reader account, it still showed posts from the older blog. It's been almost a month since the aforementioned changes were made.

  • Google shows subdomain of main site instead of add-on domain URL

    - by Welsher
    I have my host (Lunarpages) set up with a few add-on domains under my main account. These show up as subdomains of my main account, but they can also be reached via the new domains I've created. So:

        subdomain1.domain.com -- www.mynewsite.com
        subdomain2.domain.com -- www.myothersite.com
        etc.

    The problem is that mynewsite.com shows up in Google under that domain, but myothersite.com shows up as subdomain2.domain.com. I don't have a clue what might be causing this. If anyone has any advice or can point me in the right direction, I'd really appreciate it! Thanks.
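
    A common way to keep the subdomain alias out of the index is a 301 redirect to the canonical add-on domain. A minimal .htaccess sketch, assuming mod_rewrite is available (domain names follow the hypothetical ones above):

        RewriteEngine On
        # requests arriving under the subdomain alias get sent to the add-on domain
        RewriteCond %{HTTP_HOST} ^subdomain2\.domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.myothersite.com/$1 [R=301,L]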

  • Why do some large companies use a different domain for sending emails?

    - by Andrei Rinea
    I've received notifications and newsletters from Microsoft and Facebook in the past, and noticed that both emails came not from an address such as [email protected] or [email protected], and not even [email protected]; both used entirely different domains, such as [email protected] and [email protected]. Why is this? Is there any particular advantage in doing so? Other than not polluting the employees' email software, I can't see one.

  • 301 redirect www to non-www [duplicate]

    - by Claudiu
    This question already has an answer here: Removing non-www support (4 answers). I understand that it is better to use a 301 redirect to make sure all your links are seen the same way by Google. Until now I have always used erasmus-plus.ro, without the www, for my website. Is it OK to redirect from www to non-www? From my search on Google, all users spoke about it the other way around. And somewhere I read that redirects are not good for SEO. Is a 301 an exception?
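
    Either direction is fine as long as one form is picked consistently. A minimal .htaccess sketch for the www-to-non-www case, assuming Apache with mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.erasmus-plus\.ro$ [NC]
        RewriteRule ^(.*)$ http://erasmus-plus.ro/$1 [R=301,L]

    A 301 of this kind is the exception to the "redirects are bad for SEO" worry: it exists precisely to consolidate the two hostnames into one canonical URL.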

  • Creating a subdomain in Webmin [duplicate]

    - by Vijay
    This question is an exact duplicate of: Webmin - Setting up multiple virtual hosts - Subdomains (1 answer). Can anybody help me create a subdomain through Webmin? I want to create a subdomain like test.xxxxx.com. I have tried several reference sites, but with no luck, e.g.:

        http://www.trickylinux.net/add-domain-virtualminwebmin.html
        http://codeboxlabs.com/add-subdomain-webmin-linux/

    My current httpd.conf looks like:

        <VirtualHost *:80>
            SSLEngine off
            DocumentRoot /var/www/html/******/web
            DirectoryIndex index.php
            <Directory "/var/www/html/*****/web">
                AllowOverride All
                Allow from All
            </Directory>
            ServerName www.******/.com
            ServerAlias ftp.*****.com
            SSLEngine off
            SSLVerifyClient optional
        </VirtualHost>

    Please help me solve this issue.
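
    For comparison, a plain Apache sketch of the extra virtual host Webmin would need to generate for the subdomain (paths and names hypothetical):

        <VirtualHost *:80>
            ServerName test.xxxxx.com
            DocumentRoot /var/www/html/test/web
            DirectoryIndex index.php
            <Directory "/var/www/html/test/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    A matching DNS record (an A or CNAME entry for test.xxxxx.com) must also exist before the virtual host can be reached.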

  • Changing Google crawl speed doesn't seem to work. Why?

    - by Olivier Pons
    Three days ago I changed the Google crawl rate for my website to two requests per second. I received a message in Google Webmaster Tools confirming that the speed change had been taken into account, but after more than three days nothing has happened: Google still makes one request every ten seconds. My web server is very fast and can handle up to twenty simultaneous connections, and my website is brand new, which means Google is almost the only one crawling it. After more than 30,000 successful requests (no 404s), I think something is going on... or maybe this is just a bug? Has anyone ever had this problem?

  • Mass emailing bouncebacks - SendBlaster

    - by Matt
    I am currently using a mass emailer called SendBlaster; if anyone has experience using this program for mass emails, any help would be fantastic. The program has a feature that lets you track reads and opens of sent emails; the problem I have is with delivery failures/bouncebacks. The "manage bouncebacks" feature is very confusing and appears incapable of showing which email addresses have bounced. For some reason the sender address does not receive delivery failures, as it does with other mass email programs I've used. If anyone knows a way to efficiently manage delivery failures/bouncebacks using this program, please help! Thanks

  • How do I ask Google not to index certain parts of my page?

    - by Gavin Mannion
    I was searching for an old review on my site today, and I noticed that Google indexes the headline text of my latest-articles list on every page where it appears. The problem is that if I search for my Dragon's Lair review restricted to my site, like this:

        http://www.google.co.za/search?sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8&q=site%3Alazygamer.net+dragons+lair+review

    then it returns a ton of pages that aren't appropriate, as they aren't related to the review at all. The reason I care is that I have a second Dragon's Lair review that was posted years ago, and now I can't find it. Is there a way to hint to Google that certain text isn't relevant to the actual content of the page? Or is that a terrible idea?

  • Appropriate response when a client empowered with a CMS destroys content at will

    - by dukeofgaming
    So, I just recently closed a website project that was pretty much The Oatmeal's "Design Hell", but with content. The client loved the site at the beginning, but then started getting other people involved and mercilessly bombarding us with their opinions. We delivered a carefully-thought-out content strategy (which the client approved) and extremely curated copywriting that took us four months, through at least five requirement changes (new content, new objectives for the business, changed offerings, new mindfaps, etc.) that forced us to rewrite the content about three times. The client never gave timely feedback, even though we kept the process open for him and his people to see (content being developed transparently in Google Docs). Near the end of the project he still wanted to make changes, but also wanted us to finish already (there are not enough words in the world to even try to make sense of this). So I explained the obvious implications of the never-ending requirement changes and advised him to take the time to gather his thoughts with his own team, and to treat the new content as a separate content-maintenance project. He happily accepted, but on the day of training/delivery things went very wrong, and we have no idea why. The client didn't even allow the site to stay up for a week with the content we developed for him; he quickly replaced us with a Joomla-savvy intern who completely destroyed the content with shallow, unstructured, tasteless wordsmithing (and I'm not even being visceral). Worst insult of all, he revoked our access to his server and the deployed CMS within ten minutes of being given his administrator account (we realized the day after that he did it from our own office, the nerve!). Everybody on the team is enraged and insulted, and I never want to see this happen again. So, to try to make sense of this situation and avoid it in the future with new clients, I have two concrete questions:

    1. Is there even an appropriate course of action with a client like this, or is he just not worth the trouble of analyzing (blindly hoping this never repeats)?
    2. In the exercise of blaming ourselves instead of the client, and taking this as a lesson in... something: how should we set expectations with new clients about working terms, process and the final product, so that they are discouraged from mauling the content to their own contempt once they get the codes to the nukes (access to the CMS)?

  • Trouble setting up DNS for a VPS

    - by Zenph
    I registered a domain at Namecheap and pointed its DNS to my host at vps.net. The strange thing is that at first this worked: the site was showing up, I uploaded files, and everything displayed correctly on my new domain. Now it's just the Namecheap holding page again. I have no idea why this is happening, as I haven't touched the configuration since it was working. Could anyone point me in the right direction? When I enter http://domain.com it redirects to http://www.domain.com and the Namecheap holding page is shown; prior to this, domain.com was showing what the host was serving. I am completely lost and have no idea where to start, so I'd appreciate any help I can get.
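
    A first step in narrowing this down is checking what the records actually resolve to; a quick sketch with dig (domain hypothetical):

        dig domain.com NS +short         # which name servers answer for the domain?
        dig www.domain.com +short        # where does www currently point?

    If the NS records have reverted to Namecheap's defaults instead of the vps.net name servers, the registrar's holding page is exactly what would be served.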

  • Can't get Heroku site updated on custom domain

    - by Joseph Brown
    I have hosayif.com registered at GoDaddy, and I set up a CNAME for rails.hosayif.com pointing to my Heroku app at sharp-meadow-6535.herokuapp.com. I set this up with a previous app, and it worked. I then made a new app, renamed the old one, and renamed the new one to sharp-meadow-6535.herokuapp.com, in the hope of not having to change anything at GoDaddy. In theory, rails.hosayif.com and sharp-meadow-6535.herokuapp.com should now be the same site, but they are not. Can someone tell me what I am doing wrong?
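
    A hedged first check is whether the CNAME resolves as expected and whether the custom domain is attached to the new app; Heroku routes requests by the requested hostname, so a custom domain has to be added to whichever app should serve it:

        dig rails.hosayif.com CNAME +short        # should print sharp-meadow-6535.herokuapp.com.
        heroku domains --app sharp-meadow-6535    # is rails.hosayif.com listed for the new app?

    If the renamed app never had rails.hosayif.com added via heroku domains:add, requests for that hostname will not reach it even though the herokuapp.com name matches.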

  • Webmaster tools, Duplicate Meta Descriptions, and Short Meta Descriptions [closed]

    - by Watsy91
    Possible duplicate: Do meta keywords have any impact on ranking algorithms? I am fairly new to the whole Webmaster Tools concept. I have been looking at the different options, such as crawl errors, HTML improvements, etc., and in particular at the Duplicate Meta Descriptions and Short Meta Descriptions reports; I was wondering if anyone could suggest how to go about improving these. It seems that all the duplicates come from the URL title and the short description; it would seem to me that most people would describe a page with the same keywords as its title. Here's an example of one:

        These are the ultimate hampers in taste, quality and value. Amongst this range of luxury hampers ar
        /food-hampers/food-hampers-over-100.html
        /thank-you-gifts/large-gifts-over-100.html

    To get to the point: do these things really matter? Would they have a real effect on my sites' rankings? My sites have been falling down the rankings since early this year, and I have started digging into Google Analytics and Webmaster Tools to try to identify problems. I have researched this on the Internet, and it seems some people bother with it and others don't! I know Stack Overflow has hundreds of people who have been through the above, and I would really appreciate any tips. Or, in the end, does it really matter? :D
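
    The usual fix for the duplicate report is giving each listed URL its own description rather than one shared snippet; a hypothetical sketch (wording illustrative only):

        <!-- /food-hampers/food-hampers-over-100.html -->
        <meta name="description" content="Luxury food hampers over £100, from gourmet snacks to fine wines.">

        <!-- /thank-you-gifts/large-gifts-over-100.html -->
        <meta name="description" content="Large thank-you gifts over £100: hampers, keepsakes and more.">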

  • How should I structure my URLs for both SEO and localization?

    - by artlung
    When I set up a site in multiple languages, how should I set up my URLs for search engines and usability? Let's say my site is www.example.com, and I'm translating into French and Spanish. What is best for usability and SEO?

    Directory option:

        http://www.example.com/sample.html
        http://www.example.com/fr/sample.html
        http://www.example.com/es/sample.html

    Subdomain option:

        http://www.example.com/sample.html
        http://fr.example.com/sample.html
        http://es.example.com/sample.html

    Filename option:

        http://www.example.com/sample.html
        http://www.example.com/sample.fr.html
        http://www.example.com/sample.es.html

    Accept-Language header: or should I simply parse the Accept-Language header and generate content server-side to suit it?

    Is there another way to do this? If the different language versions don't have different URLs, what do I do about the search engines?
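
    Whichever scheme is chosen, the language variants can be tied together with hreflang annotations so search engines serve the right version to the right audience; a minimal sketch for the subdomain option:

        <link rel="alternate" hreflang="en" href="http://www.example.com/sample.html" />
        <link rel="alternate" hreflang="fr" href="http://fr.example.com/sample.html" />
        <link rel="alternate" hreflang="es" href="http://es.example.com/sample.html" />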

  • Remap an XML feed to the address of a WordPress RSS feed

    - by cboettig
    I used to have a blog based on WordPress and moved to one based on Jekyll. I can create a new feed in Jekyll by building an atom page in XML with a bit of Liquid code. The trouble is, the new feed lives at http://carlboettiger.info/atom.xml, while the old feed from the WordPress site was at http://carlboettiger.info/feed, with no extension. How can I configure things so that followers who pointed their readers at the old WordPress feed address will start getting the new content? (Site's Jekyll source here.)
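
    If the site is served by Apache, a one-line 301 from the old WordPress path to the new feed would be a minimal fix (mod_alias assumed):

        Redirect 301 /feed http://carlboettiger.info/atom.xml

    On a static host without server config, an alternative sketch is a Jekyll page generated at /feed/index.html containing only a meta refresh pointing at /atom.xml.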

  • Why does Google skip the page title?

    - by Bob
    I have no idea why this is happening. An example: http://www.londonofficespace.com/ofdj17062004934429t.htm The title tag is: Unfurnished Office Space Wimbledon – Serviced Office on Lombard Road SW19. But the page is indexed as: Lombard Road – SW19 - London Office Space. If you look in the source code and search for the fragment 'Lombard Road – SW19', you find it next to an office image as alt text: alt='Lombard Road – SW19'. The only thing I can think of is that the spider somehow skips our title tag and grabs this bit instead, then appends the name of the site (but why?). Is there anything I can do about this, or is this just Google behaviour?

  • Debugging "Premature end of script headers" - WSGI/Django [migrated]

    - by Marcin
    I recently deployed an app to a shared host (WebFaction), and for no apparent reason my site will not load at all (it worked until today). It is a Django app, but django.log is not even created; the only clue is that one of the logs contains the error message "Premature end of script headers", identifying my WSGI file as the source. I've tried to add logging to my WSGI file, but I can't find any log created for it. Is there a recommended way to debug this error? I am on the point of tearing my hair out. My WSGI file:

        import os
        import sys
        import logging

        from django.core.handlers.wsgi import WSGIHandler

        # note: no handler is ever attached to this logger, so by default
        # these logger.debug() calls produce no output anywhere
        logger = logging.getLogger(__name__)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        os.environ['CELERY_LOADER'] = 'django'

        # activate the virtualenv (Python 2: execfile)
        virtenv = os.path.expanduser("~/webapps/django/oneclickcosvirt/")
        activate_this = virtenv + "bin/activate_this.py"
        execfile(activate_this, dict(__file__=activate_this))

        # if 'VIRTUAL_ENV' not in os.environ:
        #     os.environ['VIRTUAL_ENV'] = virtenv

        sys.path.append(os.path.dirname(virtenv + 'oneclickcos/'))

        logger.debug('About to run WSGIHandler')
        try:
            application = WSGIHandler()
        except (Exception,), e:  # old Python 2 'except' syntax
            logger.debug('Exception starting wsgihandler: %s' % e)
            raise e
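
    A minimal sketch to make those logger.debug() calls actually land somewhere, assuming the WSGI process can write to the chosen path (the path here is hypothetical):

        import logging
        logging.basicConfig(
            filename='/home/username/logs/user/wsgi_debug.log',  # hypothetical writable location
            level=logging.DEBUG,
        )

    Placed near the top of the WSGI file, this attaches a file handler to the root logger; if the script then dies while building the handler, the last debug line written narrows "Premature end of script headers" down to a crash inside the script rather than a server-side handler mismatch.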

  • Help with Apache mod_rewrite rules

    - by Brian Neal
    I want to change some legacy URLs like this: /modules.php?name=News&file=article&sid=600 to this: /news/story/600/ This is what I have tried:

        RewriteEngine on
        RewriteCond %{QUERY_STRING} ^name=News&file=([a-z_]+)&sid=([0-9]+)
        # note: without a trailing "?" on the substitution (or the QSD flag on
        # Apache 2.4+), the original query string is re-appended to the target
        RewriteRule ^modules\.php /news/story/%2/ [R=301,L]

    However, I still get 404s on the old URLs. I do have some other rewrite rules working, so I am pretty sure mod_rewrite is enabled and functioning. Any ideas? Thanks.

  • Friendly URLs: is there a max length for search engines?

    - by Olivier Pons
    The people at Stack Overflow have worked closely with the Google team to help make the Panda algorithm more efficient, so I guess they've learned a lot from Google, and they may have designed very clever friendly URLs to maximize page rank. I've seen very long URLs on Stack Overflow from time to time (I can't find one now), but after a certain number of characters there were only numbers, as if to say: "past this length, search engines will ignore the rest, so let's put only numbers there." I've done a huge amount of work on my framework to produce very friendly URLs, and my website can come up with URLs like:

        http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/

    It's very long, and I'm wondering whether the previous URL might get mixed up with, say, this one:

        http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all, so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked this in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The Internet says to return a 404 for the removed URLs so that Google knows to de-index them. So I checked what my CMS serves when visiting one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist (I removed those language translations), yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load, as long as everything before the question mark is a real URL. So, if Google has indexed all these URLs with parameters, how do I remove them? I cannot rely on a 404 being generated, because the page always loads; it is the parameter itself that needs to be de-indexed.
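
    Since robots.txt only stops crawling and does not remove URLs that are already indexed, one hedged approach is to serve a noindex header on any request carrying the parameter; a sketch assuming Apache 2.4 with mod_headers:

        <If "%{QUERY_STRING} =~ /(^|&)l=/">
            Header set X-Robots-Tag "noindex"
        </If>

    Google has to be able to recrawl the URLs to see the header, so the Disallow: /*?l= rule would need to be lifted for this to take effect.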

  • My website's directories download instead of opening in the browser

    - by numerical25
    I added some screencasts to show what I am having issues with:

        http://screencast.com/t/212t3ANINqk
        http://screencast.com/t/bR44U1wkvNZl
        http://screencast.com/t/iDS7APYYsa

    The browser downloads my subdirectories instead of opening them and displaying each one's index file. Here is the situation: I am trying to get my web service up using MacPorts, and I am just trying to configure all the files. I am using PHP, Apache, etc. The localhost root loads fine, but anything beyond that cannot be found.

    Edit: I've tried adding the following to httpd.conf, within the <IfModule mime_module> block, but no luck:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php3
        AddType application/x-httpd-php .php4
        AddType application/x-httpd-php .html
        AddType application/x-httpd-php-source .phps
