Search Results

Search found 20852 results on 835 pages for 'local seo'.


  • Getting rank for keywords that I don't want to appear on my website [duplicate]

    - by Rober
    This question already has an answer here: "Which keyword should I use: colors, colours, or a combination of both?"

    One of my products has two names. One of them is what I consider correct, and thus it is the one I want to appear on my website. The other name is incorrect in my view, so I would like to avoid it. But I know that many people will search for my product using the "bad" name. How could I get the "bad" name indexed for my site on search engines even if nobody can read it there? Of course, I want to do it "legally", so that no engine will ban my site for cloaking, black-hat SEO, etc.

    EDIT: Having the "bad" name in my backlinks is not an option. For example, I would perceive user reviews connecting my site to that word as a negative point. Maybe having my site appear as a search result for that word could be negative as well, but I think it is worth it.

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with _escaped_fragment_?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content, and we are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this?

    I've gone through the documentation and it's not very clear. The quote below is from Google's documentation and is what I found. I'm not sure if they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or if this is about redirecting the hashbang URL to static content. Thoughts?

    From Google's site: "Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience."
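    For what it's worth, a minimal sketch of what such a rule could look like in .htaccess, assuming mod_rewrite is enabled and the pre-rendered snapshots really do live under /snapshot/ (the path layout and the rule itself are illustrative, not taken from Google's documentation):

        RewriteEngine On
        # only rewrite requests that carry the _escaped_fragment_ parameter
        RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
        # send the crawler to the static snapshot with a temporary (302) redirect;
        # the trailing "?" drops the original query string
        RewriteRule ^(.*)$ /snapshot/$1/? [R=302,L]

    With a 302, Google should keep showing the #! URL in its results, as the quoted passage above describes.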

    Read the article

  • 404s on password protected content

    - by tjb1982
    I'm new to WordPress and SEO, generally, but we've been running into problems with our site that don't seem to make sense to me. The problem is that our editor likes to schedule posts and/or mark them private until she is ready to make them public, but somehow Google is crawling these posts and getting 404s (because they are password protected).

    How does Google know they exist in the first place? I checked the sitemap.xml file and don't see a record of the post. One of the offending posts was marked public, but is scheduled for a future date. Could that have something to do with it?

    I've tried to Google the answer, and I came up with a good amount of reassurance that this won't hurt the site, but I'm still wondering how it's happening in the first place. It's hard because I don't know exactly what the editor's workflow is. Is it possible she's posting publicly first and then revising it to be private only after it's too late? Does anyone know how Google finds WordPress URLs it shouldn't have access to?
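    A quick way to check the usual leak paths from a shell, assuming the blog lives at example.com and you know the slug of one affected post (both the domain and the slug below are placeholders):

        # does the scheduled/private post appear in the RSS feed?
        curl -s https://example.com/feed/ | grep -i "offending-post-slug"
        # is it listed in the sitemap after all?
        curl -s https://example.com/sitemap.xml | grep -i "offending-post-slug"
        # is any public page on the front page linking to it?
        curl -s https://example.com/ | grep -i "offending-post-slug"

    If the slug shows up in the feed, that would fit the "published first, made private later" theory, since crawlers and feed readers may have picked the URL up during the window when the post was public.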

    Read the article

  • What is the average page size for single page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single-page application with a lot of CSS and JavaScript. For now the page weighs about 1.3 MB and is made up of 5 sections. Here are the rounded stats:

    - Document: 10 kB
    - Style: 60 kB
    - Images: 450 kB (already compressed; includes a big gallery of thumbnails)
    - JavaScript: 700 kB - 600 kB of "framework" (jQuery, jQuery UI, Bootstrap, Modernizr, Waypoints, ...) and 100 kB of custom JS
    - Fonts: 125 kB

    And the site is not finished yet (it will include the Google Maps API, and some others...). My questions are:

    - Do you have any statistics about the average weight of an SPA?
    - As this is the whole website, do you think it's acceptable?
    - Is lazy loading (for images) a solution? What would the impact be for SEO?
    - Is the "200 kB rule" of Google still relevant?
    - Do you know of good tools to detect which JavaScript code is not used during the execution of a page, and therefore how much of those 700 kB of framework JS could be trimmed?
    - Can a caching strategy be an answer?

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question, and the most relevant ones are at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item it is selling; go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand.

    Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we have a link back to the seller's website if they want one. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features of our site to boost our ranking, or start jumping through hoops like trying to hide text using JavaScript, then something is wrong.

    Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages for the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item and the only difference in the content of the page might be a stock number. The majority of the pages, though, will have unique content.

    So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own search engine rankings.

    Read the article

  • Do you know of any performance tests of the different ways to get thread-local storage in C++?

    - by Vicente Botet Escriba
    I'm writing a library that makes extensive use of a thread-local variable. Can you point me to some benchmarks that test the performance of the different ways to get thread-local variables in C++?

    - C++0x thread_local variables
    - compiler extensions (GCC __thread, ...)
    - boost::thread_specific_ptr
    - pthreads
    - Windows
    - ...

    Does C++0x thread_local perform much better on the compilers that provide it?

    Read the article

  • The right way of using index.html

    - by Jeyekomon
    I have quite a lot of issues I'd like to hear your opinion on, so I hope I'll manage to explain them well enough. I should also note that I'm a beginner equipped only with knowledge of HTML and CSS, so although I'm almost sure there is a simple solution using PHP, that won't help me.

    Let's say that I have my personal blog at the address example.com/blog.html, and there are links to several sub-blogs: example.com/blog/math.html, example.com/blog/coding.html, etc. So my root folder contains blog.html and a blog folder, and the blog folder itself contains the files math.html and coding.html.

    First of all, I learned (from Google Webmaster Tools) that for SEO and aesthetic purposes it's good to unify example.com and example.com/index.html by adding a rel="canonical" attribute to the source of index.html. Using a couple of other tricks (like linking to ../ and ./) I got rid of the ugly index.html appearing in my web addresses. And now I wonder if this trick can be used not only for the root folder but for any folder. I mean, I would move my blog.html into the blog folder, rename it index.html, and add rel="canonical" to unify example.com/blog/index.html with example.com/blog/. This trick would change the address of my blog from example.com/blog.html to example.com/blog/.

    Not finished! I'm also experiencing problems with the Google robot indexing my folders. When I type site:example.com/ into a Google search, the link to my folder example.com/blog/ with raw files, icons, etc. appears among the other results. I guess there are other ways to fix this, but in my opinion the change mentioned above would do the trick too: the index.html in the blog folder would prevent users from viewing the actual raw content of that folder, only the right link example.com/blog/ would appear in the Google search, and (I hope) rel="canonical" would keep the second, unwanted link example.com/blog/index.html out of the search results. (One server-side safeguard for the raw-folder-listing part is sketched below.)

    So my questions are:

    - Is it good practice to have an index.html file in every subfolder, or is it intended to be only in the root folder?
    - Are there any disadvantages or problems that may occur when using the second, "index in every folder" method?
    - Which of the two ways of structuring the website described above would you prefer?
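    On the raw-folder-listing point above, a minimal server-side sketch, assuming the site runs on Apache and .htaccess overrides are allowed (that is an assumption; the question doesn't say which web server is used). These two directives stop the server from generating directory listings at all and make folder URLs such as example.com/blog/ serve the index.html inside them:

        # never show an auto-generated listing of a folder's raw contents
        Options -Indexes
        # serve index.html whenever a folder URL is requested
        DirectoryIndex index.html

    With that in place, an index.html in every subfolder becomes a matter of taste rather than a protection mechanism.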

    Read the article

  • DHCPv6: Provide IPv6 information in your local network

    Even though IPv6 might not be that important within your local network, it might be good to get yourself into shape and be able to provide some details of your infrastructure automatically to your network clients. This is the second article in a series on IPv6 configuration:

    - Configure IPv6 on your Linux system
    - DHCPv6: Provide IPv6 information in your local network
    - Enabling DNS for IPv6 infrastructure
    - Accessing your web server via IPv6

    Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system.

    IPv6 addresses for everyone (in your network)

    Okay, after setting up the configuration of your local system, it might be interesting to enable all the machines in your network to use IPv6. There are two options to solve this kind of requirement... Either you're busy like a bee and you go around configuring each and every system manually, or you're more the lazy and effective type of network administrator and you prefer to work with the Dynamic Host Configuration Protocol (DHCP). Obviously, I'm of the second type.

    Enabling dynamic IPv6 address assignments can be done with a new or an existing instance of a DHCP daemon. In the case of an Ubuntu-based installation this might be isc-dhcp-server. The isc-dhcp-server package handles address pooling for both IPv4 and IPv6; you just have to run two independent daemons, one for each protocol version.

    First, check whether isc-dhcp-server is already installed and maybe running on your machine, like so:

        $ service isc-dhcp-server6 status

    In case the service is unknown, you have to install it like so:

        $ sudo apt-get install isc-dhcp-server

    Please bear in mind that there is no designated installation package for IPv6.

    Okay, next you have to create a separate configuration file for IPv6 address pooling and network parameters called /etc/dhcp/dhcpd6.conf. Unlike its IPv4 counterpart, this file is not automatically provided by the package. Again, use your favourite editor and put in the following lines:

        $ sudo nano /etc/dhcp/dhcpd6.conf

        authoritative;
        default-lease-time 14400;
        max-lease-time 86400;
        log-facility local7;
        subnet6 2001:db8:bad:a55::/64 {
            option dhcp6.name-servers 2001:4860:4860::8888, 2001:4860:4860::8844;
            option dhcp6.domain-search "ios.mu";
            range6 2001:db8:bad:a55::100 2001:db8:bad:a55::199;
            range6 2001:db8:bad:a55::/64 temporary;
        }

    Next, save the file and start the daemon as a foreground process to see whether it is going to listen to requests or not, like so:

        $ sudo /usr/sbin/dhcpd -6 -d -cf /etc/dhcp/dhcpd6.conf eth0

    The parameters are quickly explained: with -6 we want to run as a DHCPv6 server, with -d we send log messages to the standard error descriptor (so you should monitor your /var/log/syslog file, too), and with -cf we explicitly point to our newly created configuration file. You might also use the command switch -t to test the configuration file prior to running the server.

    In my case, I ended up with a couple of complaints from the server, especially that the necessary lease file didn't exist. So, ensure that the lease file for your IPv6 address assignments is present:

        $ sudo touch /var/lib/dhcp/dhcpd6.leases
        $ sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases

    Now you should be good to go. Stop your foreground process and try to run the DHCPv6 server as a service on your system:

        $ sudo service isc-dhcp-server6 start
        isc-dhcp-server6 start/running, process 15883

    Check your log file /var/log/syslog for any kind of problems.

    Refer to the man-pages of isc-dhcp-server, and you might check out Chapter 22.6 of Peter Bieringer's IPv6 Howto. The instructions regarding DHCPv6 on the Ubuntu Wiki are not as complete as expected and might not be as helpful as this article or Peter's HOWTO. But see for yourself.

    Does the client get an IPv6 address?

    Running a DHCPv6 server on your local network surely comes in handy, but it has to work properly. The following paragraphs describe briefly how to check the IPv6 configuration of your clients.

    Linux - ifconfig or ip command

    First, you have to enable IPv6 on your Linux client by specifying the necessary directives in the /etc/network/interfaces file, like so:

        $ sudo nano /etc/network/interfaces

        iface eth1 inet6 dhcp

    Note: your network device might be eth0 - please don't just copy my configuration lines.

    Then either restart your network subsystem, or enable the device manually using the dhclient command with the IPv6 switch, like so:

        $ sudo dhclient -6

    You would use either the ifconfig command or (if installed) the ip command to check the configuration of your network device, like so:

        $ sudo ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr 00:1d:09:5d:8d:98
                  inet addr:192.168.160.147  Bcast:192.168.160.255  Mask:255.255.255.0
                  inet6 addr: 2001:db8:bad:a55::193/64 Scope:Global
                  inet6 addr: fe80::21d:9ff:fe5d:8d98/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    Looks good - the client has an IPv6 assignment. Now, let's see whether DNS information has been provided, too.

        $ less /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 2001:4860:4860::8888
        nameserver 2001:4860:4860::8844
        nameserver 192.168.1.2
        nameserver 127.0.1.1
        search ios.mu

    Nicely done.

    Windows - netsh

    Per the description on TechNet, netsh is defined as follows: "Netsh is a command-line scripting utility that allows you to, either locally or remotely, display or modify the network configuration of a computer that is currently running. Netsh also provides a scripting feature that allows you to run a group of commands in batch mode against a specified computer. Netsh can also save a configuration script in a text file for archival purposes or to help you configure other servers."

    And even though TechNet states that it applies to Windows Server (only), it is also available on Windows client operating systems like Vista, Windows 7 and Windows 8.

    In order to get or even set information related to the IPv6 protocol, we have to switch to the netsh interface context prior to our queries. Open a command prompt in Windows and run the following statements:

        C:\Users\joki> netsh
        netsh> interface ipv6
        netsh interface ipv6> show interfaces

    Select the device index from the Idx column to get more details about the IPv6 address and DNS server information (here I'm going to use my WiFi device with device index 11), like so:

        netsh interface ipv6> show address 11

    Okay, address information has been provided. Now, let's check the details about DNS and resolving host names:

        netsh interface ipv6> show dnsservers 11

    Okay, that looks good already. Our Windows client has a valid IPv6 address lease with lifetime information and details about the configured DNS servers.

    Talking about DNS servers... your clients should be able to connect to your network servers via IPv6 using hostnames instead of IPv6 addresses. Please read on about how to enable a local named with IPv6.

    Read the article

  • Local SEO Tools For SME's

    For many large corporations the focus of their SEO strategy will be on a national scale. But often for small and medium-sized enterprises a more local view should be taken to maximise visibility with their target audience, and ensure they are reaching their potential client base on a regular basis. Alongside the traditional search engine optimisation tactics there are a number of tools that can be incorporated within local strategies.

    Read the article

  • Cannot save tar.gz file to usr/local

    - by ATMathew
    I'm using the following instructions to install and configure Hadoop on Ubuntu 10.10: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#installation

    I tried to save the compressed tar.gz file to /usr/local/, but it just won't save. I've also tried saving the tar.gz in my home folder and on the desktop and then copying the files to the desired folder, but I get an error telling me I don't have permission. How do I save and extract a tar.gz archive to /usr/local/hadoop?
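    A minimal sketch of the usual fix: /usr/local is owned by root, so a normal user cannot save or extract there directly, but sudo can. The archive name below is a placeholder for whichever Hadoop release was downloaded, and chown-ing to $USER is just one option (the linked tutorial uses a dedicated Hadoop user instead):

        # save the archive anywhere you have write access, e.g. your home folder,
        # then unpack it into /usr/local with elevated rights
        cd /usr/local
        sudo tar xzf ~/hadoop-x.y.z.tar.gz
        sudo mv hadoop-x.y.z hadoop
        # hand the tree to your own user so later edits don't need sudo
        sudo chown -R $USER:$USER /usr/local/hadoop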

    Read the article

  • Dominating Search Results With Local SEO

    Local Businesses are turning to local SEO services to obtain high placement with the major search engines. With tens of millions of websites currently online, dominant placement with the search engines is vital for online success. To obtain high placement within search engine results, you will need to deploy proven search engine optimization methods.

    Read the article

  • Using ISO Image with a Local Repository for updating Exadata Compute nodes

    - by Rene Kundersma
    For systems that cannot connect directly to Oracle ULN to build a local repository, an ISO image file is made available by Oracle. This ISO image can be mounted and used as a local repository. The ISO image contains a file system that holds only the latest (x86_64) ULN channel and cannot be used to update the database servers to any release other than 11.2.3.1.1. The ISO and instructions can be found here.

    Rene Kundersma
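    As a generic illustration (the mount point, image name, and repo id below are made up and are not taken from the Exadata instructions), loop-mounting an ISO and exposing it to yum as a local repository usually looks like this:

        # mount the ISO image read-only on a local directory
        sudo mkdir -p /mnt/iso
        sudo mount -o loop /tmp/channel_image.iso /mnt/iso

    plus a small repo definition such as /etc/yum.repos.d/local-iso.repo:

        [local-iso]
        name=Local ISO channel
        baseurl=file:///mnt/iso
        gpgcheck=0
        enabled=1

    The actual package names, channel layout, and update procedure for release 11.2.3.1.1 are defined by the Exadata instructions referenced above.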

    Read the article

  • Star rating not showing in rich snippets

    - by Danny R
    We've recently been doing a lot of work on our site's SEO (www.betterthanreviews.com). We recently did a push to update the rich snippets breadcrumb, meta description, and star rating. After giving Google some time to index the site, it has updated the breadcrumbs and meta descriptions for our review pages, but the stars are still not showing. (The question originally included a screenshot of how the result currently appears in a Google search; the actual page is http://www.betterthanreviews.com/home-security/livewatch. A second screenshot showed what the rich snippet is supposed to look like, and how it appears in Google's testing tool.)

    More context: as seen in our HTML, we are using schema.org markup. We initially had the page labeled as schema.org/Corporation, but we now use schema.org/HomeAndConstructionBusiness because Google will not show star ratings for the Corporation type. However, in our Webmaster Tools the Structured Data is still showing the Corporation type, which could be a potential issue.

    Here is a look at some of the markup we used (it can be inspected more closely on the page itself):

        <div class="aggregate-rating" itemprop="aggregateRating" itemscope=""
             itemtype="http://schema.org/AggregateRating">
          <div class="review row_fluid" itemprop="review" itemscope=""
               itemtype="http://schema.org/Review">
            <div class="row_fluid rating" itemprop="reviewRating" itemscope=""
                 itemtype="http://schema.org/Rating">
              <meta content="4.5" itemprop="ratingValue" title="4.5 out of 5 stars"
                    class="star-rating-readonly">
              <meta content="2013-12-05" itemprop="datePublished">
              <p class="review-headline" itemprop="headline">Way better than my previous system</p>
              <div>
                <p class="reviewer" itemprop="author">Scott H. </p>
                <span class="bullet">•</span>
                <p class="created_at">2 months ago</p>
                <p class="content" itemprop="description">I love it! The experience I have had so far is extremely positive. I had another alarm system before and I didn't like it but this one is really nice. I am telling everybody about it.</p>
              </div>
            </div>

    Any suggestions for how to fix this?

    Read the article

  • How Local SEO Can Improve Your Business

    Local search engine optimization is a good first step to conquering the search engines and presenting your business. This article summarizes some of the positive aspects of local SEO, and why businesses should not be scared to embrace the internet as a new marketing medium.

    Read the article

  • juju-local and cgroup-lite

    - by bitgandtter
    I'm trying to configure juju-local on my VirtualBox machine to test some environments, learn, play, and then make some decisions about deploying to my cloud. I followed the docs on the Juju page (https://juju.ubuntu.com/docs/getting-started.html, https://juju.ubuntu.com/docs/config-local.html) but I always get this error on any service: status: "hook failed: install". I did some research, used juju debug-hooks, and found that the install error was related to cgroup-lite. Can anybody help me? Thanks
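    For what it's worth, a hedged first thing to try, assuming the hook fails simply because the cgroup-lite package is missing on the LXC host (that is a guess; the question doesn't show the hook output). The unit name below is a placeholder for whatever juju status reports as failed:

        # install the cgroup support the local (LXC) provider relies on
        sudo apt-get update
        sudo apt-get install cgroup-lite
        # then ask juju to re-run the failed install hook
        juju resolved --retry mysql/0

    If the hook still fails, the output of juju debug-hooks during the install step is the place to look next.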

    Read the article

  • Local SEO, SMB and Google Places - Tips You Won't Find Elsewhere

    The internet is moving fast, and if you are a small or medium-sized business (SMB) that needs to be found online through local search engine optimization (SEO), then you had better move faster. The changes just in the Google Maps Local Business Center include a name change to Google Places and much, much more. There is already much reported about how to take advantage of them. Here are three tips you will not likely find anywhere else.

    Read the article

  • Keeping local folders synchronized

    - by Earthling
    After repeatedly losing data on encrypted drives due to some trivial combination of software and hardware failure, I would like to know if there is a simple tool that keeps local folders synchronized. Like a local "cloud" service that runs on one computer and synchronizes any changes in one folder to the other folder as soon as both folders are available. That way I can keep a copy of the most important files on a different hard-drive.
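    A minimal sketch of two stock tools that cover this, assuming the folders are ~/important and /media/backup/important (both paths are placeholders). rsync gives a one-way mirror; unison keeps two local folders in sync in both directions:

        # one-way mirror: copy new and changed files, delete files removed from the source
        rsync -a --delete ~/important/ /media/backup/important/

        # two-way sync (install with: sudo apt-get install unison)
        unison ~/important /media/backup/important

    Either command can be run by hand when the second drive shows up, or put in a cron job; it is not a real-time "cloud" service, but it keeps two local copies of the most important files.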

    Read the article

  • Redirect subdomain to local pc

    - by user1188570
    I have a home web server which is constantly running. Is it possible to create a subdomain which would redirect traffic to another local PC? For example, I have one server and one notebook (with a web server installed for development). At the moment I can access the notebook only from the local network by its IP address. The server is also hosting the domain example.com. I would like to visit laptop.example.com and end up on my laptop.
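    One common way to do this, sketched under the assumption that the always-on server runs Apache with mod_proxy/mod_proxy_http enabled and that the notebook answers on 192.168.1.50 (the IP address and module setup are assumptions, not details from the question): create a DNS record for laptop.example.com that points at the server's public IP, then let the server reverse-proxy that hostname to the notebook:

        <VirtualHost *:80>
            ServerName laptop.example.com
            # forward every request for this hostname to the notebook's web server
            ProxyPreserveHost On
            ProxyPass        / http://192.168.1.50/
            ProxyPassReverse / http://192.168.1.50/
        </VirtualHost>

    An alternative, if the router allows it, is simply forwarding a different external port straight to the notebook, but the reverse proxy keeps everything on port 80 under one domain.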

    Read the article

  • Keeping remote files synced with local files?

    - by Kelp
    When developing web applications, how does one keep local files and remote files synced together? There is the obvious way: whenever you edit a file on your local machine, just upload that file to the remote machine. Is there a more efficient way? I ask because I have been using Subversion, and it is so easy to keep files synced on a remote server - all I have to do is commit and it will find the files which need to be replaced.
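    A minimal sketch of the usual non-VCS alternative, assuming SSH access to the remote host and that the project lives in ~/project locally and in /var/www/project on the server (all of these names are placeholders): rsync compares the two sides and only transfers the files that changed, which is exactly the "find the files which need to be replaced" behaviour described above.

        # push local changes to the server; --delete also removes files deleted locally
        rsync -avz --delete ~/project/ user@example.com:/var/www/project/

    Run it after editing or wrap it in a small script; keeping the trailing slashes as shown makes rsync sync the contents of the folders rather than nesting one inside the other.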

    Read the article
