Search Results

Search found 37688 results on 1508 pages for 'site search'.

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration and the sh404sef module. The global config is set to "SEF URLs: YES" and "mod_rewrite enabled: YES", and sh404sef is set to "URL optimization: NO". My problem is that even with "SEF URLs" set in the global config, my URLs still don't seem very user-friendly, so I turn on "URL optimization" in the sh404sef module and get better, more descriptive URLs. However, the problem I inherit from doing this is that my dynamically populated ChronoForms get messed up (only the ChronoForms; other forms are fine): these forms now show up on the homepage instead of their own reserved page. Here's an example:

    Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94
    New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94

    Any help would be GREATLY appreciated! I can even turn sh404sef on and off if people are interested in seeing the issue LIVE. Thanks!! Tao Bellamine

  • Does text size and placement on a page have an effect on SEO?

    - by sam
    I was wondering, seeing as Google and others keep trying to get more and more 'human' in terms of rating what's good and what's spam: is it known whether they take the size of a heading into account? A heading whose font size is 40px is going to speak a lot more to the user than one whose font size is 14px. Similarly, does placement factor in? A 300-word article at the bottom of a landing page (not in the footer, but below the useful content) would just be there for SEO purposes. I know they look at whether you're doing things like text-indent:-9999px and white text on a white background, but what about these more borderline practices that have both legitimate uses and the potential to be spammy?

  • Will Google crawl a session-based website?

    - by DonShwep
    I have a website split into 3 categories, but using PHP it's an all-in-one kind of style. When a user chooses a category on the home page, a session is set; this is then used to set the style and contents of the website. Would Googlebot and other bots still be able to scan my website? If a page is accessed and no session is set, the user is sent back to the home page. I have created special links that set a session but go straight to the contact page; even this page doesn't seem to be showing up. Any ideas whether a sitemap with specially crafted links (to set the session) will help Google? A minimal sketch of the pattern is below.
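
    A minimal sketch of how such a crafted link could establish the session while staying crawlable. The cat parameter name and the redirect target are assumptions for illustration, not the asker's actual code:

      <?php
      // Hypothetical front controller: a crawlable ?cat= link establishes
      // the session, so a bot following the crafted link is not bounced
      // straight back to the home page.
      session_start();

      if (isset($_GET['cat'])) {
          $_SESSION['cat'] = $_GET['cat']; // crafted link sets the session
      }

      if (!isset($_SESSION['cat'])) {
          header('Location: /index.php'); // original behaviour: back home
          exit;
      }

    Since crawlers generally don't carry cookies between requests, putting the category in the URL like this makes each page reachable in a single stateless request.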

  • Site failing randomly - could it be Cloudflare or something weird in the JS?

    - by James
    I've been working on a simple site that uses JavaScript to fade through some fullscreen background images, as well as some other simple animations. I've tested the site on Chrome, Safari, FF and Opera on OSX, IE8+ on Win7, and Chrome & FF on Ubuntu, and everything looks as I'd expect it to. However, I've had reports of the site failing to load (it stops at the stage where the background fades up) on Safari and Chrome on OSX and Win. I can't replicate this on any setup, so I'm finding it impossible to troubleshoot. Google's instant preview shows the site fine, as do most of the options at browsershots.org, so I'm really scratching my head. I'm running the site's traffic through Cloudflare, and I'm wondering whether anyone can see (or knows from other sites) why Cloudflare might be mangling the JS or causing a problem somehow (I don't get any errors in the JS error console). Of course, if you can replicate the problem on your machine and can suggest an area to look at, that would be amazing, but I'm hoping that, like me, you don't see any problem with the site! Here's the site: http://www.bighornrevelstoke.com Thanks, James

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. But what do you do if you have small pieces of data and you need to reduce them all the time? A simple example: choosing a service for a request. Imagine we have 10 services. Each provides the services host with sets of request headers and POST/GET arguments, and each service declares that it has 30 unique keys, 10 per set:

    service A: name, id, ...

    Now imagine we have a distributed services host: 200 machines with 10 services on each, and each service has 30 unique keys in its sets. To find which service to map an incoming request to, we make our services post unique values that map to those sets, and we can have 10 000 or more such value sets on each machine per service:

    service A, machine 1:   name = Sam, id = 13245, ...
    service A, machine 1:   name = Ben, id = 33232, ...
    ...
    service A, machine 100: name = Ron, id = 777888, ...

    So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000 values, and 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find the service for each request (or at least the machine it is running on); for a given service, the rules are the same on all machines across the cluster. We can first select which service the request belongs to via a rules filter (10 * 30), which still leaves 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem. My idea is to map each set of 30 * 10 000 values onto something like a perceptron that outputs 1 if all 30 words (or rather, hashes of the words) from the request are correct, and 0 if fewer match. I'd then send each such perceptron, for each service on each machine, to the gateway, so I would have a perceptron <-> machine map for each service. Can anyone tell me if my perceptron idea is at least "sane"? Or do people normally do this some other way? Or are there better ANNs for such purposes?
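
    For what it's worth, the 0/1 output described above reduces to a set-membership count: with weight 1 on every declared hash and a firing threshold of 30, the "perceptron" is a hash-set lookup. A sketch of that baseline (exact matching on value hashes is an assumption):

      <?php
      // One "unit" per (service, machine) pair. Equivalent to a perceptron
      // with weight 1 on each declared hash and a threshold of 30.
      function unitFires(array $requestHashes, array $declaredHashes,
                         int $threshold = 30): int
      {
          $declared = array_flip($declaredHashes); // hash => index, O(1) lookup
          $hits = 0;
          foreach ($requestHashes as $h) {
              if (isset($declared[$h])) {
                  $hits++;
              }
          }
          return $hits >= $threshold ? 1 : 0; // noise items simply never match
      }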

  • Is there a weight for an HTML link?

    - by Questions
    I would like to know if there is any weight associated with an HTML link (as a backlink) when Google does crawling/indexing. Will 1, 2 and 3 ever be considered backlinks by Google?

    1. <a href="xyz.com">1</a>        // one-character link
    2. <a href="xyz.com"> </a>        // blank-space link
    3. <a href="xyz.com">!</a>        // special-character link
    4. <a href="xyz.com">keyword</a>  // meaningful-word link

    I hope my question is understandable, and I guess this is the right forum; I don't know how to put it in other words. Thanks in advance.

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of largely varying sizes. The layout of the page is such that the images are all square tiles, so just resizing will produce distorted images. What I've been doing previously is, when users upload images, I resize and crop them appropriately, display the new image as the thumbnail, and load the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and stick the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I could do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case users are continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy.
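
    A minimal sketch of the CSS-crop idea mentioned above, keeping the full image URL in the markup (the 150px tile size and the file path are illustrative assumptions, and whether Google picks up the full image this way is not guaranteed):

      <?php
      // Render a square tile that visually crops the image; the original,
      // crawlable file is what actually appears in the <img> src.
      function squareTile(string $fullImageUrl): string {
          return '<div style="width:150px;height:150px;overflow:hidden">'
               . '<img src="' . htmlspecialchars($fullImageUrl) . '"'
               . ' style="min-width:100%;min-height:100%" alt="">'
               . '</div>';
      }

      echo squareTile('/uploads/full/photo123.jpg'); // hypothetical path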

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be. I am mostly interested in which hash algorithm would be best for this: for example, I was thinking of having it hash each file's contents and then group the hashes that are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
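
    A minimal sketch of the grouping approach described above. SHA-256 is just one reasonable choice of algorithm; hash_file() streams the file, so large files don't have to fit in memory:

      <?php
      // Group file paths by a hash of their contents; any group with more
      // than one member is a set of likely duplicates.
      function findDuplicates(array $paths): array {
          $groups = [];
          foreach ($paths as $path) {
              $groups[hash_file('sha256', $path)][] = $path;
          }
          return array_filter($groups, fn(array $files) => count($files) > 1);
      }

      $files = array_filter(glob('/some/dir/*'), 'is_file'); // hypothetical dir
      print_r(findDuplicates($files));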

  • Check when a page was first seen by Google

    - by sam
    Is there a way to see when a page was first published by checking when it was first cached by Google? (Obviously it's not 100% foolproof, because there is a couple of days' delay in some cases, but it would give you a good idea.) The only other way I could think of to check a page's publication date is if the page/post had a publicly visible timestamp on it, but in the case I'm looking at, it doesn't have one.

  • Looking for a customizable "Did you know..." dialog application

    - by Jorge Suárez de Lis
    I want to deploy a "Did you know..." or "Tip of the day" application at the office. It should: show a dialog at login time with a random tip; obviously, provide some way to store my own tips; and be easy for the user to disable and re-enable. I'm using Puppet, so deployment is covered. The tips don't even need to be gathered from a server, since I can push out the newest tips file/database at no cost. Sure, I could hack together a quick solution using zenity and bash, but I'd like to know if there's any application out there specifically targeted at this. I don't like the zenity approach very much because it's very limited in the content that can be displayed: no text alongside screenshots, for example. Zenity is aimed at displaying simple dialogs.

  • Multiple Businesses at The Same Physical Address - SEO / Google Places

    - by Howdy_McGee
    I was wondering whether there would be any negative effects of having multiple businesses show the exact same physical address on their websites. Currently we have five businesses at the exact same address, and it shows on their websites, so when people Google one of the five businesses' addresses, they're going to get multiple results from multiple websites, most of which will not be what they're looking for. What is a way around this, and what can I do about it? Would adding "Suite Numbers" be a solution? A thought occurred to me that it might be a good idea to create a landing page for users who are looking up a business by its address via Google. The page will bring up multiple businesses, since we have a few at the same address, but if we have a landing page at the top which then leads to the individual businesses, that might solve the multiple-address SEO problem. I'm going to keep researching it, though. I also know that for Google Places (and possibly Bing Local and Yahoo Local) this could become a problem. I've submitted an inquiry with them, but I wanted to know if anybody had a ready-made solution for this, so that Google doesn't bunch all these companies together into one. Thanks!

  • How do I get mlocate to only index certain directories?

    - by Andrew Ferrier
    I'd like to use mlocate on my Ubuntu server, but only to index certain directories (e.g. /home and /data, but not everything under /). However, mlocate's standard configuration works the opposite way; you specify the paths you want to remove (with PRUNE_PATHS). Is there any easy way to achieve this, or any similar utility that will do what I want? (note: it should maintain an index like mlocate, so find is not acceptable, for example) Thanks.

  • Creating a sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. The forum is brand new; about a week ago I filled it with a few hundred thousand threads. I then looked at the Google Webmasters page to see any changes in indexing, but the index only went up from 300 to about 1200, so it did not index my added threads (although it added something). This is what my sitemap.xml, which I uploaded on their website, contains (of course there is a lot more of the code; this is just a snippet for a single category, and in my real sitemap file all the categories are listed like this):

    <url>
      <loc>http://mysite.com/Forums/Physics</loc>
      <changefreq>hourly</changefreq>
    </url>

    Now, I would expect Googlebot to go to http://mysite.com/Forums/Physics, move through all the subpages with thread links, then get inside each thread and index its content. How can I do this? Also, if this is unclear, I will add a real link to my website.
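
    A minimal sketch of one alternative: emit a sitemap entry for every paginated category page (or even every thread) instead of only the category root, so the crawler doesn't have to discover the pagination on its own. The category list and the ?page= URL scheme are assumptions for illustration, not the forum's actual URLs:

      <?php
      // Write one <url> entry per category page.
      $categories = ['Physics' => 120, 'Chemistry' => 80]; // name => page count

      echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
      echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
      foreach ($categories as $name => $pageCount) {
          for ($page = 1; $page <= $pageCount; $page++) {
              echo "  <url>\n";
              echo "    <loc>http://mysite.com/Forums/$name?page=$page</loc>\n";
              echo "    <changefreq>hourly</changefreq>\n";
              echo "  </url>\n";
          }
      }
      echo "</urlset>\n";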

  • A couple of links to our products and 10 pages of crack/keygen/torrent/etc.

    - by devdept
    If you try searching for our company and product name, you'll get two useful links and 10 pages of hacker sites where you can eventually download cracked versions of our products. How can we clean out the hacker links and leave only useful links to our product pages? We already checked the Google URL Removal Tool, but within the 'Removal Type' options there is nothing meaningful to specify in this case. Should we proceed with it all the same? Thanks.

  • Aliasing resellers' domains to the primary domain

    - by Ashkan Mobayen Khiabani
    I have designed a website that accepts resellers; the concept is to have a local reseller for each province (or should we say, branches). I have designed this website in a way that anybody who has a domain can point it to our website (A record or CNAME). Most of the website content is the same for everyone; the only difference is that reseller websites don't have some items on the main menu, and may have small descriptions of their own branch on some pages. I read that Google may ban websites with duplicate content (or which are significantly similar). I want to know: will this be a problem for me? If yes, what else can I do? We had considered asking our resellers to use an iframe that loads our website, but we wanted each reseller to be able to have its own SEO and try harder; still, what I read about this duplicate-content thing worries me.

  • How do I catalog files on several external hard drives that I want to store off-line? OSX

    - by raudi
    My partner, an artist, has more than 10 external hard disks, both USB and FireWire, and every 2-3 months a new one has to be added (she works with videos and pictures). Currently it's 10TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk or disks hold it (based mostly on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy or Disclib (all of these are unfortunately Windows-only). Is there any software for OSX that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID or name of the disk that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos (but even an equivalent of "tree /F /A" would be better than nothing).
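
    In the absence of a ready-made app, a rough DIY sketch of the "tree /F /A" fallback: walk one mounted drive and append every file path to a plain-text catalog, one catalog per disk, searchable after the drive is unplugged. The /Volumes path and disk label are assumptions; PHP is used only because OSX ships with a PHP CLI:

      <?php
      $disk = 'ArtDrive01'; // hypothetical disk label
      $out  = fopen(__DIR__ . "/catalog-$disk.txt", 'w');

      $iterator = new RecursiveIteratorIterator(
          new RecursiveDirectoryIterator("/Volumes/$disk",
                                         FilesystemIterator::SKIP_DOTS)
      );
      foreach ($iterator as $file) {
          // One line per file: disk name, tab, full path.
          fwrite($out, $disk . "\t" . $file->getPathname() . "\n");
      }
      fclose($out);

    Searching later with, say, grep -i teapot catalog-*.txt shows both the path and the disk it lives on, though without the thumbnails a GUI cataloger would offer.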

  • Web development tips for building a website, and the advantages of using HTML5 on the site

    - by Siddarth
    I am trying to make a website for my college. The program starts on Jan 13 and we get 15 days to develop a running site; the best site will become the college site, and I am participating. Until now I used to take part in C and C++ contests, and even won a few, but I've been really into web dev for the last 2 months. I knew HTML long ago and recently brushed up on it, learnt JavaScript from "JavaScript & jQuery: The Missing Manual" (sorry for not adding the link), and recently bought "PHP and MySQL Web Development"; I am going along fine with it, but there are still a lot of pages to cover in that book. After this, what do I need to know? Ajax is one technique to concentrate on; what else do I need to get this project up and running? Can someone let me know the tricks of this trade and what it takes to build a site like this? Right now I am good with JavaScript, HTML and CSS, and that's it; I am also studying HTML5 and CSS3, which are pretty fast and neat. The site is a college website which includes student profiles, where students have to register their info with their college ID number, and that's pretty much it. Think of it as a college site plus a social networking site for students, where they can upload their pics, videos, PDF books, etc.

  • Does searching documentation and samples look bad?

    - by Mick Aranha
    I am starting a new job in a company with many developers and media people. The layout of the place is open, with computers around a skinny oval. I have worked in small teams before, programming embedded C, and this job is for Objective-C, where I'm still at an intermediate stage; I know what I don't know (haha), which means I have to Google it and then implement it. So the question is: how bad does it look when the guy next to you does a lot of searching for code? I mean, at the end of the day I will get the job done, but I want to look professional too!

  • Pop-up HTML as a JavaScript string instead of a hidden div, for SEO [closed]

    - by user1324762
    Possible Duplicate: How bad is it to use display: none in CSS? I have heard that using the display:none or visibility:hidden CSS properties is not a very good idea for SEO purposes. I have about 4 different pop-up windows to display, and each one has about 20 words inside it. I can create hidden divs. Another option is to store the div HTML elements in a JavaScript string, so that the pop-up HTML elements are generated from that string. This would still be faster than using Ajax, since the data is static. Is this method absolutely safe for SEO? P.S.: I was just asking a similar question (http://stackoverflow.com/questions/12389075/storing-data-in-javascript-array-for-further-use), but this one is different: it is about static data and about SEO.

  • Is there a way to force Windows to recognize a network folder as a local drive, for the purposes of indexing?

    - by NoCatharsis
    I just started using the file-search program Everything at work to search through documentation on our shared drives, after disappointments with Google Desktop and Windows Search. I love the speed of Everything, but I wish it were able to index those shared folders. My makeshift solution was to somehow force Windows to recognize the necessary shared folders as local drives, then add them to the index list. I have also considered using SyncToy, but this requires downloading all the data to my drive, which could be terabytes of information - obviously not a good idea on a small company network. What would be the best solution here?

  • Is it good to use the same keyword for multiple pages on one domain?

    - by Phanen
    Hi, I want to know: after Google's recent updates, will it be a good option to use the same keywords for multiple pages? Say my keyword is "driver update" and I have a folder "HP" on the website. HP has lots of models, like the HP Elitebook, HP Envy, HP Mini, HP Pavilion and many more, and the HP Elitebook has many versions, like the HP Elitebook 2530p, HP Elitebook 2730p and HP Elitebook 8530w. Now, should I create pages like "Driver update in HP Elitebook 2530p", "Driver update in HP Elitebook 2730p" and "Driver update in HP Elitebook 8530w", or a single page "Driver update in HP Elitebook"? Which will be the better option for search-engine ranking: a single page, or multiple pages using the same keyword for different model versions?

  • How much of a benefit does a changing landing page give in terms of SEO?

    - by Glycan
    I have a friend with a small business and a website. He asked me whether he should put a section on his landing page, below the fold, with his most recent review (or something along those lines). Specifically, he wants to know if that's the most efficient use of his time. Is there a list of the things Google values, compared to each other, so that these kinds of questions can be easily answered?

  • Google indexed the same page under two URLs (despite rel-canonical)

    - by unor
    The Super User question "Playing mp3 in quodlibet displays “GStreamer output pipeline could not be initialized” error" is indexed under two URLs in Google: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058 The first one is the canonical one; the corresponding rel-canonical is included in both pages: <link rel="canonical" href="http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia" /> Google also indexed http://superuser.com/a/652058, which redirects to the answer: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058#652058 Now, the second URL from above is the same as this one minus the fragment #652058. So Google seems to strip the fragment, which results in exactly the same page under another URL (= containing the answer ID /652058 as suffix), and indexes it, too -- despite rel-canonical and duplicate content. Shouldn’t Google recognize this and only index the canonical variant? And what could be the reason why Stack Exchange includes the answer ID in the URL path, and not only in the fragment (resulting in various URL variants for the same page)?

  • How to figure out the recent PageRank of a website or any particular page (homepage)

    - by Rajesh Magar
    I have this question because the most recent algorithm changes by Google have affected my website traffic. I've been wondering why my homepage PageRank has dropped from 6 to 4 (I am not sure of the exact figures). I am not using any special SEO tools like SEOmoz, Majestic SEO, etc., so it's quite difficult for me to tell whether the PageRank has really been affected or not. Can anyone please provide a good resource, tactic or trick to address this question?

  • How to specify the importance of HTML elements?

    - by Julien Fouilhé
    Is it possible to specify which elements of the page are important or, more specifically, which elements of the page are not important? I'm using the new HTML5 elements (nav, header, footer, section, article, aside...), but sometimes my login form (which sits in the header of the page) shows up in the Google description of my website's pages... Is there a solution to this problem? Thank you.
