Search Results

Search found 57986 results on 2320 pages for 'breadth first search'.

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. I am mostly interested in which hash algorithm would be best suited for it. For example, I was thinking of having it compute the hash of each file's contents and then group together the files whose hashes are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
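
    A minimal sketch of one way to do this, in Python with SHA-256 (the function names here are illustrative): group candidates by file size first, so only files that share a size get hashed, and read in fixed-size chunks so even very large files need no size limit.

        import hashlib
        import os
        from collections import defaultdict

        def sha256_of(path, chunk_size=1 << 20):
            """Hash a file's contents in chunks so large files fit in memory."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        def find_duplicates(root):
            by_size = defaultdict(list)  # size -> paths; a unique size can't be a dup
            for dirpath, _, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        by_size[os.path.getsize(path)].append(path)
            by_hash = defaultdict(list)  # digest -> paths with identical contents
            for paths in by_size.values():
                if len(paths) > 1:
                    for path in paths:
                        by_hash[sha256_of(path)].append(path)
            return [group for group in by_hash.values() if len(group) > 1]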

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of widely varying sizes. Also, the layout of the page is such that the images are all displayed as square tiles, so just resizing will cause distorted images. What I've been doing previously is, when users upload images, I resize and crop them appropriately, display the new image as the thumbnail, and load the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and stick the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I can do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case I have users continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy...
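
    For the upload-time step described above, a minimal sketch of a centered square crop, assuming Python with the Pillow library (sizes and paths are illustrative); the full image stays on disk and can still be the one linked for crawlers.

        from PIL import Image

        def square_thumbnail(src_path, dest_path, side=200):
            """Center-crop the image to a square, then resize to side x side."""
            with Image.open(src_path) as im:
                w, h = im.size
                edge = min(w, h)
                left, top = (w - edge) // 2, (h - edge) // 2
                im = im.crop((left, top, left + edge, top + edge))
                im.resize((side, side), Image.LANCZOS).save(dest_path)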

  • Will Google crawl a session-based website?

    - by DonShwep
    I have a website; it is split into 3 categories, but using PHP it's an all-in-one kind of setup. When a user chooses a category on the home page, a session is set; this is then used to set the style and contents of the website. Would Googlebot and other bots still be able to crawl my website? If a page is accessed and no session is set, the user is sent back to the home page. I have created special links that set a session but go straight to the contact page. Even this page doesn't seem to be showing up. Any ideas if a sitemap with specially crafted links (to set the session) will help Google?
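
    For illustration only, the usual fix sketched in Python/Flask rather than PHP (route and template names are hypothetical): read the category from the query string first and fall back to the session, so a stateless crawler can reach each variant through a plain link instead of being bounced to the home page.

        from flask import Flask, render_template, request, session

        app = Flask(__name__)
        app.secret_key = "change-me"  # hypothetical; Flask needs it for sessions

        @app.route("/")
        def home():
            # Crawlers keep no session, so honour an explicit ?category=... link
            # first; remember the choice in the session for normal visitors.
            category = request.args.get("category") or session.get("category")
            if category is None:
                return render_template("choose_category.html")
            session["category"] = category
            return render_template("home.html", category=category)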

  • Webmaster Tools - Preferred Domain and Google entries

    - by xylar
    Our client has two websites with the same content: example.com and somethingelse.com. When a user searches Google for "something else", the example.com site appears in the top 10 results. What would happen if we switched the preferred domain in Webmaster Tools to somethingelse.com? Our client is hoping it will replace the "example.com" result with "somethingelse.com", but I assume this is not the case.

  • Add References with Search

    - by Daniel Cazzulino
    If you have been using VS2010 for any significant amount of time, you have surely come across the awkward, slow and hard-to-use Add Reference dialog. Despite some (apparent) improvements over the VS2008 behavior, in its current form it's even LESS usable than before. A brief, non-exhaustive summary of the typical grief with this dialog: scrolling a list of *hundreds* of entries (300+ typically); no partial matching when typing: yes, you can type in the list to get to the desired entry, but the matching is performed in an exact manner, from the beginning of the assembly name. So, to get to the (say) "Microsoft.VisualStudio.Settings" assembly, you actually have to type the first two segments in their entirety before starting to type "Settings"...

  • Looking for a customizable "Did you know..." dialog application

    - by Jorge Suárez de Lis
    I want to deploy a "Did you know..." or "Tip of the day" application at the office. It should: show a dialog at login time with a random tip; obviously, provide some way to store my own tips; and be easy to disable and re-enable by the users themselves. I'm using Puppet, so I'm covered on the deployment side. The tips don't even need to be fetched from a server, since I can deploy the newest tips file/database at no cost. Sure, I could hack together a quick solution using zenity and bash, but I'd like to know if there's any application out there specifically targeted at this. I don't like the zenity approach very much because it's very limited in the content that can be displayed. No text alongside screenshots, for example. Zenity is aimed at displaying simple dialogs.
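
    Failing a ready-made tool, a minimal sketch of such a dialog in Python/Tkinter (file locations and wording are illustrative): it shows a random tip from a text file that Puppet could deploy, and honours a per-user opt-out flag so users can disable and re-enable it themselves.

        import os
        import random
        import tkinter as tk

        TIPS_FILE = os.path.expanduser("~/.tips.txt")          # one tip per line
        DISABLE_FLAG = os.path.expanduser("~/.tips-disabled")  # touch to opt out

        def show_tip():
            if os.path.exists(DISABLE_FLAG):
                return  # user opted out; removing the flag file re-enables tips
            with open(TIPS_FILE) as f:
                tips = [line.strip() for line in f if line.strip()]
            if not tips:
                return
            root = tk.Tk()
            root.title("Did you know...")
            tk.Label(root, text=random.choice(tips), wraplength=400,
                     justify="left", padx=20, pady=20).pack()
            tk.Button(root, text="Close", command=root.destroy).pack(pady=10)
            root.mainloop()

        if __name__ == "__main__":
            show_tip()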

  • Static HTML to WordPress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress so they could more easily edit their website and start a blog section. The static site was 10 years old and was showing up at place #3 for its primary keyword, consistently, according to my client, and has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site and have zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?

  • Best way to redirect users who land on the _escaped_fragment_ URL back to the pretty one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs: pretty: example.com#!blog ugly: example.com?_escaped_fragment_=blog However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the header but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?
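
    For what it's worth, one shape the server-side option could take, sketched in Python/Flask (the snapshot renderer below is a hypothetical stand-in): issue the 301 only to requests that don't identify as Googlebot, so the crawler that explicitly asked for the snapshot still receives it. Whether engines would treat that user-agent gating as cloaking is exactly the open question here.

        from flask import Flask, redirect, request

        app = Flask(__name__)

        def render_snapshot(fragment):
            # Hypothetical stand-in for the existing HTML-snapshot generator.
            return "snapshot for %s" % (fragment or "home")

        @app.route("/")
        def index():
            fragment = request.args.get("_escaped_fragment_")
            agent = request.headers.get("User-Agent", "")
            if fragment is not None and "Googlebot" not in agent:
                # A human landed on the ugly URL: send them to the pretty one.
                return redirect("/#!" + fragment, code=301)
            # Crawlers (and pretty-URL requests) get the snapshot/page itself.
            return render_snapshot(fragment)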

  • Creating a sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. This internet forum is brand new; about a week ago I filled it with a few hundred thousand threads. I then looked at the Google Webmaster Tools page to see any changes in indexing, but the index only went up from 300 to about 1200, so that means it did not index my added threads (although it added something). This is what my sitemap.xml contains, which I uploaded on their website (of course there is a lot more of the code, this is just a snippet for a single category; in my real sitemap file I have all the categories listed as below): <url> <loc>http://mysite.com/Forums/Physics</loc> <changefreq>hourly</changefreq> </url> Now, I would expect the Google bot to go into http://mysite.com/Forums/Physics, move through all the subpages with thread links, and then get inside each thread and index its content. How can I do this? Also, if this is unclear, I will add a real link to my website.
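
    One thing worth trying, sketched in Python (the category names, page counts, and ?page= URL scheme are illustrative): emit a <url> entry for every subpage of every category, not just the category front page, so the bot is handed the whole pagination instead of having to discover it.

        CATEGORIES = {"Physics": 120, "Chemistry": 45}  # illustrative page counts

        def sitemap_lines(base="http://mysite.com/Forums"):
            yield '<?xml version="1.0" encoding="UTF-8"?>'
            yield '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            for category, pages in sorted(CATEGORIES.items()):
                for page in range(1, pages + 1):
                    yield "  <url>"
                    yield "    <loc>%s/%s?page=%d</loc>" % (base, category, page)
                    yield "    <changefreq>hourly</changefreq>"
                    yield "  </url>"
            yield "</urlset>"

        with open("sitemap.xml", "w") as f:
            f.write("\n".join(sitemap_lines()))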

  • How do I get mlocate to only index certain directories?

    - by Andrew Ferrier
    I'd like to use mlocate on my Ubuntu server, but only to index certain directories (e.g. /home and /data, but not everything under /). However, mlocate's standard configuration works the opposite way: you specify the paths you want to exclude (with PRUNEPATHS in /etc/updatedb.conf). Is there any easy way to achieve this, or any similar utility that will do what I want? (Note: it should maintain an index like mlocate does, so find is not acceptable, for example.) Thanks.
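
    On typical installs, mlocate's updatedb does accept --database-root (-U) and --output (-o) to build a separate database for a single subtree, which may already be close enough. Failing that, a minimal stand-in indexer in Python (the paths and index location are illustrative): it records only the chosen trees and searches the stored index, locate-style.

        import os

        ROOTS = ["/home", "/data"]                  # the only trees to index
        INDEX_FILE = "/var/tmp/custom-paths.index"  # illustrative location

        def build_index():
            """Walk the chosen roots and record every file path, one per line."""
            with open(INDEX_FILE, "w") as out:
                for root in ROOTS:
                    for dirpath, _, names in os.walk(root):
                        for name in names:
                            out.write(os.path.join(dirpath, name) + "\n")

        def search(term):
            """Case-insensitive substring match against the stored index."""
            term = term.lower()
            with open(INDEX_FILE) as f:
                return [line.rstrip("\n") for line in f if term in line.lower()]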

  • SSL Certificate

    - by outdoorcat
    I've received the email below from Google about my WordPress site and have no idea how to follow the instructions. Any help out there? Dear Webmaster, The host name of your site, https://www.example.com/, does not match any of the "Subject Names" in your SSL certificate, which were: *.wordpress.com wordpress.com This will cause many web browsers to block users from accessing your site, or to display a security warning message when your site is accessed. To correct this problem, please get a new SSL certificate from a Certificate Authority (CA) with a "Subject Name" or "Subject Alternative DNS Names" that matches your host name. Thanks, The Google Web-Crawling Team
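
    To see exactly which names the served certificate covers, a quick Python sketch (the host below is a placeholder): it prints the certificate's DNS subject alternative names, which are what must match the site's host name. Hostname checking is switched off so the certificate can be inspected even though it doesn't match.

        import socket
        import ssl

        def certificate_dns_names(host, port=443):
            """Return the DNS subject alternative names of the served cert."""
            context = ssl.create_default_context()
            context.check_hostname = False  # inspect despite the name mismatch
            with socket.create_connection((host, port)) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            return [value for kind, value in cert.get("subjectAltName", ())
                    if kind == "DNS"]

        print(certificate_dns_names("www.example.com"))  # placeholder host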

  • Multiple Businesses at The Same Physical Address - SEO / Google Places

    - by Howdy_McGee
    I was wondering whether there would be any negative effects from multiple businesses having the exact same physical address on their websites. Currently we have five businesses at the exact same address, and it shows on their websites, so when people google one of the five businesses' addresses, they're going to get multiple results from multiple websites, most of which will not be what they're looking for. What is a way around this / what can I do about this? Would adding "Suite Numbers" be a solution? A thought occurred that it might be a good idea to create a landing page for users that are looking up a business by its address via Google. The page will bring up multiple businesses, since we have a few at the same address, but if we have a landing page at the top which then leads to the multiple businesses, that might solve the multiple-address SEO problem. Going to keep researching it, though. I also know that for Google Places (and possibly Bing Local and Yahoo Local) this could become a problem. I've submitted an inquiry with them, but I wanted to know if anybody had a ready-made solution for this so that Google doesn't bunch all these companies together into one. Thanks!

  • A couple of links to our products and 10 pages of crack/keygen/torrent/etc.

    - by devdept
    If you try searching for our company and product name, you'll get two useful links and 10 pages of hacker sites where you can eventually download cracked versions of our products. How can we clean out the hacker links and leave only useful links to our product pages? We already checked the Google URL Removal Tool, but within the "Removal Type" options there is nothing meaningful to specify in this case. Should we proceed with it all the same? Thanks.

  • How to show the right country domain in Google Places?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords will return rich snippets with a list of Google Places results. Problem: When searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not good user experience, as users would most likely prefer to book on a site localized for them (e.g. language and currency). Question: What solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK? Ideas: Would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?

  • Will this sitemap get me de-indexed from Google?

    - by heavy rocker dude
    My site's URL (web address) is: http://quantlabs.net/private/sitemap.xml Description (including timeline of any changes made): Will this sitemap get me de-indexed from Google? My new sitemap just got spidered by Google for some reason, and I am wondering whether it puts me in danger of getting de-indexed from Google's index. Does it look like spam, even though it is not meant to be? I am trying to figure out the limitation, in terms of Google's threshold, before they deem a sitemap spammy. This sitemap contains automated postings which differ only in the stock symbol provided, and quite a few of them were added to the sitemap within a small amount of time.

  • Aliasing resellers' domains to the primary domain

    - by Ashkan Mobayen Khiabani
    I have designed a website that accepts re-sellers; actually, the concept of this website is having local re-sellers for each province (or should we say branches). I have designed this website in a way that anybody who has a domain can point it to our website (A record or CNAME). Most of the website content is the same; the only difference is that a re-seller's website doesn't have some items on the main menu and may have some small descriptions of its own branch on some pages. I read that Google may ban websites with duplicate content (or which are significantly similar). I want to know: will this be a problem for me? If yes, what else can I do? We had considered asking our re-sellers to use an iframe that loads our website, but we wanted each re-seller to be able to have its own SEO and try harder; still, what I read about this duplicate-content thing worries me.

  • How do I catalog files on several external hard drives that I want to store off-line? (OS X)

    - by raudi
    My partner, an artist, has more than 10 external hard disks, both USB and FireWire, and every 2-3 months a new one has to be added (she's working with videos and pictures). Currently it's 10TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk/disks (based mostly on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy, or Disclib (all these solutions are unfortunately Windows-only). Is there any software for OS X that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID of the disk / disk name that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos (but even an equivalent to "tree /F /A" would be better than nothing).
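
    Absent a ready-made OS X tool, a minimal sketch of a home-grown catalog in Python (the catalog location is illustrative, and drives are assumed to mount under /Volumes as usual): run it once per connected drive, then search the merged CSV offline to find which disk holds a file.

        import csv
        import os
        import sys

        CATALOG = os.path.expanduser("~/drive-catalog.csv")  # illustrative path

        def catalog_volume(mount_point):
            """Append every file on the mounted drive to the shared catalog."""
            disk_name = os.path.basename(mount_point.rstrip("/"))
            with open(CATALOG, "a", newline="") as out:
                writer = csv.writer(out)
                for dirpath, _, names in os.walk(mount_point):
                    for name in names:
                        path = os.path.join(dirpath, name)
                        try:
                            size = os.path.getsize(path)
                        except OSError:
                            continue  # unreadable entry; skip it
                        writer.writerow([disk_name, path, size])

        if __name__ == "__main__":
            catalog_volume(sys.argv[1])  # e.g. /Volumes/ArchiveDisk07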

  • Does searching documentation and samples look bad?

    - by Mick Aranha
    I am starting a new job at a company with many developers and media people. The layout of the place is open, with computers around a skinny oval. I have worked in small teams programming embedded C, but this job is for Objective-C. I'm still at an intermediate stage, so I know what I don't know (haha); that means I have to google it and then implement it. So the question is: how bad does it look if the guy next to you does a lot of searching while coding? I mean, at the end of the day I will get the job done, but I want to look professional too!

  • Is there a way to force Windows to recognize a network folder as a local drive, for the purposes of indexing?

    - by NoCatharsis
    I just started using the file-search program Everything at work to search through documentation on our shared drives. This is after disappointments with Google Desktop and Windows Search. I love the speed of Everything, but I wish it were able to index the shared folders as well. My makeshift solution was to somehow force Windows to recognize the necessary shared folders as local drives and then add them to the index list. I have also considered using SyncToy, but this requires downloading all the data to my drive, which could be terabytes of information, obviously not a good idea on a small company network. What would be the best solution here?

  • Pop-up HTML as JavaScript string instead of hidden div for SEO [closed]

    - by user1324762
    Possible Duplicate: How bad is it to use display: none in CSS? I have heard that using the display:none or visibility:hidden CSS properties is not a very good idea for SEO purposes. I have about 4 different pop-up windows to display, and each one has about 20 words inside it. I can create hidden divs. Another option is to store the div HTML elements as a JavaScript string; in this way the pop-up HTML elements will be generated from the JavaScript string. This will still be faster than using AJAX, since the data is static. Is this method absolutely safe for SEO? P.S.: I was just asking a similar question on http://stackoverflow.com/questions/12389075/storing-data-in-javascript-array-for-further-use, but this one is different; it is about static data and about SEO.

  • Should I submit my RSS feed as Google Sitemap?

    - by Svish
    I currently have no sitemap for a website I'm creating. I do have an RSS feed which includes the N latest updated posts on the site. It doesn't include everything on the site though, just blog posts. Creating a full sitemap would be a bit of a hassle I think. Should I submit the feed instead? Is there a difference between using a regular sitemap and a feed? Is it important to have a sitemap? What happens when you do/don't?

  • How to purge old links to an old domain from Google

    - by jbcurtin
    Hey all, recently I uploaded a new site to an existing domain, and I'd like to figure out how I can forward all links to said domain to a new domain. I'm looking for a WordPress solution if possible, but in the end I see myself writing a small header script that I will paste into every directory's index file, saying header('Location: http://xxx.yyy.zzz'). Is there a cleaner way to do this without having to resort to managing the whole file structure? No, I do not have access to the Apache runtime; unfortunately it is a shared-host server. Thanks in advance.

  • Is it good to use the same keyword for multiple pages in one domain?

    - by Phanen
    Hi, I want to know: after Google's recent updates, is it a good option to use the same keywords for multiple pages? Say my keyword is "driver update" and I have a folder "HP" on the website. HP has lots of models, like HP Elitebook, HP Envy, HP Mini, HP Pavilion and many more. HP Elitebook has many versions, like HP Elitebook 2530p, HP Elitebook 2730p, HP Elitebook 8530w. Now, should I create pages like "Driver update in HP Elitebook 2530p", "Driver update in HP Elitebook 2730p", "Driver update in HP Elitebook 8530w", or a single page "Driver update in HP Elitebook"? Which one will be the best option for better search-engine ranking: a single page, or multiple pages using the same keyword for different model versions?

  • Google's new algorithm: my company has 40 sites with different domains, and some of their articles appear on my main website

    - by user5674576
    Hi, my company has 40 sites with different domains, and some of their articles appear on my main website with a reference to their source. Our articles are written by high-level professionals in the fields they write about; we also pay them high salaries. In the recent Google algorithm change, my main site's ranking dropped very seriously. What should we do to restore the company's main site's Google ranking? Our solutions and ideas that are not working well: rel="canonical" to the source website (we already had it before the Google change, without results); meta "original-source", which has no ranking influence (we already had it before the Google change, without results). Edit: maybe we should delete rel="canonical" from the main website's articles that refer to our other small websites (because these articles on the main website are not indexed in Google)? Thanks in advance.

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed: www.example.com www.example.com/about_us www.example.com/products/something ... and example.com example.com/about_us example.com/products/something ... For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?
