Search Results

Search found 32130 results on 1286 pages for 'local search'.


  • Language redirect affecting pagerank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised and noticed that the Google+ integration no longer shows up in the search results AND the PageRank has dropped from 2 to 0.
    How the redirect works (a sketch follows below):
    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.
    We do of course have the proper markup on all pages to indicate the alternate language: <link hreflang="de" href="/de" rel="alternate" />
    As far as we can tell, we follow all publicly available guidelines from Google, so we are unsure whether this is a bug in Google or something we have done wrong.
    Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if so, how does one implement a proper language redirect?
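
    For reference, the decision chain above could look roughly like this as Express-style middleware in TypeScript. This is a minimal sketch only: the site's real stack is not stated, and SUPPORTED, the cookie format, and geoIpLanguage are assumptions.

      import express from "express";

      const SUPPORTED = ["en", "de"];          // assumed language list
      const app = express();

      app.get("/", (req, res) => {
        // 1. Cookie wins (e.g. lang=en).
        const cookieLang = req.headers.cookie?.match(/lang=([a-z]{2})/)?.[1];
        if (cookieLang && SUPPORTED.includes(cookieLang)) return res.redirect(`/${cookieLang}`);

        // 2. Bots always get English.
        if (/bot/i.test(req.headers["user-agent"] ?? "")) return res.redirect("/en");

        // 3. First usable, non-English Accept-Language entry.
        const accepted = (req.headers["accept-language"] ?? "")
          .split(",")
          .map((l) => l.trim().slice(0, 2).toLowerCase());
        const nonEnglish = accepted.find((l) => l !== "en" && SUPPORTED.includes(l));
        if (nonEnglish) return res.redirect(`/${nonEnglish}`);

        // 4. GeoIP lookup (stubbed), then 5. fall back to English.
        return res.redirect(`/${geoIpLanguage(req.ip) ?? "en"}`);
      });

      // Stub so the sketch is self-contained; a real site would query a GeoIP database.
      function geoIpLanguage(ip: string | undefined): string | undefined {
        return undefined;
      }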

    Read the article

  • sendmail and local mail server?

    - by Hailwood
    For testing emails I don't want to actually send any email out (and can't anyway, as my ISP blocks port 25). I have installed Sendmail, and what I want to do is install a mail server on the same PC that sends the email, so I can simply deliver the emails to it and read them on my own machine. So I am looking for a few guides:
    1. Install and configure Sendmail correctly for this.
    2. Install and configure a mail server correctly for this.
    3. Set up a mail client (Evolution) to connect to the mail server.
    Can anyone help with this? (A lighter-weight testing alternative is sketched below.)
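
    As an aside, if the goal is only to inspect outgoing test mail, a local SMTP sink can stand in for a full mail server. A minimal sketch in TypeScript, assuming Node and the smtp-server npm package (neither is part of the original Sendmail/Evolution setup):

      import { SMTPServer } from "smtp-server";

      // Accept anything on localhost:1025 and print it instead of delivering it.
      const server = new SMTPServer({
        authOptional: true,
        onData(stream, session, callback) {
          stream.pipe(process.stdout, { end: false });   // dump the raw message
          stream.on("end", () => callback());
        },
      });

      server.listen(1025, "127.0.0.1");

    Pointing the sending application at 127.0.0.1:1025 (for Sendmail, via its smart host setting) then shows every outgoing message on the console without anything leaving the machine.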

    Read the article

  • How to Archive, Search, and View Your Tweet Statistics with ThinkUp

    - by YatriTrivedi
    Worried about archiving your tweets? Want a more powerful search? Want to see your tweet statistics? You can do all of that and more by installing ThinkUp on your home server. ThinkUp is a brilliant application (currently in beta) that will archive all of your tweets, your replies, responses, etc. so that you can search through them and find out some helpful usage statistics. It has quite a few plugins, including one that adds full Facebook support, too. It’s designed to be installed on a LAMP server; that is, Linux, Apache, MySQL, and PHP is what will provide the backbone for it. While it’s possible to install it on a Windows- or Mac-based machine, it’s most easily handled in Linux, so we’ll be using Ubuntu to show you how to get it up and running. It’s in very active development by the founder, Gina Trapani, and by many users in the community.

    Read the article

  • Search network device from LAN through my C++ application and change the IP address

    - by Arun Kumar K S
    I am developing an application in C++ to communicate with my network device. I used UDP classes to search for the device on the network. I wrote the code so that my application sends a broadcast message to the local network; the device responds to the broadcast message, and the application gets the device's IP address from that response. After establishing communication, the application sends a message to the device to change its IP address. That works fine when the device's IP address is correct. But when I set a wrong IP address and subnet on the device, my application never receives any messages from it, so I cannot communicate with the device, cannot find it, and cannot change its IP address. For example, the device might have: IP address 20.1.1.1, subnet mask 255.0.0.0. And the system running my application: IP address 192.168.1.23, subnet mask 255.255.255.0. I tried the Lantronix device installation software with a Lantronix device on the network. It listed the device and I was able to change its IP address from their software. Does anyone know how this is done in this type of software? How can I search the network to find the device and change its IP address when its address is not in range? Which protocol do they use to find the device? (A discovery sketch follows below.)
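
    Tools like the Lantronix installer typically rely on link-layer broadcasts: a UDP datagram sent to 255.255.255.255 reaches every host on the local segment regardless of its IP configuration, and the device answers (often also by broadcast), so discovery works even when its address is out of range. A minimal sketch of such a probe in TypeScript on Node (the port 30718 and the 4-byte query are taken from Lantronix's publicly documented setup protocol; treat them as assumptions for your own device):

      import dgram from "node:dgram";

      const socket = dgram.createSocket("udp4");

      socket.on("listening", () => {
        socket.setBroadcast(true);                            // allow 255.255.255.255
        const probe = Buffer.from([0x00, 0x00, 0x00, 0xf6]);  // "query setup record"
        socket.send(probe, 30718, "255.255.255.255");         // Lantronix setup port
      });

      socket.on("message", (msg, rinfo) => {
        // Any device that answers reveals its current IP address here.
        console.log(`reply from ${rinfo.address}:${rinfo.port}:`, msg.toString("hex"));
      });

      socket.bind();                                          // ephemeral local port

    Changing the address afterwards typically goes over the same broadcast channel, with the target device identified by its MAC address rather than by a routable IP.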

    Read the article

  • Vermont IT Jobs: Local ASP.NET Contractor possible hire

    The website www.imsuperb.com is looking for an ASP.NET developer to help them out with a site update. This is a contract position that could lead to a full-time job. Contact Nick Lynch at [email protected].

    Read the article

  • PLESK PostFix Error Local in maillog, how to troubleshoot

    - by RCNeil
    I'm using the PHP mail() function with Postfix, on CentOS 6, Plesk 10.4, and my email is not getting delivered to a particular address. My personal GMail and Yahoo email addresses receive email from my server fine and do not produce errors. After a wonderful suggestion on here, I checked my mail logs, and this is the error I see:
    Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: from=<[email protected]>, size=645, nrcpt=1 (queue active)
    Apr 10 10:26:29 ######### postfix-local[8331]: postfix-local: [email protected], [email protected], dirname=/var/qmail/mailnames
    Apr 10 10:26:29 ######### postfix-local[8331]: cannot chdir to mailname dir name: No such file or directory
    Apr 10 10:26:29 ######### postfix-local[8331]: Unknown user: [email protected]
    Apr 10 10:26:29 ######### postfix/pipe[8330]: 19EA21827: to=<[email protected]>, relay=plesk_virtual, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered via plesk_virtual service)
    Apr 10 10:26:29 ######### postfix/qmgr[8323]: 19EA21827: removed
    [email protected] is the name I've declared in php.ini for:
    sendmail_from = "[email protected]"
    sendmail_path = "/usr/sbin/sendmail -t -f [email protected]"
    and the recipient is supposed to be [email protected]. Is this an error on my side or the recipient's? Can I address this on my server? Many thanks, SF.

    Read the article

  • Azure Search Preview

    - by Greg Low
    One of the things I’ve been keeping an eye on for quite a while now is the development of the Azure Search system. While it’s not a full replacement for the full-text indexing service in SQL Server on-premises as yet, it’s a really, really good start. Liam Cavanagh, Pablo Castro and the team have done a great job bringing this to the preview stage and I suspect it could be quite popular. I was very impressed by how they incorporated quite a bit of feedback I gave them early on, and I’m sure that others involved would have felt the same. There are two tiers at present. One is a free tier and has shared resources; the other is currently $125/month and has reserved resources. I would like to see another tier between these two, much the same way that Azure websites work. If you have any feedback on this, now would be a good time to make it known. In the meantime, given there is a free tier, there’s no excuse to not get out and try it. You’ll find details of it here: http://azure.microsoft.com/en-us/documentation/services/search/ I’ll be posting more info about this service, and showing examples of it during the upcoming months.

    Read the article

  • Keep Your Data Local: Free Offline Alternatives to 6 Popular Web Apps

    - by Chris Hoffman
    Web apps are all the rage, but offline apps still have their place. Whether you want better offline support or you just want to keep your sensitive data on your PC, there’s a free desktop app that can replace your web-based productivity app. We’ve looked at web-based alternatives to desktop apps, and now we’ll do the opposite. Here are some solid — and completely free — offline desktop alternatives to popular web apps. Be sure to perform regular backups if you store your only copies of important data locally. You wouldn’t want to lose it all when your hard drive inevitably bites the dust.

    Read the article

  • Forward traffic from local machine to proxy server using iptables

    - by Vaibhav
    I am using Ubuntu Server 12.04. My IP is 192.168.4.160. I want to route the HTTP traffic generated locally on my system and destined for a particular host (say x.x.x.x) through a proxy server. My proxy server is 192.168.0.13:3128. I added the following rule to iptables: sudo iptables -t nat -A OUTPUT -p tcp -d x.x.x.x --dport 80 -j DNAT --to 192.168.0.13:3128 However, this rule does not seem to work for me. I captured packets in Wireshark and saw that the packets are still going to x.x.x.x. I am not very familiar with iptables, so please try to be specific. Thanks in advance.

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and now the search/suggestion list functionality (enter some letters, a JavaScript suggestion list appears) has caught our eye. Currently, we use http://xapian.org/. It has some drawbacks. Firstly, it's not actually the right solution: it was created to index documents, not ever-changing data at the granularity an E-Commerce application needs. Secondly, the load on the database is significant when we reindex all data every night. We'd like a framework that has been designed for indexing database data, that can add to the index easily and without much load, and that can push data changes from the backoffice to the frontend quickly, without much load or delay. I'm aware that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a better-suited solution seems fair, right? Oh, and commercial applications are fine, too. FOSS is not required. Thanks a bunch.

    Read the article

  • Fastest way to run a JSON server on my local machine

    - by Mohsen
    I am a front-end developer. For many experiments I do, I need a server that talks JSON with my client-side app. Normally that server is a simple server that responds to my POSTs and GETs. For example, I need to set up a server that saves, modifies and reads data from a "library" database like this: POST /books creates a book; GET /book/:id gets a book; and so on... What is the fastest and easiest technology stack for the database and server in this case? I am open to using Ruby, Node.js and anything else that does the job fast and easily. Is there any framework (in any language) that does stuff like this for me? (A minimal sketch follows below.)
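
    A minimal sketch of exactly this kind of throwaway JSON server, in TypeScript with Node and Express (both assumed installed; the Book shape is invented for illustration):

      import express from "express";

      type Book = { id: number; title?: string; author?: string };

      const app = express();
      app.use(express.json());                    // parse JSON request bodies

      const books = new Map<number, Book>();      // in-memory store; enough for experiments
      let nextId = 1;

      // POST /books creates a book from the JSON body.
      app.post("/books", (req, res) => {
        const book: Book = { ...req.body, id: nextId++ };
        books.set(book.id, book);
        res.status(201).json(book);
      });

      // GET /book/:id returns a book or a 404.
      app.get("/book/:id", (req, res) => {
        const book = books.get(Number(req.params.id));
        if (!book) return res.status(404).json({ error: "not found" });
        res.json(book);
      });

      app.listen(3000, () => console.log("JSON server on http://localhost:3000"));

    If writing even that much is too slow, the json-server npm package generates a comparable fake REST API straight from a JSON file.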

    Read the article

  • Local server updates for the network

    - by Brendon
    I have set up one computer on our network as the file server. Because Internet here in Tanzania is both slow and expensive, I would like that one system to download all the updates and then have the other 10 computers on the network get those update files from the server. I'm a bit of a noobie to Ubuntu, but I really want to learn how to get this working smoothly so as to help other NGOs and schools here in Tanzania. Brendon

    Read the article

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an images gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide:
    1. We could noindex the separate image pages and lose any image search listings we have for them, in favour of removing these thin pages from the index.
    2. We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself.
    If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? (A sketch of the crawler-friendly variant follows below.) Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance.
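
    For what it's worth, the variant that keeps captions indexable is to ship them in the HTML and let script only toggle visibility, rather than fetching them via Ajax after the click. A minimal TypeScript sketch (the class names and the data-img-id attribute are hypothetical):

      // Captions already live in the page as hidden elements, so crawlers see them;
      // the click handler only reveals the one belonging to the clicked thumbnail.
      document.querySelectorAll<HTMLElement>(".gallery-thumb").forEach((thumb) => {
        thumb.addEventListener("click", () => {
          const caption = document.getElementById(`caption-${thumb.dataset.imgId}`);
          caption?.classList.toggle("visible");   // no Ajax round-trip, nothing hidden from crawlers
        });
      });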

    Read the article

  • Samba network sharing NTFS drives and root permissions from local drives

    - by Bill
    I'm now able to share my internal secondary NTFS drives (sdb1, 2 and 3) on the network with Windows computers, but even though Samba read/write is enabled, Windows network computers can only open files read-only and can't save files to the Samba-shared drives/folders. I try to set permissions in Ubuntu via folder and/or file properties, even logged in as root via Nautilus, but all the Samba-shared folders and files are set to owner = root, accessible, and do not allow me to change them to read/write; they just reset to root, accessible. In other words, I can't change permissions. I'm running Ubuntu 11.04 Gnome on an old Dell Dimension 2400. Also, in order for me to copy or move any files from the Ubuntu drive to the sdb1, 2 or 3 drives, I have to gksu nautilus. This consequently prevents me from copying .ISO files to my "Multisys" thumb drive too.

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in URLs because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the 2 languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor on HTML tags) to determine the content language, I don't see any problems with search engines. To make this possible I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user (see the sketch below). That way when people visit the website with a new browser session, they get served the proper language. I have already given up on users being able to switch a single page from English to French: when people choose English or French from the menu they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure the URLs don't conflict?
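
    The cookie mechanism described could look roughly like this as Express-style middleware in TypeScript. A minimal sketch only: the cookie name, the 5-year lifetime and the Express stack are all assumptions.

      import express from "express";

      const app = express();

      app.use((req, res, next) => {
        const existing = req.headers.cookie?.match(/example_language=(en|fr)/)?.[1];
        if (!existing) {
          // First visit: infer a default from Accept-Language and remember it long-term.
          const lang = (req.headers["accept-language"] ?? "").toLowerCase().startsWith("fr")
            ? "fr"
            : "en";
          res.cookie("example_language", lang, { maxAge: 5 * 365 * 24 * 3600 * 1000 });
        }
        next();
      });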

    Read the article

  • What are the popular file indexing engines on Linux?

    - by netvope
    It would be nice if you could share your experience on the pros and cons of each of them. Personally, I only know Google Desktop and Beagle, and I haven't really used them. I mainly store my files on Windows (and use its integrated indexed search), but I'm trying to see if I can switch over to Linux. Also, can any of the search indexers run without X? Do any of them provide an API for search queries?

    Read the article

  • NetBeans 6.9 gains features for managing the environment locally and remotely

    NetBeans 6.9 integrates new features. The team has added: a terminal, and a remote terminal emulator. To enable the option, simply go to: Window -> Output -> Terminal. The console can then be activated and you will see this: [IMG]http://www.livesphere.fr/images/dvp/nb-terminal.png[/IMG] What do you think of these additional features?...

    Read the article

  • Fastest way to set up a JSON server on my local machine [closed]

    - by Mohsen
    I am a front-end developer. For many experiments I do, I need a server that talks JSON with my client-side app. Normally that server is a simple server that responds to my POSTs and GETs. For example, I need to set up a server that saves, modifies and reads data from a "library" database like this: POST /books creates a book; GET /book/:id gets a book; and so on... What is the fastest-to-set-up and easiest technology stack for the database and server in this case? I am open to using Ruby, Node.js and anything else that does the job fast and easily. Is there any framework (in any language) that does stuff like this for me?

    Read the article

  • Manually updating HTML5 local storage?

    - by hustlerinc
    I'm just starting out in HTML5 game development (and game dev in general) and, watching all the videos and tutorials available, something has crossed my mind. Everyone keeps saying I should set the cookies (or cached files) to expire after a certain amount of time, so that when they reach that time the browser automatically downloads all assets again, even if they are the same assets. Wouldn't it be possible to manually define the version of the game? For example, the user has downloaded all the files for version 1.01 of the game; when updating, I change a simple variable to 1.02. When the user logs in, it would compare his version to the current one, and only if they are not equal would it download the files. This could even be improved to download only specific files, depending on what needs to be updated. Would this be possible or am I just dreaming? What are the possible downsides of this approach? (A sketch follows below.)
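
    Something along these lines is entirely possible; a minimal sketch in TypeScript for the browser (GAME_VERSION and the manifest URL are hypothetical stand-ins for the game's real data):

      const GAME_VERSION = "1.02";   // bumped by the developer on each release

      async function loadAssets(): Promise<void> {
        if (localStorage.getItem("gameVersion") !== GAME_VERSION) {
          // Version changed (or first run): fetch fresh assets and cache them.
          const manifest = await (await fetch("/assets/manifest.json")).json();
          localStorage.setItem("assets", JSON.stringify(manifest));
          localStorage.setItem("gameVersion", GAME_VERSION);
        }
        // Otherwise the copy already in localStorage is considered current.
      }

    Keeping a per-file version in the manifest, instead of one global number, gives the selective re-download mentioned at the end.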

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and its associated caption appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (vb) site with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading about how microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all 47 of my pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are: <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation. Am I doing this right, or how would I do this if I'm not? (See the note below.) Should I use the itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPS would be greatly appreciated!
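
    One detail worth noting: itemprop attributes only count when they sit inside an element carrying itemscope and itemtype, so the spans above are invisible to microdata parsers as written. A minimal sketch of the same sentence with the scope added (the schema.org Organization type is an assumption about the intent):

      <div itemscope itemtype="https://schema.org/Organization">
        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company
        with headquarters in
        <span itemprop="address" itemscope itemtype="https://schema.org/PostalAddress">
          <span itemprop="addressLocality">Detroit</span>,
          <span itemprop="addressRegion">Michigan</span>,
          <span itemprop="addressCountry">USA</span></span>.
      </div>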

    Read the article

  • SQL Server 2008 and 2008 R2 Integration Services - Managing Local Processes Using Script Task

    SQL Server 2008 R2 Integration Services includes a number of predefined tasks that implement common administrative actions to help with data extraction, transformation and loading (ETL). While in a majority of cases they are sufficient to deliver required functionality, there might be situations where an extra level of flexibility is desired.

    Read the article

  • Automatically keep your local git repos clean

    - by kerry
    Most developers using git are probably aware of the command ‘git gc’ that has to be run from time to time when you notice your git commands are running a little slow. This command cleans up your git repo and makes sure everything is nice and tidy. If you have not run this command lately, you will notice a huge performance increase in your git commands after running it. It’s a bit annoying to have to run this command when you notice that your git performance is suffering. The command also takes a while if you have not run it recently. With this in mind, I decided to create a method to automatically run this command from time to time, so I decided to overload cd, similar to how rvm does. All you have to do is paste the method into your .profile file and it will run the command every time you enter a directory with a git repo. You’ll notice a little pause when entering the directory; it’s not insufferable, but if you would prefer, you can add an & to the end of the command to have it run in the background. I chose the pause over the pid output of the background command. Here it is in all its glory. View the code on Gist.

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such a thing? A while back I designed a rudimentary form-based app for my users. We receive hardware manufacturing data from our suppliers in XML files: the file name is made of eleven fields separated by tildes, with each field having its own meaning. The R&D guys wanted to be able to search each field of the file names, so I used regex() with decent results. The problem is that we now have upwards of 2.5 million files, and my app can't hack it anymore. I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content. Big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability over such a large number of files? Do I have to buy Zend and use Zend_Search? Is it the only way? (A sketch of one alternative follows below.) Thanks for your input.
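
    One alternative worth considering: split the eleven tilde-separated fields out of each file name once, store them as indexed columns in a small database, and search those columns instead of regexing 2.5 million names on every query. A minimal sketch, shown in TypeScript with the better-sqlite3 npm package purely for illustration (the same schema works just as well from PHP with PDO/SQLite, and the field names here are hypothetical):

      import Database from "better-sqlite3";

      const db = new Database("files.db");
      db.exec(`CREATE TABLE IF NOT EXISTS files (
        name     TEXT PRIMARY KEY,
        supplier TEXT, part_no TEXT, batch TEXT   -- ...one column per tilde field, eleven in all
      )`);
      db.exec("CREATE INDEX IF NOT EXISTS idx_supplier ON files(supplier)");

      const insert = db.prepare(
        "INSERT OR REPLACE INTO files (name, supplier, part_no, batch) VALUES (?, ?, ?, ?)"
      );

      // Index one file name by splitting its tilde-separated fields.
      export function indexFile(fileName: string): void {
        const f = fileName.split("~");
        insert.run(fileName, f[0], f[1], f[2]);
      }

      // Field searches are now simple indexed lookups.
      const bySupplier = db.prepare("SELECT name FROM files WHERE supplier = ?");
      console.log(bySupplier.all("acme"));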

    Read the article
