Search Results

Search found 27592 results on 1104 pages for 'google sites'.

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This caused traffic to dip dramatically, and Google results now report: "A description for this result is not available because of this site's robots.txt – learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still show the same robots.txt message, and the same appears to be true for Bing. We've taken the following action: we submitted the site to be recrawled through Google Webmaster Tools, and we submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here and we're crawlable!"). Indeed, a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed a similar question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed there. The redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda-yadda have been moved to /blog/2012/02/12, and we 301 redirect the old URLs to keep our Google juice. However, the sitemap we submitted might contain /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before the redesign with the /2012/02/... paths. Could this have prevented some content from getting recrawled? How can we get all of our content, with links pointed at our site from both before and after the redesign, to report descriptions again? How can we resolve this problem and get our search traffic back to where it used to be?
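
    For reference, a robots.txt that permits all crawling is just this (a generic sketch, not the poster's actual file):

      # Allow every crawler to fetch everything
      User-agent: *
      Disallow:

    An empty Disallow line means nothing is blocked; the common mistake on development sites is the opposite rule, "Disallow: /".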

    Read the article

  • WebCenter Customer Spotlight: Azul Brazilian Airlines

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) is the third-largest airline in Brazil, serving 42 destinations with a fleet of 49 aircraft and employing 4,500 crew members. The company wanted to offer an innovative site with a simple purchasing process, both for customers searching for and buying tickets and for the company's marketing team to conduct its campaigns more effectively. To this end, Azul implemented Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Azul can now complete the Web site content updating process, which used to take approximately 48 hours, in less than five minutes.

    Company Overview
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) has established itself as the third-largest airline in Brazil, based on a business model that combines low prices with a high level of service. Azul serves 42 destinations with a fleet of 49 aircraft and operates 350 daily flights with a team of 4,500 crew members. Last year, the company transported 15 million passengers, achieving a 10% share of the Brazilian market, according to the Agência Nacional de Aviação Civil (ANAC, the National Civil Aviation Agency).

    Business Challenges
    The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets and for the company's marketing team to conduct its campaigns more effectively:
    - Provide customers with an innovative Web site with a simple process for purchasing flight tickets
    - Bring dynamism to the Web site's content updating process, to give autonomy to the airline's strategic departments, such as marketing and product development
    - Facilitate integration among the site's different application providers, such as ticket availability and payment processing, on which ticket sales depend

    Solution Deployed
    Azul worked with the Oracle partner TQI to implement Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Previously, at least three servers and corporate information environments had directed data to the portal. The single Oracle-based platform now facilitates site updates, which are daily and constant.

    Business Results
    - Gained development freedom in all processes, from implementation to content editing
    - Gathered all of the Web site's key information onto a single platform, facilitating its daily and constant updating, whereas the information was previously spread among at least three IT environments and had to go through a complex process to be made available online to customers
    - Reduced the time needed to update banners and other Web site content from an average of 48 hours to less than five minutes
    - Simplified the flight ticket sales process, thanks to tool flexibility that enabled the company to improve Web site usability

    "Oracle WebCenter Sites provides an easy-to-use platform that enables our marketing department to spend less time updating content and more time on innovative activities. Previously, it would take 48 hours to update content on our Web site; now it takes less than five minutes. We have shown the market that we are innovators, enabling customer convenience through an improved flight ticket purchase process." - Kleber Linhares, Information Technology and E-Commerce Director, Azul Linhas Aéreas Brasileiras

    Additional Information
    - Azul Brazilian Airlines Case Study
    - Oracle WebCenter Sites
    - Oracle WebCenter Sites Satellite Server

    Read the article

  • Are One Way Links Still Important in Search Engine Optimization?

    Pretty dumb question, huh? But people are beginning to wonder, considering that Google might change its algorithms again. If you doubt it, do a quick search on the keyword "Google Caffeine". This is the new Google search engine, and so far beta testers have stated that it is faster, provides more relevant search engine results, and so on. Anyway, whatever the case may be, it is important to note that one-way links are important right now because the search engines have made them so.

    Read the article

  • Configure Postfix to use external MX servers for delivery of local mail if user is unknown

    - by mr.b
    I have the following setup:
    - a Linux box with Postfix, configured to be responsible for the example.com domain
    - the domain's MX servers are configured so that mail sent to example.com goes to Google mail servers
    - several user accounts exist on the Linux machine (the same machine also hosts the example.com site)

    When someone from the outside attempts to send mail to an address ending in @example.com, it gets routed to Google mail (and handled appropriately there). When the Linux machine sends mail to the outside world, it is delivered correctly, since reverse DNS and SPF records are configured so that the machine is a valid sender for the example.com domain (along with the Google mail servers). However, here's the problem: when a PHP application hosted on the Linux box tries to send mail to [email protected] (and someuser doesn't exist on the box), delivery fails, because Postfix never even consults the Google mail servers; the local SMTP layer concludes that "someuser" is unknown. So, the question is: how do I tell Postfix to relay mail sent to the @example.com domain to the Google mail servers (that is, to the servers in the MX records) if, and only if, the mailbox is not found locally?
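
    One way to get this "local first, then Google" behavior (a hedged sketch, not from the original post; it assumes example.com appears in mydestination, and the Google MX host name below is a stand-in for whatever the domain's MX records actually list):

      # /etc/postfix/main.cf
      # Accept mail for example.com on this box...
      mydestination = example.com, localhost
      # ...but don't reject unknown local users at SMTP time; an empty
      # local_recipient_maps lets them through to the local delivery agent
      local_recipient_maps =
      # The local delivery agent hands users it can't find to this transport
      # instead of bouncing them
      fallback_transport = smtp:[aspmx.l.google.com]

    After editing, apply the change with "sudo postfix reload". The bracketed host suppresses a further MX lookup on the transport itself.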

    Read the article

  • displaying first few pages of a pdf on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are also available on other websites. Since the Scribd viewer is an embed/iframe, I think Google considers my page to be empty, with no content; does Google see iframe content at all? So I decided to display the first pages of the PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. Firstly, I am displaying the first pages of the PDF, and the latter may be hosted on other websites; will this be considered duplicate content? Secondly, I am hiding the content and replacing it with the Scribd embed via JavaScript; is this considered bad by Google?
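
    For illustration, the swap the poster describes could look roughly like this (a hypothetical sketch: the element id and the Scribd document id DOC_ID are made up, not taken from the post):

      <!-- Crawlable text version of the first pages -->
      <div id="paper-text">First pages of the exam paper as plain text ...</div>
      <script>
        // After the page loads, replace the text with the Scribd embed for human visitors
        document.addEventListener('DOMContentLoaded', function () {
          var el = document.getElementById('paper-text');
          el.innerHTML = '<iframe src="https://www.scribd.com/embeds/DOC_ID/content"' +
                         ' width="100%" height="600" frameborder="0"></iframe>';
        });
      </script>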

    Read the article

  • wrong SEO result for staging website

    - by OCD
    Hi, we have a domain with a URL, let's say http://www.xxxxstaging.com/abc.php?id=10, which has a pretty good ranking for some keywords in Google. However, this is our staging website, and for historical reasons it was exposed to the public. We also have a production domain (the one we want to be publicly available), say http://www.xxxxproduction.com/abc.php?id=10, which produces exactly the same HTML output as xxxxstaging.com. Our question is: for exactly the same keyword search, the production site never appears on any page of Google. Is it possible for us to "switch" the ranking in Google, either by telling the robot that they are indeed the same website, or by asking Google to change it for us? Thank you.
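
    For what it's worth, the usual way to tell crawlers that two hosts are the same site (an illustration, not from the original question) is a permanent redirect from the staging host to the production host, e.g. in the staging site's .htaccess:

      RewriteEngine On
      # 301 every staging URL to its production twin so the ranking transfers
      RewriteCond %{HTTP_HOST} ^(www\.)?xxxxstaging\.com$ [NC]
      RewriteRule ^(.*)$ http://www.xxxxproduction.com/$1 [L,R=301]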

    Read the article

  • Android: Jelly Bean continues its rise, Gingerbread declines little by little. Will fragmentation soon be nothing but a bad memory?

    The dashboard for developers building on Google's mobile OS, Android, has just revealed July's figures for the mobile devices that accessed Google Play during that period. The Android ecosystem has always been a headache for developers because of its heavy fragmentation. As a reminder, Gingerbread, although released at the end of December 2010, long stood as the most-installed OS among all Android mobile devices accessing Google Play.

    Read the article

  • Android: 600,000 applications; device activations in France explode

    The presentation of Android 4.1, alias Jelly Bean, at the Google I/O conference running until the end of the week in San Francisco gave Google an opportunity to officially update some figures about its mobile OS's ecosystem. The company confirmed, for example, that the Google Play store (formerly Android Market) has passed the 600,000-application mark (up from 450,000 in February). For Hugo Barra (Direc...

    Read the article

  • How to Build Links & Improve PageRank

    Before we start talking about building links to improve Google PageRank, let's clear up any confusion - PageRank is a calculation of Google's estimate of the importance of a page, but it is not the same as where your page "ranks" in the organic results. It can be very confusing since the words are so similar. Try thinking of it this way: a page can rank in any of the search engines, but you can only have PageRank in Google.

    Read the article

  • postfix cannot send email

    - by AKLP
    I'd like to mention that I'm really new to this, so please bear with me. I'm trying to set up forum software to send emails via Postfix, but I think my server has port 25 blocked. This works:

      ping alt2.gmail-smtp-in.l.google.com

    These don't:

      telnet alt2.gmail-smtp-in.l.google.com 25
      telnet 66.249.93.114 25

    I tried flushing iptables and then using these rules, but that didn't work either:

      sudo iptables --flush
      sudo iptables -P INPUT ACCEPT
      sudo iptables -P OUTPUT ACCEPT
      sudo iptables -P FORWARD ACCEPT
      sudo iptables -F
      sudo iptables -X

    Telnetting to port 25 on localhost works, but telnetting to any non-local host gets nothing. mail.log:

      Oct 17 01:20:24 webhost postfix/smtp[3642]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
      Oct 17 01:20:24 webhost postfix/smtp[3643]: connect to alt2.gmail-smtp-in.l.google.com[2607:f8b0:400e:c03::1a]:25: Connection timed out
      Oct 17 01:20:24 webhost postfix/smtp[3642]: 4744380032: to=<[email protected]>, relay=none, delay=2892, delays=2741/0.03/150/0, dsn=4.4.1, status=deferred (connect to alt2.gmail-smtp-in.l.google.com[2607:f$
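
    One detail worth noting: the timeouts in the log above are against an IPv6 address. If the box has broken outbound IPv6, a common first step (an assumption on my part, not something the post confirms) is to restrict Postfix to IPv4 and retry:

      # /etc/postfix/main.cf
      # Stop Postfix from trying unreachable IPv6 MX addresses first
      inet_protocols = ipv4

    Reload with "sudo postfix reload" afterwards. If telnet to port 25 still times out over IPv4 as well, the block is most likely upstream at the hosting provider, and the usual workaround is to relay outbound mail through the provider's smarthost or a submission (port 587) service.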

    Read the article

  • nvidia package in 12.10 somehow not the same as in 12.04, or Xlib not complete?

    - by dschinn1001
    I don't know whether this has to do with the automatic kernel update done by Ubuntu 12.10; it seems that kernel 3.2 in 12.04 does not have these problems with the NVIDIA drivers. I installed the current google-earth as a .deb package with dpkg -i, which seemed to work fine, but when I run the command google-earth in a terminal, the output includes, among other things: Xlib: extension "NV-GLX" missing on display ":0". Xlib is installed completely, and the NVIDIA driver was uninstalled (with a reboot) and then reinstalled, but google-earth reports the same thing: Xlib: extension "NV-GLX" missing on display ":0". Ubuntu 12.04 worked quite well with google-earth. Also: has bumblebee been dropped, or does it need to be reconfigured? Don't hurry too much with a solution; I can wait!

    Read the article

  • Make the Web Fast: Automagic site optimization with mod_pagespeed 1.0!

    mod_pagespeed is an open-source Apache module that automatically optimizes web pages and the resources on them: images, CSS, JavaScript, and much more. In this episode, we'll catch up with Joshua Marantz, the tech lead of the project at Google, and talk about the history of mod_pagespeed, its fast-growing adoption (130K+ sites!), its technical architecture, and how it works under the hood. Finally, we'll talk about the upcoming 1.0 release milestone for the project. If you're curious about mod_pagespeed, then this is definitely a show you won't want to miss! From: GoogleDevelopers (Time: 01:05:06)

    Read the article

  • Advanced SEO Strategy Guaranteed to Boost Your SEO

    One of the main factors in optimising your website for page 1 on Google is getting incoming links from other web pages. To boost the effectiveness of those links, there are a few things to consider, including the relevance of the page your link is coming from; the link itself also needs to be relevant to the keywords you are targeting on Google. Here's one strategy you can use for getting great incoming links to help you achieve higher Google rankings.

    Read the article

  • GCM: onMessage() from GCMIntentService is never called [migrated]

    - by Shrikant
    I am implementing GCM (Google Cloud Messaging - PUSH notifications) in my application. I have followed all the steps given in the GCM tutorial on developer.android.com. My application's build target points to Google API 8 (Android 2.2). I am able to get the registration ID from GCM successfully, and I pass this ID to my application server, so the registration step is performed successfully. Now when my application server sends a PUSH message to my device, the server gets the response SUCCESS=1 FAILURE=0, etc., i.e. the server is sending the message successfully, but my device never receives it. After searching a lot, I came to know that GCM pushes messages on port 5228, 5229 or 5230. Initially, my device and laptop were restricted from some websites, but then I was granted permission to access all websites, so I guess these ports are open for my device. So my question is: I never receive any PUSH message from GCM, and my onMessage() from the GCMIntentService class is never called. What could be the reason? Please see my code below and guide me accordingly. I have declared the following in my manifest:

      <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="8" />

      <!-- Per the GCM docs, the placeholders "package"/"packageName" below must
           be the app's real package name -->
      <permission
          android:name="package.permission.C2D_MESSAGE"
          android:protectionLevel="signature" />

      <!-- App receives GCM messages. -->
      <uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
      <!-- GCM connects to Google Services. -->
      <uses-permission android:name="android.permission.INTERNET" />
      <!-- GCM requires a Google account. -->
      <uses-permission android:name="android.permission.GET_ACCOUNTS" />
      <!-- Keeps the processor from sleeping when a message is received. -->
      <uses-permission android:name="android.permission.WAKE_LOCK" />
      <uses-permission android:name="package.permission.C2D_MESSAGE" />

      <receiver
          android:name="com.google.android.gcm.GCMBroadcastReceiver"
          android:permission="com.google.android.c2dm.permission.SEND" >
          <intent-filter>
              <action android:name="com.google.android.c2dm.intent.RECEIVE" />
              <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
              <category android:name="packageName" />
          </intent-filter>
      </receiver>

      <receiver android:name=".ReceiveBroadcast" android:exported="false" >
          <intent-filter>
              <action android:name="GCM_RECEIVED_ACTION" />
          </intent-filter>
      </receiver>

      <service android:name=".GCMIntentService" />

    And here is my intent service:

      import android.content.Context;
      import android.content.Intent;
      import android.util.Log;
      import android.widget.Toast;

      import com.google.android.gcm.GCMBaseIntentService;

      /**
       * @author Shrikant.
       */
      public class GCMIntentService extends GCMBaseIntentService {

          // Log tag (declared here; the original snippet used TAG without defining it)
          private static final String TAG = "GCMIntentService";

          /** The Sender ID used for GCM. */
          public static final String SENDER_ID = "myProjectID";

          /** This field is used to call the Web service for GCM. */
          SendUserCredentialsGCM sendUserCredentialsGCM = null;

          public GCMIntentService() {
              super(SENDER_ID);
              sendUserCredentialsGCM = new SendUserCredentialsGCM();
          }

          @Override
          protected void onRegistered(Context arg0, String registrationId) {
              Log.i(TAG, "Device registered: regId = " + registrationId);
              sendUserCredentialsGCM.sendRegistrationID(registrationId);
          }

          @Override
          protected void onUnregistered(Context context, String arg1) {
              Log.i(TAG, "unregistered = " + arg1);
              sendUserCredentialsGCM.unregisterFromGCM(LoginActivity.API_OR_BROWSER_KEY);
          }

          @Override
          protected void onMessage(Context context, Intent intent) {
              Log.e("GCM MESSAGE", "Message Received!!!");
              String message = intent.getStringExtra("message");
              if (message == null) {
                  Log.e("NULL MESSAGE", "Message Not Received!!!");
              } else {
                  Log.i(TAG, "new message = " + message);
                  sendGCMIntent(context, message);
              }
          }

          private void sendGCMIntent(Context context, String message) {
              // Rebroadcast the payload so ReceiveBroadcast (registered in the
              // manifest) can pick it up
              Intent broadcastIntent = new Intent();
              broadcastIntent.setAction("GCM_RECEIVED_ACTION");
              broadcastIntent.putExtra("gcm", message);
              context.sendBroadcast(broadcastIntent);
          }

          @Override
          protected void onError(Context context, String errorId) {
              Log.e(TAG, "Received error: " + errorId);
              Toast.makeText(context, "PUSH Notification failed.", Toast.LENGTH_LONG)
                      .show();
          }

          @Override
          protected boolean onRecoverableError(Context context, String errorId) {
              return super.onRecoverableError(context, errorId);
          }
      }

    Read the article

  • Chrome: an extension to block unwanted search results and fight content farms

    Google has just released an extension for its Google Chrome browser that lets users block a site directly from its search engine's results. Google thereby aims to fight spam and content farms. The extension, named "Personal Blocklist", adds a link under each search result that gives the user the option to hide results they consider undesirable. These sites are then recorded in a blacklist that the user can view from Google or fro...

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html; for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP/MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes me to notice. Here's the offending block in apache.conf:

      <Directory "/">
          AllowOverride All
          Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
      </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all CSS/JS/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

      <IfModule mod_rewrite.c>
          # www.example.com -> example.com
          RewriteCond %{HTTPS} !=on
          RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
          RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

          # Remove query strings from static resources
          RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
          RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
          RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

          # Block access to hidden files and directories
          RewriteCond %{SCRIPT_FILENAME} -d [OR]
          RewriteCond %{SCRIPT_FILENAME} -f
          RewriteRule "(^|/)\." - [F]

          # SLIR ... reroute images to image processor
          RewriteCond %{REQUEST_URI} ^/images/.*$
          RewriteRule ^.*$ - [L]

          # ignore rules if URL is a file
          RewriteCond %{REQUEST_FILENAME} !-f
          # ignore rules if URL is not php
          #RewriteCond %{REQUEST_URI} !\.php$
          # catch-all for routing
          RewriteRule . index.php [L]
      </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755, but that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure multiple sites over one codebase that doesn't rely on apache.conf?
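
    One alternative worth sketching (an assumption on my part, not something the poster tried): skip the per-account symlinks entirely and point every domain at the single codebase with ServerAlias, placed in a vhost include file that cPanel's rebuilds merge in rather than overwrite. The include path below is hypothetical; check your cPanel version's documented include locations:

      # e.g. an include under /usr/local/apache/conf/userdata/ (hypothetical path)
      <VirtualHost *:80>
          ServerName example1.com
          ServerAlias example2.com example3.com example4.com
          DocumentRoot /home/mainaccount/public_html
      </VirtualHost>

    With one DocumentRoot there is nothing for SymLinksIfOwnerMatch to break, since no symlinks are traversed at all.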

    Read the article

  • How Many Web Pages Should Be Indexed?

    Search engines crawl websites around the clock looking for unique web pages and content. Google has always been at the top in indexing the deep links of any website: Google indexed 26 million pages in 1998, and over the past 10 years it has indexed more than 1 trillion pages. This gives a fair idea of how big this cyber world is.

    Read the article

  • Can I safely remove pre-installed software in Ubuntu 13.10?

    - by Andzt
    I use a lot of Google products on my laptop, tablet, and mobile. Since I never use all the pre-installed software in Ubuntu, can I remove it? E.g.: LibreOffice | I use Google Docs etc. Rhythmbox | I use Google Play Music. Thunderbird | I use Gmail, don't like a client. Videos | I watch online. Messaging | I use Hangouts. And other software like the Remmina client and the BitTorrent client. Can I simply remove them in Ubuntu Software Centre? I like my laptop clean, without software that I never use. I normally use the Chromium browser to do all my stuff; I think I am a heavy user of Google, and I love the simplicity of Ubuntu. Lol! Can anyone tell or guide me? Thanks!
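
    For reference, one way to remove these from a terminal (a hedged sketch; the package names are my guesses for 13.10, where "Videos" is the totem package and "Messaging" is empathy, so verify with apt-cache search first):

      # Purge the suites listed above; the wildcard catches LibreOffice's split packages
      sudo apt-get remove --purge libreoffice* rhythmbox thunderbird totem empathy
      # Then clear out orphaned dependencies
      sudo apt-get autoremove

    Removing them in Ubuntu Software Centre does the same thing package by package; either way, none of these are required by the desktop itself.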

    Read the article

  • Search Engine Marketing - 5 Myths About SEO That Just Never Seem to Go Away

    The Google PageRank that you see within the Google Toolbar's PageRank indicator means everything to how a site is ranked in the search engines. This one simply can't be true. Some sites that show a zero PageRank score have a way of outranking even some of the higher ranked sites. Although there's no way to know for sure, it's very likely that Google uses a separate internal ranking score for search placement purposes.

    Read the article

  • Is my htaccess setting hurting SEO?

    - by Ramanonos
    I have a site that I redirect to HTTPS; I do this to leverage a wildcard SSL certificate for my password-protected pages. Everything seems to work fine in testing: whether you type http or www, you always get redirected to the SSL https URL. That said, I have about 200-300 external backlinks, many of high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in .htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple .htaccess setting for 301-redirecting www to http, and from http to https:

      RewriteCond %{SERVER_PORT} !443
      RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
      RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

      RewriteCond %{SERVER_PORT} 443
      RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
      RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect to HTTPS, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something... I'm just wondering whether Google isn't picking up my backlinks from other websites that record me as http because I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
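
    For comparison, a hedged sketch of a single-pass canonicalization (not the poster's rules): every non-canonical variant gets one 301 straight to https://example.com, avoiding the chained http -> https hops above, which crawlers have to follow in sequence to attribute a link:

      RewriteEngine On
      # Anything that is not already https://example.com/... gets a single 301
      RewriteCond %{HTTPS} !=on [OR]
      RewriteCond %{HTTP_HOST} ^www\. [NC]
      RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]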

    Read the article
