Search Results

Search found 24291 results on 972 pages for 'google chrome'.


  • Android vs. iPhone: Google Hires Tim Bray

    Linux Planet: "The iPhone vision of the mobile Internet's future omits controversy, sex, and freedom, but includes strict limits on who can know what and who can say what," he wrote. "It's a sterile, Disney-fied walled garden surrounded by sharp-toothed lawyers."

  • nyroModal Window on flash background

    - by Lowgain
    I've got a Flex app running at 100% width and 100% height. The embed code is:

        <script type="text/javascript">
          var fVars = {};
          fVars.wmode = "transparent"; // also tried "opaque"
          swfobject.embedSWF("/swf/app.swf", "app", "100%", "100%", "9.0.0", "expressInstall.swf", fVars);
        </script>
        <div class="app"></div>

    I've also got the nyroModal call, which is essentially:

        $.nyroModalManual({
          url: urlPath,
          wrap: {},
          closeButton: ""
        });

    The modal window I am trying to open in this case is a div with some text and another Flash embed, which works. I've also set the app div's z-index to 0. In Firefox this looks fine and everything works. In Chrome, however, nyroModal's fade-in/transparent overlay does not show up, and only the second swf is visible on top of the background Flash. Am I missing anything here? Is this a known issue with Chrome?
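
    A likely culprit (my reading, not confirmed in the original post): swfobject treats wmode as an object param rather than a flashvar, so setting it on fVars has no effect on stacking. A minimal sketch passing it through embedSWF's params argument instead:

        <script type="text/javascript">
          // flashvars go to the movie; params configure the plugin embed itself.
          var flashVars = {};
          var params = { wmode: "transparent" }; // or "opaque"; needed for HTML to stack above Flash
          var attributes = {};
          swfobject.embedSWF("/swf/app.swf", "app", "100%", "100%", "9.0.0",
                             "expressInstall.swf", flashVars, params, attributes);
        </script>

    With wmode in params, the nyroModal overlay should be able to render above the full-page swf; whether Chrome has an additional quirk here I can't say.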

  • onmousemove not working with setTimeout and alerts in Chrome

    - by Diogo Fernandes
    What's wrong with this code? It works in IE and Firefox, but not in Chrome. The idea is that fnTimeOut schedules fnAlert to fire 5 seconds after an onmousemove, and that part is fine. But in Chrome, when I click the "ok" button, fnAlert is triggered instantly. It should fire only 5 seconds after I move the mouse. Help me please.

        <input type="button" onclick="alert(1);" value="ok">
        <script>
          document.onmousemove = fnTimeOut;
          var t = null;
          function fnAlert() {
            alert(2);
          }
          function fnTimeOut() {
            clearTimeout(t);
            t = setTimeout(fnAlert, 5000);
          }
        </script>
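
    One plausible explanation (my interpretation, not from the original thread): if the 5-second timer comes due while alert(1) is blocking the page, the queued fnAlert runs the moment the alert is dismissed, which looks instant; some browsers also fire a mousemove on a click even when the pointer hasn't moved. A hedged workaround sketch that ignores mousemove events whose coordinates haven't changed:

        <script>
          var t = null;
          var lastX = null, lastY = null;
          document.onmousemove = function (e) {
            e = e || window.event;
            // Ignore synthetic mousemove events where the pointer didn't actually move.
            if (e.clientX === lastX && e.clientY === lastY) {
              return;
            }
            lastX = e.clientX;
            lastY = e.clientY;
            clearTimeout(t);
            t = setTimeout(function () { alert(2); }, 5000);
          };
        </script>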

  • Blazing Keywords - The Google Blazing Keywords Review

    Many people currently trying different methods of online marketing to promote and build their business have heard that keyword research is vital to its success. Unfortunately, most online marketing companies do not properly teach their members how to do keyword research effectively, so many people are left looking for services that promise to do it for them.

  • Update frequency for dynamic favicons

    - by rosscowar
    I wanted to learn how to dynamically update the favicon in Google Chrome, and I've noticed that the browser seems to throttle how often you can update it per second, which makes things look sloppy. The test page I've made for this is: http://staticadmin.com/countdown.html It's simply a scrolling message displaying the results of a countdown. I added an input field to tweak how many pixels the script moves per second, and I've eyeballed the maximum for smooth scrolling in Google Chrome at about 5 frames per second; I have not tested any other browsers. My questions: what is the maximum update frequency, is there any way to change it, and is there a particular reason behind it? NOTE: I've also noticed that this value changes based on window focus. It seems to drop to about 1 update per second when the browser window isn't focused and returns to the max when you return.
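
    For reference, a minimal sketch of the usual dynamic-favicon technique (my illustration; the test page's own code isn't shown here): draw each frame to a small canvas, then replace the <link rel="icon"> element, since many browsers only notice a freshly inserted link node:

        <script>
          // Render a frame to a 16x16 canvas and swap it in as the favicon.
          // Browsers may still throttle how often the tab icon actually repaints.
          function setFavicon(drawFrame) {
            var canvas = document.createElement("canvas");
            canvas.width = 16;
            canvas.height = 16;
            drawFrame(canvas.getContext("2d"));
            var old = document.querySelector("link[rel='icon']");
            if (old) {
              old.parentNode.removeChild(old);
            }
            var link = document.createElement("link");
            link.rel = "icon";
            link.type = "image/png";
            link.href = canvas.toDataURL("image/png");
            document.head.appendChild(link);
          }
        </script>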

  • Why do SEO-based code tips not appear to affect ranking?

    - by Ben
    I've been researching various methods for SEO where pages have precise titles, keywords are highlighted with h tags, and the many boxes stated in good page markup for SEO are ticked. However, looking at some top-ranked sites on Google for key terms, they have terrible SEO markup: really long page titles, no h tags, limited appearance of keywords in the text, and so on. SEO analysis services rate them lower than other sites, yet these sites rank really high. Even with a low number of back-links they rank highly, so I don't understand how these sites earn their position when they appear inferior to those below them that have better markup and links.

    I don't want to cause trouble by mentioning sites or keywords, but searching Google for 'executive search', it makes no sense why the roughly 5th-placed site should rank so highly, especially with all the added .swfs. The same applies to the top result for 'Japan Executive Search'. My main point is that these sites seem to lack the important structural rules stated in SEO page-rating applications and general suggested best practice, nor do they show many back-links. It makes me feel like there is no point writing decent markup if it really doesn't matter. Can anyone explain how sites with such markup and few back-links can outrank well-written, well-structured sites with greater linkage? Sorry if this is a fuzzy question; I want to avoid singling out any sites, but it really has me perplexed that sites which appear to ignore the suggested best practices rank so well.

  • How to get an AuthSub token in C# for Google Apps Contacts?

    - by Pari
    Hi, I found this code on the net:

        HttpWebRequest update = (HttpWebRequest)WebRequest.Create(editUrl); // editUrl is a string containing the contact's edit URL
        update.Method = "PUT";
        update.ContentType = "application/atom+xml";
        update.Headers.Add(HttpRequestHeader.Authorization, "GoogleLogin auth=" + AuthToken);
        update.Headers.Add(HttpRequestHeader.IfMatch, etag); // etag is a string containing the <entry> element's gd:etag attribute value
        update.Headers.Add("GData-Version", "3.0");
        Stream streamRequest = update.GetRequestStream();
        StreamWriter streamWriter = new StreamWriter(streamRequest, Encoding.UTF8);
        streamWriter.Write(entry); // entry is the string representation of the atom entry to update
        streamWriter.Close();
        WebResponse response = update.GetResponse();

    But I am not getting what to put in editUrl, AuthToken and etag.

    a) I studied AuthToken from this link, but I'm not getting how to create it. Can anyone help me out here?
    b) I'm also not getting editUrl and etag.

    I am trying to use this method to migrate my contacts to Google Apps. Thanks
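
    For orientation (my sketch, not from the original post): in the Contacts GData feed, each contact <entry> carries the gd:etag attribute and an edit link, which is where etag and editUrl come from; the values below are hypothetical placeholders:

        <entry gd:etag='"hypothetical-etag-value"'>
          ...
          <link rel="edit" type="application/atom+xml"
                href="https://www.google.com/m8/feeds/contacts/user%40yourdomain.com/full/contactId"/>
        </entry>

    As for the token: the snippet's "GoogleLogin auth=" header is actually ClientLogin, not AuthSub; an AuthSub token would instead be sent as Authorization: AuthSub token="...".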

  • Groovlet not working in GWT project (container: embedded Jetty in the Google Plugin)

    - by user325284
    Hi, I am working on a GWT application which uses GWT-RPC. I just made a test groovlet to see if it worked, but ran into some problems. Here's my groovlet:

        package groovy.servlet;

        print "testing the groovlet";

    Every tutorial said we don't need to subclass anything and that a simple script would act as a servlet. My web.xml looks like this:

        <!-- groovy -->
        <servlet>
          <servlet-name>testGroovy</servlet-name>
          <servlet-class>groovy.servlet.testGroovy</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>testGroovy</servlet-name>
          <url-pattern>*.groovy</url-pattern>
        </servlet-mapping>

    When I Run As > Web Application, I get the following error from Jetty:

        [WARN] failed testGroovy
        javax.servlet.UnavailableException: Servlet class groovy.servlet.testGroovy is not a javax.servlet.Servlet
            at org.mortbay.jetty.servlet.ServletHolder.checkServletType(ServletHolder.java:377)
            at org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:234)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:616)
            at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
            at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1220)
            at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:513)
            at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
            at com.google.gwt.dev.shell.jetty.JettyLauncher$WebAppContextWithReload.doStart(JettyLauncher.java:447)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
            at org.mortbay.jetty.handler.RequestLogHandler.doStart(RequestLogHandler.java:115)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
            at org.mortbay.jetty.Server.doStart(Server.java:222)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at com.google.gwt.dev.shell.jetty.JettyLauncher.start(JettyLauncher.java:543)
            at com.google.gwt.dev.DevMode.doStartUpServer(DevMode.java:421)
            at com.google.gwt.dev.DevModeBase.startUp(DevModeBase.java:1035)
            at com.google.gwt.dev.DevModeBase.run(DevModeBase.java:783)
            at com.google.gwt.dev.DevMode.main(DevMode.java:275)

    What did I miss?
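
    The usual groovlet setup (my suggestion, based on Groovy's documented GroovyServlet; not verified against this project): a groovlet script is not itself a servlet class, so web.xml should register groovy.servlet.GroovyServlet and map *.groovy to it; that servlet then locates and runs the script:

        <servlet>
          <servlet-name>GroovyServlet</servlet-name>
          <!-- Groovy's servlet compiles and runs .groovy scripts (groovlets) on request -->
          <servlet-class>groovy.servlet.GroovyServlet</servlet-class>
        </servlet>
        <servlet-mapping>
          <servlet-name>GroovyServlet</servlet-name>
          <url-pattern>*.groovy</url-pattern>
        </servlet-mapping>

    This also means the script lives in the webapp (with the Groovy jar on the classpath) rather than being declared as a servlet-class itself.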

  • GWT: reporting crawl errors for non-existent links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another URL that never existed either. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, fyi). Example:

        http://example.com/joomla/index.php/component/user/register

    Linked from:

        http://example.com/joomla/component/user/login?return=L2######

    What is going on?

    UPDATE 1: I tried something: I provided one of the faulty URLs to the "Fetch as Google" feature. Instead of returning a 404, it returns a 301 to another Joomla page.

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>

  • Must issue a STARTTLS command first. Sending email with Java and Google Apps

    - by Sergio del Amo
    I am trying to use Bill the Lizard's code to send an email using Google Apps. I am getting this error:

        Exception in thread "main" javax.mail.SendFailedException: Sending failed;
          nested exception is:
            javax.mail.MessagingException: 530 5.7.0 Must issue a STARTTLS command first. f3sm9277120nfh.74
            at javax.mail.Transport.send0(Transport.java:219)
            at javax.mail.Transport.send(Transport.java:81)
            at SendMailUsingAuthentication.postMail(SendMailUsingAuthentication.java:81)
            at SendMailUsingAuthentication.main(SendMailUsingAuthentication.java:44)

    Bill's code contains the following line, which seems related to the error:

        props.put("mail.smtp.starttls.enable","true");

    However, it does not help. These are my import statements:

        import java.util.Properties;
        import javax.mail.Authenticator;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

    Does anyone know about this error?
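
    A common cause (my guess; the full code isn't shown): the STARTTLS property has to be set on the same Properties object the Session is built from, and Gmail's submission port is 587. A minimal properties sketch, assuming Gmail/Google Apps SMTP:

        mail.smtp.host=smtp.gmail.com
        mail.smtp.port=587
        mail.smtp.auth=true
        mail.smtp.starttls.enable=true

    It is also worth using Session.getInstance(props, authenticator) rather than Session.getDefaultInstance, since the default session may have been created earlier with different properties.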

  • Handling SEO for infinite pages that trigger slow external API calls

    - by Noam
    I have an 'infinite' number of pages on my site which rely on an external API. Generating each page takes time (about 1 minute). Links on the site point to such pages, and when a user clicks one it is generated while he waits. Since I cannot pre-create them all, I am trying to figure out the best SEO approach to handle these pages. Options:

    1. Create really simple pages for the web spiders, so that only real users fetch the data and generate the full page. I'm a little afraid Google will see this as low-quality content, which might also feel duplicated.

    2. Put them under a directory on my site (e.g. /non-generated/) and disallow it in robots.txt (a minimal example follows below). The problem here is that I don't want users to deal with a different URL when sharing the page or making sense of it. I've thought about redirecting real users from this URL back to the regular hierarchy, thereby 'fooling' Google into not reaching them, but again I'm not sure Google will like me for that.

    3. Let Google crawl these pages. The main problem is that I can't control the rate of the API calls, and my site also seems slower than it should from a spider's perspective (if it crawled only the already-generated pages, it would think the site is much faster).

    Which approach would you suggest?
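
    For option 2, the robots.txt rule itself is a one-liner (using the illustrative path from the question):

        User-agent: *
        Disallow: /non-generated/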

  • There's More to SEO Than Google

    Though the 10-year search partnership deal between the former rivals was finalised in December, it still required approval from both the US Department of Justice and the European Commission. The Microsoft and Yahoo! CEOs seem thrilled with the deal: Microsoft's Steve Ballmer described it as 'an exciting milestone', whilst Yahoo!'s Carol Bartz declared it a 'breakthrough search alliance.'

  • On Page SEO - The Leap To Google First Page

    Besides accurate and descriptive content, also make sure that your title contains at least one tested keyword and that your meta-description tag is convincing. You could say that once your landing page gets an 85% rating, your SEO job is complete and you are free to take on the other facets of your e-business that may be waiting for your attention.

  • How can CSS be applied to components from Google Closure Library?

    - by Jason
    I'm getting my feet wet with Google's Closure Library. I've created a simple page with a Select widget, but it clearly needs some styling (the element looks like plain text, and in the example below the menu items pop up beneath the button). I'm assuming the library supports styles; how can I hook into them? Each example page in SVN seems to use its own CSS. An abbreviated example follows:

        <body>
          <div id="inputContainer"></div>
          <script src="closure-library-read-only/closure/goog/base.js"></script>
          <script>
            goog.require('goog.dom');
            goog.require('goog.ui.Button');
            goog.require('goog.ui.MenuItem');
            goog.require('goog.ui.Select');
          </script>
          <script>
            var inputDiv = goog.dom.$("inputContainer");
            var testSelect = new goog.ui.Select("Test selector");
            testSelect.addItem(new goog.ui.MenuItem("1"));
            testSelect.addItem(new goog.ui.MenuItem("2"));
            testSelect.addItem(new goog.ui.MenuItem("3"));
            var submitButton = new goog.ui.Button("Go");
            testSelect.render(inputDiv);
            submitButton.render(inputDiv);
          </script>
        </body>
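
    What typically fixes this (my understanding of the library's layout; exact file names may vary by revision): the goog.ui widgets ship with per-widget stylesheets under closure/goog/css/, which the demo pages link in, e.g.:

        <link rel="stylesheet" href="closure-library-read-only/closure/goog/css/common.css">
        <link rel="stylesheet" href="closure-library-read-only/closure/goog/css/button.css">
        <link rel="stylesheet" href="closure-library-read-only/closure/goog/css/menubutton.css">
        <link rel="stylesheet" href="closure-library-read-only/closure/goog/css/menu.css">
        <link rel="stylesheet" href="closure-library-read-only/closure/goog/css/menuitem.css">

    From there you can override the goog-* class names with your own CSS.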

  • IIS SMTP server (Installed on local server) in parallel to Google Apps

    - by sharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my address on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters, etc.) from the website (www.mydomain.com) using the IIS SMTP service installed on my Windows machine. These emails are sent from [email protected].

    My problem is that when I send email from the website using IIS SMTP to an address like [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivers these emails to my local server.

    Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20  3600s
        mydomain.com  TXT  v=spf1 ip4:82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com       3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com  3600s
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com    3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com    3600s

    Please help! Thanks.

  • Webmaster Tools, www and no-www, duplicate content and subdomains

    - by Jay
    I have not come to any conclusive answers, after many hours of research across many websites, on the specific issue I am trying to figure out. My company has two websites: the main one at www.example.com, and subdomain.example.com, which is a subdomain of the first and is our self-hosted blog. Google treats the www and no-www (called "naked" from now on) versions of each of these as different sites. I completely understand this. It is also advised that both be set up in Google Webmaster Tools, which I have done. Correct me if I am wrong about having both set up.

    Now, a preferred domain can be set in Webmaster Tools only at the root domain level; for a subdomain the tool says "Restricted to root level domains only". So it appears the subdomain should follow what the root domain says, which for our preferred setting is to display www.example.com rather than the naked version. That is one issue: one site displays one way and the other the other way. Do we have the wrong redirects in place for the subdomain? (The usual canonical-host redirect is sketched below.)

    Another question: does this setup have any effect on SEO with regard to duplicate content on the web?
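
    For reference, a minimal sketch of the canonical-host redirect many sites use to collapse naked URLs into www (assuming Apache with mod_rewrite; adjust the host names to your domain):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    A single 301 per naked-host URL also removes the duplicate-content concern for that host.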

  • Website restyle, SEO migration plan?

    - by Goboozo
    I am currently in a project for one of my biggest clients. We have built a website that will replace the old one. When it comes to actual content it is largely the same, but the presentation of the content has changed drastically; from our point of view it is much more user-friendly (the main reason to update the site). Since the site's presentation has changed, we have some major changes in:

    - HTML & CSS: to change the presentation of the content
    - URLs: to make them better understandable (301 redirects have been taken care of and are in place)
    - Breadcrumbs: to enhance navigation (we have made the breadcrumbs match the URLs exactly)
    - Pagination: added to enable content browsing
    - Title tags: added descriptive title tags to the major links and buttons

    Basically all user-facing content, including meta tags, has remained the same. Since this company is rather successful and 90% of its clients come from Google's organic results, I am obliged to take all necessary precautions. People tell me I need a migration plan to prevent the site from being hurt in Google, but I have never worked with such a plan. So, based on the above: would you consider a migration plan necessary, and what precautions/actions would you recommend to prevent us from losing our SERP positions? Many thanks in advance for your answers.
