Search Results

Search found 24291 results on 972 pages for 'google chrome'.

Page 302/972 | < Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >

  • Image storage as a service

    - by Samuel
    Google App Engine provides an Images API for storing and retrieving images. We are currently not in a position to deploy our application on App Engine because of limitations in the Java frameworks (JBoss Seam 2.2.0) we are using to build our Java EE application. We would eventually like to deploy our production application on Google App Engine, but what are the short-term options (Java-based open source products) that provide comparable functionality to App Engine's Images API and will offer an easier migration path later on?
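
    One short-term, self-hosted option is to resize and store uploads with the JDK's own javax.imageio, which keeps the code free of App Engine dependencies until the migration. The class name, target directory and sizing policy below are illustrative assumptions, not a specific product recommendation:

      import java.awt.Graphics2D;
      import java.awt.image.BufferedImage;
      import java.io.File;
      import java.io.IOException;
      import javax.imageio.ImageIO;

      public class LocalImageStore {
          // Stores a resized copy of an uploaded image on the local filesystem,
          // keyed by an ID the application controls (hypothetical layout).
          public static File storeResized(File source, File targetDir, String id, int maxWidth) throws IOException {
              BufferedImage original = ImageIO.read(source);
              int width = Math.min(maxWidth, original.getWidth());
              int height = (int) (original.getHeight() * (width / (double) original.getWidth()));
              BufferedImage scaled = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
              Graphics2D g = scaled.createGraphics();
              g.drawImage(original, 0, 0, width, height, null);   // draw the original scaled to the target size
              g.dispose();
              File target = new File(targetDir, id + ".jpg");
              ImageIO.write(scaled, "jpg", target);               // persist as JPEG
              return target;
          }
      }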

    Read the article

  • Prepare community images for google image search indexing

    - by Vittorio Vittori
    Hi, I'm trying to understand what I can do to make my site reachable by Google Image Search spiders. I like last.fm's solution, and I thought of using a technique like theirs to let Google find the artist images on their pages. When I search for an artist on Google Image Search, as often as not I find an image from a last.fm artist page. For example: if I search for the band Pure Reason Revolution, it brings me to the artist's image page http://www.last.fm/music/Pure+Reason+Revolution/+images/4284073. Now, if I take a look at the image file, I can see it's named http://userserve-ak.last.fm/serve/500/4284073/Pure+Reason+Revolution+4.jpg, so if I try to understand how the service works I can break the URL down as:
      http://userserve-ak.last.fm/serve/ - the server that serves the images
      500/ - the selected image size
      4284073/ - the image ID in the database
      Pure+Reason+Revolution+4.jpg - the image name
    It's hard to believe the real filename is Pure+Reason+Revolution+4.jpg, because of overwrite problems when a user uploads it; in fact, if I type http://userserve-ak.last.fm/serve/500/4284073.jpg I probably get the real image location and filename. With this technique the image is highly reachable by search engines and easily indexed. My question is: does a guide or tutorial exist for this kind of technique, or something similar?
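
    As a purely illustrative sketch of how such a scheme could be served (last.fm's real implementation is unknown, and the class, storage path and URL pattern below are all assumptions), a small servlet can resolve the image by its numeric ID and simply ignore the trailing human-readable slug:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Hypothetical servlet mapped to /serve/*; it handles both
      // /{size}/{id}/{Readable+Slug}.jpg and /{size}/{id}.jpg.
      public class ImageServeServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              String pathInfo = req.getPathInfo();
              if (pathInfo == null || pathInfo.length() < 2) {
                  resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
                  return;
              }
              String[] parts = pathInfo.substring(1).split("/");
              String size = parts[0];
              String id = parts[1].replaceAll("\\.jpg$", "");    // any slug after the ID is ignored
              Path file = Paths.get("/data/images", size, id + ".jpg");
              if (!Files.exists(file)) {
                  resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                  return;
              }
              resp.setContentType("image/jpeg");
              Files.copy(file, resp.getOutputStream());          // stream the stored file
          }
      }

    The key point is that only the numeric ID participates in the lookup, so the descriptive slug can change freely (or be omitted) without breaking links, while search engines still see a keyword-rich filename.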

    Read the article

  • Language Translation API

    - by kandarp
    How can I translate text in Java? Does any API exist that converts text from one language to another? I am using the Google Translate API, but it is giving me the exception below.
      java.lang.Exception: [google-api-translate-java] Error retrieving translation.
          at com.google.api.GoogleAPI.retrieveJSON(GoogleAPI.java:123)
          at com.google.api.translate.Translate.execute(Translate.java:69)
          at com.nextenders.client.beans.ruleengine.RuleEngineTest.main(RuleEngineTest.java:27)
      Caused by: java.net.ConnectException: Connection timed out: connect
          at java.net.PlainSocketImpl.socketConnect(Native Method)
          at java.net.PlainSocketImpl.doConnect(Unknown Source)
          at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
          at java.net.PlainSocketImpl.connect(Unknown Source)
          at java.net.SocksSocketImpl.connect(Unknown Source)
          at java.net.Socket.connect(Unknown Source)
          at java.net.Socket.connect(Unknown Source)
          at sun.net.NetworkClient.doConnect(Unknown Source)
          at sun.net.www.http.HttpClient.openServer(Unknown Source)
          at sun.net.www.http.HttpClient.openServer(Unknown Source)
          at sun.net.www.http.HttpClient.<init>(Unknown Source)
          at sun.net.www.http.HttpClient.New(Unknown Source)
          at sun.net.www.http.HttpClient.New(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
          at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
          at com.google.api.GoogleAPI.retrieveJSON(GoogleAPI.java:107)
          ... 2 more
    If anybody knows of an API for translation, please let me know.
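
    The Connection timed out cause points at the network layer rather than at the translate library itself; a frequent culprit is an outbound HTTP proxy the JVM does not know about. Below is a hedged sketch of setting the standard proxy system properties before calling the client. The proxy host, port and referrer URL are placeholder assumptions, and whether setHttpReferrer is needed depends on the library version:

      import com.google.api.GoogleAPI;
      import com.google.api.translate.Language;
      import com.google.api.translate.Translate;

      public class TranslateWithProxy {
          public static void main(String[] args) throws Exception {
              // Placeholder proxy settings; replace with your network's real values.
              System.setProperty("http.proxyHost", "proxy.example.com");
              System.setProperty("http.proxyPort", "8080");
              // Some versions of google-api-translate-java expect a referrer to be set.
              GoogleAPI.setHttpReferrer("http://www.example.com");
              String translated = Translate.execute("Hello world", Language.ENGLISH, Language.FRENCH);
              System.out.println(translated);
          }
      }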

    Read the article

  • Possible to access gdata api when using Java App Engine?

    - by PCBEEF
    I have a dilemma: I want to create an application that manipulates Google Contacts information. The problem comes down to the fact that Python only supports version 1.0 of the API whilst Java supports 3.0. I also want it to be web-based, so I'm looking at Google App Engine, but it seems that only the Python version of App Engine supports importing the GData APIs whilst the Java version does not. So it's either web-based with version 1.0 of the API, or non-web-based with version 3.0. I actually need version 3.0 to get access to the extra fields provided by Google Contacts. So my question is: is there a way to get access to the GData API under Google App Engine using Java? If not, is there an ETA on when version 3.0 of the GData API will be released for Python? Cheers.
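
    For reference, this is roughly what a Contacts v3 read looks like with the GData Java client library; whether it runs unchanged inside the App Engine Java sandbox is exactly the open question above, so treat it as a sketch only (the application name and credentials are placeholders):

      import java.net.URL;
      import com.google.gdata.client.contacts.ContactsService;
      import com.google.gdata.data.contacts.ContactEntry;
      import com.google.gdata.data.contacts.ContactFeed;

      public class ContactsDump {
          public static void main(String[] args) throws Exception {
              // Placeholder app name and credentials.
              ContactsService service = new ContactsService("example-contacts-dump");
              service.setUserCredentials("user@example.com", "password");
              URL feedUrl = new URL("https://www.google.com/m8/feeds/contacts/default/full");
              ContactFeed feed = service.getFeed(feedUrl, ContactFeed.class);
              for (ContactEntry entry : feed.getEntries()) {
                  System.out.println(entry.getTitle().getPlainText());  // contact's display name
              }
          }
      }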

    Read the article

  • Hide a single content block from search engines?

    - by jonas
    A header is automatically added at the top of each content URL, but it's not relevant for search and it messes up the results by being the first line of every page (in the code it's the last line, but visually it's the first, which Google is able to notice).
    Solution 1: you could put the header (the content to exclude from Google searches) in an iframe with a static URL, domain.com/header.html, and a <meta name="robots" content="noindex" /> tag. Are there trade-offs to this solution?
    Solution 2: you could deliver it conditionally via Apache mod_rewrite, PHP or JavaScript.
    - Trade-off(?): Google does not like it. Will Google ever fetch pages with a standard user's user agent and compare?
    - Trade-off: the hidden content will be missing from the Google cache version as well.
    Example: add-header.php:
      <?php
      $path = $_GET['path'];
      echo file_get_contents($_SERVER["DOCUMENT_ROOT"].$path);
      ?>
    Apache virtual host config:
      RewriteCond %{HTTP_USER_AGENT} !.*spider.* [NC]
      RewriteCond %{HTTP_USER_AGENT} !Yahoo.* [NC]
      RewriteCond %{HTTP_USER_AGENT} !Bing.* [NC]
      RewriteCond %{HTTP_USER_AGENT} !Yandex.* [NC]
      RewriteCond %{HTTP_USER_AGENT} !Baidu.* [NC]
      RewriteCond %{HTTP_USER_AGENT} !.*bot.* [NC]
      RewriteCond %{SCRIPT_FILENAME} \.htm$ [NC,OR]
      RewriteCond %{SCRIPT_FILENAME} \.html$ [NC,OR]
      RewriteCond %{SCRIPT_FILENAME} \.php$ [NC]
      RewriteRule ^(.*)$ /add-header.php?path=$1 [L]

    Read the article

  • Problem executing trackPageview with Google Analytics.

    - by dmrnj
    I'm trying to capture clicks on certain download links and track them in Google Analytics. Here's my code:
      var links = document.getElementsByTagName("a");
      for (var i = 0; i < links.length; i++) {
          linkpath = links[i].pathname;
          if (linkpath.match(/\.(pdf|xls|ppt|doc|zip|txt)$/) || links[i].href.indexOf("mode=pdf") >= 0) { // this matches our search
              addClickTracker(links[i]);
          }
      }
      function addClickTracker(obj) {
          if (obj.addEventListener) {
              obj.addEventListener('click', track, true);
          } else if (obj.attachEvent) {
              obj.attachEvent("on" + 'click', track);
          }
      }
      function track(e) {
          linkhref = (e.srcElement) ? e.srcElement.pathname : this.pathname;
          pageTracker._trackPageview(linkhref);
      }
    Everything up to the pageTracker._trackPageview() call works. In my debugging, linkhref is being passed fine as a string: no abnormal characters, nothing. The issue is that, watching my HTTP requests, Google never makes a second call to the tracking GIF (as it does if you call this function from an "onclick" attribute). Calling the tracker from my JS console also works as expected; it fails only in my listener. Could it be that my listener is not deferring the default action (loading the new page) long enough for the browser to contact Google's servers? I've seen other tracking scripts that do a similar thing without any deferral.

    Read the article

  • Can't get Java program to run! NoClassDefFoundError?

    - by mcintyre321
    I'm a .NET developer, but for my current project I need to use Google Caja, a Java project. Uh-oh! I've followed the guide at http://code.google.com/p/google-caja/wiki/RunningCaja on my Windows machine, but can't get the program to run. The command line they suggest didn't work, so I cd'd into the ant-jars directory and tried to run pluginc.jar:
      D:\java\caja\svn-changes\pristine\ant-jars>java -cp . -jar pluginc.jar -i test.htm
      Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/cli/ParseException
          at com.google.caja.plugin.PluginCompilerMain.<init>(PluginCompilerMain.java:78)
          at com.google.caja.plugin.PluginCompilerMain.main(PluginCompilerMain.java:368)
      Caused by: java.lang.ClassNotFoundException: org.apache.commons.cli.ParseException
          at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
          at java.security.AccessController.doPrivileged(Native Method)
          at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
          at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
          at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
          ... 2 more
    What's that all about? I've also tried file:///d:/java/caja/svn-changes/pristine/ant-jars/test.htm instead of test.htm, as looking at the source it seems the file param is a URI. I've also tried running IKVM on pluginc and then not worrying about Java, but that came up with the same class-not-found error too... thanks!
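
    One thing worth checking, offered as an assumption rather than a confirmed fix: when java is launched with -jar, the -cp option is silently ignored, so an Apache Commons CLI jar sitting next to pluginc.jar never reaches the classpath. Running the main class named in the stack trace directly, with an explicit classpath, would look something like the line below; the commons-cli jar name and version are guesses:

      java -cp "pluginc.jar;commons-cli-1.2.jar" com.google.caja.plugin.PluginCompilerMain -i test.htm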

    Read the article

  • Unable to migrate mail with large attachments up to 2 MB

    - by Preeti
    I am creating an EML file with System.Net.Mail. After that, I am migrating it to Google Mail using the Google API version 2. For small attachments it is working fine, but in the case of a large attachment (up to 1 MB or 2 MB) it throws the exception "Request aborted". How can I migrate mail with a large file attachment? I found a solution for sending large attachments using System.Net.Mail, but how do I implement it in the case of Google Apps? Thanks

    Read the article

  • Eclipse + AppEngine =? autocomplete

    - by Brandon Watson
    I was doing some beginner App Engine dev on a Windows box and installed Eclipse for that. I liked the autocompletion I got for the objects and functions. I moved my dev environment over to my MacBook and installed Eclipse Ganymede. I installed the App Engine SDK and the Eclipse plug-in. However, when I am typing out code now, autocomplete isn't functioning. Did I miss a step?
    UPDATE: Just to add to this, the line
      import cgi
    appears to give me what I need; when I type "cgi." I get all of the autocomplete. However, the lines
      from google.appengine.api import users
      from google.appengine.ext import webapp
      from google.appengine.ext.webapp.util import run_wsgi_app
      from google.appengine.ext import db
    don't give me any autocomplete. If I type "users." there is no autocomplete.

    Read the article

  • Getting all pdf files from a domain (for example *.adomain.com)

    - by Zack
    I need to download all PDF files from a certain domain. There are about 6,000 PDFs on that domain and most of them don't have an HTML link (either the link has been removed or one was never put there in the first place). I know there are about 6,000 files because I'm googling: filetype:pdf site:*.adomain.com. However, Google lists only the first 1,000 results. I believe there are two ways to achieve this:
    a) Use Google. However, how can I get all 6,000 results from Google? Maybe a scraper? (I tried Scroogle, no luck.)
    b) Skip Google and search the domain directly for PDF files. How do I do that when most of them are not linked?

    Read the article

  • Google Maps + PODSCMS

    - by Sharath
    Hi, I have a column called 'location' in a pod I have created. I want to write an input helper that displays a Google Map; the user clicks on a location, and I capture that into this column. This is what I am trying to do:
      <div class="form pick <?php echo $location; ?>">
        <script src="http://maps.google.com/maps?file=api&v=1&key=<key_here>" type="text/javascript"></script>
        <div id="map" style="width:500px; height:500px">
          <script type="text/javascript">
            map = new GMap(document.getElementById("map"));
            map.centerAndZoom(new GPoint(77.595062, 13.043359), 6);
            map.setUIToDefault();
          </script>
        </div>
      </div>
    But I am getting a number of errors from the Google Maps API. Any idea how I can get this helper in there?

    Read the article

  • Which PHP calendar will suit my needs? Or build my own?

    - by John
    There are so many web-based calendars out there that I'm not sure which will best suit my purposes. I want to create an events calendar, which I suppose looks and functions in much the same way as Google Calendar, but users shouldn't need to sign up for a Gmail account or a Google account. Users of my social network website should be able to create events in the calendar and share them with other friends in their "friend network". In month, year or day views, you can hover over events to see a bubble dialog that lists the events in detail. People need to pay a monthly fee to use my website (not sure if this prevents me from using Google Apps). So is something like Google Calendar the way to go? Or are there other PHP/JavaScript calendars out there that already meet my needs? Or would it just save me more time to build my own?

    Read the article

  • Unable to migrate mails to "Trash"

    - by Preeti
    Hi, I am migrating some mails to 'TRASH' in Google Apps, using the Google API version 2. Code sample:
      MailItemEntry[] entries = new MailItemEntry[1];
      entries[0] = new MailItemEntry();
      entries[0].Rfc822Msg = new Rfc822MsgElement(msg);
      entries[0].MailItemProperties.Add(MailItemPropertyElement.TRASH);
    I also tried:
      entries[0].Labels.Add(new LabelElement("Trash"));
    How can I migrate mails to "TRASH" in Google Apps? Thanks

    Read the article

  • Failed to load URL in DOMDocument?

    - by Lakhan
    I'm trying to get the latitude and longitude from Google Maps by giving an address, but it's giving an error. When I copy the URL directly into the browser's address bar it gives the correct result. If anybody knows the answer, please reply. Thanks in advance. Here is my code:
      <?php
      $key = '[AIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0]';
      $opt = array (
          'address' => urlencode('Kolkata,India, ON'),
          'output' => 'xml'
      );
      $url = 'http://maps.google.com/maps/geo?q='.$opt['address'].'&output='.$opt['output'].'&oe=utf8&key='.$key;
      $dom = new DOMDocument();
      $dom->load($url);
      $xpath = new DomXPath($dom);
      $xpath->registerNamespace('ge', 'http://earth.google.com/kml/2.0');
      $statusCode = $xpath->query('//ge:Status/ge:code');
      if ($statusCode->item(0)->nodeValue == '200') {
          $pointStr = $xpath->query('//ge:coordinates');
          $point = explode(",", $pointStr->item(0)->nodeValue);
          $lat = $point[1];
          $lon = $point[0];
          echo '<pre>';
          echo 'Lat: '.$lat.', Lon: '.$lon;
          echo '</pre>';
      }
      ?>
    The following errors occur:
      Warning: DOMDocument::load(http://maps.google.com/maps/geo?q=Kolkata%252CIndia%252C%2BON&output=xml&oe=utf8&key=%5BAIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0%5D) [domdocument.load]: failed to open stream: HTTP request failed! HTTP/1.0 400 Bad Request in C:\xampp\htdocs\draw\draw.php on line 19
      Warning: DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "http://maps.google.com/maps/geo?q=Kolkata%252CIndia%252C%2BON&output=xml&oe=utf8&key=%5BAIzaSyAoPgQlfKsBKQcBGB01cl8KmiPee3SmpU0%5D" in C:\xampp\htdocs\draw\draw.php on line 19
      Notice: Trying to get property of non-object in C:\xampp\htdocs\draw\draw.php on line 26

    Read the article

  • dynamic meta data and description for pages

    - by pradeep
    Hi, I am developing pages dynamically in PHP, i.e. the data gets filled in from a MySQL DB. How do I assign proper meta keywords and a description to these dynamic pages so that Google recognises them properly? What needs to be put in the page so that Google picks up the description correctly? When I search for one of my pages on Google, it takes the data in the page as the description, not the contents of the description tag.

    Read the article

  • Integrating jQuery autocomplete with Google site search

    - by user1715700
    I have a bit of an odd situation. I have to implement search on a public-facing website - but that search must be able to search web pages and also offer autocomplete/suggestion functionality that comes from a list of terms in a DB table. So, I'm wondering a couple of things: 1) should I be looking at Google search and jQuery autocomplete? 2) is there something else I should be looking at instead? 3) if this is the right path to be heading down, are there enough pointers on implementation? The crux of my problem is that the terms I need for the autocomplete/suggest functionality reside in a database and not on the web pages. So I thought Google would be appropriate for searching the web pages, and that I could sort of fill in the blanks, so to speak, with these terms from the DB. I'm going to say that there are roughly 20,000-40,000 terms or so that need autocomplete, but that is really just a very rough guess - it could be less. I'm open to ideas and not really married to any particular solution. However, I will admit to liking the idea of offloading the search to Google. I hear they have a good algorithm ;) Any ideas, thoughts, or leads are greatly appreciated!
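
    If the DB-backed suggestions end up behind your own endpoint, the autocomplete widget only needs a URL that returns matching terms. A hypothetical server-side sketch in Java is shown below; the servlet name, the search_terms table, and the DataSource wiring are all assumptions for illustration:

      import java.io.IOException;
      import java.io.PrintWriter;
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import javax.sql.DataSource;

      // Hypothetical /suggest?q=prefix endpoint: returns up to 10 matching terms
      // from a search_terms table as a minimal JSON array.
      public class SuggestServlet extends HttpServlet {
          private DataSource dataSource;  // assumed to be injected or looked up via JNDI in init()

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
              String prefix = req.getParameter("q") == null ? "" : req.getParameter("q");
              resp.setContentType("application/json");
              PrintWriter out = resp.getWriter();
              out.print("[");
              try (Connection c = dataSource.getConnection();
                   PreparedStatement ps = c.prepareStatement(
                       "SELECT term FROM search_terms WHERE term LIKE ? ORDER BY term LIMIT 10")) {
                  ps.setString(1, prefix + "%");
                  try (ResultSet rs = ps.executeQuery()) {
                      boolean first = true;
                      while (rs.next()) {
                          if (!first) out.print(",");
                          out.print("\"" + rs.getString(1).replace("\"", "\\\"") + "\"");  // minimal JSON escaping
                          first = false;
                      }
                  }
              } catch (SQLException e) {
                  throw new IOException(e);
              }
              out.print("]");
          }
      }

    The client-side widget would then simply be pointed at that URL as its suggestion source, while the page-content search stays with Google.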

    Read the article

  • Sample code for Anymote in C++ version?

    - by summerdog
    Can anyone tell me where to find a resource just like the following link, but for C++?
      https://code.google.com/p/googletv-android-samples/source/browse/#git%2FAnymoteLibrary%2Fsrc%2Fcom%2Fexample%2Fgoogle%2Ftv%2Fanymotelibrary%2Fclient
    There is a dynamic library:
      http://code.google.com/p/google-tv-chrome-extensions/source/browse/AnymoteExample/?r=2048c59701bab6386f7ea8011e4ca342fb1f133a#AnymoteExample%2Fplugins%2Fgtvremote_AnymoteExample.plugin%2FContents%2FMacOS
    Unfortunately, the architecture of this library is i386. I want to build this project for another platform. Thanks.

    Read the article

  • app-engine-rest-server raises KeyError("name %s already used" % model_name)

    - by fx
    I'm playing with the appengine-rest-server project to create REST web services for all the existing models. I get a strange error: the first time I query http://localhost:8080/rest/metadata/user in the browser, it gives me the result:
      <xs:schema>
        <xs:element name="user">
          <xs:complexType>
            <xs:sequence>
              <xs:element maxOccurs="1" minOccurs="0" name="key" type="xs:normalizedString"/>
              <xs:element maxOccurs="1" minOccurs="0" name="surname" type="xs:string"/>
              <xs:element maxOccurs="1" minOccurs="0" name="firstname" type="xs:string"/>
              <xs:element maxOccurs="1" minOccurs="0" name="ages" type="xs:long"/>
              <xs:element maxOccurs="1" minOccurs="0" name="sex" type="xs:boolean"/>
              <xs:element maxOccurs="1" minOccurs="0" name="updatedDate" type="xs:dateTime"/>
              <xs:element maxOccurs="1" minOccurs="0" name="createdDate" type="xs:dateTime"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>
    But refreshing the page gives me this error:
      Traceback (most recent call last):
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3185, in _HandleRequest
          self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3128, in _Dispatch
          base_env_dict=env_dict)
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 515, in Dispatch
          base_env_dict=base_env_dict)
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2387, in Dispatch
          self._module_dict)
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2297, in ExecuteCGI
          reset_modules = exec_script(handler_path, cgi_path, hook)
        File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2195, in ExecuteOrImportScript
          script_module.main()
        File "/Users/foo/Documents/AppEngine/helloworld/main.py", line 48, in main
          rest.Dispatcher.add_models({"user": UserModel})
        File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 845, in add_models
          cls.add_model(model_name, model_type)
        File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 863, in add_model
          raise KeyError("name %s already used" % model_name)
      KeyError: 'name user already used'
    Can someone explain why this happens? Restarting the server and requesting the page again gives the XML result, but refreshing causes the error. Is it a bug in the appengine-rest-server application or in my code? My helloworld application is available for download here.

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/MySQL/Postgres sites and applications. We host two different types of site.
    i). PHP/MySQL/Zend Framework
      <VirtualHost *:80>
          DocumentRoot "/home/websites/website.com/public"
          ServerName website.com
          # This should be omitted in the production environment
          SetEnv APPLICATION_ENV development
          <Directory "/home/websites/website.com/public">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride All
              Order allow,deny
              Allow from all
              RewriteEngine On
              RewriteCond %{REQUEST_FILENAME} -s [OR]
              RewriteCond %{REQUEST_FILENAME} -l [OR]
              RewriteCond %{REQUEST_FILENAME} -d
              RewriteRule ^.*$ - [NC,L]
              RewriteRule ^.*$ index.php [NC,L]
          </Directory>
      </VirtualHost>
    ii). Matrix CMS - PHP/Postgres + loads of PEAR classes
      <VirtualHost *:80>
          ServerName server.example.com
          DocumentRoot /home/websites/mysource_matrix/core/web
          Options -Indexes FollowSymLinks
          <Directory /home/websites/mysource_matrix>
              Order deny,allow
              Deny from all
          </Directory>
          <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
              Order allow,deny
              Allow from all
          </DirectoryMatch>
          <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
              php_flag engine off
          </DirectoryMatch>
          <FilesMatch "\.inc$">
              Order allow,deny
              Deny from all
          </FilesMatch>
          <LocationMatch "/(CVS|\.FFV)/">
              Order allow,deny
              Deny from all
          </LocationMatch>
          Alias /__fudge /home/websites/mysource_matrix/fudge
          Alias /__data /home/websites/mysource_matrix/data/public
          Alias /__lib /home/websites/mysource_matrix/core/lib
          Alias / /home/websites/mysource_matrix/core/web/index.php/
      </VirtualHost>
    My key requirements are:
    - I don't want to worry/know/care about the server or infrastructure
    - Secure, up-to-date software and OS
    - Good monitoring
    - Automatic scalability
    - SLA
    I apologise for the length of the question. In short, all I want to do is i). create the vhost, ii). create the DB, iii). install the app/site, iv). relax. Thanks.
    Edit: I include the Matrix vhost because that is the only complication that I cannot really handle via a .htaccess file.

    Read the article

  • Why is my laptop so sluggish? Or Damn You Facebook and Twitter! Or All Hail Chrome!

    - by John Conwell
    In the past three weeks, I've noticed that my laptop (dual core 2.1GHz, 2Gb RAM) has become amazingly sluggish. I only use it for communications and data-lookup workflows, so the slowness was tolerable. But today I finally got fed up with the suckyness and decided to get to the root of the problem (I do have strong performance roots after all). It actually didn't take all that long to figure it out. About a year ago I converted to Google Chrome (away from FireFox). One of the great tools Chrome has is a "Task Manager" tool that gives you Windows Task Manager-like details for all the tabs open in the browser (Shift + Esc). Since every tab runs in its own process, it's easy from Task Manager (either Windows' or Chrome's) to identify and kill a single performance-offending tab. This is unlike IE, where you only get aggregate data about all open tabs. Anyway, I digress. Today my laptop sucked. Windows Task Manager told me that I had two memory-hogging Chrome tabs, but couldn't tell me which web pages those tabs were showing. Enter Chrome Task Manager, which tells you the page title, along with the CPU, memory and network utilization of each tab. Enter my amazement. Turns out Facebook was using just shy of half a Gb of RAM. Half a Gigabyte! That's 512 Megabytes! 524,288 Kilobytes! 536,870,912 Bytes! Or 4,294,967,296 Bits! In other words, that's a frackin boat load of memory. Now consider that Facebook is running on pretty much 96.3% (statistics based on absolutely nothing) of every household desktop, laptop, netbook, and mobile device in America; that is pretty horrific! And I wasn't playing any Facebook games like FarmWars or MafiaVille. I just had my normal, default home page up showing me who just had breakfast, or who just got finished with their morning run. I'm sorry... let me say that again... HALF A GIG OF RAM! That is just unforgivable. I can just see my mom calling me up:
      Mom: "John... I think I need a new computer. Mine is really slow these days"
      John: "What do you have running?"
      Mom: "Oh, just Facebook"
      John: "Ok, close Facebook and tell me how fast your computer feels"
      Mom: "Well... I don't know how fast it is. All I do is use Facebook"
      John: "Ok Mom, I'll send you a new computer by Tuesday"
    Oh yea... and the other offending web page? It was Twitter, using a quarter of a Gigabyte. God I love social networks!

    Read the article

  • Apps Script Office Hours - September 13, 2012

    In this week's episode of Google Apps Script office hours, Jan and Arun:
    - Introduce the Google Apps Script app that was recently published in the Chrome Web Store: chrome.google.com
    - Answer a variety of questions from the Google Moderator.
    - Answer live questions about UiApp, triggers, ScriptDb, and other topics.
    To find out when the next office hours will be held, visit developers.google.com
    From: GoogleDevelopers | Views: 221 | 7 ratings | Time: 17:26 | More in Science & Technology

    Read the article

  • ajax.googleapis.com stopping my Firefox

    - by Oscar Reyes
    Today, for some strange reason, Firefox stops working properly because it is trying to fetch something from ajax.googleapis.com. Is there something I can do to avoid this? Safari and Chrome work just fine. I tried uninstalling Firebug and clearing the cache. The only thing that worked was disabling JavaScript altogether. This seems to be the culprit link: http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js What can I do?
    EDIT: I think I have found where the problem is. My proxy is serving the file one byte at a time, so Firefox consumes it at that pace. What I don't understand is why Safari and Chrome load it right away. What I did last night was leave Firefox open all night to give it a chance to load the file; my hope was that it would get cached and the next time there would be no need to fetch it. This morning the page loaded successfully, but the file was not cached, because the next request failed in the same way. Here's a video showing the problem:

    Read the article
