Search Results

Search found 23715 results on 949 pages for 'google collections'.

Page 191/949

  • How to modify the language used by the Google Search Engine on IE 9.0?

    - by Seb Killer
    I would like to know how we can modify the settings of the Google search engine used in Internet Explorer 9.0 to force it to use a specific language. Our problem is the following: as it uses geolocation by default and we are in Switzerland, it picks the first of the official languages, which is Swiss German. However, we are located in Geneva, where French is the official language. Furthermore, as most of our users speak English, we would like to force the language to be English and not Swiss German. Does anybody know how to achieve this? Thanks a lot, Sébastien

    Read the article

  • Is there a two-panel bookmarks manager for Google Chrome?

    - by L. Shaydariv
    Hi all. I'm just wondering, is there any bookmark manager or extension with a two-panel interface (like Total Commander, File Manager, etc.) for Google Chrome? Using the default bookmark manager is not very practical, for two reasons: 1) I've gathered a very large collection of bookmarks (please don't ask why); 2) the bookmark hierarchy tree always expands its branches when the bookmark manager is opened, which makes moving bookmarks around the tree much harder. I tried Link Commander, but it's very slow. Any suggestions? Thank you for your advice.

    Read the article

  • Is there a keyboard shortcut to move input focus to the Google search box?

    - by Chen Jun
    I search on Google a lot, and I find it very annoying to move my mouse to the search box and click once so that I can input another search term. I Googled for some time, but no one else seems to be annoyed by this, which is quite unbelievable. I'm using Firefox 8 and Chrome 16 on Windows 7. If you know Atlassian Confluence, you probably know that pressing / moves input focus to the upper-right search box, which is very convenient for a keyboard-shortcut hobbyist. Try it here.

    Read the article

  • How Can I Not Show the Green Download Progress Display for Google Chrome in the Windows Taskbar?

    - by theMaxx
    I use Windows 7, and when downloading with Google Chrome the icon in the taskbar gets a partially green background that indicates the download progress. Is there a way to not show this green status indicator in the Windows taskbar? I find it distracting when I am working, and there seems to be no option to disable it. I do not want to hide the icon entirely, just the green background. Is this possible? I have searched for an option or setting to change this, but there is surprisingly little information about it on the internet. I imagine others would also appreciate a solution. Thanks.

    Read the article

  • How can I get Google to re-point its search entries to new domain?

    - by poolski
    My main .com domain registration lapsed, and when I went to re-register it, I found that a domain reseller service had squatted on it and I've lost access to it. As I wasn't terribly keen on spending money funding scammers and the like, I registered a .co.uk domain under the same name. Is there any way of getting Google to re-point all its indexed links to the new domain? It has been indexing my blog for a couple of years now, and while it's not too big a deal, I'd like not to have to start all over again. Also, searching for my site turns up an old entry which currently points at an "Apply for a Tax Break NOW!!!" page.

    Read the article

  • Does Google Chrome officially work on 64-bit Windows 7 yet?

    - by Nick Josevski
    As soon as I jumped onto one of the beta releases of Windows 7, I tried to install Google Chrome. Being on a 64-bit installation, it came up with a "non-supported OS" error or something similar (I can't remember). Looking around at the time, I saw lots of posts/tips about just appending --in-process-plugins to the shortcut for Chrome. After trying this and still not having luck, I found more posts, including some that seemed to be from the Chrome developers, saying this was not wise and exposed a security risk. So does anyone have a well-sourced answer as to what's holding up Windows 7 64-bit support in Chrome, or better yet an "official" answer saying that it is supported in Win7 x64 RTM and works well now?

    Read the article

  • How do I fix font corruption in Google Chrome 9.0.597.44beta in Windows XP?

    - by snicker
    I am not sure what is causing this problem, but I think it is related to Unicode handling. Google Chrome, seemingly out of nowhere a month ago, stopped rendering Unicode characters in certain fonts. For example, this: ?_? looks fine in some fonts but corrupted in others, while it renders fine in other browsers. Most recently, I visited the Foursquare website and have complete font corruption; here is IE vs. Chrome (screenshots). What gives? Has anyone else seen this? How can I fix it?

    Read the article

  • Move email off Small Business Server to Google Apps, retain other SBS functions?

    - by Paul S.
    Recently, an in-house Microsoft Small Business Server 2011 was installed where I work. Unfortunately, our buildings have a bad electrical power supply and we suffer frequent outages. We have a large percentage of staff working off-site, and now when the power goes off here, everyone everywhere loses email functionality. I have been assigned to research the possibility of routing our email through Google Apps while maintaining LAN functions on the SBS. I haven't worked with Microsoft products for several years now, so I don't know how SBS is structured. Can anyone here tell me if this is possible, or point me to good resources that explain our options?

    Read the article

  • How can I move authorized applications between Google accounts?

    - by zoopp
    I'm looking into creating an email address with a professional name on Gmail, and since I can't change my current one, I have to create a new Google account. Among the things which need to be patched up (e.g. forwarding email to the new address until every other account's contact address is changed), I came across authorized applications. If I am to use the new email address exclusively, I have to somehow move my authorized applications as well, since if I eventually delete my old account I will lose access to the profiles created by those applications (e.g. the Stack Exchange network, YouTube, etc.). How can this move be accomplished?

    Read the article

  • How does Google Chrome know my Firefox history if I never imported it?

    - by fdisk
    I have Firefox and Google Chrome installed on the same machine (Linux). When I type something in Chrome's omnibox, it suggests pages I have already visited in Firefox. I have never connected the accounts of the two browsers, never imported information from one browser to the other, and never visited the suggested pages in Chrome. The keyword I type in the omnibox is vague, and there is no way Chrome could guess the suggestion without having access to the Firefox history. E.g. I type "ir" in Chrome and it suggests the same Iron Maiden lyrics page I have browsed before in Firefox. Thanks

    Read the article

  • Does Google Chrome take over all other browsers no matter what?

    - by Jodi
    I cannot use either IE or Firefox since I downloaded Google Chrome. I don't even have Chrome set as my default anymore, but it opens up anyway. I need to use IE in order to download updates for Windows Movie Maker, and I can only download them using IE, thanks to good old Microsoft. And nowhere can I find a way to access IE on my computer: it is not shown in programs, and no shortcut was created on my desktop during the download. Any suggestions? I have to get this video done and I am on a tight deadline. Thanks.

    Read the article

  • Why does my Google chat get blocked (by the corporate firewall) some days but not others? [closed]

    - by Peter
    I have noticed that some days I am able to chat while using Gmail, and other days I am not. It would make sense to me that I would either always be blocked, or never. But I can't figure out why it seems to change daily or weekly. Is Google constantly changing the URLs involved so that the censoring companies (we use Websense where I work) have to play catch-up? Or is there some other reason I'm missing? I am more interested in the technical reason it might be happening than in an actual workaround.

    Read the article


  • The model item passed into the dictionary is of type 'System.Collections.Generic.List`1[MvcApplication13.Models.Groups]'

    - by mazhar
    Calling the Index view gives me this very annoying error. Can anybody tell me what to do about it?

    Error: The model item passed into the dictionary is of type 'System.Collections.Generic.List`1[MvcApplication13.Models.Groups]', but this dictionary requires a model item of type 'MvcApplication13.Helpers.PaginatedList`1[MvcApplication13.Models.Groups]'.

        public ActionResult Index(int? page)
        {
            const int pageSize = 10;
            var group = from p in _db.Groups
                        orderby p.int_GroupId
                        select p;
            var paginatedGroup = group.Skip((page ?? 0) * pageSize).Take(pageSize).ToList();
            return View(paginatedGroup);
        }

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<MvcApplication13.Helpers.PaginatedList<MvcApplication13.Models.Groups>>" %>
        Index
        <h2>Index</h2>
        <table>
          <tr>
            <th></th>
            <th>int_GroupId</th>
            <th>vcr_GroupName</th>
            <th>txt_GroupDescription</th>
            <th>bit_Is_Deletable</th>
            <th>bit_Active</th>
            <th>int_CreatedBy</th>
            <th>dtm_CreatedDate</th>
            <th>int_ModifiedBy</th>
            <th>dtm_ModifiedDate</th>
          </tr>
          <% foreach (var item in Model) { %>
          <tr>
            <td>
              <%= Html.ActionLink("Edit", "Edit", new { id = item.int_GroupId }) %> |
              <%= Html.ActionLink("Details", "Details", new { id = item.int_GroupId }) %> |
              <%= Html.ActionLink("Delete", "Delete", new { id = item.int_GroupId }) %>
            </td>
            <td><%= Html.Encode(item.int_GroupId) %></td>
            <td><%= Html.Encode(item.vcr_GroupName) %></td>
            <td><%= Html.Encode(item.txt_GroupDescription) %></td>
            <td><%= Html.Encode(item.bit_Is_Deletable) %></td>
            <td><%= Html.Encode(item.bit_Active) %></td>
            <td><%= Html.Encode(item.int_CreatedBy) %></td>
            <td><%= Html.Encode(String.Format("{0:g}", item.dtm_CreatedDate)) %></td>
            <td><%= Html.Encode(item.int_ModifiedBy) %></td>
            <td><%= Html.Encode(String.Format("{0:g}", item.dtm_ModifiedDate)) %></td>
          </tr>
          <% } %>
        </table>
        <% if (Model.HasPreviousPage) { %>
          <%= Html.RouteLink("<<<", "UpcomingDinners", new { page = (Model.PageIndex - 1) }) %>
        <% } %>
        <% if (Model.HasNextPage) { %>
          <%= Html.RouteLink(">>>", "UpcomingDinners", new { page = (Model.PageIndex + 1) }) %>
        <% } %>
        <p>
          <%= Html.ActionLink("Create New", "Create") %>
        </p>
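    Judging from the error text, the view's declared model type is PaginatedList`1 while the action returns a plain List`1 from ToList(). A minimal sketch of a likely fix, assuming the PaginatedList<T> helper exposes the usual (IQueryable<T>, pageIndex, pageSize) constructor as in the NerdDinner sample it appears to come from:

        public ActionResult Index(int? page)
        {
            const int pageSize = 10;
            var groups = from p in _db.Groups
                         orderby p.int_GroupId
                         select p;
            // Wrap the query in the type the view expects instead of materializing
            // a plain List<T> with ToList(); the Skip/Take paging happens inside
            // the PaginatedList<T> helper.
            var paginatedGroups = new PaginatedList<Groups>(groups, page ?? 0, pageSize);
            return View(paginatedGroups);
        }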

    Read the article

  • Which credentials should I enter for the Google App Engine BulkLoader on the development server?

    - by Hoang Pham
    Hello everyone, I would like to ask which credentials I need to enter when importing data using the Google App Engine BulkLoader class:

        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/

    It then asks me for credentials:

        Please enter login credentials for localhost

    Here is an extract from models.py; I use this listcountries.csv file:

        class CMSCountry(db.Model):
            sortorder = db.StringProperty()
            name = db.StringProperty(required=True)
            formalname = db.StringProperty()
            type = db.StringProperty()
            subtype = db.StringProperty()
            sovereignt = db.StringProperty()
            capital = db.StringProperty()
            currencycode = db.StringProperty()
            currencyname = db.StringProperty()
            telephonecode = db.StringProperty()
            lettercode = db.StringProperty()
            lettercode2 = db.StringProperty()
            number = db.StringProperty()
            countrycode = db.StringProperty()

        class CMSCountryLoader(bulkloader.Loader):
            def __init__(self):
                bulkloader.Loader.__init__(self, 'CMSCountry',
                                           [('sortorder', str),
                                            ('name', str),
                                            ('formalname', str),
                                            ('type', str),
                                            ('subtype', str),
                                            ('sovereignt', str),
                                            ('capital', str),
                                            ('currencycode', str),
                                            ('currencyname', str),
                                            ('telephonecode', str),
                                            ('lettercode', str),
                                            ('lettercode2', str),
                                            ('number', str),
                                            ('countrycode', str)
                                           ])

        loaders = [CMSCountryLoader]

    Every attempt to enter the email and password results in "Authentication Failed", so I could not import the data to the development server. I don't think there is any problem with my files or my models, because I have successfully uploaded the data to the appspot.com application. So what should I enter for the localhost credentials? I also tried to use Eclipse with PyDev, but I still got the same message :( Here is the output:

        Uploading data records.
        [INFO ] Logging to bulkloader-log-20090820.121659
        [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3
        [INFO ] [Thread-1] WorkerThread: started
        [INFO ] [Thread-2] WorkerThread: started
        [INFO ] [Thread-3] WorkerThread: started
        [INFO ] [Thread-4] WorkerThread: started
        [INFO ] [Thread-5] WorkerThread: started
        [INFO ] [Thread-6] WorkerThread: started
        [INFO ] [Thread-7] WorkerThread: started
        [INFO ] [Thread-8] WorkerThread: started
        [INFO ] [Thread-9] WorkerThread: started
        [INFO ] [Thread-10] WorkerThread: started
        Password for [email protected]:
        [DEBUG ] Configuring remote_api. url_path = /remote_api, servername = localhost:8178
        [DEBUG ] Bulkloader using app_id: abc
        [INFO ] Connecting to /remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run
            request_manager.Authenticate()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send
            f = self.opener.open(req)
          File "C:\Python25\lib\urllib2.py", line 381, in open
            response = self._open(req, data)
          File "C:\Python25\lib\urllib2.py", line 399, in _open
            '_open', req)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 1107, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "C:\Python25\lib\urllib2.py", line 1082, in do_open
            raise URLError(err)
        URLError: <urlopen error (10061, 'Connection refused')>
        [INFO ] Authentication Failed

    Thank you!
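    Two things stand out in that log, so here is a minimal sketch of a likely fix (port and paths taken from the question, not verified against this project). The URLError (10061, 'Connection refused') means nothing was listening on localhost:8178 at all, so the dev server has to be started on that port first; and against the development server the credentials are not really validated, so any email with an empty password should be accepted:

        # start the development server on the port the bulkloader targets
        dev_appserver.py --port=8178 vit/

        # then, in another console, run the upload; when prompted for
        # "login credentials for localhost", any email plus an empty password works
        appcfg.py upload_data --config_file=models.py --filename=listcountries.csv \
            --kind=CMSCountry --url=http://localhost:8178/remote_api vit/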

    Read the article

  • Set Custom Reload Times for Individual Webpages in Chrome

    - by Asian Angel
    Do you have a webpage that needs to be reloaded every so often, or perhaps multiple webpages that each need their own individual reload time? Now you can have the best of both with the AutoReloader extension for Google Chrome.

    Using AutoReloader

    When you first look at the drop-down window, everything will be in a neutral "waiting" state. You can start using the extension immediately by simply entering the desired time frame for reloading a webpage. Notice for the "Repeat Option" that "0 = Continuous".

    You may want to have a quick look through the "Options" to see if there are any operational changes that you would like to make.

    Once you enter a time, click on the "Set Link" to start the timer. Notice that you can view the time remaining on the "Toolbar Button" unless you disabled that feature in the "Options". Clicking on the "Toolbar Button" will show a larger version of the timer in the drop-down window, along with a "Cancel Current Timer Link".

    Here is the best part of all with AutoReloader: you can set up your own customized list of reload times and then access them through the drop-down window. Using the two times shown here, we were able to set the Productive Geek webpage up for 30-second reloads and the TinyHacker webpage up for 1-minute reloads at the same time. There was no conflict whatsoever in running both reload times simultaneously. This is a really terrific feature!

    Conclusion

    Whether you have only one webpage or multiple pages that need periodic reloading (such as tracking a Woot-Off or an eBay auction), the AutoReloader extension is the perfect tool for the job. Running custom reload times simultaneously has never been easier.

    Links

    Download the AutoReloader extension (Google Chrome Extensions)

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm; in my case Levi.com, Banana Republic, and ShopStyle show up.

    The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading.

    My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: the ability of a single company to "punish" any retailer (by significantly impacting their online sales volume) who does not play by its rules, is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong; I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money. But Google wields an awful lot of power in this situation, and it makes me feel uncomfortable.

    Now Google is incorporating more social aspects into its search results. For example, when Google knows it's me (i.e. I'm logged in when using Google), search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's, perhaps it would now be higher in the result set.

    Soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation on Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a third-party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au

    Without our authorisation, the SEO also ran a link-building campaign which has seen us Penguin-slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March, we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods.

    I have just received feedback on my 4th or 5th resubmission, which is still advising that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date.

    - According to http://removem.com we had about 5584 back-linking domains. Of those, we have successfully contacted and had links removed from 344 domains.
    - We ignored links from 625 domains as they were either legitimate press releases, natural backlinks or client websites containing an attribution link in the footer that points back to us.
    - Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed.
    - We have contacted but are yet to receive feedback from 1666 domains, so we can assume that those backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/ which we are calling our Bad Link Catcher (a stroke of genius, I thought). E.g. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php
    - As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer pointing back to us. We have gone through the vast majority of these and updated the links to replace the anchor text with an image and a rel="nofollow" link, i.e.
      <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a>
      See http://www.milkatwork.com.au/
    - An export from http://removem.com detailing the number of times we have contacted each link, and whether it is still found or not, was also supplied with each resubmission.

    The total backlinks reported in Google Webmaster Tools have dropped from over 100K to 87K, and I expect the number to drop significantly lower once Google re-crawls each back-linking page.

    Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.

    Read the article

  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possibilities for why it isn't doing well.

    The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2.

    It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs but instead add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough yet for Google to index many of these.

    I also looked at what pages Google has indexed, and strangely it has some odd pages in the index: there are a lot of pages which are actually keyword searches (more a bunch of random letters than actual words). What I mean is that it is as if they had typed something into my search box; there are no links to pages like this, and the only way of reaching them is to type something into the search box. So I now add a META robots tag with noindex,nofollow whenever I render pages like this.

    Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website.

    My questions are:

    - What should I do with the multiple domains: just use one, or continue with two or more? Should I add 301 redirects, not just the META canonical tag? (See the sketch below.)
    - Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag?
    - Is the fake price comparison site likely to be causing problems?
    - Are all the links to the site from other domain names but the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.
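    On the 301 question, here is a minimal Classic ASP sketch of a host-level permanent redirect to the preferred domain; the host name below is a placeholder, and it would sit at the top of a shared include:

        <%
        ' Hedged sketch: permanently redirect any secondary host to the preferred
        ' .com.au domain (replace www.example.com.au with the real host name).
        ' Append Request.QueryString to the Location if old URLs carried parameters.
        If LCase(Request.ServerVariables("HTTP_HOST")) <> "www.example.com.au" Then
            Response.Status = "301 Moved Permanently"
            Response.AddHeader "Location", "http://www.example.com.au" & Request.ServerVariables("URL")
            Response.End
        End If
        %>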

    Read the article

  • Problems using Maven to initialize a local thoughtsite (App Engine sample) project in Eclipse

    - by ovr
    This sample app ("thoughtsite") for App Engine contains a pom.xml in its trunk: http://code.google.com/p/thoughtsite/source/browse/#svn/trunk

    I ran mvn eclipse:eclipse and also tried using m2eclipse to import this source code into an Eclipse project, but I end up with this error despite the fact that I have the Google App Engine plugin and the Google App Engine SDK installed:

        Exception in thread "main" java.lang.ExceptionInInitializerError
            at com.google.appengine.tools.info.SdkImplInfo.<clinit>(SdkImplInfo.java:19)
            at com.google.appengine.tools.util.Logging.initializeLogging(Logging.java:36)
            at com.google.appengine.tools.development.DevAppServerMain.main(DevAppServerMain.java:82)
        Caused by: java.lang.RuntimeException: Unable to discover the Google App Engine SDK root.
        This code should be loaded from the SDK directory, but was instead loaded from
        file:~/.m2/repository/com/google/appengine/appengine-tools-sdk/1.3.0/appengine-tools-sdk-1.3.0.jar.
        Specify -Dappengine.sdk.root to override the SDK location.
            at com.google.appengine.tools.info.SdkInfo.findSdkRoot(SdkInfo.java:106)
            at com.google.appengine.tools.info.SdkInfo.<clinit>(SdkInfo.java:24)
            ... 3 more

    When I go into the project settings under "Google" and try to set it to use the default App Engine SDK, it always reverts to trying to use Maven's App Engine SDK instead. No idea how to get this project working.
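    The exception message itself hints at one workaround: pass -Dappengine.sdk.root pointing at a real SDK install wherever the dev server or tools are launched. The path below is only an example; substitute your local SDK location:

        # as an extra VM argument in the Eclipse launch configuration:
        -Dappengine.sdk.root=/opt/appengine-java-sdk-1.3.0

        # or on the Maven command line:
        mvn eclipse:eclipse -Dappengine.sdk.root=/opt/appengine-java-sdk-1.3.0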

    Read the article

  • Google Maps API v3: map click event raised when clicking a MarkerClusterer cluster?

    - by lucian.jp
    I have a Google Maps API v3 map object on a page that uses MarkerClusterer. I have a function that needs to run when the map is clicked, so it is registered as:

        google.maps.event.addListener(map, 'click', function (event) {
            CallMe(event.latLng);
        });

    My problem is as follows: when I click on a cluster from MarkerClusterer, instead of behaving like a marker (raising only its own click event, not the map's), it also triggers the map's click handler. To test this I raised an alert from the MarkerClusterer click:

        google.maps.event.addListener(markerClusterer, 'clusterclick', function (cluster) {
            alert('MarkerClusterer click event');
        });

    The clusterclick fires after the map object's click event, so I can't simply remove the map listener as a solution. Is there any way to test, inside the map's click handler, whether a cluster was clicked? Or a way to replicate the marker behaviour and not raise the map's click event when clusterclick is called? Google and the documentation didn't help me. Thanks
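    One possible workaround, sketched below under the assumption stated above (the clusterclick for a given physical click always fires after the map click): defer the map handler briefly and let clusterclick veto it with a flag.

        var clusterClicked = false;

        google.maps.event.addListener(markerClusterer, 'clusterclick', function (cluster) {
            clusterClicked = true;  // remember that this physical click hit a cluster
        });

        google.maps.event.addListener(map, 'click', function (event) {
            // Defer the real work briefly; by the time the timeout runs, the
            // clusterclick handler (which fires after the map click) has had a
            // chance to set the flag for the same physical click.
            window.setTimeout(function () {
                if (!clusterClicked) {
                    CallMe(event.latLng);
                }
                clusterClicked = false;  // reset for the next click
            }, 100);
        });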

    Read the article

  • How to return a proper 404 for Google while providing user-friendly content to the user?

    - by Marek
    I am bouncing between posting this here and on Superuser. Please excuse me if you feel this does not belong here.

    I am observing the behavior described here: Googlebot is requesting random URLs on my site, like aecgeqfx.html or sutwjemebk.html. I am sure that I am not linking these URLs from anywhere on my site. I suspect this may be Google probing how we handle nonexistent content; to cite from an answer to the linked question:

    [Google is requesting random URLs to] see if your site correctly handles non-existent files (by returning a 404 response header)

    We have a custom page for nonexistent content: a styled page saying "Content not found, if you believe you got here by error, please contact us", with a few internal links, served (naturally) with a 200 OK. The URL is served directly (no redirection to a single URL). I am afraid this may hurt the site's standing with Google: they may not interpret the user-friendly page as a 404 Not Found and may think we are trying to fake something and provide duplicate content.

    How should I proceed to ensure that Google does not think the site is bogus, while still providing a user-friendly message when users click on dead links by accident?
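    The two goals are usually compatible: serve the same friendly page, but with a 404 status line, so crawlers see "not found" while users see the styled content. A minimal framework-agnostic sketch (WSGI; render_not_found_page is a hypothetical helper returning the existing HTML):

        # Serve the friendly page unchanged, but with a 404 status line so
        # crawlers do not treat it as a soft 404 / duplicate content.
        def not_found_app(environ, start_response):
            body = render_not_found_page().encode('utf-8')  # hypothetical helper
            start_response('404 Not Found', [
                ('Content-Type', 'text/html; charset=utf-8'),
                ('Content-Length', str(len(body))),
            ])
            return [body]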

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So, I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word "deprecated" floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

    - Users MUST be able to select multiple files for uploading in the dialog.
    - I MUST be able to receive status updates on the transmission of a file (progress bars).
    - I would like to be able to use PUT requests instead of POST.
    - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
    - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has these features that I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
            $.each(files, function (k, file) {
                // this code actually goes through a queue, and creates some status bars
                // but it is unimportant to show here...
                sendFile(file);
            });
        }

        function sendFile(file) {
            var request = google.gears.factory.create('beta.httprequest');
            request.open('PUT', upl.url);
            request.setRequestHeader('filename', file.name);
            request.upload.onprogress = function (e) {
                // gives me % status updates... allows e.loaded/e.total
            };
            request.onreadystatechange = function () {
                if (request.readyState == 4) {
                    // completed the upload!
                }
            };
            request.send(file.blob);
            return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed it to a "like" instead of a "must".
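    For reference, here is a hedged sketch of the same flow using only the HTML5 File API and XMLHttpRequest upload events, with no Gears dependency; uploadUrl and updateProgress are assumed names, not part of the original code:

        // A minimal HTML5 sketch: multi-file selection from a <button>,
        // PUT requests, and per-file progress events.
        var input = document.createElement('input');
        input.type = 'file';
        input.multiple = true;  // multi-file selection in the dialog
        input.addEventListener('change', function () {
            Array.prototype.forEach.call(input.files, sendFile);
        });

        document.getElementById('upload-button').addEventListener('click', function () {
            input.click();  // open the file dialog from an existing element
        });

        function sendFile(file) {
            var xhr = new XMLHttpRequest();
            xhr.open('PUT', uploadUrl);  // uploadUrl is an assumed variable
            xhr.setRequestHeader('filename', file.name);
            xhr.upload.onprogress = function (e) {
                if (e.lengthComputable) {
                    updateProgress(file, e.loaded / e.total);  // hypothetical UI hook
                }
            };
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4) {
                    // completed the upload!
                }
            };
            xhr.send(file);
        }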

    Read the article

  • Collections not read from the Hibernate/ehcache second-level cache

    - by Mark van Venrooij
    I'm trying to cache lazily loaded collections with ehcache/Hibernate in a Spring project. When I execute session.get(Parent.class, 123) and browse through the children multiple times, a query is executed every time to fetch the children. The parent is only queried the first time and is then resolved from the cache. Probably I'm missing something, but I can't find the solution. Please see the relevant code below. I'm using Spring (3.2.4.RELEASE), Hibernate (4.2.1.Final) and ehcache (2.6.6).

    The parent class:

        @Entity
        @Table(name = "PARENT")
        @Cacheable
        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE, include = "all")
        public class Parent implements Serializable {

            /** The Id. */
            @Id
            @Column(name = "ID")
            private int id;

            @OneToMany(fetch = FetchType.LAZY, mappedBy = "parent")
            @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
            private List<Child> children;

            public List<Child> getChildren() {
                return children;
            }

            public void setChildren(List<Child> children) {
                this.children = children;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (o == null || getClass() != o.getClass()) return false;
                Parent that = (Parent) o;
                return id == that.id;
            }

            @Override
            public int hashCode() {
                return id;
            }
        }

    The child class:

        @Entity
        @Table(name = "CHILD")
        @Cacheable
        @Cache(usage = CacheConcurrencyStrategy.READ_WRITE, include = "all")
        public class Child {

            @Id
            @Column(name = "ID")
            private int id;

            @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
            @JoinColumn(name = "PARENT_ID")
            @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
            private Parent parent;

            public int getId() {
                return id;
            }

            public void setId(final int id) {
                this.id = id;
            }

            private Parent getParent() {
                return parent;
            }

            private void setParent(Parent parent) {
                this.parent = parent;
            }

            @Override
            public boolean equals(final Object o) {
                if (this == o) {
                    return true;
                }
                if (o == null || getClass() != o.getClass()) {
                    return false;
                }
                final Child that = (Child) o;
                return id == that.id;
            }

            @Override
            public int hashCode() {
                return id;
            }
        }

    The application context:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="annotatedClasses">
                <list>
                    <value>Parent</value>
                    <value>Child</value>
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.SQLServer2008Dialect</prop>
                    <prop key="hibernate.hbm2ddl.auto">validate</prop>
                    <prop key="hibernate.ejb.naming_strategy">org.hibernate.cfg.ImprovedNamingStrategy</prop>
                    <prop key="hibernate.connection.charSet">UTF-8</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <prop key="hibernate.format_sql">true</prop>
                    <prop key="hibernate.use_sql_comments">true</prop>
                    <!-- cache settings (ehcache) -->
                    <prop key="hibernate.cache.use_second_level_cache">true</prop>
                    <prop key="hibernate.cache.use_query_cache">true</prop>
                    <prop key="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory</prop>
                    <prop key="hibernate.generate_statistics">true</prop>
                    <prop key="hibernate.cache.use_structured_entries">true</prop>
                    <prop key="hibernate.transaction.factory_class">org.hibernate.engine.transaction.internal.jta.JtaTransactionFactory</prop>
                    <prop key="hibernate.transaction.jta.platform">org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform</prop>
                </props>
            </property>
        </bean>

    The test case I'm running:

        @Test
        public void testGetParentFromCache() {
            for (int i = 0; i < 3; i++) {
                getEntity();
            }
        }

        private void getEntity() {
            Session sess = sessionFactory.openSession();
            sess.setCacheMode(CacheMode.NORMAL);
            Transaction t = sess.beginTransaction();
            Parent p = (Parent) sess.get(Parent.class, 123);
            Assert.assertNotNull(p);
            Assert.assertNotNull(p.getChildren().size());
            t.commit();
            sess.flush();
            sess.clear();
            sess.close();
        }

    In the logging I can see that the first time two queries are executed: getting the parent and getting the children. Furthermore, the logging shows that the child entities as well as the collection are stored in the second-level cache. However, when reading the collection, a query is executed to fetch the children on the second and third attempts.
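    One way to see whether the collection ever really reaches the cache is Hibernate's statistics API (hibernate.generate_statistics is already enabled above). This is a diagnostic sketch; the region name assumes the default OwnerClass.fieldName naming and an example package:

        import org.hibernate.SessionFactory;
        import org.hibernate.stat.SecondLevelCacheStatistics;
        import org.hibernate.stat.Statistics;

        // Inspect the collection cache region directly; with default naming the
        // region for Parent.children is "<package>.Parent.children".
        Statistics stats = sessionFactory.getStatistics();
        SecondLevelCacheStatistics childStats =
                stats.getSecondLevelCacheStatistics("com.example.Parent.children");
        System.out.println("puts=" + childStats.getPutCount()
                + ", hits=" + childStats.getHitCount()
                + ", misses=" + childStats.getMissCount());

    If the put count stays at zero (or hits never rise) even though the log claims the collection was cached, the transaction configuration is a likely suspect: with CacheConcurrencyStrategy.READ_WRITE, cache entries are only promoted on transaction completion, and combining JtaTransactionFactory with plain session transactions in a Spring setup, as in the config above, can prevent that from ever happening.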

    Read the article

  • How to iterate JPA collections in Google App Engine

    - by palto
    Hi, I use Google App Engine with DataNucleus and JPA. I'm having a really hard time grasping how I'm supposed to read stuff from the datastore and pass it to a JSP. If I load a list of POJOs with the entity manager and pass it to the JSP, it crashes with:

        org.datanucleus.exceptions.NucleusUserException: Object Manager has been closed

    I understand why this is happening. Obviously, I fetch the list, close the entity manager and pass the list to the JSP, at which point it fails because the list is lazy. How do I make the list NOT lazy, without resorting to hacks like calling size() or something like that? Here is what I'm trying to do:

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            req.setAttribute("parties", getParties());
            RequestDispatcher dispatcher =
                    getServletContext().getRequestDispatcher("/WEB-INF/parties.jsp");
            dispatcher.forward(req, resp);
        }

        private List<Party> getParties() {
            EntityManager em = entityManagerProvider.get();
            try {
                Query query = em.createQuery("SELECT p FROM Party p");
                return query.getResultList();
            } finally {
                em.close();
            }
        }
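    One common way out is the "open EntityManager in view" pattern: close the EntityManager only after the JSP has rendered, so the forward in doGet happens while lazy collections can still be loaded. The sketch below is illustrative; the attribute names and factory lookup are assumptions, not from the original project:

        import java.io.IOException;
        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.servlet.*;

        // Keeps one EntityManager open for the whole request, including JSP
        // rendering, so lazy lists passed to the view can still be iterated.
        public class EntityManagerInViewFilter implements Filter {

            private EntityManagerFactory emf;

            public void init(FilterConfig config) {
                // illustrative lookup; wire in the factory however the app creates it
                emf = (EntityManagerFactory) config.getServletContext()
                        .getAttribute("entityManagerFactory");
            }

            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                EntityManager em = emf.createEntityManager();
                req.setAttribute("entityManager", em);  // servlets reuse this instance
                try {
                    chain.doFilter(req, resp);  // servlet and JSP run with em open
                } finally {
                    em.close();                 // closed only after rendering
                }
            }

            public void destroy() { }
        }

    With this in place, getParties() would use the request-scoped EntityManager and drop its finally-close block, since the filter owns the lifecycle.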

    Read the article
