Search Results

Search found 28992 results on 1160 pages for 'content pages'.

  • Is GAE Really GZipping My Content? Slow Response Times with GAE as CDN

    - by viatropos
    I am testing out Google App Engine as a free content delivery network and it feels like it's taking a long time to serve up my content. Why does this GAE page take, say, half a second to download, while a typical Stack Overflow page downloads much faster even with far more content? What am I missing here? All I have done is create an app and upload an image according to that tutorial, but the content seems to be served very slowly. Any suggestions? (Not considering Amazon or other CDNs right now, just looking for help with GAE.) Note: I am using Safari when I visit those links; maybe Safari is causing problems?
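
    One way to sanity-check whether GAE is actually gzipping responses is to request the file with an Accept-Encoding header and look at the Content-Encoding and timing of the reply. Below is a minimal sketch in Node.js; the host and path are placeholders, not real URLs from the question.

      // check-gzip.js -- ask for gzip and report what the server actually sends back
      var https = require('https');
      var started = Date.now();
      https.get({
        host: 'your-app.appspot.com',            // placeholder: your GAE app's domain
        path: '/images/example.png',             // placeholder: path of the uploaded file
        headers: { 'Accept-Encoding': 'gzip' }
      }, function (res) {
        console.log('Status:           ' + res.statusCode);
        console.log('Content-Encoding: ' + (res.headers['content-encoding'] || '(none)'));
        console.log('Cache-Control:    ' + (res.headers['cache-control'] || '(none)'));
        res.on('data', function () {});          // drain the body so 'end' fires
        res.on('end', function () {
          console.log('Total time (ms):  ' + (Date.now() - started));
        });
      });

    Note that already-compressed formats such as PNG or JPEG are normally not (and should not be) gzip-encoded, so an absent Content-Encoding header on an image is not by itself a sign that something is wrong.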

  • Does Google punish content duplication across multiple country domains?

    - by Logan Koester
    I like the way Google handles internationalization, with domains such as google.co.uk, google.nl, google.de etc. I'd like to do this for my own site, but I'm concerned that Google will interpret this as content duplication, particularly across countries that speak the same human language, as there won't be any translation to hint that the content is different. My site is a web application, not a content farm, so is this a legitimate concern? Would I be better off with subdomains of my .com? Directories?

  • ASP.NET: User control with content area; it's clearly possible, but I need some details.

    - by bert
    I have seen two suggestions for my original question about whether it is possible to define a content area inside a user control, and there are some helpful answers: http://stackoverflow.com/questions/1971498/passing-in-content-to-asp-net-user-control and http://stackoverflow.com/questions/1912283/asp-net-user-control-inner-content. Now, I like the theory of the latter better than the former, just for aesthetic reasons; it seems to make more sense to me. But the example given uses two variables, content and templateContent, that the answerer has not defined in their example code. Without these details I have found that the example does not work. I guess they are properties of the control? Or some such? The former example seems workable, but I'd prefer to go with the latter if someone could fill in the blanks for me. Thanks.

  • Given this demo, How do I make HTML content area fit to viewport height?

    - by viatropos
    I just made this demo extracting out what I'm trying to accomplish: Autosize Main Content Area. I want the pink/yellow area to act according to these rules: its minimum height is the size of its content (which is variable) if the content is smaller than the viewport; otherwise its minimum height is such that it adjusts to fill the window. Checking out the source of that demo, what am I missing? I feel like this is a pretty easy case that shouldn't require JavaScript. Any ideas?

  • ICommand - CanExecute cannot disable Button with Image content.

    - by Anish
    Hi, I have a button control in my WPF MVVM application. I use an ICommand property (defined in the viewmodel) to bind the button click event to the viewmodel. I have execute and canExecute parameters for my ICommand implementation (RelayCommand). Even if CanExecute returns false, the button is not disabled WHEN the button CONTENT is an IMAGE. But when the button content is text, enable/disable works fine.

      <Button DockPanel.Dock="Top" Command="{Binding Path=MoveUpCommand}">
        <Button.Content>
          <Image Source="/Resources/MoveUpArrow.png"></Image>
        </Button.Content>
        <Style>
          <Style.Triggers>
            <Trigger Property="IsEnabled" Value="False">
              <Setter Property="Opacity" Value=".5" />
            </Trigger>
          </Style.Triggers>
        </Style>
      </Button>

  • How to scan multiple pages from a book under Linux?

    - by rumtscho
    I want the process to look like this:

    1. I choose the correct scan settings (dpi, color depth, etc.)
    2. I lay the first page on the scanner and trigger the process
    3. The scanner scans the page and waits for me to position the next page correctly
    4. I confirm that the next page is ready for scanning
    5. Steps 3 and 4 repeat until I tell the scanner that there are no more pages to come
    6. The scanner saves everything into a single PDF

    I tried both xsane and gscan2pdf. First problem: they want me to know how many pages will be scanned. This is already a nuisance, but I can do the counting if needed. The main problem is that in step 3 the scanner does not pause; it is probably optimised for being fed loose sheets. The next scan is triggered automatically as soon as the CCD has returned to the start position, and the time the scanner needs to return the CCD is so short that I can't turn the page and position the book properly. Is there software which can run the scan process in the way I described above, or did I just miss a setting available in xsane or gscan2pdf to make the scanner pause? If it makes any difference, the scanner is an Epson Stylus SX620FW and I run it using the manufacturer-provided driver.

  • Can't access random web pages on my MacBook Pro 2012?

    - by Faruk Sahin
    Sometimes I can't access random web pages; the page simply doesn't load. If I wait for around a minute doing nothing, it will load. It happens very randomly and intermittently. Sometimes it starts when I try to access youtube.com or cnn.com. When it starts, it happens once a minute or once every 5 minutes for random web pages. But if I am downloading something, the download continues without any interruption, and I am also able to ping the address I can't browse. Then if I wait for around a minute, everything starts to work fine on the browser side as well. I have tried a lot of different browsers. I have tried changing my DNS servers to Google's public DNS servers. Using a cable instead of the wireless connection doesn't help either. No one else on the network has this problem but me. What could be the problem?

  • how do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed but also from the band that owns the song, and perhaps even from the venue that hosted the performance, in order to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

  • how to display binary content of an image/PDF in JavaScript?

    - by Ka-rocks
    I have binary content of an image/PDF in a JavaScript variable, downloaded from the server. There will be an indication from the server about the type of the file, and I have to display the content in the respective format: if it is an image, I have to display the image; if it is a PDF, I have to open the content in PDF format; and so on. How do I parse the binary content and display it? I have searched for this but couldn't find an exact solution. I'm using the jQuery Mobile framework. Please help.
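
    One approach, as a minimal sketch rather than a drop-in answer: wrap the bytes in a Blob with the MIME type the server reports and hand the browser an object URL. This assumes the binary data is available as an ArrayBuffer (or typed array) in a variable named data, the server-supplied type string is in mimeType, and #viewer is a hypothetical container element in your page.

      // Render server-supplied binary data according to its reported MIME type.
      function displayContent(data, mimeType) {
        var blob = new Blob([data], { type: mimeType });
        var url = URL.createObjectURL(blob);
        if (mimeType.indexOf('image/') === 0) {
          // Images: inject an <img> that points at the blob URL.
          $('#viewer').empty().append($('<img>').attr('src', url));
        } else if (mimeType === 'application/pdf') {
          // PDFs: let the browser's built-in viewer (or plugin) handle it.
          window.open(url);
        } else {
          // Anything else: offer it as a plain download link.
          $('#viewer').empty().append($('<a>').attr('href', url).text('Download file'));
        }
      }

    Older browsers without Blob/URL.createObjectURL support would need a fallback such as a base64 data: URI built on the server.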

  • Action Cache for root URL not working

    - by askegg
    Here's the setup. I have a web site which is essentially a simple CMS. Here is the routes file:

      map.connect ':url', :controller => :pages, :action => :show
      map.root :controller => :pages, :action => :show, :url => "/"

    The page controller is thus:

      class PagesController < ApplicationController
        before_filter :verify_access, :except => [:show]

        # Cache show action if we are not logged in.
        caches_action :show, :layout => false,
          :unless => Proc.new { |controller| controller.logged_in? }

        def update
          @page = Page.find(params[:id])
          respond_to do |format|
            expire_action :action => :show, :url => @page.url

    So when a visitor hits "/" it maps to :controller => "pages", :action => "show", :url => "/". This generates a cached version on the first hit, then returns the appropriate result thereafter. The log files show:

      Processing PagesController#show (for 127.0.0.1 at 2009-08-02 14:15:01) [GET]
        Parameters: {"action"=>"show", "url"=>"/", "controller"=>"pages"}
      Cached fragment hit: views/out.local// (0.1ms)
      Rendering template within layouts/application
      Filter chain halted as [#<ActionController::Filters::AroundFilter:0x23eb03c @identifier=nil, @method=#<Proc:0x01904858@/Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/caching/actions.rb:64>, @kind=:filter, @options={:only=>#<Set: {"show"}>, :if=>nil, :unless=>#<Proc:0x025137ac@/Users/askegg/Sites/out/app/controllers/pages_controller.rb:6>}>] did_not_yield.
      Completed in 2ms (View: 1, DB: 0) | 200 OK [http://out.local/]

    OK - all good so far. When I update the page, it should expire the cache (see above). The logs show:

      Page Load (0.2ms)  SELECT * FROM "pages" WHERE ("pages"."id" = 3)
      Page Load (0.1ms)  SELECT "pages".id FROM "pages" WHERE ("pages"."url" = '/' AND "pages".domain_id = 1 AND "pages".id <> 3) LIMIT 1
      Expired fragment: views/out.local/index (0.1ms)
      Redirected to http://out.local/pages/3
      Completed in 9ms (DB: 0) | 302 Found [http://out.local/pages/3]

    See the problem? Rails is expiring the fragment named "index", but it was stored as "/". Naturally this results in the cache NOT being cleared, so visitors are still seeing the old version.

  • How to set an onClick attribute that would load dynamic content?

    - by konzepz
    This code is supposed to add an onClick event to each of the a elements, so that clicking on the element would load the content of the page dynamically into a DIV. Now, I got this far - it will add the onClick event, but how can I load the dynamic content?

      $(document.body).ready(function () {
        $("li.cat-item a").each(function (i) {
          this.setAttribute('onclick', 'alert("[load content dynamically into #preview]")');
        });
      });

    Thank you.
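
    Rather than building an onclick attribute string, a minimal sketch of the usual jQuery approach (assuming each a inside li.cat-item links to the page whose content should be shown, and that #preview is the target DIV) binds a click handler and uses .load():

      $(document).ready(function () {
        $("li.cat-item a").click(function (e) {
          e.preventDefault();               // keep the browser from following the link
          $("#preview").load(this.href);    // fetch the linked page and inject its HTML into #preview
        });
      });

    .load(url) issues an AJAX GET and replaces the target element's HTML with the response; appending a selector to the URL string (for example this.href + ' #content') pulls in only the matching fragment of the fetched page.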

  • How to get sites identical in content but different in language and TLD indexed by major search engines

    - by mojo77
    Hi! Is it possible to get two "editions" of a website both indexed by the major search engines (Google/Yahoo/Bing/Teoma) which differ in content language only and are hosted under different TLDs? Say English content is available at "http://domain.com/", German content at "http://domain.de/". Now, if e.g. Google.com is used I want it to list the "domain.com" entry and vice versa. Is "Duplicate Content" an issue here? Thanks so much for advising me!

  • How do I force a DIV block to extend to the bottom of a page even if it has no content?

    - by Vince Panuccio
    In the markup shown below, I'm trying to get the content div to stretch all the way to the bottom of the page, but it's only stretching if there's content to display. The reason I want to do this is so the vertical border still appears down the page even if there isn't any content to display. Here is my code:

      <body>
        <form id="form1">
          <div id="header">
            <a title="Home" href="index.html" />
          </div>
          <div id="menuwrapper">
            <div id="menu">
            </div>
          </div>
          <div id="content">
          </div>

    and my CSS:

      body {
        font-family: Trebuchet MS, Verdana, MS Sans Serif;
        font-size: 0.9em;
        margin: 0;
        padding: 0;
      }

      div#header {
        width: 100%;
        height: 100px;
      }

      #header a {
        background-position: 100px 30px;
        background: transparent url(site-style-images/sitelogo.jpg) no-repeat fixed 100px 30px;
        height: 80px;
        display: block;
      }

      #header, #menuwrapper {
        background-repeat: repeat;
        background-image: url(site-style-images/darkblue_background_color.jpg);
      }

      #menu #menuwrapper {
        height: 25px;
      }

      div#menuwrapper {
        width: 100%;
      }

      #menu, #content {
        width: 1024px;
        margin: 0 auto;
      }

      div#menu {
        height: 25px;
        background-color: #50657a;
      }

    Thanks for taking a look.

  • How to check restricted access pages for broken links?

    - by kumarsfriends
    Hi all, I was googling for tools for checking broken links on a remote web page. The W3C validator seemed a good one, but I am still unsure how to check pages which are restricted, i.e. the pages which I can only access by logging in to the site. Can we do that using the W3C validator? If not, is there any other tool for the same?

  • How to encode content to send it via jQuery to a PHP file?

    - by phpheini
    I am trying to send a form to a PHP file via jQuery. The problem is that the content which has to be sent to the PHP file contains slashes (/), since there is BB code inside. So I tried the following:

      $.ajax({
        type: "POST",
        url: "create.php",
        data: "content=" + encodeURIComponent(content),
        cache: false,
        success: function (message) {
          $("#somediv").html(message);
        }
      });

    In the PHP file I use rawurldecode to decode the content and get my BB codes back, which I can then transform into HTML. The problem is that as soon as I put in encodeURIComponent() it outputs: [object HTMLTextAreaElement]. What does that mean, and where is my mistake? Thanks for your help! phpheini
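
    The [object HTMLTextAreaElement] output suggests that content holds the textarea DOM element itself rather than its text, so the element gets stringified when it is concatenated. A minimal sketch of one way around it, assuming a hypothetical <textarea id="content">: read the value with .val() and let jQuery do the URL encoding by passing data as an object.

      var content = $("#content").val();     // read the textarea's text, not the DOM node

      $.ajax({
        type: "POST",
        url: "create.php",
        data: { content: content },          // jQuery URL-encodes object data automatically
        cache: false,
        success: function (message) {
          $("#somediv").html(message);
        }
      });

    With the data sent as a normal form-encoded POST, PHP decodes it automatically, so the value arrives in $_POST['content'] without needing rawurldecode.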

  • Why are perfectly legitimate pages on my website registering in Google Webmasters as 404?

    - by christian
    I have seen this question asked several times here, but never clearly answered. I suspect it has something to do with my .htaccess file:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteRule ^moreinfo/(.*)$ http://www.kgstiles.com/moreinfo$1 [R=301]
      RewriteRule ^healthsolutions/(.*)$ http://www.kgstiles.com/healthsolutions$1 [R=301]
      RewriteRule ^(.*)\.html$ $1/ [R=301]
      RewriteRule ^(.*)\.htm$ $1/ [R=301]
      </IfModule>

    When I check the URL with a forward slash at the end, it registers as 404 (even though it renders fine in a browser), but when I write it without the forward slash at the end, it returns 200 OK. If I try to take off the forward slash with the .htaccess file, the browser gives me a 310 error (too many redirects). You can see the 404 and the 310 with this URL: http://www.kgstiles.com/pureplantessentials.html, which redirects to http://www.kgstiles.com/pureplantessentials/ (which is a 404). So what is a solution, and why might this be registering as a 404? Any help is appreciated! (I'm using WordPress, btw.)

  • Introducing a new Umbraco datatype for Multi-lingual websites.

    - by Vizioz Limited
    Over the last 6 months we have been building various multi-lingual sites for different clients, and some of these clients have 1-to-1 relationships between some or all of their pages.

    Within Umbraco, you can copy a page (or a whole tree of pages) and keep a relationship between each of the pages and their new copy; this allows content editors to subscribe to change notifications that Umbraco can create if one of the linked pages is changed. Unfortunately, one thing that is missing in Umbraco is any way to see which pages are related to each other, or a quick and easy way to jump between the related pages.

    We created a datatype that solves these problems and thought we would release it as an open source project (which we are still maintaining). Currently you can:

    1) See current relationships
    2) Add relationships
    3) Limit the number of relationships that can be added (by the data type)
    4) See the country flag (assuming a culture has been set on each of your top-level site nodes for each country site)
    5) Link between the documents
    6) Change or delete the links

    An example where multiple languages are allowed:

    An example where only 2 languages exist (1 relationship):

    You can download the datatype from the Umbraco project page: Vizioz Relationships for Umbraco. Please do let us know what you think :)

  • Killer content for my Kindle - The Economist with no need for an iPad - yipeee!

    - by Liam Westley
    I admit it, I was jealous of someone's iPad. They were reading The Economist, for free, as they were a print subscriber. I'm a print subscriber too. However, I don't have an iPad or an iPhone, just an Android phone and a Kindle. As soon as I got the Kindle, I looked up how to get The Economist on it. £9.99 per month. Hmmm, twice as much again as my print subscription, and I wanted to maintain the print subscription. No way, Amazon. Fortunately some nice person wrote similar comments on The Economist subscription for Kindle, but added a very important additional nugget of information: there is no need, because as a print subscriber you can just use the free Calibre e-book creation tool anyway. So I downloaded it, searched for The Economist online 'recipe', entered my login name and password (part of my print subscription) and off went Calibre to screen-scrape every single article from the Christmas 2010 issue into a .mobi file, complete with front cover image and full indexing. It's wonderful. Truly wonderful. Every section individually indexed, with each article separated and all inline images preserved. It even feels wonderfully retro, back to the days when The Economist only used black and white images. So many thanks to the guys behind Calibre and to The Economist recipe creators. Finally, I have the essential Kindle content that I've been waiting for.

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I have the 'core' code in one folder for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp which currently uses the querystring to define what page should be displayed. One page is for general users; the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example of this as it is now is default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would it be better to have something like the following?

    /server/view/list/
    /server/view/?id=123
    /server/create/
    /server/edit/?id=123
    /server/remove/?id=123

    In the above folders I would have a home page which defines all the variables which are currently determined by the querystring - in /server/create/, for example, I would define the action as 'create', the object name as 'server' and so on. In terms of future development, I really have no idea which method would be best. I think the 2nd method would be best in terms of following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for Stack Overflow, as that is very much right/wrong answer based. I got the idea SE is more discussion based.

  • Can a Mediawiki table be dynamically created using other Mediawiki pages?

    - by Ashimema
    OK, so I've created a page on my wiki which contains just a single table listing various details about servers and customers. You can follow links for each customer name in the table to find additional details about that customer. What I want to know is: can the information on the customer's page (page B) be used to dynamically update the table (page A)? Is this something that the Semantic MediaWiki extension can accomplish? Running MediaWiki 1.16.2.

  • How long does it take for Google to re-index pages or update the link titles?

    - by ElHaix
    On one of our classified sites, when doing site:[mysite.com] in Google, the link text is simply [product name] - [mysite.com], whereas it should read [product name] classifieds for sale in... I suspect that the sitemap may have been submitted when we just had [product name], and the page titles were updated later. However, it has been a couple of weeks since I confirmed the longer page titles, and they still appear shortened in organic results. How can I get this looking right in Google's organic results?
