Search Results

Search found 9935 results on 398 pages for 'pages'.


  • How Do I Export Pages from Browser with Embedded Hyperlinks?

    - by Volomike
    Made a sad discovery today. I have Ubuntu 10.04 LTS. My client is in the ad business and she had a competitive-analysis task for me: visit the competitors' websites and export their home pages as PDF, with the hyperlinks embedded. As it turns out, Firefox (and even the latest Chrome) on Ubuntu 10.04 LTS do not embed hyperlinks when exporting a web page to PDF. Sure, there are several Chrome and FF plugins that export as PDF, but what these do is connect to the URL remotely, generate the PDF remotely, and then force your browser to download it from a remote location. That's no good for me, because some of these competitor pages require an initial login, so all these FF or Chrome plugins give me back is a PDF of the login page. Is there a way to get around this problem, to fix the broken PDF printing on Ubuntu 10.04?

    Read the article

  • Is it possible to edit the web pages in the Profile Manager (OSX Lion Server) user portal?

    - by sglantz
    When setting up Profile Manager in Lion Server, a few web pages are generated for web management of both devices and profiles. Is it possible to edit these pages in any way? I am just looking to customize the look and feel without changing the functionality. If editing is possible, where are these pages stored? I have looked through the /Library/Server/ProfileManager and /Library/Server/Web folders without any luck. There is one page in particular I would like to edit; ideally I would like to be able to edit the HTML directly, to match the page's style to a variety of other pages hosted on the server.

    Read the article

  • Which is better: a link on one page or on all pages?

    - by Ervin
    I have a website: http://www.amenajari-gradini-mures.ro and I will put links on http://www.casesigradini.ro . I would like to know which is better from the point of view of SEO: to have one single link on the homepage, OR to have a link on every page (about 48,000 pages)? Right now my link is up on every page, but if it's better to have it on one page only (or maybe on a few main pages), I'll take it off the rest. Please give some arguments for your answers. Thanks

    Read the article

  • What is the best approach to copy public dynamic pages?

    - by Renan
    Situation: the government is supposed to publish official information online, such as acts and laws. Problem: they're using 90s expertise to do it. You can tell by the constant use of deprecated HTML tags such as <table> and the complete lack of compression, which makes some documents go way over 700,000 bytes even though they're pure text. Side problem: some companies are actually editing and selling this content, which should be public and free. What I need to know is the best approach to offering this official content on my own site for free. I've thought of setting up a mirror that copies the official pages from time to time, since some of them are updated frequently; everything on my site is automatically compressed via htaccess, so the copies would be too.

    Read the article

  • How do I link an external style sheet to multiple pages and folders?

    - by user18681
    I'm building a pretty large website that will have many pages and folders, and I have one stylesheet. How do I add the stylesheet to ALL of these folders? I didn't have this problem before I started to put the pages in SEPARATE folders; now that each page has its own folder, it no longer reads my stylesheet unless the stylesheet is in the SAME folder. For example, let's say I have a folder for pets, another for cars, and another for planes. I have to put my stylesheet in EACH and every one of these folders just to see my site styled. How can I avoid putting a stylesheet in each and every folder? In other words, how can my pages link to the stylesheet while it lives in a different folder?
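
    A root-relative href is the usual fix here; a minimal sketch (the folder names come from the example above, with style.css assumed to sit in the site root):

        <!-- Works from any folder depth: the leading slash resolves the
             path from the site root, so pages in /pets/, /cars/ and
             /planes/ all find the same sheet. -->
        <link rel="stylesheet" href="/style.css">

        <!-- A relative path also works, but must climb out of the folder: -->
        <link rel="stylesheet" href="../style.css">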

    Read the article

  • How to protect SHTML pages from crawlers/spiders/scrapers?

    - by Adam Lynch
    I have A LOT of SHTML pages I want to protect from crawlers, spiders & scrapers. I understand the limitations of SSIs. An implementation of the following can be suggested in conjunction with any technology/technologies you wish: the idea is that if you request too many pages too fast, you're added to a blacklist for 24 hrs and shown a captcha instead of content on every page you request. If you enter the captcha correctly, you're removed from the blacklist. There is a whitelist, so GoogleBot, etc. will never get blocked. What is the best/easiest way to implement this idea? Server = IIS. Cleaning the old tuples out of a DB every 24 hrs is easily done, so no need to explain that.
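
    A platform-agnostic sketch of that blacklist logic (on IIS this would live in an ASP.NET module; the thresholds are assumptions, and in practice the whitelist should verify bots via reverse DNS rather than trusting the user agent):

        import time

        WINDOW = 60            # seconds over which requests are counted (assumed)
        MAX_REQUESTS = 30      # "too many pages too fast" (assumed)
        BAN_SECONDS = 24 * 3600

        WHITELIST = ("googlebot",)   # simplified user-agent check
        hits = {}                    # ip -> recent request timestamps
        blacklist = {}               # ip -> time the ban expires

        def handle_request(ip, user_agent):
            now = time.time()
            if any(bot in user_agent.lower() for bot in WHITELIST):
                return "content"
            if blacklist.get(ip, 0) > now:
                return "captcha"     # solving it would delete blacklist[ip]
            recent = [t for t in hits.get(ip, []) if now - t < WINDOW]
            recent.append(now)
            hits[ip] = recent
            if len(recent) > MAX_REQUESTS:
                blacklist[ip] = now + BAN_SECONDS
                return "captcha"
            return "content"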

    Read the article

  • When the obvious answer is obviously wrong

    - by John Paul Cook
    This post is about how simple math in T-SQL can produce undesirable results, but first we begin with a math quiz. Answer the following as quickly as possible: You just read pages 100-300 of a book. How many pages did you read? QUICKLY NOW! For those of you who answered 200 pages, I have a new question: Which page did you not read? There were 201 pages to read. If you read 200 pages, you skipped a page! What would your answer be if I asked how many pages you read if you read pages 1-3? Three pages!...(read more)
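
    The fencepost arithmetic behind the quiz, as a minimal T-SQL illustration (this snippet is not from the post itself):

        -- An inclusive range contains (last - first + 1) items, not (last - first).
        SELECT 300 - 100     AS naive_answer,  -- 200: off by one
               300 - 100 + 1 AS pages_read;    -- 201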

    Read the article

  • How do I reduce the size of PPT 2010 Notes Pages PDFs?

    - by KnowItAllWannabe
    I have a PPT presentation of about 400 slides that I periodically update and publish as PDF. The view I publish is the Notes Pages. This worked fine for several years, during which I was using PPT 2002. I recently upgraded to PPT 2010, and now the PDFs I create are about 25 times bigger than they used to be, and the text in the slide part of the Notes Pages is no longer selectable in Acrobat. According to "Why does Powerpoint 2010 print notes pages to PDF as raster images?", the problem is that PPT 2010 renders the slides' content as images, which earlier versions of PPT did not do. The solution offered in that discussion involves Office Automation and VBA, neither of which I know anything about, and it's not clear whether that approach makes the slide text selectable in the PDF. Isn't there a simple way to get PPT 2010 to print Notes Pages to PDF the way PPT 2002 did?

    Read the article

  • Can many different log-in pages result in SEO duplicate-content and/or low-quality penalties?

    - by Noam
    I have internal pages that rely on an external API and that I would like to build upon user request. Two options I've thought about: (1) make lots of 'thin' pages that specify that if you want content about X, you need to log in, and then the page will be built. Pros: the user understands what he'll get when logging in. Cons: the SEO implications of such a solution, due to the mass of 'low quality' and 'cross-site duplicate content' pages. (2) Make them all redirect to ONE generic log-in page. Pros: no duplicate low-quality content. Cons: lots of internal links to the same log-in page. Which would you recommend?

    Read the article

  • Displaying the first few pages of a PDF on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are also available on other websites. Since the Scribd viewer is an embed/iframe, I think Google considers my page to be empty, with no content; does Google see iframe content? So I decided to display the first pages of the PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. Firstly, I am displaying the first pages of the PDF, which may be hosted on other websites; will this be considered duplicate content? Secondly, I am hiding the content and replacing it with the Scribd embed via JavaScript; is this considered bad by Google?
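
    A sketch of the swap being described (the embed URL is a placeholder, not necessarily Scribd's actual pattern):

        <div id="paper">
          First pages of the PDF as plain text, visible to crawlers...
        </div>
        <script>
          // For users with JS, replace the text fallback with the viewer.
          document.getElementById('paper').innerHTML =
            '<iframe src="https://www.scribd.com/embeds/DOC_ID/content"></iframe>';
        </script>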

    Read the article

  • How to import a PDF into LibreOffice? Under Ubuntu, all pages are blank

    - by Daniele
    I have some PDFs generated by a scanner that I want to import into LibreOffice to do some small edits. The PDF has only one object per page: a page-sized image. If I open it in LibreOffice under Ubuntu 12.10, it imports "successfully" but all pages are blank. I have the libreoffice-pdfimport package installed. This happens with both LibreOffice 3.6 (part of Ubuntu 12.10) and 4.0.2 from the LibreOffice PPA. The same PDF files open perfectly fine in LibreOffice on both Windows and Mac (yes, I have three computers with all three OSes), but on Ubuntu 12.10 all pages are blank, so I can only conclude this is an issue with the Ubuntu packaging, or that something really weird prevents it from working under Linux. How can I import these kinds of PDFs into LibreOffice for editing?

    Read the article

  • Should I prevent search engines indexing tag/category pages?

    - by Macha
    On my site, I currently have no special rules for search engines. It is a blog, statically generated by a Python program. When I search for some of my articles on Google, a tag or category page is usually included in the results, and sometimes it even ranks ahead of the article itself. Since these links aren't always going to have the article on them, these aren't the results I want people to click on. So I'm thinking of setting noindex on these pages. Is there any possible downside to doing so? Is this possible via robots.txt, or do I have to add it to all the relevant templates? All I can find for robots.txt are ways to stop the search engine crawling those pages, which isn't what I want: while I don't want them indexed, crawling them is still the only surefire way to find all my blog posts.
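
    A minimal sketch of the template change (robots.txt can only block crawling, not indexing, so a per-template meta tag fits the goal here):

        <!-- In the tag/category page templates only; "follow" keeps the
             links crawlable so the posts themselves are still discovered. -->
        <meta name="robots" content="noindex, follow">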

    Read the article

  • How long should I keep 301 redirecting pages from a deprecated domain?

    - by ElHaix
    I have deprecated an old domain, 301-redirecting everything on it to my new site. The new site is now receiving a decent amount of traffic, but I don't know how much of that is 301-redirected from the old site, and doing a site:[old site] search still shows several thousand pages indexed. Since all pages from the old site are 301-redirected, will they ever be removed from the index as long as the old domain name is active? As a rule of thumb, I picked up somewhere that 90 days covers any significant site change. When is it safe to burn the old domain?
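
    For reference, a domain-wide 301 of this sort typically looks like the following on Apache (the question doesn't name the server, and the domains are placeholders):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://new-domain.com/$1 [R=301,L]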

    Read the article

  • How can I determine the trending pages on my site?

    - by Dogweather
    I'm looking to see what the "hot" pages are on one of my sites: for various timeframes, what the top 50 pages are. I'm going to create a data feed with this info, which will be input to another app. I have Apache logs and complete control of the machine to install what I want. I'm mostly wondering whether there's something out there already that I can use, or, if I have to implement it myself, what good algorithms or strategies might be. Thanks.
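
    If it comes to rolling it yourself, a minimal sketch of the counting side in Python (the log path, Common Log Format, and 24-hour window are assumptions; a real "trending" score would also compare the current window against a longer baseline):

        from collections import Counter
        from datetime import datetime, timedelta
        import re

        LOG = "/var/log/apache2/access.log"   # assumed location
        WINDOW = timedelta(hours=24)          # one "trending" timeframe
        TOP_N = 50

        # e.g.: 1.2.3.4 - - [10/Oct/2012:13:55:36 -0700] "GET /some/page HTTP/1.1" 200 2326
        LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<path>\S+)')

        counts = Counter()
        now = datetime.now()                  # timezone handling omitted
        with open(LOG) as f:
            for line in f:
                m = LINE.search(line)
                if not m:
                    continue
                ts = datetime.strptime(m.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
                if now - ts <= WINDOW:
                    counts[m.group("path")] += 1

        for path, hits in counts.most_common(TOP_N):
            print(hits, path)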

    Read the article

  • Is there any advantage/disadvantage to using robots.txt to disallow access to legal pages such as terms, privacy policy, etc.?

    - by CaptainCodeman
    As I understand it, repetitive content is a detriment to search engine placement. Given that many websites use similar or even identical "Terms and Conditions" and "Privacy Policy" pages, due to similar legal wording or to copy & pasting from the same source, would it be a good idea to disallow access to these pages via robots.txt, in order to avoid being penalized for "non-original content"? Or, on the contrary, could the search engines identify this as circumvention and penalize the site for trying to hide content? Or does it not matter?
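
    For concreteness, the robots.txt being debated would look something like this (the paths are hypothetical, and note that Disallow prevents crawling rather than removing pages from the index):

        User-agent: *
        Disallow: /terms
        Disallow: /privacy-policy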

    Read the article

  • Is the structure used for these web pages a design pattern?

    - by aspdotnetuser
    I want to know whether the structure of an ASP.NET website I'm working on uses a design pattern for its web pages. The web pages have the following structure: the UserDetails page (UserDetails.aspx) includes the UserDetailsController.ascx user control, and UserDetailsController.ascx in turn includes sub user controls such as UserAccountDetails.ascx, UserLoginDetails.ascx, etc. Each sub user control contains a small amount of code/logic; the 'controller' user controls that host these sub user controls (i.e. UserDetailsController.ascx) appear to call the business rules code and pass the data down to the sub user controls. Is this a design pattern? If so, what is it called?

    Read the article

  • Most standard / Best way to keep the same top menu among different web pages?

    - by jsoldi
    What's the standard way to keep the same menu on top across different web pages without duplicating it on each page? (I don't mean that it doesn't reload, like when using frames and only loading the bottom part; I want the menu to scroll with the page when scrolling down, like this, this, this and pretty much every single web page that exists.) I found this answer, but that asker can't use PHP and I can. Plus, I see several people giving different suggestions, but I assume there is a standard, since pretty much every single web page on the whole web has a menu on top that stays the same across multiple pages. I'm just a newbie at web design (I can program PHP and HTML easily, but I have no idea about standards and stuff like that, since I'm self-taught ;)). What I would normally do is include the menu with PHP, but I'm not sure this is the "standard".
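
    The PHP include mentioned at the end is indeed the common server-side approach; a minimal sketch (the file names are hypothetical):

        <?php // menu.php: the shared top menu, maintained in one file ?>
        <nav>
          <a href="index.php">Home</a>
          <a href="about.php">About</a>
        </nav>

        <?php // top of every page, e.g. index.php: ?>
        <?php include 'menu.php'; ?>
        <h1>Page-specific content...</h1>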

    Read the article

  • Web Speech API reaches a new milestone: the JavaScript specification will allow speech recognition to be integrated into web pages

    The Web Speech API specification has just reached an important milestone in its standardization. The W3C's Web Speech API working group recently published the future standard, with a call for members to agree on the final specification. The specification describes a JavaScript API that will let developers integrate speech recognition into web pages. With this API, developers will be able to use scripts to generate text from speech, and to use speech recognition as input for...

    Read the article

  • The Importance of Internal Links

    No website is ever complete without links from your own pages to your other pages, and not just through your normal navigation but from within paragraphs. Google and the other major search engines like to see relevant links to your pages, and if you link from one of your pages to another, their algorithms treat this as highly relevant, especially if you use good anchor text in the link.

    Read the article

  • The network printer will only print blank pages and will not stop. What should I do?

    - by LibraryGeekAdam
    I work in a library, so we have a Ricoh printer that is shared among all of us on the network. I recently installed Ubuntu 11.10 64-bit on my desktop and have been trying to set up the printer, without success. Ubuntu found the printer immediately and allowed me to set it up. When I hit test print, I get one sheet with writing on it, and then it prints blank pages and will not stop until I turn off the printer. I tried printing a document and got the same thing: one page with writing, then all the blank pages again. Below is what the one printed page says. I'd appreciate any help anyone is willing to give. Thanks. %!PS-Adobe-3.0 %% %% mark () () (bunch of numbers) {setuserinfo} stopped cleartomark %%%!

    Read the article

  • Did the new Facebook Pages remove publication localization (translation)? [closed]

    - by Myka Eyl
    In the previous version of Facebook Pages, I was able to specify in which language (or even which countries) I wanted to publish a post. I've searched a LOT, everywhere, and cannot find anything like that since the timeline landed on Facebook Pages. Do you know how I could publish something on a Facebook Page for just ONE language? Publishing things in several languages could make the page's fans angry (too many new items they cannot read) and lead them to unlike the page. I would really appreciate knowing where to find language-specific options for new publications on a Facebook Page (I am an admin of the page, of course). Any help will be greatly appreciated! Thank you

    Read the article

  • Analytics - Total events divided by number of unique pages?

    - by GeekyAndUnique
    I am using Google Analytics events to track keywords on my articles. Not necessarily the best system, I know, but there are too many of them for variables and I can't easily change it right now. I would like to see how popular each keyword is by dividing the number of page views with a keyword by the number of unique pages. Is there a way, and what is the best way, of doing this? EDIT FOR CLARITY: I currently have a system where every time somebody loads an article, an event is fired for each of the tags/keywords used, with the keyword as the label. I can currently view the count for each keyword by looking at the total events for each label, but I would like to see which keywords are the most popular by dividing the number of times the event has been fired by the number of different pages it has been fired from.
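
    Outside the GA interface, the division itself is simple once the data is exported; a sketch in Python (the (label, page) row format is an assumption about the export):

        from collections import defaultdict

        # Hypothetical export: one (event_label, page) row per event fired.
        rows = [
            ("python", "/articles/1"),
            ("python", "/articles/1"),
            ("python", "/articles/2"),
            ("seo",    "/articles/3"),
        ]

        totals = defaultdict(int)   # total events per label
        pages = defaultdict(set)    # unique pages per label

        for label, page in rows:
            totals[label] += 1
            pages[label].add(page)

        for label in totals:
            print(label, totals[label] / len(pages[label]))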

    Read the article

  • Google indexing pages with #! although we don't have any

    - by Benjamin Gruenbaum
    Our company has developed a single-page application using AngularJS and its routing. Google indexed our site decently with JavaScript, but it did not index some pages very well, so we developed an HTML-only version. We have followed the AJAX Crawling Specification posted here, and we have a <meta name='fragment' content='!'> tag and canonical URLs. We expect http://www.example.com/foo/bar to be fetched from http://www.example.com/?_escaped_fragment_=/foo/bar. However, we have found that since we rolled out the AJAX specification, all our pages are indexed twice: once from the JavaScript version as http://www.example.com/foo/bar, and once from the new version as http://www.example.com/#!/foo/bar. This is harmful to us, since it's duplicate content and also misrepresents our site. I have tried looking for similar questions here and in the Google product forum but could not come up with anything.
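
    One thing worth double-checking in a setup like this (an assumption about the fix, not something stated in the question) is that the HTML snapshot served for the #! variant carries a canonical link back to the pretty URL, e.g.:

        <head>
          <meta name="fragment" content="!">
          <!-- Also served for /?_escaped_fragment_=/foo/bar, pointing
               Google back at the one URL that should be indexed: -->
          <link rel="canonical" href="http://www.example.com/foo/bar">
        </head>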

    Read the article
