Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

        User-Agent: *
        Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in its search results. Why does this happen, and how can I stop it?

    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted e-mail links on their pages, so, to keep those e-mail addresses off spam lists, instead of using direct mailto: links I pointed the links at a simple redirector / address-harvester-trap script running on my own site. The script returns either a 301 redirect to the actual mailto: URL or, if it detects a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both the legitimate redirector links and the trap pages.

    Just recently, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. They immediately e-mailed me, wanting to know how to get their address out of Google's index. I was quite surprised too, since I had no idea Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that, and how to make sure none of the disallowed pages show up in its search results.

    P.S. I actually found a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask anyway in case someone else has the same problem. Please do feel free to post your own answers. I'd also be interested to know whether other search engines do this too, and whether the same solutions work for them.
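
    One note on the mechanism, since it surprises many people: robots.txt controls crawling, not indexing, so Google may still list a disallowed URL it has discovered through external links, building the result title from anchor text rather than page content. A noindex directive, by contrast, only takes effect if the crawler is allowed to fetch the page and see it. A hedged way to check what a crawler would see on such a URL (example.com and the path are placeholders):

        # Fetch only the response headers and look for a noindex directive;
        # if robots.txt disallows this path, Googlebot never sees the header.
        curl -sI "https://example.com/email/some-link" | grep -i "x-robots-tag"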

    Read the article

  • How do I print unprintable web pages?

    - by user1413
    I want to print out a web page that seems to be unprintable in both Firefox and Chrome. The page spans multiple printed pages, but Firefox and Chrome each print only the first one. The only way I have found to print the page is from IE, via the XPS Document Writer. Is there a tool (e.g., a web browser or browser plugin) that would help? Or is there a setting I can use in Firefox or Chrome?

    Read the article

  • WSS and CAG, _layouts pages break

    - by Mike
    Alright, I've searched everywhere and I cannot find the answer, due to the rarity of our setup: WSS 3.0 / IIS 6.0 / Windows Server 2003.

    We have a SharePoint site that is in good shape, almost. Its TCP and SSL ports are uncommon, so requests need to be rerouted to work properly. This is where the Citrix Access Gateway (CAG) comes into play: it redirects any request for the URL (something.something.com) to the correct SSL port on the correct server. My AAM is configured with something.something.com as the Default mapping and nothing else, since the CAG provides the port. We use FBA and require SSL.

    This works perfectly for anything an end user can see, but if I try to add a web part, it errors out, whereas if I add it internally, or bypass the CAG, the web part adds fine. The same goes for most of the _layouts pages, like _layouts/new.aspx. If I add a link list or document library on something.something.com, the page errors out ("Page cannot be displayed"), but the same action works fine from an internal address.

    I found that the site navigates fine to the administrative pages I need, but when I actually ADD something, the URL changes from something.something.com to something.something.com:SSLport, and the site errors out. The URL with the SSL port also shows as the Site URL when navigating to Site Settings. If I bypass the CAG and use the internal address, the _layouts pages work like a charm and I can add anything. All the CAG does is reroute a DNS request to the provided server and port.

    I've tried re-extending the application: no luck, same thing. I've tried changing the AAM to hide the port: the CAG rejects it. I've tried recreating a new web application / site collection with the same rules on the CAG: the same thing occurs. Correct me if I'm wrong anywhere, and please provide feedback and answers. Any suggestions would be very appreciated. Is it the CAG or the Alternate Access Mappings (AAM)?

    Read the article

  • How to record a "macro" that saves web pages as PDF in OSX

    - by dwatson
    I frequently save pages as PDF from Chrome on OS X. The process is Cmd-P, then clicking the PDF button and the "Save as PDF…" menu item. I always accept the pre-filled filename and save to the default directory. Is it possible to record this as an Automator script? If so, I would sure like to add it as a button somewhere in Chrome so I can just "save this for reading later". Thanks for any help.
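
    Not a recorded macro, but a hedged alternative for scripting the same result: newer Chrome builds can print a page straight to PDF from the command line, which an Automator "Run Shell Script" action could wrap (the output path and URL below are placeholders; assumes a Chrome version with headless support):

        # Render the page to a PDF without opening a window.
        "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
            --headless --disable-gpu \
            --print-to-pdf="$HOME/Desktop/page.pdf" \
            "https://example.com/article"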

    Read the article

  • Imagick Convert Append 2 pdf pages

    - by Hammad Khalid
    I am using the Imagick library and PHP (tcpdf), with the following command, to render a multi-page PDF into a single jpg file:

        convert -append path1.pdf path2.jpg

    Now what I need is to add white space between the pages to differentiate them from one another, or to add text in between, like "Page 1", "Page 2". Currently the pages come out correctly, but there is no space between them. Can anyone help me out?
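
    A hedged sketch with plain ImageMagick (the 40-pixel gap, density, and file names are placeholders): -splice inserts white rows above every page before -append stacks them, and -chop trims the stray gap this leaves above the first page.

        # Rasterize the PDF pages and stack them with a white gap between pages.
        convert -density 150 path1.pdf -background white -splice 0x40 -append -chop 0x40 path2.jpg

    For text between pages, annotating each page image (e.g. with -annotate) before appending would be the analogous route.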

    Read the article

  • Redirect http to https on certain pages?

    - by Elliott
    Hi, below is some code I added to my .htaccess file. How can I make certain pages, such as login.php and login.html, redirect to https? Also, if the user types in www. they get an "untrusted connection" warning, as the SSL certificate is only valid without the www. How could I fix this? Thanks.

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.html
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
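
    A hedged sketch building on those rules (untested): the first block strips the www. so the host matches the certificate, the second matches both login pages. One caveat: a request typed as https://www.… fails the TLS handshake before mod_rewrite ever runs, so only http://www.… requests can be rescued this way.

        RewriteEngine On

        # Drop the www so the host matches the certificate
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule (.*) http://%1%{REQUEST_URI} [R=301,L]

        # Send the login pages to https
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^/login\.(php|html)$ [NC]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]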

    Read the article

  • How to select all text inside the text area of a code field on web pages

    - by Moorage
    On forum web pages, download links are often given inside [code] blocks, which render as a text area with a white background. When there are a lot of download links, the text area gets scroll bars, and I find it difficult to scroll up, start selecting, and then drag down to select all the links. Is there any way to select everything inside just that box? When I right-click and choose Select All, it usually selects the text of the whole page, not just that code segment.

    Read the article

  • How do you switch between Linux manual pages?

    - by Sheldon
    I'm new to Linux and have noticed that there are numbers beside certain commands I look up. For example, I want to look up accept() in the context of network programming, but man accept shows this instead:

        accept(8)                 Easy Software Products                 accept(8)

        NAME
               accept/reject - accept/reject jobs sent to a destination

    So how do you switch between manual pages to reach the other numbers, like accept(1) through accept(7)?
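
    For reference, the number is the manual section (1 is user commands, 2 is system calls, 3 is library functions, 8 is administration tools), and man shows the first match unless a section is named. A quick sketch:

        # Ask for the system-call page (section 2) explicitly
        man 2 accept

        # List which sections have an 'accept' page
        whatis accept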

    Read the article

  • HP Officejet 5510 only prints blank pages under Ubuntu 9.10

    - by mutzel
    I'm trying to set up an HP Officejet 5510 using HPLIP 3.10.2 under Ubuntu 9.10. Installing the driver according to this guide was no problem, but after installing and selecting the printer I was only able to print blank pages. The printer works well under Windows, and scanning (it's a multi-function printer) also works under Ubuntu. Does anyone know this problem and a possible solution?

    Read the article

  • Multiple pages per sheet (PDF)

    - by smihael
    I use the following commands to merge multiple pages (in this case 4) from the input file onto one sheet in the output file:

        pdf2ps input.pdf - | psnup -pA4 -4 >> output.ps
        ps2pdf output.ps output.pdf
        rm output.ps

    How can I modify the pipeline so that I won't have to use two commands, just a single one-liner? Is there any other command-line tool that would do the same and can work directly on PDF files?
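
    A hedged one-liner with the same tool chain (untested): ps2pdf accepts "-" as its input, so the intermediate file can be piped away entirely.

        pdf2ps input.pdf - | psnup -pA4 -4 | ps2pdf - output.pdf

    For working on PDFs directly, pdfnup from the pdfjam package wraps the same n-up operation, e.g. pdfnup --nup 2x2 input.pdf.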

    Read the article

  • Some web pages won't download fully

    - by Sumac
    Some web pages won't load fully in any browser, on any computer connected to the network. I have Internet access through a wireless modem/router (2 Mbps DSL connection; wireless reception is excellent). I use Opera, and when I turn on Opera Turbo the same sites load fully. I tried changing to other DNS servers (OpenDNS, Google DNS), but it made no difference. What would you suggest I try? OS: Windows 7 64-bit.
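
    One classic culprit for pages that stall partway on DSL lines is an MTU mismatch, and the Opera Turbo behaviour fits, since Turbo recompresses traffic through Opera's proxy servers. A hedged first check on Windows (example.com is a placeholder): send the largest payload that fits an unfragmented 1500-byte packet (1472 + 28 bytes of headers); if that fails while smaller sizes succeed, the MTU is suspect.

        ping -f -l 1472 example.com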

    Read the article

  • Child web.config can't clear <pages><controls> from parent web.config

    - by Lance Rushing
    How can I "clear" the vendor defined <controls> in my child app's web.config? Parent Web Config. <system.web> <pages> <controls> <!-- START: Vendor Custom Control --> <add tagPrefix="asp" namespace="VENDOR.Web.UI.Base" assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral /> ... <!-- END: Vendor Custom Control --> ... </controls> <tagMapping> <add tagType="System.Web.UI.WebControls.WebParts.WebPartManager" mappedTagType="Microsoft.Web.Preview.UI.Controls.WebParts.WebPartManager" /> <add tagType="System.Web.UI.WebControls.WebParts.WebPartZone" mappedTagType="Microsoft.Web.Preview.UI.Controls.WebParts.WebPartZone" /> </tagMapping> </pages> </system.web> Child: <system.web> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> <tagMapping> <clear/> </tagMapping> </pages> </system.web> I have it working for the <tagMapping> section, but <controls> does not support <clear/> (or ).

    Read the article

  • Session variables lost between pages, or same variables shared

    - by user222333
    Hi, yesterday I learned many things from you, especially from Marc, and my problem was solved (session variables lost between pages, or the same variables being shared). But now I'm asking further: I don't want to pass the session ID in URLs (session.use_trans_sid = 1) between pages. But I also don't want different users of the same application to share the same session variables, and I don't want the same user to lose session variables between pages. Is this possible? If yes, how? Thanks to everybody for any help. Best regards.

    I have WampServer (2.2.11) with PHP (5.2.9-2). My php.ini session settings are below:

        [Session]
        session.save_handler = files
        session.save_path = "c:/wamp/tmp"
        session.use_cookies = 0
        ;session.cookie_secure =
        ;session.name = PHPSESSID
        session.auto_start = 0
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = 0
        session.bug_compat_warn = 1
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        ;session.entropy_length = 16
        ;session.entropy_file = /dev/urandom
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 1
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
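
    For what it's worth, the usual way to get exactly this behaviour (per-user sessions, no ID in URLs) is cookie-based sessions; a minimal hedged sketch of the php.ini changes (assumes clients accept cookies):

        session.use_cookies = 1
        session.use_only_cookies = 1
        session.use_trans_sid = 0

    With the ID carried in a cookie, each browser keeps its own session across pages and no session ID ever appears in a URL.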

    Read the article

  • How to pass data between pages without sessions in ASP.NET MVC

    - by Ashwani K
    Hello all: I have an application in which I want to pass data between pages (views) without sessions. Specifically, I want to apply some settings to all the pages using the query string. For example, if my link is "http://example.com?data=test1", then I want to append this query string to every link thereafter; if there is no query string, the normal flow applies. I was wondering whether, once the query string shows up in any link to the web application, some user-specific property could be set at the application level and used for subsequent pages. Thanks, Ashwani

    Read the article

  • Scope of the Pages in a Silverlight application

    - by AngryHacker
    I have an app built with the Silverlight Navigation Application template. I have a main form (e.g., MainPage.xaml) and a bunch of Silverlight pages, which are swapped in and out of the main content area. In MainPage.xaml I have a DispatcherTimer which hits some URI resources, regardless of which page I am on. Every now and then, it will inexplicably stop firing. I have an inkling that it has to do with the scope of the various pages. Can pages inside MainPage.xaml take scope away from their parent? Or is this something much simpler?

    Read the article

  • In LaTeX, prefer figures on text-heavy pages

    - by bjarkef
    Hi, LaTeX seems to have a preference for placing figures together on one page and placing the surrounding text on a separate page. Can I somehow change that balance a bit? I prefer figures to break up the text, to avoid too-black, text-heavy pages. Example:

        \section{Some section}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 1}
        \end{figure}
        [Half a page of text]
        \begin{figure}
          [...]
          \caption{Figure text 2}
        \end{figure}
        [More text]

    What LaTeX usually does here is stack the two half-pages of text on a single page, and the figures on the following page. I believe this really gives a bad balance, and bores the reader. So can I change that somehow? I know about suffixing \begin{figure} with [ht!], but often it does not really matter. I would like to configure the balancing algorithms in LaTeX to naturally prefer pages with combined figures and text.
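
    The relevant knobs are LaTeX's float-placement parameters; a hedged preamble sketch (the values are illustrative starting points, not tuned):

        % Let floats share pages with text more readily, and require
        % float-only pages to be nearly full before LaTeX creates them.
        \renewcommand{\textfraction}{0.1}        % min. fraction of a page kept for text
        \renewcommand{\topfraction}{0.85}        % max. page fraction for floats at the top
        \renewcommand{\bottomfraction}{0.7}      % max. page fraction for floats at the bottom
        \renewcommand{\floatpagefraction}{0.75}  % min. fill before a float-only page is allowed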

    Read the article

  • Caching linked pages in ASP.NET

    - by n0e
    I'm thinking of a way to create a local backup of the pages I will be linking to from my site. The backup would be text-only, similar to Google's "Cached" copy on its search pages. The idea is to be sure that the pages I reference, or cite from, do not disappear from the Web in the near future. I know I could just keep local copies, but I will have A LOT of citations. What would be the best way of achieving this in ASP.NET? Some custom caching in a database?

    Read the article

  • Issue with converting PDF pages to images with ImageMagick's convert (and PHP)

    - by Joel L
    I'm trying to create a small web service to convert PDF files to a series of images. When I run

        convert /full/path/to/source.pdf /full/path/to/target.jpg

    while connected to the (shared hosting, Linux) server via SSH, everything works correctly. When executing the same command through PHP's exec() function, only the first few pages of the PDF file get converted. Sometimes the remaining pages are zero-length jpg files; sometimes they don't appear at all. Also, the bottom area of the first pages is sometimes black, as if convert stopped halfway down the page. What could be causing this problem?
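
    A hedged first debugging step (paths as above): exec() silently discards convert's stderr and exit status, and on shared hosting half-converted output with black page bottoms often means the process was killed partway through (memory or CPU limits), which the exit code and stderr would reveal.

        <?php
        // Capture stderr (via 2>&1) and the exit code, which exec() otherwise drops.
        exec('convert /full/path/to/source.pdf /full/path/to/target.jpg 2>&1', $output, $rc);
        if ($rc !== 0) {
            // Log whatever convert/Ghostscript reported before exiting.
            error_log("convert exited with $rc: " . implode("\n", $output));
        }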

    Read the article
