Search Results

Search found 28992 results on 1160 pages for 'content pages'.


  • Apache not displaying PHP pages, offering them for download instead

    - by Peter NUnn
    Hi folks, we are trying to set up Apache (Apache 2 in this case, although Apache 1 does the same thing) and HTML pages display just fine; however, any PHP pages linked from buttons on the front page are offered for download rather than being displayed. Any ideas what we have missed? It's proving difficult to search for this in Google as the terms are so heavily used elsewhere. I know this is a bit general, but we have tried adding types to apache.conf (or httpd.conf for Apache 1) and are having no joy at all. Thanks. Peter.
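
    This symptom usually means no handler is registered for .php files, so Apache falls back to the default MIME type and the browser downloads the source. A minimal sketch of the directives that typically need to be present in apache2.conf/httpd.conf, assuming mod_php is installed (module file names and paths vary by distribution and PHP version; Apache 1.3 uses slightly different names):

        # Load the PHP module (exact file name depends on the PHP version and OS)
        LoadModule php5_module modules/libphp5.so

        # Hand .php files to the PHP handler instead of serving them as plain files
        AddHandler application/x-httpd-php .php

        # Optional: let index.php act as a directory index
        DirectoryIndex index.php index.html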

    Read the article

  • Do you know of reputable backup software that can capture ONLY file system structure + attributes, WITHOUT file content

    - by bogdan
    Is there, on Windows, reputable backup software capable of capturing ONLY a file system's directory and file structure, along with each item's attributes, WITHOUT capturing the actual file content (all files should be zero-length in the backup)? I thoroughly searched the web for a solution and wasn't able to find one. Scenario when this would be very useful: I have a large drive with a huge amount of files. If the drive dies, I don't care so much about the content in these files (I can always download this content again from the Internet at any time) but I do care HUGELY about the names of the files that were on it, possibly also about their MD5 hashes and other classic file attributes (especially created date / modified date). The functionality I need is present to an extent in "media"/file cataloging software (e.g. WhereIsIt) and, to a lesser extent, in a set of Total Commander extensions (DiskDir, DiskDirExtended). The huge drawback with cataloging software is that it's not designed to store previous versions of each item (AFAIK) and, most importantly, it has very weak content backup capabilities. I managed to think of a hack, but I hope there's some backup software out there that already has this capability and I just failed to find it, thus this question. The hack: RoboCopy could be used with /CREATE (CREATE directory tree and zero-length files only) or /COPY (what to COPY for files) without the D=Data flag, to clone a directory structure into one where all files are zero-length but have the desired attributes. Then I would back up the cloned directory structure with a reputable backup program. I would really love to avoid a hack like this one, if possible. Thanks, Bogdan
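
    For reference, the RoboCopy hack described above, sketched as command lines (the source and destination paths are placeholders; check robocopy /? for switch availability on your Windows version):

        rem Variant 1: recreate the directory tree with zero-length files
        robocopy D:\data C:\structure-backup /E /CREATE

        rem Variant 2: copy attributes and timestamps, but no file data
        robocopy D:\data C:\structure-backup /E /COPY:AT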

    Read the article

  • Looking for a simple Windows scan (multiple pages) to one PDF application?

    - by Troggy
    I would like to find some simple scan software for a Windows machine that can scan to PDF, but I would like it to do batch or multiple pages into one big PDF. I saw a couple of questions on scan-to-PDF software, but did not see anything talking about scanning to large multiple-page PDFs. EDIT: I am surprised there are not more options out there. Do many of the scanners/all-in-one devices come with included software that performs this function? EDIT 2: I tried Scan2PDF and it locked up on me multiple times in the middle of the scan job and then gave me non-English error messages. Otherwise, I liked how simple the app was: just select the number of pages and hit OK. Any other success stories out there?

    Read the article

  • Right-aligning a button in a grid with possibly no content - stretch the grid to always fill the page

    - by Peter Perhác
    Hello people, I am losing my patience with this. I am working on a Windows Phone 7 application and I can't figure out what layout manager to use to achieve the following. Basically, when I use a Grid as the layout root, I can't make the grid stretch to the size of the phone application page. When the main content area is full, all is well and the button sits where I want it to sit. However, in case the page content is very short, the grid is only as wide as needed to accommodate its content, and then the button (which I am desperate to keep near the right edge of the screen) moves away from the right edge. If I replace the grid and use a vertically oriented stack panel for the layout root, the button sits where I want it, but then the content area is capable of growing beyond the bottom edge. So, when I place a listbox full of items into the main content area, it doesn't adjust its height to be completely in view; the majority of items in that listbox are just rendered below the bottom edge of the display area. I have tried using a third-party DockPanel layout manager and then docked the button in its top section and set the button's HorizontalAlignment="Right", but the result was the same as with the grid: it also shrinks in size when there isn't enough content in the content area (or when the title is short). How do I do this then? ==EDIT== I tried WPCoder's XAML, only I replaced the dummy text box with what I would have in a real page (a stack panel) and placed a listbox into the ContentPanel grid. I noticed that what I had before and what WPCoder is suggesting is very similar. Here's my current XAML, and the page still doesn't grow to fit the width of the page, so I get identical results to what I had before:

        <phone:PhoneApplicationPage
            x:Name="categoriesPage" x:Class="CatalogueBrowser.CategoriesPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
            xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:ctrls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
            FontFamily="{StaticResource PhoneFontFamilyNormal}"
            FontSize="{StaticResource PhoneFontSizeNormal}"
            Foreground="{StaticResource PhoneForegroundBrush}"
            SupportedOrientations="PortraitOrLandscape" Orientation="Portrait"
            mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="768"
            shell:SystemTray.IsVisible="True">
          <Grid x:Name="LayoutRoot" Background="Transparent">
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <Grid>
              <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="Auto" />
              </Grid.ColumnDefinitions>
              <StackPanel Orientation="Horizontal" VerticalAlignment="Center">
                <TextBlock Text="Browsing:" Margin="10,10" Style="{StaticResource PhoneTextTitle3Style}" />
                <TextBlock x:Name="ListTitle" Text="{Binding DisplayName}" Margin="0,10" Style="{StaticResource PhoneTextTitle3Style}" />
              </StackPanel>
              <Button Grid.Column="1" x:Name="btnRefineSearch" Content="Refine Search" Style="{StaticResource buttonBarStyle}" FontSize="14" />
            </Grid>
            <Grid x:Name="ContentPanel" Grid.Row="1">
              <ListBox x:Name="CategoryList" ItemsSource="{Binding Categories}" Style="{StaticResource CatalogueList}" SelectionChanged="CategoryList_SelectionChanged"/>
            </Grid>
          </Grid>
        </phone:PhoneApplicationPage>

    This is what the page with the above XAML markup looks like in the emulator (screenshot not included).

    Read the article

  • How can I clear content without getting the dreaded "stop running this script?" dialog?

    - by Cheeso
    I have a div that holds a div, like this:

        <div id='reportHolder' class='column'>
          <div id='report'>
          </div>
        </div>

    Within the inner div, I add a bunch (7-12) of pairs of a and div elements, like this:

        <h4><a>Heading1</a></h4>
        <div> ...content here....</div>

    The total size of the content is maybe 200k. Each div just contains a fragment of HTML. Within it, there are numerous <span> elements, containing other html elements, and they nest, to maybe 5-8 levels deep. Nothing really extraordinary, I don't think. After I add all the content, I then create an accordion, like this:

        $('#report').accordion({collapsible:true, active:false});

    This all works fine. The problem is, when I try to clear or remove the report div, it takes a looooooong time, and I get 3 or 4 popups asking "Do you want to stop running this script?" I have tried several ways.

    Option 1:

        $('#report').accordion('destroy');
        $('#report').remove();
        $("#reportHolder").html("<div id='report'> </div>");

    Option 2:

        $('#report').accordion('destroy');
        $('#report').html('');
        $("#reportHolder").html("<div id='report'> </div>");

    Option 3:

        $('#report').accordion('destroy');
        $("#reportHolder").html("<div id='report'> </div>");

    After getting a suggestion in the comments, I also tried option 4:

        $('#report').accordion('destroy');
        $('#report').empty();
        $("#reportHolder").html("<div id='report'> </div>");

    No matter what, it hangs for a long while. The call to accordion('destroy') seems to not be the source of the delay; it's the erasure of the HTML content within the report div. This is jQuery 1.3.2. EDIT - fixed code typo. PS: this happens on FF3.5 as well as IE8. Questions: What is taking so long? How can I remove content more quickly? Addendum: I broke into the debugger in FF, during "option 4", and the stacktrace I see is:

        data()
        trigger()
        triggerHandler()
        add()
        each()
        each()
        add()
        empty()
        each()
        each()
        (?)()       // <<-- this is the call to empty()
        ResetUi()   // <<-- my code, onclick

    I don't understand why add() is in the stack. I am removing content, not adding it. I'm afraid that in the context of the remove (all), jQuery does something naive, like it grabs the html content, does the text replace to remove one html element, then calls .add() to put back what remains. Is there a way to tell jQuery to NOT propagate events when removing HTML content from the dom?
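
    A sketch of a workaround that is commonly suggested for this kind of slowdown (not from the original thread): let accordion('destroy') unbind its own handlers, then wipe the markup with plain innerHTML instead of .empty()/.html(''), which walk every descendant to clear jQuery data and events. Note this skips jQuery's per-element cleanup, so it assumes nothing else bound handlers or data inside #report.

        // Hypothetical helper, not from the original question
        function resetReport() {
            $('#report').accordion('destroy');        // let the widget unbind its own handlers
            document.getElementById('reportHolder')   // raw DOM wipe: no per-node data() cleanup
                    .innerHTML = "<div id='report'></div>";
        }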

    Read the article

  • C#.NET: How to update multiple .NET pages when a particular event occurs in one .NET page? In other words, how to use the Observer pattern (publish and subscribe to events)

    Problem: Suppose you have a scenario in which you have to update multiple pages when an event occurs in a main page. For example, imagine you have a main page where you are displaying a tab control. This tab control has 3 tab pages where you are loading 3 different user controls. On click of an update button in the main page, imagine you have to do something in all the 3 tab panels. In other words, an event in the main page has to be handled in many other pages: an event in the main page which contains the tab control has to be handled in all the tab panels (user controls). Answer: Use the Observer pattern. Define a base page for the page that contains the tab control. Main page which contains the tab: Baseline_Baseline. Base page for the above main page: BaselineBasePage. User control that has to be updated for an event in the main page: Baseline_PriorNonDeloitte. Source code:

        public class BaselineBasePage : System.Web.UI.Page
        {
            IList<IObserver> lstControls = new List<IObserver>();

            public void Add(IObserver userControl)
            {
                lstControls.Add(userControl);
            }

            public void Remove(IObserver userControl)
            {
                lstControls.Remove(userControl);
            }

            public void RemoveAllUserControls()
            {
                lstControls.Clear();
            }

            public void Update(SaveEventArgs e)
            {
                // notify every registered user control
                foreach (IObserver LobjControl in lstControls)
                {
                    LobjControl.Update(e);
                }
            }
        }

        public interface IObserver
        {
            void Update(SaveEventArgs e);
        }

        public partial class Baseline_Baseline : BaselineBasePage
        {
            // ... the user controls are registered with the base page:
            //     this.Add(_ucPI);
            //     this.Add(_ucPI1);

            protected void abActionBar_saveClicked(object sender, EventArgs e)
            {
                SaveEventArgs se = new SaveEventArgs();
                se.TabType = (BaselineTabType)tcBaseline.ActiveTabIndex;
                this.Update(se);
            }
        }

        public class Baseline_PriorNonDeloitte : System.Web.UI.UserControl, IObserver
        {
            public void Update(SaveEventArgs e)
            {
            }
        }

    More info at: http://www.dofactory.com/Patterns/PatternObserver.aspx

    Read the article

  • How can I get my dynamic site search results content indexed by Google?

    - by Kris
    I have a site that is simply a search box to search a cloud-hosted database of .tiff images, and all of my content can only be accessed by entering a search term. So for example, you're on the home page www.example.com, you type "search" into the box and hit submit. Then it takes you to www.example.com/?q=search, which is a page of all my .tiff images with "search" in the description. How can I get a page like www.example.com/?q=search indexed, WITHOUT making a humongous list of search terms that people might type in? I know about mod_rewrite, but it seems like for that you need to know ahead of time which URLs you'll need to convert, which I don't. All of these pages will be dynamically generated by users typing into the search field. Please help!

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website and can showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
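
    For reference, a minimal sketch of the two measures mentioned above, with placeholder URLs (the canonical link points at the client's live page; a noindex robots meta tag is often added alongside it):

        # robots.txt at the root of the showcase subdomain
        User-agent: *
        Disallow: /

        <!-- in the <head> of each mirrored page -->
        <link rel="canonical" href="http://www.client-canonical-site.com/" />
        <meta name="robots" content="noindex, nofollow" />

    One caveat worth noting: if robots.txt blocks crawling entirely, search engines may never fetch the pages and so never see the canonical or noindex tags, which is why some developers rely on the meta tag (or an X-Robots-Tag response header) instead of the robots.txt block.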

    Read the article

  • Can we 301 redirect to a new page, but still publish the old content somewhere else?

    - by KBS
    We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. Information on these pages is time-sensitive.

        Old: example.com/2013-related-information.html
        New: example.com/2014-related-information.html

    The obvious solution is to delete the old page and do a 301 redirect to the new page. Now, can we still keep the old page by giving it a new URL?

        example.com/2013-related-information.html is 301-redirected to example.com/2014-related-information.html
        example.com/2013-related-information.html is recreated at a new address such as example.com/new-2013-related-information.html

    What we are trying to do is send the user to the fresh page but still not destroy the record copy, in case someone wants to go and dig up the old information.
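
    A sketch of how this is commonly wired up on an Apache host (hypothetical .htaccess, assuming mod_alias is available; the archived copy simply lives at its new address and is not redirected):

        # send visitors and link equity from the old URL to the current page
        Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html

        # the old content is re-published as /new-2013-related-information.html,
        # which needs no rule here; consider rel="canonical" or noindex on it if
        # you do not want it competing with the 2014 page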

    Read the article

  • Will the new site keep the Google traffic from the old site when moving content over? [closed]

    - by user1324762
    Possible Duplicate: new domain, old links are 301'd from old domain to new, how will this affect my rankings? I have a site about bikers. Now I have created a dating site for bikers. I don't need the old site any more; I want to move all articles to this new dating site. So basically, this is not only moving content to a new domain, but to an entirely new site. What I am planning to do is set up a 301 redirect for all 200 articles. For pages that are not articles, I will just put up a message that the site will be down soon. Do you think that I will get all the Google traffic from the old site for those articles? Is there anything I should be aware of and careful about?

    Read the article

  • How to use a database to generate content pages for multiple folders? [migrated]

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will enable users to download music & movies to their cell phones... Problem example: Suppose I upload 100 folders of songs, where each folder is one album. I want a way to generate a page with all the folder names (album names) on it. If users click an album on that page, they should be taken to a page with a list of all the songs in that album. Clicking on any song name will let them download it. Can this be done somehow, or will I have to manually design each of the 3 pages for each album? If I do that, it's time-consuming and it will also be difficult to change anything like the footer or header...
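
    This can be done with a couple of generic, folder- or database-driven handlers rather than hand-made pages. As a rough illustration of the idea (framework-agnostic, and the paths are hypothetical), a sketch in Python:

        import os

        MUSIC_ROOT = "/var/www/media/albums"   # hypothetical upload location

        def list_albums():
            """Data for the single 'all albums' page: every sub-folder is an album."""
            return sorted(
                name for name in os.listdir(MUSIC_ROOT)
                if os.path.isdir(os.path.join(MUSIC_ROOT, name))
            )

        def list_songs(album):
            """Data for the single 'album' page: every file in the folder is a song."""
            album_dir = os.path.join(MUSIC_ROOT, album)
            return sorted(
                name for name in os.listdir(album_dir)
                if os.path.isfile(os.path.join(album_dir, name))
            )

        # A web framework would map URLs like /albums and /albums/<name> to these
        # two functions and render the results through one shared template, so the
        # header and footer are defined once and each download link points at the file.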

    Read the article

  • How to concatenate the contents of all children of a node in XPath?

    - by Brian
    Is it possible with XPath to get a concatenated view of all of the children of a node? I am looking for something like the JQuery .html() method. For example, if I have the following XML:

        <h3 class="title">
          <span class="content">this</span>
          <span class="content"> is</span>
          <span class="content"> some</span>
          <span class="content"> text</span>
        </h3>

    I would like an XPath query on "h3[@class='title']" that would give me "this is some text". That is the real question, but if more context/background is helpful, here it is: I am using XPath and I used this post to help me write some complex XSL. My source XML looks like this:

        <h3 class="title">Title</h3>
        <p>
          <span class="content">Some</span>
          <span class="content"> text</span>
          <span class="content"> for</span>
          <span class="content"> this</span>
          <span class="content"> section</span>
        </p>
        <p>
          <span class="content">Another</span>
          <span class="content"> paragraph</span>
        </p>
        <h3 class="title">
          <span class="content">Title</span>
          <span class="content"> 2</span>
          <span class="content"> is</span>
          <span class="content"> complex</span>
        </h3>
        <p>
          <span class="content">Here</span>
          <span class="content"> is</span>
          <span class="content"> some</span>
          <span class="content"> text</span>
        </p>

    My output XML considers each <h3> as well as all <p> tags until the next <h3>. I wrote the XSL as follows:

        <xsl:template match="h3[@class='title']">
          ...
          <xsl:apply-templates select="following-sibling::p[
            generate-id(preceding-sibling::h3[1][@class='title'][text()=current()/text()])
            = generate-id(current())
          ]"/>
          ...
        </xsl:template>

    The problem is that I use the text() method to identify h3s that are the same. In the example above, the "Title 2 is complex" title's text() method returns whitespace. My thought was to use a method like JQuery's .html that would return me "Title 2 is complex".
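
    For what it's worth, XPath 1.0's string-value already does this concatenation: converting an element to a string (explicitly with string(), or implicitly in xsl:value-of and in string comparisons) yields all of its descendant text nodes joined in document order. A sketch, with normalize-space() added as an assumption to tidy the whitespace between the spans:

        <!-- yields "this is some text" -->
        <xsl:value-of select="normalize-space(h3[@class='title'])"/>

        <!-- comparing whole string-values instead of text() in the sibling test -->
        <xsl:apply-templates select="following-sibling::p[
          generate-id(preceding-sibling::h3[1][@class='title']
                      [normalize-space(.) = normalize-space(current())])
          = generate-id(current())
        ]"/>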

    Read the article

  • Extract news links from news website

    - by Ali
    Is there any reliable method to find the collection of links that direct us to a detail news page? In other words, after visiting the front page of a website, I just want those links that refer to a news item. Any solution?
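
    There is no single reliable rule, since every site marks up its front page differently; a common heuristic is to collect anchors whose URLs look like article permalinks (a date or a long numeric id in the path) or whose anchor text is headline-length. A rough sketch of that idea in Python with BeautifulSoup 3 (the regex and thresholds are illustrative guesses to tune per site, not a general solution):

        import re
        import urllib2
        from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

        def candidate_news_links(front_page_url):
            html = urllib2.urlopen(front_page_url).read()
            soup = BeautifulSoup(html)
            links = []
            for a in soup.findAll('a', href=True):
                href = a['href']
                text = ''.join(a.findAll(text=True)).strip()
                # heuristics: article-style URL, or headline-length anchor text
                article_like_url = re.search(r'/20\d\d/\d{1,2}/|/\d{5,}', href)
                if article_like_url or len(text.split()) >= 5:
                    links.append(href)
            return links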

    Read the article

  • Get the rendered text from HTML (Delphi)

    - by Daisetsu
    I have some HTML and I need to extract the actual written text from the page. So far I have tried using a web browser and rendering the page, then going to the document property and grabbing the text. This works, but only where the browser is supported (the IE COM object). The problem is I want this to be able to run under Wine also, so I need a solution that doesn't use IE COM. There must be a programmatic way to do this that is reasonable.

    Read the article

  • jQuery jqGrid breaks when contentType=application/json?

    - by JK
    I've had to use $.ajaxSetup() to globally change the contentType to application/json:

        $.ajaxSetup({ contentType: "application/json; charset=utf-8" });

    (See this question for why I had to use application/json: http://stackoverflow.com/questions/2792603/aspnet-mvc-why-is-modelstate-isvalid-false-the-x-field-is-required-when-that) But this breaks the jQuery jqGrid with the error "Invalid JSON primitive: _search". The POST data it is trying to send is:

        _search=false&nd=1274042681880&rows=20&page=1&sidx=&sord=asc

    which is not in JSON format, so of course it fails. Is there any way to tell jqGrid what content type to use? I have searched the jqGrid wiki, but it doesn't have much documentation about anything, really. http://www.trirand.com/jqgridwiki/doku.php?do=search&id=contenttype&fulltext=Search
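
    One workaround that is often suggested (a sketch, not something from the question itself) is to drop the global $.ajaxSetup() override and set the JSON content type only where it is needed: per $.ajax call for the MVC actions, or per grid via jqGrid's own ajax options if the grid request itself should be JSON. The handler names and URLs below are hypothetical:

        // JSON only on the calls that actually send JSON:
        $.ajax({
            url: '/Home/Save',
            type: 'POST',
            contentType: 'application/json; charset=utf-8',
            data: JSON.stringify(model),
            success: onSaved
        });

        // ...or, if the grid's request must be JSON, configure just that grid
        // (ajaxGridOptions / serializeGridData are the jqGrid options for this):
        $('#grid').jqGrid({
            url: '/Home/GridData',
            datatype: 'json',
            mtype: 'POST',
            ajaxGridOptions: { contentType: 'application/json; charset=utf-8' },
            serializeGridData: function (postData) {
                return JSON.stringify(postData);   // wraps _search, page, rows, sidx, sord
            }
            // colModel etc. go here
        });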

    Read the article

  • Getting BeautifulSoup to find a specific <p>

    - by Ryan
    I'm trying to put together a basic HTML scraper for a variety of scientific journal websites, specifically trying to get the abstract or introductory paragraph. The current journal I'm working on is Nature, and the article I've been using as my sample can be seen at http://www.nature.com/nature/journal/v463/n7284/abs/nature08715.html. I can't get the abstract out of that page, however. I'm searching for everything between the <p class="lead">...</p> tags, but I can't seem to figure out how to isolate them. I thought it would be something simple like

        from BeautifulSoup import BeautifulSoup
        import re
        import urllib2

        address = "http://www.nature.com/nature/journal/v463/n7284/full/nature08715.html"
        html = urllib2.urlopen(address).read()
        soup = BeautifulSoup(html)
        abstract = soup.find('p', attrs={'class' : 'lead'})
        print abstract

    Using Python 2.5, BeautifulSoup 3.0.8, running this returns 'None'. I have no option of using anything else that needs to be compiled/installed (like lxml). Is BeautifulSoup confused, or am I?

    Read the article

  • Response.TransmitFile() with UNC share (ASP.NET)

    - by frankadelic
    In the comments of this page: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx ...it says that TransmitFile() cannot be used with UNC shares. As far as I can tell, this is the case; I get this error in the Event Log when I attempt it:

        TransmitFile failed. File Name: \\myshare1\e$\file.zip, Impersonation Enabled: 0, Token Valid: 1, HRESULT: 0x8007052e

    The suggested alternative is to use WriteFile(); however, this is problematic because it loads the file into memory. In my application, the files are 200MB, so this is not going to scale. Is there a method in ASP.NET for streaming files to users that is scalable (doesn't read the entire file into RAM or tie up ASP.NET threads) and works with UNC shares? Mapping a network drive as a virtual directory is not an option for us. I would like to avoid copying the file to the local web server as well. Thanks
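
    For the memory half of the problem, a chunked copy to Response.OutputStream is the usual fallback; below is a minimal sketch (not the poster's code) that reads from a UNC path the application pool identity can access. Note it does not solve the thread half: unlike TransmitFile, it keeps an ASP.NET worker thread busy for the duration of the download.

        using System.IO;
        using System.Web;

        public static class FileStreamer
        {
            // Streams a file to the client in 64 KB chunks without buffering it all in memory.
            public static void Stream(HttpResponse response, string uncPath)
            {
                response.BufferOutput = false;
                response.ContentType = "application/octet-stream";
                response.AppendHeader("Content-Disposition",
                    "attachment; filename=" + Path.GetFileName(uncPath));

                byte[] buffer = new byte[64 * 1024];
                using (FileStream fs = new FileStream(uncPath, FileMode.Open,
                                                      FileAccess.Read, FileShare.Read))
                {
                    int read;
                    while (response.IsClientConnected &&
                           (read = fs.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        response.OutputStream.Write(buffer, 0, read);
                    }
                }
            }
        }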

    Read the article

  • How do you parse HTML in VB.NET?

    - by tooleb
    I would like to know if there is a simple way to parse HTML in VB.NET. I know that HTML is not a strict subset of XML, but it would be nice if it could be treated that way. Is there anything out there that would let me parse HTML in an XML-like way in VB.NET?
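
    One library commonly used for this, though not mentioned in the question, is HtmlAgilityPack: it tolerates malformed HTML but exposes an XPath/DOM-style API usable from any .NET language. A minimal VB.NET sketch:

        ' Requires a reference to HtmlAgilityPack.dll
        Imports HtmlAgilityPack

        Module HtmlParsingExample
            Sub Main()
                Dim html As String = _
                    "<html><body><p>Hello <b>world</b><a href='/x'>link</a></body></html>"

                Dim doc As New HtmlDocument()
                doc.LoadHtml(html)   ' tolerant of unclosed/invalid markup

                ' XPath-style querying, even though the input is not valid XML
                For Each anchor As HtmlNode In doc.DocumentNode.SelectNodes("//a[@href]")
                    Console.WriteLine(anchor.GetAttributeValue("href", ""))
                Next

                ' Plain text of the whole document
                Console.WriteLine(doc.DocumentNode.InnerText)
            End Sub
        End Module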

    Read the article
