Search Results

Search found 14037 results on 562 pages for 'master pages'.

Page 113/562 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Google search results are downloaded as a file in Google Chrome

    - by i-g
    I'm behind a proxy at work, and Google Chrome insists on downloading Google search results pages instead of displaying them. Whether I search from the address bar, from google.com, or from a third-party site that has a Google search form, the results page ends up as a downloaded file called "search" in my downloads directory. I haven't seen this happen with any other search pages; Yahoo! Search, for example, works fine. Has anyone run into this before and/or got any ideas on how to fix it or what might be causing it? I'd try the Chrome support pages, but they're blocked by the proxy...

    Read the article

  • Download/update webpages listed in XML sitemap

    - by unor
    I'm searching for a FLOSS tool that downloads all pages (and embedded resources, e.g. images) linked in an XML sitemap (built according to http://www.sitemaps.org/). The tool should "crawl" the sitemap regularly, look for new and deleted URLs, and check for changes in the lastmod element. So whenever a page gets added/deleted/updated, the tool should apply the changes. Some sitemaps list sub-sitemaps in a sitemapindex; the tool should understand this, load all linked sub-sitemaps and look for URLs in there. I know there are tools that allow me to extract all URLs from the sitemap, so that I could feed them to wget or similar tools (see for example: Extract Links from a sitemap(xml)). But this wouldn't help in getting notified about updates to pages. Tracking the webpages themselves for updates doesn't work, because "secondary" content on the pages changes daily, but lastmod only gets updated when relevant content changed.
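
    A minimal sketch of that kind of sitemap-driven mirror, assuming Python and only its standard library (the sitemap URL, state file and download directory are hypothetical placeholders; embedded resources would still need a real mirroring tool such as wget -p hooked into the download step):

        # sketch_sitemap_sync.py - a minimal sketch, not a finished tool
        import os
        import time
        import urllib.request
        import xml.etree.ElementTree as ET

        NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
        STATE_FILE = "lastmod_state.txt"   # hypothetical local state file

        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return resp.read()

        def load_state():
            state = {}
            if os.path.exists(STATE_FILE):
                with open(STATE_FILE) as f:
                    for line in f:
                        url, _, lastmod = line.rstrip("\n").partition(" ")
                        state[url] = lastmod
            return state

        def save_state(state):
            with open(STATE_FILE, "w") as f:
                for url, lastmod in state.items():
                    f.write(f"{url} {lastmod}\n")

        def walk_sitemap(url):
            """Yield (page_url, lastmod) pairs, following sitemapindex files recursively."""
            root = ET.fromstring(fetch(url))
            if root.tag.endswith("sitemapindex"):
                for loc in root.findall("sm:sitemap/sm:loc", NS):
                    yield from walk_sitemap(loc.text.strip())
            else:
                for entry in root.findall("sm:url", NS):
                    loc = entry.find("sm:loc", NS).text.strip()
                    lastmod = entry.findtext("sm:lastmod", default="", namespaces=NS)
                    yield loc, lastmod

        def sync(sitemap_url, download_dir="mirror"):
            state = load_state()
            seen = set()
            for page_url, lastmod in walk_sitemap(sitemap_url):
                seen.add(page_url)
                if state.get(page_url) != lastmod:      # new or updated page
                    name = page_url.rstrip("/").rsplit("/", 1)[-1] or "index"
                    os.makedirs(download_dir, exist_ok=True)
                    with open(os.path.join(download_dir, name), "wb") as f:
                        f.write(fetch(page_url))        # page only; resources need wget -p
                    state[page_url] = lastmod
            for gone in set(state) - seen:              # forget deleted URLs
                del state[gone]
            save_state(state)

        if __name__ == "__main__":
            while True:
                sync("http://www.example.com/sitemap.xml")   # hypothetical sitemap URL
                time.sleep(3600)                             # re-check hourly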

    Read the article

  • Two-page view in Word, shouldn't the first page be on the right?

    - by Cylindric
    Greetings Superusers, I'm putting together a lengthy document in Word, and it's going to be printed and bound duplex. I've put the page numbers "outside" etc., and all is pretty. The problem is, in the "Two Pages" view, it puts p1 on the left, then p2 on the right, then p3 below on the left, and p4 on the right:

        p1 p2
        p3 p4
        p5 p6

    Shouldn't this be slightly different though? When I get to print it, p1 is on the right, not the left, so the preview should go:

           p1
        p2 p3
        p4 p5
        p6

    because when I "open" the book, it's pages 2 and 3 that are side by side. This makes layout tweaking confusing, because it's not instantly obvious which pages will be "visible" to the reader at the same time. Have I missed something? I can't just put a blank page first, because that would bugger up the printing, as the printer automatically duplexes and binds etc. (Office 2008, by the way)

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slowish to generate, and it seems that Apache is holding on until it has the complete page, compressing it, then sending it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
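
    For what it's worth, mod_deflate works on the response in chunks rather than strictly whole pages, so the stall usually comes from buffering in PHP or in mod_deflate's own buffer. A hedged sketch of the knobs involved, assuming mod_deflate is loaded and a stock php.ini (the values are illustrative, not tuned):

        # httpd.conf - compress HTML and keep the deflate buffer small
        AddOutputFilterByType DEFLATE text/html
        DeflateBufferSize 4096

        # php.ini - stop PHP from holding the whole page back
        output_buffering = Off
        zlib.output_compression = Off
        implicit_flush = On

    With buffering off, calling flush() in the PHP script after each large chunk should push that chunk through Apache (and mod_deflate) to the browser straight away.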

    Read the article

  • Why is awstats reporting my static IP instead of Domain Name?

    - by Austin
    In AWStats, under "Links from an external page (other web sites except search engines)", it has generated a list of pages that link to my site. I see pages like Bing, YouTube, HotFrog, etc. However, many of the entries are actually internal links from my own pages. Towards the bottom it is reporting the following:

        http://72.249.150.9/distributors.php 5 2.4 %
        http://72.249.150.9/contact/ 5 2.4 %
        http://72.249.150.9/catalog/ 4 1.9 %
        http://72.249.150.9/flex-point-hockey-grip.php 5 2.4 %
        http://72.249.150.9/sticky-grip-foam.php 5 2.4 %
        http://72.249.150.9/video.php 10 4.8 %
        http://72.249.150.9/dealers/ 5 2.4 %
        http://72.249.150.9/feedback/ 5 2.4 %
        http://72.249.150.9/products.php 10 4.8 %
        http://72.249.150.9/ergo-hockey-grip.php 5 2.4 %
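
    If the root cause is that AWStats does not know the bare IP belongs to the same site, the usual fix is to declare it in the site's configuration so those referrers are filed as internal links rather than external ones. A hedged sketch of the relevant directives in the site's awstats.<site>.conf (the domain name is a placeholder; the IP is the one from the report):

        SiteDomain="www.example.com"
        # every name/address this site answers on, including the bare static IP
        HostAliases="example.com www.example.com 72.249.150.9 127.0.0.1 localhost"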

    Read the article

  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was accidentally unplugged from the server. On plugging it back in, pages were NOT loading at all; instead, we were receiving a generic ColdFusion error page. After restarting IIS several times and ColdFusion even more than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly. Loading through the local network (i.e. localhost/CFIDE/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary... I'm just not sure what you might need in order to help. Thanks for your time.

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (e.g. 404, etc.):
    1. Broken links, i.e. links from old, external pages that point to pages that are no longer here
    2. Sequences of probes, i.e. some jerk trying to hack in by scanning my server for a series of exploitable admin-type pages and such
    3. What appear to be completely random requests for things that have never existed on the server or even have anything to do with the server, and that appear by themselves (i.e. not a series of requests like the probes)
    Could it somehow be a mistyped URL or IP? That's about the only thing I can think of, but still, how could I get a request on, say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43? (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Printing All Changes to MediaWiki Series of Articles

    - by Jason
    I have a MediaWiki site that I am responsible for. My management has recently asked to see the changes to a specific series of documents within MediaWiki (i.e., they basically want to see the output of the "changes" log). I was wondering two things:
    1. Is there a way to "nicely" print out this log so it clearly shows the various changes that were made to a document? The information I need to print out is spread across multiple pages.
    2. Using whatever comes out of step 1, is it possible to specify that I print out only a subset of pages? (I'm talking about a lot of pages - ~135 of them or so.)
    Please let me know if you need clarification. Thanks!
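
    One way to collect that log outside the wiki UI is the MediaWiki API's revisions query, whose output can then be printed or pasted into a document. A minimal sketch, assuming Python with the requests package and that the wiki exposes api.php (the wiki URL and page titles are placeholders):

        # dump_history.py - pull the revision history for a list of pages (sketch)
        import requests

        API = "https://wiki.example.com/api.php"   # hypothetical wiki endpoint
        PAGES = ["Document One", "Document Two"]   # the ~135 titles would go here

        def revisions(title):
            """Yield (timestamp, user, comment) for every revision of one page."""
            params = {
                "action": "query",
                "format": "json",
                "prop": "revisions",
                "titles": title,
                "rvprop": "timestamp|user|comment",
                "rvlimit": "500",
            }
            while True:
                data = requests.get(API, params=params).json()
                for page in data["query"]["pages"].values():
                    for rev in page.get("revisions", []):
                        yield rev["timestamp"], rev["user"], rev.get("comment", "")
                if "continue" not in data:
                    break
                params.update(data["continue"])   # follow API pagination

        for title in PAGES:
            print(f"== {title} ==")
            for ts, user, comment in revisions(title):
                print(f"{ts}  {user}  {comment}")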

    Read the article

  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iscsi server (16GB RAM) running on Infiniband bus (ipoib). When the load is high I can see multiple errors: Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20 Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1 Sep 3 23:22:20 stor4 kernel: Call Trace: Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940 Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170 Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270 Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320 Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360 Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50 Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0 Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100 Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310 Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0 Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0 Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30 Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420 Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180 Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20 Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40 Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480 Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30 Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20 Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? 
system_call_fastpath+0x16/0x1b Sep 3 23:22:20 stor4 kernel: Mem-Info: Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181 Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0 Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0 Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0 Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744 Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887 Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM Sep 3 23:22:20 stor4 kernel: 126915 pages reserved Sep 3 23:22:20 stor4 kernel: 3753534 pages shared Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared

    TCP stack and VM config:
        net.core.rmem_max = 83886080
        net.core.wmem_max = 83886080
        net.core.rmem_default = 65536
        net.core.wmem_default = 65536
        net.ipv4.tcp_rmem = 40960 1048560 4194304
        net.ipv4.tcp_wmem = 40960 196608 4194304
        net.ipv4.tcp_mem = 16388608 16388608 16388608
        vm.min_free_kbytes=135168

    Additional tweaks:
        /sbin/blockdev --setra 16384 /dev/sdb
        echo 2048 > /sys/block/sdb/queue/nr_requests

    Where might the problem be? Thank you.

    Read the article

  • Auto-crop black margins dynamically of scanned images?

    - by naxa
    I have a notebook that was photocopied and the photocopy scanned, about 200 pages. For various reasons I need to print this material. There are large black areas at the sides of each page (after the page itself ends) - "black margins". I would like to remove the black areas while keeping all the text.
    - The even and odd pages have the black part in different places.
    - Notably, there is a white edge outside the black one, too!
    - Most notably, the black areas have no fixed width (I've tried to overlay all the images for even and odd pages separately); the width varies, so the batch algorithm should be able to detect it.
    Is there a way to remove these black-white margins automatically, keeping the text?
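
    One batch approach is to detect, for each edge, the innermost row/column that is still almost entirely black and crop just inside it; that removes both the black band and the white edge outside it. A heuristic sketch, assuming Python with Pillow and numpy (the folder names and thresholds are made-up placeholders and would need tuning on the real scans):

        # crop_black_margins.py - heuristic sketch, not a polished tool
        import glob
        import os
        import numpy as np
        from PIL import Image

        DARK = 128    # pixel values below this count as "black"
        BAND = 0.80   # a row/column that is >80% black is treated as margin band

        def crop(path, out_dir="cropped"):
            img = Image.open(path).convert("L")
            dark = np.asarray(img) < DARK
            h, w = dark.shape

            def inner_bounds(frac, size):
                """Crop limits along one axis: just inside the outermost black bands."""
                third = size // 3                      # only look near the edges
                band_idx = np.flatnonzero(frac > BAND)
                lo = band_idx[band_idx < third].max() + 1 if (band_idx < third).any() else 0
                hi = band_idx[band_idx >= size - third].min() if (band_idx >= size - third).any() else size
                return lo, hi

            left, right = inner_bounds(dark.mean(axis=0), w)   # per-column black fraction
            top, bottom = inner_bounds(dark.mean(axis=1), h)   # per-row black fraction

            os.makedirs(out_dir, exist_ok=True)
            img.crop((left, top, right, bottom)).save(os.path.join(out_dir, os.path.basename(path)))

        for path in glob.glob("scans/*.png"):   # hypothetical input folder
            crop(path)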

    Read the article

  • 500 Internal Server Error after moving Joomla installation to new environment

    - by rad
    (This is the first time I have moved a website, so please don't be hard on me.) After moving the website, the homepage shows up properly but other pages do not; I get a 500 Internal Server Error on all other pages. Before moving, "Search Engine Friendly URLs" and "Use URL rewriting" were enabled in the Joomla dashboard. Is this the reason the other pages are not showing up? If so, how do I fix it? I think the homepage shows up because the URL myWebsite.com redirects to myWebsite.com/index.php automatically. Note that I have transferred all of the Joomla files through FileZilla, imported the MySQL database properly, and edited configuration.php to set the proper database settings.
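
    With "Use URL rewriting" enabled, Joomla's SEF URLs depend on the rewrite rules shipped in htaccess.txt; on a new host those rules are often missing because the file was never renamed to .htaccess or mod_rewrite is not active. A hedged sketch of the relevant part, assuming an Apache host where overrides are allowed (paths are illustrative):

        # .htaccess in the Joomla root (renamed from Joomla's shipped htaccess.txt)
        RewriteEngine On

        # send requests that are not real files or directories to index.php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php [L]

    If the rewrite rules cannot be restored on the new host, turning "Use URL rewriting" off in the Joomla configuration is the other way to make the inner pages resolve again.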

    Read the article

  • google webmaster soft 404 on 301

    - by Daniel
    I'm seeing in Google Webmaster Tools that my site is generating soft 404 errors (https://support.google.com/webmasters/answer/181708?hl=en). Google says: "We recommend that you always return a 404 (Not found) or a 410 (Gone) response code in response to a request for a non-existing page." But I've got redirects set up that send the old pages to the proper new pages using a 301. The website's links changed because of a switch to a framework, which makes them more consistent, but there are still links out there to the old pages. Should I be worried about this? Is Google penalizing the site for this? (Using IIS 8, Tomcat, CF10, Win)
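
    Google often reports a soft 404 when the redirect target looks irrelevant or thin - for example when many retired URLs all land on the home page - so checking where the flagged URLs actually end up is usually the first step. For reference, a plain per-URL 301 in IIS looks roughly like the sketch below, assuming the URL Rewrite module (rule name and paths are placeholders):

        <!-- web.config (IIS URL Rewrite module): one permanent redirect per retired URL -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="old-product-page" stopProcessing="true">
                <match url="^old/product\.cfm$" />
                <action type="Redirect" url="/products/new-name" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>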

    Read the article

  • Split a table in Word without losing row title

    - by Shane Hsu
    Word has a feature to repeat the title row of a table when the table is so long that it spans several pages. I need to split my data into categories, one category per page, and I did that by splitting the table and inserting page breaks so each category sits on its own page. Now I have several pages of data, but only the first page has the title row. Is there any way to do this besides manually adding the title row to all the other pages? Original data:

        _________________
        | Cat.    Data    |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   2      *      |
        |   2      *      |
        |   2      *      |
        |   2      *      |
        |   3      *      |
        |___3______*______|

    And then turn it into:

        _________________
        | Cat.    Data    |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |   1      *      |
        |___1______*______|

    Next page:

        _________________
        | Cat.    Data    |
        |   2      *      |
        |   2      *      |
        |   2      *      |
        |___2______*______|

    Next page:

        _________________
        | Cat.    Data    |
        |   3      *      |
        |___3______*______|

    Read the article

  • Windows server response time very high

    - by Nagaraju Bandla
    Server specs:
    - Windows Server 2008 R2, 64-bit
    - Provider: Fasthosts
    - .NET Framework: 4.0
    - 6 GB RAM (it's using 4.6 GB)

    I have a website with thousands of pages structured like:

        folderone/1/one to 500.aspx
        folderone/2/one to 500.aspx
        ...
        folderone/500/one to 500.aspx

    Loading these pages for the first time after a release takes about 20 to 30 minutes per folder; once one page has loaded, the rest of the pages in that folder load fine. This happens for all folders, and it repeats every time I restart the server, add anything to App_Code, or change the web.config. My site gets most of its traffic from Google, and because of this problem it is serving errors. Any help will be highly appreciated - I'm happy to buy you a beer if it's resolved. Thanks in advance...
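
    That first-hit delay is most likely ASP.NET batch-compiling each folder on demand; precompiling the site as part of the release avoids paying that cost per folder at runtime. A hedged example using the aspnet_compiler tool that ships with the .NET Framework (the site path is a placeholder and the framework path may differ on your install):

        rem precompile in place so nothing needs re-deploying
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -p "D:\sites\mysite" -v /

        rem or precompile to a separate folder and deploy that output instead
        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_compiler.exe -p "D:\sites\mysite" -v / "D:\sites\mysite_precompiled"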

    Read the article

  • When load balancing, must all copies of static web page be exactly the same?

    - by Gilles Blanchette
    I am used to finding answers for everything on the web, but not this time... Yesterday I enabled Amazon's weighted DNS routing to load balance 7 websites between two different IP addresses (split 50%-50%). Both servers run IIS 8.5, and the sites run well on both. Today I found out that Google Webmaster Tools is reporting fetch errors for robots.txt, with close to 50% of access attempts failing. The robots.txt file is fine and accessible (even via Google's URL testing page) on both servers. Let's say the current version of the static web pages is on the first computer and an updated version of the same pages is on the second computer. Could that be the problem? When load balancing, can static web pages be slightly different from one host server to the other? Thank you for your help

    Read the article

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server the form will not submit; it gives a 500 error and the following shows in Event Viewer: "Error: The Template Persistent Cache initialization failed for Application Pool 'DefaultAppPool' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes." I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince

    Read the article

  • Google Chrome not using local cache

    - by Steve
    Hi. I've been using Google Chrome as a substitute for Firefox, which can't handle having lots of tabs open at the same time. Unfortunately, it looks like Chrome has the same problem. Freakin useless. I had to kill Chrome as my whole system had slowed to a crawl. When I restarted it, I opted to restore the tabs that were last open. At this stage, every one of the 20+ tabs started downloading the pages they previously had open. My question is: why can't they open a locally stored/saved copy of the web page from cache? Does Google Chrome store pages in a cache? Also: after most of the pages had finished downloading, I clicked on each tab to view the page. Half of them only display a white page, and I have to reload the page manually. What is causing this? Thanks for your help.

    Read the article

  • Varnish cache and PHP session; setting header?

    - by StCee
    By default Varnish will not cache pages with cookies. I read in some posts that one workaround for PHP pages is to set header('Cache-Control: public, s-maxage=60'); in the PHP pages. But would that make Varnish cache the page along with the session cookie? A session is started on that page, and although there is nothing personal on it, I would still want the session to persist in case the user does something private later. So is there a way to cache the page without the session cookie, and still be able to pass the session between pages? I can imagine some sort of weird solution with a hidden form, but I would prefer it to be done with VCL configuration or a header setting. Thanks a lot!
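
    One pattern that fits the "VCL configuration" preference: for the non-personal URLs only, strip the Cookie header before lookup (the browser still keeps its session cookie and sends it to every other page) and strip Set-Cookie from the stored response so nobody's session ID ends up in the cache. A hedged sketch in Varnish 3-style VCL (the URL pattern and TTL are placeholders):

        sub vcl_recv {
            # cacheable, non-personal pages: ignore the session cookie for lookup
            if (req.url ~ "^/news/") {
                unset req.http.Cookie;
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/news/") {
                # don't let a Set-Cookie header make the object uncacheable,
                # and don't store anyone's session ID in the cache
                unset beresp.http.Set-Cookie;
                set beresp.ttl = 60s;
            }
        }

    The trade-off is that requests for that page reach PHP without the cookie (or not at all, on a cache hit), so the session is neither started nor refreshed there; it has to come from some uncached page.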

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now):
    1) Xeon 5520 @ 2.27 GHz (6 GB memory)
    2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory)
    For most pages the newer E5-2620 processes the pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference I'm missing, since I'm shooting for identical implementations?

    Read the article

  • Table Formatting in Word

    - by user359217
    I have a table in Word which is 5 columns wide with multiple rows. In row 3, cells 1, 2, 3 & 5 have simple text. Cell 4 contains a large quantity of text and therefore needs to wrap over several pages, so I tick "Allow row to break across pages". Problem: on the next page, where the row has wrapped, cells 1, 2, 3 & 5 are blank, with cell 4 displaying the wrapped text. Is there any way I can get the simple text from row 3, cells 1, 2 and 3 to repeat on the pages which contain the wrapped text of cell 4? I do not want the data to be in the table heading, as I have multiple rows with a similar volume of text.

    Read the article

  • Binding menu items to a sitemap.

    - by Ricardo Deano
    Hello all.. this is driving me nuts. I have a navigation menu I would like to display based upon user roles (using .NET Membership). After several hours and headaches (from banging my head against the desk) I was wondering if someone can point out the error of my ways.

    Page:

        <body>
          <form runat="server">
            <div class="page">
              <div class="header">
                <div class="loginDisplay">
                  <asp:LoginView ID="HeadLoginView" runat="server" EnableViewState="false">
                    <AnonymousTemplate>
                      <a href="~/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a>
                    </AnonymousTemplate>
                    <LoggedInTemplate>
                      Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /></span>!
                      [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/Open/Close.aspx"/> ]
                    </LoggedInTemplate>
                  </asp:LoginView>
                </div>
                <div class="clear hideSkiplink">
                  <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" IncludeStyleBlock="False" Orientation="Horizontal" DataSourceID="AugustSiteMap" />
                  <asp:SiteMapDataSource ID="AugustSiteMap" runat="server" ShowStartingNode="false"/>
                </div>
              </div>

    SiteMap:

        <?xml version="1.0" encoding="utf-8" ?>
        <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
          <siteMapNode url="~/Default.aspx" title="Home" description="Home">
            <siteMapNode title="Open Pages" description="Open Pages">
              <siteMapNode url="~/Open/Login.aspx" title="Login Page" description="Login Page" roles="*"/>
              <siteMapNode url="~/Open/Close.aspx" title="Thank you for using Valpak Data Solutions Online Reporting" description="Thank you for using Valpak Data Solutions Online Reporting" roles="*"/>
            </siteMapNode>
            <siteMapNode title="Logged In Open Pages" description="Logged In Open Pages">
              <siteMapNode url="~/Landing.aspx" title="Landing Page" description="Landing Page" roles="*"/>
              <siteMapNode url="~/ContactUs.aspx" title="Contact Us" description="Contact Us" roles="*"/>
            </siteMapNode>
            <siteMapNode title="Restricted Pages" description="Resticted Pages">
              <siteMapNode url="~/Restricted/ProductSearch.aspx" title=" Product Search" description=" Product Search" roles="*"/>
              <siteMapNode url="~/Restricted/ReportOutput.aspx" title="Report Output" description="Report Output" roles="Admin"/>
            </siteMapNode>
          </siteMapNode>
        </siteMap>

    Webconfig:

        <roleManager enabled="true" />
        <siteMap defaultProvider="XmlSiteMapProvider" enabled="true">
          <providers>
            <add name="XmlSiteMapProvider" description="AugustSiteMap" type="System.Web.XmlSiteMapProvider " siteMapFile="AugustSiteMap.sitemap" securityTrimmingEnabled="true" />
          </providers>
        </siteMap>

    How can I ensure that when the user is logged in, the appropriate menu items are displayed on the Landing page? Please excuse my ignorance. Still new to all of this, and my current method of 'trial and error' has seen me reach suicide levels this morning!

    Read the article

  • Form submission info showing up in URL and not working

    - by kcurtin
    I am making a Rails 3.1 app and have a signup form that was working fine, but I seem to have changed something to break it. I'm using Twitter Bootstrap and the twitter_bootstrap_form_for gem. I made some change that messed with the formatting of the form fields, but more importantly, when I submit the Sign Up form to create a new User, the information shows up in the URL, like this (EDIT: this is happening in the latest versions of Chrome and Firefox):

        http://localhost:3000/?utf8=%E2%9C%93&authenticity_token=UaKG5Y8fuPul2Klx7e2LtdPLTRepBxDM3Zdy8S%2F52W4%3D&user%5Bemail%5D=kevinc%40example.com&user%5Bpassword%5D=testing&user%5Bpassword_confirmation%5D=testing&commit=Sign+Up

    Here is the code for the form:

        <div class="span7">
          <h3 class="center" id="more">Sign Up Now!</h3>
          <%= twitter_bootstrap_form_for @user do |user| %>
            <%= user.email_field :email, :placeholder => '[email protected]' %>
            <%= user.password_field :password %>
            <%= user.password_field :password_confirmation, 'Confirm Password' %>
            <%= user.actions do %>
              <%= user.submit 'Sign Up' %>
            <% end %>
          <% end %>
        </div>

    Here is the code for the UsersController:

        class UsersController < ApplicationController
          def new
            @user = User.new
          end

          def create
            @user = User.new(params[:user])
            if @user.save
              redirect_to about_path, :notice => "Signed up!"
            else
              render 'new'
            end
          end
        end

    Not sure if there is more you need, but if so let me know! Thank you!

    Edit: for debugging I tried specifying :post and also using a plain form_for:

        <%= form_for(@user, :method => :post) do |f| %>
          <div class="field">
            <%= f.label :email %>
            <%= f.email_field :email %>
          </div>
          <div class="field">
            <%= f.label :password %>
            <%= f.password_field :password %>
          </div>
          <div class="field">
            <%= f.label :password_confirmation %>
            <%= f.password_field :password_confirmation %>
          </div>
          <div class="actions"><%= f.submit "Sign Up" %></div>
        <% end %>

    This gives me the same problem as above. Adding routes.rb:

        Auth31::Application.routes.draw do
          get "home" => "pages#home"
          get "about" => "pages#about"
          get "contact" => "pages#contact"
          get "help" => "pages#help"
          get "login" => "sessions#new", :as => "login"
          get "logout" => "sessions#destroy", :as => "logout"
          get "signup" => "users#new", :as => "signup"
          root :to => "pages#home"
          resources :pages
          resources :users
          resources :sessions
          resources :password_resets
        end

    Read the article

  • Speaker at developer conferences and user group meetings

    Catching up on a couple of sessions I did in the past, this article gives an overview of some of my activities, mainly at the annual German Visual FoxPro Developer Conference, also known as the SQL-Server & ASP.NET Conference, in Frankfurt. The entries listed below are excerpts from the original Conference Coverage documents you'll find on UniversalThread.
    - German Visual FoxPro Developer Conference 2002 (1 session - vendor session about Active FoxPro Pages 3.0)
    - German Visual FoxPro Developer Conference 2003 (2.5 sessions - Visual FoxPro running on Linux)
    - German Visual FoxPro Developer Conference 2004 (4 sessions - 2x Active FoxPro Pages, VFP on Linux, and VFP using additional databases)
    - German Visual FoxPro Developer Conference 2005 (4 sessions - RegEx, XML, XSLT, and using free (as in beer) development tools)
    - German Visual FoxPro Developer Conference 2006 (3 sessions - .NET interop via COM, writing my own CLR host in VFP, and Active FoxPro Pages)
    Furthermore, I did a couple of (hopefully) interesting sessions at various user group meetings in Speyer and Stuttgart. A more comprehensive list is available under Presentations (in German). And last but not least, back in May 2005 Microsoft Germany invited me to host a WebCast for MSDN on how to use 'Visual FoxPro mit Visual Studio 2005'. Unfortunately, I was too inexperienced and too nervous (first time ever), we had technical issues with the microphone, and the obviously low quality of the recording meant it was replaced by a whole series on Visual FoxPro 9.0. The webcast covered the same topics I have already described in other articles here on my blog. Despite the disaster I'd like to thank Ralf Westphal for his kind words afterwards - I really felt bad. Eventually, you might ask yourself why it all stopped by the end of 2006... Well, new chapter in my life: Mauritius!

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website served at the URL "http://www.plugimmo.com", which is a custom domain served by Google App Engine at http://plugimmo.appspot.com. For a while I have tried to optimise the Google indexing and ranking, with no success. The problem is that searching on Google for the keywords in the title of my home page does not retrieve my website at all, not even in the first 1,000 results. When checking Google's cached version (cache:www.plugimmo.com), it says the cached version is from 20-Aug-12 and is for "plugimmo.appspot.com". It looks like there are several issues:
    1. The cached version is really old. I have made a lot of changes since 20-Aug-12, and I have seen Googlebot crawling my site several times.
    2. The cached version is for "plugimmo.appspot.com".
    3. Looking at Google Webmaster Tools, I see that the number of pages indexed for www.plugimmo.com is 0, but that can't be right given the number of changes I have made since then.
    My questions would therefore be the following:
    - Why is the cached version so old, although I have seen Googlebot crawling the site many times since 20-Aug-12?
    - Is there a problem with indexing a custom domain served by Google App Engine?
    - Why is Google Webmaster Tools showing 0 pages indexed although new pages have been crawled and no errors have been reported in the indexing?
    Also, the site has been developed with Google Web Toolkit, and I have followed the guidelines regarding crawling Ajax sites. The home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html. Thanks a lot for your help! Hugues

    Read the article

  • Blogger.com kills FTP

    - by Daniel Moth
    History (you can safely ignore)
    Back in 2002 I came across some (almost) free Linux/Apache space and set up my first manually-created HTML-based home page, which still exists: http://www.danielmoth.com/. In 2004 I wanted a blog that would be hosted in a sub-folder of my domain, and at the same time I did not want to mess with setting up a blog engine myself. I found the perfect solution in blogger.com, which offered a web interface for creating blog posts (and managing the pages' template) and would then use FTP to upload the HTML pages to my space (no server-side programming or installation required at all)!

    FTP feature dropped by blogger.com
    Unfortunately, along the way Google purchased blogger.com, and a couple of months ago they announced that they had decided to kill the FTP feature and are forcing customers who use it to have their content hosted (in an opaque way) on Google's servers. Even though I prefer having my content on my own space, I would have considered moving it to Google's servers if I could host my blog in a sub-folder and preserve my full blog URL: http://www.danielmoth.com/Blog/ (including my home pages being hosted at the root of the domain). Sadly, that is not possible.

    What now
    So I decided to move my blog somewhere else. I'll document in the next few posts how I did that (inc. a tool I wrote), in case it helps someone else in the same situation and also as a reminder to me if I need to do something like this again in the future. Comments about this post are welcome at the original blog.

    Read the article
