Search Results

Search found 25493 results on 1020 pages for 'custom wordpress pages'.


  • How to fix some damages from site hack?

    - by Towhid
    My site was hacked. I found the vulnerability, fixed it, and removed the shell scripts, but the hacker had uploaded thousands of web pages to my web server. After I removed those pages I was left with over 4,000 "Not Found" pages on my site (all linked from an external free domain and host, which has since been taken down). Hundreds of keywords had also been added to my site. Three weeks later, I can still see keywords from the removed pages in my Google Webmaster Tools. I used to have the first result on Google for certain keywords, but now I am on the third page for the same keywords. 50% of my traffic was from Google, and that has now dropped to 6%. How can I fix both the "Not Found" pages problem and the useless new keywords? And will that be enough to get me back to the first result on Google? P.S.: 1) Both the vulnerability and the uploaded files are certainly removed. 2) My site is not infected; I checked on Google Webmaster Tools and a few other security web scan tools. 3) All the files had been uploaded into one directory, so I got URLs like site.com/hacked/page1.html and site.com/hacked/webpage2.html.
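
    One approach (a sketch, assuming an Apache host, since the platform isn't stated) is to answer every URL under the old hacked directory with 410 Gone rather than 404, which tells crawlers the pages are permanently removed rather than temporarily missing:

        # .htaccess at the site root; mod_alias syntax
        # /hacked/ is the directory the attacker used, per the P.S. above
        RedirectMatch gone ^/hacked/

    Pairing this with a directory removal request in Google Webmaster Tools tends to clear the leftover URLs and keywords from the index faster than waiting for plain 404s to age out.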

    Read the article

  • Removing Duplicate entries in grub2 Ubuntu 9.10

    - by Anders
    I have made a custom grub2 menu; however, both the default and the custom entries show together, so my grub menu looks like the list below (in the original post my custom entries were bolded, but that formatting has been lost):

        ubuntu,linux ...
        ubuntu,linux recovery
        memtest
        memtest
        windows7
        windows7
        ubuntu linux
        ubuntu linux recovery

    How do I get rid of the duplicates? I have tried apt-get remove, and I have tried marking and removing older kernels, but I am a bit lost. Thanks in advance! This is how I made my custom grub, by the way: I copied and pasted the menuentry code from grub.cfg into the custom file and just renamed the titles, so it would be perfectly clear for a user who doesn't want to know what version number it is.
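
    If the duplicates are the stock entries that /etc/grub.d still generates alongside the custom file, a common fix (a sketch; the script names are the Ubuntu 9.10 defaults, so check your own /etc/grub.d) is to stop the stock generators from running and rebuild the menu:

        # Stop the stock generators whose entries you duplicated in 40_custom
        sudo chmod -x /etc/grub.d/10_linux /etc/grub.d/20_memtest86+ /etc/grub.d/30_os-prober
        # Rebuild /boot/grub/grub.cfg from the scripts that remain executable
        sudo update-grub

    This leaves only the hand-written entries in the menu; restoring the defaults later is just chmod +x on the same files.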

    Read the article

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file, by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won't get any IntelliSense on any of your related assemblies that you might be using by default. It'll give IntelliSense on the base System namespaces, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates:

        • Add the templates as included files to your non-Web project
        • Add a BIN folder to your template's folder and add all required assemblies there

    Including Templates in your Host Project

    By including templates into your Razor hosting project, Visual Studio will pick up the project's assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder of the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block:

        @{
            CustomContext Model = Context as CustomContext;
        }

    After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case:

        @using RazorHostingWinForm

    which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter' to the template with the default template I've provided, realize that you can also assign a complex object to Context. For example, you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed:

        @{
            ContextContainer container = Context as ContextContainer;
            CustomContext Model = container.Model;
            CustomDAO DAO = container.DAO;
        }

    and so forth.

    IntelliSense for your Custom Template

    Notice also that you can get IntelliSense for the top-level template by specifying an inherits tag at the top of the document:

        @inherits RazorHosting.RazorTemplateFolderHost

    By specifying the above you can then get IntelliSense on your base template's properties. For example, in my base template there are Request and Response objects. This is very useful, especially if you end up creating custom templates that include your custom business objects, as you can effectively see full IntelliSense from the 'page' level down. For Html Help Builder, for example, I'd have a Help object on the page and, assuming I have the references available, I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template, and as long as it inherits from the base template it'll work properly.
    Since the last post I've also made some changes in the base template that allow hooking up some simple initialization logic, so it gets much easier to create custom templates and hook up custom objects with an InitializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example, the default implementation for RazorTemplateFolderHost does this:

        public override void InitializeTemplate(object context, object configurationData)
        {
            // Pick up configuration data and stuff into Request object
            RazorFolderHostTemplateConfiguration config =
                configurationData as RazorFolderHostTemplateConfiguration;
            this.Request.TemplatePath = config.TemplatePath;
            this.Request.TemplateRelativePath = config.TemplateRelativePath;

            // Just use the entire ConfigData as the model, but in theory
            // configData could contain many objects or values to set on
            // template properties
            this.Model = config.ConfigData as TModel;
        }

    to set up a strongly typed Model and the Request object. You can do much more complex hookups here, of course, and create complex base template pages that contain all the objects that you need in your code with strong typing.

    Adding a Bin folder to your Template's Root Path

    Including templates in your host project works if you own the project and you're the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool, the original project is likely not available, and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC'd assembly references that need to be associated with the templates. Between the web.config and bin folder, Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain:

        • The RazorHosting.dll
        • Your host project's EXE or DLL – renamed to .dll if it's an .exe
        • Any external (bin folder) dependent assemblies

    Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn't recognize an EXE reference, so you have to rename the EXE to DLL to make it work. Apparently the binary signatures of EXE and DLL files are identical and it just works – learn something new every day… For GAC assembly references you can add a web.config file to your template root. The web.config file then should contain full assembly references to any GAC components:

        <configuration>
          <system.web>
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
                <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
                <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
              </assemblies>
            </compilation>
          </system.web>
        </configuration>

    And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project, Visual Studio will complain about reference conflicts, as it's effectively seeing both the project references and the ones in the bin folder.
    So it's probably a good idea to use one or the other, but not both at the same time :-) Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you're shipping an application-level scripting solution especially, it'll be really useful for your template consumers/users to be able to get some quick help on creating customized templates – after all, that's what templates are all about: easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea, and it's great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor.

    Read the article

  • Page output caching for dynamic web applications

    - by Mike Ellis
    I am currently working on a web application where the user steps (forward or back) through a series of pages with "Next" and "Previous" buttons, entering data until they reach a page with the "Finish" button. Until finished, all data is stored in Session state, then sent to the mainframe database via web services at the end of the process. Some of the pages display data from previous pages in order to collect additional information; these pages can never be cached because they are different for every user. Pages that don't display this dynamic data can be cached, but only the first time they load. After that, the previously entered data needs to be displayed, which requires Page_Load to fire, which means the page can't be cached at that point. A couple of weeks ago I knew almost nothing about implementing page caching. Now I still don't know much, but I know a little bit, and here is the solution that I developed with the help of others on my team and a lot of reading and trial-and-error.

    We have a base page class defined from which all pages inherit. In this class I have defined a method that sets the caching settings programmatically. Pages that can be cached call this base page method in their Page_Load event within an if (!IsPostBack) block, which ensures that only the page itself gets cached, not the data on the page.

        if (!IsPostBack)
        {
            ...
            SetCacheSettings();
            ...
        }

        protected void SetCacheSettings()
        {
            Response.Cache.AddValidationCallback(new HttpCacheValidateHandler(Validate), null);
            Response.Cache.SetExpires(DateTime.Now.AddHours(1));
            Response.Cache.SetSlidingExpiration(true);
            Response.Cache.SetValidUntilExpires(true);
            Response.Cache.SetCacheability(HttpCacheability.ServerAndNoCache);
        }

    The AddValidationCallback sets up an HttpCacheValidateHandler method called Validate, which runs logic when a cached page is requested. The Validate method signature is standard for this method type.

        public static void Validate(HttpContext context, Object data, ref HttpValidationStatus status)
        {
            string visited = context.Request.QueryString["v"];
            if (visited != null && "1".Equals(visited))
            {
                status = HttpValidationStatus.IgnoreThisRequest; // force a page load
            }
            else
            {
                status = HttpValidationStatus.Valid; // load from cache
            }
        }

    I am using the HttpValidationStatus values IgnoreThisRequest or Valid, which force the Page_Load event method to run or allow the page to load from cache, respectively. Which one is set depends on the value in the querystring. The value in the querystring is set up on each page in the "Next" and "Previous" button click event methods, based on whether the page that the button click is taking the user to has any data on it or not.

        bool hasData = HasPageBeenVisited(url);
        if (hasData)
        {
            url += VISITED;
        }
        Response.Redirect(url);

    The HasPageBeenVisited method determines whether the destination page has any data on it by checking one of its required data fields. (I won't include it here because it is very system-dependent.) VISITED is a string constant containing "?v=1" and gets appended to the url if the destination page has been visited.
    The reason this logic is within the "Next" and "Previous" button click event methods is that 1) the Validate method is static, which doesn't allow it to access non-static data such as the data fields of a particular page, and 2) at the time the Validate method runs, the data either has not yet been deserialized from Session state or is not available (different AppDomain?); any time I accessed the Session state information from the Validate method, it was always empty.

    Read the article

  • Rebuilt website from static html to CMS need to redirect indexed links

    - by Michael Dunn
    I have rebuilt a website that was originally created entirely with static HTML pages; it has now been rebuilt using a CMS. I need to find a way to redirect all the existing links to their new corresponding pages, which use friendly URL rewrites on the CMS-based website. I imagine there will be several hundred if not thousands, as I have pages and images linked from Google. What is the most efficient way to complete this? Thanks in advance, Mike
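
    A minimal sketch of the usual mechanics, assuming the host is Apache and the old URLs can be enumerated (the paths here are hypothetical):

        # .htaccess at the site root: one permanent redirect per indexed page
        Redirect 301 /about-us.html /about
        Redirect 301 /services/widgets.html /services/widgets
        # For hundreds of URLs, generating these lines from a two-column
        # old-path/new-path list is less error-prone than typing them.

    If the CMS offers a redirect module (most do), importing the same two-column list there achieves the same result without touching server configuration.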

    Read the article

  • Idea of an algorithm to detect a website's navigation structure?

    - by Uwe Keim
    Currently I am in the process of developing an importer that brings any existing, arbitrary (static) HTML website into the upcoming release of our CMS. While downloading the files is solved successfully, I'm pulling my hair out when it comes to detecting the site structure (pages and subpages) purely from the HTML files, without the user specifying additional hints. Basically I want to get a tree like:

        + Root page 1
            + Child page 1
            + Child page 2
                + Child child page 1
            + Child page 3
        + Root page 2
            + Child page 4
        + Root page 3
        + ...

    I.e. I want to be able to detect the menu structure from the links inside the pages. This doesn't have to be 100% accurate, but I want to achieve at least more than just a flat list. I thought of looking at multiple pages to find similar areas, identifying these as menu areas, and parsing the links there, but after all I'm not that satisfied with this idea. My question: can you imagine any algorithm for detecting such a structure? Update 1: What I'm looking for is not a web spider, but an algorithm to create a logical tree of the relationships between the pages, to be able to create pages and subpages inside my CMS when importing them. Update 2: As of Robert's suggestion, I'll solve this by starting at the root page and then simply parsing links as I go, treating every link inside a page as a child page. I'll probably recurse not in a depth-first but rather in a breadth-first manner, to get a more balanced navigation structure.
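
    A minimal C# sketch of the breadth-first idea from Update 2; PageNode and getLocalLinks are hypothetical stand-ins for the importer's own types, and the link extraction itself is assumed to exist already:

        using System;
        using System.Collections.Generic;

        class PageNode
        {
            public string Url;
            public readonly List<PageNode> Children = new List<PageNode>();
            public PageNode(string url) { Url = url; }
        }

        static class TreeBuilder
        {
            // Builds a page tree breadth-first, so each page attaches at the
            // shallowest depth at which any other page links to it.
            public static PageNode Build(string rootUrl,
                Func<string, IEnumerable<string>> getLocalLinks)
            {
                var root = new PageNode(rootUrl);
                var seen = new HashSet<string> { rootUrl };
                var queue = new Queue<PageNode>();
                queue.Enqueue(root);

                while (queue.Count > 0)
                {
                    var page = queue.Dequeue();
                    foreach (var link in getLocalLinks(page.Url))
                    {
                        if (!seen.Add(link))
                            continue; // already placed under a shallower parent
                        var child = new PageNode(link);
                        page.Children.Add(child);
                        queue.Enqueue(child);
                    }
                }
                return root;
            }
        }

    The breadth-first order is what keeps the tree balanced: a page linked from both the home page and a deep article ends up as a child of the home page, which is usually the menu-like relationship you want.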

    Read the article

  • I'm building a theme for tumblr and the {HasPages} doesn't seem to work properly.

    - by Muhammad
    I've put up the HTML draft of the theme so far, with minor CSS edits. Currently I have all the post blocks and everything else that's essential to a tumblr theme, but I can't seem to get the {block:HasPages} block to work properly. I've tested it on a different tumblr as well. There are pages created, and I have already provided some basic CSS for them just in case, but nothing is showing up. Has anyone had this problem, and if so, is there a solution I'm missing? The code to display the pages is included:

        {block:HasPages}
        <ul>
            <li><a href="/">Home</a></li>
            {block:Pages}<li><a href="{URL}">{Label}</a></li>{/block:Pages}
        </ul>
        {/block:HasPages}

    Also, is this a valid webmasters' question? I'm not sure.

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site. The client wanted the site updated to reflect the new look and feel that is being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here. It worked well, except for a few pages on a subsite. On those pages, I got the following error:

        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.
        Parser Error Message: Code blocks are not allowed in this file.

    I decided to go comb through my new master page and compare it to the existing master page that was already working. After going through them line by line several times, I had no clue what would be causing the error, because they were basically the same! It turns out it was a combination of two things. First, on a few of the pages in the site, there was some include code (basically an <% EVAL() %> snippet). This was the code that was triggering my error "Code blocks are not allowed in this file". However, this code was working fine with the previous master page. I decided to then try doing a full deployment of the site with the new master page, and it worked fine! Apparently, if the master page is deployed using a Feature, then it is granted permission to allow code blocks, but if you upload pages either using the web UI or SharePoint Designer, then the pages won't be able to use code blocks. I haven't been able to pin down the rules or official info about this, but I thought others might find it useful anyway.
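
    For anyone who cannot redeploy via a Feature, the workaround I've seen used (a sketch; the VirtualPath is hypothetical) is to explicitly allow code blocks for those pages in the web application's web.config, inside the <SharePoint><SafeMode> section:

        <PageParserPaths>
          <PageParserPath VirtualPath="/subsite/pages/*"
                          CompilationMode="Always"
                          AllowServerSideScript="true"
                          IncludeSubFolders="true" />
        </PageParserPaths>

    This trades away some of SafeMode's protection for those paths, so it is worth scoping the VirtualPath as narrowly as possible.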

    Read the article

  • Slow dvd burning/reading speeds: how to solve

    - by wouter205
    I have a problem with which I've been struggling since I started using Linux on my desktop a year ago, and I still haven't found a solution. When reading or burning a DVD, the speeds are very slow (mostly under 1x), even though I selected the fastest speed in k3b. As a result, it takes 40-50 minutes to burn one DVD! I read about enabling DMA in this post, but it didn't help. This is the output of dmesg | grep -i dma:

        [    0.000000] DMA 0x00000010 -> 0x00001000
        [    0.000000] DMA32 0x00001000 -> 0x00100000
        [    0.000000] DMA zone: 56 pages used for memmap
        [    0.000000] DMA zone: 5 pages reserved
        [    0.000000] DMA zone: 3921 pages, LIFO batch:0
        [    0.000000] DMA32 zone: 3527 pages used for memmap
        [    0.000000] DMA32 zone: 254441 pages, LIFO batch:31
        [    0.000000] Policy zone: DMA32
        [    0.120356] pnp 00:01: [dma 4]
        [    0.120968] pnp 00:05: [dma 2]
        [    0.121421] pnp 00:06: [dma 3]
        [    0.122617] pnp 00:0b: [dma 0 disabled]
        [    0.852321] ata1: SATA max UDMA/133 cmd 0xec00 ctl 0xe480 bmdma 0xe000 irq 19
        [    0.852325] ata2: SATA max UDMA/133 cmd 0xe400 ctl 0xe080 bmdma 0xe008 irq 19
        [    0.861633] ata3: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xff00 irq 14
        [    0.861636] ata4: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xff08 irq 15
        [    1.329411] ata1.00: ATA-7: Maxtor 6V250F0, VA111630, max UDMA/133
        [    1.345418] ata1.00: configured for UDMA/133
        [    1.820606] ata4.00: ATAPI: PHILIPS DVDR1660P1, P1.3, max UDMA/33
        [    1.820610] ata4.00: WARNING: ATAPI DMA disabled for reliability issues. It can be enabled
        [    1.820613] ata4.00: WARNING: via pata_ali.atapi_dma modparam or corresponding sysfs node.
        [    1.836681] ata4.00: configured for UDMA/33
        [   12.296600] parport0: PC-style at 0x378 (0x778), irq 7, dma 3 [PCSPP,TRISTATE,COMPAT,EPP,ECP,DMA]

    Reading the third- and fourth-last lines, I assume there is indeed a problem with DMA? Edit: this question is still not solved. Could anyone come up with another solution, please? Thanks
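
    The two WARNING lines name the knob directly: the pata_ali driver ships with ATAPI DMA off and exposes an atapi_dma module parameter. A sketch of enabling it persistently (worth testing carefully, since the driver disables it "for reliability issues"):

        # Tell the pata_ali module to allow ATAPI DMA on next load
        echo "options pata_ali atapi_dma=1" | sudo tee /etc/modprobe.d/pata_ali.conf
        # Reboot, then confirm the drive is no longer limited:
        dmesg | grep ata4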

    Read the article

  • Website development from scratch v/s web framework [duplicate]

    - by Ali
    This question already has an answer here: What should every programmer know about web development? Do people develop websites from scratch when there are no particular requirements, or do they just pick up an existing web framework or CMS like Drupal, Joomla, or WordPress? The requirements are almost similar in most cases: if personal, it will be a blog or image gallery; if corporate, it will be information pages that can be updated dynamically, along with a news section. And similarly, there are other requirements that can be fulfilled by WordPress, Joomla, or Drupal. So, is it advisable to develop a website from scratch, and why? Update: to explain more, as I got a comment from @Raynos (thanks for the comment and for helping me clarify the question), the question is: Should websites be developed and designed fully from scratch? Should they be done using a framework like Spring, Zend, or CakePHP? Should they be done using a CMS like Joomla, WordPress, or Drupal (people in the East are using these as frameworks)?

    Read the article

  • SEO optimisation problems after Google Panda [on hold]

    - by Daniel West
    I am currently trying to improve a website's SEO after it took quite a hit from the Google Panda updates. What are the main things I need to look at when trying to improve its ranking in Google? I have already made sure that the pages validate to W3C standards, minified the CSS and JS, and done the obvious meta tag and header optimization, but this hasn't made any difference yet. It could possibly be a content issue, as the pages currently read much like a brochure, and there were some pages with just a video and no text content on them, which is also an issue. I've added a rel="nofollow" attribute to the links to these pages, although I'm told this doesn't really work anymore. If anyone has any ideas, let me know. Cheers!

    Read the article

  • Creating an ASP.NET Dynamic Web Page Using MS SQL Server 2008 Database (GridView Display)

    Dynamic pages (pages that pull, insert, update, and delete data or content from a database) are extremely useful in modern websites. They provide a high level of user interactivity that improves the user experience. This article will show you how to create such pages in ASP.NET using a Microsoft SQL Server 2008 database.

    Read the article

  • How to Make Your Page Titles Keyword Rich

    In addition to including meta tags in your web pages, one of the most effective traffic generation techniques is to include one of your main keywords in the page title tag. If you have a website with several pages, this should be done for all the pages of your website. Including the main keywords in your titles is known to be one of the best traffic techniques, helping to improve your website's ranking in search engines.
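
    As a hedged illustration (the keyword and site name are made up), the difference is simply between a generic title and one that leads with the keyword:

        <!-- Generic: -->       <title>Home</title>
        <!-- Keyword-rich: -->  <title>Blue Widgets - Handmade Blue Widgets Shop</title>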

    Read the article

  • How to handle possible duplicate content across multiple sites?

    - by ElHaix
    Let's say I have two sites that cover the same vertical/topic, one in the USA and one in Canada. Both sites have local content, which is obviously unique by location; however, they will share common news or blog pages. How do I avoid getting hit with duplicate content on both sites for those news/blog pages? If the content is exactly the same, I'm guessing I would have to pick which site's content I want to noindex,nofollow. Is that correct, and if so, is that all I have to add, on the URL links to those pages and in the pages' meta tags?
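
    The tag in question would go in the <head> of the duplicated news/blog pages on whichever site you de-prioritize; nothing is needed on the links themselves. A cross-domain rel="canonical" pointing at the preferred copy is the alternative usually suggested, since it consolidates ranking signals instead of discarding the page (the URL below is hypothetical):

        <!-- Option 1: keep the page out of the index entirely -->
        <meta name="robots" content="noindex,nofollow">
        <!-- Option 2: stay indexable but credit the preferred copy -->
        <link rel="canonical" href="http://usa-site.example/news/article-slug">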

    Read the article

  • Function inside .profile results in no log-in

    - by bioShark
    I've created a custom function in my .profile, added right at the bottom after my custom aliases:

        # custom functions
        function eclipse-gtk {
            cd ~/development/eclipse-juno
            ./eclipse_wb.sh &
            cd -
        }

    The function starts a custom version of my eclipse. After I added it, because I didn't want to log out and back in, I reloaded my profile with the command:

        . ~/.profile

    and then tested my function by calling eclipse-gtk, and it worked without any issue. Today when I booted, I couldn't log in: after providing my password, in a few seconds I was back at the log-in screen. Dropping to the command line using CTRL + ALT + F1, I commented out the function in my .profile, and logging in was possible without any issue. My question is: what did I do wrong when I wrote the function? And if there is something wrong, why did it work yesterday after reloading the profile? Thanks in advance. Using: Ubuntu 12.04
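
    A likely explanation, offered as a guess: the graphical log-in on Ubuntu runs ~/.profile under /bin/sh, which is dash, and dash accepts neither the bash-only "function name {" syntax nor hyphens in function names, so the whole log-in script aborts. Your interactive shell is bash, which is why ". ~/.profile" worked at the terminal. A rewrite both shells parse:

        # POSIX function syntax; note the underscore - dash also rejects
        # hyphens in function names
        eclipse_gtk () {
            cd ~/development/eclipse-juno
            ./eclipse_wb.sh &
            cd -
        }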

    Read the article

  • Rendering Unity across multiple monitors

    - by N0xus
    At the moment I am trying to get Unity to run across two monitors. I've done some research and know that this is, strictly speaking, possible: there is a workaround where you basically have to fluff your window size in order to get Unity to render across both monitors. What I've done is create a new custom screen resolution that takes in the width of both of my monitors, 3840 x 1080. However, when I go to run my Unity game exe, that size isn't available in the resolution dialog; my custom size should be at the very bottom of the list, but isn't. Is there something I haven't done, or missed, that will get Unity to take in my custom screen size when it comes to running my game through its exe? Oddly enough, inside the Unity editor my custom screen size is picked up and I can have the game window set to it. Is there something I have forgotten to do when I build and run the game from the file menu? Has someone ever beaten this issue before?
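
    One workaround worth trying (a sketch using the documented standalone-player switches; the exe name is hypothetical) is to skip the resolution dialog and force the size from the command line or shortcut target:

        MyGame.exe -popupwindow -screen-width 3840 -screen-height 1080

    -popupwindow creates a borderless window, which tends to cooperate better with a resolution spanning two monitors than an ordinary fullscreen mode does.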

    Read the article

  • I want to consolidate two sites into a third. Will my search engine rankings be penalized if I rewrite and redirect pages one by one?

    - by Patrick Kenny
    I have two Drupal sites with different content-- let's call them Apple and Orange. I recently developed a much more sophisticated third Drupal site-- let's call it Tree. For a large number of reasons, the content on Apple and Orange is useful for the users of Tree, so I want to move the content to Tree. However, much of the content is out of date. (This whole process took about five years.) To update the content, I will rewrite it one article at a time myself. Now here's my question: if I move the articles one by one (as I rewrite them) and then redirect the old articles (using a 301 redirect) on Apple/Orange to the new site on Tree, will this have a huge negative effect on my search engine rankings? Is there a good way to redirect among sites when they merge like this, or would I be better off keeping the old articles on Apple/Orange and simply linking them to the new, rewritten articles on Tree?
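
    Mechanically, the staged approach just means the old sites accumulate one mapping per rewritten article (a sketch with hypothetical paths, assuming Apache; Drupal's Redirect module would do the same job from within the CMS):

        # On Apple/Orange, appended as each article goes live on Tree
        Redirect 301 /old-article-path http://tree.example/articles/new-slug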

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmasters and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a csv file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one csv file that has every url with a 404, but also has every url that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the csv output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmasters, get all the urls that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before), but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that problem is now fixed. Many of the "linked from" urls I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from url is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

    Read the article

  • HTTP 303 redirection and robots.txt

    - by Ian Dickinson
    On a site I'm working on, we're using the HTTP 303 redirect pattern (see this article for background) to distinguish between information and non-information resources. So: some URLs under /id get redirected to dynamically-created pages under /doc. These dynamic pages are built from a database and contain links to other /doc/ resources, so in general we don't want them to be crawled. Our robots.txt contains:

        Disallow: /doc

    However, we do want the non-redirected pages under /id to get indexed by Google et al:

        Allow: /id

    So the question I have, which I can't find an answer to so far, is: if an allowed /id page 303-redirects to a /doc page, will it still be blocked by robots.txt? If yes, we're OK; but otherwise I'm going to disallow all /id resources in the robots file, as having the crawler hammer the db would be worse than losing search indexing for the /id pages.

    Read the article

  • Constructive criticism for my bounce rate being so high [closed]

    - by Daniel
    The bounce rate on my website's product pages is 80%, which is terrible. Could you offer any opinions on whether you consider the user experience to be bad, and how I could possibly improve it? Other pages, such as the home and category pages, have acceptable bounce rates, but the vast majority of my traffic lands on the product pages. I already tried removing some Google ads for a couple of days, but this didn't seem to help at all. I'm working on doing A/B testing at the moment. (It's tricky, as the site is based on a CMS - I custom coded the [Joomla] component, so hopefully I can get this testing working.)

    Read the article

  • Best way to provide folder level 301 redirect

    - by Vinay
    I have a website hosted with Yahoo Small Business, so I don't have access to the .htaccess file. I have around 220 pages in a folder 'mysubfolder' (http://mysite.com/myfolder/mysubfolder), and the website is around 3 years old. I am planning to move all 220 pages from 'mysubfolder' to 'myfolder' (one level up). All the pages in 'mysubfolder' are indexed. What is the best way to do this so that it does not affect SEO?
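
    Without .htaccess, a real server-side 301 usually isn't available for static pages; the fallback most often used (a sketch with a hypothetical page name) is an instant meta refresh plus a canonical hint placed in each old page under 'mysubfolder':

        <meta http-equiv="refresh" content="0; url=http://mysite.com/myfolder/page1.html">
        <link rel="canonical" href="http://mysite.com/myfolder/page1.html">

    Search engines generally treat an immediate meta refresh much like a permanent redirect, though a true 301 remains preferable wherever the host allows it.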

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from http to https. I didn't consider that Google treats https pages differently than http. I re-created my sitemap so that all links now reflect the new https URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the https pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new https ones. The problem is that Google sees all of these as brand new pages, when many of them have a long http history. Of course, most of you will recognize the problem: I didn't set up a 301 from the old http to the new https. Is it too late to do this? Should I switch my sitemap back to http and then 301 to the new https? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the http site anymore. Currently the site is doing 303 redirects (from http to https), although I haven't figured out why yet. Thanks for any suggestions you can offer.
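
    It is generally not too late; the usual repair (a sketch, assuming an Apache front end) is to replace the 303s with a blanket 301 from every http URL to its https twin and leave the sitemap pointing at https:

        # mod_rewrite: permanent redirect for anything still arriving over http
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

    The 301 is what lets Google associate each https copy with its http history instead of treating it as a brand-new page.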

    Read the article

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an images gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images, and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide: We could noindex the separate image pages and lose any image search listings we have for the images, in favour of removing these thin pages from the index. We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself. If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance.

    Read the article

  • Access Offline or Overloaded Webpages in Firefox

    - by Asian Angel
    What do you do when you really want to access a webpage only to find that it is either offline or overloaded from too much traffic? You can get access to the most recent cached version using the Resurrect Pages extension for Firefox.

    The Problem: If you have ever encountered a website that has become overloaded and unavailable due to sudden popularity (i.e. Slashdot, Digg, etc.), then this is the result. No satisfaction to be had here…

    Resurrect Pages in Action: Once you have installed the extension you can add the Toolbar Button if desired… it will give you the easiest access to Resurrect Pages. Or you can wait for a problem to occur when trying to access a particular website. As you can see, there is a very nice selection of cache services to choose from, therefore increasing your odds of accessing a copy of that webpage. If you would prefer to have the access attempt open in a new tab or window, then you should definitely use the Toolbar Button: clicking on it brings up a popup window with those options, otherwise the access attempt will happen in the current tab. We tried the Google Listing, followed by the Google (text only) Listing. The results with the different services will depend on how recently the webpage was published/set up.

    View Older Versions of Currently Accessible Websites: Just for fun, we decided to try the extension out on the How-To Geek website to view an older version of the homepage. Using the Toolbar Button and clicking on The Internet Archive, we decided to try the Nov. 28, 2006 listing. As you can see, things have really changed between 2006 and now… Resurrect Pages can be very useful for anyone who is interested in how websites across the web have grown and changed over the years.

    Conclusion: If you encounter a webpage that is offline or overloaded by sudden popularity, the Resurrect Pages extension can help you get access to the information that you need using a cached version.

    Links: Download the Resurrect Pages extension (Mozilla Add-ons)

    Read the article
