Search Results

Search found 22992 results on 920 pages for 'custom pages'.

Page 29 of 920

  • duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site with the wrong URLs. The correct form is https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE, but Google indexed pages like https://www.foo.com/urunler/171/cart.aspx. I have fixed the bug and generated a sitemap containing only the correct links. Webmaster Tools now shows: Total indexed 513, Not selected 544, Blocked by robots 0. I suspect the duplicate indexing is what pushes pages into "Not selected". How do I clean up the "https://www.foo.com/urunler/171/cart.aspx" links? Should I fix this in code, ask Google to reindex the site, or redirect the wrong/duplicate links to the correct ones, and if so, what is the right way to do that? Thanks for your time in advance.
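
    A minimal sketch of the redirect route (my illustration, not the site's actual code): catch the mis-generated cart.aspx form of the URL early in the request pipeline and answer with a permanent (301) redirect to the canonical product address. The LookupProductSlug helper is hypothetical; the real site would read the slug from its product table.

        // Global.asax.cs: sketch only.
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string path = Request.Url.AbsolutePath;

            // Catch the wrongly indexed form, e.g. /urunler/171/cart.aspx
            var match = System.Text.RegularExpressions.Regex.Match(
                path, @"^/urunler/(\d+)/cart\.aspx$",
                System.Text.RegularExpressions.RegexOptions.IgnoreCase);

            if (match.Success)
            {
                string id = match.Groups[1].Value;
                string canonical = "/urunler/" + id + "/" + LookupProductSlug(id);
                Response.RedirectPermanent(canonical); // sends an HTTP 301 and ends the request
            }
        }

        // Hypothetical helper: in the real site this would read the product slug from the database.
        private string LookupProductSlug(string productId)
        {
            return "product-slug"; // placeholder
        }

    Response.RedirectPermanent requires .NET 4; on older versions, set Response.StatusCode = 301 and the Location header manually. Adding a rel="canonical" link on the correct page is a complementary option.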

    Read the article

  • Free tool to automatically deskew and crop PDF made up of scanned pages [closed]

    - by Pietro M.
    I have several PDFs made up of scans of book pages. Each scan covers two pages at a time, and some of the scans are skewed, so the text appears slightly tilted. I'm looking for a tool that can automatically deskew the scans without losing readability. I've found the GPL tool briss for cropping the scans to a 1:1 page ratio instead of 2:1, but I don't have anything to deskew the pages. I stumbled upon unpaper, another open-source tool that seems perfect for this, but it is Linux-only and doesn't work on PDF files directly. Any hint is appreciated. Thank you.

    Read the article

  • schema.org specification for generic pages or posts on a CMS

    - by NateWr
    I'm trying to determine the best schema.org type to declare for the content section in the template of a content management system that will handle regular news posts for small, local hospitality businesses. The type should represent the content of that page, which is likely to cover a wide range of things. The description of Article fairly strongly encourages limiting its use to the articles of a publication. For purely semantic reasons, I'm not sure Blog is appropriate either: businesses won't be creating typical "blog" content, but are more likely to write about upcoming events, special deals, awards, etc. Would WebPage be appropriate in this instance? Although I'm a fan of the schema.org concept, I'm frequently unsure how broadly or narrowly to interpret a type. In such cases, is it safe to use a high-level type such as CreativeWork, or does that blunt the usefulness of the markup?
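
    For concreteness, a hedged sketch of what the declaration could look like in a PHP template; the $post object and its fields are hypothetical, and the value placed in itemtype is exactly the open question above:

        <?php
        // Hypothetical template fragment: fall back to the generic WebPage type
        // when a post doesn't clearly fit a more specific schema.org type.
        $schemaType = in_array($post->kind, array('news', 'press'), true) ? 'Article' : 'WebPage';
        ?>
        <div itemscope itemtype="http://schema.org/<?php echo htmlspecialchars($schemaType); ?>">
            <h1 itemprop="name"><?php echo htmlspecialchars($post->title); ?></h1>
            <div itemprop="text"><?php echo $post->body; ?></div>
        </div>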

    Read the article

  • The Increasing Importance of SEO Content Pages

    It's a well-known fact that SEO content is extremely important for the popularity and search engine rankings of a site; however, many people are not keen on emphasizing its importance. These days, search engine professionals try to shape a site's content so that the overall effectiveness of the page is enhanced right from the back-end coding.

    Read the article

  • SQL SERVER - Data Pages in Buffer Pool - Data Stored in Memory Cache

    This will drop all the clean buffers, so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are in your cache? During SQL Server trainings, I am usually asked if there is any [...]

    Read the article

  • Need private personal access to ~three PHP pages

    - by Roger
    I would like secure access to the text output by three PHP scripts (the output is JavaScript and HTML). The security level is much lower than for financial data, but it matters nonetheless. I have considered purchasing and studying HTTPS/SSL certificates, but HostGator charges an extra $2/month for a private IP plus $50+ annually for a certificate, which is more than I want to spend on this project (in time and money). Is there a simpler solution that is less expensive and easier to implement? I'm open to different approaches.
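
    One low-cost approach, as a minimal sketch (the gate.php file and the key are my hypothetical names): require a long shared secret before serving the protected scripts. Without HTTPS the key still travels in clear text, so this only raises the bar; it is not equivalent to SSL.

        <?php
        // gate.php: require a long shared secret before serving a protected script.
        const ACCESS_KEY = 'replace-with-a-long-random-string';

        $supplied = isset($_GET['key']) ? $_GET['key'] : '';

        if (!hash_equals(ACCESS_KEY, $supplied)) {
            http_response_code(403);
            exit('Forbidden');
        }

        // Authorized: serve one of the three protected scripts.
        require __DIR__ . '/protected/script1.php';

    PHP's HTTP Basic Auth variables ($_SERVER['PHP_AUTH_USER'] and $_SERVER['PHP_AUTH_PW']) are another option, with the same clear-text caveat.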

    Read the article

  • Own mediawiki/wikipedia naming convention for pages

    - by Andy M
    I recently installed MediaWiki at home and I'm looking for a way to name pages. Let's say I have the following structure:
    Main - Dev - C# - Tips
    Main - Cooking - Mexican Cooking - Tips
    Main - Annoying my girlfriend - Tips
    Each final page is a different Tips page, so naming them all "Tips" won't work: I need three different pages. I could name each tips page after its full "path" (e.g. main_cooking_mexican_cooking_tips), but that looks cumbersome, and whenever I change the structure of the wiki, some pages will have to be renamed to stay correct. Is there an existing convention for this? Thanks for your help!

    Read the article

  • 1 ASPX Page, Multiple Master Pages

    - by csmith18119
    So recently I had an ASPX page that could be visited by two different user types: user type A would use Master Page 1 and user type B would use Master Page 2. I put together a proof of concept to see if it was possible to change the master page in code, and found a great article on the Microsoft ASP.NET website: Specifying the Master Page Programmatically (C#) by Scott Mitchell. I created a master page called Alternate.Master to act as a generic placeholder, along with Master1.Master and Master2.Master. The ASPX page, Default.aspx, uses Alternate.Master and sets the real master page programmatically in the Page_PreInit event:

        protected void Page_PreInit(object sender, EventArgs e)
        {
            var useMasterPage = Request.QueryString["use"];
            if (useMasterPage == "1")
                MasterPageFile = "~/Master1.Master";
            else if (useMasterPage == "2")
                MasterPageFile = "~/Master2.Master";
        }

    In the Default.aspx markup I have the following links:

        <p>
            <asp:HyperLink runat="server" ID="cmdMaster1" NavigateUrl="~/Default.aspx?use=1" Text="Use Master Page 1" />
        </p>
        <p>
            <asp:HyperLink runat="server" ID="cmdMaster2" NavigateUrl="~/Default.aspx?use=2" Text="Use Master Page 2" />
        </p>

    The basic idea is that when a user clicks the link for Master Page 1, the Default.aspx.cs code-behind sets the MasterPageFile property to Master1.Master; the link for Master Page 2 works the same way. It worked like a charm! To see the actual code, feel free to download a copy here: Project Name: Skyhook.MultipleMasterPagesWeb http://skyhookprojectviewer.codeplex.com

    Read the article

  • Learn How to Create Web Pages Using HTML Codes

    Once you understand the basic HTML codes, a wide range of opportunities opens up: you can publish content online and link text to other pages and sites. This article discusses the basic HTML codes, which are easy to understand because they are very logical.

    Read the article

  • re-direct SSL pages using header statement based on port

    - by bob's your brother
    I found this in the header.php file of an e-commerce site. Is this better done in a .htaccess file? Also, what happens to any POST parameters on a request that gets caught by the header redirect?

        // flip between secure and non-secure pages
        $uri = $_SERVER['REQUEST_URI'];

        // move to secure SSL pages if required
        if (substr($uri, 1, 12) == "registration") {
            if ($_SERVER['SERVER_PORT'] != 443) {
                header("HTTP/1.1 301 Moved Permanently");
                header("Location: https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
                exit();
            }
        }
        // otherwise use regular non-SSL pages
        else {
            if ($_SERVER['SERVER_PORT'] == 443) {
                header("HTTP/1.1 301 Moved Permanently");
                header("Location: http://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
                exit();
            }
        }
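
    On the POST question: browsers generally re-issue a 301 redirect as a GET request, so a POST body submitted to a URL that gets flipped is dropped. As a side sketch (my rewrite, not the site's code), the same port-based flip can be factored into one helper:

        <?php
        // Sketch: same port-based flip as above, factored into one helper.
        function enforce_scheme()
        {
            $uri      = $_SERVER['REQUEST_URI'];
            $needsSsl = substr($uri, 1, 12) == "registration";
            $onSsl    = $_SERVER['SERVER_PORT'] == 443;

            if ($needsSsl != $onSsl) {
                $scheme = $needsSsl ? 'https' : 'http';
                header("HTTP/1.1 301 Moved Permanently");
                header("Location: " . $scheme . "://" . $_SERVER['HTTP_HOST'] . $uri);
                exit();
            }
        }

        enforce_scheme();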

    Read the article

  • Google not showing any pages from my site in the index after three months [on hold]

    - by Alex Coisman
    Although I have a sitemap and use Google Webmaster Tools, it has been over 3 months and my site has not been added to the Google index at all. Here's the site: www.famouslefthandedpeople.com. As far as I know, I have done everything correctly, but there must be something I am overlooking that is preventing Google from indexing the site. I do not have a robots.txt file, so allow/disallow isn't the issue. Although the content of the site is sparse, it is original and not duplicated internally or externally, so Panda/Penguin should not be a problem. I have reviewed the answers at "Why isn't my website in Google search results?" and I don't think they apply here. If it matters, I am using WordPress to build the site. What other factors should I be looking at in order to troubleshoot this?
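
    One WordPress-specific cause worth ruling out (an assumption about a common culprit, not a diagnosis): the "Discourage search engines from indexing this site" option under Settings > Reading, which stores blog_public = 0 and adds a noindex meta tag to every page. A quick check from inside WordPress, for example in a temporary template snippet:

        <?php
        // Drop into a temporary template or mu-plugin; remove after checking.
        if ((int) get_option('blog_public') === 0) {
            echo 'Search engine indexing is being discouraged in Settings > Reading.';
        }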

    Read the article

  • Improving FAQ SEO with multiple pages?

    - by asdfasdf
    I have a client who has over 200 question/answer style content blocks. Neither the questions nor the answers are very long, and most of the questions are almost identical, with only a word or two differentiating them. Would SEO be helped or hurt if I were to put each Q&A on its own page, with the question as the page title, and so on? Or would that be considered "farming"? If not, what would be the best way (in SEO terms) to present all of these Q&As? Thanks for any advice.

    Read the article

  • Apache not serving pages stored in Subversion repository

    - by Stephen
    I've set up Apache and Subversion on an old PC, but Apache is not serving pages correctly. When I enter the address of my test site, http://HOME_IP_ADDRESS/test/index.html, I just get a File Not Found error and the following output in the error log: "File does not exist: /var/www/html/svn/repos/test". But I know the file exists: when I enter http://HOME_IP_ADDRESS/repos/test/index.html into the browser, I just get a listing of the HTML. In my Apache config file I have the document root set as follows: DocumentRoot "/var/www/html/svn/repos". So I'm not sure what is going on; I have SVN installed and I think it may have something to do with this. Edit: I changed the DocumentRoot location, which helped, as pages in the new location are served correctly, so the problem is specifically with serving pages from the repository.

    Read the article
