Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 156/1167

  • Nginx common configuration that I might have missed

    - by ApPeL
    I recently moved from Apache mod_wsgi to Nginx, and I have seen a major improvement in speed and a reduction in memory usage; I am generally very happy with it. I am not a server expert, so please be gentle. I am wondering if there are any small configuration settings I might have missed that will cause me issues in the long run. Please see my nginx.conf file:

        user nginx nginx;
        worker_processes 4;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';

            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;

            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;

            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;

            output_buffers 1 32k;
            postpone_output 1460;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /media/ {
                    root /www/django_test1/myapp;
                    # Notice this is the /media folder that we created above
                }

                location /mediaadmin/ {
                    alias /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/;
                    # Django admin media files
                }

                location / {
                    # host and port of the FastCGI server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param SERVER_ADDR $server_addr;
                    fastcgi_param SERVER_PORT $server_port;
                    fastcgi_param SERVER_NAME $server_name;
                    fastcgi_param SERVER_PROTOCOL $server_protocol;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    client_max_body_size 100M;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Read the article

  • Accessing Custom Configurations in NUnit class

    - by Martin Ongtangco
    I'm really banging my head on this one. I can't make the custom configuration section work with NUnit; it keeps failing to read the configuration file. I carefully followed this article: http://devlicio.us/blogs/derik_whittaker/archive/2006/11/13/app-config-and-custom-configuration-sections.aspx and placed the references in the unit test project's App.config, but everything still failed. Is there some sort of magic setting to do here?
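
    A common gotcha here, depending on the NUnit runner and version: NUnit looks for a config file named after the test assembly (e.g. MyTests.dll.config) sitting next to the DLL, so the project's App.config has to end up there under that name. As a rough illustration only, with a hypothetical section class and invented names, a minimal custom section and test might look like this:

        // App.config in the test project (must end up as MyTests.dll.config
        // next to the compiled assembly for the NUnit runner to see it):
        //
        // <configuration>
        //   <configSections>
        //     <section name="myCustomSection"
        //              type="MyApp.Config.MyCustomSection, MyApp" />
        //   </configSections>
        //   <myCustomSection endpoint="http://localhost:8080" retries="3" />
        // </configuration>

        using System.Configuration;
        using NUnit.Framework;

        namespace MyApp.Config
        {
            // Hypothetical custom section; property names must match the XML attributes.
            public class MyCustomSection : ConfigurationSection
            {
                [ConfigurationProperty("endpoint", IsRequired = true)]
                public string Endpoint
                {
                    get { return (string)this["endpoint"]; }
                }

                [ConfigurationProperty("retries", DefaultValue = 3)]
                public int Retries
                {
                    get { return (int)this["retries"]; }
                }
            }

            [TestFixture]
            public class ConfigTests
            {
                [Test]
                public void CanReadCustomSection()
                {
                    // GetSection returns null if the runner did not pick up the
                    // config file -- the usual symptom of the naming problem above.
                    var section = (MyCustomSection)ConfigurationManager.GetSection("myCustomSection");
                    Assert.IsNotNull(section);
                    Assert.AreEqual(3, section.Retries);
                }
            }
        }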

    Read the article

  • How do you manually insert options into boost.Program_options?

    - by windfinder
    I have an application that uses Boost.Program_options to store and manage its configuration options. We are currently moving away from configuration files and using database-loaded configuration instead. I've written an API that reads configuration options from the database by hostname and instance name. (cool!) However, as far as I can see there is no way to manually insert these options into Boost.Program_options. Has anyone done this before? Any ideas? The docs from Boost seem to indicate that the only way to get stuff into that map is via the store function, which reads either from the command line or from a config file (not what I want). Basically I'm looking for a way to manually insert the DB-read values into the map.
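
    For what it's worth, variables_map derives from std::map<std::string, variable_value>, so one known workaround is to insert values directly and then call notify(). A minimal sketch, with made-up option names standing in for the database rows:

        #include <boost/program_options.hpp>
        #include <boost/any.hpp>
        #include <iostream>
        #include <string>
        #include <utility>

        namespace po = boost::program_options;

        int main()
        {
            po::variables_map vm;

            // Pretend these came from the database lookup by hostname/instance.
            // variable_value(value, defaulted=false) marks them as explicitly set.
            vm.insert(std::make_pair("port",
                po::variable_value(boost::any(8080), false)));
            vm.insert(std::make_pair("log-dir",
                po::variable_value(boost::any(std::string("/var/log/myapp")), false)));

            // Run the usual post-processing (notifiers bound to options, etc.).
            po::notify(vm);

            std::cout << "port = " << vm["port"].as<int>() << "\n"
                      << "log-dir = " << vm["log-dir"].as<std::string>() << "\n";
            return 0;
        }

    An alternative with the same effect is to synthesize a std::vector<std::string> of "--name=value" tokens from the database rows and push it through po::command_line_parser into po::store, which keeps the options_description validation in play.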

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it.

    Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.

    Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap, it gives the sitemap link no weight and will not index it! That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you!

    Google goes out of their way to make no sitemap guarantees:

    - "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" citation
    - "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." citation
    - "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" citation

    Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical ... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience.

    That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions.

    Am I wrong? Do sitemaps make sense, and are we somehow just using them incorrectly?

    Read the article

  • Translate report data export from RUEI into HTML for import into OpenOffice Calc Spreadsheets

    - by [email protected]
    A common question from users is how to import the data from the automated data export of Real User Experience Insight (RUEI) into tools for archiving, dashboarding or combination with other data sets. XML is well suited for such a translation via the companion Extensible Stylesheet Language Transformations (XSLT). Basically, XSLT utilizes XSL, a template describing what to read from your input XML data file and where to place it in the target document. The target document can be anything you like, i.e. XHTML, CSV, or even an OpenOffice Spreadsheet, as long as it is a plain text format.

    XML to OpenOffice.org Spreadsheet. To add the XSLT as an OpenOffice.org Calc import filter:
    1. Start OpenOffice.org Calc and select Tools > XML Filter Settings > New...
    2. Fill in the details as follows. Filter name: RUEI Import filter; Application: OpenOffice.org Calc (.ods); Name of file type: Oracle Real User Experience Insight; File extension: xml
    3. Switch to the transformation tab and enter/select the following, leaving the rest untouched. XSLT for import: ruei_report_data_import_filter.xsl (please see the end of this blog post for a download of the referenced file)
    4. Select RUEI Import filter from the list and click Test XSLT, then click Browse to select the transform file: export.php.xml
    OpenOffice.org Calc will transform and load the XML file you retrieved from RUEI in a human-readable format. You can now select File > Open... and change the file type to open your RUEI exports directly in OpenOffice.org Calc, just like any other native spreadsheet format (Files of type: Oracle Real User Experience Insight (*.xml); File name: export.php.xml).

    XML to XHTML. Most XML-powered browsers provide inherent XSL transformation capabilities; you only have to reference the XSLT stylesheet in the head of your XML file, then open the file in your favourite web browser, Firefox, Opera, Safari or Internet Explorer alike.

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!-- inserted line below -->
        <?xml-stylesheet type="text/xsl" href="ruei_report_data_export_2_xhtml.xsl"?>
        <!-- inserted line above -->
        <report>

    You can find a patched example export from RUEI plus the above referenced XSL stylesheets here: export.php.xml (example report data export from RUEI); ruei_report_data_export_2_xhtml.xsl (RUEI to XHTML XSL transformation stylesheet); ruei_report_data_import_filter.xsl (OpenOffice.org XML import filter for RUEI report export data).

    If you would like to do things like this on the command line you can use either Xalan or xsltproc. The basic command syntax for xsltproc is very simple:

        xsltproc -o output.file stylesheet.xslt inputfile.xml

    You can use this with the above two stylesheets to translate RUEI data exports into XHTML and/or OpenOffice.org Calc ODS format. Or you could write your own XSLT to transform into comma-separated value lists. Please let me know what you think or do with this information in the comments below. Kind regards, Stefan Thieme

    References used: OpenOffice XML Filter - Create XSLT filters for import and export - http://user.services.openoffice.org/en/forum/viewtopic.php?f=45&t=3490; SUN OpenOffice.org XML File Format 1.0 - http://xml.openoffice.org/xml_specification.pdf
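
    Along the lines of that last suggestion, a bare-bones CSV stylesheet might look like the sketch below; the report/row element names are placeholders, since the exact RUEI export schema isn't reproduced here:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Emit plain text, one CSV line per row element. -->
          <xsl:output method="text" encoding="ISO-8859-1"/>
          <xsl:template match="/report">
            <xsl:for-each select="row">
              <xsl:for-each select="*">
                <xsl:value-of select="."/>
                <xsl:if test="position() != last()">,</xsl:if>
              </xsl:for-each>
              <xsl:text>&#10;</xsl:text>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>

    Run it the same way as the others, e.g. xsltproc -o export.csv ruei2csv.xsl export.php.xml.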

    Read the article

  • OTN Developer Day: Oracle Database 11g Application Development

    - by stephen.garth
    When and Where: Tuesday June 15, 2010 from 8:00 am - 5:30 pm Hyatt Regency Reston, Reston VA. This full-day FREE event offers a great learning and networking opportunity. With support from Oracle database application development experts, you'll get valuable hands-on experience developing database-backed apps with the latest Oracle tools and frameworks. Oh yeah, you get to use your own notebook and download some cool and very useful materials. Find out more and register here.

    Read the article

  • Daily tech links for .net and related technologies - Apr 8-10, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Apr 8-10, 2010

    Web Development
    - Using RIA DomainServices with ASP.NET and MVC 2 - geekswithblogs
    - Using AntiXss As The Default Encoder For ASP.NET - Phil Haack
    - New Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2) - Scott Gu
    - Multi-Step Processing in ASP.NET - Dave M. Bush
    - MvcContrib - Portable Area – Visual Studio project template - erichexter
    - Encoding/Decoding URIs and HTML in the .NET 4 Client Profile - Pete Brown
    - Jon Takes Five...(read more)

    Read the article

  • SQL University: Parallelism Week - Part 2, Query Processing

    - by Adam Machanic
    Welcome back for the second part of Parallelism Week here at SQL University . Get your pencils ready, and make sure to raise your hand if you have a question. Last time we covered the necessary background material to help you understand how the SQL Server Operating System schedules its many active threads, and the differences between its behavior and that of the Windows operating system's scheduler. We also discussed some of the variations on the theme of parallel processing. Today we'll take a look...(read more)

    Read the article

  • Daily tech links for .net and related technologies - May 13-16, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - May 13-16, 2010

    Web Development
    - Integrating Twitter Into An ASP.NET Website Using OAuth - Scott Mitchell
    - T4MVC Extensions for MVC Partials - Evan
    - Building a Data Grid in ASP.NET MVC - Ali Bastani
    - Introducing the MVC Music Store - MVC 2 Sample Application and Tutorial - Jon Galloway
    - Announcing the RTM of MvcExtensions - kazimanzurrashid
    - Optimizing Your Website For Speed

    Web Design
    - Validation with the jQuery UI Tabs Widget - Chris Love
    - A Brief History...(read more)

    Read the article

  • More advanced 'Apple Automator' software?

    - by OrangeBox
    Is there any software similar to Automator but more advanced? In our situation we have two files with the same name, one a MOV and the other XML. We want to use some of the metadata within the XML to rename both files. Then we want to re-arrange the contents of the XML file so that it is compatible with another piece of software we use (I think this is called mapping). Essentially we need some software that takes a bunch of variables from an existing file and performs file actions based on them. I imagine this would be an easy task using AppleScript, but I'm wondering if there is an OS X application similar to Automator that can do the above. Questions are: Is there software that can do the above? Could Automator achieve this? What is the name of this process? If no such software exists, what would be the best kind of script to use, e.g. AppleScript, Python, etc.?
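
    Since the question mentions scripting: the rename-from-metadata half of this is only a few lines of Python. A rough sketch, assuming a hypothetical <clipname> element in the XML holds the desired name:

        import os
        import xml.etree.ElementTree as ET

        def rename_pair(xml_path):
            """Rename a MOV/XML pair based on a value inside the XML."""
            base, _ = os.path.splitext(xml_path)
            mov_path = base + ".mov"

            tree = ET.parse(xml_path)
            # Hypothetical element; adjust to the real metadata schema.
            new_name = tree.getroot().findtext("clipname")
            if not new_name:
                raise ValueError("no <clipname> element in %s" % xml_path)

            folder = os.path.dirname(os.path.abspath(xml_path))
            os.rename(xml_path, os.path.join(folder, new_name + ".xml"))
            os.rename(mov_path, os.path.join(folder, new_name + ".mov"))

        for name in os.listdir("."):
            if name.lower().endswith(".xml"):
                rename_pair(name)

    The "mapping" step would then be a second pass that reads the old XML and writes the restructured version. Automator alone can't easily do either, but it can wrap a script like this in a "Run Shell Script" action.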

    Read the article

  • Scrapy cannot find div on this website [on hold]

    - by Jaspal Singh Rathour
    I am very new at this and have been trying to get my head around my first selector; can somebody help? I am trying to extract data from the page http://groceries.asda.com/asda-webstore/landing/home.shtml?cmpid=ahc--ghs-d1--asdacom-dsk-_-hp#/shelf/1215337195041/1/so_false, specifically all the info under the div class "listing clearfix shelfListing", but I can't seem to figure out how to format response.xpath(). I have managed to launch the Scrapy console, but no matter what I type in response.xpath() I can't seem to select the right node. I know it works, because when I type response.xpath('//div[@class="container"]') I get a response, but I don't know how to navigate to the listing clearfix shelfListing div. I am hoping that once I get this bit I can continue working my way through the spider. Thank you in advance! PS: I wonder if it is even possible to scan this site; is it possible for the owners to block spiders?
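
    Two things worth noting, offered as educated guesses since the page isn't inspected here: an XPath test on @class must match the full attribute value, so for a multi-class div you want contains(); and the "#/shelf/..." URL fragment suggests the listing is filled in by JavaScript after page load, which Scrapy alone will never see. A quick sketch for the shell:

        # In the Scrapy shell, after: scrapy shell "http://groceries.asda.com/..."

        # @class="listing" fails because the attribute is the whole string
        # "listing clearfix shelfListing"; match one class token instead:
        listings = response.xpath('//div[contains(@class, "shelfListing")]')

        # Equivalent, and usually less error-prone, with a CSS selector:
        listings = response.css('div.shelfListing')

        # If both return [], the content is most likely rendered client-side
        # (the "#/shelf/..." fragment is a strong hint); in that case look for
        # the underlying JSON/AJAX request in the browser's network tab and
        # scrape that URL directly.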

    Read the article

  • Daily tech links for .net and related technologies - June 8-11, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - June 8-11, 2010

    Web Development
    - ASPNET MVC: Handling Multiple Buttons on a Form with jQuery - Donn
    - Building a MVC2 Template, Part 14, Logging Services - Eric
    - Simple Accordion Menu With jQuery & ASP.NET - Steve Boschi
    - Conditional Validation in MVC - Simonince
    - Creating a RESTful Web Service Using ASP.Net MVC Part 23 – Bug Fixes and Area Support - Shoulders of Giants

    Web Design
    - The Principles Of Cross-Browser CSS Coding - Louis Lazaris
    - Transparency...(read more)

    Read the article

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, rotation of objects isn't possible, the number of distinct characters is limited, etc. So I'm doing some research into how to store the level data and map file. Reasoning for DB: From my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database. Reasoning for JSON: Visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content. Do either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - player start point, X - level exit, . - empty space, # - platform, G - gem
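
    For comparison, a JSON version of the same map could carry the extra degrees of freedom mentioned above (layers, off-grid placement, rotation). This is just a sketch of one possible schema, not an established format:

        {
          "name": "level-01",
          "layers": [
            {
              "name": "platforms",
              "tiles": [
                { "id": "platform", "x": 9,  "y": 8, "rotation": 0 },
                { "id": "platform", "x": 10, "y": 8, "rotation": 0 }
              ]
            },
            {
              "name": "items",
              "tiles": [
                { "id": "gem", "x": 9, "y": 7, "rotation": 45 }
              ]
            }
          ],
          "start": { "x": 1,  "y": 13 },
          "exit":  { "x": 18, "y": 13 }
        }

    Either way a level is a document you load whole at level start, which is one reason flat files tend to be the common choice in practice; a database mainly pays off when levels are edited concurrently or served from a central place.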

    Read the article

  • Code from my DevConnections Talks and Workshop

    - by dwahlin
    Thanks to everyone who attended my sessions at DevConnections Las Vegas. I had a great time meeting new people, discussing business problems and solutions, and interacting. Here’s the code and slides for the sessions. For those that came to the full-day Silverlight workshop I’ve included the slides that didn’t get printed plus a ton of code to help you get started with various Silverlight topics.

    - Get Started Building Silverlight Applications
    - Building Architecturally Sound Silverlight Applications
    - Using WCF RIA Services in Silverlight Applications (will post soon)
    - Silverlight Data Integration Options and Usage Scenarios
    - Silverlight Workshop Code

    Read the article

  • T-SQL Tuesday # 16 : This is not the aggregate you're looking for

    - by AaronBertrand
    This week, T-SQL Tuesday is being hosted by Jes Borland ( blog | twitter ), and the theme is " Aggregate Functions ." When people think of aggregates, they tend to think of MAX(), SUM() and COUNT(). And occasionally, less common functions such as AVG() and STDEV(). I thought I would write a quick post about a different type of aggregate: string concatenation. Even going back to my classic ASP days, one of the more common questions out in the community has been, "how do I turn a column into a comma-separated...(read more)
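
    The post is cut off above, but the classic pattern it alludes to is worth sketching: in SQL Server (before STRING_AGG existed), a grouped comma-separated list is usually built with STUFF plus FOR XML PATH. Table and column names here are invented for illustration:

        -- One row per author, with a comma-separated list of post titles.
        SELECT a.AuthorName,
               STUFF((SELECT ',' + p.Title
                      FROM dbo.Posts AS p
                      WHERE p.AuthorId = a.AuthorId
                      ORDER BY p.Title
                      FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                     1, 1, '') AS Titles
        FROM dbo.Authors AS a;

    The TYPE clause plus .value() keeps characters like & and < from being XML-entitized, and STUFF strips the leading comma.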

    Read the article

  • Creating a simple RSS reader using the ListView, XmlDataSource and DataPager web server controls

    - by nikolaosk
    In my last ASP.NET seminar someone noticed that we did not talk at all about the XmlDataSource, ListView and DataPager web server controls. It is rather impossible to investigate/talk about all issues regarding ASP.NET in a seminar, but I promised to write a blog post. I thought that I could combine all three web server controls to create an RSS reader.
    1) Launch Visual Studio 2008/2010. Express editions will work fine.
    2) Create an empty ASP.NET web site. Choose an appropriate name. We will not write...(read more)
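
    The post is truncated here, but the gist of combining the three controls is short enough to sketch. This is my own rough reconstruction (not the article's code), assuming a public feed URL and standard RSS structure:

        <asp:XmlDataSource ID="RssSource" runat="server"
            DataFile="http://feeds.feedburner.com/codinghorror"
            XPath="rss/channel/item" />

        <asp:ListView ID="FeedList" runat="server" DataSourceID="RssSource">
          <LayoutTemplate>
            <ul>
              <%-- The placeholder ID must be exactly "itemPlaceholder". --%>
              <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
            </ul>
          </LayoutTemplate>
          <ItemTemplate>
            <li><a href='<%# XPath("link") %>'><%# XPath("title") %></a></li>
          </ItemTemplate>
        </asp:ListView>

        <asp:DataPager ID="FeedPager" runat="server"
            PagedControlID="FeedList" PageSize="10">
          <Fields>
            <asp:NumericPagerField />
          </Fields>
        </asp:DataPager>

    The XmlDataSource fetches and caches the feed, the XPath property narrows it to the item elements, the ListView renders each item via the XPath() binding expression, and the DataPager pages the ListView through PagedControlID.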

    Read the article

  • How can I export the results of my script to file?

    - by Meepster
    I need to get some details out of hundreds of XML files that were written for reporting. The script below produces the correct results, but I want them in a file (CSV, XLS or TXT) so I can provide a list of the database fields that are in use by these reports.

        $get = Get-ChildItem D:\Data\Desktop\Projects\Reports\Test -Include *.xml -Recurse
        ForEach ($xmlfile in $get) {
            $Report = $xmlfile
            $Fields = [xml](Get-Content $Report)
            $Fields.reportsettings.fieldlist.field |
                Select-Object @{ L = 'Description'; E = { $_.desc } },
                              @{ L = 'Field Name';  E = { $_.criterion } },
                              @{ L = 'Field ID';    E = { $_.id } }
        }

    Thank you very much for your time!
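
    One way to get there, assuming the script above already emits the right objects: capture the loop output in a variable and pipe it to Export-Csv (or Out-File for plain text) once at the end. A sketch:

        # Capture everything the loop emits, then export in one go.
        $results = ForEach ($xmlfile in (Get-ChildItem D:\Data\Desktop\Projects\Reports\Test -Include *.xml -Recurse)) {
            $fields = [xml](Get-Content $xmlfile)
            $fields.reportsettings.fieldlist.field |
                Select-Object @{ L = 'Description'; E = { $_.desc } },
                              @{ L = 'Field Name';  E = { $_.criterion } },
                              @{ L = 'Field ID';    E = { $_.id } }
        }

        # CSV opens directly in Excel:
        $results | Export-Csv D:\Data\Desktop\fields.csv -NoTypeInformation

        # Or a plain-text table:
        $results | Format-Table -AutoSize | Out-File D:\Data\Desktop\fields.txt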

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible, so that you can change object properties at run time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration.

    I usually use XML configuration files, and I also force everyone that works with me to make sure that an object used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of such objects. For example, all the packages that need access to the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have one configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific, "local", XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx

    Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table (a sketch of its definition follows at the end of this post). In this table you'll put the values that you want to be used at runtime by your package.

    The ConfigurationFilter column specifies to which package a configuration line applies: a package will use a line only if the value in ConfigurationFilter equals its name, so a row whose filter is "simple-package" is used only by the package with that name. There is one exception: the $$Global value indicates a configuration row that has to be applied to every package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach described before. The ConfigurationValue column contains the value you want applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value, simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a primary key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the one originally used by SSIS to store DTS Configurations in SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx

    Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the format <database_server>;<database_name>;<table_name>; only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package. The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, to load the "global" and "local" configuration.

    Now, you may start to wonder why this approach cannot be replaced by simply passing two XML DTS Configuration files (the "local" and the "global" configurations) to the package, doing something like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if one of the two configuration files contains a value that has to be applied to an object that doesn't exist in the loaded package. This situation raises an error that halts package execution. To solve this, you could create a configuration file for each package, but unfortunately that makes deployment and management harder, since you'd have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy!

    To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218

    Feedback, as usual, is very welcome!
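
    The original post shows the configuration table as a screenshot; the following is a reconstruction from the column descriptions above, with the data types being my guesses (the schema/table name matches the /AC example):

        CREATE TABLE ssiscfg.configuration
        (
            ConfigurationFilter  NVARCHAR(255) NOT NULL, -- package name, or $$Global
            ConfigurationValue   NVARCHAR(255) NOT NULL, -- value applied at runtime
            PackagePath          NVARCHAR(255) NOT NULL, -- target property path
            ConfiguredValueType  NVARCHAR(20)  NOT NULL, -- data type of the value
            -- Hash of filter + path, used as the key to keep rows unique:
            [Checksum] AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED,
            CONSTRAINT PK_configuration PRIMARY KEY ([Checksum])
        );

        INSERT INTO ssiscfg.configuration
            (ConfigurationFilter, ConfigurationValue, PackagePath, ConfiguredValueType)
        VALUES
            ('$$Global', 'localhost',
             '\Package.Connections[DWH].Properties[ServerName]', 'String'),
            ('simple-package', '50000',
             '\Package.Variables[User::BatchSize].Properties[Value]', 'Int32');

    The PackagePath values follow the standard SSIS package path syntax, the same one used by regular SSIS package configurations.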

    Read the article

  • How to render RSS feed in a desktop RSS reader?

    - by Thiago Moraes
    Consider a feed like this: http://feeds.feedburner.com/codinghorror It has the entire content inside the description tag of the feed, so you don't need to visit the website to read the post. Now I have the problem of creating an interface for a feed like this in a desktop client. What's the best way to render the text in a pleasant way for the user? My first thought was to parse the entire HTML as if I were a web browser, but that looks really hard to do in a satisfying way. Are there any better (faster) alternatives? Rephrasing: how does a desktop RSS client such as FeedDemon parse the input to display it nicely? Does it have a web browser inside it?
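
    For what it's worth, the usual answer is yes: desktop readers typically embed a browser component rather than parse HTML themselves. A minimal sketch of that approach on Windows using WPF's built-in WebBrowser control (feed fetching omitted; the wrapper page styling is just an example):

        using System.Windows.Controls;

        public static class FeedItemViewer
        {
            // Render one feed item's <description> HTML in an embedded browser.
            public static void ShowItem(WebBrowser browser, string title, string descriptionHtml)
            {
                // Wrap the fragment in a full page so fonts and margins
                // are under the application's control.
                string page =
                    "<html><head><style>body{font-family:sans-serif;margin:12px;}</style></head>" +
                    "<body><h2>" + title + "</h2>" + descriptionHtml + "</body></html>";

                browser.NavigateToString(page);
            }
        }

    The WinForms equivalent is assigning the WebBrowser control's DocumentText property.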

    Read the article

  • JMonkey Engine (JME) load Blender scene with textures?

    - by leigero
    I am having the hardest time trying to accomplish the simplest task. I have created a floor and 4 walls (not that complicated) in Blender. I added a basic material and a cloud texture so they have something to look at other than gray. When I import them into jMonkeyEngine they show up as solid white objects with no shading or depth: white silhouettes. I thought this might be a lighting issue, but I have an ambient light added to the scene; I can remove that light or adjust its intensity and it has no effect on the scene. I exported all Blender files into OgreXML format, then converted them to .j3o format in jMonkey. I renamed the textures to match their corresponding mesh and this didn't do anything. Does anybody know how to create a flat object and put it into jMonkey with a texture? This sounds simple and there is absolutely no information on this. This should be step 1!
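
    Without the scene at hand this is guesswork, but a very common cause in jME3 is that imported models get a lit (Phong) material, which renders flat white until a directional light is present; ambient light alone contributes almost nothing. A quick sanity test, roughly (the model path is hypothetical):

        import com.jme3.light.AmbientLight;
        import com.jme3.light.DirectionalLight;
        import com.jme3.math.ColorRGBA;
        import com.jme3.math.Vector3f;
        import com.jme3.scene.Spatial;

        // Inside simpleInitApp() of a com.jme3.app.SimpleApplication subclass:
        Spatial room = assetManager.loadModel("Models/room/room.j3o");
        rootNode.attachChild(room);

        // A directional light makes shading and textures visible on lit materials.
        DirectionalLight sun = new DirectionalLight();
        sun.setDirection(new Vector3f(-1f, -2f, -3f).normalizeLocal());
        rootNode.addLight(sun);

        // Keep a little ambient so unlit faces aren't pitch black.
        AmbientLight ambient = new AmbientLight();
        ambient.setColor(ColorRGBA.White.mult(0.3f));
        rootNode.addLight(ambient);

    If the model still renders white with lighting in place, the next suspect is the texture path: the OgreXML-converted .material must reference the texture at a location the assetManager can resolve.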

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study. At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs. One improvement we made over the original application was to create a PowerPoint add-in which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from slides copied out of the template. The new library we created also enabled us to create two new Word-based reports in two weeks. The library abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup. Content Controls are a good method for identifying sections of a template to be modified or replaced. Join this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic. In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.
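
    As a rough illustration of the tag-search idea (not PSC's actual code, and the marker convention is invented here): with the strongly typed SDK 2.0 classes, finding a template slide by a marker string is a short LINQ query.

        using System.Linq;
        using DocumentFormat.OpenXml.Packaging;
        using A = DocumentFormat.OpenXml.Drawing;

        public static class SlideFinder
        {
            // Find the first slide whose text contains a marker like "##CHART1##".
            // Usage: using (var doc = PresentationDocument.Open(path, false))
            //            var part = SlideFinder.FindTaggedSlide(doc, "##CHART1##");
            public static SlidePart FindTaggedSlide(PresentationDocument doc, string tag)
            {
                return doc.PresentationPart.SlideParts
                    .FirstOrDefault(sp => sp.Slide
                        .Descendants<A.Text>()
                        .Any(t => t.Text.Contains(tag)));
            }
        }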

    Read the article

  • adding tagged / dynamic pages in sitemap

    - by sam
    I've got a blog that's been running for about a year. I've made about 200 posts, and there should be about 220 pages to index (additional pages for about/contact etc.). When I crawl the site I get 1900 pages, because of all the pages that relate to the tags I've used in my blog posts; 70% of these pages contain only one blog post. When submitting my sitemap to Google, should I exclude all pages with /tagged/ in the URL so I'll only be submitting unique pages, or should I submit the full sitemap?

    Read the article

  • How to properly remove URL's from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2000 or so pages that we want indexed, even though navigating to the diluting pages still works, will Google re-index the site with only those 2000 pages, dropping the superfluous ones? For example, I want to keep roughly 2000 of the following: www.mysite.com/some-search-term-1/some-good-keywords www.mysite.com/some-search-term-2/some-more-good-keywords And remove several thousand of the following that have already been indexed: www.mysite.com/some-search-term-xx/some-poor-keywords www.mysite.com/some-search-term-xx/some-poor-more-keywords These pages are not actually "removed", as navigating to these URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want, say, 2000 to be re-indexed and retained, and the others removed (without having to do this manually). Thanks.
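
    Worth noting, since it trips people up: leaving URLs out of the sitemap does not deindex them; as long as the pages still render, Google can keep them in the index. The standard bulk signal for this is a robots noindex directive on the pages you want dropped, for example:

        <!-- In the <head> of each diluting page: -->
        <meta name="robots" content="noindex">

    Or, equivalently, the X-Robots-Tag: noindex HTTP response header, which in an MVC site is easy to attach to the whole "poor keywords" route rather than to individual views.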

    Read the article
