Search Results

Search found 11086 results on 444 pages for 'asynchronous pages'.


  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possible reasons why. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the canonical META tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough for Google to index many of these changes. I also looked at what pages Google has indexed, and strangely there are a lot of pages in the index which are actually keyword searches (often a bunch of random letters rather than an actual word). It is as if someone had typed something into my search box - there are no links to pages like this, and the only way to reach them is to type something into the search box. So I now add a META robots tag with noindex,nofollow whenever I render pages like this.
    Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It has a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so they are identical only in their text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website.
    My questions are:
    1. What should I do with the multiple domains - just use one, or continue with two or more?
    2. Should I add 301 redirects, not just the canonical META tag?
    3. Any idea why Google is indexing my search results pages, and did I do the right thing with the META robots tag?
    4. Is the fake price comparison site likely to be causing problems?
    5. Are all the links to the site from other domain names on the same IP address likely to be causing problems?
    Thanks for any help. Sorry for so many questions in one.

    Read the article

  • Webmaster Tools is throwing out 404 errors for a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors, where pages on the site are referring to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that it only seems to report this error on pages with two path segments, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • SEO - Coaching Newbies

    All search engines use algorithms, and each search engine has its own. An algorithm is the formula that the search engine uses to evaluate your web pages. The robots will crawl all pages on your site, but not all pages will be indexed.

    Read the article

  • How do private sales ecommerce sites handle their SEO?

    - by 142857
    In a private sales ecommerce site, users need to sign up or sign in before they can access the pages of the website. So even if a user tries to navigate directly to a product page, he is redirected to sign in. I am wondering how these sites manage their SEO, as this implies Google can't crawl these pages either. Or do they completely ignore the SEO benefit of allowing Google to crawl the product and catalogue pages?

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minimizing the size of the files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in Classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages have their JS both in include files and in the page itself, depending on whether the function is only really used in that page or whether it's used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them, and there are some functions in a page that don't really make sense to put into one master file. What I have seen a few other people do online is put all of their JavaScript into one master JS file, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if that fully works for our purposes. What I'm looking for is some direction to go in on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
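    One common approach is a small build step that concatenates the shared scripts into a single bundle and pre-compresses it for the production server, while page-specific functions stay where they are. A minimal, stdlib-only sketch of such a step in Python (the source file names are the ones mentioned above; the bundle name and the pre-gzipping are assumptions, not something from the question):

        import gzip
        from pathlib import Path

        # Shared scripts that many (but not all) pages include; names taken from the question
        SOURCES = ["common.js", "ajax.js"]
        BUNDLE = Path("bundle.js")  # assumed output name

        def build_bundle():
            # Concatenate the shared scripts so production pages make a single request
            parts = [Path(name).read_text(encoding="utf-8") for name in SOURCES]
            BUNDLE.write_text("\n;\n".join(parts), encoding="utf-8")
            # A real build would run a JS minifier over bundle.js here before compressing.
            # Pre-compress the bundle; the web server can then serve bundle.js.gz directly.
            with open(BUNDLE, "rb") as src, gzip.open(str(BUNDLE) + ".gz", "wb") as dst:
                dst.write(src.read())

        if __name__ == "__main__":
            build_bundle()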

    Read the article

  • Canonical tags for separate mobile URLs

    - by DnBase
    I have a Drupal website serving mobile pages from different URLs (starting with /mobile). According to Google's recommendations I should use the canonical tag to map desktop and mobile pages. Right now I do this when I serve the same node (e.g. node/123 and mobile/node/123), but should I also do it for other pages that are equivalent but don't share the same content? For example, do I need to map the desktop and mobile homepages even if they don't have the same content at all?

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers on this question. The most relevant ones seem to be at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item they are selling; go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we have a link back to their website if they want. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful user features, or start jumping through hoops like trying to hide text using JavaScript, just to boost our ranking, then something is wrong. Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages for the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item and the only difference in the content of the page might just be a stock #. The majority of the pages, though, will have unique content. So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own SE rankings.

    Read the article

  • Website Remodel Redirects

    - by inKit
    We've recently built a site for a new client who has not inserted all the content from their old site into the new one. Also, a lot of content is dynamic, with IDs not matching between the old site and the new one. We have added dynamic redirects for most of the patterns we could find in pages that were 404ing, but there are still a lot of pages that had content, or just jumbled URLs, that we cannot match up with content pages on the new site. Is it better to redirect these leftover pages to the homepage, or leave them 404ing?

    Read the article

  • How can I search in transcluded categories?

    - by Wikis
    I want to add functionality to a MediaWiki wiki to search in specific categories: Platform 1 Platform 2 etc. So I created a template which, based on a certain field, assigns pages to those categories. The template was already included on most of these pages. So now most pages are in either: Category:Platform 1 or Category:Platform 2 Then I thought I just need to add incategory to the search and I'm done, as described on the Wikipedia page. But then I reread it and to my horror discovered: incategory: – using the incategory: parameter returns pages in a given category (as long as the pages are directly categorized, and not transcluded through templates). Eeeek! Is there any other way to search even in transcluded templates? Or any other way of resolving this?
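    One workaround (not mentioned in the question) is to bypass the search box and ask the MediaWiki API for category membership directly: list=categorymembers reads the category tables, so pages that were categorized through a transcluded template are included. A minimal sketch in Python, where the api.php URL is a placeholder for the wiki in question:

        import requests

        API = "https://example.org/w/api.php"  # placeholder endpoint; point this at your wiki

        def pages_in_category(category):
            """Return every page title in the category, including template-added membership."""
            titles = []
            params = {
                "action": "query",
                "list": "categorymembers",
                "cmtitle": "Category:" + category,
                "cmlimit": "500",
                "format": "json",
            }
            while True:
                data = requests.get(API, params=params).json()
                titles += [m["title"] for m in data["query"]["categorymembers"]]
                if "continue" not in data:
                    return titles
                params.update(data["continue"])  # follow API continuation for large categories

        print(pages_in_category("Platform 1"))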

    Read the article

  • How to make the most of GWT's "Search queries"?

    - by DisgruntledGoat
    I've been looking at the "Search queries" section in Google Webmaster Tools recently, and it seems like there is a lot of potential there in finding which pages on a site need improvement. I'm trying to figure out exactly what to sort or filter on. Do I look at pages with a low average position? Low impressions but high clicks? Pages that are rising up/falling down the rankings? What is the low-hanging fruit here?
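    One common heuristic is to export the report and look for queries with plenty of impressions but a weak click-through rate or an average position just off the first page - those are pages that already rank but are not winning the click, so better titles and descriptions tend to pay off quickly. A minimal sketch of that triage in Python, where the CSV file name and column names are assumptions about the export rather than a documented format:

        import csv

        # Column names are assumptions about the CSV export; adjust them to the real header row,
        # and note this expects plain numeric values (no thousands separators).
        with open("search_queries.csv", newline="", encoding="utf-8") as fh:
            rows = list(csv.DictReader(fh))

        def ctr(row):
            impressions = float(row["Impressions"])
            return float(row["Clicks"]) / impressions if impressions else 0.0

        # "Low-hanging fruit": lots of impressions, poor CTR, ranking just off page one
        candidates = [
            r for r in rows
            if float(r["Impressions"]) > 100
            and ctr(r) < 0.02
            and 5 <= float(r["Avg. position"]) <= 20
        ]
        for r in sorted(candidates, key=lambda r: float(r["Impressions"]), reverse=True)[:20]:
            print(r["Query"], r["Impressions"], r["Avg. position"])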

    Read the article

  • How to run WordPress and a Java web app on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based web app served by Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served from WordPress, but when they come to example.com/private-pages they need to be served from Tomcat. I have asked this question on Server Fault, where they suggested using a different port, a different IP, or a sub-domain. I want to go for the different-port solution, since it means I only need to buy one SSL certificate. I tried the reverse proxy method by having the following in my default-ssl.conf:

        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443
            DocumentRoot /var/www
            <Directory /var/www>
                # For WordPress
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/
            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>

    As you have noticed, I am using mod_proxy_ajp in Apache2 for this, and my Tomcat is listening on port 8009 and serving content from there. So now when I go to example.com/private-pages I am seeing the content from my Tomcat, but two issues are happening:
    1. All my static resources are getting 404-ed, so none of my images, CSS, or JS are getting loaded. I see that the browser is requesting the resources using URLs like example.com/css/*. This will clearly not work, because it translates to example.com:80/css/* instead of example.com:8009/css/*, and there are no such resources in the WordPress directory.
    2. If I go to example.com/private-pages/abcd I am somehow kicked back to the WordPress site (which obviously displays a 404 page).
    I can understand why #1 is happening, but have no clue why #2 is happening. Regardless, if there is another clean solution for resolving this, I would appreciate y'all's help.

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling them, and submitting them to the index manually. I also use the Google API for integrating search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • backlinks: Two domains with same IP

    - by Michal
    I run several different web pages on different servers (with different IP addresses). These pages link to each other in order to boost the number of backlinks pointing to my pages. I would like to move all those projects to a single virtual host (with a single IP address). My question is: how does Google handle links between different domain names on the same IP address? Is there some penalization for it? Could this lead to lower PageRank?

    Read the article

  • Using the Same Domain to Bury Bad Publicity

    Receiving bad publicity can be a devastating blow to a brand's online reputation, and in order to mitigate the damage, often the best course of action is to create enough alternate content to push the negative publicity down to the second, third, or even deeper search result pages. Most people do this by creating a number of different pages on new or alternate domains, but in fact it can be much more effective to create those pages on the same domain.

    Read the article

  • How can I track visits to all of the subpages of my website combined?

    - by realcheesypizza
    Right now I'm using StatCounter and Google Analytics. They are great, but my counts are currently separated. For example: website.com = 1000 visits a day, website.com/about = 50 visits a day, website.com/privacy = 10 visits a day, etc. How can I get a combined count of all of my sub-pages (the main page + the about page + about 100 other sub-pages)? I can of course manually add them all together, but that's time-consuming because there are many pages. I tried placing a separate tracking code in a PHP include that sits in each of the sub-pages, but it doesn't seem to be working. It seems to require a single URL when you create it, and it then only counts visits to that one URL (e.g. website.com) rather than all of them. Any help would be appreciated. Hopefully I'm just missing something very obvious. Thank you!
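    Both tools can normally report a site-wide total when the same tracking code is installed on every page of a single project/profile, so this may just be a configuration issue. As a tool-independent cross-check, a combined count can also be computed from the web server's access log; a minimal sketch in Python, assuming a common/combined log format and an illustrative file name:

        import re
        from collections import Counter

        LOG = "access.log"  # illustrative path; use your web server's actual access log
        # First quoted field of a common/combined format line, e.g. "GET /about HTTP/1.1"
        REQUEST = re.compile(r'"[A-Z]+ (\S+)')

        hits = Counter()
        with open(LOG, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                match = REQUEST.search(line)
                if match:
                    # Drop the query string so /about?x=1 and /about count together
                    hits[match.group(1).split("?")[0]] += 1

        print("combined total:", sum(hits.values()))
        for path, count in hits.most_common(10):
            print(path, count)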

    Read the article

  • Exclude PHP from output from WYSIWYG in CMS

    - by bytewalls
    I'm writing a basic CMS for one of my sites and have run into an issue where some pages need to dynamically serve PHP and JS, whereas others are plain HTML. I want a setting that will allow this for the pages that need it and will load the ACE editor instead of a different WYSIWYG editor. The challenge is that on the pages where I do not explicitly say there will be code, I want to reject any input that contains code. I can set it up to insert a for all pages without JS, but how can I keep PHP code from running?
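    The usual idea for the non-code pages is to neutralise PHP open tags in the submitted content, so the server never interprets them. A minimal sketch of that idea, written in Python purely for illustration (the CMS itself would do the same in its own language):

        import html

        def neutralise_php(content):
            """Escape PHP-style open/close tags so stored page content is never executed."""
            # html.escape turns < and > into entities, so "<?php" can no longer be parsed as code
            for tag in ("<?php", "<?=", "<?", "?>"):
                content = content.replace(tag, html.escape(tag))
            return content

        # Example: content submitted through the plain (non-code) WYSIWYG editor
        submitted = "<p>Hello</p><?php echo 'should never run'; ?>"
        print(neutralise_php(submitted))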

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observations are that IO to lob pages (and row overflow pages as well?) is restricted to synchronous IO, which can result in serious problems when these reside on disk drive storage. Even if the storage system is comprised of hundreds of HDDs, the realizable IO performance to lob pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the lob pointer information, and then generates...(read more)

    Read the article

  • How to specify Multiple Secure Webpages with .htaccess RewriteCond

    - by Patrick Ndille
    I have 3 pages that I want to make secure on my website using .htaccess:
    - login.php
    - checkout.php
    - account.php
    I know how to make just one page work at a time using .htaccess:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.php
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]

    I am trying to figure out how to include the other 2 specific pages to make them secure as well, and used the expression below, but it didn't work:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /login.php
        RewriteCond %{REQUEST_URI} /checkout.php
        RewriteCond %{REQUEST_URI} /account.php
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]

    Can someone help me with the right expression that will work with multiple pages? The second part is that, if HTTPS is already on and a user moves to a page that is not one of the pages I specified above, I want it to go back to HTTP. How should I write the statement so it redirects back to HTTP if the page is not one of the ones above? I have my statement like this, but it's not working:

        RewriteCond %{HTTPS} on
        RewriteRule !(checkout|login|account|payment)\.php http://%{HTTP_HOST}%{REQUEST_URI} [L,R]

    Any thoughts?

    Read the article

  • IIS7 / ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    We recently moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that:
    1. the pages exist;
    2. if you browse from the server itself the pages are served as normal - only external clients get the 404;
    3. it's the default 404 error page, not the one configured in the web.config;
    4. it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally.
    We are using a URL rewriter module which I first thought might be at fault, but then realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module and the problem still persists. As everything is working as expected when logged onto the server, I was thinking it may be some sort of permission issue, but why would it only affect a few pages? I turned on Failed Request Tracing and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for a book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early February this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what's new and updated from the 2nd edition.

    Part I – CLR Basics
    Chapter 1 - The CLR's Execution Model: Added discussion about C#'s /optimize and /debug switches and how they relate to each other.
    Chapter 2 - Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
    Chapter 3 - Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.

    Part II – Designing Types
    Chapter 4 - Type Fundamentals: No new topics.
    Chapter 5 - Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code and added discussion of the new BigInteger type. Also added discussion of C# 4.0's dynamic primitive type.
    Chapter 6 - Type and Member Basics: No new topics.
    Chapter 7 - Constants and Fields: No new topics.
    Chapter 8 - Methods: Added discussion of extension methods and partial methods.
    Chapter 9 - Parameters: Added discussion of optional/named parameters and implicitly-typed local variables.
    Chapter 10 - Properties: Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type.
    Chapter 11 - Events: Added discussion of events and thread safety, as well as a cool extension method to simplify the raising of an event.
    Chapter 12 - Generics: Added discussion of delegate and interface generic type argument variance.
    Chapter 13 - Interfaces: No new topics.

    Part III – Essential Types
    Chapter 14 - Chars, Strings, and Working with Text: No new topics.
    Chapter 15 - Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
    Chapter 16 - Arrays: Added a new section on initializing array elements.
    Chapter 17 - Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
    Chapter 18 - Attributes: No new topics.
    Chapter 19 - Nullable Value Types: Added discussion on performance.

    Part IV – CLR Facilities
    Chapter 20 - Exception Handling and State Management: This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code.
    Chapter 21 - Automatic Memory Management: Added discussion of C#'s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn't have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur.
    Chapter 22 - CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added a section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added sections on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
    Chapter 23 - Assembly Loading and Reflection: Added a section on how to deploy a single file with dependent assemblies embedded inside it. Added a section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#'s dynamic type.
    Chapter 24 - Runtime Serialization: This is a whole new chapter that was not in the 2nd edition.

    Part V – Threading
    Chapter 25 - Threading Basics: Whole new chapter motivating why Windows supports threads, and covering thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground vs background threads.
    Chapter 26 - Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR's thread pool. This chapter covers all the new .NET 4.0 constructs, including cooperative cancellation, Tasks, the Parallel class, Parallel Language Integrated Query, timers, how the thread pool manages its threads, cache lines, and false sharing.
    Chapter 27 - Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then I go into the CLR's Asynchronous Programming Model (APM), my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
    Chapter 28 - Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
    Chapter 29 - Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, three ways to solve the double-check locking technique, .NET 4.0's Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0's concurrent collection classes, and the ReaderWriterGate and SyncGate classes.

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview

    Many web service developers use soapUI for various tests like smoke tests, unit tests, and load testing, because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the door for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

    Features in soapUI Free Edition Relating to this Topic

    The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features for our dynamic payload example: Test Case Properties, and scripting with Groovy. Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script.

    Setting Up Our Property

    Properties are available throughout soapUI, and here is a snippet from the soapUI website defining the locations:
    - Projects: for handling Project scope values, for example a subscription ID
    - TestSuite: for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite
    - TestCases: for handling TestCase scoped values, can be seen as "arguments" to a TestCase
    - Properties TestStep: for providing local values/state within a TestCase
    - Local TestStep properties: several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
    - MockServices: for handling MockService scoped values/arguments
    - MockResponses: for handling MockResponse scoped values
    - Global Properties: for handling Global properties, optionally from an external source
    For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or under the Properties label in the TestCase editor (screenshots: Navigator Panel; TestCase Editor). You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

    Let's Get Groovy

    We will now add a new Groovy Script step to the TestCase called Manipulate Payload (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add code that will:
    1. Get the current value of the property we created called SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property called SimpleAsyncPayload.
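    A minimal sketch of what that Groovy might look like, using the testRunner object that soapUI exposes to Groovy Script steps (this reconstruction is an assumption, not taken from the article's screenshot):

        // Read the TestCase property created earlier (properties come back as Strings)
        def current = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")

        // Convert it to an integer and increment it
        def next = current.toInteger() + 1

        // Store the incremented value back into the TestCase property
        testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())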
    (Screenshot: Groovy Script Editor – Manipulate Payload.) At this point we can test the script to see if it is working by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify whether it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1 (screenshot: TestCase Editor – Run Results). All that is left to complete the TestCase is to append another step, of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select SimpleAsyncBPELProcessBinding -> process as the operation (for any other information requested, simply use the defaults; if you are calling an asynchronous operation, do not add any assertions). We are now in familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]) (screenshot: Test Request Editor – Insert Property Value). Your payload should now include the inserted property reference (screenshot: Test Request Editor – Inserted Property Value). Just like before, we are now ready to run the TestCase. If everything goes as expected we should see a successful response (screenshot: Message Viewer – Results of TestCase Run). We are now set up to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever else can be done via soapUI scripting. Hopefully you have found this useful, and happy testing to you :)

    Read the article

  • AbcPDF renders the same page multiple times

    - by Steven
    I need to retrieve several pages and output them in a PDF document. I have the following page structure:
    - Page 1
      - Sub 1
      - Sub 2
      - Sub 3
    On page one, I have a link which executes the code below. What it does is retrieve the child pages (one level) and put them in a page collection. Then I loop through the page collection and retrieve each sub page's URL. This works; I've tested it and seen that it retrieves 3 different URLs. The problem is that my PDF gets three pages of Page 1. It does not render Sub 1 to 3. Why isn't docID = document.AddImageUrl(pageLink) retrieving the pages? WebSupergoo refers to a caching problem which may occur, but their solution did not help me. Any good suggestions, anyone?

        protected void linkBtnCreateMultipagePDF_Click(object sender, EventArgs e)
        {
            string baseURL = Request.Url.ToString();
            PageDataCollection pdc = GetChildren(CurrentPageLink);

            // Create PDF document
            Doc document = new Doc();
            document.Rect.Inset(10, 20);
            int docID;
            string pageLink = string.Empty;

            foreach (PageData pd in pdc)
            {
                // This loop goes through the different pages and retrieves each page's URL
                pageLink = baseURL + pd.LinkURL;
                document.Page = document.AddPage();
                // But for some reason, the same page is added here.
                docID = document.AddImageUrl(pageLink);

                // Chain pages together
                while (true)
                {
                    if (!document.Chainable(docID))
                        break;
                    document.Page = document.AddPage();
                    docID = document.AddImageToChain(docID);
                }
            }

            // Flatten file
            for (int i = 1; i <= document.PageCount; i++)
            {
                document.PageNumber = i;
                document.Flatten();
            }

            byte[] theData = document.GetData();
            Response.Clear();
            Response.ContentType = "application/pdf";
            Response.AddHeader("content-disposition", "inline; filename=MyPDF.PDF");
            Response.AddHeader("content-length", theData.Length.ToString());
            Response.BinaryWrite(theData);
            Response.End();
        }

    Read the article

  • How do you set the sitemap priority for flatpages in django?

    - by mlissner
    I have a site with about 60,000 pages that are placed in the sitemap with a priority of 0.3. These are all really long pages that are rich in keywords. I also have a few pages (like the about page) that need high priority, but which I've implemented with the Django flatpages framework. Is it possible for pages created this way to have a higher priority in the sitemap?
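    The sitemap framework lets each Sitemap class carry its own priority, so the flatpages can be split into their own class. A minimal sketch, assuming django.contrib.sitemaps and django.contrib.flatpages are enabled (the class name and the 0.8 value are only illustrative):

        # sitemaps.py -- a sketch, not the poster's code
        from django.contrib.sitemaps import Sitemap
        from django.contrib.flatpages.models import FlatPage

        class HighPriorityFlatPageSitemap(Sitemap):
            priority = 0.8        # example value, higher than the 0.3 used for the bulk pages
            changefreq = "monthly"

            def items(self):
                # All flatpages; filter here if only some (like the about page) need the boost
                return FlatPage.objects.all()

            def location(self, obj):
                return obj.url

    The class is then registered in the sitemaps dictionary passed to the sitemap view in urls.py, alongside whatever sitemap already covers the 60,000 content pages.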

    Read the article

  • IIS7 Request Mapping, File Extensions

    - by user189049
    I have a website that used to have .dsp file extensions for all pages. There are a lot of other sites referencing mine that reference the pages like that, but my pages are actually all .aspx pages. In IIS5 I was able to configure this to work. My problem is that I've recently switched from IIS5 to IIS7, and I have no idea how to map these requests (.dsp) to the real files (.aspx) without the server telling me the file doesn't exist.

    Read the article
