Search Results

Search found 37135 results on 1486 pages for 'page brooks'.

Page 330/1486 | < Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, wasn't really sure if this should go here or on Stack Overflow; admins, please move if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of it to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them:
        1. A/B testing: how do you test two "versions" of each page and compare them? I mean, how does Varnish know which page to serve up? If so, how do you store separate versions of each page?
        2. Feature rollout: how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and later increase that to 20%.
        3. Code deployments: do you purge your entire Varnish cache on every deployment? (We deploy on a daily basis.) Or do you just let content slowly expire (using TTL)?
    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
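
    A minimal VCL sketch of one common approach to the first two questions, assuming Varnish 4.x with the bundled std vmod; the header name X-Bucket, the cookie name and the 10% threshold are illustrative, not taken from the question:

        import std;

        sub vcl_recv {
            # Keep returning visitors in their bucket if the backend already set a
            # "bucket" cookie; otherwise send roughly 10% of traffic to the new version.
            if (req.http.Cookie ~ "bucket=B") {
                set req.http.X-Bucket = "B";
            } elsif (req.http.Cookie ~ "bucket=A") {
                set req.http.X-Bucket = "A";
            } elsif (std.random(0, 100) < 10) {
                set req.http.X-Bucket = "B";
            } else {
                set req.http.X-Bucket = "A";
            }
        }

        sub vcl_hash {
            # Store version A and version B of each page as separate cache objects.
            hash_data(req.http.X-Bucket);
        }

    For deployments, an alternative to wiping the whole cache is a ban (for example varnishadm "ban req.url ~ .") or simply short TTLs with grace, so objects are refreshed lazily instead of all at once.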

    Read the article

  • Reconfiguring, then deleting obsolete pagefile.sys from C: in one go using a batch script

    - by DanielSmedegaardBuus
    I'm trying to set up an automated script for a Windows XP installer. It's a batch script that runs on first boot after installation, and among other things it should remove the page file from C: entirely and put a 16-768 MB page file on D: instead. Here are my batch file instructions:
        echo === Creating new page file on D: ...
        cscript %windir%\system32\pagefileconfig.vbs /create /i 16 /m 768 /vo d: >nul
        echo.
        echo === Removing old page file from C: ...
        cscript %windir%\system32\pagefileconfig.vbs /delete /vo C:
        attrib -s -h c:\pagefile.sys
        del c:\pagefile.sys
    My problem is that while these are sane commands, removing the page file from C: requires a reboot before the last two commands can succeed. In other words, I have to first create the D: page file, then reboot and delete the c:\pagefile.sys file; otherwise I'm stuck with a c:\pagefile.sys that isn't even recognized by Windows itself (it will just say that there is a page file on D: and that C: has no page file at all), obviously because some pages are already written to C:\pagefile.sys. So how would I go about accomplishing this in one go? Or in two gos, if this is "batch scriptable" :) TIA, Daniel :) EDIT: I should probably clarify: the commands above are all valid, but they only succeed fully if I re-run the "attrib" and "del" commands at the next boot. The C: page file is in use at the time, so I cannot delete the file, and Windows itself won't remove it when I configure it not to use C: as a page file drive. Instead, it leaves an orphaned (and really large) c:\pagefile.sys behind. I don't necessarily need this to work in one go; registering the last two commands to run after a reboot would also be great :)
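
    One hedged way to handle the "registering the last two commands to run after a reboot" part is to queue the cleanup in a RunOnce registry key, so it executes at the next logon, when C:\pagefile.sys is no longer locked. A sketch only, untested on the poster's setup (the value name RemoveOldPagefile is arbitrary):

        echo === Creating new page file on D: ...
        cscript %windir%\system32\pagefileconfig.vbs /create /i 16 /m 768 /vo d: >nul
        echo.
        echo === Disabling page file on C: ...
        cscript %windir%\system32\pagefileconfig.vbs /delete /vo C:
        echo.
        echo === Queueing removal of the orphaned C:\pagefile.sys for the next logon ...
        reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v RemoveOldPagefile /t REG_SZ /d "cmd /c attrib -s -h C:\pagefile.sys & del /q C:\pagefile.sys" /f

    Another option with the same effect is a pending file rename (MoveFileEx with MOVEFILE_DELAY_UNTIL_REBOOT, e.g. via the Sysinternals movefile utility), which removes the file during boot before anything can lock it.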

    Read the article

  • Unable to access my IIS website using hostname. Works fine with localhost

    - by rajugaadu
    I am unable to access my IIS website or even the default website. I did a bit of research and checked the 'Integrated Windows Authentication' option in Properties > Directory Security tab. From then on I could access the website using localhost, but when I use my hostname it asks for a domain username/password. Why is that? I don't understand why I am not able to access my website without enabling Integrated Windows Authentication. My goal is to access the website using both localhost and the hostname. More details on what I did: nothing out of the ordinary. In IIS I created a new website, created a working folder and set a default page. I restarted the website and clicked Browse, but did not see my default page. I had to go to the Directory Security tab and check "Integrated Windows Authentication"; then the default page appeared. In IE, too, I can see the default page when I use http://localhost, but when I use http://{hostname} it asks for a domain username and password. Why?

    Read the article

  • Linksys router cannot change default password

    - by Jessica M.
    My wireless internet suddenly stopped working today. I have Windows 7 and a Linksys WRT54G router. I tried to log into the Linksys router setup page by typing 192.168.1.1 into Firefox, and it prompted me for a username and password as usual. The problem is that when I tried to enter my regular username and password, it did not work. I finally solved my problem when I came across this post here; the very last reply solved it by suggesting I try username: root / password: admin. For some reason the username and password had been changed. When I tried username: root / password: admin, it worked and let me into the Linksys setup page. The problem is I can't change the username or password anymore; every time I want to log into my Linksys setup page I have to enter username: root / password: admin. I also can't change the "WPA shared key" (password). For the security settings I selected WPA2 Personal + AES. The post also said "If the firmware was upgraded to non-Linksys firmware, the default will be different". The problem is I didn't update anything, and I'm worried that someone installed a virus or somehow changed the firmware on my router. How did I get non-Linksys firmware on my router? EDIT: I figured out how to change the password when I log into the Linksys setup page: Administration -> Management -> password. I still don't understand whether my router firmware was changed, who changed it, or whether it happened by mistake.

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot an issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is requested. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior: no redirection. I created a page, test.html, on my site, and then a second page, test2.html. My exact-match pattern is "http://www.mydomain.com/test.html" and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the rule following the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ and I don't see that I left out a step. After applying the rule I've even gone as far as doing an IISReset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the quotes around them, but I had to add them since Server Fault thinks I am trying to spam the system with multiple URLs.)
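
    One thing worth checking: the URL Rewrite module matches its pattern against the URL path relative to the site root (e.g. "test.html"), not the full absolute URL, so an exact-match pattern containing "http://www.mydomain.com/..." will never match. A hedged web.config sketch of the test.html experiment described above:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect test page" stopProcessing="true">
                <!-- match against the path only, no scheme or host -->
                <match url="^test\.html$" />
                <action type="Redirect" url="test2.html" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>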

    Read the article

  • Set default new tab URL in Firefox 14

    - by sebster
    In the latest Firefox update, new tabs show a grid of recently viewed pages instead of the previously default blank page. Before this was available, I had installed an add-on to do the same thing (called 'FVD Speed Dial'). It worked fine, but I have since deleted it as it is no longer needed; however, new tabs still load the page where the add-on was housed: 'chrome://fvd.speeddial/content/fvd_about_blank.html'. I have reinstalled Firefox, yet the same problem still occurs. On the 'about:config' page I have found the setting 'browser.newtab.url' but do not know the default URL. Is there any way to remedy this? I will just add, I apologise if this is not actually how the new tab feature works; it is all I have gathered from the Firefox update page. Also, I do not want to simply restore my settings, ideally, as I have changed some of them (such as the search bar) and those work fine. I am on Windows XP, Home Edition; not sure which service pack.

    Read the article

  • PowerPoint save group as picture creates asymmetric edge, how to fix?

    - by Se Norm
    I created tons of figures for my thesis in PowerPoint, and now I've realized that when I try to save a grouped set of items (= one figure) as a picture (EMF), it somehow adds a border asymmetrically on the left and the bottom. The first screenshot showed the original group and the second showed the same group pasted as a picture (images not reproduced here). Does anyone have an idea how to fix that for a huge number of figures? I think it only started happening when I used a page size of 1 m x 1 m in PowerPoint to be able to zoom in more for some figures. However, I cannot simply change the page size now, as that messes up font and object sizes, and copying a figure into a smaller page and then saving as EMF doesn't do the trick, so maybe it is not related to the page size after all. Cropping every figure individually would be a lot of work, so I hope there is a different solution. I found the origin of the problem: the text label in the bottom-left corner of each image (0s, 8s, 16s). I still do not understand why it is happening, though, since the text label does not extend over the edge of the image (it was aligned using the align-left function). It would still be great if there was an easy way to fix this, especially as I want to keep the text where it is.

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening and we estimate that our request rate will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor the average response time, and as soon as the response time from Tomcat gets above some threshold, display an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I was looking for suggestions on how I could achieve this. Thanks.
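
    Stock mod_proxy_balancer cannot fail over on average response time, but per-member timeouts plus a custom error document get close to the "unlucky users get an error page quickly" behaviour described above. A hedged sketch; hostnames, paths and timeout values are placeholders:

        # Bound how long a request may wait on a busy Tomcat before giving up
        <Proxy balancer://appcluster>
            BalancerMember http://tomcat1:8080 timeout=10 retry=30
            BalancerMember http://tomcat2:8080 timeout=10 retry=30
        </Proxy>

        ProxyPass        /app balancer://appcluster/app
        ProxyPassReverse /app balancer://appcluster/app

        # Serve a lightweight static page when the backend times out or is saturated
        ErrorDocument 502 /overloaded.html
        ErrorDocument 503 /overloaded.html
        ErrorDocument 504 /overloaded.html

    True response-time-based shedding would need something smarter in front (HAProxy, or an application-level circuit breaker), so this is only an approximation of what is being asked for.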

    Read the article

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have 2 servers: one already configured with .NET, which works fine, and a new one which appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2 and the website is configured with .NET 1.1.4322. In ISAPI extensions I have set the .xml extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:
        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
          <block>
            <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
          </block>
        </form>
    gives the error:
        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin...
        &quo...
    We have the same configuration on another server which works fine, so are there other options, apart from the ISAPI extensions, that I need to look at? If I rename the page to .aspx, it of course works fine.
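
    One difference worth comparing between the two servers, besides the ISAPI mapping: the ISAPI script map only hands .xml requests to aspnet_isapi.dll, and ASP.NET still decides what to do with them via its httpHandlers configuration; if .xml falls through to the static-file handler, the raw file is returned and the <% %> block is never executed. A hedged web.config sketch for ASP.NET 1.1, unverified against the working server:

        <configuration>
          <system.web>
            <httpHandlers>
              <!-- compile and execute *.xml through the page handler, like an .aspx -->
              <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory" />
            </httpHandlers>
          </system.web>
        </configuration>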

    Read the article

  • NVIDIA Tesla K20C in Dell PowerEdge R720xd --- power cables

    - by CptSupermrkt
    I am trying to put an NVIDIA Tesla K20C into a Dell PowerEdge R720xd, and I'm having a bit of trouble understanding the power requirements of the card. First, two pages of the same manual seem contradictory to me: one page says only a single connector is required, while the next page says both are required. The entire manual for the card can be found here: http://www.nvidia.com/content/PDF/kepler/Tesla-K20-Active-BD-06499-001-v02.pdf (The question included a photo of the power connections on the card and a photo of where those connectors need to go, onto the PCI-E riser of the R720xd.) Neither the R720xd nor the GPU came with the necessary cables, and given what appears to be a contradiction in the GPU manual, I'm not even sure at this point what we actually need. I have searched high and low online for things like 2x6-pin PCI-E to 8-pin male-to-male and so on, and for the life of me cannot find what we need. In case anyone needs it, the owner's manual of the R720xd can be found here: ftp://ftp.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-r720xd_Owner%27s%20Manual_en-us.pdf The relevant page is page 68, which clearly indicates that the 8-pin female port on the riser card is for a GPU. The bottom-line question: exactly what power cables do we need to buy, and where can we find them?

    Read the article

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an SVG image in it: <img src="foobar.svg" alt="not working" /> If I make this a static HTML page and view it directly, the SVG is displayed, and if I type the address of the SVG itself, it is displayed. But when I make this an .aspx page and launch it dynamically from Visual Studio, I get the alt text, and if I type the address of the SVG (from localhost, not as a local file), the browser tries to download it instead of displaying it. I already defined the MIME type in IIS (for the entire server: "image/svg+xml") and restarted IIS. Same effect as before. Question: what more should I do? Update: Wireshark won't capture this traffic (as its documentation notes), and I also tried RawCap, but it cannot trace my connection (odd); luckily Fiddler worked. The request from the client:
        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
    The answer from the server:
        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***
    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
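
    The "Server: ASP.NET Development Server/10.0.0.0" header in the response above suggests the file is being served by Visual Studio's built-in development server (Cassini) rather than by IIS, which would explain why the server-wide IIS MIME type has no effect and the SVG comes back as application/octet-stream. If the project is switched to IIS or IIS Express, a per-site mapping in web.config is a hedged way to make the type explicit:

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- serve .svg with the correct MIME type for this site -->
              <remove fileExtension=".svg" />
              <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
            </staticContent>
          </system.webServer>
        </configuration>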

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using Preview.app and a duplex-capable HP Color LaserJet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently in the tray for a one-page document than for a two-page document. This is not obvious when printing on plain paper, but becomes obvious when the front and reverse sides of the sheets are marked; otherwise the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document, versus keeping it set to true when emitting multi-page documents, despite duplex printing always being specified in the print dialog. When printing a single-page document, duplex=true and duplex=false seem to make a difference with respect to which side of a sheet gets printed on. It would also be helpful if others could confirm that the problem actually exists. I suspect it is not limited to specific printers; I'm on OS X 10.6 and I checked two different HP printers.

    Read the article

  • Unable to record using JMeter [help me very urgent]

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed the steps below:
        1. Launch JMeter 2.3.3 and add a thread group to the test plan.
        2. Under Workbench > Add > Non-test elements, add an HTTP proxy server. Proxy server settings: port 9090, target: use recording controller, grouping: do not group samplers, type: HTTP request, and check all the boxes under HTTP sampler settings.
        3. Save the settings.
        4. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set up a manual proxy of localhost with port 9090 (no auto-detect settings, only the manual proxy) and save the settings.
        5. In JMeter, start the HTTP proxy server.
        6. Open a browser and hit the web page that needs to be tested.
    The page does not open. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the request is recorded in JMeter, but without the page opening, how can I test? I am looking for an immediate answer as my work is blocked; any help would be appreciated.

    Read the article

  • mod_rewrite works fine apart from missing directory index files

    - by j w
    I have a legacy web site hosted on Apache. It has a number of web pages sitting in the public web root and its subfolders:
        publicDocs/
            directorywith_no_defaultfile/
                some-legacy-flat-page.htm
            .htaccess
            index.php
            some-legacy-flat-page.htm
    I would like to start using Zend MVC for some of the newer pages. I have got a .htaccess mod_rewrite rule working so that any request for a non-existent file is sent to the MVC bootstrap file (/index.php). With my current set-up, the following kinds of requests are routed to /index.php, the MVC bootstrap:
        /index.php
        /blah
        /directorywith_no_defaultfile/bloo
    The following kinds of requests are served by the old legacy (flat) pages:
        /some-legacy-flat-page.htm
        /directorywith_no_defaultfile/some-legacy-flat-page.htm
    But when I request a directory that has no default file, such as /directorywith_no_defaultfile or /directorywith_no_defaultfile/, I get an error:
        Forbidden
        You don't have permission to access /directorywith_no_defaultfile/ on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
    I suspect this may have something to do with the way Apache handles directory index files. Do you know which Apache directives could be causing this?
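
    For context, the 403 followed by the secondary 404 is what Apache produces when a real directory has no DirectoryIndex file and directory listings are disabled; the usual RewriteCond %{REQUEST_FILENAME} !-d condition deliberately skips requests for existing directories. A hedged sketch of one way to also route index-less directories to the bootstrap (standard mod_rewrite directives; the exact conditions depend on the legacy layout):

        # typical front-controller rule: non-existent paths go to the MVC bootstrap
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]

        # additional rule: existing directories that lack their own index file
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME}/index.php !-f
        RewriteCond %{REQUEST_FILENAME}/index.htm !-f
        RewriteRule ^ index.php [L]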

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a site doing ~1.5-2 million page views per day running on 2 servers: one for MySQL, the other for everything else. The MySQL box has a load of 3; the frontend is usually at 0.0-0.1. Both are dual quad-core with 8 GB RAM, running SAS drives in RAID 5. The CPU is idle for the majority of the time and iowait is non-existent. I'm running nginx and memcached, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags severely, taking 10-15 seconds for a page to load. Page execution time is always very low, but the request seems to hang, waiting for something before it actually loads the page. What's even weirder is that it only happens to one file on the site (but it's the one that's most commonly accessed and that actually loads the site's content). Other pages are very fast at all times, even when it takes 15 seconds to load the actual content. I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column starts going above 100, which it frequently does, all the way to 500-1000. It does so at totally random times, not when traffic is heavy; it can do this in the middle of the night and work perfectly at 5pm when traffic is at its highest. Any ideas?
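
    A couple of standard tools can narrow down whether those write spikes are the disk stalling or something blocking upstream; a hedged starting point on the frontend box (pidstat comes with the sysstat package, and the last line assumes an nginx stub_status location, so adjust it to match the existing nginx_stats setup):

        # per-device utilisation and await times, refreshed every second
        iostat -x 1

        # per-process disk I/O, to see who is actually writing during a spike
        pidstat -d 1

        # nginx's own view of active/writing connections while the spike happens
        watch -n1 'curl -s http://127.0.0.1/nginx_status'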

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block while waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions on whether this is feasible:
        1. Set up a wildcard cert for *.domain.com.
        2. Whenever I need an image, generate a number from a hash mod 5 of the filename and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash makes any given filename always use the same subdomain, so the browser should be able to cache the images.
        3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.
    I am looking for feedback on this plan. My concerns are: Will I get warnings when my page has https:// links to multiple subdomains? Is the dynamic virtualhost record I'm talking about even possible? Considering the amount of processing this would require, is it likely to produce any overall benefit? I'm probably averaging a half-dozen images per page, with only half changing on each page refresh. Thanks in advance for your feedback.
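
    On the "is the dynamic virtualhost even possible" concern: a single vhost with a wildcard ServerAlias covers any number of imgN subdomains without per-subdomain configuration, and with a hash mod 5 the names would be img0 through img4. A hedged Apache sketch, assuming the wildcard certificate is already in place:

        <VirtualHost *:443>
            ServerName   img0.domain.com
            ServerAlias  img*.domain.com
            DocumentRoot /var/www/img

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>

    Since all the names are covered by the one *.domain.com certificate, the https:// links themselves should not trigger warnings; the cost is an extra TLS handshake and DNS lookup per subdomain, which is part of the "any overall benefit" question.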

    Read the article

  • Windows 7: something is causing excessive disk usage, maybe Chrome?

    - by camcam
    On my Windows 7 computer, this happens:
        1. I work for several hours without problems, also using Chrome.
        2. Suddenly, just after refreshing a page in Chrome, the disk starts being used excessively (at this point I can close Chrome or not, it doesn't matter, the disk won't stop).
        3. The disk LED is on all the time, I hear the disk running like crazy, and the whole computer is slowed down a little.
        4. It lasts 5-10 minutes.
        5. In the meantime, I go to Windows Task Manager, look at which processes are using the disk and turn them off one by one, with no success in stopping the excessive disk usage.
        6. After approximately 10 minutes everything stops.
        7. I go to Chrome (or re-open it) and refresh the page, with mixed results: sometimes the whole process repeats immediately, sometimes not.
    Basically, it is almost always Chrome refreshing a random page that starts the excessive disk usage, but killing the Chrome process does not stop the disk. Going to the same page in Firefox does not cause problems. Windows Search is turned off. I would like to know what is really happening. Perhaps there is a utility which would allow me to see which process is really using the disk, so that I can disable the responsible service? (Not Chrome, because killing Chrome does not change anything.) Or, even better, perhaps there is a way to fix it?

    Read the article

  • Updating WordPress 3.6 to 3.7 via admin area on Nginx VPS hangs and fails

    - by harryg
    So I have a few WordPress sites running on my VPS (Ubuntu 12.10, nginx, PHP-FPM 5.4). The sites are all on separate vhosts and use their own config files (albeit similar to each other) and vary in complexity; one is very simple and uses minimal plugins. When I try to update core on any site via the admin area, I click the "Update Now" button (which should run the script in wp-admin/update-core.php) and the page hangs for a minute or two before going to a blank admin page (i.e. the wp-admin menu bars and header bar are there, but there is no content in the body of the page). Visiting another admin page via the still-present menu bar reveals that the core has not been updated. Checking the error log I see this entry:
        2013/10/29 23:20:48 [error] 9384#0: *5318248 upstream timed out (110: Connection timed out) while reading upstream, client: --.---.--.---, server: www.mysite.com, request: "POST /wp-admin/update-core.php?action=do-core-upgrade HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "mysite.com", referrer: "http://mysite.com/wp-admin/update-core.php"
    This didn't happen in the past with older updates, and the rest of the site, including updating plugins, works fine. Any ideas? Could it be as simple as a timeout error? I find that unlikely, as the server should munch through a WP upgrade in seconds.
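
    The "upstream timed out (110)" line points at nginx's fastcgi_read_timeout (60 seconds by default) expiring while update-core.php is still downloading and unpacking the release. A hedged sketch of the usual workaround; the values are illustrative:

        # nginx vhost: give long-running PHP scripts more time before timing out
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_read_timeout 300s;
        }

        # and in the PHP-FPM pool config (e.g. /etc/php5/fpm/pool.d/www.conf),
        # let the worker itself run that long too:
        #   request_terminate_timeout = 300

    If the timeout really is the only issue, the update should complete once both limits exceed however long the core download and unzip actually take on the VPS.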

    Read the article

  • Removing Paths/ Landing Pages From SharePoint Search Results

    - by j.strugnell
    Hi there, we've been asked by a client to remove a number of pages from showing up on their public website's search results page. I've been into the SSP and created crawl rules to remove these pages. All seemed to have worked OK, but we have an issue in that landing pages are still showing up in their "www.domain.com/sitearea/" form, though not in their "www.domain.com/sitearea/pages/default.aspx" form. For each page of this type we have created one rule to "Exclude" the "aspx" path and another rule to include the "/" path but to "Follow links on the URL without crawling the URL itself". We tried adding rules to exclude the "/" format, but that only resulted in everything underneath it being excluded. Does anybody know how to remove both the "area/pages/default.aspx" and the "area/" paths from search results? I'm not sure if it's the done thing to ask two questions in one, but this is in a similar vein, so it should be OK: I was wondering if anyone knew of a tool (or if it is possible) to allow site admins to exclude pages from search results without going via SSP crawl rules. I know they can do it at the site level, but I was wondering if anything out there enables this at the page level, through either Page or Site Settings?

    Read the article

  • Recommended Smartphone for Reading PDFs [closed]

    - by mika
    This is as much a software as a hardware question. I use a lot of public transport, and perhaps the best way to spend the time there is to read while listening to music. Currently I use a Nokia E90 and Adobe Reader LE 2.5 (full version). I was wondering if there are any better alternatives? Requirements:
        - a screen at least 640 px wide, preferably 800 px
        - the physical size of the LCD display matters: it should be large, but the phone itself should be as small as possible, which favours touchscreen models
        - the PDF reader should be of high quality and render most PDFs correctly
        - other important features: full-screen mode, keyboard controls for Page Down and page change, multiple zoom levels to fit the screen, and opening recent documents at the last page read
    Downsides of the E90 + Adobe Reader LE:
        - the phone is large compared to the display
        - the display is hard to read in sunlight
        - Adobe Reader crashes the phone regularly, the zoom could have more levels, and it doesn't remember the last page
    EDIT: I switched to an iPhone and GoodReader. The smaller physical screen width compared to the E90 is a step backwards, but other than that I'm happy. GoodReader is the highest-quality smartphone PDF reader I've seen so far.

    Read the article

  • .htaccess: modify rules and redirect if there's .php in the URL

    - by Ron
    Hello everyone. I have the following code in my .htaccess:
        Options +FollowSymlinks
        RewriteBase /temp/test/
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule ^about/(.*)/$ $1.php [L]
        RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/downloadfile/$ file-download.php?product=$1&version=$2&os=$3&method=$4 [L]
        RewriteRule ^(.*)/download/(.*)/(.*)/(.*)/$ download-donate.php?product=$1&version=$2&os=$3&method=$4 [L]
        RewriteRule ^(.*)/download/(.*)/$ download.php?product=$1&version=$2 [L]
        RewriteRule ^newsletter-confirm/(.*)/$ newsletter-confirm.php?email=$1 [L]
        RewriteRule ^newsletter-remove/(.*)/$ newsletter-remove.php?email=$1 [L]
        RewriteRule ^(.*)/screenshots/$ screenshots.php?product=$1 [L]
        RewriteRule ^(.*)/(.*)/$ products.php?product=$1&page=$2 [L]
        RewriteRule ^schedule-manager/$ products.php?product=schedule-manager&page=view [L]
        RewriteRule ^visual-command-line/$ products.php?product=visual-command-line&page=view [L]
        RewriteRule ^windows-hider/$ products.php?product=windows-hider&page=view [L]
        RewriteRule ^(.*)/$ $1.php [L]
        RewriteRule ^products/$ products.php [L]
    Everything works perfectly. I would like to know how I can modify it so it uses fewer lines. I am pretty sure I can remove at least 4-5 lines (merging the schedule-manager, visual-command-line and windows-hider rules, and some more), but I don't know how. I know that the order of the rules is important; this order works, although I have no idea why, I just played with the rules until it worked. If you think there will be a bug with this order, please tell me where. Another thing: I would like to redirect, for example, www.myweb.com/products.php to www.myweb.com/products/ (I mean that the URL in the address bar will change). I don't know if that redirect can coexist with my rewrite rules. Thank you.
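
    A hedged sketch of the two changes asked about: folding the three near-identical product rules into one alternation, and 301-redirecting direct .php requests to their clean URL (THE_REQUEST is tested so the internal rewrites above don't cause a loop). Untested against the real site, and the alternation rule has to stay above the generic ^(.*)/$ rule:

        # replace the three per-product rules with a single alternation
        RewriteRule ^(schedule-manager|visual-command-line|windows-hider)/$ products.php?product=$1&page=view [L]

        # redirect .php URLs typed in the address bar to the pretty URL
        RewriteCond %{THE_REQUEST} \s/temp/test/([^\s?]+)\.php[\s?]
        RewriteRule ^ /temp/test/%1/ [R=301,L]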

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, so I've also installed memcached and Batcache. I installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, then installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. I've also included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading the page several times (Cmd+R in Chrome on OS X) and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I should add some other component to improve performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless here, just another way of doing what I'm already doing. Any other component worth adding? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm asking from the server perspective.)
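
    Before adding another layer such as Varnish, it may be worth confirming that the object cache drop-in can actually reach memcached from under HHVM, since Batcache depends entirely on a working persistent object cache and fails silently when it isn't there. A small hedged check; the filename is made up, and whether the drop-in in use needs the Memcache or the Memcached class is an assumption to verify against the Pastebin copy:

        <?php
        // check-cache.php (hypothetical name) -- run as: hhvm check-cache.php
        var_dump(class_exists('Memcache'), class_exists('Memcached'));

        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);
        $m->set('batcache-probe', 'ok', 30);
        var_dump($m->get('batcache-probe'));   // "ok" means the daemon is reachable

    As for Varnish: it caches whole responses in front of nginx, which overlaps with what Batcache does at the PHP layer, so getting the object cache working first makes it easier to judge whether Varnish adds anything.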

    Read the article

  • Force request to miss cache but still store the response

    - by Tom Marthenal
    I have a slow web app that I've placed Varnish in front of. All of the pages are static (they don't vary per user), but they need to be updated every 5 minutes so they contain recent data. I have a simple script (wget --mirror) that crawls the entire website every 15 minutes; each crawl takes about 5 minutes. The point of the crawl is to update every page in the Varnish cache so that a user never has to wait for a page to generate (since all pages have been generated recently thanks to the spider). The timeline looks like this:
        00:00:00: Cache flushed
        00:00:00: Spider starts crawling to update the cache with new pages
        00:05:00: Spider finishes crawling, all pages are updated until 1:15
    A request that comes in between 0:00:00 and 0:05:00 might hit a page that hasn't been updated yet and will be forced to wait a few seconds for a response. This isn't acceptable. What I'd like to do, perhaps using some VCL magic, is always forward requests from the spider to the backend, but still store the response in the cache. This way, a user will never have to wait for a page to generate, since there is no 5-minute window in which parts of the cache are empty (except perhaps at server startup). How can I do this?
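
    Varnish has a request flag for exactly this: hash_always_miss forces a lookup miss, so the request goes to the backend, while the fresh response is still inserted into the cache for later hits. A hedged VCL sketch keyed off the crawler's User-Agent, which wget lets you set explicitly (the "cache-warmer" token is an assumption; any marker the spider can send works):

        sub vcl_recv {
            # the mirror script would run e.g.: wget --mirror --user-agent="cache-warmer" ...
            if (req.http.User-Agent ~ "cache-warmer") {
                set req.hash_always_miss = true;
            }
        }

    With that in place, the periodic crawl refreshes every object without ever serving the spider stale content, and ordinary visitors keep hitting whatever is already cached, so the full flush at 00:00:00 is no longer needed.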

    Read the article

< Previous Page | 326 327 328 329 330 331 332 333 334 335 336 337  | Next Page >