Search Results

Search found 39980 results on 1600 pages for 'blank page'.


  • Windows Swap (Page File): Enable or Disable?

    - by d03boy
    From my personal experience I've noticed that disabling the page file in Windows XP has given me, in general, the biggest speed gain of any software change I can make. Obviously this has to be done when a significant amount of RAM is available; typically I find that it works nicely with 2 GB of RAM or more. The only issues I've ever really had were loading up Adobe Photoshop. Is this really a speed improvement or am I imagining it? Note: in order to actually turn it off, you must not just set it to 0 MB, but disable it. Otherwise Windows will simply expand it again to meet its needs.

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expats and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places, actually), so long as the content is served up within the boundaries of the Great Firewall. Anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued).

    So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server.

    I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use) and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside and the other on the inside, since not all expats use a VPN, and some Chinese speakers also use VPNs. Also, some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I worry about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that).

    CDNs I'm considering are Google PageSpeed, CloudFlare and Amazon CloudFront, none of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements we'll talk about in this piece are also a good example of community contributions: the work was conceived, implemented and contributed by engineers at Facebook.

    Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are a fixed size; supported sizes are 1, 2, 4, 8 and 16K, and the compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well, i.e. a page can live in the buffer pool in compressed-only form, or with both the compressed page and the uncompressed version, but never in uncompressed-only form. On disk we only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync, i.e. changes are applied to both atomically.

    Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well we split the page into two and recompress both pages.

    Now let's talk about the three major improvements made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where recovery is attempted using a different zlib version from the one used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed page images to the redo log files not only makes the operation heavy-duty but can also adversely affect flushing activity: redo space is used in a circular fashion, and when we generate much more redo than normal we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which matches the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and the allowed values are 1 to 9. Again, the parameter is dynamic, i.e. you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter a compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, we should pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that recompression won't end in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well: we keep removing the pad if the failure rate is fairly low. Two configuration variables are exposed to tune the padding effort. innodb_compression_failure_threshold_pct (default 5, range 0 - 100, dynamic) is the percentage of compression operations that must fail before we start padding; the value 0 has the special meaning of disabling padding. innodb_compression_pad_pct_max (default 50, range 0 - 75, dynamic) is the maximum percentage of the uncompressed data page that can be reserved as pad.
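
    As a quick illustration (a sketch, not taken from the original post; the table and column names are made up), this is how the new knobs are typically exercised on a 5.6 server. Compressed tables also require innodb_file_per_table and the Barracuda file format:

        -- prerequisites for ROW_FORMAT=COMPRESSED tables
        SET GLOBAL innodb_file_per_table = 1;
        SET GLOBAL innodb_file_format = 'Barracuda';

        -- a compressed table with an 8K compressed page size
        CREATE TABLE logs (
            id BIGINT NOT NULL PRIMARY KEY,
            payload TEXT
        ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

        -- the three 5.6 additions discussed above; all are dynamic
        SET GLOBAL innodb_log_compressed_pages = OFF;             -- only if recovery will use the same zlib
        SET GLOBAL innodb_compression_level = 9;                  -- 1..9, default 6
        SET GLOBAL innodb_compression_failure_threshold_pct = 5;  -- start padding after 5% failed compressions
        SET GLOBAL innodb_compression_pad_pct_max = 50;           -- never reserve more than 50% of the page as pad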

    Read the article

  • Saving a Word document as Web Page, Filtered drastically reduces image resolution

    - by Abdullah Jibaly
    I have a document with hundreds of images. When I save the first image (right click and save picture) it comes out at a good resolution. However, when I save the document as Web Page, Filtered, all the images end up really low-res; saving the exact same image afterwards gives a visibly degraded copy. I've tried the following options in the Save As dialog with no luck: in Tools > Web Options... > Pictures > Target Monitor I've set the Pixels Per Inch to the highest value, 120, and in Tools > Compress Pictures > Target Output I've set it to Print (220 ppi). Any ideas would be appreciated.

    Read the article

  • How to redirect (or alias) a jump page with Apache

    - by Meltemi
    I'm not an Apache expert but need to make a small change to a web server. We are introducing a "jump page" URL that is different from the primary URL (for tracking reasons):

        /productA/index.html
        /productA/jump_index.html

    Basically I want to log that jump_index.html was requested and then return index.html. I don't want the client to wait 8 seconds or so for a redirect. How should we be handling this? Simply symlink (or alias) the file in the filesystem? Use mod_alias AliasMatch (if so, how exactly)? Something better still? Edit: the existing mod_rewrite section in httpd.conf:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^TRACE
            RewriteRule .* - [F]
        </IfModule>
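
    One hedged sketch (not from the original question): an internal rewrite serves index.html while the access log still records the jump URL, because Apache logs the original request line, and the client sees no redirect at all. It could sit next to the existing TRACE rule in httpd.conf:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # serve index.html for the jump URL; the log still shows jump_index.html
            RewriteRule ^/productA/jump_index\.html$ /productA/index.html [L]
        </IfModule>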

    Read the article

  • Customize specific websites on Chrome's new tab page

    - by Ben
    Chrome's new tab page offers thumbnails of the 8 websites you visit most. Unfortunately for one of the sites, 4chan.org, the main page is a directory, not somewhere I want to go. I would like the 4chan.org thumb to instead open 4chan.org/u/, taking me directly there instead of the main directory where I would have to manually find /u/. Anyone know how to make this happen? Thanks. Edit: I would like this to happen without totally destroying the new tab's default functionality.

    Read the article

  • .htaccess URL rewriting problem

    - by letsworktogether
    I'm kind of stuck at this part and was hoping I'd get some assistance. I'm building a highscores page in PHP; that's going great, it works. However, I dislike the idea of "index.php?skill=name" and therefore wanted a bit of SEO in this. I have successfully replaced the URL with a more friendly version: "highscores/skill/name". And this is where the problem starts: I have added pagination to the highscores, and the page is read from the HTTP GET page variable ($_GET['page']). I dislike the idea of "highscores/skill/name&page=2" and was hoping you could help me make the URLs look like the following. Page 1, accessing the file without declaring a page number: DOMAIN.TLD/highscores/skill/name. Page 2, where the page variable is now needed: DOMAIN.TLD/highscores/skill/name/2. As you can tell, the "2" will define page 2 and load the correct data for that page. However, I'm having much trouble configuring my .htaccess file this way. This is my latest attempt to get it to work:

        RewriteRule ^highscores\/skill\/(.*?)(\/(.*?)*)$ highscores/skills.php?skill=$1&page=$2 [L] # Skills page

    Unfortunately it does not work: it makes the page look horrible (the CSS doesn't load) and it doesn't go to the page specified in the URL. I hope you understand my issue, thank you!
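
    A hedged sketch of the kind of rules that usually work here (the file layout is assumed, not taken from the question). Note that once the URL gains extra path segments, relative CSS/JS links break, which explains the broken styling; absolute paths or a <base href="/"> in the page header fix that:

        RewriteEngine On
        # /highscores/skill/name      -> page 1
        RewriteRule ^highscores/skill/([^/]+)/?$ highscores/skills.php?skill=$1&page=1 [L,QSA]
        # /highscores/skill/name/2    -> page 2 (any numeric page)
        RewriteRule ^highscores/skill/([^/]+)/([0-9]+)/?$ highscores/skills.php?skill=$1&page=$2 [L,QSA]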

    Read the article

  • Should Site Title be Before or After Page Title?

    - by NickAldwin
    Apologies if this is a dupe. I tried searching, but didn't find anything specifically addressing this concern. When creating a large(ish) site, page titles usually reference both the site name and the current page name. However, it seems there are two main conventions:

        Bob's Awesome Site - Contact Page
        Contact Page - Bob's Awesome Site

    I've looked around, and pages usually use one of the two variants above. Is there any reason to use one over the other? SEO/readability/usability/etc.? I've thought about it, and have only come up with:

        Page first - differentiates the tab when the browser is crowded with lots of tabs
        Site first - immediately shows the "parent" site, so to speak; a more cohesive experience

    Read the article

  • Set up a redirect for LAN clients to a local T&C page

    - by tb2571989
    Hi, I'm trying to set up something on my network so that when users connect and try to use the internet they are redirected to a locally hosted terms-and-conditions and policy page. Once they click "accept" they are passed through to their homepage; otherwise, if they decline, the window closes or shows them an error message. I've spent a while looking into this and am wondering if it's possible to do without having to set up or add to a firewall. Otherwise let me know what my options are and I can pass it on. Many thanks, Tom
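
    If the clients' web traffic already passes through a local Apache server or proxy, one hedged sketch (every name here is an assumption, nothing is taken from the question) is to redirect any request that lacks an "accepted" cookie to the T&C page, and have the accept button set that cookie before sending the user on:

        RewriteEngine On
        # anyone who has not accepted yet gets sent to the local T&C page
        RewriteCond %{HTTP_COOKIE} !tc_accepted=1
        RewriteCond %{REQUEST_URI} !^/terms
        RewriteRule ^ /terms.html [R=302,L]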

    Read the article

  • ColdFusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 / ColdFusion 7 server a few weeks ago. Today a network cable was unplugged by accident from the server. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more than that, we finally got pages to start loading. However, loading is extremely slow (30+ seconds) on pages that used to load instantly. Loading through the local network (i.e. localhost/CFIDE/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary; I'm just not sure what information you might need in order to help. Thanks for your time.

    Read the article

  • Browser extension (or other software) to delay page load

    - by Doug Harris
    The alt text of today's comic at xkcd.com says: After years of trying, I broke this habit in a day by decoupling the action and the neurological reward. I set up a simple 30-second delay I had to wait through, in which I couldn't do anything else, before any new page or chat client would load (and only allowed one to run at once). The urge to check all those sites magically vanished--and my 'productive' computer use was unaffected. (bold is my emphasis) Does anybody know of a browser extension or other software that will add this sort of delay? I've seen extensions that simply block sites, but not ones that add a delay like this.

    Read the article

  • Memory Leak in Windows Page File when calling a shell command

    - by Arno
    I have an issue on our Windows 2003 x64 build server when invoking shell commands from a script. Each call causes a "memory leak" in the page file, so it grows quite rapidly until it reaches the maximum and the machine stops working. I can reproduce the problem very nicely by running a Perl script like:

        for ($count=1; $count<5000; $count++) {
            system "echo huhu";
        }

    It is independent of the scripting language, as the same happens with Lua:

        for i=1,5000 do
            os.execute("echo huhu")
        end

    I found somebody describing the same issue with PHP at http://www.issociate.de/board/post/454835/Memory_leak_occurs_when_exec%28%29_function_is_used_on_Windows_platform.html. His solution (firewall/virus scanner) does not apply; neither is running on the machine. We can also reproduce the issue on other developer machines running XP 64-bit, but not on XP 32-bit. The process responsible for the allocation is C:\WINDOWS\System32\svchost.exe -k netsvcs, which runs all the basic Windows services. Does anybody know the issue and how to resolve it?

    Read the article

  • XAMPP admin page access forbidden

    - by Vihaan Verma
    I'm new to the Apache world! I read some docs online to set up a virtual host, which works fine. Here are the contents of the httpd-vhosts.conf file:

        <Directory C:/vhosts>
            Order Deny,Allow
            Allow from all
        </Directory>

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "C:/htdocs"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot "C:/vhosts/phpdw"
            ServerName phpdw
        </VirtualHost>

    But now when I access the XAMPP control panel and try to open the Apache admin page, I get an access denied error (403). My guess is that some more configuration is needed in this file to allow access to localhost. I could not find anything relevant. Thanks
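
    A hedged guess at a fix (the paths are assumptions based on a default XAMPP install, not taken from the question): once NameVirtualHost is in play, requests for localhost are answered by the vhost whose ServerName is localhost, so that vhost needs to point at XAMPP's own htdocs and explicitly allow access, otherwise the admin pages return 403:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs"
            ServerName localhost
            <Directory "C:/xampp/htdocs">
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>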

    Read the article

  • Passenger, Apache and avoiding page caching

    - by user38382
    I'm hosting a Rack application with Passenger and Apache. The application is set up to cache the content of each request to the public directory after each request, which allows Apache to serve the content directly as a static page for future requests. I would like to tell Apache, presumably through some rewrite rules, that any request with query parameters should not be served from the cache but instead passed down to the Rack application. With a Mongrel setup I would just redirect to the balancer when my rewrite conditions are met. How do you do the same with Passenger?
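
    A hedged sketch of the usual page-cache rewrite rules (the cache layout under public/ is assumed): serve the cached .html file only when the query string is empty and the file exists; everything else falls through to Passenger, which picks up any request that does not map to a static file:

        RewriteEngine On
        # serve the cached copy only for query-less requests that have one
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.html -f
        RewriteRule ^(.*)$ $1.html [L]
        # requests with a query string (or without a cached file) reach the Rack app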

    Read the article

  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a gameserver running Debian Lenny on a VPS host. Even under a fairly low load, the players start experiencing major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3 - 10 seconds. I have installed Munin server monitoring, but looking at the graphs it seems the server has plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is at roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the server from 512MB to 256MB, but that didn't seem to help.
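
    For anyone wanting to watch that counter directly while a lag burst happens, a small sketch (plain shell, nothing specific to this server):

        # sample page-table and commit usage once a second
        watch -n 1 'grep -E "PageTables|Committed_AS" /proc/meminfo'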

    Read the article

  • Cannot open any web page, but MSN Messenger works

    - by Steven
    I use my computer behind a router. My MSN program can connect to the Internet, but I cannot open any web site in my web browser. It seems this problem is related to DNS, because when I enter an IP address directly in the address bar of my web browser, the web page is displayed. However, I don't know how to fix this problem. I have chosen the Google Public DNS servers on my computer, and the problem still exists. My OS is Windows XP. How do I fix this problem? Any ideas?
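
    A few standard XP-era checks, offered as a sketch rather than a diagnosis (the hostname is just an example):

        rem ask the currently configured DNS server, then Google Public DNS directly
        nslookup www.google.com
        nslookup www.google.com 8.8.8.8
        rem clear the local resolver cache and confirm which DNS servers are actually in use
        ipconfig /flushdns
        ipconfig /all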

    Read the article

  • On wordpress.com, how to set a category as the front page?

    - by Shashank Sawant
    I first referred to a link explaining how to set a page as your front page. What I want is to set a category of my blog posts as my front-page display. Hoping for an answer, I went to the following link: http://en.forums.wordpress.com/topic/1-category-as-front-page?replies=4 It effectively redirects me to my first link. Though I can make a page on wordpress.com and set it as my front page, I still haven't understood how to set one of my blog categories as my front-page display.

    Read the article

  • All-on-one-page print view in Plone

    - by Kev
    We have a Plone 4 document that is in a hierarchy. At each node there's either a document or a folder; folders then have more documents and folders. We want to be able to print the entire hierarchy, which means rendering the whole thing on one page. I have seen a number of web sites that seem to have something like this. Is it done manually, or is there some add-on I can get to make this feature possible?

    Read the article

  • How to rewrite or redirect old, missing or invalid URLs to a 404 page

    - by kath
    I recently upgraded a site and almost all URLs have changed. I have redirected all of them (or so I hope), but it is possible that some have slipped by me. Is there a way to somehow catch all invalid URLs and send the user to a certain page? I am using PHP. Thanks so much! The error file is already set in .htaccess, but nothing seems to change. You can see the relevant part of the file below:

        AddHandler application/x-httpd-php5s .php
        ErrorDocument 404 /content/404.php
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /

    Here are two different URLs; the first is the old one (which I edited) and the second is the edited one:

        #1 old URL (which is no longer on the server):
        http://adsbuz.com/vehicles-cars/toyoya/2009-toyota-land-cruiser-gxr-4686.htm
        #2 edited URL (which is on the server):
        http://adsbuz.com/vehicles-cars-for-sale/toyoya/2009-toyota-land-cruiser-gxr-4686.htm

    I need only the second one, with vehicles-cars-for-sale, because the other directory has already been modified and is not on the server. But as you can see, after the site name (adsbuz) both vehicles-cars and vehicles-cars-for-sale open the same location. I hope I made myself clear.
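
    If the only change is the directory name, one hedged sketch (based solely on the two example URLs above) is a single permanent redirect inside the existing mod_rewrite block, so stray links to the old section land on the new one while everything genuinely missing still reaches the 404 page:

        # map the old section onto the new one, keeping the rest of the path
        RewriteRule ^vehicles-cars/(.*)$ /vehicles-cars-for-sale/$1 [R=301,L]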

    Read the article

  • .htaccess rewrite all queries to static page

    - by user127219
    I have an account where hundreds of inbound links to their calendar are showing up as 404 (they moved their site to a new platform). I would like to make a wildcard redirection so that all URLs with a query to their old event calendar land on a new static page, and do the same for their webstore queries. I've tried several variations, but can't seem to get it to work. CASE 1: I need to redirect URLs like these (note the difference between "showDay" and "showWeek"):

        apps/calendar/showWeek?calID=5107976&year=2011&month=7&day=10
        apps/calendar/showDay?calID=5107976&year=2011&month=9&day=10

    to: http://domain.com/events/ CASE 2: and also URLs like this:

        apps/webstore/products/show/1927074

    to: http://subdomain.domain.com/ I can't seem to get the syntax right to take all of these URLs and redirect them. I'm looking for the equivalent of the wildcard that "apps/calendar/*" would give you at a command line. Any help is appreciated!
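
    A hedged .htaccess sketch matching just those two prefixes (the target hosts are copied from the question; the trailing ? discards the old query string so visitors land cleanly on the static pages):

        RewriteEngine On
        # any old calendar URL (showDay, showWeek, anything that follows) -> events page
        RewriteRule ^apps/calendar/ http://domain.com/events/? [R=301,L]
        # any old webstore URL -> the new store
        RewriteRule ^apps/webstore/ http://subdomain.domain.com/? [R=301,L]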

    Read the article

  • Serve mirrored (static) web page with original headers

    - by aioobe
    I have a dynamic web page of which I want to create a "frozen" copy. Typically I would do something like wget -m http://example.com and then put the files in the document root of the web server. This site however has some dynamic content, including dynamically generated images, for instance http://example.com/company/123/logo. This means that in order to mirror the page, I need to (1) save whatever headers the server currently serves for each URL, which can be done using the wget option --save-headers, and (2) serve the static pages with the proper headers for each file (this I have no idea how to do). What is the best way to solve this? Any suggestions are welcome.
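
    One hedged direction (not claimed as the definitive answer): wget can keep the headers, and Apache's mod_asis can send files "as is", i.e. with the headers stored at the top of each file, though the HTTP status line that wget records would need rewriting into a Status: header before mod_asis will accept it:

        # mirror the site, prepending each response's headers to the saved file
        wget --mirror --page-requisites --save-headers http://example.com/

        # in the mirror's Apache config: serve .asis files with their embedded headers
        AddHandler send-as-is asis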

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup, and one of the PVs is a USB disk (I know). One of them is getting the error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as an unknown device. I can ls into the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the machine are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run an fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname ?

    Read the article

  • Finding which web component is causing problems during page loading

    - by Juhele
    I am using the Windows version of Firefox 5.0, and over the last few days I have found that one website (similar in focus to Stack Exchange, but in Czech) loads very slowly in my browser, while the connection is fine and I have never had problems with it before. There is also another guy trying to solve almost the same problem, but no help yet. So I would ask whether there is some Firefox plugin that shows which page components are loading, which could help me find the problem. I am writing from an office PC, so we have blocked all the social networks like FB, Twitter etc., and I also use Adblock Plus; however, deactivating it does not have any effect.

    Read the article
