Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Varnish + Tomcat vs Apache + mod_jk + Tomcat

    - by Adrian Ber
    Does anyone have comparison data on the performance of putting either Varnish or Apache with mod_jk in front of Tomcat? I know the AJP connector is supposed to be faster than plain HTTP, but I was thinking that Varnish, being lighter and highly optimized, could perform better in combination. There is also the question of static resources (which I think Varnish will serve faster than Apache, even with mod_cache) versus dynamic pages.
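
    For reference, Varnish in front of Tomcat needs little more than a backend definition. A minimal sketch, assuming Tomcat's HTTP connector listens on port 8080 on the same host (both host and port are assumptions, not from the question):

        # /etc/varnish/default.vcl -- minimal pass-through to Tomcat
        backend default {
            .host = "127.0.0.1";   # assumed: Tomcat on the same machine
            .port = "8080";        # assumed: Tomcat's default HTTP port
        }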

    Read the article

  • Google Chrome shows garbage text instead of web page

    - by Sarfraz Ahmed
    On some websites, I receive garbage text instead of the web page itself. I have noticed that this only happens on pages served with the Content-Encoding header set to gzip, or with chunked transfer headers (see the image). I am using the latest version of Chrome, 22.0.1229.94, on Windows 7 Ultimate. The browser encoding is set to UTF-8 (I also tried changing it to Western and others, with the same result). Can anyone suggest a solution to this? Thanks
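
    One way to narrow this down is to check the server's gzip response outside the browser. A diagnostic sketch with curl (the URL is a placeholder for an affected site):

        # ask for gzip and inspect the response headers
        curl -sI -H 'Accept-Encoding: gzip' http://example.com/ | grep -i 'content-encoding'

        # let curl decompress the body; if this prints readable HTML,
        # the server response is intact and the fault is on the browser side
        curl -s --compressed http://example.com/ | head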

    Read the article

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It was cheap, and it has a modest AMD 1800+ (1.5 GHz) and 256 MB of DDR RAM; need I continue? I think I'm overloading it already. It runs CentOS 5.4, and I have installed the following:

    - Webmin
    - Apache
    - MySQL
    - Subversion as an Apache module
    - Hudson (standalone)
    - Sonar (standalone, running on its own Jetty install)
    - Artifactory (standalone)

    That's pretty much it, but I'm having problems: pages load quite slowly. The server's network speed is excellent, so I think I'm simply running out of CPU and/or memory. A side effect of the slow pages is that Hudson sometimes times out, unable to start Maven or contact Sonar within a certain amount of time.

    I think the next step might be to move to a single application server and deploy the WAR versions of Hudson, Sonar, and Artifactory on it together. I don't know that it will help, but it seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like; just let me know what else would help you answer my question.

    To head off any suspicions: I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL.

    Edit: I just thought of another thing I'd like you to comment on: Apache currently has 8 spawned slave processes taking 42 MB of RAM apiece, and this is not even my web server. Will everything else still function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache to something that works alongside the other three applications, maybe even serving Subversion as a WAR?
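
    As a quick sanity check of where the memory is actually going on a box like this, something along these lines works on CentOS (a generic sketch, not taken from the question):

        # top memory consumers, sorted by resident set size
        ps aux --sort=-rss | head -n 10

        # overall memory and swap picture, in megabytes
        free -m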

    Read the article

  • Is there a way to specify minimum minibuffer/echo area size in emacs?

    - by Trevor Alexander
    I am running Emacs 24, and due to a separate issue, my input method displays input candidates in the minibuffer regardless of how I set its options. That would not be such a problem if the minibuffer did not repeatedly resize from height 2 (when displaying candidates) to height 1 (when not) as I scroll through candidates; it's really jarring. I looked through the documentation online and searched the configuration pages, but I couldn't find a setting for this. Is it possible?
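
    One built-in variable worth checking (a possible starting point, not a confirmed fix for this particular input method): resize-mini-windows controls whether Emacs resizes the minibuffer window automatically, so pinning it may stop the height changes.

        ;; keep the minibuffer window at a fixed height instead of
        ;; growing and shrinking as candidates appear and disappear
        (setq resize-mini-windows nil)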

    Read the article

  • Scanner Daily Duty Cycle

    - by Juanp
    I'm confused by the concept of a 'Daily Duty Cycle'. For example, say I have a scanner whose spec is PPM (pages per minute): 90 and DDC (Daily Duty Cycle): 800, and I am interested in scanning for ONLY 10 hours continuously. Which calculation applies: 90 * 60 * 10 = 54,000, or (800 / 24) * 10 = 333? The results are very different. Which is the right way to read it?
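
    The two numbers answer different questions: PPM is a burst speed, while a duty cycle is normally a per-day ceiling. A small Python sketch of the possible readings (treating DDC as pages per day is the usual interpretation, but the datasheet should confirm it):

        # duty_cycle.py -- compare the readings from the question
        ppm = 90      # pages per minute (mechanical burst speed)
        ddc = 800     # daily duty cycle, usually "recommended pages per day"
        hours = 10

        mechanical_max = ppm * 60 * hours   # 54,000 pages: what the motor could feed
        prorated = ddc / 24 * hours         # ~333 pages: prorating the daily rating by hour
        per_day_cap = ddc                   # 800 pages: the rating read as a whole-day limit

        print(mechanical_max, round(prorated), per_day_cap)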

    Read the article

  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass, but several pages load with 404s. The links on those pages have https:// in front, yet they result in an HTTP request, which ends in a 404; I only want these services to be available through HTTPS. I've tried varied trailing slashes appended to the proxy path and location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks caused a conflict), to no effect.

    So if a link is to https://example.com/sickbeard/errorlogs: loading https://example.com/sickbeard/errorlogs in a browser gives a 404, while https://example.com/sickbeard/errorlogs/ loads.

    nginx error log:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files:

    proxy.conf

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
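
    A commonly suggested variant for this class of problem is to keep the trailing slash consistent on both the location and the proxy_pass target, so nginx maps the remainder of the URI onto the backend path correctly, and to tell the backend the original scheme so it generates https:// links. A sketch (same paths as above; whether this backend honors X-Forwarded-Proto is an assumption):

        location /sickbeard/ {
            proxy_pass http://localhost:8081/sickbeard/;
            include proxy.inc;
            # assumed: the backend uses this to build https:// links
            proxy_set_header X-Forwarded-Proto $scheme;
        }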

    Read the article

  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I occasionally get persistent "white pages", which are presumably PHP fatal errors (although no errors appear in my error_log). These white pages are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly.

    Here are my current /etc/php.ini settings:

    - memory_limit = 128M
    - eaccelerator.shm_size="64", where shm_size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings)
    - eaccelerator.shm_max="0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is "0" which disables the limit"
    - eaccelerator.shm_ttl="0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to "0" which means that eAccelerator won't try to remove any old scripts from shared memory."
    - eaccelerator.shm_prune_period="0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then "shm_prune_period" seconds ago. Default value is "0" which means that eAccelerator won't try to remove any old script from shared memory."
    - eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    My phpinfo page reports memory_limit 128M, Version 0.9.5.3, and Caching Enabled true. My eAccelerator control.php page reports 64 MB of total RAM available and memory usage of 77.70% (49.73 MB / 64.00 MB); 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) and 22.1 MB is used by the cache keys, which are populated by the WordPress object cache.

    My questions are:

    1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, 27.6 MB at the moment)?
    2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
    3. If I change eaccelerator.shm_max to (say) 32 MB, would that avoid this problem?
    4. Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)
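
    Based only on the wiki text quoted above, one plausible adjustment is to stop running with eviction disabled, so eAccelerator can remove stale scripts instead of failing when the 64 MB segment fills. A php.ini sketch (the values are assumptions to be tuned, not verified against this server):

        ; allow eviction when shared memory runs out, per the quoted wiki text
        eaccelerator.shm_ttl="3600"        ; evict scripts unused for an hour
        eaccelerator.shm_prune_period="60" ; retry pruning at most once a minute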

    Read the article

  • Redirect visitors to a "site in maintenance" page?

    - by serhio
    My site is in maintenance (under construction). How do I redirect visitors to a single "site in maintenance" page? I have heard about app_offline.htm for ASP.NET; is there something similar for PHP? I want every page on "mysite.com" to be redirected to "mysite.com/maintenance.php", with the minimum of modification to the existing site pages, ideally none. Apache version 2.2.15, PHP version 5.2.13.
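
    For Apache 2.2 a mod_rewrite rule is the usual approach, since it needs no changes to the site's pages. A minimal sketch, assuming mod_rewrite is enabled and the rule goes in the site root's .htaccess (the file location is an assumption):

        RewriteEngine On
        # let the maintenance page itself through
        RewriteCond %{REQUEST_URI} !^/maintenance\.php$
        # send everything else to the maintenance page
        RewriteRule ^ /maintenance.php [R=302,L]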

    Read the article

  • Record browsing history

    - by nc3b
    How can I record everything I browse so that, ideally, I could later re-surf the same pages without internet access? For instance, if I go to http://www.example.com/example.html, I would like to be able to view the same page later exactly as it first appeared, but without reconnecting to www.example.com. Thank you in advance.
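
    This is not quite a recording of live browsing, but for saving a given page with everything needed to view it offline, wget can do it; a sketch using the URL from the question:

        # fetch the page plus the images/CSS/JS it references,
        # rewriting links so the local copy works without a network
        wget --page-requisites --convert-links http://www.example.com/example.html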

    Read the article

  • What are some "must have" Windows programs?

    - by hmemcpy
    Inspired by this question: what essential third-party software do you have on a Windows machine? One per answer please, and if you can, provide the download/site link. Notice: check the other pages before you propose a new answer, as your program might have been mentioned already. If so, vote for that answer instead of writing a new one.

    Read the article

  • SharePoint Search Center issue

    - by George2
    Hello everyone, I am using SharePoint Server 2007 with the collaboration portal template on Windows Server 2008. The default search address for a site points to /SearchCenter/Pages/Results.aspx. Any ideas how to change this to some other address? Thanks in advance, George

    Read the article

  • Log parser for ftp server

    - by Sergei
    Can anyone suggest good log-reporting software for ProFTPD? I am looking for something at least as good as http://xferlogdb.sourceforge.net, where the log is fed into a database and dynamic web pages are built to retrieve historical data and statistics per user, time period, and so on. Xferlogdb is very helpful, but unfortunately its latest release is from 2004.

    Read the article

  • SPS 2007 backup webparts etc.

    - by elhombre
    Hi all, I would like to back up my whole SharePoint 2007 setup. But as I read at http://searchwinit.techtarget.com/generic/0,295582,sid1_gci1319629,00.html, SharePoint isn't able to back up all content for a disaster restore. The following can't be backed up:

    - Third-party or custom Web parts
    - SharePoint site definitions and XML files
    - SharePoint .aspx template pages
    - SharePoint script files

    Now I want to know how I can back up these items, especially the web parts!

    Read the article

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago, and I have just started having a strange problem in the last few weeks. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears not to finish serving all resources for that page: Firefox shows the "Transferring data from servername..." message and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page renders except for some image or similar resource. (Not sure why Firebug doesn't show this.)

    It doesn't happen on every page request. For requests where most of the resources are cached in my browser, or for pages that are very light, there is no problem. However, if I "hard" refresh the page I will hit the problem, probably because all page resources are requested at once.

    Does anyone know what this could be? It is strange that it has only just started happening, and I did not make any changes to my system that I am aware of. I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.

    UPDATE: I have been doing some more tests, serving the most basic of pages, just a plain HTML file:

        <html>
          <body>
            <h1>testing</h1>
          </body>
        </html>

    If I request this page multiple times in a row, AND each request occurs immediately after the previous one has completed, then 50% of the time the request times out. However, if I put a 1-2 second gap between requests, there is no problem. This matches what I observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested in a short span of time, and this appears to trigger the problem.

    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is as if the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server: the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to clear them.
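
    A repeatable back-to-back request loop makes this easier to measure than a browser. A sketch with curl, assuming a Unix-style shell (e.g. Cygwin) is available on the XP machine, and using a hypothetical test.html like the file above:

        # 20 requests with no gap; prints HTTP status and total time for each
        for i in $(seq 1 20); do
          curl -s -o /dev/null -w '%{http_code} %{time_total}\n' http://localhost/test.html
        done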

    Read the article

  • GlassFish docroot internationalization

    - by Mr.J4mes
    At the moment, I am using the Apache web server to redirect all HTTP requests to port 8080, where they are served by the GlassFish app server. Just like Apache, GlassFish has a docroot folder for storing static pages. I've googled for a while, but I could not figure out whether there is a way to set up internationalization for GlassFish's docroot. I'd be very grateful for a hint or a link to a tutorial on this matter.

    Read the article

  • Pasting images into gmail from clipboard, shows fine when sent, but arrives as text

    - by John Robertson
    I have had consistent problems with pasting images into Gmail. I use Firefox (just in case that is relevant, though I wouldn't expect so). The image displays fine as I write the message, but when it arrives at my family member's Gmail account it is displayed as many pages of text in some base64-encoded form, beginning with: img src="data:image/png;base64,iVBORw0KGgoAAA..... On the receiving end I cannot get the images to display properly.

    Read the article
