Search Results

Search found 7217 results on 289 pages for 'jboss cache'.

Page 109/289 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with Apache in the background (on the same server). Everything works except one page, which I cannot open: I get the error "The request or reply is too large." My cache.log contains:

        2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
        2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
        2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
        2010/12/09 15:29:03| ctx: exit level 0

    In my squid.conf I disabled the limits on request and reply size, without success:

        reply_body_max_size 0 allow all
        request_body_max_size 0

    Does anyone know why this doesn't work? Thank you very much. Squid version:

        Squid Cache: Version 2.7.STABLE3
        configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
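
    A note on the directives above: reply_body_max_size and request_body_max_size govern message bodies, while the log complains about headers ("HTTP header too large"), so raising the body limits cannot help. A hedged sketch of the header-size knobs (run as root; the 64 KB value is an assumption, and reply_header_max_size may only be available from Squid 3.x onward, so check the parse output before relying on it):

        cat >> /etc/squid/squid.conf <<'EOF'
        request_header_max_size 64 KB
        reply_header_max_size 64 KB
        EOF
        squid -k parse && squid -k reconfigure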

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have a HP DL185 G6 Server (12 disk model) with the following spec: Quad Core Xeon 2.27GHz, 6GB RAM, HP P212 RAID controller with battery backup, 2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system), 4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space). The operating system is Ubuntu Server 9.10. Both arrays have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below:

        sudo hdparm -tT /dev/cciss/c0d1p1
        /dev/cciss/c0d1p1:
         Timing cached reads:   15284 MB in 2.00 seconds = 7650.18 MB/sec
         Timing buffered disk reads:  74 MB in 3.02 seconds = 24.53 MB/sec

    For comparison, the RAID-1 array performs as follows:

        sudo hdparm -tT /dev/cciss/c0d0p1
        /dev/cciss/c0d0p1:
         Timing cached reads:   15652 MB in 2.00 seconds = 7834.26 MB/sec
         Timing buffered disk reads:  492 MB in 3.01 seconds = 163.46 MB/sec

    We thought this was because, with no battery, the read/write cache was disabled. We have since bought and installed the battery backup and used the HP bootable CD to set the cache ratio to 50% read / 50% write and to confirm that the cache is enabled on the drives and the controller. Is there something I'm missing?
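
    Besides the controller cache, one cheap thing to rule out for poor sequential reads on the RAID-5 device is the block-layer readahead, which the hdparm -tT buffered-read figure is sensitive to. A hedged sketch using the device path from the question (the 4096-sector value is an assumption to experiment with, run as root):

        blockdev --getra /dev/cciss/c0d1        # current readahead in 512-byte sectors
        blockdev --setra 4096 /dev/cciss/c0d1   # try a larger value
        hdparm -tT /dev/cciss/c0d1p1            # re-run the benchmark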

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns (if any) I should have with what I am seeing. Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but it still seems to lose about 10MB/hour. If I drop the page caches using echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the Unicorns, and the memory loss pattern begins again over the following hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoization, and I'm assuming Ruby/Rails/Unicorn keeps its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"
        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location

          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect

          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end
          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, where the system will empty the caches by itself when and as it needs more memory, without me manually running that drop_caches command? Basically, is this normal, expected behaviour? tia
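
    One way to tell a genuine leak apart from the kernel simply using spare RAM as page cache is to watch the workers' resident set size (RSS) instead of the "free" column: if per-worker RSS stays flat while free memory shrinks, it is page cache and nothing needs to be done; if RSS climbs steadily, it is the processes themselves. A minimal sketch, assuming the workers show up as "ruby" processes with "unicorn" in their command line:

        # Per-worker resident memory (the process-name pattern is an assumption)
        ps -o pid,rss,etime,cmd -C ruby | grep unicorn
        # Total RSS of all ruby processes, in kB
        ps -o rss= -C ruby | awk '{sum += $1} END {print sum " kB"}'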

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. Most things are working well, but the part about TABLE CACHE makes no sense:

        TABLE CACHE
        Current table_cache value = 900 tables.
        You have a total of 0 tables
        You have 900 open tables.
        Current table_cache hit rate is 1% , while 100% of your table cache is in use.
        You should probably increase your table_cache

    When I do SHOW STATUS; I get the following table-related numbers:

        Open_tables = 900
        Opened_tables = 0

    It seems like something is going wrong. I have some extra memory I could put towards increasing the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits on them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:

        TEMP TABLES
        Current max_heap_table_size = 90 M
        Current tmp_table_size = 90 M
        Of 11032358 temp tables, 40% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables.
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your ratio of on disk temp tables.
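
    The "total of 0 tables" line suggests the tuning script failed to count the tables (for example, it could not list the schemas), which would also wreck its hit-rate arithmetic, rather than the cache itself misbehaving. The raw counters can be read directly; a minimal sketch with placeholder credentials that shows the numbers the hit rate is derived from:

        # Open_tables / Opened_tables and the configured cache size
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Open%tables'; SHOW GLOBAL VARIABLES LIKE 'table%cache%'"
        # Opened_tables growing quickly relative to uptime is the real sign table_cache is too small
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Uptime'"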

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches, shared between the iSCSI SAN and the LAN/WAN. The S60 switches are extremely powerful: they have 1.25 GB of buffer cache and < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should have at least 2 internal switches for the SAN and 2 switches for the LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is: would it be more cost effective to separate the SAN from the LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? Following on from that linked question about the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? In other words, would 36 MB work out to roughly 768kB per port, or is it 36MB per port? With 3 to 6 servers, how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 websockets (a high number of persistent connections). The amount of data being sent is small; data sent between client and server isn't broadcast (this is not a chat/IM service). We will be doing some database reporting too (CSV export, sums, some joins). We are a small business on a budget; we'd probably be able to spend no more than $20k on switches in total (2 or 4).

    Read the article

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I can create a record in DNS that will point the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest, I'm not even sure if this is possible. So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice on whether this is possible, and where to start, is appreciated. The solution (thanks Brent):

        - Create a new forward lookup zone for example.com
        - Create an empty A record pointing to the IP of the webserver you are targeting
        - If www is needed, create an A record with Name: www and the IP of your webserver
        - For sub domains, repeat the process but with names such as sub or www.sub (and the IP of your webserver)

    Be aware of the DNS cache while you are in this process. Things can take time, or do the following: right-click the server and choose "Clear Cache", and in CMD run ipconfig /flushdns (to flush the client cache).

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible for squid/adzap not to cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
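
    A hedged note on the fix shown above: always_direct controls whether requests go straight to the origin server instead of via a cache peer; it does not, by itself, stop Squid from storing the replies. If the goal is that objects from the domain are never cached at all, the "cache" access list is the directive aimed at that. A minimal sketch (run as root; the domain is taken from the question):

        cat >> /etc/squid/squid.conf <<'EOF'
        # Never store objects from cnn.com or its subdomains
        acl nocache dstdomain .cnn.com
        cache deny nocache
        EOF
        squid -k parse && squid -k reconfigure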

    Read the article

  • external drive and CentOS - Reset high speed USB device number

    - by Phil
    I have 2 external drives (3TB) and both will not work with my CentOS box (kernel 2.6.32-279.9.1.el6.i686). Tested them in Windows (different machine): no problems. dmesg reports:

        usb 2-2: new high speed USB device number 3 using ehci_hcd
        usb 2-2: New USB device found, idVendor=2109, idProduct=0700
        usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-2: Product: USB 3.0 SATA Bridge
        usb 2-2: Manufacturer: VIA Labs, Inc.
        usb 2-2: SerialNumber: 0000000000006121
        usb 2-2: configuration #1 chosen from 1 choice
        scsi6 : SCSI emulation for USB Mass Storage devices
        usb-storage: device found at 3
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access     ST3000DM 001-9YN166       CC4B PQ: 0 ANSI: 2
        sd 6:0:0:0: Attached scsi generic sg3 type 0
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] 5860533165 512-byte logical blocks: (3.00 TB/2.72 TiB)
        sd 6:0:0:0: [sdd] Write Protect is off
        sd 6:0:0:0: [sdd] Mode Sense: 00 06 00 00
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sdd: sdd1
        sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
        sd 6:0:0:0: [sdd] Assuming drive cache: write through
        sd 6:0:0:0: [sdd] Attached SCSI disk

    Trying to use cfdisk / fdisk / gdisk, or even fdisk -l, results in the program hanging, and dmesg reports:

        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd
        usb 2-2: reset high speed USB device number 3 using ehci_hcd

    I have the same 2 drives physically installed in the computer via SATA. Any ideas?
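
    When a USB-attached disk is detected fine but then loops on "reset high speed USB device" as soon as the partition table is read, one commonly tried workaround is to shrink the maximum transfer size the usb-storage driver issues to that device; if the resets stop, suspicion falls on the USB 3.0 bridge rather than the drives. A minimal sketch, assuming the disk is still /dev/sdd as in the dmesg output above (run as root, the value is illustrative):

        cat /sys/block/sdd/device/max_sectors       # current limit
        echo 64 > /sys/block/sdd/device/max_sectors
        fdisk -l /dev/sdd                           # retry the read that used to hang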

    Read the article

  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them: to avoid caching, it is best practice to use

        <meta http-equiv="pragma" content="no-cache" />
        <meta http-equiv="cache-control" content="no-cache" />

    and this should work across all browsers. Their reply was: "We need to refresh javascript at runtime, this will not help us. Any more ideas?" I replied that I was unsure what they meant by "refresh javascript at runtime". If you are using Ajax, browser caching can affect the XMLHttpRequest open method; adding these meta tags to the source has fixed this for me in the past. Browser caching only caches resources, so it should have no effect on site scripting, and these meta tags will bypass browser caching. This is a reasonable request, isn't it?
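
    If the provider can change the server rather than every visitor's browser, the same no-caching behaviour (including for JavaScript fetched at runtime) can be expressed as HTTP response headers, which browsers honour more reliably than meta tags. A minimal sketch for Apache, assuming mod_headers is enabled and .htaccess overrides are allowed; the path and file pattern are illustrative:

        cat >> /var/www/app/.htaccess <<'EOF'
        <FilesMatch "\.(js|json)$">
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </FilesMatch>
        EOF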

    Read the article

  • Cannot mount my USB disk -- neither Ubuntu nor Windows [dmesg included]

    - by EthanZ6174
    First, here is my dmesg | tail output right after I plugged in the disk:

        $ dmesg | tail
        [ 2578.697224] scsi 6:0:0:0: Direct-Access     HP       v100w            PMAP PQ: 0 ANSI: 0 CCS
        [ 2578.698322] sd 6:0:0:0: Attached scsi generic sg2 type 0
        [ 2578.916464] sd 6:0:0:0: [sdb] 3921920 512-byte logical blocks: (2.00 GB/1.87 GiB)
        [ 2578.916950] sd 6:0:0:0: [sdb] Write Protect is off
        [ 2578.916956] sd 6:0:0:0: [sdb] Mode Sense: 23 00 00 00
        [ 2578.916961] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922460] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.922470] sdb:
        [ 2578.969570] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2578.969578] sd 6:0:0:0: [sdb] Attached SCSI removable disk

    There is nothing after 'sdb:'... In the meantime, lsusb shows:

        $ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 004: ID 03f0:3207 Hewlett-Packard
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 002: ID 045e:0737 Microsoft Corp.
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    So... can anyone help me? What's wrong with my USB disk? THX
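
    The dmesg output stopping at "sdb:" with no partitions listed often means the stick carries a filesystem written directly onto the device (no partition table) rather than being dead. A couple of hedged checks, assuming the device is still /dev/sdb:

        sudo blkid /dev/sdb      # reports a filesystem type if one starts at sector 0
        sudo file -s /dev/sdb    # same idea, different tool
        # if a filesystem was found, mount the whole device rather than a partition
        sudo mkdir -p /mnt/usb && sudo mount /dev/sdb /mnt/usb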

    Read the article

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
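
    A frequent cause of "works in the browser, 500 from wget" is the application branching on something the browser sends and wget does not: the User-Agent, an Accept or Accept-Language header, or a session cookie. A hedged way to narrow it down is to replay the request with browser-like headers and see whether the status changes (the header values below are illustrative; the URL placeholder is kept from the debug output):

        wget --debug \
             --user-agent="Mozilla/5.0 (X11; Linux x86_64)" \
             --header="Accept: text/html,application/xhtml+xml" \
             --header="Accept-Language: de,en;q=0.8" \
             "https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>"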

    Read the article

  • Why am I unable to mount my USB drive (unknown partition table)?

    - by Pat
    I'm a real newbie to Linux. Anyway, the problem is that my USB drive doesn't get recognized anymore, which is really annoying because I need information from it. I've read like a zillion threads on how to manually mount it, but I really can't get it to work. I hope it's just some easy, stupid problem where any of you could help me out quickly. Here is the syslog:

        kernel: [ 6872.420125] usb 2-2: new high-speed USB device number 11 using ehci_hcd
        mtp-probe: checking bus 2, device 11: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
        kernel: [ 6872.556295] scsi8 : usb-storage 2-2:1.0
        mtp-probe: bus: 2, device: 11 was not an MTP device
        kernel: [ 6873.558081] scsi 8:0:0:0: Direct-Access     SanDisk  Cruzer           8.01 PQ: 0 ANSI: 0 CCS
        kernel: [ 6873.559964] sd 8:0:0:0: Attached scsi generic sg3 type 0
        kernel: [ 6873.562833] sd 8:0:0:0: [sdc] 15682559 512-byte logical blocks: (8.02 GB/7.47 GiB)
        kernel: [ 6873.564867] sd 8:0:0:0: [sdc] Write Protect is off
        kernel: [ 6873.564878] sd 8:0:0:0: [sdc] Mode Sense: 45 00 00 08
        kernel: [ 6873.565485] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.565495] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.568377] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.568387] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.574330] sdc: unknown partition table
        kernel: [ 6873.576853] sd 8:0:0:0: [sdc] No Caching mode page present
        kernel: [ 6873.576863] sd 8:0:0:0: [sdc] Assuming drive cache: write through
        kernel: [ 6873.576871] sd 8:0:0:0: [sdc] Attached SCSI removable disk

    Thanks in advance
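
    Since the goal is to get data off the stick, the usual next steps are to check whether a filesystem sits directly on the raw device (some sticks ship without a partition table) and, failing that, to scan for a damaged table. A minimal sketch, assuming the Cruzer is still /dev/sdc; testdisk is a separate package on most distributions:

        sudo file -s /dev/sdc            # names a filesystem type if one starts at sector 0
        sudo mount -o ro /dev/sdc /mnt   # read-only mount if a filesystem was found
        sudo testdisk /dev/sdc           # interactive scan for a lost partition table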

    Read the article

  • Does image block (firefox addon) save internet bandwidth usage?

    - by dkjain
    Does Image Block save internet bandwidth? I have a data-capped plan from my ISP (5GB at 2mbps and thereafter 256 kbps, per month). I doubt that this addon, or other similar addons, actually saves bandwidth. Here is my point of view; please correct it if it is wrong. When a request is sent to the server, the server sends out whatever page it's requested to serve, with all its text and images etc. So essentially my ISP has made his pipe available for the data to reach me, and thus he would count those bytes under my data plan. When the data arrives it's all first stored in my browser cache (folder) area, which means all the data has actually been received by me/my computer using my ISP's pipe. The browser then fetches that data from the cache and displays it. By hitting the stop button or blocking images via the addon, I am just choosing not to display the data, which would remain in the cache or eventually be discarded if still on the network pipe after a timeout limit. The point is that the data request has been completed by the ISP, so the data would be metered, and thus using addons such as Image Block, or hitting the stop button while a page is loading, does not in any way save internet bandwidth. Your comments plz....... Regards dk.
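
    The premise can be measured rather than argued: the interface byte counters show whether blocked images were ever transferred. A minimal sketch for a Linux client, assuming the interface is eth0; note the counter, load a test page with and without the add-on, and compare the deltas:

        cat /sys/class/net/eth0/statistics/rx_bytes   # note the value
        # ... load the test page ...
        cat /sys/class/net/eth0/statistics/rx_bytes   # subtract to get bytes received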

    Read the article

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    Hi, I have a Windows Server 2008 system which has 12GB of RAM. If I list all processes in Task Manager and sum the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5GB that should be "in use". However, Task Manager reports via the "Performance" tab that this server has 11GB in use. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the appliances, but I'm struggling to find out which one. The server's memory keeps filling up and eventually forces us to reboot the device to clear it. I've been reading up on how RAM assignments work on Windows Server:

        RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
        What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
        Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html

    But I fear I'm stuck without ideas at the moment.

    Read the article

  • One single page showing 3 requests (also printing the headers)

    - by Korcholis
    Someone in my studio designed a webpage some years ago, and now the client has decided to change servers (he moved to a Linux Apache server running Gen2 SMP, 64 bits, PHP version 5.3.8, standard MySQL version 5). It suddenly started to do weird things. When clicking a link that requires login, the page redirects you to the login page using PHP's header() function. Curiously, the page shows this:

        OK
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [no address given] and inform them of the time the error
        occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.

        HTTP/1.1 200 OK
        Date: Mon, 15 Oct 2012 17:27:32 GMT
        Server: Apache/2.2.22 (Unix) FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.3.8
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Keep-Alive: timeout=5, max=399
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        232c

    Then the page itself, and then, another header:

        0

        1f4
        OK
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [no address given] and inform them of the time the error
        occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.

        0

    What's most intriguing is that if you refresh the page or hit enter on the URL, it loads correctly. I've been checking the logs, and the only complaint is about a missing favicon. I also checked the .htaccess, and everything was correct (RewriteBase was / as intended, and the only other thing there is a rule that rewrites ^en/ requests to request?lang=en). Has anyone faced something like this? Edit: IE doesn't trigger these two headers. This is getting weirder.
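
    The stray "232c", "1f4" and "0" lines look like chunked transfer-encoding sizes, and the "OK ... internal error" text looks like an Apache error document, so the symptom is raw protocol data ending up inside the body, typically when output is emitted around the header() redirect. A hedged way to see exactly what the server sends, independent of any browser, is to capture the headers and the undecoded body (the URL below is a placeholder):

        curl -sv -o /dev/null http://example.com/page-that-redirects     # headers only
        curl -s --raw http://example.com/page-that-redirects | head -40  # body with chunk sizes left intact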

    Read the article

  • How do I prevent lighttpd from caching static files, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a directory that I regularly update. This changes the file content (and file size) as well as the modification date, but not the filenames. When I access the files over HTTP, the updates are not taken into account and lighty serves the old file. If I manually rename the file to something different, lighttpd returns a 404 error, and if I rename the file back, I get the correct updated version. It seems like lighty is using some kind of caching mechanism of its own (which is fine) to return static files. Unfortunately, that mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. It returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
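
    The stale Last-Modified header is consistent with lighttpd's stat cache, which remembers file metadata and, depending on the engine, may not notice files rewritten in place. A minimal sketch of what to try in lighttpd.conf (run as root; "simple" re-checks periodically, "disable" turns the cache off entirely):

        cat >> /etc/lighttpd/lighttpd.conf <<'EOF'
        server.stat-cache-engine = "disable"
        EOF
        /etc/init.d/lighttpd restart   # or whatever restart command your distro uses

    It is also worth updating the images atomically (write to a temporary name, then rename over the original) so a request never sees a half-written file.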

    Read the article

  • Java EE talks at JAX Conf

    - by arungupta
    JAX Conf is starting in San Jose today and there are several talks on Java EE there, on Wednesday and Thursday:

        - Java Persistence API 2.0 with EclipseLink
        - RESTful Services with Java EE
        - Case Study: Functional programming in Scala with CDI
        - GlassFish 3.1: Deploying your Java EE 6 Applications
        - The future of Java Enterprise Testing
        - Forge new ground in Rapid Enterprise Development
        - The Java EE 7 Platform: Developing for the Cloud (Keynote)
        - Exploring Java EE 6 for the Enterprise Developer
        - JBoss Day
        - JSF Summit
        - CDI Tutorial

    And many more... Check out the complete schedule and see ya there!

    Read the article

  • What are the design decisions involved in choosing how to expose a Java web application?

    - by Gary Rowe
    There are many ways to expose a Java web application to the consumer: application container (JBoss etc), servlet container (Tomcat etc), OSGi (Knopflerfish etc), self-executable WAR (Winstone etc) and so on. Are there any clear considerations where one approach should be favoured over another? As an example, could a collection of self-executable WARs running as raw Unix processes outperform the same applications deployed within Tomcat taking into account administration and scalability concerns?

    Read the article

  • links for 2011-03-17

    - by Bob Rhubart
    Siba Prasad: Oracle Database on Amazon RDS
    Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon)

    LIVE WEBCAST March 24 2pm PT - Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog)
    Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux)

    Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT
    Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approach to comprehensive SOA. (tags: oracle soa bpm)

    Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy)
    "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture)

    Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse)
    "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old fashioned market research to provide vendors and customers alike an unvarnished and insightful snap shot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence)

    Martin Bach: Troubleshooting Grid Infrastructure startup
    Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac)

    Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog)
    Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration)

    VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff
    "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysqql)

    Griffiths Waite Blog: BPM 11g PS3
    GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa)

    The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans
    Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic)

    Setting-up a High Available Tuned SOA Environment | Middleware Magic (tags: ping.fm)

    How to Configure Weblogic Messaging Bridge with JBoss | Middleware Magic (tags: ping.fm Weblogic JBoss)

    Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)

    Read the article

  • Java environment book recommendations

    - by ipavlic
    I come from a C# background and would like to learn Java. Programming, and Java as a language, are not a problem. What is bewildering to me is the sheer number of choices in the "Java environment" - Ivy, Maven, Ant, JAXB, Glassfish, JBoss, Struts, Spring are just some of the names that I keep seeing. I am looking for a "who is who" and "who works together" beginner's guide. Is there such a book? Or something similar?

    Read the article

  • Excellent JAX-RS 2 Article on JavaLobby

    - by reza_rahman
    JAX-RS 2 is a key part of Java EE 7. It is currently at the early draft stage, and this is a great time to provide feedback. With this goal in mind, well-respected Java EE veteran Bill Burke of JBoss wrote an excellent article on DZone/JavaLobby giving an overview of what's in JAX-RS 2 so far. He discusses:

        - The client API
        - Asynchronous processing
        - Filters and entity interceptors

    The full article is posted here. Enjoy!

    Read the article
