Search Results

Search found 1310 results on 53 pages for '600'.

Page 11/53

  • nginx proxypass content 404s when adding caching location block

    - by Thermionix
    Below is my nginx conf - the location block for adding expires max to content is causing issues with content from the /internal proxied sites.

    nginx error log:

        2011/11/22 15:51:23 [error] 22124#0: *2 open() "/var/www/internal/static/javascripts/lib.js" failed (2: No such file or directory), client: 127.0.0.1, server: example.com, request: "GET /internal/static/javascripts/lib.js?0.6.11RC1 HTTP/1.1", host: "example.com", referrer: "https://example.com/internal/"

    browser error:

        lib.js Failed to load resource: the server responded with a status of 404 (Not Found)

    Commenting out the expires max location block allows the proxied sites to work as intended.

    Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
        }
        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            expires max;
        }
        # hide protected files
        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }
        location ~ \.php$ {
            fastcgi_index index.php;
            include fastcgi_params;
            if (-f $request_filename) {
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
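
    A hedged reading of the error: nginx checks regular-expression locations after prefix locations and lets the regex win, so a request such as /internal/static/javascripts/lib.js matches the "expires max" regex block in www.conf and is served from the filesystem (/var/www/internal/...) instead of being proxied. The sketch below, which mirrors the question's own paths and is illustrative rather than a verified fix, marks the /internal prefix with ^~ so regex locations are no longer considered for those URIs:

        # proxy.conf -- sketch: "^~" makes the longest-matching prefix win
        # outright, so the regex "expires max" block no longer captures
        # /internal/... requests and they reach the backend proxy instead.
        location ^~ /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }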

    Read the article

  • Recommendation for a redundant 60V DC Powersupply

    - by Lairsdragon
    We have some telco equipment in our data center which we were given by our telco. What they didn't provide is a redundant power supply, so we are struggling with outages of this equipment. What I am searching for is a redundant power supply for 60 V / 600 W: 60 Volt DC output, 600 Watts rated power, 2 x 220 V inputs with galvanic separation, rack mountable. Any suggestions?

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are: 45984 61312 91968
        new values are: 183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache:

        The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad. The CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However we set this to run once a day to keep memory leaks at bay (though we've never had one).

    While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check:

        gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes.

    We adjusted elasticity from 8 to 4, in the hopes of the route cache pruning itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune, for a high traffic HAProxy instance?
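
    For reference, the tunables discussed above are ordinary sysctls, so a change such as the elasticity adjustment described in the question can be tried at runtime and then persisted. The values below are simply the ones quoted in the question, not a recommendation:

        # Apply at runtime (values taken from the question itself)
        sysctl -w net.ipv4.route.gc_elasticity=4
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"

        # Inspect the current route-cache settings mentioned above
        sysctl -a | grep 'net\.ipv4\.route'

        # Persist across reboots by adding the same keys to /etc/sysctl.conf
        echo "net.ipv4.route.gc_elasticity = 4" >> /etc/sysctl.conf
        echo "net.ipv4.tcp_mem = 183936 245248 367872" >> /etc/sysctl.conf
        sysctl -p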

    Read the article

  • iSCSI targets don't appear after rescan

    - by asmr
    Hi everybody, I have an Equallogic 4000PS SAN box to which I have connected 2 x ESX 4.0.0 hosts sharing the LUNs. I have an older ESX 3.5 host which I want to set up to share the same LUNs. I have set up a VMkernel port with 2 NICs attached to the iSCSI switch. When I perform an iSCSI software adapter rescan, it takes a long time and it doesn't find the targets. In the ESX 3.5 host's log file I find these messages:

        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.308 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.309 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.309 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
        Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'.Skipping the path.

    Any ideas what the problem is?

    Read the article

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but get redirected to an http request with a trailing slash - which ends in connection refused - and I only want these services to be available through https.

    So if a link is to https://example.com/internal/errorlogs:

        in a browser, loading https://example.com/internal/errorlogs gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/)
        if I manually append the trailing slash, https://example.com/internal/errorlogs/ loads

    I've tried with varied trailing forward slashes appended to the proxy path and location in proxy.conf to no effect, and have also added server_name_in_redirect off;. This happens on more than one app under nginx, and works in an Apache reverse proxy.

    Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 443;
            server_name example.com;
            server_name_in_redirect off;
            include proxy.conf;
            ssl on;
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto https;

    curl output

        -$ curl -I -k https://example.com/internal/errorlogs/
        HTTP/1.1 200 OK
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:07 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 14327

        -$ curl -I -k https://example.com/internal/errorlogs
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:11 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 127
        Location: http://example.com/internal/errorlogs/
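
    One hedged reading of the curl output above: the 301 that adds the trailing slash is generated by the backend over plain HTTP, and with proxy_redirect off nginx passes its http:// Location header straight through to the client. A sketch of rewriting that header back to https, using the hostname from the question; this is illustrative, not a confirmed fix:

        # proxy.conf -- sketch: rewrite backend redirects to https
        # (assumes the blanket "proxy_redirect off;" in proxy.inc is
        # removed or overridden for this location)
        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
            proxy_redirect http://example.com/ https://example.com/;
            proxy_redirect http://localhost:8081/ https://example.com/;
        }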

    Read the article

  • Increasing resolution in FreeNX headless server

    - by syrenity
    Hi. I'm running a FreeNX server on a headless CentOS machine, and the resolution seems to be locked at 800 x 600. I tried editing the xorg.conf file, but without success so far. Has anyone succeeded in running FreeNX remotely at 1280 x 1024 resolution, and can you post a working configuration? Thanks! P.S.: Here is a pastebin of my current xorg.conf file: http://pastie.org/835308
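
    For what it's worth, NX sessions are normally rendered by nxagent into its own virtual framebuffer, so on a headless box the session size is typically taken from the NX client's display settings (e.g. a custom 1280x1024 geometry) rather than from the server's xorg.conf. If the local X configuration is nonetheless the thing being edited, a minimal Screen section sketch would look like the following; identifiers are placeholders and this is illustrative only:

        Section "Screen"
            Identifier   "Screen0"
            Device       "Card0"
            Monitor      "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth    24
                Modes    "1280x1024" "1024x768" "800x600"
                Virtual  1280 1024
            EndSubSection
        EndSection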

    Read the article

  • Which laptop should I get for web design / development

    - by Mikey1010
    Hi, I need to buy a laptop but I'm clueless, really - I need a laptop that will run the usual software for web work (Photoshop, Fireworks, Dreamweaver, Flash, etc.) but also run PHP and .NET, maybe with most programs open at the same time. My budget is between £400-£600 ($600-$900). Please provide links if possible! Any help is very much appreciated.

    Read the article

  • Why is my cron daemon being killed every few minutes? OpenVZ?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 - 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason:

        nanosleep({56, 0}, 0x7fffa7184c80) = 0
        stat("crontabs", {st_mode=S_IFDIR|S_ISVTX|0730, st_size=4096, ...}) = 0
        stat("/etc/crontab", {st_mode=S_IFREG|0644, st_size=1100, ...}) = 0
        stat("/etc/cron.d", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
        stat("/etc/cron.d/php5", {st_mode=S_IFREG|0644, st_size=475, ...}) = 0
        stat("/etc/cron.d/anacron", {st_mode=S_IFREG|0644, st_size=244, ...}) = 0
        rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
        rt_sigaction(SIGCHLD, NULL, {0x4036f0, [CHLD], SA_RESTORER|SA_RESTART, 0x2b0e8465f230}, 8) = 0
        rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
        nanosleep({60, 0}, <unfinished ...>
        +++ killed by SIGKILL +++

    There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has about 500 MB of free memory, and cat /proc/loadavg returns 0.10 0.21 0.45, so resources shouldn't be the issue. I also tried removing and reinstalling the cron package using apt-get. What else should I check? How do I find out what's killing my crond?

    Edit: I'm on a virtual machine under OpenVZ (and as such, I have no swap). With cron running, free -m reports:

                         total       used       free     shared    buffers     cached
        Mem:              1024        465        558          0          0          0
        -/+ buffers/cache:            465        558
        Swap:                0          0          0

    My OpenVZ user beancounters via cat /proc/user_beancounters:

        Version: 2.5
        uid      resource         held      maxheld    barrier      limit               failcnt
        172087:  kmemsize         8275718   25561636   51200000     51200000            0
                 lockedpages      0         968        2048         2048                0
                 privvmpages      113442    266465     262200       262200              3740757
                 shmpages         788       4004       128000       128000              0
                 dummy            0         0          0            0                   0
                 numproc          39        98         600          600                 0
                 physpages        50521     208434     0            9223372036854775807 0
                 vmguarpages      0         0          512000       512000              0
                 oomguarpages     50521     208447     512000       512000              0
                 numtcpsock       7         323        4096         4096                0
                 numflock         7         64         2048         2048                0
                 numpty           1         4          32           32                  0
                 numsiginfo       0         23         1024         1024                0
                 tcpsndbuf        137984    17878480   20480000     20480000            0
                 tcprcvbuf        114688    6983504    20480000     20480000            0
                 othersockbuf     162960    1074440    20480000     20480000            0
                 dgramrcvbuf      0         24208      10240000     10240000            0
                 numothersock     101       353        2048         2048                0
                 dcachesize       459171    747444     10240000     10240000            0
                 numfile          1010      4221       50000        50000               0
                 dummy            0         0          0            0                   0
                 dummy            0         0          0            0                   0
                 dummy            0         0          0            0                   0
                 numiptent        39        424        2048         2048                0
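
    A hedged observation on the beancounters above: privvmpages shows a very large failcnt, which in OpenVZ means the container has repeatedly hit its private-memory barrier, and allocation failures at that level can result in processes being killed from outside the container (a SIGKILL that never shows up in the guest's own logs). A small sketch for watching which counters are failing; the awk filter is illustrative:

        # Show only beancounter rows whose failcnt (last column) is non-zero
        awk 'NF > 1 && $NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters

        # Re-check after the next kill to see whether failcnt grew
        grep -E 'privvmpages|oomguarpages' /proc/user_beancounters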

    Read the article

  • Preventing Gigabit Loss due to printers [on hold]

    - by Charles
    How can I maintain Gigabit Ethernet integrity given this situation? What I have to work with:

        AC router w/ 4-port gigabit
        N-600 router w/ 4-port gigabit
        Switch w/ 8-port gigabit
        All PCs have gigabit NICs
        4-port PoE injector at gigabit (all wiring = Cat 6)

    Problem devices:

        Printer @ 10/100 (built-in)
        Printer @ 10/100 (built-in)
        Scanner @ 10/100 (built-in)
        Printer @ 10/100 (built-in)

    What device (not setting up a PC) or configuration would I have to incorporate to keep gigabit going given those devices? Wild shot: is there such a thing as a switch that can accommodate this? Thank you all.

    Read the article

  • Does extra hard drive cache make a difference for streaming video?

    - by johnny
    I am looking at the following two drives for a RAID device, which will be streaming normal things but also a lot of video: Seagate Constellation ES.3 ST4000NM0033 - hard drive - 4 TB - SATA-600 TOSHIBA DT01ACA300 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Will the 128 MB cache on the Seagate have an effect in my described scenario, compared to the 64 MB on the Toshiba? If so, what sort of difference can I expect? I'm using a qnap device, if that matters.

    Read the article

  • Apache maximum request number 256?

    - by victor hugo
    I have a very good server running an Apache instance with mod_jk for proxying requests to an application server. I'm doing a load test, and although I'm sending over 600 requests, the status worker keeps showing this:

        256 requests currently being processed, 0 idle workers

    I'm using the prefork MPM:

        <IfModule prefork.c>
            ServerLimit          2048
            StartServers         5
            MinSpareServers      5
            MaxSpareServers      10
            MaxClients           1000
            MaxRequestsPerChild  0
        </IfModule>

    Is there a compiled-in limit of 256 requests in Apache, or what am I missing?
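
    A couple of hedged things worth checking, since the question doesn't show the full build: prefork's compiled-in default for ServerLimit is 256, the <IfModule prefork.c> block is silently ignored if the running binary was actually built with a different MPM, and ServerLimit changes only take effect after a full stop/start rather than a graceful restart. A quick sketch (the binary may be called httpd or apache2 depending on the distribution):

        # Confirm which MPM this Apache was built with / is running
        apachectl -V | grep -i mpm
        httpd -l            # lists compiled-in modules (e.g. prefork.c vs worker.c)

        # ServerLimit changes require a full stop/start, not a graceful restart
        apachectl stop && apachectl start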

    Read the article

  • Tomcat 6 going down after reaching its maximum number of threads

    - by user73628
    Our Tomcat 6.0.29 goes down after reaching its maximum number of threads. I would really appreciate any help with it because it is a production server. Here is part of the catalina.log file:

        INFO: Maximum number of threads (600) created for connector with address null and port 80
        Mar 8, 2011 11:19:37 AM org.apache.coyote.http11.Http11Protocol pause
        INFO: Pausing Coyote HTTP/1.1 on http-80
        Mar 8, 2011 11:19:38 AM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        Mar 8, 2011 11:19:38 AM org.apache.catalina.core.StandardWrapper unload
        INFO: Waiting for 8 instance(s) to be deallocated
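
    If the pool itself is the bottleneck (rather than requests blocking on a slow backend and never freeing threads), the ceiling in the log is the maxThreads attribute on the HTTP connector in conf/server.xml. A hedged sketch with illustrative values only; raising the ceiling will not help if threads are stuck waiting on something downstream:

        <!-- conf/server.xml: sketch, values are illustrative -->
        <Connector port="80" protocol="HTTP/1.1"
                   maxThreads="1024"
                   acceptCount="200"
                   connectionTimeout="20000"
                   maxKeepAliveRequests="100" />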

    Read the article

  • Better graphics or better CPU on a budget laptop?

    - by jones
    Which would have better overall performance in a cheap (~$600) laptop: an Intel Atom 330 with Nvidia Ion, or an Intel Pentium/Celeron with Intel graphics? I don't need 8-hour battery life and will hopefully be using this for programming/web browsing and occasionally light gaming.

    Read the article

  • Excel Data Organization: Array Formulas? Tables? Named Range?

    - by Joe Arasin
    I'm trying to make a huge Excel sheet reasonably maintainable, but it's huge in the "hundred-table-db" direction, rather than the "hundred-thousand-row-table" direction. I want to have a baseline data table that looks something like this:

        | Indicator           | Units      | 2010 | 2015 | 2020 | 2025 | Source     |
        | GDP                 | $Gazillion |  300 |  350 |  400 |  450 | BLS        |
        | Population          | Millions   |  350 |  400 |  450 |  500 | Census     |
        | PetMonkeyPopulation | Thousands  |   50 |   60 |   70 |   80 | SimiansRUs |

    And then be able to have another sheet that looks like:

        |                  | 2010 | 2015 | 2020 | 2025 |
        | MonkeysPerCapita |  .1  |  .2  |  .3  |  .4  |
        | MonkeysPerDollar |  .01 |  .01 |  .01 |  .01 |
        | GDPPerCapita     |  300 |  400 |  450 |  600 |

    Is there some standard way to make this kind of thing maintainable?
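
    One common pattern for this kind of layout is to look values up by indicator name and year instead of by cell address, so the derived sheets keep working when rows are added or reordered. A sketch of a derived-sheet formula; the sheet name "Baseline" and the column layout (Indicator in column A, 2015 in column D) are assumptions, not taken from the question:

        Derived 2015 GDP per capita, looked up by indicator name:
        =INDEX(Baseline!D:D, MATCH("GDP", Baseline!A:A, 0)) / INDEX(Baseline!D:D, MATCH("Population", Baseline!A:A, 0))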

    Read the article

  • A few questions on packet sniffers/analyzers

    - by user37652
    I have a few questions about packet sniffers. I'm using a ZyXEL P-600 series modem and a hub to distribute the internet connection. Can I use a packet sniffer here to determine if a user is downloading something? Can I determine if a user is downloading a file based on the modem alone (the lights blink faster)? Is there an application that I could use on the modem or the hub to limit or avoid direct downloads? Details: OS: Ubuntu 9.10 and Windows 7
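
    Since the connection is shared through a hub (which repeats traffic to every port), a sniffer on any attached machine can typically see the other hosts' traffic. A minimal sketch with tcpdump on the Ubuntu box; the interface name is an assumption:

        # Watch HTTP traffic from every host on the hub (interface name assumed)
        sudo tcpdump -i eth0 -n 'tcp port 80'

        # Rough per-host volume: count packets per source address over a sample
        sudo tcpdump -i eth0 -n -c 1000 | awk '{print $3}' | sort | uniq -c | sort -rn | head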

    Read the article

  • Office scanner recommendation

    - by rodey
    We need to buy a scanner for a user in our company who would be considered a light-to-moderate scanning user. The main requirements are a TWAIN/Kofax driver and an ADF (it MUST have one); it does not need a flatbed, and it should be in the $500-600 range, at most $1000. What 'cha got?

    Read the article

  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass, however several pages load with 404s. The links on the pages have https:// in front, but result in an http request - which ends in a 404 - and I only want these services to be available through https.

    I've tried with varied trailing forward slashes appended to the proxy path and location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks could have caused any conflicts), to no effect.

    So if a link is to https://example.com/sickbeard/errorlogs:

        in a browser, loading https://example.com/sickbeard/errorlogs gives a 404
        in a browser, https://example.com/sickbeard/errorlogs/ loads

    nginx error log:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files:

    proxy.conf

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
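
    A hedged reading of the error log: the failing request was served from /var/www, which suggests it landed on the port-80 server block (the only one here that does not include proxy.conf), i.e. any link that ends up as plain http is handled as a static file instead of being proxied. One illustrative way to keep everything on https is to redirect port 80 wholesale; this is a sketch, not an explanation of why the links drop to http in the first place:

        # sites-enabled/main -- sketch: send all plain-HTTP requests to the
        # HTTPS server block, which is the one that includes proxy.conf
        server {
            listen 80;
            server_name example.com;
            return 301 https://$host$request_uri;
        }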

    Read the article

  • CNAME to another domain fails on some office networks, why?

    - by crashalpha
    Our domain "aspenfasteners.com" is hosted by Volusion. We have CNAME records "find" and "search" which point to site indexing accounts on www.picosearch.com. These addresses fail on SOME private office networks which have their own DNS. We suspect the problem comes from Volusion's own name servers, n2.volusion.com and n3.volusion.com. Volusion support on problems this technical is non-existant. We have tried an NSLOOKUP on find.aspenfasteners.com with level 2 debugging info, and we got the results below. Is it possible that the local DNS is recursing to Volusion's name servers, and that while Volusion DOES return the canonical name, they do NOT resolve the address? Can anybody with expertise in this sort of stuff PLEASE look at the NSLOOKUP below and tell me if we are right, because Volusion is giving me absolutely NO support on this topic. I need proof of where the problem lies. Thanks VERY much! Carlo find.aspenfasteners.com Server: mtl-srm-dbsv-01.fastenerwholesale.com Address: 192.168.0.44 SendRequest(), len 61 HEADER: opcode = QUERY, id = 8, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN ------------ Got answer (138 bytes): HEADER: opcode = QUERY, id = 8, rcode = NXDOMAIN header flags: response, auth. answer, want recursion, recursion avail. questions = 1, answers = 0, authority records = 1, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN AUTHORITY RECORDS: -> fastenerwholesale.com type = SOA, class = IN, dlen = 46 ttl = 3600 (1 hour) primary name server = mtl-srm-dbsv-01.fastenerwholesale.com responsible mail addr = admin.fastenerwholesale.com serial = 10219 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ------------ SendRequest(), len 41 HEADER: opcode = QUERY, id = 9, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ------------ Got answer (141 bytes): HEADER: opcode = QUERY, id = 9, rcode = NXDOMAIN header flags: response, auth. answer questions = 1, answers = 1, authority records = 1, additional = 1 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ANSWERS: -> find.aspenfasteners.com type = CNAME, class = IN, dlen = 17 canonical name = www.picosearch.com ttl = 3600 (1 hour) AUTHORITY RECORDS: -> com type = SOA, class = IN, dlen = 43 ttl = 900 (15 mins) primary name server = ns3.volusion.com responsible mail addr = admin.volusion.com serial = 1 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ADDITIONAL RECORDS: -> ns3.volusion.com type = A, class = IN, dlen = 4 internet address = 65.61.137.154 ttl = 900 (15 mins) * mtl-srm-dbsv-01.fastenerwholesale.com can't find find.aspenfasteners.com: Non-existent domain

    Read the article

  • Force caching of handler output which actively resists caching

    - by deceze
    I'm trying to force caching of a very obnoxious piece of PHP script which actively tries to resist caching, for no good reason, by setting all the anti-cache headers:

        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Content-Type: text/html; charset=UTF-8
        Date: Thu, 22 May 2014 08:43:53 GMT
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Last-Modified:
        Pragma: no-cache
        Set-Cookie: ECSESSID=...; path=/
        Vary: User-Agent,Accept-Encoding
        Server: Apache/2.4.6 (Ubuntu)
        X-Powered-By: PHP/5.5.3-1ubuntu2.3

    If at all avoidable I do not want to have to modify this 3rd party piece of code at all and instead just get Apache to cache the page for a while. I'm doing this very selectively, only on very specific pages which have no real impact on session cookies or the like, i.e. which do not contain any personalised information.

        CacheDefaultExpire 600
        CacheMinExpire 600
        CacheMaxExpire 1800
        CacheHeader On
        CacheDetailHeader On
        CacheIgnoreHeaders Set-Cookie
        CacheIgnoreCacheControl On
        CacheIgnoreNoLastMod On
        CacheStoreExpired On
        CacheStoreNoStore On
        CacheLock On
        CacheEnable disk /the/script.php

    Apache is caching the page alright:

        [cache:debug] AH00698: cache: Key for entity /the/script.php?(null) is http://example.com:80/the/script.php?
        [cache_disk:debug] AH00709: Recalled cached URL info header http://example.com:80/the/script.php?
        [cache_disk:debug] AH00720: Recalled headers for URL http://example.com:80/the/script.php?
        [cache:debug] AH00695: Cached response for /the/script.php isn't fresh. Adding conditional request headers.
        [cache:debug] AH00750: Adding CACHE_SAVE filter for /the/script.php
        [cache:debug] AH00751: Adding CACHE_REMOVE_URL filter for /the/script.php
        [cache:debug] AH00769: cache: Caching url: /the/script.php
        [cache:debug] AH00770: cache: Removing CACHE_REMOVE_URL filter.
        [cache_disk:debug] AH00737: commit_entity: Headers and body for URL http://example.com:80/the/script.php? cached.

    However, it is always insisting that the "cached response isn't fresh" and is never serving the cached version. I guess this has to do with the Expires header, which marks the document as expired (but I don't know whether that's the correct assumption). I've tried to overwrite and unset headers using mod_headers, but this doesn't help; whatever combination I try, the cache is not impressed at all. I'm guessing that the order of operation is wrong, and headers are being rewritten after the cache sees them. Early header processing doesn't help either. I've experimented with CacheQuickHandler Off and trying to set explicit filter chains, but nothing is helping. But I'm really mostly poking in the dark, as I do not have a lot of experience with configuring Apache filter chains. Is there a straightforward solution for how to cache this obnoxious piece of code?
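
    Whichever directive combination finally works, the CacheHeader and CacheDetailHeader directives already present in the config expose mod_cache's per-request decision as X-Cache / X-Cache-Detail response headers, which makes experiments like this easier to verify than reading the debug log. A small sketch, with the URL taken from the question:

        # Check what mod_cache decided for this request; expect something like
        # "X-Cache: MISS from <host>" at first, then "X-Cache: HIT from <host>"
        # once a fresh cached entity is actually being served.
        curl -s -I http://example.com/the/script.php | grep -i '^X-Cache'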

    Read the article
