Search Results

Search found 10384 results on 416 pages for 'plan cache'.

  • Apache reverse proxy: no protocol handler

    - by gonvaled
    I am trying to configure a reverse proxy with Apache, but I am getting a No protocol handler was valid for the URL error, which I do not understand. This is the relevant configuration of Apache: ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/ ProxyPassReverse /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/ The request reaches Apache as: POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1 And it should be forwarded to my internal service, located at: 0.0.0.0:8000/services/EchoService.py These are the logs: ==> /var/log/apache2/error.log <== [Wed Jun 20 02:05:20 2012] [debug] proxy_util.c(1506): [client 127.0.0.1] proxy: http: found worker http://localhost:8000/services/ for http://localhost:8000/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html [Wed Jun 20 02:05:20 2012] [debug] mod_proxy.c(998): Running scheme http handler (attempt 0) [Wed Jun 20 02:05:20 2012] [warn] proxy: No protocol handler was valid for the URL /gonvaled/examples/jsonrpc/output/services/EchoService.py. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. [Wed Jun 20 02:05:20 2012] [debug] mod_deflate.c(615): [client 127.0.0.1] Zlib: Compressed 614 to 373 : URL /gonvaled/examples/jsonrpc/output/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html ==> /var/log/apache2/access.log <== 127.0.0.1 - - [20/Jun/2012:02:05:20 +0200] "POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1" 500 598 "http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19"
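    The [warn] line in error.log is usually literal: mod_proxy itself is loaded, but the mod_proxy_http submodule that handles http:// backend URLs is not. A minimal check-and-fix sketch, assuming a Debian/Ubuntu-style Apache layout (on other layouts the equivalent is a LoadModule line for mod_proxy_http):

        # list the proxy submodules Apache actually has loaded
        apache2ctl -M | grep proxy

        # if proxy_http_module is missing, enable it and reload
        sudo a2enmod proxy proxy_http
        sudo service apache2 reload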

    Read the article

  • Intel z77 vs h77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that functions well with an Intel i7-3770 Ivy Bridge processor at 3.4 GHz with LGA1155 socket. That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a cause of concern for me. That said, I'm down to three chipset choices: Intel H77 Intel Z68 Intel Z77 I'm planning to go for H77 since I don't need any of the new features in Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is, will H77 based MoBos handle all my tasks well? Intel advertises that chipset as "everyday computing" but other sites say its base functionality is the same as Z77. Intel rather advertises Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all Z77 motherboards I've seen is, they're way too expensive and their main feature seems to be overclocking, which won't be useful to me. Will I lose any raw CPU/GPU performance or HDD r/w with the H77 when comparing it to a Z77? Will heat, etc. be an issue too? From what I've seen, Z77 motherboards have larger heat sinks when compared to H77 ones. Will that be an issue too, if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course. tl;dr When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor, I'm set on Ivy Bridge i7-3770.

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with cloudwatch monitoring configured and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide what is a better policy: Store MySQL DB on a separate Instance Store MySQL DB on an attached EBS volume (from what I know auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?) Regarding the AMI, I plan to use a basic Amazon Linux 64 bit AMI, and install bastille (maybe OSSEC) but I am looking to also use an encrypted file system. Are there any issues using an encrypted file system and communication between the DB and webapp I need to be aware of? Are there any comms issues using the encrypted filesystem on the instance housing the webapp? I was going to launch a second instance or attach a second volume in the second availability zone to act as a standby for the database - I'm just looking for some suggestions about how to get the two DBs to talk - will this be a big task? Regarding security updates, is it best to create a recent snapshot and just relaunch and allow Amazon to install updates on launch, or is the yum update mechanism a suitable alternative - is it better practice to relaunch instead of updates being installed which force a restart? I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place - is this reasonable - I just figure it is a better policy than having additional applications that are unnecessarily included in an AMI that I intend on using. My plan for backup is to create periodic snapshots of the webapp and DB instances (if I use an additional EBS volume instead of separate instances my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination and I can create snapshots of the volume for backup purposes). Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill but I want to try to implement what can be considered a best practice solution so all advice is appreciated.
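    On the backup question, periodic EBS snapshots are easy to script once the command line tools are configured; a rough cron sketch (the volume ID is a placeholder, and this assumes the AWS CLI is installed and has credentials):

        # crontab entry: snapshot the data volume nightly at 02:00
        0 2 * * * aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly webapp data backup"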

    Read the article

  • Installing Linux on an Asus p8z68-m PRO Motherboard

    - by Holland
    Here is a challenge: how is this done? I've tried disabling the ASM1061 controller in the Onboard Devices section, using Wubi, booting from USB (as I don't have a DVD drive, yet), and even booting from RAID/IDE (with AHCI as the default) to do this. Still, no dice. Google turns up virtually nothing about Linux and this mobo, apart from people just saying "disable ASMedia" (which, I assume, is the ASM1061 controller, as that's all I see - apart from the USB 3.0, which I disabled already) and it hasn't really helped much. Thus, what is wrong here? Edit: My problem is that I cannot boot Linux via USB or a simple Windows installer such as Wubi (for Ubuntu). I wind up getting error messages along the lines of write cache failed, along with many other cryptic error messages similar to the following: [ 1400.351374] sd 4:0:0:0: [sdb] Test WP failed, assume Write Enabled [ 1400.353433] sd 4:0:0:0: [sdb] Asking for cache data failed [ 1400.356601] sd 4:0:0:0: [sdb] Assuming drive cache: write through This seems to be common for Asus P8Z68-M Pro motherboards, with the only notable solution being to "disable ASMedia", which, as I said before, I'm guessing is the ASM1061 controller on the motherboard. Despite already disabling this, I have tried this with both Fedora and Ubuntu without any success. I need to know what I can do about this; has anyone run into something similar or heard about this issue before? I know these motherboards are relatively new...

    Read the article

  • Why doesn't apache2 consistently load template fragments from memcached?

    - by Hobhouse
    I run a webserver on an Ubuntu box in the Rackspace cloud with Django 1.0.x, apache2/WSGI and memcached 1.2.2. Some of my templates make use of template fragment caching: {% load cache %} {% cache 604800 keyname %} <!-- cache: {% now "H:i, j. b" %} --> {{ my_content }} {% endcache %} When I reload apache2 everything is fine. If keyname is not set, my_content is generated and keyname is set in memcached. After that, my_content is served from memcached. My problem is that after some hours (notably less than 604800 seconds), apache2 seems to stop talking to memcached, and my_content is generated from scratch every time. When this happens I can still set and get keys from memcached from my Python shell. Memcached also has more than enough memory to store keys. But to get apache2 to start talking to memcached again I have to restart apache2, and then it will once again start to get the now several hours old keys from memcached. What can be the reason for this behaviour, and how do I fix it?
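    When the fragments stop being served, it can help to rule memcached itself in or out at that moment by querying it directly from the web server; a quick sketch, assuming memcached is listening on the default 127.0.0.1:11211:

        # dump hit/miss and eviction counters straight from memcached
        printf 'stats\nquit\n' | nc 127.0.0.1 11211 | egrep 'get_hits|get_misses|evictions|curr_connections'

    If neither get_hits nor get_misses moves while the fragments are being regenerated, the Apache/Django side has stopped asking memcached at all, which points at the client connections rather than at the cache.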

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine and Disk Utility on the mac reports no issues with the volume. I can read/write all data on the drive. However when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says: [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices [ 2752.335040] usb-storage: device found at 3 [ 2752.335044] usb-storage: waiting for device to settle before scanning [ 2757.330301] usb-storage: device scan complete [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0 [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0 [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB) [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00 [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2757.367631] sdb: sdb1 [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk [ 2822.866228] FAT: bogus number of reserved sectors [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1. When I run fsck.vfat -a /dev/sdb1 I get: root@cartman:~# fsck.vfat -a /dev/sdb1 dosfsck 3.0.3, 18 May 2009, FAT32, LFN Logical sector size is zero. Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
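    The "Logical sector size is zero" message refers to the bytes-per-sector field dosfsck reads from the FAT boot sector, so a read-only look at that sector can show whether it is genuinely blank or whether the filesystem simply does not start where Linux expects it (for example, a "superfloppy" format on the whole device rather than on the partition). A diagnostic sketch; nothing here writes to the disk:

        # first sector of the partition; the 2 bytes at offset 0x0B hold bytes-per-sector
        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head -4

        # same check against the whole device, in case the FAT volume was made on /dev/sdb itself
        sudo dd if=/dev/sdb bs=512 count=1 2>/dev/null | hexdump -C | head -4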

    Read the article

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a file depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing? EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file: Source: tao-xen-utils-common Section: kernel Priority: optional Maintainer: Creshal <[email protected]> Build-Depends: debhelper Standards-Version: 3.8.0 Homepage: http://tao.at Package: tao-xen-utils-common Architecture: all Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall Provides: xen-utils-common Conflicts: xen-utils-common Replaces: xen-utils-common Description: Xen administrative tools - common files (modified) The userspace tools to manage a system virtualized through the Xen virtual machine monitor. Modified for use with TAO Firewall. Installing xen-utils-4.0 fails, however: foo@bar# apt-cache showpkg tao-xen-utils-common Package: tao-xen-utils-common Versions: 4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages MD5: 7c2503f563fca13b33b4eb3cbcb3c129 Reverse Depends: tao-firewall,tao-xen-utils-common tao-firewall,tao-xen-utils-common Dependencies: 4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null)) Provides: 4.0.0-1tao1 - xen-utils-common Reverse Provides: foo@bar# apt-get install xen-utils-4.0 Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: xen-utils-common Suggested packages: xen-docs-4.0 The following packages will be REMOVED: tao-xen-utils-common The following NEW packages will be installed: xen-utils-4.0 xen-utils-common Edit:foo@bar# apt-cache policy xen-utils-4.0 xen-utils-4.0: Installed: (none) Candidate: 4.0.1-4 Version table: 4.0.1-4 0 500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages 4.0.1-4 0 500 http://security.debian.org/ stable/updates/main amd64 Packages
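    One likely culprit: an unversioned Provides cannot satisfy a versioned dependency with the dpkg/apt shipped in Squeeze, so if xen-utils-4.0 depends on xen-utils-common (>= some version), apt will always insist on the real package. Worth confirming before changing anything, roughly like this:

        # show the dependency exactly as apt sees it -- a version qualifier here explains the behaviour
        apt-cache show xen-utils-4.0 | grep -i '^Depends'
        apt-cache depends xen-utils-4.0 | grep -i xen-utils-common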

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (Kernel 3.0.0-12-server x86_64): => ctrl all show config Smart Array P410i in Slot 0 (Embedded) (sn: removed) array A (SAS, Unused Space: 1335535 MB) logicaldrive 1 (279.4 GB, RAID 1, OK) physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK) Initially there were two 300GB disks that got replaced by 1TB disks and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning: => ctrl slot=0 ld 1 modify size=max Warning: Extension may not be supported on certain operating systems. Performing extension on these operating systems can cause data to become inaccessible. See ACU documentation for details. Continue? (y/n) Is it safe to say yes or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue as I can take the server offline and boot from a gparted live disk. Here's the config of the RAID controller in use: => ctrl all show detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: removed RAID 6 (ADG) Status: Disabled Controller Status: OK Hardware Revision: Rev C Firmware Version: 5.12 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: False Drive Write Cache: Disabled SATA NCQ Supported: True And the partition table: Number Start End Size Type File system Flags 1 1049kB 274GB 274GB primary ext4 boot 2 274GB 300GB 25.8GB extended 5 274GB 300GB 25.8GB logical linux-swap(v1)
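    For what it is worth, the controller-level extend only grows the logical drive; the partition and the ext4 filesystem still have to be grown afterwards, which matches the gparted-live plan above. A rough sketch of the follow-up, with the device name assumed (confirm it first, and have backups before resizing):

        # after "ctrl slot=0 ld 1 modify size=max" and a reboot:
        parted -l                           # confirm the kernel now sees the larger logical drive
        # if the partition is grown with plain parted (gparted does this step itself), finish with:
        e2fsck -f /dev/cciss/c0d0p1         # device node is an assumption -- check the parted -l output
        resize2fs /dev/cciss/c0d0p1         # grow ext4 to fill the enlarged partition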

    Read the article

  • performance wise htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance. # Defaults AddDefaultCharset UTF-8 DefaultLanguage en-US ServerSignature Off FileETag None Header unset ETag Options -MultiViews #Options All -Indexes # Force the latest IE version or ChromeFrame <IfModule mod_setenvif.c> <IfModule mod_headers.c> BrowserMatch MSIE ie Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie </IfModule> </IfModule> # Proxy X-UA Setup <IfModule mod_headers.c> Header append Vary User-Agent </IfModule> #Rewrites Options +FollowSymlinks RewriteEngine On RewriteBase / # Redirect to non-WWW RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^(.*)$ http://%1/$1 [R=301,L] # Redirect to WWW RewriteCond %{HTTP_HOST} ^domain.com RewriteRule (.*) http://www.domain.com/$1 [R=301,L] # Redirect index to root RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L] # Caching ExpiresActive On ExpiresDefault A0 Header set Cache-Control "public" # 1 Year Long Cache <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$"> ExpiresDefault A31622400 </FilesMatch> # Proxy Caching <FilesMatch "\.(css|js|png)$"> ExpiresDefault A31622400 Header set Cache-Control "private" </FilesMatch> # Protect against DOS attacks by limiting file upload size LimitRequestBody 10240000 # Proper SVG serving AddType image/svg+xml svg svgz AddEncoding gzip svgz # GZip Compression <IfModule mod_deflate.c> <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$" > SetOutputFilter DEFLATE </FilesMatch> </IfModule> # Error page ErrorDocument 404 /404.html # Deny access to sensitive files <FilesMatch "\.(htaccess|ini|log|psd)$"> Order Allow,Deny Deny from all </FilesMatch>
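    One change that tends to buy more than any single directive: if you control the server configuration, move this whole block into the vhost and turn off .htaccess processing, since Apache otherwise re-checks and re-reads .htaccess files on every request. A sketch of the vhost side (the path is a placeholder):

        <Directory /var/www/example>
            # directives from the .htaccess template go here instead
            AllowOverride None
            Options +FollowSymLinks -MultiViews
        </Directory>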

    Read the article

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the linux email server. This setup was entirely functional; this is just back story. We've recently moved one of our client domains from our internal linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address) Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email. Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?
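    Besides the OS resolver cache, it is worth checking whether an SMTP connector in Exchange System Manager still lists that domain in its address space, since a connector route wins over DNS no matter what nslookup returns. Two quick checks from the server itself, sketched with a placeholder domain:

        REM confirm what the box resolves for the domain's MX records
        nslookup -type=mx clientdomain.com

        REM restart the SMTP service so routing picks up the change without a full reboot
        net stop SMTPSVC
        net start SMTPSVC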

    Read the article

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured the ddclient to get the IP from dyndns but it's not updating the subdomain on namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following. CONNECT: checkip.dyndns.org CONNECTED: using HTTP SENDING: GET / HTTP/1.0 SENDING: Host: checkip.dyndns.org SENDING: User-Agent: ddclient/3.7.3 SENDING: Connection: close SENDING: RECEIVE: HTTP/1.1 200 OK RECEIVE: Content-Type: text/html RECEIVE: Server: DynDNS-CheckIP/1.0 RECEIVE: Connection: close RECEIVE: Cache-Control: no-cache RECEIVE: Pragma: no-cache RECEIVE: Content-Length: 106 RECEIVE: RECEIVE: <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html> Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998. WARNING: skipping update of lf4bot from <nothing> to IPADD WARNING: last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed. WARNING: Wait at least 5 minutes between update attempts. ddclient.conf File daemon=300 # check every 300 seconds syslog=yes # log update msgs to syslog mail=root # mail all msgs to root mail-failure=root # mail failed update msgs to root pid=/var/run/ddclient.pid # record PID in file. ssl=yes # use ssl-support. Works with use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address protocol=namecheap \ server=dynamicdns.park-your-domain.com \ login=yourdomain.com \ password=PASSWORD \ lf4bot
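    For the namecheap protocol, ddclient normally wants the real domain (not the literal string yourdomain.com) as login, the Dynamic DNS password generated in the Namecheap control panel (not the account password) as password, and the bare host label as the final line of the stanza, which is presumably what the trailing "lf4bot" is meant to be. A sketch of that stanza with placeholder values:

        # Dynamic DNS must be enabled for the domain in the Namecheap panel first
        protocol=namecheap \
        server=dynamicdns.park-your-domain.com \
        login=example.com \
        password=dynamic-dns-password-from-namecheap \
        lf4bot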

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with Apache in the background (on the same server). Everything works great, but I can't open one page. I get the error "The request or reply is too large." My cache.log contains: 2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large 2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav' 2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header 2010/12/09 15:29:03| ctx: exit level 0 In my squid.conf I disabled the request and reply size limits, without success: reply_body_max_size 0 allow all request_body_max_size 0 Does anyone know why that doesn't work? Thank you very much. Squid Version: Squid Cache: Version 2.7.STABLE3 configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
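    The directives changed above (reply_body_max_size, request_body_max_size) govern body sizes, while the cache.log lines complain about an oversized header, which Squid 2.7 caps separately. A sketch of the header-side limits (the 64 KB figures are just example values above the default):

        # squid.conf -- limits behind "HTTP header too large"
        request_header_max_size 64 KB
        reply_header_max_size 64 KB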

    Read the article

  • Debian, How to convert filesystem from ISO-8859-1 into UTF-8?

    - by Johan
    I have an old PC running Debian stable that is in need of an upgrade. The problem is that it is using latin1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8 I plan to convert this computer as well. For this question I will focus on the files that are served with Samba, some of which have latin1 characters in their filenames (like åäö). Now my plan is to move all data from this old computer onto a brand new one that is running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks Johan Note: later I plan to use iconv to convert the content of some files with something like this: iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt However I don't know of a good way to convert the filesystem itself. Note: Normally I just scp from one computer to the next, but then I end up with latin1 characters in the UTF-8 filesystem... Update: Did a small test round with a handful of files (with funny chars) in the filenames, and that seemed like it could work. convmv -r -f ISO-8859-1 -t UTF-8 * So all that was left was to run it with --notest: convmv -r -f ISO-8859-1 -t UTF-8 --notest * Nothing more to it.
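    If the copy to the new machine goes over rsync rather than scp, a recent rsync can transcode the file names in flight, which avoids a separate convmv pass; a sketch, assuming rsync 3.x built with iconv support on both ends (paths are placeholders):

        # run on the old latin1 box: convert names from ISO-8859-1 to UTF-8 while copying
        rsync -av --iconv=ISO-8859-1,UTF-8 /srv/samba/share/ newhost:/srv/samba/share/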

    Read the article

  • HP DL185 - very slow disk read speed

    - by fistameeny
    Hi, I have an HP DL185 G6 server (12 disk model) with the following spec: Quad Core Xeon 2.27GHz 6GB RAM HP P212 RAID controller with battery backup 2 x 128GB 15K SAS 3.5" (RAID-1 for the operating system) 4 x 750GB 7.5K SAS 3.5" (RAID-5 for the data, 2TB usable space) The operating system is Ubuntu Server 9.10. Both drives have been formatted as EXT4. We are finding that the read speed of the RAID-5 array is poor. Disk test results below: sudo hdparm -tT /dev/cciss/c0d1p1 /dev/cciss/c0d1p1: Timing cached reads: 15284 MB in 2.00 seconds = 7650.18 MB/sec Timing buffered disk reads: 74 MB in 3.02 seconds = 24.53 MB/sec For info, the RAID-1 array performs as follows: sudo hdparm -tT /dev/cciss/c0d0p1 /dev/cciss/c0d0p1: Timing cached reads: 15652 MB in 2.00 seconds = 7834.26 MB/sec Timing buffered disk reads: 492 MB in 3.01 seconds = 163.46 MB/sec We thought this was because with no battery, read/write cache is disabled. We have bought and installed the battery backup and have used the HP bootable CD to change the cache settings to 50% read / 50% write and checked that cache is enabled on the drives and the controller. Is there something I'm missing?

    Read the article

  • Network Load Balancing and AnyCast Routing

    - by user126917
    Hi all, can anyone advise on problems with the following? I am planning on installing the following setup on my estate: I have 2 sites that both have a large number of users. Goals are to keep things simple for the users and to have automatic failover above the database level. Our database will exist at the primary site and be async mirrored to the secondary site with manual failover procedures. The database generates sequential IDs, so distributing it is not an option. I plan to site IIS boxes at both sites with all of the business logic on them and heavy operations. The connections to SQL will be lightweight and DB reads will be cached on IIS. On this layer I plan to use Windows network load balancing and have the same IP or IPs across all IIS boxes at both sites. This way there will be automatic failover and no single point of failure. Also, users can have one web address and, regardless of which site they are in, automatically be network load balanced to their local IIS. This is great but obviously our two sites are on different subnets and as this will be one IP address with most of our traffic we can't go broadcasting everything across the link between the sites. To solve this problem we plan to use AnyCast routing over our network layer to route the traffic to the most local box that is listening, which will be defined by the network load balancing. Has anyone used this setup before? Can anyone think of any issues with this? Also some specifics I can't find anywhere at the moment. If my Windows box is assigned an IP and listening on that IP but network load balancing is not accepting specific traffic then will AnyCast route away from that? Also can I AnyCast on a socket level?

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing. (if any) Here is the scenario: Under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3 server cluster loses about 100MB / hour. This is a linear slope for about 6 hours, then it appears to level out, but it still appears to lose maybe 10MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory loss pattern begins again over the hours. Before: total used free shared buffers cached Mem: 7130244 5005376 2124868 0 113628 422856 -/+ buffers/cache: 4468892 2661352 Swap: 33554428 0 33554428 After: total used free shared buffers cached Mem: 7130244 4467144 2663100 0 228 11172 -/+ buffers/cache: 4455744 2674500 Swap: 33554428 0 33554428 My Ruby code does use memoizations and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is should I be worried about this behaviour? FWIW, my Unicorn config: worker_processes 15 listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024 listen 8080, :tcp_nopush => true timeout 180 pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid" GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true before_fork do |server, worker| STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK" print_gemfile_location defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! defined?(Resque) and Resque.redis.client.disconnect old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # already killed end end File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)} end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection defined?(Resque) and Resque.redis.client.connect end Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia
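    Note that in the free output above the "-/+ buffers/cache" used figure barely moves (4468892 vs 4455744 KB), which suggests the "lost" memory is mostly the kernel page cache rather than the unicorn workers growing; the kernel reclaims that on demand. To watch the workers themselves instead of free, a small sketch:

        # total resident memory of all unicorn processes, in MB
        ps aux | grep '[u]nicorn' | awk '{sum += $6} END {printf "%.1f MB\n", sum/1024}'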

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well but the part about TABLE CACHE makes no sense: TABLE CACHE Current table_cache value = 900 tables. You have a total of 0 tables You have 900 open tables. Current table_cache hit rate is 1% , while 100% of your table cache is in use. You should probably increase your table_cache When I do SHOW STATUS; I get the following table-related numbers: Open_tables = 900 Opened_tables = 0 It seems like something is going wrong. I have some extra memory I could use on increasing the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning: TEMP TABLES Current max_heap_table_size = 90 M Current tmp_table_size = 90 M Of 11032358 temp tables, 40% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables. Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables.
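    Before resizing anything, it may be worth pulling the raw counters the script's percentages are derived from; with Opened_tables sitting at 0 the derived figures are not telling you much on their own. A quick sketch (the variable is called table_open_cache instead of table_cache on newer servers):

        # raw values behind the script's TABLE CACHE section
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables'; SHOW GLOBAL VARIABLES LIKE 'table%cache';"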

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMWare ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful. They have 1.25 GB of buffer cache, < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should at least have 2 internal switches for SAN, and 2 switches for LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to disjoin the SAN from LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? So 36 MB would be 768kb or 36MB per port? With 3 to 6 servers how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 websockets (high number of persistent connections). The amount of data being sent is small; data sent between client <-> server isn't broadcast (not a chat/IM service). We will be doing some database reporting too (csv export, sums, some joins). We are a small business and on a budget. We'd probably only be able to spend no more than $20k on switches total (2 or 4).

    Read the article

  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I'm able to make a record in DNS that will point the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it but that doesn't work. To be honest I'm not even sure if this is possible. So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice if possible and where to start is appreciated. The solution (Thanks Brent): Create a new forward lookup zone for example.com. Create an empty A record pointing to the IP of the webserver you are targeting. If www is needed, create an A record with Name: www and the IP of your webserver. For subdomains, repeat the process but with names like sub or www.sub (and the IP of your webserver). Be aware of the DNS cache while you are in this process. Things can take time, or do the following: right-click the server and choose Clear Cache; in CMD: ipconfig /flushdns (to flush the client cache)
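    The same records can also be created from the command line on the 2008 R2 DNS server, which is handy if several client domains need the same treatment; a sketch with a placeholder zone name and address (/DsPrimary assumes an AD-integrated zone, /Primary /file example.com.dns is the stand-alone form):

        REM create the forward lookup zone and the root/www records
        dnscmd /ZoneAdd example.com /DsPrimary
        dnscmd /RecordAdd example.com @ A 192.168.1.10
        dnscmd /RecordAdd example.com www A 192.168.1.10

        REM flush the local resolver cache after making changes
        ipconfig /flushdns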

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using Squid with adzap. Is it possible to have Squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file: # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 #acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 #acl to_localhost dst ::1/128 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 192.168.1.0/24 acl localnet src 192.168.2.0/24 acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port xxx.xxx.xxx.yyy:3128 transparent visible_hostname proxyserver.local # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. cache_dir ufs /var/spool/squid 1024 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 access_log /var/log/squid/squid.log squid access_log syslog squid redirect_program /usr/local/adzap/scripts/wrapzap Fixed using: acl allow_domains dstdomain www.cnn.com always_direct allow allow_domains
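    For reference, always_direct only controls whether Squid fetches from the origin itself rather than through a cache peer; it does not stop Squid from caching the replies. If the goal is to leave cnn.com uncached, the usual squid.conf approach is a cache deny rule, roughly:

        # do not cache objects for this domain (the leading dot also covers subdomains)
        acl nocache_domains dstdomain .cnn.com
        cache deny nocache_domains

    Exempting the domain from adzap's rewriting, if that is also wanted, would be handled separately in adzap's own configuration.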

    Read the article

  • external drive and CentOS - Reset high speed USB device number

    - by Phil
    I have 2 external drives (3TB) and both will not work with my CentOS box. Tested them in Windows (different machine): no problems. (CentOS kernel: 2.6.32-279.9.1.el6.i686.) dmesg reports: usb 2-2: new high speed USB device number 3 using ehci_hcd usb 2-2: New USB device found, idVendor=2109, idProduct=0700 usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 usb 2-2: Product: USB 3.0 SATA Bridge usb 2-2: Manufacturer: VIA Labs, Inc. usb 2-2: SerialNumber: 0000000000006121 usb 2-2: configuration #1 chosen from 1 choice scsi6 : SCSI emulation for USB Mass Storage devices usb-storage: device found at 3 usb-storage: waiting for device to settle before scanning usb-storage: device scan complete scsi 6:0:0:0: Direct-Access ST3000DM 001-9YN166 CC4B PQ: 0 ANSI: 2 sd 6:0:0:0: Attached scsi generic sg3 type 0 sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16). sd 6:0:0:0: [sdd] 5860533165 512-byte logical blocks: (3.00 TB/2.72 TiB) sd 6:0:0:0: [sdd] Write Protect is off sd 6:0:0:0: [sdd] Mode Sense: 00 06 00 00 sd 6:0:0:0: [sdd] Assuming drive cache: write through sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16). sd 6:0:0:0: [sdd] Assuming drive cache: write through sdd: sdd1 sd 6:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16). sd 6:0:0:0: [sdd] Assuming drive cache: write through sd 6:0:0:0: [sdd] Attached SCSI disk Trying to use cfdisk / fdisk / gdisk or even fdisk -l results in the program hanging, and dmesg reports: usb 2-2: reset high speed USB device number 3 using ehci_hcd usb 2-2: reset high speed USB device number 3 using ehci_hcd usb 2-2: reset high speed USB device number 3 using ehci_hcd I have the same 2 drives physically installed in the computer via SATA. Any ideas?

    Read the article

  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them: To avoid caching it is best practice to use: <meta http-equiv="pragma" content="no-cache" /> <meta http-equiv="cache-control" content="no-cache" /> This should work across all browsers. Their reply was: We need to refresh javascript at runtime, this will not help us – any more ideas? I replied: Unsure what you mean by “refresh javascript at runtime”. If you are using ajax, browser caching can affect the XMLHttpRequest open method. Adding these meta tags to the source has fixed this for me in the past. Browser caching only caches resources; it should have no effect on site scripting. These meta tags will bypass browser caching. This is a reasonable request, isn't it?
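    One caveat worth raising with the provider: meta http-equiv tags only apply to the HTML document that contains them and are ignored for separately fetched resources such as .js files, so if stale JavaScript is their concern, real HTTP response headers (or versioned script URLs) are the reliable route. A sketch of the header approach for Apache, assuming mod_headers is available:

        # send genuine no-cache headers for JavaScript only
        <FilesMatch "\.js$">
            Header set Cache-Control "no-cache, no-store, must-revalidate"
            Header set Pragma "no-cache"
            Header set Expires "0"
        </FilesMatch>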

    Read the article

  • Cannot mount my USB disk under Ubuntu or Windows [dmesg included]

    - by EthanZ6174
    First, here is my dmesg | tail result right after I plugged in the disk: $ dmesg | tail [ 2578.697224] scsi 6:0:0:0: Direct-Access HP v100w PMAP PQ: 0 ANSI: 0 CCS [ 2578.698322] sd 6:0:0:0: Attached scsi generic sg2 type 0 [ 2578.916464] sd 6:0:0:0: [sdb] 3921920 512-byte logical blocks: (2.00 GB/1.87 GiB) [ 2578.916950] sd 6:0:0:0: [sdb] Write Protect is off [ 2578.916956] sd 6:0:0:0: [sdb] Mode Sense: 23 00 00 00 [ 2578.916961] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 2578.922460] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 2578.922470] sdb: [ 2578.969570] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 2578.969578] sd 6:0:0:0: [sdb] Attached SCSI removable disk There is nothing after 'sdb:'... In the meantime, lsusb shows: $ lsusb Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 004: ID 03f0:3207 Hewlett-Packard Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 006 Device 002: ID 045e:0737 Microsoft Corp. Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub So... can anyone help me? What's wrong with my USB disk? THX

    Read the article

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, so is .NET. Ruby not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku - the ability to deploy and scale on the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small, 5MB site for free that Heroku offers - is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill as I'm doing my own thing instead of using something that nobody else knows or cares about. I should add I'm specifically interested in something that offers a free plan. As I said, Heroku has a 5mb plan that you can have as many as you want for free; I have yet to find anything similar for any other platform (most of the "free" sites require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on StackOverflow and someone suggested it would be better suited here.

    Read the article
