Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    I've had one of our servers going down (network-wise) while keeping its uptime (so it looks like the server is not losing power) recently. I've asked my hosting company to investigate and I've been told, after investigation, that Apache and MySQL were at all times using 80% of the memory and peaking at 95%, and that I might need to add some more RAM to the server. One of their justifications for adding more RAM was that I was using the default max connections setting (125 for MySQL and 150 for Apache) and that for handling those 150 simultaneous connections, I would need at least 3Gb of memory instead of the 1Gb I have at the moment. Now, I understand that tweaking the max connections might be better than leaving the default setting, although I didn't feel it was a concern at the moment, having had servers with the same configuration handle more traffic than the current 1 or 2 visitors before the lunch, telling myself I'd tweak it depending on the visit patterns later. I've also always known Apache was more memory hungry under default settings than its competitors such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting:
    # free -m total used free shared buffers cached Mem: 1000 944 56 0 148 725 -/+ buffers/cache: 71 929 Swap: 1953 0 1953
    Which I guess means that yes, the server is reserving around 95% of its memory at the moment, but I also thought it meant that only 71 out of the 1000 total were really used by the applications at the moment, looking at the buffers/cache row. Also I don't see any swapping:
    # vmstat 60 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 57612 151704 742596 0 0 1 1 3 11 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 1 1 24 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 2 1 18 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 0 1 13 0 0 100 0
    And finally, while requesting a page:
    top - 08:33:19 up 3 days, 13:11, 2 users, load average: 0.06, 0.02, 0.00 Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1024616k total, 976744k used, 47872k free, 151716k buffers Swap: 2000052k total, 0k used, 2000052k free, 742596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 24914 www-data 20 0 26296 8640 3724 S 2 0.8 0:00.06 apache2 23785 mysql 20 0 125m 18m 5268 S 1 1.9 0:04.54 mysqld 24491 www-data 20 0 25828 7488 3180 S 1 0.7 0:00.02 apache2 1 root 20 0 2844 1688 544 S 0 0.2 0:01.30 init ...
    So, I'd like to know, experts of Server Fault: Do I really need more RAM at the moment? How do they calculate that for 150 simultaneous connections I'd need 3Gb? Thanks for your help!
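
    A rough way to sanity-check a "3Gb for 150 connections" figure is to measure what one Apache child actually uses and multiply by MaxClients, then add MySQL on top. A hedged sketch, assuming the worker processes are named apache2 as in the top output above (use httpd on Red Hat-style systems):

        # Average resident size of an Apache child, in MB (RSS is column 8 of `ps -yl` output)
        ps -ylC apache2 --sort=rss | awk 'NR>1 { sum += $8; n++ } END { if (n) printf "%.1f MB average per child (%d children)\n", sum/n/1024, n }'

        # Very rough worst case for the Apache side alone:
        #   average child size (MB) x MaxClients
        # With the ~8 MB children visible in the top output above, 150 x 8 MB is about 1.2 GB
        # before MySQL's own per-connection buffers are added, which is roughly how a
        # "3 GB for 150 simultaneous connections" estimate gets built.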

    Read the article

  • Remote installing an msi on citrix servers using WMI

    - by capn
    OK, I'm a C# programmer that is trying to streamline the deployment of a custom windows form app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin type things (or vbs, or WMI, or terminal servers, or Citrix, and even WiX and MSI are not things I usually deal with) but so far I put together some vbs and have an end goal in mind. The msi does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a citrix machine. End Goal: Deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer Engine allows) to the cluster of Citrix machines my users have access to. What am I missing in my script to get the MSI to execute on the remote machines? Layout: Machine A is my dev machine, and has the vbs script and the msi file (XP SP3) Machines C1 - C6 are the Citrix Servers that need the application installed them via the msi (Server 2003 R2 SP2) There is also optionally a shared network resource that all the machines can access. Script: 'Set WMI Constants Const wbemImpersonationLevelImpersonate = 3 Const wbemAuthenticationLevelPktPrivacy = 6 'Set whether this is installing to the debug Citrix Servers Const isDebug = true 'Set MSI location 'Network location yields error 1619 (This installation package could not be opened.) msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi" 'Directory on machine A yields error 3 (file not found) 'msiLocation = "C:\Temp\Deploy\Setup.msi" 'Mapped network drive (on both machines) yield error 3 (file not found) 'msiLocation = "O:\Citrix Deployment\Setup.msi" 'Set login information strDomain = "MyDomain" Wscript.StdOut.Write "user name:" strUser = Wscript.StdIn.ReadLine Set objPassword = CreateObject("ScriptPW.Password") Wscript.StdOut.Write "password:" strPassword = objPassword.GetPassword() 'Names of Citrix Servers Dim citrixServerArray If isDebug Then citrixServerArray = array("C4") Else 'citrixServerArray = array("C1","C2","C3","C5","C6") End If 'Loop through each Citrix Server For Each citrixServer in citrixServerArray 'Login to remote computer Set objLocator = CreateObject("WbemScripting.SWbemLocator") Set objWMIService = objLocator.ConnectServer(citrixServer, _ "root\cimv2", _ strUser, _ strPassword, _ "MS_409", _ "ntlmdomain:" + strDomain) 'Set Remote Impersonation level objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy 'Reference to a process on the machine Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process") 'Change user to install for terminal services errReturn = objProcess.Create _ ("cmd.exe /c change user /install", Null, Null, intProcessID) WScript.Echo errReturn 'Install MSI here 'Reference to a product on the machine Set objSoftware = objWMIService.Get("Win32_Product") 'All users set in option parameter, I'm led to believe that the third parameter is actually ignored 'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript errReturn = objSoftware.Install(msiLocation,"ALLUSERS=2 REBOOT=ReallySuppress",True) Wscript.Echo errReturn 'Change user back to execute errReturn = objProcess.Create _ ("cmd.exe /c change user /execute", Null, Null, intProcessID) WScript.Echo errReturn Next I also tried using this to install, it doesn't return an error code, but doesn't 
install the msi either, and it makes me wonder if the change user /install command is even really working. errReturn = objProcess.Create _ ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet") Wscript.Echo errReturn

    Read the article

  • Download - Upload is too slow on Centos

    - by Mehdi
    My download/upload in server and out of server is too slow (around 50 KB/s !) ! Did I miss some configuration ? Some information: CentOS release 6.3 uptime load average: 0.17, 0.32, 0.37 Memory free -m total used free shared buffers cached Mem: 24009 21988 2021 0 806 18098 -/+ buffers/cache: 3083 20926 Swap: 4095 28 4067 lshw -C network *-network description: Ethernet interface product: 82574L Gigabit Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 00 serial: 00:25:90:70:17:4a size: 100MB/s capacity: 1GB/s width: 32 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=off broadcast=yes driver=e1000e driverversion=1.9.5-k duplex=full firmware=2.1-2 ip=108.175.8.123 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s resources: irq:16 memory:fb900000-fb91ffff ioport:e000(size=32) memory:fb920000-fb923fff ethtool ethtool eth0 Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supports auto-negotiation: Yes Advertised link modes: Not reported Advertised pause frame use: No Advertised auto-negotiation: No Speed: 100Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: off MDI-X: off Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000001 (1) Link detected: yes dmesg |grep e1000e dmesg |grep e1000e e1000e: Intel(R) PRO/1000 Network Driver - 1.9.5-k e1000e: Copyright(c) 1999 - 2012 Intel Corporation. e1000e 0000:02:00.0: Disabling ASPM L0s e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 e1000e 0000:02:00.0: setting latency timer to 64 e1000e 0000:02:00.0: irq 33 for MSI/MSI-X e1000e 0000:02:00.0: irq 34 for MSI/MSI-X e1000e 0000:02:00.0: irq 35 for MSI/MSI-X e1000e 0000:02:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 00:25:90:70:17:4a e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: eth0: Unsupported Speed/Duplex configuration e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e 0000:02:00.0: Disabling ASPM L1 e1000e 0000:02:00.0: eth0: changing MTU from 1500 to 9000 e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO e1000e: eth0 NIC 
Link is Up 100 Mbps Full Duplex, Flow Control: None e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO
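
    One thing that stands out in the output above is that autonegotiation is off and the gigabit NIC keeps linking at 100 Mb/s (and once at 10 Mb/s, with an "Unsupported Speed/Duplex configuration" message). A speed/duplex mismatch with the switch port is a classic cause of throughput collapsing to tens of KB/s. A hedged first step is to re-enable autonegotiation and watch the error counters; making the setting persistent depends on the distro (for example ETHTOOL_OPTS in the CentOS ifcfg files):

        # Error/drop counters that climb under load usually point at a duplex mismatch
        ethtool -S eth0 | egrep -i 'err|drop|coll'

        # Let the NIC and the switch renegotiate the best common mode
        ethtool -s eth0 autoneg on

        # ...or, if the switch port is hard-set, match its speed/duplex explicitly instead
        # ethtool -s eth0 speed 1000 duplex full autoneg on

        # Verify what was negotiated afterwards
        ethtool eth0 | egrep 'Speed|Duplex|Auto-negotiation'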

    Read the article

  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    When I start the nginx server with the Passenger module configured and the Puppet master set up to run through Rack, I get the error:
    [anadi@bangda ~]# tail -f /var/log/nginx/error.log [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]: *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>): from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from config.ru:13 from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval' from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize' from config.ru:1:in `new' from config.ru:1
    Here is the config.ru:
    [anadi@bangda ~]# cat /etc/puppet/rack/config.ru # a config.ru, for use with every rack-compatible webserver. # SSL needs to be handled outside this, though. # if puppet is not in your RUBYLIB: #$:.unshift('/usr/share/puppet/lib') $0 = "master" # if you want debugging: # ARGV << "--debug" ARGV << "--rack" require 'puppet/application/master' # we're usually running inside a Rack::Builder.new {} block, # therefore we need to call run *here*. run Puppet::Application[:master].run
    And the nginx configuration for the puppet master is as follows:
    [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf server { listen 8140 ssl; server_name bangda.mycompany.com; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; access_log /var/log/nginx/puppet/master.access.log; error_log /var/log/nginx/puppet/master.error.log; root /etc/puppet/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; }
    However, when I run Puppet through the usual puppetmasterd daemon it works perfectly with no errors. Somehow the nginx+passenger+rack setup fails to initialize, while the same setup works when running the native puppetmaster daemon. Is there any configuration that I am missing?
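
    The LoadError means the Ruby interpreter that Passenger spawns cannot find puppet/application/master on its load path, even though the puppetmasterd started from a shell can. A hedged way to narrow it down from the command line (Passenger may be using a different Ruby binary than the one in your PATH, so compare against passenger_ruby if the results disagree; the package path below is only a common location, not a certainty):

        # Can a bare Ruby, without the puppet wrapper script or your shell's RUBYLIB, load it?
        ruby -e 'puts $LOAD_PATH'
        ruby -rrubygems -e 'require "puppet/application/master"; puts "found"'

        # If Puppet was installed as a gem, this prints the file rubygems would load
        gem which puppet

        # If it was installed as a package, the libraries often live here; when the file
        # exists but the require above still fails, uncommenting the $:.unshift line in
        # config.ru with this directory is the usual next step
        ls /usr/share/puppet/lib/puppet/application/master.rb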

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems, no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills). It copied all file attributes including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files, if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup + all intermediate incremental backups to restore. But I don't see this as presenting much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would also be fine. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)
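
    The behaviour described in the Pros (a browsable full copy where only changed files get transferred, and older versions are kept without storing identical data twice) is essentially what hard-link-based mirroring does. The sketch below is Unix-side rsync, so it is an illustration of the technique rather than a Windows 7 recommendation, and the paths are made up:

        # Each run produces a complete tree under a timestamped directory; files that
        # did not change are hard links into the previous run, so they cost no extra space.
        SRC=/data
        DEST=/backup
        STAMP=$(date +%Y-%m-%d_%H%M)

        rsync -aH --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$STAMP/"

        # Point "latest" at the run that just finished (the first run simply copies everything)
        ln -sfn "$DEST/$STAMP" "$DEST/latest"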

    Read the article

  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst case scenario and are open to any kind of good ideas. Here is our problem: We are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs: Dedicated Server Machine Areca 1280ml RAID controller, Firmware 1.49 12x Samsung 1TB HDDs We configured one RAID6-set with 10 discs that contains one logical volume. We have two hot spares in the system. Today one HDD failed. This happens from time to time, so we replaced it. Upon rebuilding, a second disc failed. Normally this is no fun. We stopped heavy IO-operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding and the whole thing stopped. Now we have the following situation: the controller says that the RAID set is rebuilding; the controller says that the volume failed. It is a RAID 6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access the data. While searching we found the following leads. I don't know whether they are good or bad: Mirroring all the discs to a second set of drives, so we would have the possibility to try different things without losing more than we already have. Trying to rebuild the array in R-Studio. But we have no real experience with the software. Pulling all drives, rebooting the system, changing into the Areca controller BIOS, reinserting the HDDs one-by-one. Some people are saying that they brought the system online this way. Some are saying that the effect is zero. Some say that they blew the whole thing. Using undocumented Areca commands like "rescue" or "LeVel2ReScUe". Contacting a computer forensics service. But whoa... primary estimates by phone exceeded 20.000€. That's why we would kindly ask for help. Maybe we are missing the obvious? And yes of course, we have backups. But some systems lost one week of data, that's why we'd like to get the system up and running again. Any help, suggestions and questions are more than welcome.
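
    For the "mirror all the discs before trying anything" option above, the usual tool is GNU ddrescue, because it keeps a map file, skips unreadable areas on the first pass and can come back to retry them instead of aborting like plain dd. A sketch, assuming the members show up as /dev/sdb, /dev/sdc, ... on a recovery machine with enough space for full images:

        # One image plus one map file per member disc; the map makes the copy resumable
        for disk in sdb sdc sdd; do
            ddrescue -n /dev/$disk /mnt/images/$disk.img /mnt/images/$disk.map
        done

        # Optional second pass: go back and retry only the bad areas a few times
        # for disk in sdb sdc sdd; do
        #     ddrescue -r3 /dev/$disk /mnt/images/$disk.img /mnt/images/$disk.map
        # done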

    Read the article

  • Building NanoBSD inside a jail

    - by ptomli
    I'm trying to setup a jail to enable building a NanoBSD image. It's actually a jail on top of a NanoBSD install. The problem I have is that I'm unable to mount the md device in order to do the 'build image' part. Is it simply not possible to mount an md device inside a jail, or is there some other knob I need to twiddle? On the host /etc/rc.conf.local jail_enable="YES" jail_mount_enable="YES" jail_list="build" jail_set_hostname_allow="NO" jail_build_hostname="build.vm" jail_build_ip="192.168.0.100" jail_build_rootdir="/mnt/zpool0/jails/build/home" jail_build_devfs_enable="YES" jail_build_devfs_ruleset="devfsrules_jail_build" /etc/devfs.rules [devfsrules_jail_build=5] # nothing Inside the jail [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# sysctl security.jail security.jail.param.cpuset.id: 0 security.jail.param.host.hostid: 0 security.jail.param.host.hostuuid: 64 security.jail.param.host.domainname: 256 security.jail.param.host.hostname: 256 security.jail.param.children.max: 0 security.jail.param.children.cur: 0 security.jail.param.enforce_statfs: 0 security.jail.param.securelevel: 0 security.jail.param.path: 1024 security.jail.param.name: 256 security.jail.param.parent: 0 security.jail.param.jid: 0 security.jail.enforce_statfs: 1 security.jail.mount_allowed: 1 security.jail.chflags_allowed: 1 security.jail.allow_raw_sockets: 0 security.jail.sysvipc_allowed: 0 security.jail.socket_unixiproute_only: 1 security.jail.set_hostname_allowed: 0 security.jail.jail_max_af_ips: 255 security.jail.jailed: 1 [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mdconfig -l md2 md0 md1 md0 and md1 are the ramdisks of the host. bsdlabel looks sensible [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# bsdlabel /dev/md2s1 # /dev/md2s1: 8 partitions: # size offset fstype [fsize bsize bps/cpg] a: 1012016 16 4.2BSD 0 0 0 c: 1012032 0 unused 0 0 # "raw" part, don't edit newfs runs ok [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# newfs -U /dev/md2s1a /dev/md2s1a: 494.1MB (1012016 sectors) block size 16384, fragment size 2048 using 4 cylinder groups of 123.55MB, 7907 blks, 15872 inodes. with soft updates super-block backups (for fsck -b #) at: 160, 253184, 506208, 759232 mount fails [root@build /usr/obj/nanobsd.PROLIANT_MICROSERVER]# mount /dev/md2s1a _.mnt/ mount: /dev/md2s1a : Operation not permitted UPDATE: One of my colleagues pointed out There are some file systems types that can't be securely mounted within a jail no matter what, like UFS, MSDOFS, EXTFS, XFS, REISERFS, NTFS, etc. because the user mounting it has access to raw storage and can corrupt it in a way that it will panic entire system. From http://www.mail-archive.com/[email protected]/msg160389.html So it seems that the standard nanobsd.sh won't run inside a jail while it uses the md device to build the image. One potential solution I'll try is to chroot from the host into the build jail, rather than jexec a shell.
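
    Following the chroot idea at the end, a rough sketch of what that could look like when run from the host rather than inside the jail (the jail root path comes from the rc.conf above; the devfs mount is what makes the md device nodes visible inside the chroot, and the config file name is a guess based on the object directory shown in the output):

        J=/mnt/zpool0/jails/build/home

        # Expose device nodes inside the chroot so mdconfig/newfs/mount can work
        mount -t devfs devfs "$J/dev"

        # Run the NanoBSD build inside the chroot with the host's full privileges
        chroot "$J" /bin/sh -c 'cd /usr/src/tools/tools/nanobsd && sh nanobsd.sh -c PROLIANT_MICROSERVER.cfg'

        umount "$J/dev"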

    Read the article

  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which is given to me by a frustrated friend of mine. Device is a Prestigio Leather 8GB, which identifies itself to Linux host as: Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive Kernel messages as USB device is plugged in: kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7 kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0 kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2 kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0 kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0 kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB) kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB) kernel: [ 2770.731215] sdc: kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off kernel: [ 2770.880328] kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk kernel: [ 2770.887442] sdd: unknown partition table kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk So, symptoms are typical for U3-like devices: two separate devices inside of a single flash device. Windows sees it also as two identical usb devices, and mounts two separate drives to system, whereas first one presents itself as a CDROM device, holding a write-protected content, and second is a regular flash-disk partition, that "can" be written to. However, it seems like it's broken in some weird way, since it won't let me write anything to it, format it, nothing, but that's not the issue right now. Question: How can I unlock entire USB stick so it appears to system as a single, 8GB device which can be partitioned and used normally, without restrictions? Since it appeared to be an U3 device, I have tried standard utilities: both U3 Uninstaller by u3.com (found on SoftPedia), and opensource u3_tool from sourceforge (on both Windows and Linux). First utility failed to even detect USB stick as U3 device (simply stood idle while I re-plugged stick several times), while second tool failed with some obscure error about SCSI command unable to do something (I might be able to provide exact errors when I switch back to windows). u3_tool -i /dev/sg3 (Display device info) fails with u3_partition_info() failed: Device reported command failed: status 1 ...and every other option fails with same error, minus first part which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I read on a few occasions that this device protection is done by special command sent to device which tells it to lock itself, and so there should be an unlock command, that would set drive straight. Does anyone have any idea about what could I do to this device to fix it? P.S. I also mentioned a problem with being unable to use second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.
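
    On the "ridiculously slow and cumbersome" off-site point: the pain is mostly the first full upload; after that, an incremental, bandwidth-capped push of only the irreplaceable data is usually tolerable even at a couple of MB/s. A hedged sketch with placeholder host and paths:

        # Push only the irreplaceable stuff; resumable, compressed, and capped so it
        # does not saturate the uplink (the 2000 KB/s limit is just a placeholder)
        rsync -az --partial --bwlimit=2000 \
              /srv/photos /srv/documents \
              backupuser@offsite.example.com:/backups/workstation/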

    Read the article

  • File/folder permissions and groups on Linux with Apache

    - by phobia
    I'm trying to learn about permissions on a Linux web server with Apache. Some clues to the system: the server I have to play around with is Fedora based; Apache runs as apache:apache; to allow e.g. PHP to write to a file, the file needs to be chmod 777 (755 is not sufficient). What I'm wondering is basically how to set up permissions like they should be on e.g. a "shared web host". My main problem is that if I set a permission so that one user cannot access another's home folder, then Apache can't read from the public_html folder either. To keep the users out I need to set chmod 700. But to let Apache read I need to have at least execute on world, so a 701 basically works, but won't let some users in. So I'm really stuck on what to do. I have been considering adding the apache user to the four groups below to avoid having to add the world execute flag, but is that a bad thing? Should it be the other way around, i.e. the users in the groups below should also be in the apache group? I was aiming at having 4 groups:
    1. webapp - same as dev_int, but is the only one that can go inside the webapp/live folder to e.g. do an update from the repo.
    2. dev_int - can read, write and execute everything in the "web root", including the two below, but nothing outside of the web root.
    3. dev_ext - can read, write and execute in all client folders, but cannot access anything outside of the webapp root.
    4. clients - basic FTP accounts. Each has a home folder with a public_html, but cannot access any other home folders.
    An example of the folder structure (group access in brackets):
    webroot (no users in the aforementioned groups can go outside of here)
        some_project (:dev_int only)
            webapp
                live (:webapp only)
                staging (:dev_int and :dev_ext)
        clients (:dev_int and :dev_ext)
            client_1 (:dev_int, :dev_ext and client1:clients)
                public_html
        dev
            developer_1 (developer_1:dev_int OR :dev_ext)
                public_html
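
    For the specific "keep other users out of a home folder but still let Apache traverse into public_html" problem, one common pattern is to group-own each home directory by the apache group with mode 0710: the owner keeps full access, apache gets execute (traverse) only, and everyone else is shut out, so no world execute bit is needed. A sketch for one client account; the paths are placeholders rather than the exact layout above:

        # Home dir: owner full access, the apache group may traverse but not list, others nothing
        chgrp apache /home/client_1
        chmod 0710 /home/client_1

        # Web content: owned by the client, readable by apache, invisible to others;
        # the setgid bit keeps newly created files in the apache group
        chown -R client_1:apache /home/client_1/public_html
        find /home/client_1/public_html -type d -exec chmod 2750 {} +
        find /home/client_1/public_html -type f -exec chmod 0640 {} +

        # Directories PHP must write to get group write instead of 777
        # chmod 2770 /home/client_1/public_html/uploads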

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows: OS: CentOS 6.2 running on an OpenVZ virtual machine. Web server: Nginx listening on port 8080 Reverse proxy: Varnish listening on port 80 The problem is that Varnish redirects my requests to port 8080 and this appears in the address bar like so http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address? Edit: Varnish is set up like so: I have told the Varnish daemon to listen to port 80 by default. VARNISH_VCL_CONF=/etc/varnish/default.vcl # # # Default address and port to bind to # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. # VARNISH_LISTEN_ADDRESS= VARNISH_LISTEN_PORT=80 # # # Telnet admin interface listen address and port VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 VARNISH_ADMIN_LISTEN_PORT=6082 # # # Shared secret file for admin interface VARNISH_SECRET_FILE=/etc/varnish/secret # # # The minimum number of worker threads to start VARNISH_MIN_THREADS=1 # # # The Maximum number of worker threads to start VARNISH_MAX_THREADS=1000 # # # Idle timeout for worker threads VARNISH_THREAD_TIMEOUT=120 # # # Cache file location VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin # # # Cache file size: in bytes, optionally using k / M / G / T suffix, # # or in percentage of available disk space using the % suffix. VARNISH_STORAGE_SIZE=1G # # # Backend storage specification VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" # # # Default TTL used when the backend does not specify one VARNISH_TTL=120 The VCL file that Varnish calls (through an include in default.vcl) consists of: backend playwithbits { .host = "127.0.0.1"; .port = "8080"; } acl purge { "127.0.0.1"; } sub vcl_recv { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { set req.backend = playwithbits; set req.http.Host = regsub(req.http.Host, ":[0-9]+", ""); if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_hit { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } } sub vcl_miss { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { error 404 "Not in cache."; } if (!(req.url ~ "wp-(login|admin)")) { unset req.http.cookie; } if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") { unset req.http.cookie; set req.url = regsub(req.url, "\?.$", ""); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_fetch { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.url ~ "^/$") { unset beresp.http.set-cookie; } if (!(req.url ~ "wp-(login|admin)")) { unset beresp.http.set-cookie; } } }
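
    Varnish itself does not rewrite links to :8080; with this kind of setup the port usually leaks in from the backend, either because WordPress saved its siteurl/home options with :8080 in them or because the backend issues redirects that advertise its own port. A quick way to check the WordPress side (the database name and the wp_ table prefix are assumptions here):

        # Do the stored URLs already contain the backend port?
        mysql -u root -p wordpress -e \
          "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');"

        # If they do, point them at the front-end address (no port)
        mysql -u root -p wordpress -e \
          "UPDATE wp_options SET option_value='http://playwithbits.com' WHERE option_name IN ('siteurl','home');"

    On the nginx side, a port_in_redirect off; directive in the backend server block is the usual way to stop nginx-generated redirects from including 8080.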

    Read the article

  • Unable to commit file through svn, server sent truncated HTTP response body

    - by Rocket3G
    I have my own VPS, on which I want to run a simple SVN + chiliproject setup. I have re-installed SVN, CHILI and the OS several times, and it always works for a couple of hours/days and then it just stops working. Well, everything works, except I can't upload any files. Committing directories seems to work just fine, but when I try to commit a file it breaks. I have an error log file, which gives me the following text when I try to commit something x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "OPTIONS /project HTTP/1.1" 200 149 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 346 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 401 345 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 201 262 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 236 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/vcc/default HTTP/1.1" 201 271 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PROPPATCH /project/!svn/wbl/c11d45ac-86b6-184a-ac5a-9a1105d64563/1 HTTP/1.1" 207 267 x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/ver/1 HTTP/1.1" 201 271 x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "HEAD /project/index.html HTTP/1.1" 404 - x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PUT /project/!svn/wrk/c11d45ac-86b6-184a-ac5a-9a1105d64563/index.html HTTP/1.1" 201 269 x.x.x.x - admin [19/Oct/2013:00:02:04 +0200] "DELETE /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 204 - So it seems that it PUTs the file (test.html) correctly, and somehow somewhere something is wrong (file permissions are alright, when I purposely stated that they are wrong, it gave me errors, which is expected, and they were about the file permissions being incorrect. The odd thing is that files won't get added, but directories are fine. I also have enough storage left on my machine. What I should note, perhaps, is that I use Ubuntu 12.04.3 with ruby 1.9.3, mysql 14.14 and I have it set up that Chiliproject handles the authentication and authorization for the project. It works, because I can commit directories and read it all correctly, though I can't upload files. Help would really be appreciated, as I don't know what on earth is going on with this 'truncated http response body'. I tried to read them with wireshark, but it basically gave me the same information. With regards, Ps. I have no clue what the delay between put and delete is, as it's a file of a mere 500 bytes, so it's uploaded in approximately a second. Pps. I copied this question from StackOverflow to this site, as I didn't know the existence of this site and another user suggested that I'd get more answers here, as it's basically a server fault.

    Read the article

  • Adobe Reader not loading form content

    - by wullxz
    We have an FDF file which is used to offer an online application possibility. The FDF is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome - the customer uses IE as default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works but is a little bit annoying. I looked into Adobe Reader's options (Edit - "Voreinstellungen" (Preferences) in the German UI / the last option - Security (advanced)) and found the possibility to add hosts, files and directories, or to allow Adobe Reader to use the "Trusted Websites" list from Internet Options. When I add the website either to Trusted Websites or to the trusted list in Adobe Reader's options, the warning doesn't pop up, but the content in the prefilled (by the applicant) input boxes of the document doesn't show up on Windows 7 - although it does show up on Windows XP. A screenshot shows the settings window described in the last paragraph; the big input box at the bottom normally holds the trusted files/directories/hosts list. System Information: Windows 7 Enterprise x64, Adobe Reader X, multiple IE versions (mine is the latest but there's also IE 7 or 8). How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder. Steps to reproduce:
    1. Clean install a Windows 7 PC (or use a virtual box).
    2. Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs.
    3. Set security permissions to allow full control to everyone.
    4. Add an FDF and a matching PDF file in the subfolder.
    5. Manually add m:\docs to each of the trusted folders in the trust manager registry settings.
    6. Ensure that Enhanced Security is on.
    7. Run a command line to open the FDF file.
    Expected result: the PDF is opened in Adobe Reader with the form fields filled out with data. Actual results: the PDF is opened with blank fields, and a 'yellow bar' appears asking to add the document to trusted locations. It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the 'yellow bar' adds the individual document, within the same folder, to the privileged locations, but means that the process has to be repeated for every document that needs to be opened from the folder.

    Read the article

  • Aliased network interfaces and isc dhcp server

    - by Jonatan
    I have been banging my head on this for a long time now. There are many discussions on the net about this and similar problems, but none of the solutions seems to work for me. I have a Debian server with two ethernet network interfaces. One of them is connected to internet, while the other is connected to my LAN. The LAN network is 10.11.100.0 (netmask 255.255.255.0). We have some custom hardware that use network 10.4.1.0 (netmask 255.255.255.0) and we can't change that. But we need all hosts on 10.11.100.0 to be able to connect to devices on 10.4.1.0. So I added an alias for the LAN network interface so that the Debian server acts as a gateway between 10.11.100.0 and 10.4.1.0. But then the dhcp server stopped working. The log says: No subnet declaration for eth1:0 (no IPv4 addresses). ** Ignoring requests on eth1:0. If this is not what you want, please write a subnet declaration in your dhcpd.conf file for the network segment to which interface eth1:1 is attached. ** No subnet declaration for eth1:1 (no IPv4 addresses). ** Ignoring requests on eth1:1. If this is not what you want, please write a subnet declaration in your dhcpd.conf file for the network segment to which interface eth1:1 is attached. ** I had another server before, also running Debian but with the older dhcp3 server, and it worked without any problems. I've tried everything I can think of in dhcpd.conf etc, and I've also compared with the working configuration in the old server. The dhcp server need only handle devices on 10.11.100.0. Any hints? Here's all relevant config files: /etc/default/isc-dhcp-server INTERFACES="eth1" /etc/network/interfaces (I've left out eth0, that connects to the Internet, since there is no problem with that.) auto eth1:0 iface eth1:0 inet static address 10.11.100.202 netmask 255.255.255.0 auto eth1:1 iface eth1:1 inet static address 10.4.1.248 netmask 255.255.255.0 /etc/dhcp/dhcpd.conf ddns-update-style none; option domain-name "???.com"; option domain-name-servers ?.?.?.?; default-lease-time 86400; max-lease-time 604800; authorative; subnet 10.11.100.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; pool { range 10.11.100.50 10.11.100.99; } option routers 10.11.100.102; } I have tried to add shared-network etc, but didn't manage to get that to work. I get the same error message no matter what...
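
    dhcpd refuses to serve an interface unless dhcpd.conf contains a subnet declaration for every network configured on it, which now includes the 10.4.1.0/24 alias. The commonly suggested shape is a shared-network block that declares both subnets living on eth1, with an empty declaration for the one that gets no leases; it also tends to help to put the 10.11.100.202 address on eth1 itself rather than only on the eth1:0 alias, so the interface dhcpd binds to actually carries an address. A sketch only; in practice fold the existing 10.11.100.0 block into it rather than keeping both:

        # Append a candidate shared-network block for review/testing
        printf '%s\n' \
            'shared-network lan {' \
            '    subnet 10.11.100.0 netmask 255.255.255.0 {' \
            '        option routers 10.11.100.102;' \
            '        pool { range 10.11.100.50 10.11.100.99; }' \
            '    }' \
            '    # declared so dhcpd accepts the interface; no leases handed out here' \
            '    subnet 10.4.1.0 netmask 255.255.255.0 {' \
            '    }' \
            '}' \
            >> /etc/dhcp/dhcpd.conf

        # Syntax-check, then restart
        dhcpd -t -cf /etc/dhcp/dhcpd.conf
        service isc-dhcp-server restart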

    Read the article

  • centos 6 ps aux hangs up

    - by Guntis
    I have a problem with my server. The server is running CentOS 6 (CloudLinux Server release 6.2), uname -a = 2.6.32-320.4.1.lve1.1.4.el6.x86_64. It is a KVM guest; the host runs Debian 6. If I run the command ps aux, it gets stuck on a random process (it shows only some of the processes); the top command works fine. htop doesn't work either (black screen).
    top - 12:11:51 up 34 min, 1 user, load average: 4.26, 6.71, 16.15 Tasks: 201 total, 7 running, 192 sleeping, 0 stopped, 2 zombie Cpu(s): 7.9%us, 2.8%sy, 0.0%ni, 87.5%id, 1.6%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 9862044k total, 2359484k used, 7502560k free, 171720k buffers Swap: 10485720k total, 0k used, 10485720k free, 1336872k cached
    The server has one Intel(R) Xeon(R) CPU E5606 @ 2.13GHz.
    free -m total used free shared buffers cached Mem: 9630 2336 7293 0 170 1324 -/+ buffers/cache: 841 8789 Swap: 10239 0 10239
    php -v PHP 5.3.19 (cli) (built: Nov 28 2012 10:03:07) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd., and with Zend Guard Loader v3.3, Copyright (c) 1998-2010, by Zend Technologies with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH
    mysql Server version: 5.1.63-cll
    php -i disable_functions => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source, mail => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source, mail ...
    suhosin.executor.disable_eval => Off => Off suhosin.executor.eval.blacklist => include, include_once, require, require_once, curl_init, fpassthru, base64_encode, base64_decode, mail, exec, system, proc_open, leak, syslog, pfsockopen, shell_exec, ini_restore, symlink, stream_socket_server, proc_nice, popen, proc_get_status, dl, pcntl_exec, pcntl_fork, pcntl_signal, pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled, pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept, socket_bind, socket_connect, socket_create, socket_create_listen, socket_create_pair, link, register_shutdown_function, register_tick_function, gzinflate => include, include_once, require, require_once, curl_init, fpassthru, base64_encode, base64_decode, mail, exec, system, proc_open, leak, syslog, pfsockopen, shell_exec, ini_restore, symlink, stream_socket_server, proc_nice, popen, proc_get_status, dl, pcntl_exec, pcntl_fork, pcntl_signal, pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled, pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept, socket_bind, socket_connect, socket_create, socket_create_listen, socket_create_pair, link, register_shutdown_function, register_tick_function, gzinflate
    Sometimes I cannot kill an httpd process: I run kill -9 PID even several times, and nothing happens. PHP runs via suPHP. I learned somewhere that it can be a trojan. I ran strace ps aux and it stops on open("/proc/PID/cmdline", O_RDONLY). If I reboot the server the problem is gone, but after some time it is back again.. :( Thanks.
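
    ps hangs here because reading /proc/PID/cmdline has to touch the target process's memory, and that blocks when the process is stuck in uninterruptible (D) state inside the kernel - which is also why kill -9 appears to do nothing. You can find the stuck process without triggering the same hang by reading /proc/PID/stat instead; a sketch (the parsing assumes process names without spaces):

        # List processes in uninterruptible sleep without reading their cmdline
        for d in /proc/[0-9]*; do
            read -r pid comm state rest < "$d/stat" 2>/dev/null || continue
            [ "$state" = "D" ] && echo "stuck: $pid $comm"
        done

        # For a given PID, the kernel stack usually shows what it is blocked on
        # (often NFS, a dying block device, or a filesystem layer such as CloudLinux's)
        cat /proc/PID/stack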

    Read the article

  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time." Points of interest: All machines are receiving their IP/DNS info from the router using DHCP. I have a Mac Mini on the same network that connects to the NAS drive and the Windows machine perfectly with no config (i.e. it worked out of the box). Both Macs are on the same version of Snow Leopard. There is no password required to access the NAS share. I've never set up a WINS server on any machine, and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank; neither solves the problem. Here are some things I have tried: Finder > Connect To Server: smb:///share. This works, but by name it does not. Terminal: mount_smbfs //@/share share. This also works by IP, but not by name, resulting in "mount_smbfs: server connection failed: No route to host". If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name. It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP addresses of 192.168.x.x (all machines are DHCP clients), whereas it used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy
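
    Since connecting by IP works and only the name fails, it can help to test the NetBIOS side directly from Terminal before deciding between the WINS-entry workaround and router/DNS changes; smbutil ships with OS X, and the hostnames and IP below are placeholders:

        # Does NetBIOS name resolution work at all from this Mac?
        smbutil lookup nasbox

        # What NetBIOS name/workgroup does the target answer with when asked by IP?
        smbutil status 192.168.1.20

        # Try the mount by name on the command line to see the exact failure
        mkdir -p /Volumes/nas
        mount_smbfs //guest@nasbox/share /Volumes/nas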

    Read the article

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs". Stuff we need: expandable storage. I want to be able to add extra disc as we go along, with minimal maintenance required. Currently we've got about 3Tb of files we need to host, and that's likely to grow by another Tb every 6-12 months based on recent history. I need to be able to add additional disc with minimal pain needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house to playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work) RAID 5, or something broadly equivalent (e.g. RAID-Z) BitTorrent client ssh, NFS, Samba access snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and rollback when my kids delete their school assignments the day before they're due... ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS' batteries) FOSS software a modern distributed version control system running on the box, such as Mercurial Stuff I'd like to have on the server, but can live without: PVR capability, so I could record TV to the box Web server. We currently run a small Web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server just to save some electricity Nagios + mrtg I've been looking at using a EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found: I've got most experience with various Linux distros, but am happy to use another Unix FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu btrfs, while looking very good, doesn't seem ready for prime-time yet ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z reading around, it seems that ZFS is a bit short of tools for recovering lost data At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new disc on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance
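
    On the "add new discs to an existing RAID-Z" concern: a raidz vdev cannot be widened one disc at a time, but a pool can be grown by adding another whole raidz vdev (or by replacing every disc in a vdev with larger ones). A sketch of what that looks like in practice, with placeholder device names:

        # Initial pool: one 4-disc raidz vdev
        zpool create tank raidz da1 da2 da3 da4

        # Later growth: a second 4-disc raidz vdev is added; pool capacity grows,
        # but the original vdev itself is never widened
        zpool add tank raidz da5 da6 da7 da8

        # Cheap rollback protection for accidentally deleted school assignments
        zfs snapshot tank/home@nightly
        zfs rollback tank/home@nightly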

    Read the article

  • Why MySQL sat for 2 minutes doing nothing?

    - by Alex R
    This was a one-time thing, not reproducible... But I saved the SHOW INNODB STATUS output. Can anybody tell what's going on here? The simple insert took almost 3 minutes to complete.

        =====================================
        110201 15:58:10 INNODB MONITOR OUTPUT
        =====================================
        Per second averages calculated from the last 34 seconds
        ----------
        SEMAPHORES
        ----------
        OS WAIT ARRAY INFO: reservation count 11963, signal count 11766
        --Thread 1824 has waited at .\btr\btr0cur.c line 443 for 118.00 seconds the semaphore:
        S-lock on RW-latch at 09D6453C created in file .\buf\buf0buf.c line 550
        a writer (thread id 1824) has reserved it in mode wait exclusive
        number of readers 1, waiters flag 1
        Last time read locked in file .\buf\buf0flu.c line 599
        Last time write locked in file .\btr\btr0cur.c line 443
        Mutex spin waits 0, rounds 527817, OS waits 7133
        RW-shared spins 2532, OS waits 1226; RW-excl spins 1652, OS waits 1118
        ------------
        TRANSACTIONS
        ------------
        Trx id counter 0 95830
        Purge done for trx's n:o < 0 95814 undo n:o < 0 0
        History list length 11
        LIST OF TRANSACTIONS FOR EACH SESSION:
        ---TRANSACTION 0 0, not started, OS thread id 3704
        MySQL thread id 551, query id 2702112 localhost 127.0.0.1 root
        show innodb status
        ---TRANSACTION 0 95829, not started, OS thread id 3132
        MySQL thread id 534, query id 2702020 localhost 127.0.0.1 root
        ---TRANSACTION 0 95828, not started, OS thread id 3152
        MySQL thread id 527, query id 2701973 localhost 127.0.0.1 root
        ---TRANSACTION 0 95827, ACTIVE 118 sec, OS thread id 1824 inserting, thread declared inside InnoDB 500
        mysql tables in use 1, locked 1
        1 lock struct(s), heap size 320, 0 row lock(s)
        MySQL thread id 526, query id 2701972 localhost 127.0.0.1 root update
        INSERT INTO log_searchcriteria (userid,search_criteria,date,search_type) VALUES (NAME_CONST('userid',NULL), NAME_CONST('search_criteria',_latin1' SELECT SQL_CALC_FOUND_ROWS idx_search.CTCX_LATITUDE, idx_search.CTCX_LONGITUDE, idx_search.building_id, idx_search.LN_LIST_NUMBER, idx_search.LP_LIST_PRICE, idx_search.HSN_ADRESS_HOUSE_NUMBER, idx_search.STR_ADDRESS_STREET, idx_search.CP_ADDRESS_COMPASS_POINT, idx_search.UN_UNIT, idx_search.CIT_CITY, idx_search.ZP_ZIP_CODE, idx_search.AR_AREA_NAME, idx_search.BR_BEDROOMS, idx_search.BTH_BATHS, idx_search.ST_STATUS, idx_search.CTCX_STYLE_TYPE, idx_s
        --------
        FILE I/O
        --------
        I/O thread 0 state: wait Windows aio (insert buffer thread)
        I/O thread 1 state: wait Windows aio (log thread)
        I/O thread 2 state: wait Windows aio (read thread)
        I/O thread 3 state: wait Windows aio (write thread)
        Pending normal aio reads: 0, aio writes: 1,
        ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
        Pending flushes (fsync) log: 0; buffer pool: 0
        151006 OS file reads, 120758 OS file writes, 6844 OS fsyncs
        0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
        -------------------------------------
        INSERT BUFFER AND ADAPTIVE HASH INDEX
        -------------------------------------
        Ibuf: size 1, free list len 5, seg size 7,
        24664 inserts, 24664 merged recs, 4612 merges
        Hash table size 553253, node heap has 629 buffer(s)
        0.00 hash searches/s, 0.00 non-hash searches/s
        ---
        LOG
        ---
        Log sequence number 5 2318193115
        Log flushed up to   5 2318193115
        Last checkpoint at  5 2318129891
        0 pending log writes, 0 pending chkp writes
        3036 log i/o's done, 0.00 log i/o's/second
        ----------------------
        BUFFER POOL AND MEMORY
        ----------------------
        Total memory allocated 213459462; in additional pool allocated 1720192
        Dictionary memory allocated 240416
        Buffer pool size   8192
        Free buffers       0
        Database pages     7563
        Modified db pages  18
        Pending reads 0
        Pending writes: LRU 0, flush list 18, single page 0
        Pages read 150973, created 28788, written 115137
        0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        No buffer pool page gets since the last printout
        --------------
        ROW OPERATIONS
        --------------
        1 queries inside InnoDB, 0 queries in queue
        1 read views open inside InnoDB
        Main thread id 2992, state: flushing buffer pool pages
        Number of rows inserted 794294, updated 89203, deleted 13698, read 1453084305
        0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        ----------------------------
        END OF INNODB MONITOR OUTPUT
        ============================

    Thanks
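
    The numbers above point at the buffer pool rather than at locking: the pool is only 8192 pages (roughly 128MB at the default 16KB page size), free buffers are at 0, 18 dirty pages are queued for flushing, and the main thread is in the "flushing buffer pool pages" state while the INSERT waits 118 seconds on a buffer-pool latch. A hedged sketch of how one might confirm and adjust this - the variable names are real MySQL/InnoDB settings, but the 1G figure is only an illustrative starting point, not a sizing recommendation for this particular server:

        # Current pool size, and the same section of the status output for comparison
        mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"

        # my.cnf / my.ini, under [mysqld]; changing the pool size needs a restart on
        # MySQL of this era, and resizing the log files needs a clean shutdown first
        #   innodb_buffer_pool_size = 1G
        #   innodb_log_file_size    = 256M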

    Read the article

  • 64-bit linux kernel only seeing 3 of 4GB after upgrade...

    - by Blaine
    Hey everyone. I am running Ubuntu 9.04 64-bit on my MacBook. I had 2GB of RAM before, and everything ran great. I just upgraded to 2x2GB (4GB), but my system only sees 3GB of it. OS X, which I am dual booting, sees all 4GB. Also, my video performance is now severely lacking: before the upgrade my compiz benchmark sat at a full 80fps, and now it is at 22fps with very choppy window dragging. Has anyone ever heard of this on a 64-bit kernel? I just don't quite understand what the issue could be.

        $ uname -a
        Linux macbook 2.6.28-15-generic #49-Ubuntu SMP Tue Aug 18 19:25:34 UTC 2009 x86_64 GNU/Linux

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          2953       1031       1921          0        114        427
        -/+ buffers/cache:         489       2463
        Swap:         7812          0       7812

        $ lsmod
        Module                  Size  Used by
        i915                   77960  2
        drm                   123232  3 i915
        binfmt_misc            18572  1
        ppdev                  16904  0
        btusb                  21784  2
        bridge                 63776  0
        stp                    11140  1 bridge
        bnep                   22912  2
        vboxnetadp            109356  0
        vboxnetflt            116972  0
        vboxdrv              1721612  1 vboxnetflt
        uvcvideo               69640  0
        compat_ioctl32         18304  1 uvcvideo
        videodev               45184  2 uvcvideo,compat_ioctl32
        v4l1_compat            23940  2 uvcvideo,videodev
        lp                     19588  0
        parport                49584  2 ppdev,lp
        snd_hda_intel         557492  3
        snd_pcm_oss            52352  0
        snd_mixer_oss          24960  1 snd_pcm_oss
        snd_pcm                99464  2 snd_hda_intel,snd_pcm_oss
        arc4                   10240  2
        snd_seq_dummy          11524  0
        ecb                    11392  2
        snd_seq_oss            41984  0
        snd_seq_midi           15744  0
        snd_rawmidi            33920  1 snd_seq_midi
        snd_seq_midi_event     16512  2 snd_seq_oss,snd_seq_midi
        snd_seq                66272  6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
        ath9k                 310584  0
        snd_timer              34064  2 snd_pcm,snd_seq
        snd_seq_device         16276  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
        mac80211              251528  1 ath9k
        iTCO_wdt               21712  0
        iTCO_vendor_support    12420  1 iTCO_wdt
        joydev                 20992  0
        video                  29204  0
        snd                    78920 15 snd_hda_intel,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        applesmc               37700  0
        output                 11648  1 video
        soundcore              16800  1 snd
        pcspkr                 11136  0
        cfg80211               43680  1 mac80211
        appletouch             19972  0
        isight_firmware        11520  0
        input_polldev          12688  1 applesmc
        intel_agp              39408  1
        snd_page_alloc         18704  2 snd_hda_intel,snd_pcm
        led_class              13064  2 ath9k,applesmc
        hid_apple              15872  0
        usbhid                 47040  0
        ohci1394               42164  0
        ieee1394              108288  1 ohci1394
        sky2                   63364  0
        fbcon                  49792  0
        tileblit               11264  1 fbcon
        font                   17024  1 fbcon
        bitblit                14464  1 fbcon
        softcursor             10368  1 bitblit

    Some information from dmesg:

        [  795.820163] ACPI: EC: GPE storm detected, transactions will use polling mode
        [ 1762.709516] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 1763.078130] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2362.760889] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2416.352084] ACPI: EC: missing confirmations, switch off interrupt mode.
        [ 3718.721095] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 3719.108914] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 4318.773266] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 9513.813066] CE: hpet increasing min_delta_ns to 15000 nsec
        [ 9693.815684] npviewer.bin[6736]
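
    A sketch of the usual first checks for a "missing" gigabyte, assuming the dmidecode package is installed. If the BIOS-provided e820 map already stops around 3GB, the memory is being reserved by the firmware/chipset - something several MacBook generations of that era are known to do - rather than being hidden by the 64-bit kernel:

        # What the firmware handed to the kernel at boot
        dmesg | grep -i -E "e820|BIOS-provided"

        # What the kernel ended up with
        grep MemTotal /proc/meminfo

        # What the DIMMs themselves report (should list both 2048MB modules if detected)
        sudo dmidecode --type memory | grep -i size

    The compiz slowdown is worth checking separately (for example, whether the i915 driver fell back to software rendering after the upgrade), since it may have a different cause than the missing RAM.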

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule, which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth, but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory, and the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:
    1. Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule, so there should be no possibility of a hang inside it.
    2. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
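
    For the first workaround, a minimal sketch of what removing the module from the shared web.config might look like. This assumes the sites stay in the Integrated Pipeline and that the modules section isn't locked at the server level (if it is, it has to be unlocked in applicationHost.config first); the exact module name can be confirmed under the site's Modules feature in IIS Manager:

        <configuration>
          <system.webServer>
            <modules>
              <!-- The anonymous sites no longer load Windows auth at all;
                   the relocated admin site keeps it in its own web.config -->
              <remove name="WindowsAuthenticationModule" />
            </modules>
          </system.webServer>
        </configuration>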

    Read the article

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS. It's a WAMP stack, 50GB storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it writes it to disk and then we schedule a task to download the orders once every 10 minutes.

    A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mailserver and freed up a couple of GB pretty quickly, but I wondered how we could fill up 50GB with nothing much more than logs. Turns out, we didn't. Hidden deep within the c:\System Volume Information directory, we have a stack of pirated videos, which seem to have appeared (looking at the timestamps) over the past three weeks. Porn, American sports, Australian cooking shows. A very odd collection. It doesn't look like an individual's personal tastes - more like the VPS is being used as a mule.

    We have a 5-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed - and logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show has appeared this morning.

    I am the only one at my company who knows of this problem, and only one of two with access to the VPS (the other being my boss, but no - it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do this?) I don't want to delete the warez in case it is seen as a hostile action against this outside force, and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?
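
    A few first checks that might narrow down how the files are arriving, assuming a reasonably standard Windows VPS - everything below is a stock Windows tool, run from an administrator command prompt:

        rem What is actually shared out, and to whom
        net share

        rem Anything listening that you did not put there (match PIDs against tasklist)
        netstat -ano | findstr LISTENING
        tasklist /svc

        rem Who can write to the directory the files are landing in
        cacls "C:\System Volume Information"

    If the shares and listeners look clean, it is also worth asking the VPS provider whether anything on the host side (backups, snapshots, another guest) writes into System Volume Information, since that directory is normally reserved for the system.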

    Read the article

  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured, but I'm not sure what. Here's my configuration:

        # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $"
        #
        # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a
        # complete description of this file.
        #

        # Log general information in error_log - change "warn" to "debug"
        # for troubleshooting...
        LogLevel warn

        # Administrator user group...
        SystemGroup lpadmin sys root

        # Only listen for connections from the local machine.
        Listen 192.168.6.101:631
        Listen /var/run/cups/cups.sock
        ServerName 192.168.6.101

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS
        BrowseAddress 192.168.6.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to the admin pages...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to configuration files...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Set the default printer/job policies...
        # Job-related operations must be done by the owner or an administrator...
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # Set the authenticated printer/job policies...
        # Job-related operations must be done by the owner or an administrator...
        AuthType Default
        Order deny,allow

        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        #
        # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $".
        #
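
    It looks like the angle-bracket section markers (<Location ...>, <Policy ...>, <Limit ...>) were stripped when the file was pasted, so it's hard to tell which directives belong to which block. For reference, a minimal sketch of how the access blocks normally look in a stock cupsd.conf when the admin pages should be reachable from the LAN - the 192.168.6.* range is taken from the Listen/Browse lines above and is otherwise an assumption:

        <Location />
          Order allow,deny
          Allow from 127.0.0.1
          Allow from 192.168.6.*
        </Location>

        <Location /admin>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow from 127.0.0.1
          Allow from 192.168.6.*
        </Location>

        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow from 127.0.0.1
          Allow from 192.168.6.*
        </Location>

    If the blocks in the real file are intact, raising LogLevel to debug and watching /var/log/cups/error_log while reproducing the 500 will usually name the exact request that failed.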

    Read the article

< Previous Page | 417 418 419 420 421 422 423 424 425 426 427 428  | Next Page >