Search Results

Search found 31717 results on 1269 pages for 'response write'.


  • How to confirm php enabled on ubuntu server

    - by Shishant
    Hello, I am not much into Linux. I am trying to set up a server through SSH. I installed Apache, PHP and MySQL with this command:

        sudo aptitude install apache2 php5-mysql libapache2-mod-php5 mysql-server

    but I think PHP is not enabled on the server. When I check for Apache I get a response:

        $ which apache2ctl
        /usr/sbin/apache2ctl

    but when I check for PHP I receive no response:

        $ which php

        $ locate php5
        /etc/apparmor.d/abstractions/php5
        /usr/share/ubuntu-serverguide/html/C/php5.html
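
    Note that "which php" finds nothing because libapache2-mod-php5 ships only the Apache module, not the command-line binary. A quick way to confirm PHP is actually working, assuming a stock Ubuntu/Apache2 layout (the info.php test page is just an example):

        # make sure the module is enabled, then restart Apache
        sudo a2enmod php5 && sudo service apache2 restart
        # the `php` command comes from a separate package
        sudo aptitude install php5-cli
        php -v
        # confirm Apache parses PHP: create a test page and fetch it
        echo '<?php phpinfo();' | sudo tee /var/www/info.php
        curl -s http://localhost/info.php | head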

    Read the article

  • Strange focus bug in Firefox (chrome vs content)

    - by Marius
    Here is a strange bug I'm experiencing in Firefox: I can only use either the chrome or the content, not both at the same time! For example, I can click on tabs and the toolbar icons, and focus and type in the search bar and the address bar, but if I try to click on anything in the content (e.g. a link or a text field), nothing happens. The mouse pointer doesn't change either; it stays a plain pointer when I hover over things, and the links I hover over don't react. But if I alt-tab to another program (or click on it in the taskbar) and then back to Firefox, I can use whichever area I click on. So if I click somewhere on the webpage to get focus back to Firefox, I can click on links and type things (like this text), but I cannot click on tabs, refresh, or anything else in the chrome. I can't even click on the minimize, restore and close icons! To get focus back on the chrome I have to alt-tab to another program and then click on the chrome to get back to Firefox and be able to use the chrome again. I've tried closing and restarting it, but the bug is still there. I have experienced this before, but I don't remember what I did to fix it. It seems to occur sometimes when I wake the computer from standby, but I leave my computer in standby all the time, so that is not the only factor.

    Read the article

  • Apache Serving 403 Forbidden after OS X Snow Leopard Upgrade to Version 10.6.6

    - by Ian Oxley
    I've just upgraded my MacBook Pro to OS X Snow Leopard version 10.6.6 and now Apache is misbehaving:

    - requests to http://localhost/ generate a 403 Forbidden response (FIXED, see update below)
    - requests to any of my virtual hosts seem to generate a 200 OK response, but contain zero bytes

    Some further info that might be useful: I'm using the Apache that comes bundled with OS X, and PHP from http://www.entropy.ch/software/macosx/php/ (which is in /usr/local/bin). I've had a look at the Apache error log and the only error seems to be the following:

        [notice] child pid 744 exit signal Segmentation fault (11)

    I'm completely stumped by this. Any help would be much appreciated.

    UPDATE: I've managed to resolve the 403 Forbidden error thanks to http://techtrouts.com/mac-os-x-105-web-sharing-forbidden-403-on-httplocalhostusername/ but I'm still having the second problem: the same zero-byte response now happens for any request, e.g. when I request http://localhost
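
    A child segfaulting on every request usually points at a loaded module built against the pre-update libraries, and a third-party PHP build is the usual suspect after a point update. A hedged way to test (paths are the OS X defaults):

        $ apachectl configtest                   # syntax check
        $ tail -f /var/log/apache2/error_log &   # watch for new segfault lines
        $ curl -v http://localhost/
        # temporarily comment out the entropy.ch LoadModule php5_module line
        # in /etc/apache2/httpd.conf, then:
        $ sudo apachectl restart

    If the zero-byte responses stop, reinstalling the PHP package built for 10.6.6 should fix it.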

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config:

        upstream backend {
            server localhost:8080;
            # ... more servers here
        }
        server {
            location /myloc {
                FORK-REQUEST http://my-other-url:3135/something;
                proxy_pass http://backend;
            }
        }

    I would like nginx to send a copy of the request to the URL specified by FORK-REQUEST, and after that to load-balance the request across the backend servers and return the response to the client. As I don't need the response from FORK-REQUEST, it would be best if that request were asynchronous, so normal processing doesn't have to wait. Is a scenario like this possible?
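
    Later nginx releases (1.13.4 and up) added ngx_http_mirror_module, which does almost exactly this: a mirror subrequest is fired for every request and its response is discarded. A sketch using the hostnames from the question:

        location /myloc {
            mirror /fork;                # a copy of every request goes here
            proxy_pass http://backend;   # the real request is load-balanced as usual
        }
        location = /fork {
            internal;
            proxy_pass http://my-other-url:3135/something;
        }

    One caveat: even though the mirror's response is ignored, a slow mirror backend can delay the client connection, since nginx waits for the mirror subrequest to complete before handling the next request on that connection.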

    Read the article

  • How come my Intel 520 180GB SSD performs extremely poorly?

    - by Willem
    I recently installed a new Intel 520 series 180GB SSD in my brand new MacBook Pro. The system is as follows:

        Model: MacBook Pro 15-inch, Late 2011 (MacBookPro8,2)
        Processor: 2.4 GHz Intel Core i7
        Memory: 16 GB 1333 MHz DDR3
        Graphics: AMD Radeon HD 6770M 1024 MB
        Software: Mac OS X Lion 10.7.3
        Main Drive Bay: Intel 520-series 180GB SATA-3 (6Gb/s negotiated link) SSD (Firmware: 400i) [80GB free]
        Optical Bay: Toshiba 5400 RPM 750GB SATA-2 HDD
        Trim: Enabled (according to the Trim Enabler app)

    And here are the speeds I'm getting:

        Read:  412 MB/s
        Write: 186 MB/s

    I expected read and write speeds of about 500 MB/s each. What have I done wrong? I have seen benchmarks where lesser SSDs (even SATA-2 ones) outperform my write speeds by far, and the Intel 520 is supposed to be a top-class SSD. Trim Enabler's report looks a bit odd compared to the screenshots on their site, and my S.M.A.R.T. attributes, read using the smartctl tool from smartmontools, don't seem to match the attribute list Intel defines (screenshots omitted). I'm going to look for a S.M.A.R.T. attribute reader for OS X that supports the Intel 520 series.
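
    Worth noting, as a likely explanation rather than a certainty: the Intel 520 uses a SandForce controller whose rated write speed assumes compressible data, and benchmarks that write incompressible data typically report write figures close to these. For the S.M.A.R.T. side, smartmontools itself usually suffices on OS X (the device name below is an assumption; check with diskutil first):

        $ diskutil list           # find the SSD, e.g. /dev/disk0
        $ smartctl -i /dev/disk0  # identity info, confirms the drive is recognised
        $ smartctl -A /dev/disk0  # the vendor attribute table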

    Read the article

  • Remote Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user39891
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a daemonized application in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections over a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if they are found, the daemon spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to talk to the webservers over TCP on port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, in one of two ways: (A) all threads seem to hang, and eventually one thread returns -1 with errno set to 104; the responding thread takes around 10 minutes - an eternity:-(. (Note: errno 104 is ECONNRESET, not EINTR; it means "connection reset by peer".) Or (B) 3 threads read 0 bytes, and only 1 of the 4 threads actually returns some data. Aren't socket read/write calls thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal) it all works perfectly, though of course in series rather than in parallel. There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon hangs and hogs the CPU at 100% forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this happens - no errors are detected either. Any ideas? I cannot figure out what is wrong, even with Valgrind, GDB or heavy logging. Kindly help where you can.
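
    Two quick ways to see where each thread is actually stuck, hedged since the daemon's name is hypothetical here:

        # which syscall is each thread blocked in?
        $ strace -f -e trace=network,read,write -p $(pidof mydaemon)
        # or grab a one-shot stack trace of every thread
        $ gdb -p $(pidof mydaemon) -batch -ex 'thread apply all bt'

    The 100% CPU spin on the epoll side is a classic symptom of a level-triggered fd being reported writable forever, or of a closed fd never being removed from the event loop, so a backtrace taken during the spin is usually revealing.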

    Read the article

  • virtualbox ftp hangs on list command

    - by Tiddo
    Hi all, I have VirtualBox installed on a Windows 7 64-bit computer, with CentOS 5.5 as the guest OS, and I want to use FTP between them. I've installed vsftpd on the guest, which uses a NAT connection with the host for internet access. So far I am able to connect to the guest over FTP (in FileZilla), but after the LIST command is executed nothing happens until the command times out. This happens in both active and passive mode. I do have pasv_min_port/pasv_max_port set in the vsftpd.conf file, listing is enabled, and the ports are redirected in VirtualBox. The ftp_data_port is also set to 20. I also tried setting pasv_address, but I had to set it to 127.0.0.1, and then FileZilla gives me this:

        Command:  PASV
        Response: 500 OOPS: bad family
        Command:  PORT 127,0,0,1,139,204
        Response: 500 OOPS: child died

    Can someone help me with this?
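
    With NAT, every passive data port needs a forwarding rule just like the control port. A sketch assuming a passive range of 60000-60010 and a VM named "centos" (both assumptions), using the VBoxManage syntax of VirtualBox 4.x and later:

        # guest: vsftpd.conf
        pasv_enable=YES
        pasv_min_port=60000
        pasv_max_port=60010

        # host: forward the control port and each data port
        VBoxManage modifyvm "centos" --natpf1 "ftp,tcp,,21,,21"
        VBoxManage modifyvm "centos" --natpf1 "pasv60000,tcp,,60000,,60000"
        # ...repeat for each port in the passive range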

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:

        error_log /var/log/nginx-error.log notice;
        ......
        server {
            access_log off;
            location / {
                ....
            }
        }

    but in my /var/log/messages I still see:

        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"

    How can I prevent nginx from sending messages to my syslog?
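
    The "nginx:" tag on those lines means they reach syslog directly rather than through the error_log file - typically because the binary was built with a syslog patch, or because another error_log directive is still in effect. A hedged audit (paths are assumptions):

        $ nginx -V 2>&1 | tr ' ' '\n' | grep -i syslog   # built with a syslog patch?
        $ grep -R error_log /etc/nginx/                  # find every error_log directive
        # if a syslog-patched build is the cause, point the log at a file
        # and raise the level, e.g.:  error_log /var/log/nginx-error.log error;
        $ nginx -s reload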

    Read the article

  • Nginx. How do I reject request to unlisted ssl virtual server?

    - by Osw
    I have a wildcard SSL certificate and several subdomains on the same IP. Now I want my nginx to handle only the listed server names and drop the connection for the others, so that it looks as if nginx is not running for unlisted server names (not responding, rejecting, dead, not a single byte in response). I do the following:

        ssl_certificate tls/domain.crt;
        ssl_certificate_key tls/domain.key;

        server {
            listen 1.2.3.4:443 ssl;
            server_name validname.domain.com;
            ...
        }
        server {
            listen 1.2.3.4:443 ssl;
            server_name _;
            # deny all;
            # return 444;
            # return 404;
            # location / {
            #     deny all;
            # }
        }

    I've tried almost everything in the last server block, but with no success. I get either a valid response from a known virtual server or an error code. Please help.
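
    return 444 in the catch-all server is the classic answer: nginx closes the TCP connection without sending any HTTP response. It cannot hide the TLS handshake, though, since the handshake completes before nginx ever sees the request. Newer nginx (1.19.4+) can abort the handshake itself; a sketch:

        server {
            listen 1.2.3.4:443 ssl default_server;
            ssl_reject_handshake on;   # unlisted names get a handshake failure, not a byte of HTTP
        }

    Marking the catch-all as default_server also matters; otherwise the first server block for that address answers unknown names.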

    Read the article

  • Alignment of ext3 partition on LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM volume that resides on a RAID6 volume group, and fdisk is complaining about the partition not residing on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM volume? The partition will be formatted ext3. Would it be better to just format the LVM volume directly instead of creating a new partition?

        Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
        255 heads, 63 sectors/track, 267349 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
        Disk identifier: 0x4e428f49

                       Device Boot      Start         End      Blocks   Id  System
        /dev/dedvol/backup1                63      267349  2146982827+  83  Linux
        Partition 1 does not start on physical sector boundary.

        lvdisplay /dev/dedvol/backup
          --- Logical volume ---
          LV Name                /dev/dedvol/backup
          VG Name                dedvol
          LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                2.00 TiB
          Current LE             524288
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     32768
          Block device           253:1

        vgdisplay dedvol
          --- Volume group ---
          VG Name               dedvol
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  3
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                2
          Open LV               1
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               14.55 TiB
          PE Size               4.00 MiB
          Total PE              3815448
          Alloc PE / Size       3670016 / 14.00 TiB
          Free PE / Size        145432 / 568.09 GiB
          VG UUID               8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
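
    The reported optimal I/O size is 8388608 bytes, i.e. 16384 512-byte sectors, so an aligned partition would start at sector 16384 (or any multiple of it) rather than the DOS default of 63. In practice, formatting the LV directly is simpler and sidesteps alignment entirely; a sketch of both options:

        # option 1: no partition table at all - put the filesystem on the LV
        mkfs.ext3 /dev/dedvol/backup

        # option 2: if a partition is really wanted, start it on the 8 MiB boundary
        parted --align optimal /dev/dedvol/backup mklabel msdos
        parted --align optimal /dev/dedvol/backup mkpart primary ext3 8MiB 100%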

    Read the article

  • basic device that can connect to internet

    - by Hellnar
    Hello, I am looking for a cheap solution to my problem: I need to find either an already existing common device (of the kind used in restaurants, bars and clubs) or a cheap new device that I can distribute to those places, which can connect to the internet (via the existing ethernet or wireless connection), make an HTTP request, receive the response and retrieve information. (For instance, can a POS device connect to the internet?) For a project, I need to do identity validation at several restaurants and bars, and not all of them have computers. So I will be handing out cheap, easy-to-use devices that non-IT personnel can use to make HTTP requests to my server and get a response. All I can think of is cell phones and SMS.

    Read the article

  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a Python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware Player 5.0.0. The VM is running Ubuntu Netbook 10.04. I am able to get the pipe working on the local machine, but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine.

        #!/usr/bin/python
        import os
        #
        # Create a named pipe that will be used as the serial port on a VMware virtual machine
        SerialPipe = '/tmp/gpsd2NMEA'
        try:
            os.unlink(SerialPipe)
        except:
            pass
        os.mkfifo(SerialPipe)
        #
        # Open the named pipe
        NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK)
        #
        # Write a string to the named pipe
        NMEAtime = "235959"
        os.write(NMEApipe, str( '%s\n' % NMEAtime ))

    Test to see if the Python program is working on the host machine (displays 235959 if data is passing through the pipe):

        $ cat /tmp/gpsd2NMEA
        235959

    Serial port as defined in the VMware .vmx file:

        serial0.present = "TRUE"
        serial0.startConnected = "TRUE"
        serial0.fileType = "pipe"
        serial0.fileName = "/tmp/gpsd2NMEA"
        serial0.pipe.endPoint = "client"
        serial0.autodetect = "FALSE"
        serial0.tryNoRxLoss = "TRUE"
        serial0.yieldOnMsrRead = "TRUE"

    Test to see if the serial port in the VM is receiving data:

        $ cat /dev/ttyS0
        or
        $ minicom -D /dev/ttyS0
        or
        $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200
        $ echo < /dev/ttyS0

    None of these display any data from the Python program.
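
    One known gotcha, hedged since it depends on the VMware version: on Linux hosts, "pipe" serial ports are implemented as Unix domain sockets, not FIFOs created with mkfifo. With serial0.pipe.endPoint = "client", VMware connects as a client to a socket your program must be listening on; a minimal sketch of the host side:

        #!/usr/bin/python
        import os, socket

        path = '/tmp/gpsd2NMEA'
        try:
            os.unlink(path)
        except OSError:
            pass
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)
        srv.listen(1)
        conn, _ = srv.accept()     # VMware connects when the VM powers on
        conn.send('235959\n')      # should appear on /dev/ttyS0 in the guest

    Note that with this approach the "cat /tmp/gpsd2NMEA" test no longer applies, since the path is a socket rather than a FIFO.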

    Read the article

  • IIS 6 windows 2003 help installing SSL cert

    - by ADAM
    I requested a new SSL cert from GoDaddy, which has been issued. When I try to install it in IIS through the website's Directory Security tab, I get the error: "The pending certificate request for this response file was not found. This request may be cancelled. You cannot install the selected response certificate using this wizard." I may have run the wizard and deleted the pending request. Is there any way I can install the certificate without requesting a new one? (I hope so.) I still have the original certrequest.txt file.
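
    A common way around a lost pending request, assuming the private key from the original request still exists on the machine: install the issued certificate into the machine store, repair the key association, then point IIS at it via "Assign an existing certificate" in the same wizard:

        C:\> certutil -addstore MY certificate.cer
        C:\> certutil -repairstore MY "<serial number of the certificate>"
        rem then in IIS Manager: Directory Security -> Server Certificate ->
        rem "Assign an existing certificate"

    If the private key was deleted along with the pending request, this cannot work and GoDaddy's re-key option (generating a new CSR) is the fallback.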

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72GB of RAM and 16 x 10k SAS disks, split between a RAID 1 for the OS, a RAID 10 partition for the write-ahead logs and a RAID 10 partition for the database tables and indexes, all xfs. I'm currently evaluating which upgrade path to go down. We'll be sharding the DB at some point soon, but for now I need to focus specifically on hardware upgrades. The machine is not CPU or memory bound at all at the moment; IOWait is becoming the issue. The machine sees mostly write access, as we have a heavy caching layer; we're seeing about 300 write IOPS on average on both database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, just something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out, obviously, but there's also some reduction we can do, though additional space would be good. My current thoughts are either:

    - an iSCSI SAN, possibly with a 10Gbit network, that has solid-state acceleration;
    - a FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?);
    - a DAS shelf (something like http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work?

    Anything else? What would you do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
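
    Before choosing hardware, it may be worth pinning down which partition eats the IOPS; a generic sample, with the device names as placeholders for the WAL and data arrays:

        $ iostat -x 10 sdb sdc   # watch w/s, await and %util per array
        $ vmstat 10              # confirm the load really is IO wait, not CPU

    If the WAL array shows high await at only ~300 w/s, a small amount of flash (or a battery-backed write cache) in front of the WAL tends to pay off before any larger shelf purchase.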

    Read the article

  • Postfix enable SSL 465 failed

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I just tested: I can send email through ports 25 and 587, but not through port 465. The log is:

        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
        May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
        May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
        May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]

    My email server is 10.155.34.117 and the email client is 10.155.36.240. The client error is: "Could not connect to SMTP host: 10.155.34.117, port: 465."

    My master.cf:

        smtps     inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes

    My main.cf:

        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_key_file = /etc/pki/myca/mail.key
        smtpd_tls_cert_file = /etc/pki/myca/mail.crt
        smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
        smtpd_tls_loglevel = 2
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache

    It seems to be a certificate issue, but I have tried to grant access to the file many times... I have no idea on this, please help!
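
    The fatal "certificate unknown" alert (SSL alert number 46) is sent by the client, which points at the client rejecting the server certificate rather than at Postfix itself. A way to see exactly what the client is being shown, run from the client machine:

        $ openssl s_client -connect 10.155.34.117:465 -showcerts
        # check the "Verify return code" at the end, and whether the CA cert
        # appears in the presented chain

    If the chain is incomplete, appending the CA certificate to the file Postfix serves (or importing cacert_new.pem into the client's trust store) is the usual fix for a private CA like this one.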

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 machine which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
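
    A hedged note on why async changes so much: NFS close-to-open consistency makes the client flush each file on close(), and a sync export makes the server commit every such flush to stable storage before replying, so each small file costs at least one synchronous round trip. The export option being toggled looks like this (the client spec is an assumption):

        /nfs4exports/mydir 176.9.116.102(rw,async,no_subtree_check)
        # async: the server acknowledges writes before they reach disk, so a
        # server crash can lose data the application believed was fsync()ed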

    Read the article

  • Managing arbitrary user permissions under PureFTPd

    - by Sebastián Grignoli
    I need to provide an FTP service that can be web-managed in the simplest way possible. My customer wants to create folders and users, and give the users read-only or read/write access arbitrarily. For example:

    - The folder 'Documents' should be read-only for several users, writable for internal users, and invisible to the rest.
    - The folder 'Pictures' should be read-only for journalists, writable for associates, and invisible to the rest.
    - The folder 'Media' should be read-only, writable or invisible for arbitrary users specified in the admin.

    There could be a large number of users and folders. I can't find a good way to accomplish this. I thought I could give each user a home folder and put in symlinks for the folders he has read access to, making the user part of a folder's group when he has write access too, but now I think this won't work: with PureFTPd (or ProFTPd) I can only specify the virtual user's mapping to a system user, and only one GID for each virtual user. My approach requires specifying several GIDs for each user (one for each folder he has write access to). I need to start programming this admin and I still don't know which approach would work, if any. Any ideas?
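
    One way around the one-GID-per-user limit, assuming each virtual user is mapped to its own system UID and the filesystem is mounted with ACL support (both assumptions): express the per-folder rights as POSIX ACLs, which the admin tool can drive directly with setfacl:

        # read-only for journalist1 on Pictures
        setfacl -m u:journalist1:rx /srv/ftp/Pictures
        # read/write for associate1
        setfacl -m u:associate1:rwx /srv/ftp/Pictures
        # invisible to everyone else: restrictive base permissions
        chmod 770 /srv/ftp/Pictures

    The catch is precisely the mapping: if all virtual users share one system UID, per-user ACLs have nothing to attach to, so a UID per virtual user becomes part of the design.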

    Read the article

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up SFTP with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Red Hat AMI. The changes to my sshd_config file are:

        Subsystem sftp internal-sftp
        Match Group ftponly
            PasswordAuthentication yes
            X11Forwarding no
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    The users in question have their homes at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything:

        sftp bob@localhost
        Connecting to localhost...
        bob@localhost's password:
        sftp> pwd
        Remote working directory: /

    I can connect normally with users not in the ftponly group. OpenSSH version 5.3. I've experimented with different permissions, as well as having users own their own home directory (which gives a "Write failed: Broken pipe" error), and so far nothing has worked. I'm sure it's a permissions error, or something equally trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated.

    EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me into /; I just didn't think it through properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes via SELinux:

        setsebool -P ssh_chroot_rw_homedirs on

    I think I'm all set now. Thanks for the help.
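
    For reference, the layout OpenSSH expects with ChrootDirectory %h, sketched for a user bob (the upload subdirectory is just a common convention):

        chown root:root /home/bob    # the chroot target must be root-owned
        chmod 755 /home/bob          # and not group- or world-writable
        mkdir -p /home/bob/upload    # a directory the user does own, so writes work
        chown bob:ftponly /home/bob/upload

    The "Write failed: Broken pipe" seen with user-owned homes is sshd aborting the session over exactly this ownership check.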

    Read the article

  • IIS web service responds on server, not from remote client

    - by Aharon Manne
    I have installed a web service on a server running IIS (v6, as far as I can tell). There is another service installed which responds as expected. My service responds correctly when a browser is pointed at localhost, but there is no response when a remote client tries to query it. Fiddler on the remote client simply reports a timeout. Wireshark on the remote client shows no response at all from the server - no reset, nothing. Wireshark on the server detects no query arriving at the relevant port (the service is installed on port 8080). There are no relevant entries in the event viewer. Obviously there is some issue of permissions or authentication. I have tried to compare my service to the one that works, but I have not been able to locate the relevant parameters. Any help would be greatly appreciated.
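
    Since Wireshark on the server never sees the query arrive on port 8080, the request is being dropped before IIS is involved - on Server 2003 that is most often Windows Firewall. A first check, using the 2003-era netsh syntax (the rule name is arbitrary):

        C:\> netsh firewall show state
        C:\> netsh firewall add portopening TCP 8080 "My web service"
        rem then, from the remote client (substitute the real host name):
        C:\> telnet myserver 8080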

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger's Cat). I've just installed OpenLDAP with yum, and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema

        pidfile /var/lib/ldap/slapd.pid
        loglevel trace

        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824

        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h ldapi:/// ldap:/// -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL which aims to give a specific group read-only access to the userPassword attribute, give the owner read and write access, allow anonymous binds for authentication, and refuse access to everyone else. The problem is that even using a valid user with the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
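
    Two hedged observations. First, once any access directive exists, slapd appends an implicit "access to * by * none", so a single userPassword rule leaves every other attribute and entry unreadable - which is consistent with the search base coming back as "result: 32 No such object". A minimal follow-up rule:

        access to attrs=userPassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none
        access to *
            by users read
            by * none

    Second, ldapsearch's -H option expects a full URI, e.g. -H ldap://ldap.domain.tld, not a bare hostname.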

    Read the article

  • Zpool disk failure - Where am I at?

    - by JT.WK
    After checking the status of one of my zpools today, I was faced with the following:

        root@server: zpool status -v myPool
          pool: myPool
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
         scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
        config:

                NAME          STATE     READ WRITE CKSUM
                myPool        ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c6t7d0    ONLINE       0     0     0
                    c6t8d0    ONLINE       0     0     0
                    spare     ONLINE       0     0     0
                      c6t9d0  ONLINE      54     0     0
                      c6t36d0 ONLINE       0     0     0
                    c6t10d0   ONLINE       0     0     0
                    c6t11d0   ONLINE       0     0     0
                    c6t12d0   ONLINE       0     0     0
                spares
                  c6t36d0     INUSE     currently in use
                  c6t37d0     AVAIL
                  c6t38d0     AVAIL

        errors: No known data errors

    From what I can see, c6t9d0 has encountered 54 read errors. It seems to have automatically resilvered onto the spare disk c6t36d0, which is now in use. My question is: where exactly am I at? Yes, the 'action' tells me to determine whether or not the disk needs replacing, but is this disk currently still in use? Can I replace or remove it? Any explanation would be much appreciated, as I'm quite new to this stuff :)

    UPDATE: After following the advice from C10k Consulting, i.e. detaching:

        zpool detach myPool c6t9d0

    and adding it as a spare:

        zpool add myPool spare c6t9d0

    it appears that all is well. The new status of my zpool is:

        root@server: zpool status -v myPool
          pool: myPool
         state: ONLINE
         scrub: resilver completed after 3h6m with 0 errors on Tue Sep 28 11:15:11 2010
        config:

                NAME          STATE     READ WRITE CKSUM
                myPool        ONLINE       0     0     0
                  raidz1      ONLINE       0     0     0
                    c6t7d0    ONLINE       0     0     0
                    c6t8d0    ONLINE       0     0     0
                    c6t36d0   ONLINE       0     0     0
                    c6t10d0   ONLINE       0     0     0
                    c6t11d0   ONLINE       0     0     0
                    c6t12d0   ONLINE       0     0     0
                spares
                  c6t37d0     AVAIL
                  c6t38d0     AVAIL
                  c6t9d0      AVAIL

        errors: No known data errors

    Thanks for your help, C10k Consulting :)
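
    If c6t9d0 keeps accumulating errors while sitting in the spares list, a few follow-ups worth knowing (device names are from the question; iostat -En is Solaris-specific):

        zpool clear myPool     # reset the error counters
        zpool scrub myPool     # re-read everything to verify the pool
        iostat -En c6t9d0      # per-device hardware/transport error counters

    A disk that shows rising hard errors in iostat is usually replaced outright rather than trusted as a spare.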

    Read the article

  • Why is writing to my external hard drive slow, while benchmarks show fast writing?

    - by matix2267
    I have an iOmega eGo 320GB portable drive connected through USB 2.0 to my laptop running Windows Vista. It had been working fine for quite some time until it recently became very slow when writing. For example, when copying a ~300MB movie over to the drive, at first it is extremely fast, but it isn't actually writing - it only puts the data in the cache, then hangs on the last 10-20MB for about a minute. When copying larger files it's the same story: it starts fast but then slows down to ~5MB/s (sometimes even down to 2MB/s). The strange thing is that I have always had caching disabled for this drive (it was disabled by default and I never bothered changing it). At first I thought the disk was dying, so I checked the S.M.A.R.T. values, and everything is fine there. I also ran chkdsk, and it seemed to fix the problem - the drive was fast for a few minutes, but then it slowed down again. I also tried plugging it into another USB port - no difference. Additionally, I noticed that reading is sometimes slower under certain circumstances, e.g. loading times for some games are ~10 times longer, whereas simply copying files from this drive to my internal HDD is fast. I ran a speed benchmark using CrystalDiskMark with a 5x100MB run and strangely got these results:

                    Read     Write (MB/s)
        Seq        33.05     28.25
        512K       17.30     15.27
        4K          0.267     0.372
        4K QD32     0.510     0.260

    This is different from what most other people get (I've found many threads about slow disk writes while googling, but all of them were slow in benchmarks too), which is why I decided to post this problem here. By the way, most of the time when writing (or sometimes reading) the activity LED is mostly idle: it blinks a while and then stops for longer, sometimes with slow blinks of ~1 sec, sometimes going off for a few seconds - an extremely long blink :). But when benchmarking, defragmenting or just reading (copying from this drive, installing apps from installers stored there, watching HD videos) it blinks really fast (as it should) and there are no slowdowns.
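
    The benchmark numbers are in the normal range for USB 2.0, which suggests the stalls happen outside simple sequential access. Two hedged Vista-era checks: look for disk/controller errors in the System event log, and re-test with the write-cache policy toggled:

        C:\> wevtutil qe System /c:20 /f:text /q:"*[System[Provider[@Name='disk']]]"
        rem Device Manager -> the drive -> Policies: try "Optimize for performance"
        rem temporarily, then repeat the copy test
        C:\> chkdsk E: /R
        rem (assuming E: is the iOmega drive; /R also scans for bad sectors)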

    Read the article

  • How to rewrite the domain part of Set-Cookie in a nginx reverse proxy?

    - by Tobia
    I have a simple nginx reverse proxy:

        server {
            server_name external.domain.com;
            location / {
                proxy_pass http://backend.int/;
            }
        }

    The problem is that Set-Cookie response headers contain ;Domain=backend.int, because the backend does not know it is being reverse-proxied. How can I make nginx rewrite the content of the Set-Cookie response headers, replacing ;Domain=backend.int with ;Domain=external.domain.com? Passing the Host header unchanged is not an option in this case. Apache httpd has had this feature for a while (see ProxyPassReverseCookieDomain), but I cannot seem to find a way to do the same in nginx.
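
    nginx grew a built-in directive for exactly this in version 1.1.15, proxy_cookie_domain; a sketch using the names from the question:

        server {
            server_name external.domain.com;
            location / {
                proxy_pass http://backend.int/;
                proxy_cookie_domain backend.int external.domain.com;
            }
        }

    On older versions the usual workarounds were a third-party module or rewriting the cookie domain in the application itself.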

    Read the article

  • Setting Up nginx Site Down That Responds Differently to Ajax?

    - by dave mankoff
    I am trying to set up an automatic site-down page for nginx. So far I have this:

        location / {
            try_files /sitedown.html @myapp;
        }
        location @myapp {
            ...
        }

    That works well enough: if sitedown.html is present, it serves that; otherwise it serves the app. What I'd like to do, however, is respond differently to Ajax requests so that they don't error out the JavaScript. I believe, using the rewrite module, that I can do something like:

        if ($http_x_requested_with = XMLHttpRequest) {

    but it's unclear to me how to use this to do what I want. I'd like requests that come with that header to get a simple JSON response like "sitedown", with the appropriate JSON content-type header. Barring that, it would be nice to return a 503 response code that the JavaScript could react to.
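
    A sketch of the 503 fallback mentioned above, with the caveat that an if inside a location is only considered safe when it does nothing but return/rewrite - which is all this one does:

        location / {
            if ($http_x_requested_with = XMLHttpRequest) {
                return 503;
            }
            try_files /sitedown.html @myapp;
        }

    As written, the 503 fires whether or not sitedown.html exists; making it conditional on the file as well would need the check moved into @myapp or handled by the application itself.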

    Read the article

  • Slow website load with CNAME, fast when using IP

    - by Nate Strandberg
    I set up two DNS servers on my network, ns1.byte-werx.com and ns2.byte-werx.com. I can ping the DNS servers and get a fairly good response time, and when I dig them I also get a fairly reasonable response, but any website that resolves through them is painfully slow (upwards of 20+ seconds), verifiable by performing a tracert or attempting to access the URL in a browser. The DNS servers are running CentOS 6.3 and BIND9 with 500MB of memory (I figure that should be more than enough?). I have a reverse lookup zone (1.168.192) along with two website zones (www.byte-werx.com and www.stayhomedental.com). If I access the websites using their IP address, the page loads nearly instantly, so I do not believe the issue is with the hosting server; that is running Ubuntu Server 12.04 and Apache2 with 12GB of memory. Any thoughts? I do not have the named.conf file in front of me, but I can edit this post to include it if you feel it would be useful. Thanks for any advice!
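
    A way to separate slow DNS from slow HTTP, hedged as generic diagnostics:

        $ dig @ns1.byte-werx.com www.byte-werx.com   # direct query: should answer in milliseconds
        $ dig +trace www.byte-werx.com               # walk the delegation from the root servers
        $ time curl -s -o /dev/null http://www.byte-werx.com/

    If the direct query is fast but +trace stalls, the delegation (NS and glue records at the registrar) is the usual culprit; a resolver falling back to a second, unreachable nameserver eats a full timeout per lookup, which matches a ~20 second page load.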

    Read the article
