Search Results

Search found 7606 results on 305 pages for 'raam dev'.


  • postgresql No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
            at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
            at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
            ... 8 more

    However the disk has quite a lot of space:

        Filesystem  Size  Used Avail Use% Mounted on
        /dev/sda1   386G  123G  243G  34% /
        udev        5.9G  172K  5.9G   1% /dev
        none        5.9G     0  5.9G   0% /dev/shm
        none        5.9G  628K  5.9G   1% /var/run
        none        5.9G     0  5.9G   0% /var/lock
        none        5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month, tt.type AS type, FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
           OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    any tips?
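    The "no space" error refers to the filesystem holding PostgreSQL's temporary sort/hash files, which can balloon during a large aggregation and then vanish before df is run afterwards. A minimal diagnostic sketch; the data directory path is an assumption and should be adjusted for the actual install:

        #!/bin/bash
        # Watch temp-file usage while the aggregation query runs.
        # PGDATA is a placeholder; point it at the real data directory.
        PGDATA=/var/lib/postgresql
        while sleep 5; do
            # pgsql_tmp holds the temporary files the error message refers to
            du -sh "$PGDATA"/base/pgsql_tmp 2>/dev/null
            df -h "$PGDATA"
        done

    If the temporary files are indeed what fills the disk, pointing temp_tablespaces at a tablespace on a larger volume (or reworking the query so it needs less spill space) is the usual way out.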

    Read the article

  • USB To Serial under OpenSuse 11.3

    - by Exsisto
    I have a LogiLink USB-to-Serial adapter with the PL2303 chip inside. When I insert the device:

        [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9
        [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303
        [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [26065.076105] usb 7-1: Product: USB-Serial Controller
        [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc.
        [26065.079181] pl2303 7-1:1.0: pl2303 converter detected
        [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0

    So the device is recognized and the converter is attached to ttyUSB0. When I do

        screen /dev/ttyUSB0 9600

    I get the error:

        bash: /dev/ttyUSB0: Permission denied

    So I went looking at the file permissions. ls -l from the /dev folder reports:

        crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0

    I added my user lars to the dialout group, and the groups command run as lars shows that I'm in the group. Yet I still receive the permission-denied error, both as lars and as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is OpenSuse 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
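    Group changes only take effect in sessions started after the change, so it is worth confirming that the running shell actually carries the dialout group before suspecting anything else. A small check, assuming the user is lars:

        # does the *current* session carry the group?
        id
        # pick up the new group without logging out, then retry
        newgrp dialout
        screen /dev/ttyUSB0 9600

    If id already lists dialout and the error persists, the next step is to check whether screen itself is installed and whether the device is being grabbed by another process.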

    Read the article

  • SQL Server restore a backup results in an error.

    - by Mario
    I have a database in dev (SQL Server 2005 on Windows Server 2008) that I need to move to prod (SQL Server 2000 on Windows Server 2003). My process is as follows:

        1. Log in to dev, open SQL Server Management Studio.
        2. Right click on the database | Tasks | Backup. Keep all default options (full backup etc.).
        3. Move the .bak file locally to prod (no network drive), log in to prod, open SQL Server Enterprise Manager.
        4. Right click the Databases node | All Tasks | Restore database.
        5. Change "Restore as database" to reflect the same database name.
        6. Click the 'From device' radio button, click 'Select Devices', click Restore from: Add..., and browse to the .bak file (small - only 6mb).

    Now I am ready to restore the database, so I click OK and get the following error immediately:

        The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE DATABASE is terminating abnormally.

    I have tried a few different variations of this: restoring the db to the dev machine with a different db name and log file names (where it originated), creating an empty database with the same physical path to files beforehand and trying to restore to that, and making a few different .bak files and making sure they are verified before uploading them to prod. I know for a fact the directory for the .mdf and .ldf files exists on prod, though the files themselves don't exist. If, before I click OK to restore, I go to the Options tab instead, I get the following error:

        Error 3241: The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally.

    Anyone have any bright ideas?
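    One way to see how the prod server interprets the file is to ask it for the backup header from the command line. A hedged sketch using osql, which ships with SQL Server 2000; the path is a placeholder:

        osql -E -Q "RESTORE HEADERONLY FROM DISK = 'E:\backups\mydb.bak'"

    The -E switch uses a trusted connection, and the header lists (among other things) the version of the server that wrote the backup. Note that SQL Server can only restore backups taken on the same or an older version, so a backup written by SQL Server 2005 cannot be restored onto SQL Server 2000 regardless of how the media is presented.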

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours and it didn't find any memory errors. Any obvious ideas what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
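    For a non-destructive first pass, the drive's own error counters and a read-only filesystem check can both be run without writing to the data. A minimal sketch:

        # SMART health and error/wear counters reported by the SSD itself
        sudo smartctl -a /dev/sda

        # read-only consistency check of the affected filesystem (-n never writes);
        # run it from a live CD or with the filesystem unmounted or mounted read-only
        sudo fsck.ext4 -n -f /dev/sda2

    If smartctl shows reallocated sectors or interface CRC errors climbing, the drive or cabling is suspect; if the drive looks clean, the combination of early SSD firmware and the discard mount option is another commonly cited source of silent corruption worth ruling out.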

    Read the article

  • Cannot Install Bootcamp on Mac Fusion Drive

    - by user3377019
    I have two drives installed in my mid-2013 MacBook Pro, running OS X Mavericks. I set the two drives (an SSD and a regular HD) up as a Fusion Drive, following the DIY Fusion Drive guides out there. I went ahead and created a 40gb partition to host my Windows install. I removed the SSD, installed Windows, then reinstalled the SSD. As soon as the SSD was placed back in, the Boot Camp partition stopped booting: I get a blinking cursor on a black screen. I checked out the partition info in Disk Utility and it appears that the Windows partition is not marked bootable. Below is some info I managed to gather. I am wondering if there is a way to fix the partition table so my Boot Camp will boot.

        /dev/disk0
        #: TYPE NAME                      SIZE       IDENTIFIER
        0: GUID_partition_scheme          *120.0 GB  disk0
        1: EFI EFI                        209.7 MB   disk0s1
        2: Apple_CoreStorage              119.7 GB   disk0s2
        3: Apple_Boot Boot OS X           134.2 MB   disk0s3

        /dev/disk1
        #: TYPE NAME                      SIZE       IDENTIFIER
        0: GUID_partition_scheme          *500.1 GB  disk1
        1: EFI EFI                        209.7 MB   disk1s1
        2: Microsoft Basic Data BOOTCAMP  40.0 GB    disk1s2
        3: Apple_CoreStorage              459.2 GB   disk1s3
        4: Apple_Boot Boot OS X           650.0 MB   disk1s4

        /dev/disk2
        #: TYPE NAME                      SIZE       IDENTIFIER
        0: Apple_HFS Macintosh HD         *573.4 GB  disk2

        Name: BOOTCAMP
        Type: Partition
        Disk Identifier: disk1s2
        Mount Point: /Volumes/BOOTCAMP
        File System: Windows NT File System (NTFS)
        Connection Bus: SATA
        Device Tree: IODeviceTree:/PCI0@0/SATA@1F,2/PRT1@1/PMP@0
        Writable: No
        Universal Unique Identifier: 584BAED6-4C46-4F18-93B3-957F6E27003C
        Capacity: 40 GB (39,998,980,096 Bytes)
        Free Space: 16.34 GB (16,339,972,096 Bytes)
        Used: 23.66 GB (23,659,003,904 Bytes)
        Number of Files: 86,424
        Number of Folders: 0
        Owners Enabled: No
        Can Turn Owners Off: No
        Can Be Formatted: No
        Bootable: No
        Supports Journaling: No
        Journaled: No
        Disk Number: 1
        Partition Number: 2
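    BIOS booting of Windows on a Mac relies on a hybrid MBR, and removing and reinserting a disk can leave that disk with only a protective MBR and no active entry, which would match the blinking cursor. Two read-only checks that show what the firmware currently sees, assuming the Windows disk is still disk1:

        # MBR view of the disk: a purely protective MBR shows a single 0xEE entry
        # instead of an NTFS (type 07) entry for the BOOTCAMP partition
        sudo fdisk /dev/disk1

        # what Disk Utility reports about the partition, including the Bootable flag
        diskutil info disk1s2

    Rebuilding the hybrid MBR so the NTFS partition appears in it and is marked active (gdisk or a similar tool is the common choice) is the usual repair, but inspect first; rewriting the MBR on the wrong disk is hard to undo.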

    Read the article

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iSCSI target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required, and then export the allocated LVs using a tgt script:

        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL
        tgtadm --mode target --op show

    So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iSCSI target script. When I add a LUN, I run lvcreate, and then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
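    tgtadm can also tear targets down at runtime, so the same script style works for removal without touching the running service. A hedged sketch for retiring the 200gb target (tid 2) while the others stay online:

        # check that no initiator still has a session on the target before removing it
        tgtadm --lld iscsi --op show --mode target

        # drop the LUN, then the target, then reclaim the space
        tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1
        tgtadm --lld iscsi --op delete --mode target --tid 2
        lvremove /dev/mapper/iscsi_vg-iscsi_200Gb

    Adding goes the other way round (lvcreate, then op new for the target and LUN, then bind), so tgtd never has to restart.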

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process, and it waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since the requires transferring data to and from the user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, freqently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration, allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
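    Before touching the nginx config, it is worth establishing whether the PHP backend can serve two requests concurrently at all: a single php-cgi worker behind 127.0.0.1:9100, or PHP's per-session file lock, would both produce exactly this serialisation regardless of what the browser does. A quick check from the command line (the URLs are placeholders for any two cheap PHP pages):

        # fire two PHP requests at the same time and compare the timings;
        # if the second always waits for the first, the backend is serialising them
        time curl -s -o /dev/null "http://test.dev/slow-endpoint" &
        time curl -s -o /dev/null "http://test.dev/process-status?process=test"
        wait

    If they do serialise, running more FastCGI workers (or calling session_write_close() in the long-running script before it starts its work) is where to look, rather than in the nginx timeouts.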

    Read the article

  • GlusterFS is failing to mount on boot

    - by J. Pablo Fernández
    I'm running the official GlusterFS 3.5 packages on Ubuntu 12.04 and everything seems to be working fine, except mounting the GlusterFS volumes at boot time. This is what I see in the log files:

        [2014-06-13 08:52:28.139382] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0 (/usr/sbin/glusterfs --volfile-server=koraga --volfile-id=/private_uploads /var/www/shared/private/uploads)
        [2014-06-13 08:52:28.147186] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
        [2014-06-13 08:52:28.147237] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
        [2014-06-13 08:52:28.148183] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 176.58.113.205:24007 failed (Connection refused)
        [2014-06-13 08:52:28.148236] E [glusterfsd-mgmt.c:1601:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: koraga (No data available)
        [2014-06-13 08:52:28.148251] I [glusterfsd-mgmt.c:1607:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
        [2014-06-13 08:52:28.148477] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x27) [0x7fe077f8e0f7] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x1a4) [0x7fe077f91cc4] (-->/usr/sbin/glusterfs(+0xcada) [0x7fe078655ada]))) 0-: received signum (1), shutting down
        [2014-06-13 08:52:28.148513] I [fuse-bridge.c:5444:fini] 0-fuse: Unmounting '/var/www/shared/private/uploads'.

    My fstab contains:

        proc /proc proc defaults 0 0
        /dev/xvda / ext4 noatime,errors=remount-ro 0 1
        /dev/xvdb none swap sw 0 0
        /dev/xvdc /var/lib/glusterfs/brick01 ext4 defaults 1 2
        koraga:/private_uploads /var/www/shared/private/uploads glusterfs defaults,_netdev 0 0

    Any ideas what's going on and/or how to fix it?
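    The "Connection refused" on port 24007 says the mount is attempted before glusterd on koraga is reachable, so the client exhausts its volfile servers and gives up. A blunt but common workaround on 12.04 is to retry the mount once the system is fully up, for example from /etc/rc.local - a sketch, assuming the fstab entry above stays in place:

        # /etc/rc.local (before "exit 0"): retry any not-yet-mounted glusterfs entries from fstab
        (
            sleep 10
            mount -a -t glusterfs
        ) &

    The _netdev option only keeps the mount from being attempted before networking exists; it does not make the mount wait for glusterd itself, which is why a retry after boot is often needed.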

    Read the article

  • Find out the type of an automounted device

    - by Steve Bennett
    I'm working on a system (Ubuntu Precise) with a mount defined in /etc/fstab as follows:

        /dev/vdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2

    Originally I just wanted to find out if it's NFS (due to potential MySQL locking issues). Judging from man mount, it's not:

        If no -t option is given, or if the auto type is specified, mount will try to guess the desired type. Mount uses the blkid library for guessing the filesystem type; if that does not turn up anything that looks familiar, mount will try to read the file /etc/filesystems, or, if that does not exist, /proc/filesystems. All of the filesystem types listed there will be tried, except for those that are labeled "nodev" (e.g., devpts, proc and nfs). If /etc/filesystems ends in a line with a single * only, mount will read /proc/filesystems afterwards.

    But, out of curiosity now, how can I find out more about what type of device it actually is? (For context, this is a VM running on OpenStack. The device is a 60Gb allocation mounted from somewhere - but I don't know how.)

    EDIT - Including answers here:

        $ mount
        /dev/vdb on /mnt type ext3 (rw,_netdev)
        $ df -T
        /dev/vdb ext3 61927420 2936068 55845624 5% /mnt
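    The device name itself already narrows things down: vdb is a virtio block device, i.e. a volume the hypervisor presents to the guest as a plain disk (in OpenStack terms, typically an attached or ephemeral volume rather than a network filesystem). A few read-only probes that tell you more about it from inside the VM:

        # filesystem signature, label and UUID as seen by blkid
        sudo blkid /dev/vdb

        # size, topology and filesystem as the kernel sees it
        lsblk -f /dev/vdb

        # virtio details exported through udev
        udevadm info --query=all --name=/dev/vdb

    What backs the volume on the host side (Ceph, iSCSI, local LVM, ...) is not visible from inside the guest; that part has to come from the cloud's API or its operators.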

    Read the article

  • Can OpenVPN invoke DHCP Client?

    - by Ency
    I have got a working VPN connection through OpenVPN, but I would like to use my own DHCP server rather than OpenVPN's push feature. Currently everything works fine, but I have to manually start the DHCP client, e.g. dhclient tap0, and I then get an IP and other important stuff from my DHCP server. Is there any directive which starts a DHCP client when the connection is established? This is my client's config:

        remote there.is.server.com
        float
        dev tap
        tls-client
        #pull
        port 1194
        proto tcp-client
        persist-tun
        dev tap0
        #ifconfig 192.168.69.201 255.255.255.0
        #route-up "dhclient tap0"
        #dhcp-renew
        ifconfig 0.0.0.0 255.255.255.0
        ifconfig-noexec
        ifconfig-nowarn
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/encyNtb_openvpn_client.crt
        key /etc/openvpn/encyNtb_openvpn_client.key
        dh /etc/openvpn/dh-openvpn.dh
        ping 10
        ping-restart 120
        comp-lzo
        verb 5
        log-append /var/log/openvpn.log

    Here comes the server's config:

        mode server
        tls-server
        dev tap0
        local servers.ip.here
        port 1194
        proto tcp-server
        server-bridge
        # Allow comunication between clients
        client-to-client
        # Allowing duplicate users per one certificate
        duplicate-cn
        # CA Certificate, VPN Server Certificate, key, DH and Revocation list
        ca /etc/ssl/CA/certs/ca.crt
        cert /etc/ssl/CA/certs/openvpn_server.crt
        key /etc/ssl/CA/private/openvpn_server.key
        dh /etc/ssl/CA/dh/dh-openvpn.dh
        crl-verify /etc/ssl/CA/crl.pem
        # When no response is recieved within 120seconds, client is disconected
        keepalive 10 60
        persist-tun
        persist-key
        user openvpn
        group openvpn
        # Log and Connected clients file
        log-append /var/log/openvpn
        verb 3
        status /var/run/openvpn/vpn.status 10
        # Compression
        comp-lzo
        # Push data to client
        push "route-gateway 192.168.69.1"
        push "redirect-gateway def1"
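    OpenVPN does not speak DHCP itself, but it can run an arbitrary script once the tap interface is up, which is the usual way to hand the interface over to dhclient. A hedged sketch for the client side; the script path is an assumption, and script-security must allow external scripts:

        # client config additions
        script-security 2
        up /etc/openvpn/up-dhclient.sh

    and the script it points at:

        #!/bin/sh
        # /etc/openvpn/up-dhclient.sh - OpenVPN exports the interface name as $dev (also argument $1)
        dhclient "$dev"

    The commented-out route-up "dhclient tap0" line above is close to this: up or route-up with script-security set is the supported hook, and without script-security the call is blocked on recent OpenVPN versions.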

    Read the article

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I would like to get wear-level info, error statistics etc. from the drives. While the SA P410 supports a passthrough of the SMART command to a single drive in the array, I was not able to get the interesting things from the drive in the output. In this case the value of interest is especially the wear level indicator (Attr. ID 233), but this is only present if the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        3 Spin_Up_Time 0x0000 100 000 000 Old_age Offline In_the_past 0
        4 Start_Stop_Count 0x0000 100 000 000 Old_age Offline In_the_past 0
        5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0
        9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 8561
        12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 55
        192 Power-Off_Retract_Count 0x0002 100 100 000 Old_age Always - 29
        232 Unknown_Attribute 0x0003 100 100 010 Pre-fail Always - 0
        233 Unknown_Attribute 0x0002 088 088 000 Old_age Always - 0
        225 Load_Cycle_Count 0x0000 198 198 000 Old_age Offline - 508509
        226 Load-in_Time 0x0002 255 000 000 Old_age Always In_the_past 0
        227 Torq-amp_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.)

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature: 27 C
        Drive Trip Temperature: 68 C

        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0

        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410 SMART command passthrough?

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine, and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
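    "Logical sector size is zero" means the bytes-per-sector field in the boot sector Linux is reading is 0, which often happens when the Mac formatted the filesystem onto the whole device rather than inside the partition, so /dev/sdb carries the real FAT boot sector while /dev/sdb1 points at something else. Two non-destructive checks, as a sketch:

        # look at the supposed boot sector of the partition and of the whole disk;
        # the two bytes at offset 0x0b hold bytes-per-sector and should not be 00 00
        sudo dd if=/dev/sdb1 bs=512 count=1 2>/dev/null | hexdump -C | head
        sudo dd if=/dev/sdb  bs=512 count=1 2>/dev/null | hexdump -C | head

        # if the whole-device view looks like a FAT volume, try mounting that instead
        sudo mount -t vfat -o ro /dev/sdb /mnt

    If /dev/sdb mounts read-only and the data is all there, the "partition" is just a stale table entry and nothing needs reformatting.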

    Read the article

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each one containing an LVM partition, which together formed a volume group. On top of that I had two LVs, one for my / directory and one for my /home/ directory. Yesterday, the drive holding my / dir failed. I'm trying to recover at least my /home/ dir. What I've done so far:

        1. Boot a live system
        2. Extract the LVM2 metadata from the working HDD using dd
        3. Copy the metadata to /etc/lvm/backup/vg0

    Now I'm trying to do this:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings from /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: What might be the problem? Is it even possible to restore this partition? (As I'm writing this I'm starting to think it's impossible D:)

    Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine'; I could mount my LVM partitions partially without problems. (Will answer the question in 4 hours. But if anyone has still got some hints/tips, please share! :)
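    For reference, once the live environment can actually see the disk, the usual sequence is to recreate the PV with its old UUID from the metadata backup and then restore the VG description on top of it. A hedged sketch; the UUID placeholder and the VG name vg0 come from the question, and --restorefile is the long form of the option:

        # recreate the physical volume in place, reusing its original UUID
        pvcreate --restorefile /etc/lvm/backup/vg0 --uuid "[uuid of working hdd]" /dev/sdb2

        # write the volume group metadata back, then activate what is left of it
        vgcfgrestore -f /etc/lvm/backup/vg0 vg0
        vgchange -ay vg0

    With one PV missing, activation may need --partial, and only the LVs that lived entirely on the surviving disk will come back intact.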

    Read the article

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts - one I run as root which configures the system using apt-get (amongst other things), one I run as me which configures my user account. Those scripts live in my private Subversion repository. To set up a dev environment from scratch requires five commands:

        sudo apt-get install -y subversion
        svn co http://svn.XXXX.XXX/personal/
        cd personal
        sudo ./ubuntu_setup_root.sh
        ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or Powershell, which could live in source control & make use of a repository of ISOs downloaded from MSDN ...

    Read the article

  • Perl wrapper to start daemon leaves zombie when run by cron

    - by leonstr
    I've got a Perl script to start a process as a daemon, but when I call it from cron I'm left with a defunct process. I've stripped this down to a minimal script, starting 'tail' as a placeholder for the daemon:

        use POSIX "setsid";
        $SIG{CHLD} = 'IGNORE';
        my $pid = fork();
        exit(0) if ($pid > 0);
        (setsid() != -1) || die "Can't start a new session: $!";
        open (STDIN, '/dev/null') or die ("Cannot read /dev/null: $!\n");
        my $logout = "logger -t test";
        open (STDOUT, "|$logout") or die ("Cannot pipe stdout to $logout: $!\n");
        open (STDERR, "|$logout") or die ("Cannot pipe stderr to $logout: $!\n");
        my $cmd = "tail -f";
        exec($cmd);
        exit(1);

    I run this with cron and end up with:

        root 18616 18615 0 11:40 ? 00:00:00 [test.pl] <defunct>
        root 18617     1 0 11:40 ? 00:00:00 tail -f
        root 18618 18617 0 11:40 ? 00:00:00 logger -t test
        root 18619 18617 0 11:40 ? 00:00:00 logger -t test

    As far as I can tell it's the piping to logger that it doesn't like; if I send STDOUT and STDERR to /dev/null the problem doesn't occur. Am I doing something wrong, or is this just not possible? (CentOS 5.8) Thanks, leonstr

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5+hotspare (8x500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x50 GB, but growing at 10GB/month) database server... Our application that lives on the database VM is CPU and I/O intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you had PCI-X bandwidth, but the systemwide slowdown is killing productivity.

    Questions:

        1. What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
        2. Are there sweet spots/ugly spots in the storage setup?
        3. Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
        4. Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week... [runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box]

    Read the article

  • gpg symmetric encryption using pipes

    - by Thomas
    I'm trying to generate keys to lock my drive (using DM-Crypt with LUKS) by pulling data from /dev/random and then encrypting that using GPG. In the guide I'm using, it suggests using the following command:

        dd if=/dev/random count=1 | gpg --symmetric -a >./[drive]_key.gpg

    If you do it without a pipe, and feed it a file, it will pop up an (n?)curses prompt for you to type in a password. However, when I pipe in the data, it repeats the following message four times and sits there frozen:

        pinentry-curses: no LC_CTYPE known assuming UTF-8

    It also says

        can't connect to '/root/.gnupg/S.gpg-agent': File or directory doesn't exist

    however I am assuming that this doesn't have anything to do with it, since it shows up even when the input is from a file. So I guess my question boils down to this: is there a way to force gpg to accept the passphrase from the command line, or in some other way get this to work, or will I have to write the data from /dev/random to a temporary file and then encrypt that file? (Which as far as I know should be alright, due to the fact that I'm doing this on the LiveCD and haven't yet created the swap, so there should be no way for it to be written to disk.)
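    The prompt hangs because pinentry wants to read the passphrase from the terminal, and stdin is already taken by the pipe. gpg can be told to read the passphrase from another file descriptor instead, which keeps the key material itself flowing through the pipe. A sketch, assuming a reasonably recent gpg and a passphrase file at a path of your choosing:

        # read the passphrase from fd 3 so stdin stays free for the key data
        dd if=/dev/random count=1 | \
            gpg --symmetric -a --batch --passphrase-fd 3 3</path/to/passphrase-file \
            > ./[drive]_key.gpg

    --passphrase-fd (and the related --passphrase-file) is meant to be used with --batch; the passphrase file can live on a tmpfs so nothing touches the disk.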

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing, with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting up over different machines or packing everything up per machine? In your experience, what are the best practices I should follow in terms of architecture and various system/server placement? What to share and what to split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment, more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are:

        1. creating one all-inclusive machine for each dev team member, giving them freedom to manage it
        2. creating a shared environment managed by one person, with a somehow formalized change request process

    What golden mean would you recommend, and why?

    Read the article

  • Procmail Postfix issue

    - by Blucreation
    Our server is running CentOS and uses Postfix:

        Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected]
        Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]>
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active)
        Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir)
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed
        Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42]

    I have this in my /etc/postfix/main.cf:

        mailbox_command = /usr/bin/procmail -a "$EXTENSION"

    My /etc/procmailrc contains:

        PATH="/usr/bin"
        SHELL="/bin/bash"
        LOGFILE="/var/log/procmail.log"
        VERBOSE="YES"
        LOG="#TEST#"

    I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this:

        procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null

    I get entries in my log file, so I know procmail is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the to is "[email protected]", but that's once I have this issue solved.
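    One thing the log already shows is that the message was handled by postfix/virtual ("relay=virtual ... delivered to maildir"); mailbox_command only applies to Postfix's local delivery agent, so for virtual mailboxes procmail is never invoked no matter what /etc/procmailrc says. That delivery path has to change before any recipe will fire. For the eventual rule itself, a hedged procmailrc sketch (the address and script path are placeholders):

        :0
        * ^Subject:.*SUPPORT TICKET
        * ^To:.*support@example\.com
        | /usr/bin/php /path/to/ticket-handler.php

    The recipe only runs for mail that actually passes through procmail, which is why fixing the delivery path comes first.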

    Read the article

  • Apache2: 400 Bad Reqeust with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 & PHP that comes with Mac OS X 10.6. I have a vhost set up as follows:

        NameVirtualHost *:81

        <Directory "/Users/neezer/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:81>
            ServerName lobster.dev
            ServerAlias *.lobster.dev
            DocumentRoot /Users/neezer/Sites/lobster/www

            RewriteEngine On
            RewriteCond $1 !^(index\.php|resources|robots\.txt)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L,QSA]

            LogLevel debug
            ErrorLog /private/var/log/apache2/lobster_error
        </VirtualHost>

    This is in /private/etc/apache2/users/neezer.conf. My code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me:

        400 Bad Request

    Normally I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/http.conf. Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked that rewrite_module is loaded in my default Apache installation. My http.conf can be found here: https://gist.github.com/1057091. What gives? Let me know if you need any additional info. NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).

    Read the article

  • Changes to grub in ubuntu 10

    - by jdege
    I've been running CentOS 5 for some years. I've decided to upgrade to Ubuntu, and with 10.04 just out, this seemed like a good time. I'm a tad paranoid, so I started off with a new set of drives - one to install on, one to back up to, and one as a spare. I removed my existing CentOS 5 drives, did an install, and had no problems. I installed the server version, and used the default full-disk LVM installation. Next, I copied my backup scripts over, edited them to work with the new configuration, and did a test backup. That worked fine as well. Then comes the real test: could I do an install of the backup onto the spare drive? (I won't put anything of importance on a system that doesn't have a reliable backup, and if I've never done a restore, it's not reliable.) I booted from a System Rescue CD (ver 1.5.3), with the spare drive as /dev/sda and the backup drive as /dev/sdb. I had no trouble partitioning, configuring LVM, formatting, making swap, or restoring the file systems. But when I got to restoring grub to the MBR, I ran into problems. My restore instructions from CentOS 5 said to run grub, then enter two commands:

        root (hd0,0)
        setup (hd0)

    The first command exits with an error: "Checking if /boot/grub/stage1 exists ... no". I did some googling around and found that the Grub 2 included in recent Ubuntus is very different from the Grub 0.97 included in CentOS 5. One site suggested I use:

        grub-install --root-dir=/mnt/restore /dev/sda

    That appeared to work, but when I booted from the drive, I ended up at a grub prompt. Any ideas as to what I need to do? It seems like a simple problem, but my attempts at searching out answers on the web are being swamped by references to the old version of Grub. Help would be appreciated.
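    Grub 2 needs more than the MBR image: its configuration (grub.cfg) and device map are generated on the installed system and reference filesystems by UUID, which a bare grub-install run from outside does not refresh. The usual recovery pattern is to chroot into the restored root and let the restored system reinstall and regenerate its own Grub - a sketch, assuming the restored root is mounted at /mnt/restore:

        # make the live system's /dev, /proc and /sys visible inside the chroot
        mount --bind /dev /mnt/restore/dev
        mount --bind /proc /mnt/restore/proc
        mount --bind /sys /mnt/restore/sys
        chroot /mnt/restore /bin/bash

        # inside the chroot: reinstall grub to the MBR and rebuild grub.cfg with the new UUIDs
        grub-install /dev/sda
        update-grub
        exit

    Ending up at a bare grub prompt after a plain --root-dir install is consistent with grub.cfg (or the UUIDs inside it) not matching the restored filesystems.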

    Read the article

  • Local DNS and Apache Server Configuration Interferring - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:

        www    IN  CNAME   server.<host_provider>.com.
        dev    IN  CNAME   server.<host_provider>.com.

    So www.example.com and dev.example.com go to the production and development sites, respectively, that are hosted by a hosting company. In my Apache configuration for the main site, I'm running a rewrite rule like this:

        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]

    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is when I'm on the network and I go to example.com, I get an error page saying the page can't be found. No server errors; just a page can't be found, as if the local DNS is causing it to stop looking at that point. I'm also using Nettica for DNS service and have this A record in place:

        example.com    Host (A)    Default    xxx.xx.xxx.xx

    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network, I can go to servers on the network with addresses like this:

        server.example.com
        server1.example.com
        server2.example.com

    These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, e.g. the Apache rewrite rule is working. Thanks in advance for any information.
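    Inside the network, example.com itself simply has no record in the local zone: the www and dev CNAMEs only cover those subdomains, so the resolver returns name-not-found before Apache ever sees the request. The apex of a zone cannot be a CNAME, so the fix is an address record for the bare name in the internal zone, mirroring the external Nettica entry (the IP is a placeholder):

        @    IN  A   xxx.xx.xxx.xx

    With that in place the internal lookup succeeds, and the existing rewrite rule handles the redirect to www.example.com just as it already does externally.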

    Read the article

  • Millions of files in php's tmp error - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; commands, but that didn't work. ls 'sess_0*' | xargs rm did neither. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem Size Used Avail Use% Mounted on
        /dev/md0   457G 126G  308G  29% /
        tmpfs      1.8G    0  1.8G   0% /lib/init/rw
        udev        10M 664K  9.4M   7% /dev
        tmpfs      1.8G    0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        find . -exec rm {} \;
        rm: cannot remove directory '.'

        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.
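    The failures above all come from expanding or forking once per file: the shell glob blows the argument list, and -exec rm {} \; forks one rm per file, fourteen million times. Letting find unlink the entries itself avoids both. A sketch, assuming the session files all match sess_*:

        cd /var/www/clients/client1/web1/tmp
        # -maxdepth/-type guard against recursing anywhere unexpected; -delete unlinks without forking rm
        find . -maxdepth 1 -type f -name 'sess_*' -delete

        # on a find too old for -delete, batching through xargs keeps the fork count sane:
        # find . -maxdepth 1 -type f -name 'sess_*' -print0 | xargs -0 rm -f

    Expect it to run for a long time either way. The "Directory index full" warning also means the directory itself stays huge after the files are gone (ext3 directories do not shrink), so removing and recreating the emptied tmp directory afterwards is worth doing.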

    Read the article

  • How to point a subdomain to local server with dynamic IP

    - by jlego
    I see there are many questions related to this one; however, the answers given seem to be a little vague for a novice like me. I've got a dedicated LAMP stack running Fedora 16 locally on my home network. Everything works fine internally. I can access the Apache server from other machines on the network using the internal IP in a browser. I'm using the stack for a local file server as well as a development environment for websites. There are a couple of reasons why I would like the development sites hosted on the machine to be available publicly:

        1. I use a CMS that has paid add-ons which allow you to assign the paid license to a domain. I can't develop with paid add-ons on the closed dev server.
        2. I would occasionally like clients to be able to view the dev site at late stages before it goes live.

    I have a domain (foo.com), and I want to point a *sub*domain (dev.foo.com) to the local server. I know this is best accomplished with a static IP; however, my IP from my ISP is dynamic and I don't think there is any way to change that. From what I have read, services like ZoneEdit & DynDNS are supposed to be able to accomplish this, but I have tried both and found it very confusing. Also, the server is behind a router, and I have also read that you need to set up DDNS(?) in your router, that many routers have presets for these services, and I've found that DynDNS is the only one my router seems to support.

    Read the article

  • ASP.NET Web API returns 404 for PUT only on some servers

    - by Greg Bacchus
    OK, I have been racking my brain and the internet for a solution to this, and I just can't figure it out. I have written a site that uses ASP.NET MVC Web API, and all was working nicely until I put it on the staging server. The site works fine on my local machine and the dev web server. Both dev and staging servers are Win Server 2008 R2. The problem is this: basically the site works, but there are some API calls that use the HTTP PUT method. These fail on staging, returning a 404, but work fine elsewhere. The first problem that I came across and fixed was in Request Filtering, but I am still getting the 404. I have turned on tracing in IIS and get the following problem:

        168. -MODULE_SET_RESPONSE_ERROR_STATUS
             ModuleName           IIS Web Core
             Notification         16
             HttpStatus           404
             HttpReason           Not Found
             HttpSubStatus        0
             ErrorCode            2147942402
             ConfigExceptionInfo
             Notification         MAP_REQUEST_HANDLER
             ErrorCode            The system cannot find the file specified. (0x80070002)

    The configs are the same on dev and staging; matter of fact, the whole site is a direct copy. Why would the GETs and POSTs work, but not the PUTs? Thanks Greg

    Read the article
