Search Results

Search found 5685 results on 228 pages for 'encrypted partition'.


  • Windows 2008 Unknown Disks

    - by Ailbe
    I have a BL460c G7 blade server with OS Windows 2008 R2 SP1. This is a brand new C7000 enclosure, with FlexFabric interconnects. I got my FC switches setup and zoned properly to our Clariion CX4, and can see all the hosts that are assigned FCoE HBAs on both paths in both Navisphere and in HP Virtual Connect Manager. So I went ahead and created a storage group for a test server, assigned the appropriate host, assigned the LUN to the server. So far so good, log onto server and I can see 4 unknown disks.... No problem, I install MS MPIO, no luck, can't initialize the disks, and the multiple disks don't go away. Still no problem, I install PowerPath version 5.5 reboot. Now I see 3 disks. One is initialized and ready to go, but I still have 2 disks that I can't initialize, can't offline, can't delete. If I right click in storage manager and go to properties I can see that the MS MPIO tab, but I can't make a path active. I want to get rid of these phantom disks, but so far nothing is working and google searches are showing up some odd results, so obviously I'm not framing my question right. I thought I'd ask here real quick. Does anyone know a quick way to get rid of these unknown disks. Another question, do I need the MPIO feature installed if I have PowerPath installed? This is my first time installing Windows 2008 R2 in this fashion and I'm not sure if that feature is needed or not right now. So some more information to add to this. It seems I'm dealing with more of a Windows issue than anything else. I removed the LUN from the server, uninstalled PowerPath completely, removed the MPIO feature from the server, and rebooted twice. Now I am back to the original 4 Unknown Disks (plus the local Disk 0 containing the OS partition of course, which is working fine) I went to diskpart, I could see all 4 Unknown disks, I selected each disk, ran clean (just in case i'd somehow brought them online previously as GPT and didn't realize it) After a few minutes I was no longer able to see the disks when I ran list disk. However, the disks are still in Disk Management. When I try and offline the disks from Disk Management I get an error: Virtual Disk Manager - The system cannot find the file specified. Accompanied by an error in System Event Logs: Log Name: System Source: Virtual Disk Service Date: 6/25/2012 4:02:01 PM Event ID: 1 Task Category: None Level: Error Keywords: Classic User: N/A Computer: hostname.local Description: Unexpected failure. Error code: 2@02000018 Event Xml: 1 2 0 0x80000000000000 4239 System hostname.local 2@02000018 I feel sure there is a place I can go in the Registry to get rid of these, I just can't recall where and I am loathe to experiement. So to recap, there are currently no LUNS attached at all, I still have the phantom disks, and I'm getting The system cannot find the file specified from Virtual Disk Manager when I try to take them offline. Thanks!
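    As a rough illustration of how to check what the OS still enumerates before resorting to registry edits, here is a minimal Python sketch that simply dumps the WMI and diskpart views of the attached disks so they can be compared against Disk Management. It is only a diagnostic sketch under assumptions: it must run from an elevated prompt on the affected Windows 2008 R2 host, and it relies only on the stock wmic and diskpart tools; it does not delete anything.

        # Minimal diagnostic sketch (assumptions: elevated prompt on the affected
        # host; wmic and diskpart are the stock Windows tools). It lists the disks
        # WMI and diskpart still see, for comparison with Disk Management.
        import os
        import subprocess
        import tempfile

        def run(cmd):
            return subprocess.run(cmd, capture_output=True, text=True).stdout

        # Physical disks as WMI sees them (phantom LUNs often linger here).
        print(run(["wmic", "diskdrive", "get", "Index,Model,Status,Size"]))

        # The same view from diskpart, driven by a temporary script file.
        script = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
        script.write("list disk\n")
        script.close()
        print(run(["diskpart", "/s", script.name]))
        os.remove(script.name)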

    Read the article

  • How to get rid of a stubborn 'removed' device in mdadm

    - by T.J. Crowder
    One of my server's drives failed and so I removed the failed drive from all three relevant arrays, had the drive swapped out, and then added the new drive to the arrays. Two of the arrays worked perfectly. The third added the drive back as a spare, and there's an odd "removed" entry in the mdadm details. I tried both mdadm /dev/md2 --remove failed and mdadm /dev/md2 --remove detached as suggested here and here, neither of which complained, but neither of which had any effect, either. Does anyone know how I can get rid of that entry and get the drive added back properly? (Ideally without resyncing a third time, I've already had to do it twice and it takes hours. But if that's what it takes, that's what it takes.) The new drive is /dev/sda, the relevant partition is /dev/sda3. Here's the detail on the array: # mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Oct 26 12:27:49 2011 Raid Level : raid1 Array Size : 729952192 (696.14 GiB 747.47 GB) Used Dev Size : 729952192 (696.14 GiB 747.47 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 12 17:48:53 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 UUID : 2fdbf68c:d572d905:776c2c25:004bd7b2 (local to host blah) Events : 0.34665 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - spare /dev/sda3 If it's relevant, it's a 64-bit server. It normally runs Ubuntu, but right now I'm in the data centre's "rescue" OS, which is Debian 7 (wheezy). The "removed" entry was there the last time I was in Ubuntu (it won't, currently, boot from the disk), so I don't think that's not some Ubuntu/Debian conflict (and they are, of course, closely related). Update: Having done extensive tests with test devices on a local machine, I'm just plain getting anomalous behavior from mdadm with this array. For instance, with /dev/sda3 removed from the array again, I did this: mdadm /dev/md2 --grow --force --raid-devices=1 And that got rid of the "removed" device, leaving me just with /dev/sdb3. Then I nuked /dev/sda3 (wrote a file system to it, so it didn't have the raid fs anymore), then: mdadm /dev/md2 --grow --raid-devices=2 ...which gave me an array with /dev/sdb3 in slot 0 and "removed" in slot 1 as you'd expect. Then mdadm /dev/md2 --add /dev/sda3 ...added it — as a spare again. (Another 3.5 hours down the drain.) So with the rebuilt spare in the array, given that mdadm's man page says RAID-DEVICES CHANGES ... When the number of devices is increased, any hot spares that are present will be activated immediately. ...I grew the array to three devices, to try to activate the "spare": mdadm /dev/md2 --grow --raid-devices=3 What did I get? Two "removed" devices, and the spare. And yet when I do this with a test array, I don't get this behavior. So I nuked /dev/sda3 again, used it to create a brand-new array, and am copying the data from the old array to the new one: rsync -r -t -v --exclude 'lost+found' --progress /mnt/oldarray/* /mnt/newarray This will, of course, take hours. Hopefully when I'm done, I can stop the old array entirely, nuke /dev/sdb3, and add it to the new array. Hopefully, it won't get added as a spare!
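    When repeating the grow/add experiments it helps to watch the slot table rather than re-reading the full mdadm output each time. The following is a minimal Python sketch (assumptions: run as root with mdadm installed; /dev/md2 is the array from the post) that parses the device table at the bottom of mdadm --detail, so it is easy to see whether a re-added member came back as "active sync" or as "spare".

        # Minimal sketch (assumptions: run as root, mdadm installed). Prints the
        # slot-number/state pairs from the device table of "mdadm --detail".
        import subprocess

        def member_states(array="/dev/md2"):
            out = subprocess.run(["mdadm", "--detail", array],
                                 capture_output=True, text=True, check=True).stdout
            rows = []
            for line in out.splitlines():
                parts = line.split()
                # Device-table rows start with a slot number and end with a state
                # (and usually a /dev path).
                if parts and parts[0].isdigit() and len(parts) >= 5:
                    rows.append((parts[0], " ".join(parts[4:])))
            return rows

        for slot, state in member_states():
            print(slot, state)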

    Read the article

  • MySQL 5.5 (Percona) assertion failure log... what would cause this?

    - by Tom Geee
    256GB, 64 Core , AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc) while running a large set of inserts. After the first failure, MySQL restarted after a reboot only - after continuously looping on the same error after trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all Innodb tables (very large tables, 60+GB) this would do an alter table on all tables. In the middle of this , the below assertion failure happened again: 121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452 InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor)) InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 03:30:31 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=536870912 read_buffer_size=131072 max_used_connections=404 max_threads=500 thread_count=90 connection_count=90 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x14edeb710 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f687366ce80 thread_stack 0x30000 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee] /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b] /usr/sbin/mysqld[0x858463] /usr/sbin/mysqld[0x804513] /usr/sbin/mysqld[0x808432] /usr/sbin/mysqld[0x7db8bf] /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed] /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b] /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6] /usr/sbin/mysqld[0x647da1] /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6300004b60): is an invalid pointer Connection ID (thread ID): 876 Status: NOT_KILLED You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled. 121115 22:31:07 InnoDB: The InnoDB memory heap is disabled 121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins .. Then it recovered , without a reboot this time. from the log, what would cause this? I am currently running a dump to see if the problem resurfaces. edit: data partition is all in / since this is a hosted, defaulted file system unfortunately: Filesystem Size Used Avail Use% Mounted on /dev/vda3 742G 445G 260G 64% / udev 121G 4.0K 121G 1% /dev tmpfs 49G 248K 49G 1% /run none 5.0M 0 5.0M 0% /run/lock none 121G 0 121G 0% /run/shm /dev/vda1 99M 54M 40M 58% /boot my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] skip-name-resolve innodb_file_per_table default_storage_engine=InnoDB user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /data/mysql tmpdir = /tmp skip-external-locking key_buffer = 512M max_allowed_packet = 128M thread_stack = 192K thread_cache_size = 64 myisam-recover = BACKUP max_connections = 500 table_cache = 812 table_definition_cache = 812 #query_cache_limit = 4M #query_cache_size = 512M join_buffer_size = 512K innodb_additional_mem_pool_size = 20M innodb_buffer_pool_size = 196G #innodb_file_io_threads = 4 #innodb_thread_concurrency = 12 innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 8M innodb_log_file_size = 1024M innodb_log_files_in_group = 2 innodb_max_dirty_pages_pct = 90 innodb_lock_wait_timeout = 120 log_error = /var/log/mysql/error.log long_query_time = 5 slow_query_log = 1 slow_query_log_file = /var/log/mysql/slowlog.log [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M
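    Before choosing between a restore and forced recovery, it can help to narrow the page-linkage corruption down to specific tables. Below is a minimal Python sketch, not taken from the post, that walks every table with CHECK TABLE via the mysql command-line client. Assumptions: the mysql client is on the PATH, credentials come from ~/.my.cnf, and on a badly corrupted tablespace the check itself can trip the same assertion, so it is safer to run against a restored copy or replica.

        # Minimal sketch (assumptions: "mysql" client on PATH, credentials in
        # ~/.my.cnf). Runs CHECK TABLE on every user table so the corruption can
        # be narrowed down; prefer running it against a restored copy.
        import subprocess

        def mysql(sql, db=""):
            cmd = ["mysql", "-N", "-e", sql] + ([db] if db else [])
            return subprocess.run(cmd, capture_output=True, text=True).stdout

        for db in mysql("SHOW DATABASES").split():
            if db in ("information_schema", "performance_schema", "mysql"):
                continue
            for table in mysql("SHOW TABLES", db).split():
                print(db, table)
                print(mysql("CHECK TABLE `%s`.`%s`" % (db, table)))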

    Read the article

  • Trouble connecting to vsftpd on ubuntu server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsFTPd and I have opened up ports 20, 21 on my firewall. In my vsFTPd configuration, I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection Refused" error. I have had a little more success with SSL disabled, however the connection process will time out after the LIST command (but it does accept my authentication). Here is my vsFTPd configuration, the SSL stuff is at the bottom: # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) #local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. #xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. 
Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. #chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Debian customization # # Some of vsftpd's settings don't fit the Debian filesystem layout by # default. These settings are more Debian-friendly. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/private/vsftpd.pem # SSL ssl_enable=YES allow_anon_ssl=NO force_local_data_ssl=YES force_local_logins_ssl=YES ssl_tlsv1=YES ssl_sslv2=YES ssl_sslv3=YES Thanks!
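    With force_local_data_ssl enabled, the directory listing travels over a separate passive data connection, so "login works but LIST times out" usually means the passive port range is undefined or blocked rather than anything wrong with ports 20/21. As a hedged illustration, here is a small Python sketch using the standard-library ftplib.FTP_TLS client to test explicit FTPS from another machine; the hostname and credentials are placeholders. If the listing hangs, defining pasv_min_port/pasv_max_port in vsftpd.conf and opening that range in the firewall is the usual fix.

        # Minimal sketch (assumptions: replace host/user/password; vsftpd does
        # explicit FTPS on port 21). If login succeeds but retrlines("LIST")
        # stalls, the passive data-port range is the likely culprit.
        from ftplib import FTP_TLS

        ftps = FTP_TLS()
        ftps.connect("ftp.example.com", 21, timeout=30)
        ftps.login("user", "password")
        ftps.prot_p()              # encrypt the data channel too
        ftps.retrlines("LIST")     # the step that times out when the passive
                                   # port range is blocked at the firewall
        ftps.quit()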

    Read the article

  • Can't Get Virtual Users Set Up in VSFTPD - Tried Everything

    - by N.T.
    Have Ubuntu 11.10 with vsftpd installed and working. Can not get virtual users setup at all? Vsftpd will allow main Ubuntu owner account to login, but nothing else? I've followed several tutorials on adding virtual users, but nothing works? I just need to add 2 virtual users and have them be able to upload files to vsftpd Ubuntu computer from other computers on my Lan network. Everywhere I've looked, people just point toward tutorials on adding virtual users, but that just is NOT working. I've been struggling with this for over a week now! PLEASE Help. Thanks. I'll even give a donation if someone can figure this out. here is the vsftpd.conf file I am using. I copied the original, and make a new one, every time I try a tutorial. So far, none have worked. Here is the vsftpd.conf file I'm using. (I hope this helps?) # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. 
#data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to Sage FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Debian customization # # Some of vsftpd's settings don't fit the Debian filesystem layout by # default. These settings are more Debian-friendly. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd local_root=/media/FilesDrive # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/private/vsftpd.pem
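    For reference, the most common virtual-user setup pairs vsftpd's guest mode with pam_userdb. The sketch below (Python, not from the post) builds the Berkeley DB credentials file that pam_userdb reads; it assumes the db_load tool is installed (the Ubuntu package is db-util or db5.x-util), that /etc/pam.d/vsftpd is switched to pam_userdb pointing at the database (the db= path is given without the .db extension), and that vsftpd.conf gains at least guest_enable=YES and a guest_username that maps to a real local account.

        # Minimal sketch of the common pam_userdb approach (assumptions:
        # "db_load" is installed; /etc/pam.d/vsftpd uses pam_userdb with
        # db=/etc/vsftpd/virtual_users; vsftpd.conf has guest_enable=YES and
        # guest_username set). Usernames/passwords below are placeholders.
        import os
        import subprocess

        users = {"alice": "firstpassword", "bob": "secondpassword"}

        os.makedirs("/etc/vsftpd", exist_ok=True)
        plain = "/etc/vsftpd/virtual_users.txt"
        with open(plain, "w") as f:
            for name, password in users.items():
                f.write(name + "\n" + password + "\n")   # alternating lines

        # db_load -T -t hash reads the alternating key/value lines written above.
        subprocess.run(["db_load", "-T", "-t", "hash",
                        "-f", plain, "/etc/vsftpd/virtual_users.db"], check=True)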

    Read the article

  • apache webserver unresponsive with server-status showing all child processes waiting for connection

    - by Jeff
    My setup: i have 3 nearly identical webserver machines serving the same high loaded dynamic website with simple load balancing over dns. The service has been working for over two ears with the same apache config. apache2, php5, ubuntu 8.04 linux 2.6.24-29-server My problem: since about two weeks i'm experiencing problems with this config. Nearly every day i have one small moment about 5 minutes, in which the website is unreachable. I'm still able to login to the servers over ssh. If i run htop, i see the machine simply doing nothing. i have about 1000 apache processes running, but no cpu activity. i've used the apache mod_status to debug this situation. the process scoreboard looks like this: _C.___K_______________________R._______.__K_K____K___C_______.__ _______C__________.___________________________________.________C _.____K__________K___K_WK_____._K_____________________________._ W______K__________K________.____________________._______C_______ _C_.__K__K____.._.._____________________________________C_______ _R___________K___.______C________.C_________.______._____C______ ____________KKC____K_____K__WC_________________C_____.__.____.__ _____________________C_________K______.____C______._____________ _.___C____.___.___________________________.K______.____K________ W__.___________________C.__.____K________K_______R_._.__._______ __C__C_.__________C__C_______._____W______________C_.___C_______ ____.______C_____________C________.____C____________.________._K __.__________.K_____________K_________._____C____.K__________KW_ __K.W________R_________._______.___W___________.____.__K_____W__ W___.___..________W____K Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process So the most of the processes are just waiting for connection. after about 5 minutes the situation will return to normal: i have lot least processes on every machine, the most workers have the "."-status (meaing they are open to process a request) and of course the website is reachable! so i'm trying to find something in the logs, but there is simply nothing... the apache access log is silent for about 4 minutes, the same is for the error log. i also can not figure out anything wrong in other system logs. the situation is the same on all 3 webservers (all of them have this load peak and unresposibility at the same time), so i do not thing this is hardware related. but i think, this might be related to some network (tcp) issue. any ideas? EDIT: some more information, that i have just discovered: it has just happened again. and i was able to verify that i'm also not able to connect locally when this problem occurs. i have made some connection statistics with the following command after it happend netstat -an|awk '/tcp/ {print $6}'|sort|uniq -c 109 CLOSE_WAIT 2652 ESTABLISHED 2 FIN_WAIT1 11 LAST_ACK 12 LISTEN 91 SYN_RECV 1 SYN_SENT 16 TIME_WAIT If i execute the same command some time later, i have something like this: 4 CLOSING 108 ESTABLISHED 18 FIN_WAIT1 182 FIN_WAIT2 37 LAST_ACK 12 LISTEN 50 SYN_RECV 11276 TIME_WAIT So in the normal situation i have only 100-200 open connections by clients beeing handled by apache in this moment. when i have this "crash", i have a lot more connections. what is the best way to analyse this? 
EDIT2: the important lines in apache2.conf are: KeepAlive On MaxKeepAliveRequests 20 KeepAliveTimeout 1 <IfModule mpm_prefork_module> ServerLimit 920 StartServers 30 MinSpareServers 80 MaxSpareServers 120 MaxClients 920 MaxRequestsPerChild 700 </IfModule> It is an Apache 2 prefork with mod_php. The server has 8 GB of RAM and a 4 GB swap partition.
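    To catch the event as it happens rather than reconstruct it afterwards, it helps to sample both the mod_status counters and the TCP state counts on an interval. Here is a minimal Python 3 sketch, offered only as an illustration under assumptions: mod_status is reachable at http://localhost/server-status?auto, netstat is installed, and the log path is a placeholder. Run it in a screen or tmux session so there is a timeline to inspect the next time all three servers lock up for a few minutes.

        # Minimal sampling sketch (assumptions: mod_status at
        # http://localhost/server-status?auto, netstat installed).
        import collections, subprocess, time, urllib.request

        def tcp_state_counts():
            out = subprocess.run(["netstat", "-ant"], capture_output=True,
                                 text=True).stdout
            states = [line.split()[-1] for line in out.splitlines()
                      if line.startswith("tcp")]
            return dict(collections.Counter(states))

        while True:
            try:
                status = urllib.request.urlopen(
                    "http://localhost/server-status?auto", timeout=5).read().decode()
                workers = [l for l in status.splitlines()
                           if l.startswith(("BusyWorkers", "IdleWorkers"))]
            except OSError as exc:
                workers = ["status fetch failed: %s" % exc]   # itself a data point
            with open("/var/log/apache-sample.log", "a") as log:
                log.write("%s %s %s\n" % (time.ctime(), workers, tcp_state_counts()))
            time.sleep(30)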

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help. e2fsck 1.41.12 (17-May-2010) lithe_root was not cleanly unmounted, check forced. Resize inode not valid. Recreate? no Pass 1: Checking inodes, blocks, and sizes Root inode is not a directory. Clear? no Root inode has dtime set (probably due to old mke2fs). Fix? no Inode 2 is in use, but has dtime set. Fix? no Inode 2 has a extra size (4730) which is invalid Fix? no Inode 2 has compression flag set on filesystem without compression support. Clear? no Inode 2 has INDEX_FL flag set but is not a directory. Clear HTree index? no HTREE directory inode 2 has an invalid root node. Clear HTree index? no Inode 2, i_size is 9581392125871137995, should be 0. Fix? no Inode 2, i_blocks is 40456527802719, should be 0. Fix? no Reserved inode 3 (<The ACL index inode>) has invalid mode. Clear? no Inode 3 has compression flag set on filesystem without compression support. Clear? no Inode 3 has INDEX_FL flag set but is not a directory. Clear HTree index? no .... Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
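    On the point of saving an image of the unencrypted disk before rebuilding: GNU ddrescue is the right tool if available, but as a sketch of the idea, the Python below copies the opened LUKS volume in fixed-size chunks, zero-fills any chunk that errors out, and records the bad offsets so a flaky SSD cannot abort the whole image. The device and output paths are placeholders, and it must run as root.

        # Minimal ddrescue-style sketch (assumptions: paths are placeholders,
        # run as root; use GNU ddrescue instead if it is installed).
        SRC = "/dev/mapper/lithe_root"
        DST = "/mnt/backup/lithe_root.img"
        CHUNK = 1 << 20

        with open(SRC, "rb", buffering=0) as src, open(DST, "wb") as dst, \
             open(DST + ".badlog", "w") as log:
            offset = 0
            while True:
                src.seek(offset)
                try:
                    block = src.read(CHUNK)
                except OSError:
                    block = b"\x00" * CHUNK           # unreadable: pad and move on
                    log.write("bad chunk at offset %d\n" % offset)
                if not block:
                    break
                dst.write(block)
                offset += CHUNK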

    Read the article

  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows' Documents and Settings folder which only contains my original user and within 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell: attrib *.* -r -a -s -h /s ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked. Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error: Update #2: I let chkdsk /f run which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these: Deleting an index entry from index $0 of file 25. ...followed by: Deleting index entry cookies in index $I30 of file 37576. ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time: chkdsk /f /r. Produced no messages. Same result when deleting. Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here: C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\ Inside each of those directories were files with names such as: 2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremely long: Original directory = 194 characters Filenames = 100+ characters Together the length exceeds the 255-char limit which is bad and would explain the error message I posted in Update #1. Partial Solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deleteable, C:\1\2\Favorites\Wien\What To Do.. C:\1\2\Favorites\Photography\FIRE Same error as above: Here is what Explorer properties shows for both folders: Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS-KB article which contains nearly everyone's idea and then some, inconspicuously titled: You cannot delete a file or a folder on an NTFS file system volume. I was able to delete the 2nd folder C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested. NOTE: A normal del did NOT work, nor did deleting from explorer. Now all that is left is the first directory C:\1\2\Favorites\Wien\What To Do.. (yes I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)
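    The same extended-length path trick from the KB article works from Python as well, since the "\\?\" prefix bypasses Win32 path normalization, which is what silently strips trailing dots and spaces from names like "What To Do..". A minimal sketch follows; the two paths are the example folders named above and should be adjusted, and os.rmdir only succeeds once the directories are empty.

        # Minimal sketch (assumptions: run on the affected Windows machine with
        # Python installed; adjust the paths). The "\\?\" prefix makes the name
        # be taken literally, trailing dots and spaces included.
        import os

        stubborn = [
            r"C:\1\2\Favorites\Wien\What To Do..",
            r"C:\1\2\Favorites\Photography\FIRE ",   # note the trailing space
        ]

        for path in stubborn:
            literal = "\\\\?\\" + path
            try:
                os.rmdir(literal)        # directories; use os.remove() for files
                print("removed", repr(path))
            except OSError as exc:
                print("failed on", repr(path), "->", exc)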

    Read the article

  • How to use more than 3 virtual disks in Linux using CentOS and XenServer

    - by 010110110101
    I've attached 5 virtual disks to a Virtual Machine in Citrix XenServer. The VM has the xs-tools installed. Initially it said that it couldn't add so many disks. After I installed the xs-tools, it let me add all the disks. But /dev doesn't show all the disks. It shows these: /dev/xvda /dev/xvdb /dev/xvdc /dev/cdrom Perhaps it is bound by the limits of an IDE bus? (3 disks + CD-ROM) If so, how does one change the VM to use SCSI? Edit: According to the documentation: 2.6.3. VM Block Devices In the PV Linux case, block devices are passed through as PV devices. XenServer does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment in the form of xvd* devices. It is also possible to get an sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This is not desirable so it is best to use xvd* where possible for PV guests (this is the default for Debian and RHEL). For Windows or other fully virtualized guests, XenServer emulates an IDE bus in the form of an hd* device. When using Windows, installing the Citrix Tools for Virtual Machines installs a special PV driver that works in a similar way to Linux, except in the fully virtualized environment. Still, with 5 virtual disks attached, I don't see the other xvd devices. Edit #2: (attached requested info) Host Machine: XenServer 6.1 Linux version 2.6.32.43-0.4.1.xs1.6.10.777.170770xen (geeko@buildhost) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Wed Apr 17 05:52:03 EDT 2013 Guest Machine: CentOS release 6.4 (Final) Linux version 2.6.32-358.6.2.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 20:59:36 UTC 2013 Output of 'fdisk -l' on Guest Machine: Note, the disk beyond the first 3 attached are not displaying -- there should be 4 100GB disks. (There are a total of 5 disks displayed in XenCenter -- 16GB, 100GB, 100GB, 100GB, 100GB) Disk /dev/xvdb: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xfb6c95b9 Device Boot Start End Blocks Id System /dev/xvdb1 1 13054 104856223+ 83 Linux Disk /dev/xvda: 17.2 GB, 17179869184 bytes 255 heads, 63 sectors/track, 2088 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e5f41 Device Boot Start End Blocks Id System /dev/xvda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. 
/dev/xvda2 64 2089 16264192 8e Linux LVM Disk /dev/xvdc: 107.4 GB, 107374182400 bytes 255 heads, 63 sectors/track, 13054 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xed249ced Device Boot Start End Blocks Id System /dev/xvdc1 1 13054 104856223+ 83 Linux Disk /dev/mapper/vg_blue-lv_root: 14.6 GB, 14571012096 bytes 255 heads, 63 sectors/track, 1771 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_blue-lv_swap: 2080 MB, 2080374784 bytes 255 heads, 63 sectors/track, 252 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 I see that the Linux versions say SMP. The Guest VM doesn't say "xen" in the name. However, I have already run yum install kernel-xen. Could be a clue?
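    Since PV disks do not sit on an emulated IDE or SCSI bus, the quickest check inside the guest is whether the kernel has registered the block devices at all, independent of what /dev or fdisk shows. A minimal Python sketch (run inside the CentOS guest) that lists every xvd* entry under /sys/block with its size:

        # Minimal sketch (assumptions: run inside the guest). Shows which xvd*
        # block devices the kernel has registered and how large they are.
        import os

        for name in sorted(os.listdir("/sys/block")):
            if not name.startswith("xvd"):
                continue
            with open("/sys/block/%s/size" % name) as f:
                sectors = int(f.read())
            print("%s  %.1f GiB" % (name, sectors * 512 / 2**30))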

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time—from 2-10 minutes or more (when using rsync, before it times out). Speed seems to vary wildly, and it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds that are normally associated with HDD death. Someone suggested that the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress... Anyway, the problem with rsync is that it seems to have no decent error handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed up partitions, rather than copying files off dying hard drives. Deleted file recovery is not what I need, obviously, so perhaps you can understand my disappointment in not being able to find what I'm after yet. Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing just specific files and directories. Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and then allows me to attempt recovery of those files with errors in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts ...just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does. So am I daydreaming here, or does something out there exist that can do this? Or, maybe even a way to make rsync or ddrescue work in such a way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", and can skip files with errors in the initial run, and try/retry those errors again later. So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible though. Anyway, here's what I've been using up till now: rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
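    As a rough illustration of the file-level approach described above, here is a minimal Python sketch (paths, block size, and the retry-list location are placeholders, not from the original post). The first pass walks the tree, copies each file in 64 KiB blocks, zero-fills any block that errors instead of stalling, and appends the affected file to a retry list; a later pass can re-run only the paths in that list, so the slow sectors are only hammered deliberately.

        # Minimal file-level rescue sketch (assumptions: run as root; SRC/DST
        # are placeholders). Skips unreadable blocks instead of stalling.
        import os, shutil

        SRC, DST = "/mnt/dying", "/mnt/rescue"
        BLOCK, RETRY_LIST = 64 * 1024, "/root/retry.txt"

        def rescue_file(src, dst):
            """Copy src to dst block by block; zero-fill unreadable blocks."""
            had_errors = False
            with open(src, "rb", buffering=0) as fin, open(dst, "wb") as fout:
                while True:
                    pos = fin.tell()
                    try:
                        block = fin.read(BLOCK)
                    except OSError:
                        had_errors = True
                        fin.seek(pos + BLOCK)        # hop over the bad stretch
                        block = b"\x00" * BLOCK
                    if not block:
                        break
                    fout.write(block)
            return had_errors

        with open(RETRY_LIST, "a") as retry:
            for root, dirs, files in os.walk(SRC):
                out_dir = os.path.join(DST, os.path.relpath(root, SRC))
                os.makedirs(out_dir, exist_ok=True)
                for name in files:
                    src = os.path.join(root, name)
                    dst = os.path.join(out_dir, name)
                    if os.path.exists(dst):
                        continue                     # rescued on an earlier pass
                    if rescue_file(src, dst):
                        retry.write(src + "\n")
                    shutil.copystat(src, dst)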

    Read the article

  • Using different SSD types (not only SATA-based) as the system drive

    - by Hubert Kario
    Currently I have a Thinkpad X61s and want to make it both a bit faster and a bit more power efficient. For that reason I thought that adding SSD drive would make most sense. Unfortunately, because of financial reasons, buying SSD of over 200GB capacity is out of reach for me (not only it would be worth more than the rest of the laptop, but also I currently have a 500GB drive in it, so even such a drive would be kind of a downgrade for me). During preliminary testing with a cheap Transcend 4GB Class 6 (14MiB/s streaming, 9MiB/s random read) card I experienced boot times to be reduced by half so putting the OS only on it would already would be an improvement. Unfortunately, my system now is about 11GiB in size so anything less than 16GB would be constraining. In this laptop I can connect additional drives on at least 5 different ways: using SATA-ATA converter caddy in the X6 Ultrabase using internal mini PCIe slot using integrated SDHC slot using CardBus (a.k.a PCMCIA or PC Card) slot using USB Thankfully, because I use only Linux on this PC the bootability of them is irrelevant as I can put the /boot partition on internal HDD and / on any of the above mentioned Flash memories (as I already did for the SDHC test). From what I was able to research and from my own experience those options come with rather big downsides or other problems: SATA-ATA caddy It has three downsides: I have to carry the Ultrabse with me at all times (it's not really inconvenient, but those grams do add) and couldn't disconnect it when I want to disconnect the battery It makes the bay unusable for the optical drive and occasional quick access to other hard drives the only caddies I could buy have rather flaky controllers in them so putting my OS on it would hamper its stability Internal mini PCIe slot This would be an ideal solution, if only I could find real PCIe SSDs, not only devices that could talk only SATA or ATA over PCIe mechanical connection (the ones used in Dell Mini or Asus EEE). Theoretically Samsung did release such devices but I couldn't find them in retail anywhere. Integrated SDHC slot It's a nice solution with a single drawback: the fastest 16GB SDHC card on the market can only do around 35MiB/s read and 15MiB/s write while still costing like a normal 40GB SATA SSD that's 10 times faster. Not really cost-effective. CardBus (a.k.a PCMCIA or PC Card) slot Those cards are much faster than the SDHC option (there are ones that can do well over 50MiB/s read in benchmarks) and from what I could find the PCMCIA controller in my laptop does support UDMA so it should be able to deliver comparable speeds. They still cost similarly to SD cards but at least they provide streaming performance comparable to my current HDD. USB That's the worst option. Not only is it limited to 20-30MiB/s by the interface itself the drive would stick out of the laptop so it's a big no no. The question As such I think that going the "CF in a CardBus adapter" route will be the best option. My question is: did anyone try using CF cards in CardBus adapters as system drives with Linux on Thinkpad laptops? Laptops in general? What was the real-world performance? I don't have any CF cards so I can't check how well does it work with suspend/resume, or whatever it's easy to make it work in initramfs (I'm using ArchLinux and SD card was trivial — add 3 modules in single config line and rebuilding initramfs) so any tips/gotchas on this are welcome as well.
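    Whichever slot the card ends up in, the go/no-go number is sequential read throughput measured from inside Linux. A minimal Python sketch for that comparison follows; /dev/sdX is a placeholder for the card-reader device (double-check it before running anything), it must run as root, and the page cache should be dropped first (echo 3 > /proc/sys/vm/drop_caches) or the result is cache speed rather than card speed.

        # Minimal read-throughput sketch (assumptions: /dev/sdX is the card,
        # run as root, page cache dropped beforehand).
        import time

        DEVICE, CHUNK, TOTAL = "/dev/sdX", 1 << 20, 256 << 20   # read 256 MiB

        with open(DEVICE, "rb", buffering=0) as dev:
            start, done = time.time(), 0
            while done < TOTAL:
                block = dev.read(CHUNK)
                if not block:
                    break
                done += len(block)
        print("%.1f MiB/s" % (done / (1 << 20) / (time.time() - start)))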

    Read the article

  • Do hard drive enclosures fail/is it the HDD or enclosure?

    - by x0a
    I'm having a whole host of problems with an external hard drive that was working just fine a couple of hours ago. I've had this problem before once, and that was about 3 months ago, here's what I documented: So a couple of hours ago I turned off all my computers and shut off the power to all my devices in my room, then went and turned the power off at the main switch so I could change an outlet. A couple hours later, after I've already slowly turned everything back on, I go to my xbox to try and watch a movie and it can't seem to list any of the movies I've got. So I go to my desktop to find that my external hard drive isn't there.. even though it's on and connected. It's also stationary and hidden behind something so there's not a whole lot of tampering/physical wear to that external. I plug it into my laptop to try and see what's going on. It starts making this endless loud screeching noise. None of that clicking that's usually associated with hd damage. It's not listed in my computers, and it shows up in Disk Management as "uninitialized" asking me to choose between two different partition types. After carefully disconnecting it and connecting it back, it asks me to format it, which I cancel. I start googling about my issue, starting to accept the situation, torn as hell and helpless and just about ready to toss the thing. Suddenly the screeching stops, after almost 45 minutes of it going, and Disk Management lists the drive as "Online" and "Healthy". Explorer pops up with all my files! I'm still being really careful with it and weary and treating it as though it's in fragile shape. I've downloaded some S.M.A.R.T. software to read the values and everything is listed as "OK" . No reallocated sectors, no read errors, no seek errors. I also ran a quick self-test, which completed without error. Everything seems fine. It looks to be a perfectly healthy external hard drive. So what the hell was that about? Was it doing some sort of maintenance or self-test? How am I supposed to tell the difference? I would've undoubtedly killed the drive for sure if had it gone on a bit longer. I've got the same problem now, with one exception: it doesn't magically reappear after the screeching stops. Occasionally I manage to get some S.M.A.R.T. diagnostics information, which basically reads everything as fine. The only problem is that my HD isn't initializing (so I can't access anything in it). I'm able to successfully run a quick smart test but not an extended one (I've only tried it once but got conflicting indications as to whether it was actually making any progress or not (was stuck on Random read test). So, final question (if all else fails): Could the hard drive enclosure be failing rather than the HDD? Is this a likely possibility at all? How would I know?
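    One way to separate "the drive is dying" from "the bridge in the enclosure is flaking out" is to watch whether the SMART error counters move while the symptoms are happening. A minimal Python sketch follows, with assumptions: smartmontools is installed, /dev/sdX is a placeholder for the external drive, it runs as root, and many USB bridges need "-d sat" for smartctl to reach the drive at all. If the screeching episodes come and go while Reallocated/Pending sector counts stay at zero, that points more toward the enclosure's bridge or power supply than the platters; trying the bare drive on a direct SATA port is the other half of the test.

        # Minimal SMART snapshot sketch (assumptions: smartmontools installed,
        # /dev/sdX placeholder, run as root; some bridges need "-d sat").
        import subprocess, time

        WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
                 "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

        out = subprocess.run(["smartctl", "-H", "-A", "-d", "sat", "/dev/sdX"],
                             capture_output=True, text=True).stdout
        print(time.ctime())
        for line in out.splitlines():
            if "overall-health" in line or any(name in line for name in WATCH):
                print(line)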

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode being removed type messages (can't remember exactly) I killed e2fsck and tried to remount the filesystem to backup data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck results in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue Now two weeks later I have an image of the luks partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors of various directories not existing and so forth but I was able to get some stuff. Which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of correcting group counts, I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. Re-ran again and said yes to all questions with the same result but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again, I can't come up with any other methods other than dropping to something like photorec which will give me an absolute mess with how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back. 
I wish I could give more detailed logs of the commands that have run, but the output is long scrolled passed except for what gets logged to syslog and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
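    Before another destructive pass, it can be worth enumerating where the backup superblocks actually sit for the filesystem's block size and probing each one read-only, working on a fresh copy of the ddrescue image rather than the original. The Python sketch below is only an illustration under assumptions: e2fsprogs is installed, the image path is a placeholder, mke2fs -n only prints what it would do (it writes nothing), and e2fsck -n -b probes a backup superblock without modifying anything.

        # Minimal read-only superblock probe (assumptions: fresh copy of the
        # image, e2fsprogs installed, 4 KiB block size as with most ext4).
        import re, subprocess

        IMG = "/mnt/recovery/candybox-copy.img"   # placeholder path

        probe = subprocess.run(["mke2fs", "-n", "-F", "-b", "4096", IMG],
                               capture_output=True, text=True).stdout
        marker = "Superblock backups stored on blocks:"
        backups = re.findall(r"\d+", probe.split(marker, 1)[1]) if marker in probe else []

        for block in backups:
            print("== trying backup superblock", block)
            subprocess.run(["e2fsck", "-n", "-b", block, IMG])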

    Read the article

  • I am starting to think that Prevx.com isn't a legit site... but here's my long-winded question

    - by cop1152
    I apologize in advance for the long-winded post. I posted it all because I believe its informative and may be useful. Also, I posted my question at the end. Moments ago I was RDC to a file server in my home (from inside my home). I had opened Firefox and Googled for a manufacturers website. Immediately after clicking the link, Firefox abruptly closed. This seemed odd to me to so I checked the running processes and discovered d.exe, e.exe, and f.exe running. I Googled these processes on a different machine and found them belonging to a key-logger/screen-capturer/trojan called defender.exe, which according to the Prevx lives in c:\documents and settings\user\local settings\temp. (Prevx link http://www.prevx.com/filenames/147352809685142526-X1/DEFENDER32.EXE.html) Simultaneously, an obviously-spoofed Windows Firewall popup appeared on the server asking me to click ‘yes’ to update Windows Firewall. At this time I ended all rogue processes, emptied the temp folder, removed defender.exe from startup, and checked my registry and a few other locations. Before deleting Defender.exe I noted that it was created moments ago, just before Firefox crashed. I believe that I was ‘almost’ infected with this malware. I believe that it needed me to click the phony popup in order to complete infection because it wasn’t allowed to execute processes from the temp folder. After cleaning the machine, I restarted it and have been monitoring it for over an hour. I am debating on whether or not to restore the Windows partition (a separate physical drive from the data) or to just watch it for awhle. I should mention that, because of the specs on this machine, I do not run antivirus software, but I know it well and inspect it regularly. It is a very old Compaq with a 400mhz processer and 512mb of ram. I have a static IP and the server is in the DMZ running an FTP client and some HTTP server software. All files transferred to and stored on this machine are scanned for malware before transferring. Usually the machine only runs 19 processes and performs pretty well for its intended purpose. I posted the story so that you could be aware of a possible new piece of malware and how it acts, but I also have a question or two. First, over the last few months I have noticed that PREVX is listed at the top of most of my Google searches when researching malware, especially for new or obscure malware…and they always want you to purchase something. I don’t think they are one of the top AV companies, so it seems odd that they are always the top Google result. Does anyone have any experience with any of their products? Also, what sites do you rely on for malware researching? Recently, I have found it difficult to find good info because of HijackThis-logs and other deadend info cluttering up my searches. And lastly, besides antivirus, third-party firewall, etc, what settings would you use to lock down a machine to make it more secure in instances where a stubborn admin like myself refuses to run AV? Thanks.

    Read the article

  • linux raid 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04), with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had pre-failure ominous SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array Everything went smooth until the very end. When it looked like it was finishing to synchronize the last partition, the other old one failed. At this point I am very unsure of the state of the system. Everything seems working and the files seem to be all accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sdb4[2](S) sda4[0] 478713792 blocks [2/1] [U_] md2 : active raid1 sdb3[1] sda3[2](F) 244140992 blocks [2/1] [_U] md1 : active raid1 sdb2[1] sda2[2](F) 244140992 blocks [2/1] [_U] md0 : active raid1 sdb1[1] sda1[2](F) 9764800 blocks [2/1] [_U] unused devices: <none> The order of [_U] vs [U_]. Why aren't they consistent along all the array? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but I found no explicit indication) If I read correctly for md0, [_U] should be /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3 ? I understand /dev/sdb4 is now spare because probably it failed to synchronize it 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive now cannot even be accessed by SMART anymore apparently, so I wouldn't expect it to be up. What is wrong with my interpretation of the output? 
I attach also the outputs of mdadm --detail for the four partitions: /dev/md0: Version : 00.90 Creation Time : Fri Jan 21 18:43:07 2011 Raid Level : raid1 Array Size : 9764800 (9.31 GiB 10.00 GB) Used Dev Size : 9764800 (9.31 GiB 10.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:27:33 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2 Events : 0.7704 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 1 - faulty spare /dev/sda1 /dev/md1: Version : 00.90 Creation Time : Fri Jan 21 18:43:15 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:39:06 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 8bcd5765:90dc93d5:cc70849c:224ced45 Events : 0.1508280 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 18 1 active sync /dev/sdb2 2 8 2 - faulty spare /dev/sda2 /dev/md2: Version : 00.90 Creation Time : Fri Jan 21 18:43:19 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:46:44 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 2885668b:881cafed:b8275ae8:16bc7171 Events : 0.2289636 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - faulty spare /dev/sda3 /dev/md3: Version : 00.90 Creation Time : Fri Jan 21 18:43:22 2011 Raid Level : raid1 Array Size : 478713792 (456.54 GiB 490.20 GB) Used Dev Size : 478713792 (456.54 GiB 490.20 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:19:20 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 0 0 1 removed 2 8 20 - spare /dev/sdb4 The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure what should I sync with what and what is going on. I am also quite baffled by the fact /dev/sda decided to fail exactly when the raid finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo
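
    For a quick sanity check, the status string can be mapped back to concrete devices programmatically. In these listings the bracketed number after a member (e.g. sdb1[1]) is its device number, which for active members corresponds to its slot, and position i of the [U_]/[_U] string reports whether slot i is up; the slots do not follow a fixed sda-then-sdb order, which is why the pattern differs between arrays. Below is a minimal Python sketch of that mapping; it only handles the formats shown above, so treat it as illustrative rather than a general mdstat parser.

        import re

        def mdstat_status(path="/proc/mdstat"):
            """Print which member device each [U_]/[_U] position refers to."""
            text = open(path).read()
            # grab each "mdN : active ..." line together with its following blocks line
            for block in re.findall(r"^(md\d+ : .*?\[[U_]+\])", text, re.M | re.S):
                members_line = block.splitlines()[0]
                name = members_line.split()[0]
                # "sdb1[1]" -> device number 1 is sdb1
                slots = {int(n): dev for dev, n in re.findall(r"(\w+)\[(\d+)\]", members_line)}
                flags = re.search(r"\[([U_]+)\]$", block).group(1)
                for i, flag in enumerate(flags):
                    state = "up" if flag == "U" else "down/missing"
                    print(f"{name} slot {i}: {slots.get(i, '(no active member)')} -> {state}")

        if __name__ == "__main__":
            mdstat_status()

    Run against the output above, it would report md0 slot 1 as sdb1 (up) with slot 0 missing, and md3 slot 0 as sda4 (up) with slot 1 missing, which matches the mdadm --detail listings that follow.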

    Read the article

  • Using AES encryption in .NET - CryptographicException saying the padding is invalid and cannot be removed

    - by Jake Petroules
    I wrote some AES encryption code in C# and I am having trouble getting it to encrypt and decrypt properly. If I enter "test" as the passphrase and "This data must be kept secret from everyone!" I receive the following exception: System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast) at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) at System.Security.Cryptography.CryptoStream.FlushFinalBlock() at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at System.IO.Stream.Dispose() ... And if I enter something less than 16 characters I get no output. I believe I need some special handling in the encryption since AES is a block cipher, but I'm not sure exactly what that is, and I wasn't able to find any examples on the web showing how. Here is my code: using System; using System.IO; using System.Security.Cryptography; using System.Text; public static class DatabaseCrypto { public static EncryptedData Encrypt(string password, string data) { return DatabaseCrypto.Transform(true, password, data, null, null) as EncryptedData; } public static string Decrypt(string password, EncryptedData data) { return DatabaseCrypto.Transform(false, password, data.DataString, data.SaltString, data.MACString) as string; } private static object Transform(bool encrypt, string password, string data, string saltString, string macString) { using (AesManaged aes = new AesManaged()) { aes.Mode = CipherMode.CBC; aes.Padding = PaddingMode.PKCS7; int key_len = aes.KeySize / 8; int iv_len = aes.BlockSize / 8; const int salt_size = 8; const int iterations = 8192; byte[] salt = encrypt ? new Rfc2898DeriveBytes(string.Empty, salt_size).Salt : Convert.FromBase64String(saltString); byte[] bc_key = new Rfc2898DeriveBytes("BLK" + password, salt, iterations).GetBytes(key_len); byte[] iv = new Rfc2898DeriveBytes("IV" + password, salt, iterations).GetBytes(iv_len); byte[] mac_key = new Rfc2898DeriveBytes("MAC" + password, salt, iterations).GetBytes(16); aes.Key = bc_key; aes.IV = iv; byte[] rawData = encrypt ? Encoding.UTF8.GetBytes(data) : Convert.FromBase64String(data); using (ICryptoTransform transform = encrypt ? aes.CreateEncryptor() : aes.CreateDecryptor()) using (MemoryStream memoryStream = encrypt ? new MemoryStream() : new MemoryStream(rawData)) using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, encrypt ? 
CryptoStreamMode.Write : CryptoStreamMode.Read)) { if (encrypt) { cryptoStream.Write(rawData, 0, rawData.Length); return new EncryptedData(salt, mac_key, memoryStream.ToArray()); } else { byte[] originalData = new byte[rawData.Length]; int count = cryptoStream.Read(originalData, 0, originalData.Length); return Encoding.UTF8.GetString(originalData, 0, count); } } } } } public class EncryptedData { public EncryptedData() { } public EncryptedData(byte[] salt, byte[] mac, byte[] data) { this.Salt = salt; this.MAC = mac; this.Data = data; } public EncryptedData(string salt, string mac, string data) { this.SaltString = salt; this.MACString = mac; this.DataString = data; } public byte[] Salt { get; set; } public string SaltString { get { return Convert.ToBase64String(this.Salt); } set { this.Salt = Convert.FromBase64String(value); } } public byte[] MAC { get; set; } public string MACString { get { return Convert.ToBase64String(this.MAC); } set { this.MAC = Convert.FromBase64String(value); } } public byte[] Data { get; set; } public string DataString { get { return Convert.ToBase64String(this.Data); } set { this.Data = Convert.FromBase64String(value); } } } static void ReadTest() { Console.WriteLine("Enter password: "); string password = Console.ReadLine(); using (StreamReader reader = new StreamReader("aes.cs.txt")) { EncryptedData enc = new EncryptedData(); enc.SaltString = reader.ReadLine(); enc.MACString = reader.ReadLine(); enc.DataString = reader.ReadLine(); Console.WriteLine("The decrypted data was: " + DatabaseCrypto.Decrypt(password, enc)); } } static void WriteTest() { Console.WriteLine("Enter data: "); string data = Console.ReadLine(); Console.WriteLine("Enter password: "); string password = Console.ReadLine(); EncryptedData enc = DatabaseCrypto.Encrypt(password, data); using (StreamWriter stream = new StreamWriter("aes.cs.txt")) { stream.WriteLine(enc.SaltString); stream.WriteLine(enc.MACString); stream.WriteLine(enc.DataString); Console.WriteLine("The encrypted data was: " + enc.DataString); } }
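
    A likely culprit in the encrypt branch above is that memoryStream.ToArray() is returned while the CryptoStream is still open, i.e. before FlushFinalBlock() (or Dispose) has written the final padded block. That would explain both the "padding is invalid" error on decryption and the empty output for inputs shorter than one 16-byte block; calling cryptoStream.FlushFinalBlock() before memoryStream.ToArray() is the usual fix, though that is a reading of the code rather than a confirmed diagnosis. For reference, a minimal sketch of the same CBC + PKCS7 round trip in Python, using the third-party cryptography package, where the explicit finalize() calls play the role of FlushFinalBlock():

        import os
        from cryptography.hazmat.primitives import padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key = os.urandom(32)   # 256-bit key (normally derived with PBKDF2, as above)
        iv = os.urandom(16)    # AES block size is 16 bytes

        def encrypt(plaintext: bytes) -> bytes:
            padder = padding.PKCS7(algorithms.AES.block_size).padder()
            padded = padder.update(plaintext) + padder.finalize()   # pad to a full block
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return enc.update(padded) + enc.finalize()              # final block included

        def decrypt(ciphertext: bytes) -> bytes:
            dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
            padded = dec.update(ciphertext) + dec.finalize()
            unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
            return unpadder.update(padded) + unpadder.finalize()    # strips the padding

        msg = b"This data must be kept secret from everyone!"
        assert decrypt(encrypt(msg)) == msg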

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: we are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (the new version of ADAM on Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg,C=AU). Under this structure I have around 40,000 OUs, each one representing a customer; under each customer's OU are one or more user (inetOrgPerson) objects (around 60,000 in all). Each user holds one or more certificates in the userCertificate attribute. A combination of in-house application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code) and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it into LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP-compliant directory to host these certificates.) We have fully tested the application using the data in LDS and everything works fine - here is my dilemma and question (finally, phew!). There was no process put in place for removing revoked or expired certificates; consequently the vast majority of the data is completely useless - the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory, I would like to start with a clean directory. Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling it may be usable here, but in exactly what way I am not sure. I would prefer to write a script where I could specify an expiration date (say, any certs that expired prior to 2010) and then run it against the LDS partition. That way I can reuse the script periodically to clean up the directory (as mentioned above, I have no way to adjust the applications that are writing the certs - that is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice MUCH appreciated. Cheers, Jon
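
    If a scripting language other than VB is acceptable, one possible approach can be sketched with Python's third-party ldap3 and cryptography packages: search the partition for entries carrying userCertificate values, parse each DER blob, and issue a MODIFY_DELETE for only the values whose notAfter date falls before the cutoff. The host, bind DN and password below are placeholders, a directory of this size (60,000 users) would want a paged search rather than the single search shown, and this should obviously be dry-run against a copy of the partition first.

        from datetime import datetime
        from ldap3 import Server, Connection, MODIFY_DELETE, SUBTREE
        from cryptography import x509

        CUTOFF = datetime(2010, 1, 1)
        BASE_DN = "o=myorg,C=AU"          # the application partition from the question

        conn = Connection(Server("lds.example.com", port=389),
                          user="CN=admin,o=myorg,C=AU",      # placeholder bind DN
                          password="secret", auto_bind=True)

        conn.search(BASE_DN, "(userCertificate=*)", search_scope=SUBTREE,
                    attributes=["userCertificate"])

        for entry in conn.entries:
            expired = []
            for der in entry["userCertificate"].raw_values:   # one value per certificate
                cert = x509.load_der_x509_certificate(der)
                if cert.not_valid_after < CUTOFF:
                    expired.append(der)
            if expired:
                # delete only the expired values, leaving current certificates in place
                conn.modify(entry.entry_dn, {"userCertificate": [(MODIFY_DELETE, expired)]})
                print(f"Removed {len(expired)} expired cert(s) from {entry.entry_dn}")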

    Read the article

  • AD Password About to Expire check problem with ASP.Net

    - by Vince
    Hello everyone, I am trying to write some code to check the AD password age during a user login and notify them of the 15 remaining days. I am using the ASP.Net code that I found on the Microsoft MSDN site and I managed to add a function that checks the if the account is set to change password at next login. The login and the change password at next login works great but I am having some problems with the check for the password age. This is the VB.Net code for the DLL file: Imports System Imports System.Text Imports System.Collections Imports System.DirectoryServices Imports System.DirectoryServices.AccountManagement Imports System.Reflection 'Needed by the Password Expiration Class Only -Vince Namespace FormsAuth Public Class LdapAuthentication Dim _path As String Dim _filterAttribute As String 'Code added for the password expiration added by Vince Private _domain As DirectoryEntry Private _passwordAge As TimeSpan = TimeSpan.MinValue Const UF_DONT_EXPIRE_PASSWD As Integer = &H10000 'Function added by Vince Public Sub New() Dim root As New DirectoryEntry("LDAP://rootDSE") root.AuthenticationType = AuthenticationTypes.Secure _domain = New DirectoryEntry("LDAP://" & root.Properties("defaultNamingContext")(0).ToString()) _domain.AuthenticationType = AuthenticationTypes.Secure End Sub 'Function added by Vince Public ReadOnly Property PasswordAge() As TimeSpan Get If _passwordAge = TimeSpan.MinValue Then Dim ldate As Long = LongFromLargeInteger(_domain.Properties("maxPwdAge")(0)) _passwordAge = TimeSpan.FromTicks(ldate) End If Return _passwordAge End Get End Property Public Sub New(ByVal path As String) _path = path End Sub 'Function added by Vince Public Function DoesUserHaveToChangePassword(ByVal userName As String) As Boolean Dim ctx As PrincipalContext = New PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Domain) Dim up = UserPrincipal.FindByIdentity(ctx, userName) Return (Not up.LastPasswordSet.HasValue) 'returns true if last password set has no value. End Function Public Function IsAuthenticated(ByVal domain As String, ByVal username As String, ByVal pwd As String) As Boolean Dim domainAndUsername As String = domain & "\" & username Dim entry As DirectoryEntry = New DirectoryEntry(_path, domainAndUsername, pwd) Try 'Bind to the native AdsObject to force authentication. Dim obj As Object = entry.NativeObject Dim search As DirectorySearcher = New DirectorySearcher(entry) search.Filter = "(SAMAccountName=" & username & ")" search.PropertiesToLoad.Add("cn") Dim result As SearchResult = search.FindOne() If (result Is Nothing) Then Return False End If 'Update the new path to the user in the directory. _path = result.Path _filterAttribute = CType(result.Properties("cn")(0), String) Catch ex As Exception Throw New Exception("Error authenticating user. 
" & ex.Message) End Try Return True End Function Public Function GetGroups() As String Dim search As DirectorySearcher = New DirectorySearcher(_path) search.Filter = "(cn=" & _filterAttribute & ")" search.PropertiesToLoad.Add("memberOf") Dim groupNames As StringBuilder = New StringBuilder() Try Dim result As SearchResult = search.FindOne() Dim propertyCount As Integer = result.Properties("memberOf").Count Dim dn As String Dim equalsIndex, commaIndex Dim propertyCounter As Integer For propertyCounter = 0 To propertyCount - 1 dn = CType(result.Properties("memberOf")(propertyCounter), String) equalsIndex = dn.IndexOf("=", 1) commaIndex = dn.IndexOf(",", 1) If (equalsIndex = -1) Then Return Nothing End If groupNames.Append(dn.Substring((equalsIndex + 1), (commaIndex - equalsIndex) - 1)) groupNames.Append("|") Next Catch ex As Exception Throw New Exception("Error obtaining group names. " & ex.Message) End Try Return groupNames.ToString() End Function 'Function added by Vince Public Function WhenExpires(ByVal username As String) As TimeSpan Dim ds As New DirectorySearcher(_domain) ds.Filter = [String].Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username) Dim sr As SearchResult = FindOne(ds) Dim user As DirectoryEntry = sr.GetDirectoryEntry() Dim flags As Integer = CInt(user.Properties("userAccountControl").Value) If Convert.ToBoolean(flags And UF_DONT_EXPIRE_PASSWD) Then 'password never expires Return TimeSpan.MaxValue End If 'get when they last set their password Dim pwdLastSet As DateTime = DateTime.FromFileTime(LongFromLargeInteger(user.Properties("pwdLastSet").Value)) ' return pwdLastSet.Add(PasswordAge).Subtract(DateTime.Now); If pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0 Then Return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now) Else Return TimeSpan.MinValue 'already expired End If End Function 'Function added by Vince Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long Dim type As System.Type = largeInteger.[GetType]() Dim highPart As Integer = CInt(type.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Dim lowPart As Integer = CInt(type.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Return CLng(highPart) << 32 Or CUInt(lowPart) End Function 'Function added by Vince Private Function FindOne(ByVal searcher As DirectorySearcher) As SearchResult Dim sr As SearchResult = Nothing Dim src As SearchResultCollection = searcher.FindAll() If src.Count > 0 Then sr = src(0) End If src.Dispose() Return sr End Function End Class End Namespace And this is the Login.aspx page: sub Login_Click(sender as object,e as EventArgs) Dim adPath As String = "LDAP://DC=xxx,DC=com" 'Path to your LDAP directory server Dim adAuth As LdapAuthentication = New LdapAuthentication(adPath) Try If (True = adAuth.DoesUserHaveToChangePassword(txtUsername.Text)) Then Response.Redirect("passchange.htm") ElseIf (True = adAuth.IsAuthenticated(txtDomain.Text, txtUsername.Text, txtPassword.Text)) Then Dim groups As String = adAuth.GetGroups() 'Create the ticket, and add the groups. Dim isCookiePersistent As Boolean = chkPersist.Checked Dim authTicket As FormsAuthenticationTicket = New FormsAuthenticationTicket(1, _ txtUsername.Text, DateTime.Now, DateTime.Now.AddMinutes(60), isCookiePersistent, groups) 'Encrypt the ticket. Dim encryptedTicket As String = FormsAuthentication.Encrypt(authTicket) 'Create a cookie, and then add the encrypted ticket to the cookie as data. 
Dim authCookie As HttpCookie = New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket) If (isCookiePersistent = True) Then authCookie.Expires = authTicket.Expiration End If 'Add the cookie to the outgoing cookies collection. Response.Cookies.Add(authCookie) 'Retrieve the password life Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) 'You can redirect now. If (passAge.Days = 90) Then errorLabel.Text = "Your password will expire in " & DateTime.Now.Subtract(t) 'errorLabel.Text = "This is" 'System.Threading.Thread.Sleep(5000) Response.Redirect("http://somepage.aspx") Else Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUsername.Text, False)) End If Else errorLabel.Text = "Authentication did not succeed. Check user name and password." End If Catch ex As Exception errorLabel.Text = "Error authenticating. " & ex.Message End Try End Sub ` Every time I have this Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) enabled, I receive "Arithmetic operation resulted in an overflow." during the login and won't continue. What am I doing wrong? How can I correct this? Please help!! Thank you very much for any help in advance. Vince
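
    One common cause of "Arithmetic operation resulted in an overflow" in this kind of code is the LargeInteger conversion: IADsLargeInteger.LowPart comes back as a signed 32-bit value, and converting a negative LowPart with CUInt() throws an OverflowException, so the usual fix is to mask the low 32 bits instead, e.g. Return (CLng(highPart) << 32) Or (CLng(lowPart) And &HFFFFFFFFL). That is a suspicion based on the code shown rather than a confirmed diagnosis. The same conversion written out in Python, using a synthetic test value, looks like this:

        def long_from_large_integer(high_part: int, low_part: int) -> int:
            """Combine the signed HighPart/LowPart of an ADSI LargeInteger into one
            64-bit value. Masking low_part avoids the overflow you get in VB.NET
            when a negative LowPart is converted with CUInt()."""
            return (high_part << 32) | (low_part & 0xFFFFFFFF)

        def split_large_integer(value: int):
            """Split a 64-bit value into the signed HighPart/LowPart pair that
            IADsLargeInteger exposes (used here only to build test data)."""
            unsigned = value & 0xFFFFFFFFFFFFFFFF
            high, low = unsigned >> 32, unsigned & 0xFFFFFFFF
            if high >= 1 << 31:
                high -= 1 << 32
            if low >= 1 << 31:
                low -= 1 << 32
            return high, low

        # synthetic value whose low 32 bits have the sign bit set - exactly the
        # case where CUInt(lowPart) overflows in VB.NET
        value = (1234 << 32) | 0x80000000
        high, low = split_large_integer(value)
        assert low == -0x80000000
        assert long_from_large_integer(high, low) == value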

    Read the article

  • JBoss EJB Bean not bound

    - by portoalet
    Hi, I have the following error Exception in thread "main" javax.naming.NameNotFoundException: CounterBean not bound trying to access an EJB JAR CounterBean.jar deployed on JBoss5 from a client application outside the Application Server. From the Jboss log, it looks like it does not have a global JNDI name? Is this ok? What have I done wrong? JBoss log: 13:50:39,669 INFO [JBossASKernel] Created KernelDeployment for: Counter.jar 13:50:39,672 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=Counter.jar,name=CounterBean,service=EJB3 13:50:39,672 INFO [JBossASKernel] with dependencies: 13:50:39,672 INFO [JBossASKernel] and demands: 13:50:39,673 INFO [JBossASKernel] partition:partitionName=DefaultPartition; Required: Described 13:50:39,673 INFO [JBossASKernel] jboss.ejb:service=EJBTimerService; Required: Described 13:50:39,673 INFO [JBossASKernel] and supplies: 13:50:39,673 INFO [JBossASKernel] jndi:CounterBean 13:50:39,673 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=Counter.jar,name=CounterBean,service=EJB3) to KernelDeployment of: Counte r.jar 13:50:39,712 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=Counter.jar,name=CounterBean,service=EJB3 13:50:39,727 INFO [EJBContainer] STARTED EJB: com.don.CounterBean ejbName: CounterBean 13:50:39,732 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI: The client code is: public static void main(String[] args) throws NamingException, InterruptedException { InitialContext ctx = new InitialContext(); Counter s = (Counter)ctx.lookup("CounterBean/remote"); for(int i = 0; i < 100; i++ ) { s.printCount(i); Thread.sleep(1000); } } Error message: java -Djava.naming.provider.url=jnp://123.123.123.123:1099 -Djava.naming.factory.initial=org.jnp.interfaces.NamingContextFactory com.don.Client Exception in thread "main" javax.naming.NameNotFoundException: CounterBean not bound at org.jnp.server.NamingServer.getBinding(NamingServer.java:771) at org.jnp.server.NamingServer.getBinding(NamingServer.java:779) at org.jnp.server.NamingServer.getObject(NamingServer.java:785) at org.jnp.server.NamingServer.lookup(NamingServer.java:396) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305) at sun.rmi.transport.Transport$1.run(Transport.java:159) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:155) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142) at org.jnp.server.NamingServer_Stub.lookup(Unknown Source) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:726) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:686) at 
javax.naming.InitialContext.lookup(InitialContext.java:392) at com.don.Client.main(Client.java:10)

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that: it (the ID) should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and the same OS installation; it should not change if the hardware configuration is modified in most ways (except changing the motherboard); and when a hard drive with the client software installed is cloned to another computer with an identical (or as similar as possible) hardware configuration, the client software should be aware of that change. A little bit of explanation and some back-story: this is basically an age-old question that also touches on the topic of software copy protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :) I'm working on client-server software that is supposed to work on a local network. One of the problems I have to solve is to identify each unique client on the network (not so much of a problem), so that I can apply certain attributes to every specific client and retain and enforce those attributes during the deployment lifetime of that client. While I was looking for a solution, I was aware of the following: the Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications; disk imaging software copies along all Volume IDs (tied to each partition when formatted); and custom, uniquely generated IDs created during installation, during first run, or in any other way are strictly software in nature and stored in the registry or on the hard drive, so it's very easy to confuse two clients. The obvious choice for this kind of problem would be to find BIOS identifiers (not 100% sure if these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated, isn't transferred by cloning, and can't be changed (at least not by using some user-space program). Everything else fails as either not reliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes). Am I missing something obvious here? A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish... Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can even be completely software-based and non-unique at any given moment), and tells the client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with the requirement to tie attributes to a specific client during its lifetime.
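
    For what it's worth, the SMBIOS system UUID is one concrete "BIOS identifier" that can be read from user space: it survives reinstalls and disk cloning and changes with the motherboard. A rough Python sketch follows; the exact commands are platform-dependent (wmic is deprecated on recent Windows builds, dmidecode needs root), and some boards ship a blank or duplicated UUID, so treat it as an illustration rather than a guaranteed-unique source.

        import hashlib
        import platform
        import subprocess

        def smbios_uuid() -> str:
            """Read the SMBIOS system UUID (motherboard-scoped, survives disk cloning)."""
            if platform.system() == "Windows":
                out = subprocess.check_output(["wmic", "csproduct", "get", "uuid"], text=True)
                lines = [line.strip() for line in out.splitlines() if line.strip()]
                return lines[1] if len(lines) > 1 else ""
            # Linux: needs root, or read /sys/class/dmi/id/product_uuid directly
            return subprocess.check_output(["dmidecode", "-s", "system-uuid"], text=True).strip()

        def client_id() -> str:
            """Derive the client ID by hashing, so the raw hardware value is never stored."""
            return hashlib.sha256(smbios_uuid().encode("utf-8")).hexdigest()

        if __name__ == "__main__":
            print(client_id())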

    Read the article

  • Connecting to Active Directory Application Mode from Perl

    - by Khurram Aziz
    I am trying to connect to Active Directory Application Mode instance. The instance is conenctable from third party LDAP clients like Softerra LDAP Browser. But I am getting the following error when connecting from Perl Net::LDAP=HASH(0x876d8e4) sending: Net::LDAP=HASH(0x876d8e4) received: 30 84 00 00 00 A7 02 01 02 65 84 00 00 00 9E 0A 0........e...... 01 01 04 00 04 84 00 00 00 93 30 30 30 30 30 34 ..........000004 44 43 3A 20 4C 64 61 70 45 72 72 3A 20 44 53 49 DC: LdapErr: DSI 44 2D 30 43 30 39 30 36 32 42 2C 20 63 6F 6D 6D D-0C09062B, comm 65 6E 74 3A 20 49 6E 20 6F 72 64 65 72 20 74 6F ent: In order to 20 70 65 72 66 6F 72 6D 20 74 68 69 73 20 6F 70 perform this op 65 72 61 74 69 6F 6E 20 61 20 73 75 63 63 65 73 eration a succes 73 66 75 6C 20 62 69 6E 64 20 6D 75 73 74 20 62 sful bind must b 65 20 63 6F 6D 70 6C 65 74 65 64 20 6F 6E 20 74 e completed on t 68 65 20 63 6F 6E 6E 65 63 74 69 6F 6E 2E 2C 20 he connection., 64 61 74 61 20 30 2C 20 76 65 63 65 00 __ __ __ data 0, vece.` My directory structure is Partition: CN=Apps,DC=MyCo,DC=COM User exists as CN=myuser,CN=Apps,DC=MyCo,DC=COM I have couple of other entries of the custom class which I am interested to browse; those instances appear fine in ADSI Edit, Softerra LDAP Browser etc. I am new to Perl....My perl code is #!/usr/bin/perl use Net::LDAP; $ldap = Net::LDAP->new("127.0.0.1", debug => 2, user => "CN=myuser,CN=Apps,DC=MyCo,DC=COM", password => "secret" ) or die "$@"; $ldap->bind(version => 3) or die "$@"; print "Connected to ldap\n"; $mesg = $ldap->search( filter => "(objectClass=*)" ) or die ("Failed on search.$!"); my $max = $mesg->count; print "$max records found!\n"; for( my $index = 0 ; $index < $max ; $index++) { my $entry = $mesg->entry($index); my $dn = $entry->dn; @attrs = $entry->attributes; foreach my $var (@attrs) { $attr = $entry->get_value( $var, asref => 1 ); if ( defined($attr) ) { foreach my $value ( @$attr ) { print "$var: $value\n"; } } } } $ldap->unbind();
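
    The server's own message ("a successful bind must be completed on the connection") suggests the search is running over an effectively anonymous connection: with Net::LDAP, credentials passed to new() are not used for authentication, so $ldap->bind(version => 3) performs an anonymous bind; the DN and password would normally be supplied to bind() itself, e.g. $ldap->bind('CN=myuser,CN=Apps,DC=MyCo,DC=COM', password => 'secret', version => 3). For comparison, a minimal authenticated search against the same LDS instance in Python with the third-party ldap3 package might look roughly like this (host, port and credentials are placeholders):

        from ldap3 import Server, Connection, SUBTREE

        conn = Connection(Server("127.0.0.1", port=389),
                          user="CN=myuser,CN=Apps,DC=MyCo,DC=COM",
                          password="secret",
                          auto_bind=True)      # the simple bind happens here, before any search

        conn.search("CN=Apps,DC=MyCo,DC=COM", "(objectClass=*)",
                    search_scope=SUBTREE, attributes=["*"])
        for entry in conn.entries:
            print(entry.entry_dn)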

    Read the article

  • Finding minimum cut-sets between bounded subgraphs

    - by Tore
    If a game map is partitioned into subgraphs, how do I minimize the edges between subgraphs? I have a problem: I'm trying to make A* searches through a grid-based game like Pacman or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum and minimum number of vertices per subgraph that act as soft constraints. Alternatively you could say I am looking to find bridges between subgraphs, but it's generally the same problem. Given a game that looks like this, what I want to do is find enclosures so that I can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what I want is to find these colored regions on any given map. My motivation: the reason I'm bothering to do this, and not just staying content with the performance of a simple Manhattan-distance heuristic, is that an enclosure heuristic can give more optimal results, I would not have to actually run the A* to get proper distance calculations, and it also allows later adding competitive blocking of opponents within these enclosures when playing Sokoban-type games. The enclosure heuristic can also be used in a minimax approach to finding goal vertices more properly. A possible solution to the problem is the Kernighan-Lin algorithm: function Kernighan-Lin(G(V,E)): determine a balanced initial partition of the nodes into sets A and B do A1 := A; B1 := B compute D values for all a in A1 and b in B1 for (i := 1 to |V|/2) find a[i] from A1 and b[i] from B1, such that g[i] = D[a[i]] + D[b[i]] - 2*c[a][b] is maximal move a[i] to B1 and b[i] to A1 remove a[i] and b[i] from further consideration in this pass update D values for the elements of A1 = A1 / a[i] and B1 = B1 / b[i] end for find k which maximizes g_max, the sum of g[1],...,g[k] if (g_max > 0) then Exchange a[1],a[2],...,a[k] with b[1],b[2],...,b[k] until (g_max <= 0) return G(V,E) My problem with this algorithm is its runtime of O(n^2 * lg(n)); I am thinking of limiting the nodes in A1 and B1 to the border of each subgraph to reduce the amount of work done. I also don't understand the c[a][b] cost in the algorithm: if a and b do not have an edge between them, is the cost assumed to be 0 or infinity, or should I create an edge based on some heuristic? Do you know what c[a][b] is supposed to be when there is no edge between a and b? Do you think my problem is suited to a multi-level approach? Why or why not? Do you have a good idea for how to reduce the work done with the Kernighan-Lin algorithm for my problem?
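
    On the c[a][b] question: in the standard formulation, c[a][b] is simply the weight of the edge between a and b, so it is taken to be 0 (not infinity) when no edge exists; there is no need to invent one. A small Python sketch of a single Kernighan-Lin pass with that convention follows; for clarity it recomputes the D values from scratch each round instead of applying the incremental update from the pseudocode, so it illustrates the logic rather than the intended O(n^2 lg n) running time.

        def kl_pass(adj, A, B):
            """One Kernighan-Lin pass over partition (A, B).
            adj[u][v] is the edge weight; a missing entry means no edge, i.e. cost 0."""
            def w(u, v):
                return adj.get(u, {}).get(v, 0)

            def D(u, own, other):
                # external cost minus internal cost for vertex u
                return sum(w(u, v) for v in other) - sum(w(u, v) for v in own)

            curA, curB = set(A), set(B)
            freeA, freeB = set(A), set(B)
            gains, swaps = [], []

            while freeA and freeB:
                a, b = max(((x, y) for x in freeA for y in freeB),
                           key=lambda p: D(p[0], curA, curB) + D(p[1], curB, curA) - 2 * w(*p))
                gains.append(D(a, curA, curB) + D(b, curB, curA) - 2 * w(a, b))
                swaps.append((a, b))
                # tentatively swap a and b, then lock them for the rest of the pass
                curA.remove(a); curB.add(a)
                curB.remove(b); curA.add(b)
                freeA.remove(a); freeB.remove(b)

            # keep the prefix of swaps whose cumulative gain is largest
            best_k, best_gain, running = 0, 0, 0
            for k, g in enumerate(gains, 1):
                running += g
                if running > best_gain:
                    best_k, best_gain = k, running

            newA, newB = set(A), set(B)
            for a, b in swaps[:best_k]:
                newA.remove(a); newA.add(b)
                newB.remove(b); newB.add(a)
            return newA, newB, best_gain   # repeat the pass until best_gain <= 0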

    Read the article

  • Best way to handle multiple tables to replace one big table in Rails? (e.g. 'Books1', 'Books2', etc.

    - by mikep
    Hello, I've decided to use multiple tables for an entity (e.g. Books1, Books2, Books3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just Books). I'm doing this to try to avoid a potential future performance drop that could come with having too many rows in one table. With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations. (I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. I'm going to split the adds across the tables. The goal is to try to keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following: class User < ActiveRecord::Base has_many :books1, :books2, :books3, :books4, :books5 end class Books1 < ActiveRecord::Base belongs_to :user end class Books2 < ActiveRecord::Base belongs_to :user end class Books3 < ActiveRecord::Base belongs_to :user end I'm assuming that the main performance hit would come in terms of memory and possibly some method call overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_. But for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time and any other has_many :bookX association that was loaded would be a waste. For example, with a user.id of 2, I'd only need books3.find_by_author('Author'), but the way I'm thinking of setting this up, I'd still have access to Books1..n. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. So, a few questions: 1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this? 2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this? 3) Does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class? For example, so I can call books.find_by_author('Author') instead of books3.find_by_author('Author'). Thank you!

    Read the article

  • Is the design notion of layers contrived?

    - by Bruce
    Hi all I'm reading through Eric Evans' awesome work, Domain-Driven Design. However, I can't help feeling that the 'layers' model is contrived. To expand on that statement, it seems as if it tries to shoe-horn various concepts into a specific, neat model, that of layers talking to each other. It seems to me that the layers model is too simplified to actually capture the way that (good) software works. To expand further: Evans says: "Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below. Follow standard architectural patterns to provide loose coupling to the layers above." Maybe I'm misunderstanding what 'depends' means, but as far as I can see, it can either mean a) Class X (in the UI for example) has a reference to a concrete class Y (in the main application) or b) Class X has a reference to a class Y-ish object providing class Y-ish services (ie a reference held as an interface). If it means (a), then this is clearly a bad thing, since it defeats re-using the UI as a front-end to some other application that provides Y-ish functionality. But if it means (b), then how is the UI any more dependent on the application, than the application is dependent on the UI? Both are decoupled from each other as much as they can be while still talking to each other. Evans' layer model of dependencies going one way seems too neat. First, isn't it more accurate to say that each area of the design provides a module that is pretty much an island to itself, and that ideally all communication is through interfaces, in a contract-driven/responsibility-driven paradigm? (ie, the 'dependency only on lower layers' is contrived). Likewise with the domain layer talking to the database - the domain layer is as decoupled (through DAO etc) from the database as the database is from the domain layer. Neither is dependent on the other, both can be swapped out. Second, the idea of a conceptual straight line (as in from one layer to the next) is artificial - isn't there more a network of intercommunicating but separate modules, including external services, utility services and so on, branching off at different angles? Thanks all - hoping that your responses can clarify my understanding on this..

    Read the article

  • How can I connect to a mail server using SMTP over SSL using Python?

    - by jakecar
    Hello, So I have been having a hard time sending email from my school's email address. It is SSL and I could only find this code online by Matt Butcher that works with SSL: import smtplib, socket version = "1.00" all = ['SMTPSSLException', 'SMTP_SSL'] SSMTP_PORT = 465 class SMTPSSLException(smtplib.SMTPException): """Base class for exceptions resulting from SSL negotiation.""" class SMTP_SSL (smtplib.SMTP): """This class provides SSL access to an SMTP server. SMTP over SSL typical listens on port 465. Unlike StartTLS, SMTP over SSL makes an SSL connection before doing a helo/ehlo. All transactions, then, are done over an encrypted channel. This class is a simple subclass of the smtplib.SMTP class that comes with Python. It overrides the connect() method to use an SSL socket, and it overrides the starttles() function to throw an error (you can't do starttls within an SSL session). """ certfile = None keyfile = None def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None): """Initialize a new SSL SMTP object. If specified, `host' is the name of the remote host to which this object will connect. If specified, `port' specifies the port (on `host') to which this object will connect. `local_hostname' is the name of the localhost. By default, the value of socket.getfqdn() is used. An SMTPConnectError is raised if the SMTP host does not respond correctly. An SMTPSSLError is raised if SSL negotiation fails. Warning: This object uses socket.ssl(), which does not do client-side verification of the server's cert. """ self.certfile = certfile self.keyfile = keyfile smtplib.SMTP.__init__(self, host, port, local_hostname) def connect(self, host='localhost', port=0): """Connect to an SMTP server using SSL. `host' is localhost by default. Port will be set to 465 (the default SSL SMTP port) if no port is specified. If the host name ends with a colon (`:') followed by a number, that suffix will be stripped off and the number interpreted as the port number to use. This will override the `port' parameter. Note: This method is automatically invoked by __init__, if a host is specified during instantiation. """ # MB: Most of this (Except for the socket connection code) is from # the SMTP.connect() method. I changed only the bare minimum for the # sake of compatibility. if not port and (host.find(':') == host.rfind(':')): i = host.rfind(':') if i >= 0: host, port = host[:i], host[i+1:] try: port = int(port) except ValueError: raise socket.error, "nonnumeric port" if not port: port = SSMTP_PORT if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) msg = "getaddrinfo returns an empty list" self.sock = None for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res try: self.sock = socket.socket(af, socktype, proto) if self.debuglevel > 0: print>>stderr, 'connect:', (host, port) self.sock.connect(sa) # MB: Make the SSL connection. sslobj = socket.ssl(self.sock, self.keyfile, self.certfile) except socket.error, msg: if self.debuglevel > 0: print>>stderr, 'connect fail:', (host, port) if self.sock: self.sock.close() self.sock = None continue break if not self.sock: raise socket.error, msg # MB: Now set up fake socket and fake file classes. # Thanks to the design of smtplib, this is all we need to do # to get SSL working with all other methods. 
self.sock = smtplib.SSLFakeSocket(self.sock, sslobj) self.file = smtplib.SSLFakeFile(sslobj); (code, msg) = self.getreply() if self.debuglevel > 0: print>>stderr, "connect:", msg return (code, msg) def setkeyfile(self, keyfile): """Set the absolute path to a file containing a private key. This method will only be effective if it is called before connect(). This key will be used to make the SSL connection.""" self.keyfile = keyfile def setcertfile(self, certfile): """Set the absolute path to a file containing a x.509 certificate. This method will only be effective if it is called before connect(). This certificate will be used to make the SSL connection.""" self.certfile = certfile def starttls(): """Raises an exception. You cannot do StartTLS inside of an ssl session. Calling starttls() will return an SMTPSSLException""" raise SMTPSSLException, "Cannot perform StartTLS within SSL session." And then my code: import ssmtplib conn = ssmtplib.SMTP_SSL('HOST') conn.login('USERNAME','PW') conn.ehlo() conn.sendmail('FROM_EMAIL', 'TO_EMAIL', "MESSAGE") conn.close() And got this error: /Users/Jake/Desktop/Beth's Program/ssmtplib.py:116: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead. sslobj = socket.ssl(self.sock, self.keyfile, self.certfile) Traceback (most recent call last): File "emailer.py", line 5, in conn = ssmtplib.SMTP_SSL('HOST') File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 79, in init smtplib.SMTP.init(self, host, port, local_hostname) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 239, in init (code, msg) = self.connect(host, port) File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 131, in connect self.sock = smtplib.SSLFakeSocket(self.sock, sslobj) AttributeError: 'module' object has no attribute 'SSLFakeSocket' Thank you!
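
    The traceback itself points at the problem: the recipe predates Python 2.6 and reaches into smtplib.SSLFakeSocket, a helper that no longer exists in the smtplib shipped with 2.6 and later (hence the AttributeError, and the DeprecationWarning about socket.ssl). Since Python 2.6 the standard library already provides smtplib.SMTP_SSL, so the whole helper module can usually be dropped. A minimal sketch, with the host, port, credentials and addresses as placeholders:

        import smtplib

        # SMTP over SSL on port 465, using the built-in SMTP_SSL class (Python 2.6+/3.x)
        conn = smtplib.SMTP_SSL("mail.example.edu", 465)
        try:
            conn.login("USERNAME", "PW")
            conn.sendmail("from@example.edu", "to@example.com",
                          "Subject: test\r\n\r\nThis is the message body.")
        finally:
            conn.quit()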

    Read the article

< Previous Page | 220 221 222 223 224 225 226 227 228  | Next Page >