Search Results

Search found 5874 results on 235 pages for 'idle mind'.


  • Trouble connecting to vsftpd on Ubuntu Server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsftpd and I have opened up ports 20 and 21 on my firewall. In my vsftpd configuration I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection refused" error. I have had a little more success with SSL disabled; however, the connection process will then time out after the LIST command (but it does accept my authentication). Here is my vsftpd configuration; the SSL settings are at the bottom:

      # Example config file /etc/vsftpd.conf
      #
      # The default compiled in settings are fairly paranoid. This sample file
      # loosens things up a bit, to make the ftp daemon more usable.
      # Please see vsftpd.conf.5 for all compiled in defaults.
      #
      # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
      # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
      # capabilities.
      #
      # Run standalone? vsftpd can run either from an inetd or as a standalone
      # daemon started from an initscript.
      listen=YES
      #
      # Run standalone with IPv6?
      # Like the listen parameter, except vsftpd will listen on an IPv6 socket
      # instead of an IPv4 one. This parameter and the listen parameter are mutually
      # exclusive.
      #listen_ipv6=YES
      #
      # Allow anonymous FTP? (Disabled by default)
      anonymous_enable=NO
      #
      # Uncomment this to allow local users to log in.
      local_enable=YES
      #
      # Uncomment this to enable any form of FTP write command.
      write_enable=YES
      #
      # Default umask for local users is 077. You may wish to change this to 022,
      # if your users expect that (022 is used by most other ftpd's)
      #local_umask=022
      #
      # Uncomment this to allow the anonymous FTP user to upload files. This only
      # has an effect if the above global write enable is activated. Also, you will
      # obviously need to create a directory writable by the FTP user.
      #anon_upload_enable=YES
      #
      # Uncomment this if you want the anonymous FTP user to be able to create
      # new directories.
      #anon_mkdir_write_enable=YES
      #
      # Activate directory messages - messages given to remote users when they
      # go into a certain directory.
      dirmessage_enable=YES
      #
      # If enabled, vsftpd will display directory listings with the time
      # in your local time zone. The default is to display GMT. The
      # times returned by the MDTM FTP command are also affected by this
      # option.
      use_localtime=YES
      #
      # Activate logging of uploads/downloads.
      xferlog_enable=YES
      #
      # Make sure PORT transfer connections originate from port 20 (ftp-data).
      connect_from_port_20=YES
      #
      # If you want, you can arrange for uploaded anonymous files to be owned by
      # a different user. Note! Using "root" for uploaded files is not
      # recommended!
      #chown_uploads=YES
      #chown_username=whoever
      #
      # You may override where the log file goes if you like. The default is shown
      # below.
      #xferlog_file=/var/log/vsftpd.log
      #
      # If you want, you can have your log file in standard ftpd xferlog format.
      # Note that the default log file location is /var/log/xferlog in this case.
      #xferlog_std_format=YES
      #
      # You may change the default value for timing out an idle session.
      #idle_session_timeout=600
      #
      # You may change the default value for timing out a data connection.
      #data_connection_timeout=120
      #
      # It is recommended that you define on your system a unique user which the
      # ftp server can use as a totally isolated and unprivileged user.
      #nopriv_user=ftpsecure
      #
      # Enable this and the server will recognise asynchronous ABOR requests. Not
      # recommended for security (the code is non-trivial). Not enabling it,
      # however, may confuse older FTP clients.
      #async_abor_enable=YES
      #
      # By default the server will pretend to allow ASCII mode but in fact ignore
      # the request. Turn on the below options to have the server actually do ASCII
      # mangling on files when in ASCII mode.
      # Beware that on some FTP servers, ASCII support allows a denial of service
      # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
      # predicted this attack and has always been safe, reporting the size of the
      # raw file.
      # ASCII mangling is a horrible feature of the protocol.
      #ascii_upload_enable=YES
      #ascii_download_enable=YES
      #
      # You may fully customise the login banner string:
      #ftpd_banner=Welcome to blah FTP service.
      #
      # You may specify a file of disallowed anonymous e-mail addresses. Apparently
      # useful for combatting certain DoS attacks.
      #deny_email_enable=YES
      # (default follows)
      #banned_email_file=/etc/vsftpd.banned_emails
      #
      # You may restrict local users to their home directories. See the FAQ for
      # the possible risks in this before using chroot_local_user or
      # chroot_list_enable below.
      #chroot_local_user=YES
      #
      # You may specify an explicit list of local users to chroot() to their home
      # directory. If chroot_local_user is YES, then this list becomes a list of
      # users to NOT chroot().
      #chroot_local_user=YES
      #chroot_list_enable=YES
      # (default follows)
      #chroot_list_file=/etc/vsftpd.chroot_list
      #
      # You may activate the "-R" option to the builtin ls. This is disabled by
      # default to avoid remote users being able to cause excessive I/O on large
      # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
      # the presence of the "-R" option, so there is a strong case for enabling it.
      #ls_recurse_enable=YES
      #
      # Debian customization
      #
      # Some of vsftpd's settings don't fit the Debian filesystem layout by
      # default. These settings are more Debian-friendly.
      #
      # This option should be the name of a directory which is empty. Also, the
      # directory should not be writable by the ftp user. This directory is used
      # as a secure chroot() jail at times vsftpd does not require filesystem
      # access.
      secure_chroot_dir=/var/run/vsftpd/empty
      #
      # This string is the name of the PAM service vsftpd will use.
      pam_service_name=vsftpd
      #
      # This option specifies the location of the RSA certificate to use for SSL
      # encrypted connections.
      rsa_cert_file=/etc/ssl/private/vsftpd.pem
      # SSL
      ssl_enable=YES
      allow_anon_ssl=NO
      force_local_data_ssl=YES
      force_local_logins_ssl=YES
      ssl_tlsv1=YES
      ssl_sslv2=YES
      ssl_sslv3=YES

    Thanks!
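
    One avenue worth checking: once the control channel is encrypted, a firewall can no longer watch the PASV exchange and open data ports on the fly, so passive-mode ports have to be pinned in vsftpd and opened explicitly. A minimal sketch, assuming a ufw firewall and an arbitrary port range (pick your own and mirror it in the firewall):

      # vsftpd.conf -- pin the passive data ports so they can be firewalled
      pasv_enable=YES
      pasv_min_port=64000
      pasv_max_port=64100

      # open the matching range on the server
      sudo ufw allow 64000:64100/tcp

    A "Connection refused" on port 21 itself, on the other hand, usually means vsftpd is not listening at all; it is worth confirming the daemon actually restarted cleanly after the SSL edits (e.g. sudo netstat -tlnp | grep vsftpd) before chasing anything else.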


  • Connecting tomcat6 to apache2

    - by StudentKen
    Disclaimer: not a server admin. I've been scratching my head over this for weeks now (not consistently, mind you, as that would be maddening). I've been trying to connect my Apache2 server to my Tomcat server, to the point where, if someone encounters *.jsp or any servlet while navigating my web directory, the request is handed over to Tomcat. I have both Apache2 (port 9099) and Tomcat6 (port 9089) running on Debian Lenny on the same box. Currently, mod_jk is enabled with mod_jk.conf in $apacheHOME/mods-enabled/ with this content:

      # Where to find workers.properties
      JkWorkersFile /etc/apache2/workers.properties
      # Where to put jk shared memory
      JkShmFile /var/log/at_jk/mod_jk.shm
      # Where to put jk logs
      JkLogFile /var/log/at_jk/mod_jk.log
      # Set the jk log level [debug/error/info]
      JkLogLevel info
      # Select the timestamp log format
      JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
      # Send servlet for context /examples to worker named worker1
      JkMount /*/servlet/* worker1
      # Send JSPs for context /examples to worker named worker1
      JkMount /*.jsp worker1

    My workers.properties, located in $apacheHOME/, contains:

      workers.tomcat_home=/var/lib/tomcat6
      workers.java_home=/usr/lib/jdk1.6.0_23/db/
      worker.list=worker1
      ps=/
      worker.worker1.port=9071
      worker.worker1.host=localhost
      worker.worker1.type=ajp13

    My web.xml in $tomcatHOME/conf has the following servlets enabled:

      <servlet>
          <servlet-name>default</servlet-name>
          <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
          <init-param>
              <param-name>debug</param-name>
              <param-value>0</param-value>
          </init-param>
          <init-param>
              <param-name>listings</param-name>
              <param-value>false</param-value>
          </init-param>
          <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet>
          <servlet-name>jsp</servlet-name>
          <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
          <init-param>
              <param-name>fork</param-name>
              <param-value>false</param-value>
          </init-param>
          <init-param>
              <param-name>xpoweredBy</param-name>
              <param-value>false</param-value>
          </init-param>
          <load-on-startup>3</load-on-startup>
      </servlet>
      <servlet-mapping>
          <servlet-name>jsp</servlet-name>
          <url-pattern>*.jsp</url-pattern>
      </servlet-mapping>
      <session-config>
          <session-timeout>30</session-timeout>
      </session-config>

    From what I can tell there's no funny business, as the apache2, tomcat and mod_jk logs all show green; yet whenever I navigate to a JSP, it simply displays the JavaScript. I'm unsure what the problem is exactly, despite poring over the logs and documentation for aid. I'm quite a greenhorn in the servlet world.
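
    One thing worth double-checking (a sketch, not a diagnosis): mod_jk speaks AJP, so worker.worker1.port (9071 above) must match an enabled AJP connector in Tomcat's server.xml, not Tomcat's HTTP port. In a stock Tomcat 6 server.xml that connector looks roughly like this; the 9071 simply mirrors the workers.properties above (the stock default is 8009):

      <!-- $tomcatHOME/conf/server.xml -->
      <Connector port="9071" protocol="AJP/1.3" redirectPort="8443" />

    A quick netstat -lnt | grep 9071 would confirm Tomcat is actually listening there. Also, if the site is served from a <VirtualHost>, JkMount directives from the global context don't apply inside it unless JkMountCopy is enabled; raw JSP source being served is the classic symptom of the JkMount patterns simply never matching the request.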


  • How can I get FreeNAS to respond to libvirt shutdown requests

    - by ptomli
    I have a KVM VM of FreeNAS 0.7.1 Shere (revision 5127) running on Ubuntu Server 10.04, and I'm unable to convince the VM to shut down from the host:

      virsh shutdown freenas

    I would expect this to send some ACPI(?) trigger to the VM, and FreeNAS to then do what it's told. I'm not a FreeBSD fundi, so I don't really know what packages or processes to poke to get this working. I have tried to convince powerd to run, but the VM's CPUs don't have the required freq entry.

    Sysctl HW

      $ sysctl hw
      hw.machine: amd64
      hw.model: QEMU Virtual CPU version 0.12.3
      hw.ncpu: 1
      hw.byteorder: 1234
      hw.physmem: 523116544
      hw.usermem: 463806464
      hw.pagesize: 4096
      hw.floatingpoint: 1
      hw.machine_arch: amd64
      hw.realmem: 536850432
      hw.aac.iosize_max: 65536
      hw.amr.force_sg32: 0
      hw.an.an_cache_iponly: 1
      hw.an.an_cache_mcastonly: 0
      hw.an.an_cache_mode: dbm
      hw.an.an_dump: off
      hw.ata.to: 15
      hw.ata.wc: 1
      hw.ata.atapi_dma: 1
      hw.ata.ata_dma_check_80pin: 1
      hw.ata.ata_dma: 1
      hw.ath.txbuf: 200
      hw.ath.rxbuf: 40
      hw.ath.regdomain: 0
      hw.ath.countrycode: 0
      hw.ath.xchanmode: 1
      hw.ath.outdoor: 1
      hw.ath.calibrate: 30
      hw.ath.hal.swba_backoff: 0
      hw.ath.hal.sw_brt: 10
      hw.ath.hal.dma_brt: 2
      hw.bce.msi_enable: 1
      hw.bce.tso_enable: 1
      hw.bge.allow_asf: 0
      hw.cardbus.cis_debug: 0
      hw.cardbus.debug: 0
      hw.cs.recv_delay: 570
      hw.cs.ignore_checksum_failure: 0
      hw.cs.debug: 0
      hw.cxgb.snd_queue_len: 50
      hw.cxgb.use_16k_clusters: 1
      hw.cxgb.force_fw_update: 0
      hw.cxgb.singleq: 0
      hw.cxgb.ofld_disable: 0
      hw.cxgb.msi_allowed: 2
      hw.cxgb.txq_mr_size: 1024
      hw.cxgb.sleep_ticks: 1
      hw.cxgb.tx_coalesce: 0
      hw.firewire.hold_count: 3
      hw.firewire.try_bmr: 1
      hw.firewire.fwmem.speed: 2
      hw.firewire.fwmem.eui64_lo: 0
      hw.firewire.fwmem.eui64_hi: 0
      hw.firewire.phydma_enable: 1
      hw.firewire.nocyclemaster: 0
      hw.firewire.fwe.rx_queue_len: 128
      hw.firewire.fwe.tx_speed: 2
      hw.firewire.fwe.stream_ch: 1
      hw.firewire.fwip.rx_queue_len: 128
      hw.firewire.sbp.tags: 0
      hw.firewire.sbp.use_doorbell: 0
      hw.firewire.sbp.scan_delay: 500
      hw.firewire.sbp.login_delay: 1000
      hw.firewire.sbp.exclusive_login: 1
      hw.firewire.sbp.max_speed: -1
      hw.firewire.sbp.auto_login: 1
      hw.mfi.max_cmds: 128
      hw.mfi.event_class: 0
      hw.mfi.event_locale: 65535
      hw.pccard.cis_debug: 0
      hw.pccard.debug: 0
      hw.cbb.debug: 0
      hw.cbb.start_32_io: 4096
      hw.cbb.start_16_io: 256
      hw.cbb.start_memory: 2281701376
      hw.pcic.pd6722_vsense: 1
      hw.pcic.intr_mask: 57016
      hw.pci.honor_msi_blacklist: 1
      hw.pci.enable_msix: 1
      hw.pci.enable_msi: 1
      hw.pci.do_power_resume: 1
      hw.pci.do_power_nodriver: 0
      hw.pci.enable_io_modes: 1
      hw.pci.host_mem_start: 2147483648
      hw.syscons.kbd_debug: 1
      hw.syscons.kbd_reboot: 1
      hw.syscons.bell: 1
      hw.syscons.saver.keybonly: 1
      hw.syscons.sc_no_suspend_vtswitch: 0
      hw.usb.uplcom.interval: 100
      hw.usb.uvscom.interval: 100
      hw.usb.uvscom.opktsize: 8
      hw.wi.debug: 0
      hw.wi.txerate: 0
      hw.xe.debug: 0
      hw.intr_storm_threshold: 1000
      hw.availpages: 127714
      hw.bus.devctl_disable: 0
      hw.ste.rxsyncs: 0
      hw.busdma.total_bpages: 32
      hw.busdma.zone0.total_bpages: 32
      hw.busdma.zone0.free_bpages: 32
      hw.busdma.zone0.reserved_bpages: 0
      hw.busdma.zone0.active_bpages: 0
      hw.busdma.zone0.total_bounced: 0
      hw.busdma.zone0.total_deferred: 0
      hw.busdma.zone0.lowaddr: 0xffffffff
      hw.busdma.zone0.alignment: 2
      hw.busdma.zone0.boundary: 65536
      hw.clockrate: 2808
      hw.instruction_sse: 1
      hw.apic.enable_extint: 0
      hw.kbd.keymap_restrict_change: 0
      hw.acpi.supported_sleep_state: S3 S4 S5
      hw.acpi.power_button_state: S5
      hw.acpi.sleep_button_state: S3
      hw.acpi.lid_switch_state: NONE
      hw.acpi.standby_state: S1
      hw.acpi.suspend_state: S3
      hw.acpi.sleep_delay: 1
      hw.acpi.s4bios: 0
      hw.acpi.verbose: 0
      hw.acpi.disable_on_reboot: 0
      hw.acpi.handle_reboot: 0
      hw.acpi.cpu.cx_lowest: C1

    Processes

      $ ps ax
        PID  TT  STAT    TIME COMMAND
          0  ??  DLs  0:00.00 [swapper]
          1  ??  ILs  0:00.00 /sbin/init --
          2  ??  DL   0:00.08 [g_event]
          3  ??  DL   0:00.29 [g_up]
          4  ??  DL   0:00.33 [g_down]
          5  ??  DL   0:00.00 [crypto]
          6  ??  DL   0:00.00 [crypto returns]
          7  ??  DL   0:00.00 [xpt_thrd]
          8  ??  DL   0:00.00 [kqueue taskq]
          9  ??  DL   0:00.00 [acpi_task_0]
         10  ??  RL  34:12.42 [idle: cpu0]
         11  ??  WL   0:01.13 [swi4: clock sio]
         12  ??  WL   0:00.00 [swi3: vm]
         13  ??  WL   0:00.00 [swi1: net]
         14  ??  DL   0:00.04 [yarrow]
         15  ??  WL   0:00.00 [swi6: task queue]
         16  ??  WL   0:00.00 [swi2: cambio]
         17  ??  DL   0:00.00 [acpi_task_1]
         18  ??  DL   0:00.00 [acpi_task_2]
         19  ??  WL   0:00.00 [swi5: +]
         20  ??  DL   0:00.01 [thread taskq]
         21  ??  WL   0:00.00 [swi6: Giant taskq]
         22  ??  WL   0:00.00 [irq9: acpi0]
         23  ??  WL   0:00.09 [irq14: ata0]
         24  ??  WL   0:00.11 [irq15: ata1]
         25  ??  WL   0:00.57 [irq11: ed0 uhci0]
         26  ??  DL   0:00.00 [usb0]
         27  ??  DL   0:00.00 [usbtask-hc]
         28  ??  DL   0:00.00 [usbtask-dr]
         29  ??  WL   0:00.01 [irq1: atkbd0]
         30  ??  WL   0:00.00 [swi0: sio]
         31  ??  DL   0:00.00 [sctp_iterator]
         32  ??  DL   0:00.00 [pagedaemon]
         33  ??  DL   0:00.00 [vmdaemon]
         34  ??  DL   0:00.00 [idlepoll]
         35  ??  DL   0:00.00 [pagezero]
         36  ??  DL   0:00.01 [bufdaemon]
         37  ??  DL   0:00.00 [vnlru]
         38  ??  DL   0:00.14 [syncer]
         39  ??  DL   0:00.01 [softdepflush]
       1221  ??  Is   0:00.00 /sbin/devd
       1289  ??  Is   0:00.01 /usr/sbin/syslogd -ss -f /var/etc/syslog.conf
       1608  ??  Is   0:00.00 /usr/sbin/cron -s
       1692  ??  Ss   0:00.03 /usr/local/sbin/mDNSResponderPosix -b -f /var/etc/mdn
       1730  ??  S    0:00.43 /usr/local/sbin/lighttpd -f /var/etc/lighttpd.conf -m
       1882  ??  DL   0:00.00 [system_taskq]
       1883  ??  DL   0:00.00 [arc_reclaim_thread]
       4139  ??  S    0:00.03 /usr/local/bin/php /usr/local/www/exec.php
       4144  ??  S    0:00.00 sh -c ps ax
       4145  ??  R    0:00.00 ps ax
       1816  v0  Is   0:00.01 login [pam] (login)
       1818  v0  I+   0:00.03 -tcsh (csh)
       1817  v1  Is+  0:00.00 /usr/libexec/getty Pc ttyv1
       1402 con- I    0:00.00 /usr/local/sbin/afpd -F /var/etc/afpd.conf
       1404 con- S    0:00.00 /usr/local/sbin/cnid_metad
       1682 con- I    0:02.78 /usr/local/sbin/mt-daapd -m -c /var/etc/mt-daapd.conf
       1789 con- S    0:00.18 /usr/local/bin/fuppesd --config-dir /var/etc --config

    Libvirt snippet

      <domain type='kvm'>
        <name>freenas</name>
        <uuid>********-****-****-****-************</uuid>
        <memory>524288</memory>
        <currentMemory>524288</currentMemory>
        <vcpu>1</vcpu>
        <os>
          <type arch='x86_64' machine='pc-0.12'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <clock offset='utc'/>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/bin/kvm</emulator>

    Is this possible? Ideally I'd like to be able to stop the host without having to manually deal with shutting down the VM.
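
    For what it's worth, FreeBSD maps the ACPI power button to a clean poweroff by default (the hw.acpi.power_button_state: S5 above is exactly that), so virsh shutdown should work once the button event actually reaches the guest. As a hedged host-side fallback while that's being sorted out, wrapping the shutdown in a script that falls back to SSH is a common workaround (assumes root SSH access to the guest; "freenas" doubles as domain name and hostname here):

      #!/bin/sh
      # Ask QEMU to press the virtual ACPI power button first.
      virsh shutdown freenas
      sleep 60
      # If the guest ignored it, shut FreeBSD down over SSH instead.
      if virsh domstate freenas | grep -q running; then
          ssh root@freenas 'shutdown -p now'
      fi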


  • Can't Get Virtual Users Set Up in vsftpd - Tried Everything

    - by N.T.
    I have Ubuntu 11.10 with vsftpd installed and working, but I cannot get virtual users set up at all. vsftpd will allow the main Ubuntu owner account to log in, but nothing else. I've followed several tutorials on adding virtual users, but nothing works. I just need to add 2 virtual users and have them be able to upload files to the vsftpd Ubuntu computer from other computers on my LAN. Everywhere I've looked, people just point toward tutorials on adding virtual users, but that just is NOT working. I've been struggling with this for over a week now - PLEASE help. Thanks. I'll even give a donation if someone can figure this out. Here is the vsftpd.conf file I am using; I copied the original and make a new one every time I try a tutorial, and so far none have worked. (I hope this helps.)

      # Example config file /etc/vsftpd.conf
      #
      # The default compiled in settings are fairly paranoid. This sample file
      # loosens things up a bit, to make the ftp daemon more usable.
      # Please see vsftpd.conf.5 for all compiled in defaults.
      #
      # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
      # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
      # capabilities.
      #
      # Run standalone? vsftpd can run either from an inetd or as a standalone
      # daemon started from an initscript.
      listen=YES
      #
      # Run standalone with IPv6?
      # Like the listen parameter, except vsftpd will listen on an IPv6 socket
      # instead of an IPv4 one. This parameter and the listen parameter are mutually
      # exclusive.
      #listen_ipv6=YES
      #
      # Allow anonymous FTP? (Disabled by default)
      anonymous_enable=YES
      #
      # Uncomment this to allow local users to log in.
      local_enable=YES
      #
      # Uncomment this to enable any form of FTP write command.
      write_enable=YES
      #
      # Default umask for local users is 077. You may wish to change this to 022,
      # if your users expect that (022 is used by most other ftpd's)
      local_umask=022
      #
      # Uncomment this to allow the anonymous FTP user to upload files. This only
      # has an effect if the above global write enable is activated. Also, you will
      # obviously need to create a directory writable by the FTP user.
      #anon_upload_enable=YES
      #
      # Uncomment this if you want the anonymous FTP user to be able to create
      # new directories.
      anon_mkdir_write_enable=YES
      #
      # Activate directory messages - messages given to remote users when they
      # go into a certain directory.
      dirmessage_enable=YES
      #
      # If enabled, vsftpd will display directory listings with the time
      # in your local time zone. The default is to display GMT. The
      # times returned by the MDTM FTP command are also affected by this
      # option.
      use_localtime=YES
      #
      # Activate logging of uploads/downloads.
      xferlog_enable=YES
      #
      # Make sure PORT transfer connections originate from port 20 (ftp-data).
      connect_from_port_20=YES
      #
      # If you want, you can arrange for uploaded anonymous files to be owned by
      # a different user. Note! Using "root" for uploaded files is not
      # recommended!
      #chown_uploads=YES
      #chown_username=whoever
      #
      # You may override where the log file goes if you like. The default is shown
      # below.
      #xferlog_file=/var/log/vsftpd.log
      #
      # If you want, you can have your log file in standard ftpd xferlog format.
      # Note that the default log file location is /var/log/xferlog in this case.
      xferlog_std_format=YES
      #
      # You may change the default value for timing out an idle session.
      #idle_session_timeout=600
      #
      # You may change the default value for timing out a data connection.
      #data_connection_timeout=120
      #
      # It is recommended that you define on your system a unique user which the
      # ftp server can use as a totally isolated and unprivileged user.
      #nopriv_user=ftpsecure
      #
      # Enable this and the server will recognise asynchronous ABOR requests. Not
      # recommended for security (the code is non-trivial). Not enabling it,
      # however, may confuse older FTP clients.
      #async_abor_enable=YES
      #
      # By default the server will pretend to allow ASCII mode but in fact ignore
      # the request. Turn on the below options to have the server actually do ASCII
      # mangling on files when in ASCII mode.
      # Beware that on some FTP servers, ASCII support allows a denial of service
      # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
      # predicted this attack and has always been safe, reporting the size of the
      # raw file.
      # ASCII mangling is a horrible feature of the protocol.
      #ascii_upload_enable=YES
      #ascii_download_enable=YES
      #
      # You may fully customise the login banner string:
      ftpd_banner=Welcome to Sage FTP service.
      #
      # You may specify a file of disallowed anonymous e-mail addresses. Apparently
      # useful for combatting certain DoS attacks.
      #deny_email_enable=YES
      # (default follows)
      #banned_email_file=/etc/vsftpd.banned_emails
      #
      # You may restrict local users to their home directories. See the FAQ for
      # the possible risks in this before using chroot_local_user or
      # chroot_list_enable below.
      chroot_local_user=YES
      #
      # You may specify an explicit list of local users to chroot() to their home
      # directory. If chroot_local_user is YES, then this list becomes a list of
      # users to NOT chroot().
      #chroot_local_user=YES
      #chroot_list_enable=YES
      # (default follows)
      #chroot_list_file=/etc/vsftpd.chroot_list
      #
      # You may activate the "-R" option to the builtin ls. This is disabled by
      # default to avoid remote users being able to cause excessive I/O on large
      # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
      # the presence of the "-R" option, so there is a strong case for enabling it.
      #ls_recurse_enable=YES
      #
      # Debian customization
      #
      # Some of vsftpd's settings don't fit the Debian filesystem layout by
      # default. These settings are more Debian-friendly.
      #
      # This option should be the name of a directory which is empty. Also, the
      # directory should not be writable by the ftp user. This directory is used
      # as a secure chroot() jail at times vsftpd does not require filesystem
      # access.
      secure_chroot_dir=/var/run/vsftpd/empty
      #
      # This string is the name of the PAM service vsftpd will use.
      pam_service_name=vsftpd
      local_root=/media/FilesDrive
      #
      # This option specifies the location of the RSA certificate to use for SSL
      # encrypted connections.
      rsa_cert_file=/etc/ssl/private/vsftpd.pem
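
    For reference, the pam_userdb route is the one most Ubuntu tutorials are abbreviating; a minimal sketch, with user names, paths and passwords as placeholders (db_load comes from the db-util package):

      # 1. Build a Berkeley DB of name/password pairs (alternating lines).
      printf 'alice\nsecret1\nbob\nsecret2\n' > /etc/vsftpd-virtual.txt
      db_load -T -t hash -f /etc/vsftpd-virtual.txt /etc/vsftpd-virtual.db
      chmod 600 /etc/vsftpd-virtual.*

      # 2. Make the vsftpd PAM service use that database (no .db suffix!).
      #    /etc/pam.d/vsftpd should contain only:
      #      auth    required pam_userdb.so db=/etc/vsftpd-virtual
      #      account required pam_userdb.so db=/etc/vsftpd-virtual

      # 3. vsftpd.conf: map every virtual user onto one real account.
      #      guest_enable=YES
      #      guest_username=ftp
      #      virtual_use_local_privs=YES
      #      local_root=/media/FilesDrive

    The usual trap is step 2: the stock /etc/pam.d/vsftpd authenticates against local system accounts, which would explain why only the main Ubuntu account can log in. Restart vsftpd after each change.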


  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

      iostat (Oct 6, 2013)

      tps    rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
      0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
      0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
      16.00  0.00      156.00    9.75      21.89     288.12    36.00    57.60
      5.50   0.00      44.00     8.00      48.79     2194.18   181.82   100.00
      2.00   0.00      16.00     8.00      46.49     3397.00   500.00   100.00
      4.50   0.00      40.00     8.89      43.73     5581.78   222.22   100.00
      14.50  0.00      148.00    10.21     13.76     5909.24   68.97    100.00
      1.50   0.00      12.00     8.00      8.57      7150.67   666.67   100.00
      0.50   0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
      2.00   0.00      16.00     8.00      5.27      11001.00  500.00   100.00
      0.50   0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
      34.00  0.00      1324.00   9.88      1.32      137.84    4.45     59.60
      0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
      22.00  44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following:

      Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux

    The machine is a dedicated server that hosts an online game, without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive ingame lag, which we would like to address as soon as possible.
    iostat:

      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                 1.77   0.01     1.05     1.59    0.00  95.58

      Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
      sdb      13.16  25.42       135.12      504701011  2682640656
      sda      1.52   0.74        20.63       14644533   409684488

    Uptime is:

      19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Harddisk controller:

      01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Harddisks:

      Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
      Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:

      Filesystem  1K-blocks  Used      Available  Use%  Mounted on
      /dev/sdb1   480191156  30715200  425083668    7%  /home
      /dev/sda2     7692908    437436    6864692    6%  /
      /dev/sda5    15377820   1398916   13197748   10%  /usr
      /dev/sda6    39159724  19158340   18012140   52%  /var

    Some more data samples, generated with iostat -dx sdb 1 (Oct 11, 2013):

      Device:  rrqm/s  wrqm/s  r/s   w/s     rsec/s  wsec/s   avgrq-sz  avgqu-sz  await    svctm    %util
      sdb      0.00    15.00   0.00  70.00   0.00    656.00   9.37      4.50      1.83     4.80     33.60
      sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      12.00     836.00   500.00   100.00
      sdb      0.00    0.00    0.00  3.00    0.00    32.00    10.67     9.96      1990.67  333.33   100.00
      sdb      0.00    0.00    0.00  4.00    0.00    40.00    10.00     6.96      3075.00  250.00   100.00
      sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      4.00      0.00     0.00     100.00
      sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      2.62      4648.00  500.00   100.00
      sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      2.00      0.00     0.00     100.00
      sdb      0.00    0.00    0.00  1.00    0.00    16.00    16.00     1.69      7024.00  1000.00  100.00
      sdb      0.00    74.00   0.00  124.00  0.00    1584.00  12.77     1.09      67.94    6.94     86.00

    Characteristic charts generated with rrdtool can be found here:

      iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/
      iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test whether the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers:

      echo 3 > /proc/sys/vm/drop_caches

    Directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags at short, unpredictable intervals.

    Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the RAID controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris.

    Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?
    @Andy Shinn requested smartctl data, here it is:

    smartctl -a -d megaraid,2 /dev/sdb yields:

      smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      Device: SEAGATE ST3500620SS Version: MS05
      Serial number:
      Device type: disk
      Transport protocol: SAS
      Local Time is: Mon Oct 14 20:37:13 2013 CEST
      Device supports SMART and is Enabled
      Temperature Warning Disabled or Not Supported
      SMART Health Status: OK

      Current Drive Temperature: 20 C
      Drive Trip Temperature: 68 C
      Elements in grown defect list: 0

      Vendor (Seagate) cache information
        Blocks sent to initiator = 1236631092
        Blocks received from initiator = 1097862364
        Blocks read from cache and sent to initiator = 1383620256
        Number of read and write commands whose size <= segment size = 531295338
        Number of read and write commands whose size > segment size = 51986460

      Vendor (Seagate/Hitachi) factory information
        number of hours powered up = 36556.93
        number of minutes until next internal SMART test = 32

      Error counter log:
                 Errors Corrected by          Total      Correction   Gigabytes     Total
                 ECC       rereads/           errors     algorithm    processed     uncorrected
                 fast | delayed   rewrites    corrected  invocations  [10^9 bytes]  errors
      read:    509271032    47          0     509271079    509271079     20981.423  0
      write:           0     0          0             0            0      5022.039  0
      verify: 1870931090   196          0    1870931286   1870931286    100558.708  0

      Non-medium error count: 0

      SMART Self-test log
      Num  Test Description   Status     segment number  LifeTime (hours)  LBA_first_err [SK ASC ASQ]
      # 1  Background short   Completed  16              36538             - [- - -]
      # 2  Background short   Completed  16              36514             - [- - -]
      # 3  Background short   Completed  16              36490             - [- - -]
      # 4  Background short   Completed  16              36466             - [- - -]
      # 5  Background short   Completed  16              36442             - [- - -]
      # 6  Background long    Completed  16              36420             - [- - -]
      # 7  Background short   Completed  16              36394             - [- - -]
      # 8  Background short   Completed  16              36370             - [- - -]
      # 9  Background long    Completed  16              36364             - [- - -]
      #10  Background short   Completed  16              36361             - [- - -]
      #11  Background long    Completed  16              2                 - [- - -]
      #12  Background short   Completed  16              0                 - [- - -]

      Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    smartctl -a -d megaraid,3 /dev/sdb yields:

      smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
      Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

      Device: SEAGATE ST3500620SS Version: MS05
      Serial number:
      Device type: disk
      Transport protocol: SAS
      Local Time is: Mon Oct 14 20:37:26 2013 CEST
      Device supports SMART and is Enabled
      Temperature Warning Disabled or Not Supported
      SMART Health Status: OK

      Current Drive Temperature: 19 C
      Drive Trip Temperature: 68 C
      Elements in grown defect list: 0

      Vendor (Seagate) cache information
        Blocks sent to initiator = 288745640
        Blocks received from initiator = 1097848399
        Blocks read from cache and sent to initiator = 1304149705
        Number of read and write commands whose size <= segment size = 527414694
        Number of read and write commands whose size > segment size = 51986460

      Vendor (Seagate/Hitachi) factory information
        number of hours powered up = 36596.83
        number of minutes until next internal SMART test = 28

      Error counter log:
                 Errors Corrected by          Total      Correction   Gigabytes     Total
                 ECC       rereads/           errors     algorithm    processed     uncorrected
                 fast | delayed   rewrites    corrected  invocations  [10^9 bytes]  errors
      read:    610862490    44          0     610862534    610862534     20470.133  0
      write:           0     0          0             0            0      5022.480  0
      verify: 2861227413   203          0    2861227616   2861227616    100872.443  0

      Non-medium error count: 1

      SMART Self-test log
      Num  Test Description   Status     segment number  LifeTime (hours)  LBA_first_err [SK ASC ASQ]
      # 1  Background short   Completed  16              36580             - [- - -]
      # 2  Background short   Completed  16              36556             - [- - -]
      # 3  Background short   Completed  16              36532             - [- - -]
      # 4  Background short   Completed  16              36508             - [- - -]
      # 5  Background short   Completed  16              36484             - [- - -]
      # 6  Background long    Completed  16              36462             - [- - -]
      # 7  Background short   Completed  16              36436             - [- - -]
      # 8  Background short   Completed  16              36412             - [- - -]
      # 9  Background long    Completed  16              36404             - [- - -]
      #10  Background short   Completed  16              36401             - [- - -]
      #11  Background long    Completed  16              2                 - [- - -]
      #12  Background short   Completed  16              0                 - [- - -]

      Long (extended) Self Test duration: 6798 seconds [113.3 minutes]
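
    One hedged avenue the numbers suggest: tiny writes (8-16 sectors) sitting at 100% util with multi-second await looks less like a saturated disk and more like a controller whose write-back cache periodically turns itself off, e.g. during a BBU learn cycle on the MegaRAID. Assuming LSI's MegaCli tool is (or can be) installed - the path and binary name vary by package:

      # Battery state; look for an active learn cycle or a degraded battery.
      /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL

      # Cache policy per logical drive; "WriteThrough" on the Barracuda
      # array (instead of "WriteBack") during the spikes would fit.
      /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -aALL

    That would also explain why flushing the page cache made everything feel catastrophically slow: every small journal write would suddenly pay the full cost of the spindles.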


  • Windows Server 2012 Migration (DNS/AD DS Standard Eval to Essentials OEM) P2V -> Do I need a Secondary Domain Controller during migration?

    - by Aubrey Robertson
    This is my first post on this exchange (although not my first on Stack Exchange), so please have patience. I am a 3rd-year student intern, and I have been tasked with virtualizing the server systems at the company I work for. I have come a long way, and I am almost ready to install the VM server in migration mode. Here is some information:

    Source server: Windows Server 2012 Standard Evaluation

      DNS Server (local only)
      Active Directory Domain Services
      File and Storage stuff
      A few other server roles

    Destination server: Windows Server 2012 Essentials OEM (Hyper-V client)

      Running under a temporary Hyper-V host (will migrate the Hyper-V host back to the old machine after the original server is virtualized as a client).
      Sitting currently at the "Select Installation Mode" screen.

    I have been following the guides on Microsoft TechNet, and today I spent most of the day getting rid of issues in the Best Practices Analyzer on the source machine. I have 3 remaining issues (which are all related):

      ERROR: DNS: DNS servers on Ethernet (adapter name) should include the loopback address, but not as the first entry (the flavour text indicates that, during migration, the DNS server may not be found).
      WARNING: All domains should have at least two domain controllers for redundancy.
      WARNING: DNS: Ethernet should be configured to use both a preferred and an alternate DNS server.

    All of these issues can be resolved by deploying a secondary domain controller, but I have never done that before (see my concerns below). The main issue I am concerned with for installing in migration mode is the FIRST one (the error). If I try to set up the new server deployment and the adapter's domain controller is listed as localhost, the installation may fail (at least, this is what the Microsoft documentation suggests). But I do not have another IP address to enter here, as I have no other local domain controllers. So I did the first obvious thing that came to my mind and tried to use Google DNS servers as my alternates. That did not work, because they couldn't recognize other computers in the "forest".

    Now, I'm no expert when it comes to DNS, so please forgive my ignorance. This DNS server is concerned only with Active Directory stuff for the local network. If I go ahead with migration and it fails, then I suppose I will just have to go ahead and install a secondary DNS server. The problem I have here is that I am limited by the number of Windows Server keys I have available (I have 2); however, I do have access to a Linux box running Debian Wheezy that I set up two weeks ago as a Mantis server. I could install Windows Server 2012 as a secondary DNS (I think) in a VM and use that, but then it seems like I will be wasting time, and probably the Windows key too, and if there's another way to do it with Linux, that would be much better.

    Even better still: do I even need a secondary DNS server for migration at all? The hints said that during migration the original machine "might" not be found. Thank you for your time and consideration.
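
    On the ERROR item specifically: it can usually be satisfied without a second DC, because the complaint is only about ordering - the DC's own static IP should come first and loopback second. A hedged PowerShell sketch for Server 2012 (the interface alias and addresses are placeholders for your own):

      # Find the right interface alias first:
      Get-DnsClientServerAddress

      # DC's own static IP first, loopback second (never first):
      Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
          -ServerAddresses ("192.168.1.10","127.0.0.1")

    Having two entries also addresses the preferred/alternate WARNING; only the two-DC redundancy item genuinely needs a second domain controller, and that one is a warning, not a migration blocker.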


  • Neighbour table overflow on Linux hosts related to bridging and ipv6

    - by tim
    Note: I already have a workaround for this problem (as described below), so this is only a "want-to-know" question. I have a production setup with around 50 hosts, including blades running Xen 4 and EqualLogic boxes providing iSCSI. All Xen dom0s are almost plain Debian 5. The setup includes several bridges on every dom0 to support Xen bridged networking; in total there are between 5 and 12 bridges on each dom0, servicing one VLAN each. None of the hosts has routing enabled.

    At one point in time we moved one of the machines to new hardware, including a RAID controller, and so we installed an upstream 3.0.22/x86_64 kernel with Xen patches. All other machines run the Debian xen-dom0-kernel. Since then we have noticed, on all hosts in the setup, the following errors every ~2 minutes:

      [55888.881994] __ratelimit: 908 callbacks suppressed
      [55888.882221] Neighbour table overflow.
      [55888.882476] Neighbour table overflow.
      [55888.882732] Neighbour table overflow.
      [55888.883050] Neighbour table overflow.
      [55888.883307] Neighbour table overflow.
      [55888.883562] Neighbour table overflow.
      [55888.883859] Neighbour table overflow.
      [55888.884118] Neighbour table overflow.
      [55888.884373] Neighbour table overflow.
      [55888.884666] Neighbour table overflow.

    The ARP table (arp -n) never showed more than around 20 entries on any machine. We tried the obvious tweaks and raised the /proc/sys/net/ipv4/neigh/default/gc_thresh* values, finally to 16384 entries, but with no effect. Not even the interval of ~2 minutes changed, which led me to the conclusion that this is totally unrelated. tcpdump showed no uncommon IPv4 traffic on any interface. The only interesting finding from tcpdump were IPv6 packets bursting in like:

      14:33:13.137668 IP6 fe80::216:3eff:fe1d:9d01 > ff02::1:ff1d:9d01: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:9d01, length 24
      14:33:13.138061 IP6 fe80::216:3eff:fe1d:a8c1 > ff02::1:ff1d:a8c1: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:a8c1, length 24
      14:33:13.138619 IP6 fe80::216:3eff:fe1d:bf81 > ff02::1:ff1d:bf81: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:bf81, length 24
      14:33:13.138974 IP6 fe80::216:3eff:fe1d:eb41 > ff02::1:ff1d:eb41: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ff1d:eb41, length 24

    which placed the idea in my mind that the problem may be related to IPv6, since we have no IPv6 services in this setup. The only other hint was the coincidence of the host upgrade with the beginning of the problems. I powered down the host in question, and the errors were gone. Then I subsequently took down the bridges on the host, and when I took down (ifconfig down) one particular bridge:

      br-vlan2159 Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                  inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:120 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:5286 (5.1 KiB)  TX bytes:726 (726.0 B)

      eth0.2159   Link encap:Ethernet  HWaddr 00:26:b9:fb:16:2c
                  inet6 addr: fe80::226:b9ff:fefb:162c/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1801 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:126228 (123.2 KiB)  TX bytes:1464 (1.4 KiB)

      bridge name   bridge id           STP enabled   interfaces
      ...
      br-vlan2158   8000.0026b9fb162c   no            eth0.2158
      br-vlan2159   8000.0026b9fb162c   no            eth0.2159

    the errors went away again. As you can see, the bridge holds no IPv4 address, and its only member is eth0.2159, so no traffic should cross it. Taking down bridge and interface .2157 / .2158, which are identical in every respect apart from the VLAN they are connected to, had no effect. Then I disabled IPv6 on the entire host via sysctl net.ipv6.conf.all.disable_ipv6 and rebooted. After this, even with bridge br-vlan2159 enabled, no errors occur. Any ideas are welcome.
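
    One hedged explanation worth testing: the kernel keeps separate neighbour tables for IPv4 and IPv6, and /proc/sys/net/ipv4/neigh/... only tunes the former. Every bridge gets an IPv6 link-local address (visible in the ifconfig output above), and the MLD bursts fit that picture, so the overflow may have been purely in the IPv6 table - which arp -n never shows. The IPv6 knobs mirror the IPv4 ones:

      # Inspect the IPv6 neighbour cache (the part "arp -n" doesn't show)
      ip -6 neigh show

      # Raise the IPv6 thresholds (values illustrative)
      sysctl -w net.ipv6.neigh.default.gc_thresh1=1024
      sysctl -w net.ipv6.neigh.default.gc_thresh2=2048
      sysctl -w net.ipv6.neigh.default.gc_thresh3=4096

    That would be consistent with disabling IPv6 making the messages disappear without any IPv4-side change.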


  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background

    Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away - it's brand new, and we can go back from scratch as many times as necessary.

    Goal / Issue

    I'd like to use the SSD to take advantage of Intel's Smart Response technology (which allows the SSD to act as a cache for HDDs). I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID 1 array (so I get the speed from the SSD and the redundancy from the RAID 1). However, Windows only sees the drive in Device Manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work the drives all have to be in a single RAID array (i.e. a RAID 0 pairing of the SSD and the RAID 1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array.

    Steps so Far

      Moved the SSD onto the Intel controller (I'd had it on the Marvell 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way).
      Updated the BIOS of the motherboard to the latest version.
      Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence).
      Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array.
      Checked the BIOS: the SSD does not show up in the boot order, but is an option that can be selected under boot options.
      Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult?
      Moved the SSD to the Marvell controller. The firmware update CD appeared to hang while searching for drives.
      Swapped out the SATA cable for the manufacturer's and moved back to the Intel storage controller. Noticed at this point that in the Intel RST software a device DOES show up in addition to the RAID set - only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in Device Manager.
      Moved the SSD to a port from 0-3 on the motherboard and set SATA mode to IDE (after disconnecting the RAID 1 config) to allow the firmware update to work. Firmware was already at the latest version.

    Next Steps?

    Components involved

      ASUS P8Z68-V PRO motherboard (Intel Z68 chipset)
      Intel i7 2600K processor
      2 x 1 TB 7200 RPM HDDs
      64 GB Crucial M4 SSD (M4-CT064M4SSD2)

    For Reference -- Storage Configuration

      Intel controllers (3 Gbps and 6 Gbps ports): the two 1 TB HDDs (the RAID 1 array) and the 64 GB SSD
      Marvell 6 Gbps controller: unused

    For Reference -- Intel RST (v10.8.0.1003) Screenshot

    Don't mind the "rebuilding" - knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD.

    Any thoughts? Thanks in advance for any help!
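
    Two hedged notes that may help here. First, Smart Response is normally enabled from the RST console's "Accelerate" button (with the SATA mode set to RAID), rather than by hand-building a RAID 0 across the SSD and the RAID 1 array. Second, "not enough space for an array" is often just leftover partition or RAID metadata on the SSD; clearing it frequently makes the disk selectable as a cache device. Note this erases the SSD, and the disk number below is illustrative:

      diskpart
      DISKPART> list disk      (identify the 64 GB SSD, e.g. Disk 2)
      DISKPART> select disk 2
      DISKPART> clean          (wipes the partition table and metadata)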


  • How can I resolve this one application coming up with an "You don't have permission to use the application" error?

    - by morgant
    I've got a Mac OS X 10.6 Snow Leopard Server Open Directory master with a user who's getting Mobility & Application managed preferences from a group (the only group they're a member of). The workstation is also running Mac OS X 10.6 Snow Leopard. When the user logs in and tries to run our primary application, which they're explicitly allowed to run (via the group's preferences), it says "You don't have permission to use the application 'Blah'".

    Now, the application is added to the group's list of always-allowed applications, unsigned (so a minor difference in application version or file contents shouldn't disallow it). It even lives in a subdirectory of /Applications, which is in the list of folders to allow applications from. I've run into this when logging this user into new workstations, and the following usually works:

      1. Log them out.
      2. Remove the following files from their mobile home folder on the workstation: /Library/Managed\ Preferences/, ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
      3. Remove the following files from their network home folder on the server: ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
      4. Log them back in on the workstation.

    However, this no longer resolves the issue. Their Home Sync preferences are set (on the group) to sync ~, but not the following files (manually, at login, and at logout... no background sync here):

      ~/.SymAVQSFile
      ~/NAVMac800QSFile
      ~/Library
      ~/.FileSync
      ~/.account

    Their Preferences Sync preferences are set (also on the group) to sync ~/Library & ~/Documents/Microsoft User Data, but not the following files (also manually, at login, and at logout... no background sync):

      ~/.SymAVQSFile
      ~/.Trash
      ~/.Trashes
      ~/Documents/Microsoft User Data/Entourage Temp
      ~/Library/Application Support/SyncServices
      ~/Library/Application Support/MobileSync
      ~/Library/Caches
      ~/Library/Calendars/Calendar Cache
      ~/Library/Logs
      ~/Library/Mail/AvailableFeeds
      ~/Library/Mail/Envelope Index
      ~/Library/Preferences/Macromedia/
      ~/Library/Printers
      ~/Library/PubSub/Database
      ~/Library/PubSub/Downloads
      ~/Library/PubSub/Feeds
      ~/Library/Safari/Icons.db
      ~/Library/Safari/HistoryIndex.sk
      ~/Library/iTunes/iPhone Software Updates
      IMAP-*
      Exchange-*
      EWS-*
      Mac-*
      ~/Library/Preferences/ByHost
      ~/Library/Preferences/com.apple.dock.plist
      ~/Library/Preferences/com.apple.sitebarlists.plist
      ~/Library/Application Support/4D
      ~/Library/Preferences/com.apple.MCX.plist
      ~/.FileSync
      ~/.account

    Even with ~/Library/Preferences/com.apple.MCX.plist prevented from syncing during a Preferences Sync, it still seems to show up in the network home on the server frequently. Are there any other files, besides ~/Library/Preferences/com.apple.MCX.plist, that contain application managed preferences and might be causing this one app to show up as not allowed? Any ideas on how ~/Library/Preferences/com.apple.MCX.plist keeps getting sync'd back up to the network home folder on the server?

    Update: I thought I had found a workaround this morning, but it also seemed to be extremely temporary. Basically, looking at /Library/Managed\ Preferences/[shortname]/com.apple.applicationaccess.new.plist, I discovered that it didn't have an entry for the application in question, but /Library/Managed\ Preferences/[shortname]/complete.plist did. Naturally, I deleted com.apple.applicationaccess.new.plist, logged in again, and it worked... on one workstation. It failed on others, and after logging out & back in a couple more times it started failing on all of them again, even after further deletions of com.apple.applicationaccess.new.plist. Oddly, com.apple.applicationaccess.new.plist & complete.plist do both contain an entry for the application in question now, but it still says it's not allowed.

    Further Update: Okay, so I now have a reproducible workaround, which seems to be required after every reboot of the workstation:

      1. Log in as the user (you'll discover you cannot launch the application in question).
      2. Fast User Switch to the local admin account on the workstation (we always have one on every machine).
      3. From that local admin account, run sudo mcxrefresh -n 'shortname' (logging out and back in as the user in question will not work).
      4. Fast User Switch back to the user (you'll still not be allowed to run the application).
      5. Log the user out and back in (you'll now be able to run the application in question).
      6. Fast User Switch back to the local admin account, log it out, and log back in as the user in question.

    If you do all that exactly as described, it'll keep working through a log out & log back in, but NOT through a reboot. If, after a reboot, you try something like logging in as the local admin account, running sudo mcxrefresh -n 'shortname', logging out, then logging in as the user in question, it will NOT work.

    Yet Another Update: We don't have any computer groups in our Open Directory, so it shouldn't be getting any conflicting settings from there. I ran sudo mcxquery -format xml -user shortname -group groupname before & after performing the aforementioned process to allow the application in question to be run, and the results were identical (saved the results to files & diff'd... I'm not just guessing here).

    One Step Forward, Half a Step Back: When the Mac OS X 10.6.5 Server update was released, we upgraded our Open Directory master to it, as the changes included the following managed-preferences fixes, which I hoped might address this issue:

      Addresses an issue that could prevent managed preferences from being applied when a user logs in on a workstation that has been idle.
      Fixes an issue that could prevent administrators from bypassing client management settings on a workstation.

    This seemed to improve the situation slightly. The application in question now usually launches without error. If and when it does launch with the "You don't have permission to use the application" error, logging the user out and back in seems to correct it. That said, we've since had to add a couple of applications to the user's ~/Applications/ directory, and those are still prevented from launching.

    The workstations are running Mac OS X 10.6.4, the OD master (which the workstations are bound to) is running Mac OS X 10.6.5 Server (although there are two OD replicas still running 10.6.4 Server), and we're using Workgroup Manager 10.6.3 (which is included with the Server Admin Tools 10.6.5 upgrade) to add the applications (unsigned, as always).

    This time, I've caught the following in /var/log/system.log when attempting to launch one of the allowed applications from ~/Applications:

      Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker checkApp:csFlags:] [954:username] -- *** Incoming app appears to be masquerading as white listed app and failed signature validation: /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro. Note: This may be a valid app of a different version than what was whitelisted (on a different volume?)
      Dec 22 17:36:24 hostname [0x0-0xa42a42].com.filemaker.filemakerpro[43304]: launch of /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro was blocked
      Dec 22 17:36:24 hostname com.apple.launchd.peruser.1340[6375] ([0x0-0xa42a42].com.filemaker.filemakerpro[43304]): Exited with exit code: 255
      Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker(Private) _removeAppFromWhiteList:] [1362:username] -- *** Couldn't find local user record

    Running sudo mcxquery -format xml -user username -group groupname includes the following entry for FileMaker Pro 5.5 (and appears to include a full integration of the user's application whitelist & the group's application whitelist):

      <dict>
          <key>bundleID</key>
          <string>com.filemaker.filemakerpro</string>
          <key>displayName</key>
          <string>FileMaker Pro</string>
      </dict>

    Note the lack of <key>appID</key><data> ... </data>, which seems to specify a signed application. While whitelisted directories also appear to be correctly listed in the results, they too do not actually allow the applications to be run. What is going on here?! Where else should I be looking?
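
    Given the "masquerading as white listed app and failed signature validation" message, a hedged check is whether the copy being launched is byte-identical to the copy that was whitelisted - FileMaker Pro 5.5 long predates code signing, so every copy should look equally unsigned:

      # Expect "code object is not signed" (or similar) for every copy:
      codesign -vv "/Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app"

      # Compare this checksum on the copy whitelisted in Workgroup Manager
      # vs. the copy in ~/Applications; a mismatch would explain the log.
      md5 "/Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro"

    The "on a different volume?" hint in the log also suggests parentalcontrolsd may be validating against a path in the network home rather than the mobile home, which would fit the sync-related symptoms above.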


  • More interrupts than CPU context switches

    - by Christopher Valles
    I have a machine running Debian GNU/Linux 5.0.8 (Lenny), with 8 cores and 12 GB of RAM. We have one core permanently at around 40% ~ 60% wait time, and while trying to spot what is happening I realized that we have more interrupts than CPU context switches. I found that the normal ratio is around 10x more context switches than interrupts, but on my server the values are completely different:

      backend1:~# vmstat -s
        12330788 K total memory
        12221676 K used memory
         3668624 K active memory
         6121724 K inactive memory
          109112 K free memory
         3929400 K buffer memory
         4095536 K swap cache
         4194296 K total swap
            7988 K used swap
         4186308 K free swap
        44547459 non-nice user cpu ticks
          702408 nice user cpu ticks
        13346333 system cpu ticks
      1607583668 idle cpu ticks
       374043393 IO-wait cpu ticks
         4144149 IRQ cpu ticks
         3994255 softirq cpu ticks
               0 stolen cpu ticks
      4445557114 pages paged in
      2910596714 pages paged out
          128642 pages swapped in
          267400 pages swapped out
      3519307319 interrupts
      2464686911 CPU context switches
      1306744317 boot time
        11555115 forks

    Any ideas if that is an issue? And in that case, how can I spot the cause and fix it?

    Update

    Following the instructions in the comments and focusing on the core stuck in wait, I checked the processes attached to that core, and below you can find the list:

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  P COMMAND
         24 root      RT  -5     0    0    0 S    0  0.0   0:03.42 7 migration/7
         25 root      15  -5     0    0    0 S    0  0.0   0:04.78 7 ksoftirqd/7
         26 root      RT  -5     0    0    0 S    0  0.0   0:00.00 7 watchdog/7
         34 root      15  -5     0    0    0 S    0  0.0   1:18.90 7 events/7
         83 root      15  -5     0    0    0 S    0  0.0   1:10.68 7 kblockd/7
        291 root      15  -5     0    0    0 S    0  0.0   0:00.00 7 aio/7
        569 root      15  -5     0    0    0 S    0  0.0   0:00.00 7 ata/7
       1545 root      15  -5     0    0    0 S    0  0.0   0:00.00 7 ksnapd
       1644 root      15  -5     0    0    0 S    0  0.0   0:36.73 7 kjournald
       1725 root      16  -4 16940 1152  488 S    0  0.0   0:00.00 7 udevd
       2342 root      20   0  8828 1140  956 S    0  0.0   0:00.00 7 sh
       2375 root      20   0  8848 1220 1016 S    0  0.0   0:00.00 7 locate
       2421 root      30  10  8896 1268 1016 S    0  0.0   0:00.00 7 updatedb.findut
       2430 root      30  10 58272  49m  616 S    0  0.4   0:17.44 7 sort
       2431 root      30  10  3792  448  360 S    0  0.0   0:00.00 7 frcode
       2682 root      15  -5     0    0    0 S    0  0.0   3:25.98 7 kjournald
       2683 root      15  -5     0    0    0 S    0  0.0   0:00.64 7 kjournald
       2687 root      15  -5     0    0    0 S    0  0.0   1:31.30 7 kjournald
       3261 root      15  -5     0    0    0 S    0  0.0   2:30.56 7 kondemand/7
       3364 root      20   0  3796  596  476 S    0  0.0   0:00.00 7 acpid
       3575 root      20   0  8828 1140  956 S    0  0.0   0:00.00 7 sh
       3597 root      20   0  8848 1216 1016 S    0  0.0   0:00.00 7 locate
       3603 root      30  10  8896 1268 1016 S    0  0.0   0:00.00 7 updatedb.findut
       3612 root      30  10 58272  49m  616 S    0  0.4   0:27.04 7 sort
       3655 root      20   0 11056 2852  516 S    0  0.0   5:36.46 7 redis-server
       3706 root      20   0 19832 1056  816 S    0  0.0   0:01.64 7 cron
       3746 root      20   0  3796  580  484 S    0  0.0   0:00.00 7 getty
       3748 root      20   0  3796  580  484 S    0  0.0   0:00.00 7 getty
       7674 root      20   0 28376 1000  736 S    0  0.0   0:00.00 7 cron
       7675 root      20   0  8828 1140  956 S    0  0.0   0:00.00 7 sh
       7708 root      30  10 58272  49m  616 S    0  0.4   0:03.36 7 sort
      22049 root      20   0  8828 1136  956 S    0  0.0   0:00.00 7 sh
      22095 root      20   0  8848 1220 1016 S    0  0.0   0:00.00 7 locate
      22099 root      30  10  8896 1264 1016 S    0  0.0   0:00.00 7 updatedb.findut
      22108 root      30  10 58272  49m  616 S    0  0.4   0:44.55 7 sort
      22109 root      30  10  3792  452  360 S    0  0.0   0:00.00 7 frcode
      26927 root      20   0  8828 1140  956 S    0  0.0   0:00.00 7 sh
      26947 root      20   0  8848 1216 1016 S    0  0.0   0:00.00 7 locate
      26951 root      30  10  8896 1268 1016 S    0  0.0   0:00.00 7 updatedb.findut
      26960 root      30  10 58272  49m  616 S    0  0.4   0:10.24 7 sort
      26961 root      30  10  3792  452  360 S    0  0.0   0:00.00 7 frcode
      27952 root      20   0 65948 3028 2400 S    0  0.0   0:00.00 7 sshd
      30731 root      20   0     0    0    0 S    0  0.0   0:01.34 7 pdflush
      31204 root      20   0     0    0    0 S    0  0.0   0:00.24 7 pdflush
      21857 deploy    20   0 1227m 2240  868 S    0  0.0   2:44.22 7 nginx
      21858 deploy    20   0 1228m 2784  868 S    0  0.0   2:42.45 7 nginx
      21862 deploy    20   0 1228m 2732  868 S    0  0.0   2:43.90 7 nginx
      21869 deploy    20   0 1228m 2840  868 S    0  0.0   2:44.14 7 nginx
      27994 deploy    20   0 19372 2216 1380 S    0  0.0   0:00.00 7 bash
      28493 deploy    20   0  331m  32m  16m S    4  0.3   0:00.40 7 apache2
      21856 deploy    20   0 1228m 2844  868 S    0  0.0   2:43.64 7 nginx
       3622 nobody    30  10 21156  10m  916 D    0  0.1   4:42.31 7 find
       7716 nobody    30  10 12268 1280  888 D    0  0.0   0:43.50 7 find
      22116 nobody    30  10 12612 1696  916 D    0  0.0   6:32.26 7 find
      26968 nobody    30  10 12268 1284  888 D    0  0.0   1:56.92 7 find

    Update

    As suggested, I took a look at /proc/interrupts, and below is the info:

             CPU0       CPU1       CPU2        CPU3      CPU4      CPU5       CPU6       CPU7
        0:     35          0          0  1469085485         0         0          0          0  IO-APIC-edge     timer
        1:      0          0          0           8         0         0          0          0  IO-APIC-edge     i8042
        8:      0          0          0           1         0         0          0          0  IO-APIC-edge     rtc0
        9:      0          0          0           0         0         0          0          0  IO-APIC-fasteoi  acpi
       12:      0          0          0         105         0         0          0          0  IO-APIC-edge     i8042
       16:      0          0          0           0         0         0          0  580212114  IO-APIC-fasteoi  3w-9xxx, uhci_hcd:usb1
       18:      0          0        142           0         0         0          0          0  IO-APIC-fasteoi  uhci_hcd:usb6, ehci_hcd:usb7
       19:      9          0          0           0         0         0          0          0  IO-APIC-fasteoi  uhci_hcd:usb3, uhci_hcd:usb5
       21:      0          0          0           0         0         0          0          0  IO-APIC-fasteoi  uhci_hcd:usb2
       23:      0          0          0           0         0         0          0          0  IO-APIC-fasteoi  uhci_hcd:usb4, ehci_hcd:usb8
      1273:     0          0 1600400502           0         0         0          0          0  PCI-MSI-edge     eth0
      1274:     0          0          0           0         0         0          0          0  PCI-MSI-edge     ahci
      NMI:      0          0          0           0         0         0          0          0  Non-maskable interrupts
      LOC: 214252181  69439018  317298553   21943690  72562482  56448835  137923978  407514738  Local timer interrupts
      RES:  27516446  16935944   26430972   44957009  24935543  19881887   57746906   24298747  Rescheduling interrupts
      CAL:     10655      10705      10685      10567     10689     10669      10667        396  function call interrupts
      TLB:    529548     462587     801138     596193    922202    747313    2027966     946594  TLB shootdowns
      TRM:         0          0          0          0         0         0          0          0  Thermal event interrupts
      THR:         0          0          0          0         0         0          0          0  Threshold APIC interrupts
      SPU:         0          0          0          0         0         0          0          0  Spurious interrupts
      ERR:         0

    All the values seem more or less the same for all the cores, but this one - IO-APIC-fasteoi 3w-9xxx, uhci_hcd:usb1 - only hits core 7 (the same core with the wait time of 40% ~ 60%). Could something attached to the USB port be causing the issue? Thanks in advance.
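
    Since the 3w-9xxx (3ware RAID) interrupt line is pinned to core 7 together with uhci_hcd:usb1, one low-risk experiment is to move that IRQ onto an idle core and see whether the wait time follows it. IRQ 16 is taken from the table above; the target CPU choice is arbitrary:

      # smp_affinity is a hex bitmask of allowed CPUs: 01 = CPU0, 80 = CPU7
      echo 01 > /proc/irq/16/smp_affinity

      # Watch whether the 3w-9xxx counter now grows on CPU0 instead
      grep '3w-9xxx' /proc/interrupts

    If the wait moves with the IRQ, the interrupts-vs-context-switches ratio is a red herring, and the real question becomes why the 3ware controller raises so many interrupts (580M and counting).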


  • Command-line video editing in Linux (cut, join and preview)

    - by sdaau
    I have rather simple editing needs - I need to cut up some videos, maybe insert some PNGs in between them, and join these videos (don't need transitions, effects, etc.). Basically, pitivi would do what I want - except, I use 640x480 30 fps AVIs from a camera, and as soon as I put in over a couple of minutes of that kind of material, pitivi starts freezing on preview, and thus becomes unusable.

    So, I started looking for a command-line tool for Linux; I guess only ffmpeg (command line - Using ffmpeg to cut up video - Super User) and mplayer (Sam - Edit video file with mencoder under linux) are so far candidates, but I cannot find examples of the use I have in mind.

    Basically, I'd imagine there's an encoder and a player tool (like ffmpeg vs ffplay, or mencoder vs mplayer) - such that, to begin with, the edit sequence could be specified directly on the command line, preferably with frame resolution; a pseudocode invocation would look like:

      videnctool -compose
        --file=vid1.avi --start=00:00:30:12 --end=00:01:45:00
        --file=vid2.avi --start=00:05:00:00 --end=00:07:12:25
        --file=mypicture.png --duration=00:00:02:00
        --file=vid3.avi --start=00:02:00:00 --end=00:02:45:10
        --output=editedvid.avi

    ... or, it could have a "playlist" text file, like:

      vid1.avi       00:00:30:12  00:01:45:00
      vid2.avi       00:05:00:00  00:07:12:25
      mypicture.png  -            00:00:02:00
      vid3.avi       00:02:00:00  00:02:45:10

    ... so it could be called with:

      videnctool -compose --playlist=playlist.txt --output=editedvid.avi

    The idea here would be that all of the videos are in the same format - allowing the tool to avoid transcoding and just do a "raw copy" instead (as in mencoder's copy codec: "-oac copy -ovc copy") - or, in lack of that, uncompressed audio/video would be OK (although it would eat a bit of space). In the case of the still image, the tool would use the encoding set by the video files.

    The thing is, I can so far see that mencoder and ffmpeg can operate on individual files; e.g. cut a single section from a single file, or join files (mencoder also has Edit Decision Lists (EDL), which can be used to do frame-exact cutting - so you can define multiple cut regions, but it's again attributed to a single file). Which implies I have to work on cutting pieces first from individual files (each of which would demand its own temporary file on disk), and then join them into a final video file.

    I would then imagine that there is a corresponding player tool, which can read the same command-line option format / playlist file as the encoding tool - except it will not generate an output file, but instead play the video; e.g. in pseudocode:

      vidplaytool --playlist=playlist.txt --start=00:01:14 --end=00:03:13

    ... and, given there's enough memory, it would generate a low-res video preview in RAM and play it back in a window, while offering some limited interaction (like mplayer's keyboard shortcuts for play, pause, rewind, step frame). Of course, I'd imagine the start and end times to refer to the entire playlist, and to include any file that may end up in that region of the playlist. Thus, the end result of all this would be: command-line operation; no temporary files while doing the editing - and also no temporary files (nor transcoding) when rendering the final output... which I myself think would be nice.

    So, while I think that all of the above may be a bit of a stretch - does there exist anything that would approximate the workflow described above?
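
    ffmpeg can get reasonably close to the imagined videnctool with its concat demuxer: cut each piece with stream copy (no transcoding), list the pieces in a text file, and join them, again with stream copy. A sketch, assuming a reasonably recent ffmpeg build and placeholder filenames/timestamps - note it does use intermediate files, and -c copy can only cut at keyframes (drop it for frame-exact cuts at the cost of a re-encode):

      # cut: -ss = start, -t = duration, -c copy = no transcoding
      ffmpeg -i vid1.avi -ss 00:00:30.4 -t 00:01:14.6 -c copy part1.avi
      ffmpeg -i vid2.avi -ss 00:05:00.0 -t 00:02:12.8 -c copy part2.avi

      # playlist.txt contains:
      #   file 'part1.avi'
      #   file 'part2.avi'
      ffmpeg -f concat -i playlist.txt -c copy editedvid.avi

    For previewing without writing anything to disk, mplayer's EDL mode mentioned above remains the closest thing to the imagined vidplaytool: it skips the marked regions at playback time.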

    Read the article

  • Alternative Methods of Sharing Folders in Windows?

    - by Blaenk
    Hey guys. I'm running Windows 7, and as of now I simply share folders as one usually does in Windows. I then have a MacBook with Leopard (now Snow Leopard) which I use to connect to my computer and mount the shares by going to Finder, hitting CMD + K, and typing smb://BlaenkPC (the name of my PC) into the address box. This connects to my computer and mounts all of the shares.

    The problem is that sometimes, if for example I close my MacBook (which makes it go to sleep) - or sometimes even without doing that - the connection somehow drops. Sometimes I close the MacBook and upon re-opening it everything still works; it's random. It still shows the computer as being connected, but it just shows 'loading' indefinitely. If I hit 'eject' with the intention of re-connecting to the computer, it disappears from the sidebar (the computer icon) in Finder, but I cannot re-connect. Activity Monitor (or ps aux, whichever) both show hung instances of umount, one for each share that was mounted. I cannot kill these processes with kill or killall (yes, even with sudo, and sending signal -9). This has happened to me before, and here is another person who has experienced this.

    My question boils down to this: is there an alternative method of sharing folders in Windows, that my Mac can read/understand, that is possibly more reliable and preferably just as fast? I usually use the mounted shares to watch television episodes off my computer, or movies, etc. (in other words, I open them in VLC and they automatically stream from my computer). As far as I can tell, this is a problem with the Samba protocol. I have heard of NFS, but I am not sure if I would have to re-format my drives, or what. I don't mind running a service or daemon to allow the sharing of the folders; I just want it done, and hopefully in a better way than typical Windows shares over Samba.

    Usually when I encounter this problem, which is often (read: every day), I have no other option but to restart the MacBook. As I stated in the first question I linked to, shutting down and restarting don't work; I have to force the shutdown by holding the power button. I have not modified my installation of Mac OS X in any hackish way, so I doubt it's something with the operating system, but worst come to worst, I might end up reformatting and doing a clean install to see if that fixes anything, as I am at a complete loss as to what may be causing the problem, and no one else seems to have any idea or care, despite there being quite a few people suffering from this problem, as my research has shown.

    Any pieces of information that can help are extremely appreciated. You don't have to answer every question on here; even some insight as to why it might not be possible to kill those hung umount instances, for example, or why I may not be able to reconnect using Samba (is it something about the way the protocol works?) would help. One thing to note is that I have another computer in the home network that doesn't seem to have this problem. However, it is also running Windows 7 (note, though, that I am not using the homegroup feature, but the typical Windows sharing feature). My only deduction is that the problem is being caused by the way the Mac (or its Samba implementation, whichever) is handling things. Perhaps it is a limitation.
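    In the meantime, one thing I've been meaning to try is mounting from Terminal instead of Finder, on the theory that a mount I created myself might at least be easier to force off when it hangs (share name and mount point below are just examples):

    mkdir -p ~/mnt/tv
    mount_smbfs //guest@BlaenkPC/TV ~/mnt/tv
    # and when it wedges:
    sudo umount -f ~/mnt/tv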

    Read the article

  • udp through nat

    - by youllknow
    Hi everyone! I've got two private networks (each of them behind a typical DSL router). The routers are connected to the WWW. The external interface of each router has one dynamic IP address.

    I want to stream data via UDP directly between one client in private network A and one client in private network B. I've already tried a lot of things (see: http://en.wikipedia.org/wiki/UDP_hole_punching, or STUN), but it wasn't possible for me to transfer data between the two clients.

    It's possible to use a server (located in the WWW, with a static IP) to exchange the routers' external IPs (and external ports) between the clients. So imagine client A knows client B's external IP and the external port assigned by B's router. I simply tried sending UDP packets to the receiver's external IP/port combination, but without any result.

    So does anyone know what to do to communicate via UDP through the two NAT routers? It must be possible??? Or does Skype, for example, not communicate directly between the clients when they call each other (voice over IP)? I am sorry for my bad English! If something is confusing, don't mind asking me!!! Thanks for your help in advance.

    ::::EDIT::::

    I can't get pwnat or chownat working. I tried it with my own DSL gateway - didn't work. Then I set up a complete virtual environment using VMware:

    C1 (Client 1, WinXP Prof SP3): 172.16.16.100/24, GW 172.16.16.1
    C2 (Client 2, WinXP Prof SP3): 10.0.0.100/24, GW 10.0.0.1
    C3 (Client 3, WinXP Prof SP3): 3.0.0.2/24, GW 3.0.0.1
    S1 (Ubuntu 10.04 x64 Server): eth0: 172.16.16.1/24, eth1: 1.0.0.2/24, GW 1.0.0.1
    S2 (Ubuntu 10.04 x64 Server): eth0: 10.0.0.1/24, eth1: 2.0.0.2/24, GW 2.0.0.1
    S3 (Ubuntu 10.04 x64 Server): eth0: 1.0.0.1/24, eth1: 2.0.0.1/24, eth2: 3.0.0.1/24

    +--+     +--+     +--+     +--+     +--+
    |C1|-----|S1|-----|S3|-----|S2|-----|C2|
    +--+     +--+     +--+     +--+     +--+
                       |
                      +--+
                      |C3|
                      +--+

    Servers S1 and S2 provide NAT functionality (they have routing enabled and a firewall which allows traffic from the internal net and provides the NAT functionality). Server S3 has routing enabled. The client firewalls are turned off.

    C1 and C2 are able to ping C3, e.g. visit C3's webserver. They are also able to send UDP packets to C3 (C3 successfully receives them)! C1 and C2 also have webservers running, for test purposes.

    I ran "chownat -s 80 2.0.0.2" at C1, and "chownat -c 8000 1.0.0.2" at C2. Then I tried to access C1's web page from C2 by browsing to localhost at port 8000. It didn't work.

    Can anybody help me? Any suggestions? If you have any questions about my question, please ask!
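    For reproducing the basic punch by hand, the most stripped-down test I can think of is plain netcat fired from both ends at (roughly) the same time - the addresses/ports below stand in for whatever the rendezvous server reported, and nc flags differ between netcat builds:

    # on client A, pin the source port and aim at B's external mapping:
    nc -u -p 40000 <B_external_ip> 40001
    # on client B, simultaneously:
    nc -u -p 40001 <A_external_ip> 40000
    # type a line on either side; if it appears on the other end, the hole is punched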

    Read the article

  • How can I resolve this one application coming up with a "You don't have permission to use the application" error?

    - by morgant
    I've got a Mac OS X 10.6 Snow Leopard Server Open Directory Master with a user who's getting Mobility & Application managed preferences from a group (the only group they're a member of). The workstation is also running Mac OS X 10.6 Snow Leopard. When the user logs in and tries to run our primary application, which they're explicitly allowed to run (via the group's preferences), it says "You don't have permission to use the application 'Blah'".

    Now, the application is added to the group's list of always-allowed applications, unsigned (so a minor difference in application version or file contents shouldn't disallow it). It even lives in a subdirectory of /Applications, which is in the list of folders to allow applications from.

    I've run into this when logging this user into new workstations, and the following usually works:

    1. Log them out.
    2. Remove the following files from their mobile home folder on the workstation: /Library/Managed\ Preferences/, ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    3. Remove the following files from their network home folder on the server: ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
    4. Log them back in on the workstation.

    However, this no longer resolves the issue.

    Their Home Sync preferences are set (on the group) to sync ~, but not the following files (manually, at login, and at logout... no background sync here):

    ~/.SymAVQSFile
    ~/NAVMac800QSFile
    ~/Library
    ~/.FileSync
    ~/.account

    Their Preferences Sync preferences are set (also on the group) to sync ~/Library & ~/Documents/Microsoft User Data, but not the following files (also manually, at login, and at logout... no background sync):

    ~/.SymAVQSFile
    ~/.Trash
    ~/.Trashes
    ~/Documents/Microsoft User Data/Entourage Temp
    ~/Library/Application Support/SyncServices
    ~/Library/Application Support/MobileSync
    ~/Library/Caches
    ~/Library/Calendars/Calendar Cache
    ~/Library/Logs
    ~/Library/Mail/AvailableFeeds
    ~/Library/Mail/Envelope Index
    ~/Library/Preferences/Macromedia/
    ~/Library/Printers
    ~/Library/PubSub/Database
    ~/Library/PubSub/Downloads
    ~/Library/PubSub/Feeds
    ~/Library/Safari/Icons.db
    ~/Library/Safari/HistoryIndex.sk
    ~/Library/iTunes/iPhone Software Updates
    IMAP-*
    Exchange-*
    EWS-*
    Mac-*
    ~/Library/Preferences/ByHost
    ~/Library/Preferences/com.apple.dock.plist
    ~/Library/Preferences/com.apple.sitebarlists.plist
    ~/Library/Application Support/4D
    ~/Library/Preferences/com.apple.MCX.plist
    ~/.FileSync
    ~/.account

    Even with ~/Library/Preferences/com.apple.MCX.plist prevented from syncing during a Preferences Sync, it still seems to show up in the network home on the server frequently.

    Are there any other files, besides ~/Library/Preferences/com.apple.MCX.plist, that contain application Managed Preferences and might be causing this one app to show up as not allowed? Any ideas on how ~/Library/Preferences/com.apple.MCX.plist keeps getting sync'd back up to the network home folder on the server?

    Update: I thought I had found a workaround this morning, but it also seemed to be extremely temporary. Basically, looking at /Library/Managed\ Preferences/[shortname]/com.apple.applicationaccess.new.plist, I discovered that it didn't have an entry for the application in question, but /Library/Managed\ Preferences/[shortname]/complete.plist did. Naturally, I deleted com.apple.applicationaccess.new.plist, logged in again, and it worked... on one workstation. It failed on others, and after logging out & back in a couple more times it started failing on all of them again, even after further deletions of com.apple.applicationaccess.new.plist. Oddly, com.apple.applicationaccess.new.plist & complete.plist do both contain an entry for the application in question now, but it still says it's not allowed.

    Further Update: Okay, so I now have a reproducible workaround, which seems to be required after every reboot of the workstation:

    1. Log in as the user (you'll discover you cannot launch the application in question).
    2. Fast User Switch to the local admin account on the workstation (we always have one on every machine).
    3. From that local admin account, run sudo mcxrefresh -n 'shortname' (logging out and back in as the user in question will not work).
    4. Fast User Switch back to the user (you'll still not be allowed to run the application).
    5. Log the user out and back in (you'll now be able to run the application in question).
    6. Fast User Switch back to the local admin account, log it out, and log back in as the user in question.

    If you do all that exactly as described, it'll keep working through log out & log back in, but NOT through a reboot. If, after a reboot, you try something like logging in as the local admin account, running sudo mcxrefresh -n 'shortname', logging out, then logging in as the user in question, it will NOT work.

    Yet Another Update: We don't have any computer groups in our Open Directory, so it shouldn't be getting any conflicting settings from there. I ran sudo mcxquery -format xml -user shortname -group groupname before & after performing the aforementioned process to allow the application in question to be run, and the results were identical (saved the results to files & diff'd... I'm not just guessing here).

    One Step Forward, Half a Step Back: When the Mac OS X 10.6.5 Server update was released, we upgraded our Open Directory Master to it, as the changes included the following managed preferences fixes which I hoped might address this issue:

    "Addresses an issue that could prevent managed preferences from being applied when a user logs in on a workstation that has been idle."
    "Fixes an issue that could prevent administrators from bypassing client management settings on a workstation."

    This seemed to improve the situation slightly. The application in question now usually launches without error. If and when it does launch with the "You don't have permission to use the application" error, logging the user out and back in seems to correct it. That said, we've since had to add a couple of applications to the user's ~/Applications/ directory, and those are still prevented from launching.

    The workstations are running Mac OS X 10.6.4, the OD Master (which the workstations are bound to) is running Mac OS X 10.6.5 Server (although there are two OD Replicas still running 10.6.4 Server), and we're using Workgroup Manager 10.6.3 (which is included with the Server Admin Tools 10.6.5 upgrade) to add the applications (unsigned, as always).

    This time, I've caught the following in /var/log/system.log when attempting to launch one of the allowed applications from ~/Applications:

    Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker checkApp:csFlags:] [954:username] -- *** Incoming app appears to be masquerading as white listed app and failed signature validation: /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro. Note: This may be a valid app of a different version than what was whitelisted (on a different volume?)
    Dec 22 17:36:24 hostname [0x0-0xa42a42].com.filemaker.filemakerpro[43304]: launch of /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro was blocked
    Dec 22 17:36:24 hostname com.apple.launchd.peruser.1340[6375] ([0x0-0xa42a42].com.filemaker.filemakerpro[43304]): Exited with exit code: 255
    Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker(Private) _removeAppFromWhiteList:] [1362:username] -- *** Couldn't find local user record

    Running sudo mcxquery -format xml -user username -group groupname includes the following entry for FileMaker Pro 5.5 (and appears to include a full integration of the user's application whitelist & the group's application whitelist):

    <dict>
        <key>bundleID</key>
        <string>com.filemaker.filemakerpro</string>
        <key>displayName</key>
        <string>FileMaker Pro</string>
    </dict>

    Note the lack of <key>appID</key><data> ... </data>, which seems to specify a signed application. While whitelisted directories also appear to be correctly listed in the results, they too do not actually allow the applications to be run.

    What is going on here?! Where else should I be looking?
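    For anyone who wants to inspect the same plists: this is roughly how I've been dumping the whitelist entries for comparison - defaults read takes the path without the .plist extension, and the grep pattern is just my example:

    sudo defaults read "/Library/Managed Preferences/shortname/com.apple.applicationaccess.new" | grep -B1 -A3 FileMaker
    sudo defaults read "/Library/Managed Preferences/shortname/complete" | grep -B1 -A3 FileMaker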

    Read the article

  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDAV file system access, e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended.

    Following this German tutorial, which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root:

    suffix "ou=webtool,o=myOrg,c=de"
    rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de"

    The server starts and I can connect to it with LDAP Admin, which reports "LDAP error: Object lacks". Well, there aren't any objects yet. I now want to create the root and admin elements from the shell, so I created an init.ldif file:

    dn: ou=webtool,o=myOrg,c=de
    objectclass: dcObject
    objectclass: organization
    dc: webtool
    o: webtool

    dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
    objectclass: organizationalRole
    cn: ldapadmin

    Trying to load the file runs into an error telling me that ou is not allowed:

    server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
    Enter LDAP Password:
    adding new entry "ou=webtool,o=myOrg,c=de"
    ldap_add: Object class violation (65)
            additional info: attribute 'ou' not allowed

    I am not using ou anywhere except in the suffix, so the question: isn't it allowed here? What is allowed here?

    Here is my answer. (I am not allowed to post it as an answer for 8 hours, so it is part of the question for now; I will move it out some day, if I don't forget to do so.)

    There are numerous dependencies for the creation of elements, and the error messages are rather confusing if you don't know the concept. The objectclass isn't necessarily dcObject for the database's root node, as one might guess from reading several tutorials. Instead, it must correspond to the object's type: here, for a name starting with ou=, it must be organizationalUnit. I found this piece of information in the tables of "Appendix E: LDAP - Object Classes and Attributes" (http://www.zytrax.com/books/ldap/ape/).

    Further on, the object class dictates which properties must and can be added in the record. Here, organizationalUnit must have an ou: entry and must have neither a dc: nor an o: entry. The healthy init.ldif file looks like this:

    dn: ou=webtool,o=myOrg,c=de
    objectclass: organizationalUnit
    ou: LDAP server for my webtool

    dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
    objectclass: organizationalRole
    cn: ldapadmin

    Note: The page also states: "While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy [...] to determine if this is really the case." I thought that would mean my root record would have to provide the MUST fields for c= and o= (c: and o:, respectively), but this isn't the case.
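    To verify the result after loading, an ldapsearch along these lines should list both entries (sketch using my own suffix):

    ldapsearch -x -b "ou=webtool,o=myOrg,c=de" "(objectClass=*)"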

    Read the article

  • How to Eliminate Tape Backup and Off-site Storage Service?

    - by Daniel Lucas
    PLEASE READ THE UPDATES AT THE BOTTOM. THANKS! ;)

    Environment info (all Windows):

    - 2 sites
    - 30 servers at site #1 (3TB of backup data)
    - 5 servers at site #2 (1TB of backup data)
    - MPLS backbone tunnel connecting site #1 and site #2

    Current backup process:

    Online backup (disk-to-disk): Site #1 has a server running Symantec Backup Exec 12.5 with four 1TB USB 2.0 disks. BE jobs for full backups run nightly on all servers in site #1 to these disks. Site #2 backs up to a central file server there using software they already had when we purchased them. A BE job pulls that data nightly to site #1 and stores it on said disks.

    Off-site backup (tape): Connected to our backup server is a tape drive. BE backs up the external disks to tape once a week, which gets picked up by our off-site storage company. Obviously we rotate two tape libraries: one is always here and one is always there.

    Requirements:

    - Eliminate the need for tape and the off-site storage service by doing disk-to-disk at each site and replicating site #1 to site #2 and vice versa.
    - Software-based solution, as hardware options have been too pricey (i.e. SonicWall, Arkeia).
    - Agents for Exchange, SharePoint, and SQL.

    Some ideas so far:

    Storage: DroboPro at each site with an initial 8TB of storage (these are expandable up to 16TB at present). I like these because they are rackmountable, allow disparate drives, and have iSCSI interfaces. They are relatively cheap too.

    Software: Symantec Backup Exec 12.5 already has all the agents and licenses we need. I'd like to keep using it unless there is a better solution, similarly priced, that does everything BE does plus deduplication and replication.

    Server: Because there is no more need for a SCSI adapter (for the tape drive), we are going to virtualize our backup server, as it is currently the only physical machine save for the SQL boxes.

    Problems: When replicating between sites we want as little data as possible to go across the pipe. There is no deduplication or compression in what I have laid out here so far. The files being replicated are BE's virtual tape libraries from our disk-to-disk backup. Because of this, each of those huge files will go across the wire every week, because they change every day.

    And finally, the question: Is there any software out there that does deduplication, or at least compression, to handle just our site-to-site replication? Or, looking at our setup, is there any other solution that I am missing that might be cheaper, faster, better? Thanks. Sorry so long.

    UPDATE: I've set a bounty on this question to get it more attention. I'm looking for software that will handle replication of data between two sites using the least amount of data possible (either compression, deduplication, or some other method). Something similar to rsync would work, but it needs to be native to Windows and not a port involving shenanigans to get up and running. Prefer a GUI-based product, and I don't mind shelling out a few bones if it works. Please, answers that meet the above criteria only. If you don't think one exists, or if you think I'm being too restrictive, keep it to yourself. If after seven days there is no answer at all, so be it. Thanks again everyone.

    UPDATE 2: I really appreciate everyone coming forward with suggestions. There is no way for me to try all of these before the bounty expires. For now I'm going to let this bounty run out, and whoever has the most votes will get the 100 rep points. Thanks again!

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance - m1.medium, Ubuntu 12.04, Apache 2.2.22 - running a Drupal site using a MySQL DB server.

    RAM info:

    ~$ free -gt
                 total       used       free     shared    buffers     cached
    Mem:             3          1          2          0          0          0
    -/+ buffers/cache:          0          2
    Swap:            0          0          0
    Total:           3          1          2

    Hard drive info:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.9G  4.7G  2.9G  62% /
    udev            1.9G  8.0K  1.9G   1% /dev
    tmpfs           751M  180K  750M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            1.9G     0  1.9G   0% /run/shm
    /dev/xvdb       394G  199M  374G   1% /mnt

    The problem: About two days ago the site started failing because the MySQL server was shut down, with the following message:

    kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
    kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
    kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
    kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
    kernel: [2963686.169294] init: mysql main process ended, respawning

    That states that the VM was occupying 0.9GB, but my RAM had 2GB free, so 1GB was still left. I understand that in Linux, applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time it has happened.

    Obviously, the MySQL server tries to restart, but apparently there's no memory for it and it won't come up. Here is its error log:

    Plugin 'FEDERATED' is disabled.
    The InnoDB memory heap is disabled
    Mutexes and rw_locks use GCC atomic builtins
    Compressed tables use zlib 1.2.3.4
    Initializing buffer pool, size = 128.0M
    InnoDB: mmap(137363456 bytes) failed; errno 12
    Completed initialization of buffer pool
    Fatal error: cannot allocate memory for the buffer pool
    Plugin 'InnoDB' init function returned error.
    Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
    Unknown/unsupported storage engine: InnoDB
    [ERROR] Aborting
    [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter in apache2.conf and went to check it. It was set at 150; I decided to drop it to 60, like so:

    <IfModule mpm_prefork_module>
        ...
        MaxClients 60
    </IfModule>
    <IfModule mpm_worker_module>
        ...
        MaxClients 60
    </IfModule>
    <IfModule mpm_event_module>
        ...
        MaxClients 60
    </IfModule>

    Once I did that, I restarted the apache2 service and everything went smoothly for about three quarters of a day. But that night the MySQL service shut down once again - this time it wasn't killed the same way; instead, mysqld itself invoked the OOM killer, with the following message:

    kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
    kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
    kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is to change the kernel behaviour with the following (added to /etc/sysctl.conf), so no overcommits will take place:

    vm.overcommit_memory = 2
    vm.overcommit_ratio = 80

    I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have basic knowledge. Thanks a bunch in advance.
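    For reference, the two changes I'm leaning towards trying first, sketched out (the numbers are my own guesses for this instance size, not validated recommendations - and note the InnoDB pool is already at its 128M default, so the real lever may be connection counts):

    # /etc/mysql/my.cnf - keep MySQL's footprint bounded
    [mysqld]
    innodb_buffer_pool_size = 128M
    max_connections         = 60

    # and give the instance some swap, since free shows none:
    sudo dd if=/dev/zero of=/var/swapfile bs=1M count=1024
    sudo mkswap /var/swapfile
    sudo swapon /var/swapfile
    echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab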

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first

    - by mattalexx
    Background: I have a client site that consists of a CakePHP installation and a Magento installation:

    /web/example.com/
    /web/example.com/app/                 <== CakePHP
    /web/example.com/app/webroot/         <== DocumentRoot
    /web/example.com/app/webroot/store/   <== Magento
    /web/example.com/config/              <== Site-wide config
    /web/example.com/vendors/             <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem: The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site: code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution: At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons. Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that hit the main virtual host because it has ServerAlias *.example.com. So suddenly having them use only static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as the one they were built in. Also, I do not want to debug their code to make it portable.

    Half-baked solution: After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that needed HTTP access. It would look something like this:

    /web/example.com/                         <== A bunch of their files are in here
    /web/example.com/app/webroot/             <== 1st DocumentRoot; a bunch of their files are in here
    /web/example.com/app/webroot/store/       <== Some more are in here
    /web/example.com/site/                    <== New dir; contains only site files
    /web/example.com/site/app/                <== CakePHP
    /web/example.com/site/app/webroot/        <== 2nd DocumentRoot
    /web/example.com/site/app/webroot/store/  <== Magento
    /web/example.com/site/config/             <== Site-wide config
    /web/example.com/site/vendors/            <== Site-wide libraries

    After this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there.

    So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects match), try finding something within /web/example.com/site/app/webroot/.

    Another thing to keep in mind: the site might have problems if the $_SERVER['DOCUMENT_ROOT'] variable reads /web/example.com/app/webroot/ but the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything within the /web/example.com/site/app/webroot/ directory.

    Conclusion: Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
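    The closest recipe I know of for this kind of fallthrough uses mod_rewrite's filesystem checks - a sketch only, untested against this exact layout, and note that DOCUMENT_ROOT would still report the first root, which is the caveat raised above:

    <VirtualHost *:80>
        ServerName www.example.com
        ServerAlias *.example.com
        DocumentRoot /web/example.com/app/webroot

        RewriteEngine On
        # if the request resolves to nothing under the 1st root...
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
        RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
        # ...serve it from the 2nd root instead
        RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L]
    </VirtualHost>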

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd-party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

    "In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough - the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume." [Please read the entire post so the quote is not taken out of context.]

    Could it simply be that the 3rd-party defrag utils ignore this post-XP change and continue to use analysis algorithms similar to those XP used?

    2) Assuming that the 3rd-party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algorithms explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, the 1% reported by Windows' own defragger clearly implies that there is no cause for concern and that defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be noticeable performance gains after running one of the 3rd-party utils for 10 hours straight?

    4) I see that out of the box, Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd-party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
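    Side note on question 4: I also plan to cross-check from an elevated Command Prompt - the built-in CLI can analyze without defragging, and the weekly job lives in Task Scheduler under a fixed path on my box:

    :: analyze only, verbose report
    defrag D: /A /V
    :: inspect the out-of-the-box weekly task
    schtasks /query /v /tn "\Microsoft\Windows\Defrag\ScheduledDefrag"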

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

    kaefert@blechmobil:~$ lsusb -s 2:3
    Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is a problem that prevents the disk from being mounted:

    kaefert@blechmobil:~$ dmesg
    ...
    [  113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd
    [  113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320
    [  113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
    [  113.217790] usb 2-1: Product: Expansion Desk
    [  113.217792] usb 2-1: Manufacturer: Seagate
    [  113.217794] usb 2-1: SerialNumber: NA4J4N6K
    [  113.435404] usbcore: registered new interface driver uas
    [  113.455315] Initializing USB Mass Storage driver...
    [  113.468051] scsi5 : usb-storage 2-1:1.0
    [  113.468180] usbcore: registered new interface driver usb-storage
    [  113.468182] USB Mass Storage support registered.
    [  114.473105] scsi 5:0:0:0: Direct-Access     Seagate  Expansion Desk   070B PQ: 0 ANSI: 6
    [  114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [  114.475089] sd 5:0:0:0: [sdb] Write Protect is off
    [  114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
    [  114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [  114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [  114.501649]  sdb: sdb1
    [  114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
    [  114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
    [  116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
    [  116.804413] EXT4-fs (sdb1): group descriptors corrupted!
    ...

    So I fired up my favorite partition manager - gparted - and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

    e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925).

    I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago; this is what htop says about it now (2012-11-06_1900):

    PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
    3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk is just that slow, because maybe it's damaged. I think these outputs tell me that this is not the case here:

    kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
    /dev/sdb:
     Timing cached reads:   3562 MB in  2.00 seconds = 1783.29 MB/sec
     Timing buffered disk reads:  82 MB in  3.01 seconds =  27.26 MB/sec

    kaefert@blechmobil:~$ sudo hdparm /dev/sdb
    /dev/sdb:
     multcount     =  0 (off)
     readonly      =  0 (off)
     readahead     = 256 (on)
     geometry      = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop, or this:

    kaefert@blechmobil:~$ iostat -x
    Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              14,24   47,81   14,63    0,95    0,00   22,37
    Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
    sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    Now I researched a little on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

    kaefert@blechmobil:~$ sudo strace -p3704
    lseek(4, 41026998272, SEEK_SET)         = 41026998272
    write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
    lseek(4, 48404766720, SEEK_SET)         = 48404766720
    read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
    lseek(4, 41027002368, SEEK_SET)         = 41027002368
    write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
    lseek(4, 48404770816, SEEK_SET)         = 48404770816
    read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
    lseek(4, 41027006464, SEEK_SET)         = 41027006464
    write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
    lseek(4, 48404774912, SEEK_SET)         = 48404774912
    read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
    ^CProcess 3704 detached

    Around 16 of these lines appear every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.

    And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3-terabyte disk, that would give me 134 days to go, if the speed stays constant and e2fsck scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer getting this disk up and running again without formatting it anew. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this:

    lseek(4, 1419860611072, SEEK_SET)       = 1419860611072
    read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
    lseek(4, 43018145792, SEEK_SET)         = 43018145792
    write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
    lseek(4, 1419860615168, SEEK_SET)       = 1419860615168
    read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
    lseek(4, 43018149888, SEEK_SET)         = 43018149888
    write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
    lseek(4, 1419860619264, SEEK_SET)       = 1419860619264
    read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
    lseek(4, 43018153984, SEEK_SET)         = 43018153984
    write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseek lines before the reads, like 1419860619264, are already a lot bigger - standing for 1.29 terabytes, if those numbers are bytes - so it doesn't seem to be linear progress on a big scale. Maybe there are only some areas that need work, with big gaps in between them.

    UPDATE2: Okay, big disappointment - the numbers are back to very small again (2012-11-07_0720):

    lseek(4, 52174548992, SEEK_SET)         = 52174548992
    read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096
    lseek(4, 46603526144, SEEK_SET)         = 46603526144
    write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096

    So either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong.

    UPDATE3: Since it's mentioned at http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can run testdisk while e2fsck is running, I tried that, though without much success. When asking testdisk to display the data of my partition, this is what I get:

    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
      1 P Linux          0   4  5 45600  40  8  732566272
    Can't open filesystem. Filesystem seems damaged.

    And this is what strace currently gives me (2012-11-07_1030):

    lseek(4, 212460343296, SEEK_SET)        = 212460343296
    read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096
    lseek(4, 47347830784, SEEK_SET)         = 47347830784
    write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096

    (times are in CET)
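    Two e2fsck options I only discovered after starting, noted here for completeness - I can't say how much they would have helped in this exact case: -C 0 prints a progress bar to the terminal, and a scratch_files section in /etc/e2fsck.conf makes e2fsck spill its tables to disk instead of RAM on huge filesystems:

    # /etc/e2fsck.conf
    [scratch_files]
    directory = /var/cache/e2fsck

    # then:
    sudo mkdir -p /var/cache/e2fsck
    sudo e2fsck -C 0 -f -y -v /dev/sdb1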

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY: (If you don't like to read much, the question is down below :) )

    Where I work we have two HP RP2470 servers: same hardware, same number of hard drives, same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years, and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not so simple): I have to make the other server exactly the same. However, the old version of the OS (UX 11.00 is history now) and the old software running on it have made my task almost impossible.

    On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. Then, when I load the tape on the backup server, the tape loads successfully (no errors, no warnings), but on the next restart it fails to load the OS :S and drops into HP's ISL prompt.

    THE QUESTION: Is there an alternate way to create a clone of the UNIX system?

    The environment is:
    1. 2x HP RP2470 servers (non-Intel), same hardware, same number of HDDs (two in each), same everything.
    2. OS running: HP-UX 11.00

    The production server has to be cloned without downtime - sadly :( - as I hope that they will reconsider on this one.

    For example (like on Windows platforms), if you copy an entire HDD with Windows on it onto another HDD, and then put that HDD in another PC, it will still work, as long as the hardware is the same. Can I do something like that with a UNIX system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load the HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?

    UPDATE: 26.01.2012
    NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recovery logs from the Ignite tape... someone with more experience might notice something:

    --- READING CONTENTS OF THE IGNITE TAPE ---
    --- OUTPUT OMITTED ---
    ...
    x ./configure3, 413696 bytes, 808 tape blocks
    x ./monitor_bpr, 20480 bytes, 40 tape blocks
    * Download_mini-system: Complete
    * Loading_software: Begin
    * Installing boot area on disk.
    * Enabling swap areas.
    * Backing up LVM configuration for "vg00".
    * Processing the archive source (Recovery Archive).
    * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive).
    * Positioning the tape (/dev/rmt/0mn).
    * Archive extraction from tape is beginning. Please wait.
    * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive).
    * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l".
    * Running in recovery mode (os_arch_post_l).
    * Running the ioinit command ("/sbin/ioinit -c")
    * Creating device files via the insf command.
    insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0
    insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0
    insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0
    insf: Installing special files for stape instance 0 address 0/0/1/0.3.0
    insf: Installing special files for btlan instance 0 address 0/0/0/0
    insf: Installing special files for btlan instance 1 address 0/2/0/0
    insf: Installing special files for pseudo driver dlpi
    insf: Installing special files for pseudo driver kepd
    insf: Installing special files for pseudo driver framebuf
    insf: Installing special files for pseudo driver sad
    * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions.
    * Constructing the bootconf file.
    * Setting primary boot path to "0/0/1/1.15.0".
    * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload".
    * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload".
    NOTE: tlinstall is searching filesystem - please be patient
    NOTE: Successfully completed
    * Loading_software: Complete
    * Build_Kernel: Begin
    NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed.
    * Build_Kernel: Complete
    * Boot_From_Client_Disk: Begin
    * Rebooting machine as expected.
    NOTE: Rebooting system.
    sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty
    Closing open logical volumes...
    Done
    Console reset done.
    Boot device reset done.

    ********** VIRTUAL FRONT PANEL **********
    System Boot detected
    *****************************************
    LEDs: RUN   ATTENTION  FAULT  REMOTE  POWER
          FLASH OFF        OFF    ON      ON
    LED State: Running non-OS code. (i.e. Boot or Diagnostics)
    ...
    --- SERVER IS PERFORMING POST SEQUENCE HERE ---
    --- OUTPUT OMITTED ---
    ...
    *****************************************
    ************ EARLY BOOT VFP *************
    End of early boot detected
    *****************************************
    Firmware Version  43.50
    Duplex Console IO Dependent Code (IODC) revision 1
    ------------------------------------------------------------------------------
    (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved
    ------------------------------------------------------------------------------
    Processor   Speed    State           CoProcessor State  Cache Size
    Number                               State              Inst    Data
    ---------  --------  --------------  -----------------  ------------
    0          650 MHz   Active          Functional         750 KB  1.5 MB
    1          650 MHz   Idle            Functional         750 KB  1.5 MB
    Central Bus Speed (in MHz)  :  120
    Available Memory            :  2097152 KB
    Good Memory Required        :  16140 KB
    Primary boot path:    0/0/1/1.15
    Alternate boot path:  0/0/2/1.15
    Console path:         0/0/4/1.643
    Keyboard path:        0/0/4/0.0
    Processor is starting autoboot process.
    To discontinue, press any key within 10 seconds.
    10 seconds expired.
    Proceeding...
    Trying Primary Boot Path
    ------------------------
    Booting...
    Boot IO Dependent Code (IODC) revision 1
    HARD Booted.
    ISL Revision A.00.38  OCT 26, 1994
    ISL booting  hpux
    ISL>
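    (If I end up going the raw-copy route, this is the kind of thing I have in mind - the device files below are placeholders for the actual disk paths on these boxes, and the source disk would really need to be quiet, e.g. a split-off copy of vg00's mirror, for the copy to be consistent:)

    # with the source disk (or its split mirror) attached to the spare box,
    # copy raw device to raw device:
    dd if=/dev/rdsk/c0t15d0 of=/dev/rdsk/c2t15d0 bs=1024k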

    Read the article

  • How do I reset/update my BIOS for Optiplex GX280?

    - by Sam Langlhey
    So far this has been a nightmare for me, which has been frustrating me constantly. I am using a Dell Optiplex GX280 with Windows XP Home Edition, running BIOS version A04. Recently, I rebooted the PC to find out that it's not booting. It gets to the Windows boot-up screen with the progress bar, only to restart and go through the same process again, over and over.

    Frustrated as I am, I inserted the Windows recovery CD to either repair or reinstall the operating system, only to find out that was not possible. I hit F8 to get the boot options; each of the boot options that I selected gave me an error saying: "Selected boot device is not available." Right after that, I went into the BIOS settings and ran a diagnostic test, which recognized all the boot devices onboard.

    So now I cannot repair or reinstall Windows XP, because the system is not booting from any of the boot devices. The surprise came when I removed the hard drive from the computer and loaded it into another computer successfully; that's right, there is nothing wrong with the hard drive. After that I was totally puzzled.

    I found a few pointers online saying that the BIOS start-up block might be corrupted and that I might need to flash/update the BIOS. I found detailed instructions on how to create a boot-up disk by downloading the BIOS firmware from the manufacturer's website. I did exactly as instructed below:

    1. Download the latest (or your chosen) version of the BIOS file for your computer or motherboard from the manufacturer's support site.
    2. Rename the downloaded file to AMIBOOT.ROM.
    3. Copy the file to a floppy disk.
    4. Insert the floppy disk into the floppy drive.
    5. Turn on the system.

    After I did that and powered on the PC to boot from the floppy drive, it gave me this error message: "Non-System Disk or Disk Error. Replace and Strike any key when ready." I did all that, and I kept pressing [Ctrl]+[Home] to force it, but it did not yield any satisfying result.

    Desperate as I am, my next attempt is to try the instructions below. Since I want to be ready in the event it does not work, do you have any solution that you can provide? Please keep in mind that I cannot boot from any of the devices at this moment. My only hope now is to come up with a solution that works through the floppy drive, since that's the only drive unaffected. Thank you very much for your advice and support in advance.

    To create a Windows startup disk: insert a floppy disk into the drive of a similarly configured, working Windows XP system, launch My Computer, right-click the floppy disk icon, and select the Format command from the context menu. When you see the Format dialog box, leave all the default settings as they are and click the Start button. Once the format operation is complete, close the Format dialog box to return to My Computer, double-click the drive C icon to access the root directory, and copy the following three files to the floppy disk:

    Boot.ini
    NTLDR
    Ntdetect.com
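    One more thing I intend to try, sketched out below. The AMIBOOT.ROM recovery trick assumes a generic AMI BIOS, and I'm not certain the GX280's Dell BIOS honors it; Dell seems to ship its updates as DOS executables instead, so the flash file name below is just a placeholder for whatever Dell's support site actually serves for the GX280:

    :: on the working XP machine, format the floppy with
    :: "Create an MS-DOS startup disk" checked, then:
    copy GX280A04.EXE A:\
    :: boot the Optiplex from that floppy and run:
    A:\> GX280A04.EXE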

    Read the article

  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the Server Fault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing.

    A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID, and in fact the controller was still showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started to happen (from disk 7). Eventually both drives were showing as good.

    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had been verifying itself constantly, over and over. Regardless, I began to rebuild the database, which took about 19 hours - it's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted.

    I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped. They suspect that drive 7 was always at fault, and drive 6 had always been good. I want to know how often a RAID controller drops the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out; I just assumed the RAID controller knew what it was talking about.

    I think my plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on what the correct procedure for swapping drives is, and how often failures like this happen. If anyone would like more information, I'd be happy to provide it. Thanks in advance.

    Oh, some more information: I'm running CentOS 5.3, with two RAID arrays - a simple RAID 1 for the OS, and the RAID 10 for the database. Both arrays are on different controllers. The RAID 10 was made of 10 identical ST3640323AS drives, until I swapped in a SAMSUNG HD103SJ last month.
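    For the record, this is the smartctl invocation I should have used from the start - smartctl can talk to disks behind a 3ware controller with -d 3ware,N; the twa device node and port numbers below are assumptions for my setup:

    # SMART data for the physical disks on ports 6 and 7:
    smartctl -a -d 3ware,6 /dev/twa0
    smartctl -a -d 3ware,7 /dev/twa0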

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS something else... ?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them.

    I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:

    1. 500 GB is not really big enough (some projects are larger).
    2. It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now, and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.).

    So, I need a way to combine hard drives such that the failure of a single drive doesn't mean complete data loss, and such that users see only a single share on each server. I've done Linux software RAID 5 and had a bad experience with it, but would try it again. LVM looks OK, but it seems like no one uses it. ZFS seems interesting, but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users?

    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together.

    My prior experience with RAID 5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again, but was left with a feeling that I was adding an unnecessary additional point of failure to the system.

    I haven't used RAID 10, but we are on commodity hardware and the maximum number of data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small. (Still an option for at least one server, however.)

    I have no experience with LVM, and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and lose only whichever files had a portion stored on that drive (while storing most files on a single drive only), we could even live with that.

    But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though.

    On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
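    For concreteness, the md flavour of this - four drives into one RAID 10 device exported as a single share - would look roughly like the following; the drive names are assumptions, and I'd still want to test the failure/rebuild path before trusting it:

    # mirror+stripe across the four 500 GB drives:
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0
    mount /dev/md0 /srv/media    # then export /srv/media as the one Samba share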

    Read the article

  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I recently (a couple of weeks ago) downloaded the complete series of Seinfeld, all in varying file types. I watched the episodes in sequence by season and air date, and all was well. All of the files played fine with my media player of choice ("BS Player"), and once I had finished, I went on to watch some other TV I had previously downloaded (the U.S. series of "The Office"), and then some films and music over the following weeks (keep in mind all of these files are on the same hard drive).

    More recently, I went back to watching Seinfeld. The episodes played well, as they did before - with the exception of a few in season 7. I have not tested every episode in the season, but upon inspection the majority of them are experiencing this problem; the problem being, simply, that they don't open! BS Player says that the files are either damaged or that the codecs to play them are not on my computer - however, I am certain that the files DO have the codecs, and I am pretty sure they are NOT DAMAGED either.

    I have played the files with other players too (such as VLC, Media Player Classic, and Windows Media Player), only to the same result: they don't open. Seemingly the only way I can differentiate between a damaged file and a non-damaged file is the way the icon shows in Windows Explorer. For example, the first image below shows how Explorer displays the information of a non-damaged file, and the second shows how a damaged file appears.

    The most disturbing and confusing part, though, is the last episode in the season. It opens - but not as a video. Instead, it opens as a 1-hour, 16-minute, 35-second audio file! The file plays a song for the first 4 or so minutes, and is then pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped-up sounds and beeping noises plays. I do not recognise the song at the beginning of the file, but by the sound of it, it is by the artist "Mr. Oizo", whose complete works I downloaded a couple of weeks before now; a bit before that, I had finished downloading season 9 (not affected by these problems) of Seinfeld. I'd also like to note that the file I told of earlier (which played audio instead of video) reads as the same size as the other files in the season (around 175 MB) and also opens as a video clip.

    I have NEVER experienced any of these problems in the past, and they seem to be affecting only the one season of my downloaded TV. The problems have not arisen with any of the other files on my hard drive, or any of the files downloaded around or after the time I downloaded season 7 of Seinfeld - at least, not to my noticing. I use the hard drive these files are located on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB 7500 RPM, connected to the computer via USB 2.0.

    EDIT! I noticed that this problem is now occurring with season 9 of Seinfeld - and, presumably, other files on the drive I have yet to check. Please, if you have ANY IDEA AT ALL on what may be causing this or how to fix it, do tell me!
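    In case anyone asks: these are the first things I plan to run myself, from an elevated Command Prompt - the drive letter is assumed, and /r makes chkdsk scan for bad sectors, so it can take hours on a 1.5 TB disk:

    chkdsk E: /f /r
    :: quick SMART verdict, if the MyBook's USB bridge passes it through (it often doesn't):
    wmic diskdrive get model,status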

    Read the article

< Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >