Search Results

Search found 11032 results on 442 pages for 'opensuse 12 2'.


  • How to avoid intrusion detection/anti-spoofing issues on a SonicWall TZ-series FW

    - by Ian
    We have a SonicWall TZ-series FW with two internet service providers connected. One of the providers has a wireless service which works a bit like an Ethernet switch, in that we have an IP with a /24 subnet and the gateway is .1. All other clients on the same subnet (say 195.222.99.0) have the same .1 gateway - this is important, read on. Some of our clients are also on the same subnet.

    Our config:

        X0 : LAN
        X1 : 89.90.91.92
        X2 : 195.222.99.252/24 (GW 195.222.99.1)

    X1 and X2 are not connected, other than both being connected to the public Internet.

    Client config:

        X1 : 195.222.99.123/24 (GW 195.222.99.1)

    What fails, what works:

        Traffic 195.222.99.123 (client) <- 89.90.91.92 (X1)    : Spoof alert
        Traffic 195.222.99.123 (client) <- 195.222.99.252 (X1) : OK - no spoof alert

    I have several clients with IPs in the 195.222.99.0 range and all provoke identical alerts. This is the alert I see on the FW:

        Alert  Intrusion Prevention  IP spoof dropped  195.222.99.252, 21475, X1  89.90.91.92, 80, X1  MAC address: 00:12:ef:41:75:88

    Anti-spoofing is switched off on my FW (network > mac-ip-anti-spoofing > config for each interface) for all ports. I can provoke the alerts by telnetting to a port on X1 from the clients. You can't argue with the logic - this is suspicious traffic: X1 is receiving traffic with a source IP which corresponds to X2's subnet.

    Does anyone know how I can tell the FW that packets with a src subnet of 195.222.99.0 can legitimately appear on X1? I know what's going wrong - I've seen the same thing before - but with higher-end FWs you can avoid this with a few extra rules. I can't see how to do this here.

    And before you ask why we're using this service provider: they give us 3ms (yep, 3ms, that's not an error) delay between routers.


  • RTL8192SU Linux Issue Installing Driver

    - by s32ialx
    OK, I've read tons of forums of people getting the onboard RTL8192SE working and the RTL8192SU working (the difference is U = USB); they are both N, and I have both. The machine is a Toshiba L500D-00T, pre-installed with Win Vista x64 HP, and I have obtained the free Win7 x64 HP upgrade. The onboard wifi card can't hold a stable connection for more than 20 minutes in Windows, but the USB one is amazing.

    Now, the problem: I tried both Ubuntu and Mandriva with no resolution. The onboard device is detected and actually SHOWS that it's there, but no wireless networks are detected - it claims no SSIDs are broadcasting, which I know is wrong since I'm running a 2Wire Bell DSL modem with built-in wifi and a Linksys WRT54G w/ DD-WRT firmware, and both are broadcasting fine.

    Why don't I use the USB one? In the hardware device manager in Mandriva it shows up as unknown, but shows that it's Realtek with an 8192 chipset - there's no option for a driver install, though. And when I do a make in a terminal I get this error and have no clue what it means:

        [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]# make
        make: *** /lib/modules/2.6.31.12-desktop-3mnb/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        [root@John-PC rtl8192se_linux_2.6.0010.1020.2009_64bit]#

    Any help appreciated. And just in case: I'm currently running Mandriva Spring 2010 Free.
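
    A make error of the form "/lib/modules/<kernel>/build: No such file or directory" usually just means the kernel headers/source for the running kernel aren't installed, so the driver has nothing to compile against. A hedged sketch of the usual fix on Mandriva (the package name is an assumption - list what's actually available for your kernel flavour first):

        # Which kernel is running?
        uname -r                          # e.g. 2.6.31.12-desktop-3mnb

        # Install the matching kernel development package (name assumed;
        # list candidates with: urpmq kernel | grep devel)
        urpmi kernel-desktop-devel

        # /lib/modules/$(uname -r)/build should now resolve
        ls -l /lib/modules/$(uname -r)/build

        # Retry the driver build
        make && make install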


  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
          /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks
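
    For the record, shrinking has to happen from the top of the stack down: filesystem first, then the LUKS mapping, then the LV. A minimal sketch, assuming the mapping stays named "backup", everything is unmounted and backed up first, and the intermediate 95G figure is just a safety margin below the final size:

        # 1. Unmount, check, and shrink the filesystem below the target
        sudo umount /srv/backup
        sudo e2fsck -f /dev/mapper/backup
        sudo resize2fs /dev/mapper/backup 95G

        # 2. Shrink the LV to the final size (the LUKS payload must fit inside)
        sudo lvreduce --size 100G /dev/moab/backup

        # 3. With no --size argument, cryptsetup fits the active mapping
        #    to the new device size
        sudo cryptsetup resize backup

        # 4. Grow the filesystem back out to fill the mapping and recheck
        sudo resize2fs /dev/mapper/backup
        sudo e2fsck -f /dev/mapper/backup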


  • Easyphp Web Setup

    - by Dominique
    I've tried to set up EasyPHP locally and make it visible from the Web via DynDNS, which I've already done successfully many times before, but now it just doesn't work - maybe I've forgotten something... (The "server" is a common workstation.)

    Here is what I have done:

    1) Installed EasyPHP (with an index.php/html file in the WWW folder)
    2) Changed the port in the config to port 80
    3) Forwarded port 80 to the server IP in my router configuration
    4) Added the server to the router DMZ

    I also tried removing the antivirus/firewall. I've installed PortListener, pointed it at port 80, and when I access "myname.dyndns.com" it says:

        Client connected
        GET / HTTP/1.1
        Host: xyz.dyndns-remote.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

    So the server is accessible from the Web and receives the connection successfully, but my browser says that the connection failed and shows nothing...


  • Windows Terminal Server: occasional memory violation for applications

    - by syneticon-dj
    On a virtualized (ESXi 4.1) Windows Server 2008 SP2 32-bit machine which is used as a terminal server, I occasionally (approximately 1-3 event log entries a day) see applications fail with an 0xc0000005 error - apparently a memory access violation. The problem seems quite random and is only poorly reproducible - applications may run for hours, fail with 0xc0000005 and restart quite fine, or just throw the access violation at startup and start flawlessly at the second attempt.

    The names of executables, modules and offset addresses vary, although a single executable tends to fail with the same modules and the same memory offset addresses (like "OUTLOOK.EXE" repeatedly failing on module "olmapi32.dll" with the offset "0x00044b7a") - even across multiple users' logons and with several days passing without a single failure in between. The offset addresses seem to change across reboots, however. Only certain executables seem affected by the problem, although I may simply not be seeing a sufficient number of application runs from the other ones.

    I first suspected a possible problem with the physical machine's RAM, but ruled this out as a rather unlikely cause - the memory comes with ECC and I've already moved the virtual machine across hosts several times, without any perceptible change. I've seen that DEP was enabled in "OptOut" mode on this machine:

        C:\Users\administrator>wmic OS Get DataExecutionPrevention_SupportPolicy
        DataExecutionPrevention_SupportPolicy
        3

    and tried changing the policy to OptIn via startup options:

        bcdedit.exe /set {current} nx OptIn

    but have yet to see any effect - I also would expect Outlook 12 or Adobe Reader 9 (both affected applications) to play well with DEP. Any other ideas why the apps may be failing?


  • Weird routing issue

    - by Joel Coel
    I'm having some weird internet problems on campus. I know it's something simple, but it's a case where I need another set of eyes. I think I can explain the problem best by posting a tracert:

        Tracing route to google.com [74.125.45.147]
        over a maximum of 30 hops:

          1     3 ms     3 ms     3 ms  192.168.8.1
          2     1 ms     1 ms     1 ms  elissaemily-pc.york.edu [192.168.10.5]
          3     2 ms     2 ms     2 ms  rrcs-76-79-19-33.west.biz.rr.com [76.79.19.33]
          4    31 ms     3 ms     2 ms  ge-1-1-0.lnclne00-mx41.neb.rr.com [76.85.220.109]
          5    20 ms    17 ms    17 ms  ge-7-3-0.chcgill3-rtr1.kc.rr.com [76.85.220.137]
          6    20 ms    20 ms    19 ms  ae-5-0.cr0.chi30.tbone.rr.com [66.109.6.112]
          7    19 ms    19 ms    24 ms  ae-1-0.pr0.chi10.tbone.rr.com [66.109.6.155]
          8    26 ms    24 ms    24 ms  74.125.48.109
          9    23 ms    24 ms    21 ms  216.239.46.246
         10    39 ms    39 ms    55 ms  209.85.242.215
         11    39 ms    39 ms    39 ms  209.85.254.243
         12    39 ms    40 ms    96 ms  209.85.253.145
         13    39 ms    39 ms    39 ms  yx-in-f147.1e100.net [74.125.45.147]

        Trace complete.

    Note the second entry in there. Not only is the host name a student's computer, but the IP address doesn't exist: DHCP shows that host as having a different address, and you can't ping 192.168.10.5. Yet somehow it's routing packets for us (and not very well, either - things are slow right now). The basic network routing table looks like this:

        Destination     Subnet Mask      Gateway
        ---------------------------------------
        Default Route   --               10.1.1.5 (our firewall)
        10.0.0.0        255.0.0.0        --
        192.168.8.0     255.255.252.0    --
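
    One hedged way to chase this down - a diagnostic, not a fix: a hop that answers as a student PC smells like a rogue router (Internet connection sharing or similar) replying to ARP for addresses it shouldn't. Finding the MAC behind the mystery hop usually identifies the physical machine:

        # On the router at hop 1 (or any affected client): which MAC is
        # answering for the mystery hop?
        arp -a | grep 192.168.10.5

        # Then look that MAC up in the switch forwarding table to find the
        # physical port (Cisco-style syntax; your switch CLI may differ)
        show mac address-table address <mac-from-arp>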


  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
    I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf:

        LoadModule php5_module libexec/apache2/libphp5.so

    I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file - i.e. instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this:

        [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations

    so it looks like PHP 5 is loading properly. I'd like to know either: How do I fix this? or: How do I reinstall apache2 so that it's like I just installed the OS? Thanks in advance.

    Update: @Zayne - the end of my httpd.conf has:

        Include /private/etc/apache2/other/*.conf

    and I have a file /etc/apache2/other/php.conf with the contents:

        <IfModule php5_module>
            AddType application/x-httpd-php .php
            AddType application/x-httpd-php-source .phps
            <IfModule dir_module>
                DirectoryIndex index.html index.php
            </IfModule>
        </IfModule>

    @Zayne: I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get:

        /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument
        httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName
        Syntax OK

    Furthermore I decided to try apachectl -M, which shows all loaded modules. Most importantly, the list of loaded modules includes:

        php5_module (shared)

    Since the module is being loaded, it seems like the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?
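
    One thing worth doing before digging further into the config: take the browser (and any cached "download" response) out of the equation by hitting Apache directly. If PHP runs via curl, the original symptom is a stale cache rather than a server problem. A hedged check - the DocumentRoot path is the OS X default and an assumption:

        # Drop a trivial test script into the default DocumentRoot
        echo '<?php phpinfo();' | sudo tee /Library/WebServer/Documents/test.php

        # Ask Apache directly, bypassing the browser cache entirely
        curl -i http://localhost/test.php   # HTML back = PHP works; raw source = handler problem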


  • Secure iptables config for Samba

    - by Eric
    I'm trying to set up an iptables config such that outbound connections from my CentOS 6.2 server are allowed ONLY if they are of state ESTABLISHED. Currently, the following setup is working great for sshd, but all the Samba rules get totally ignored for a reason I cannot figure out.

    iptables Bash script to set up ALL rules:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 137:138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p udp --sport 137:138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -d 10.1.1.0/24 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT

        # Enable these rules
        service iptables restart

    iptables rule list after running the above script:

        [root@repoman ~]# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:22222 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP)
        target     prot opt source               destination

        Chain OUTPUT (policy DROP)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere            tcp spt:22222 state ESTABLISHED

    Ultimately, I'm trying to restrict Samba the same way I have done for sshd. In addition, I'm trying to restrict connections to the following IP address range: 10.1.1.12 - 10.1.1.19. Can you guys offer some pointers or possibly even a full-blown solution? I've read man iptables quite extensively, so I'm not sure why the Samba rules are getting thrown out. Additionally, removing the -s 10.1.1.0/24 flags doesn't change the fact that the rules get ignored.
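
    To restrict the Samba rules to just 10.1.1.12 - 10.1.1.19 rather than the whole /24, iptables' iprange match expresses the range directly. A hedged sketch - and note one likely culprit for the vanishing rules: ending the script with service iptables restart reloads the previously *saved* ruleset from /etc/sysconfig/iptables, throwing away everything the script just added, so saving instead of restarting may be the real fix:

        # Allow incoming Samba (NetBIOS + SMB) only from 10.1.1.12-10.1.1.19
        iptables -A INPUT  -i eth0 -p udp --dport 137:138 \
            -m iprange --src-range 10.1.1.12-10.1.1.19 \
            -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137:138 \
            -m iprange --dst-range 10.1.1.12-10.1.1.19 \
            -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --dport 139 \
            -m iprange --src-range 10.1.1.12-10.1.1.19 \
            -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 139 \
            -m iprange --dst-range 10.1.1.12-10.1.1.19 \
            -m state --state ESTABLISHED -j ACCEPT

        # Persist the running ruleset so the next restart keeps it
        service iptables save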


  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7,200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device.

    Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool.

    Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server.

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the bugzilla workarounds:

        Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled.

    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.


  • How to disable 3rd party cookies in Chrome?

    - by David Nordvall
    I have both the "stop websites from storing local data" and the "block all third party cookies without exception" settings enabled in Chrome 12 (I'm not sure what the exact names of these settings are in English, as I run Chrome with Swedish localization). I do however have two problems.

    My first problem is that when I'm visiting one of my local newspaper's sites (and surely others), cookies from www.facebook.com are allowed for some reason. I suspect the reason is that I have added an exception for the www.facebook.com domain, but as the setting "block all third party cookies without exception" implies, that shouldn't matter.

    My second problem is that if I check what cookies are stored on my computer after browsing for a while, I have tons of cookies that are not on my whitelist, primarily from ad services. My expectation from enabling the above-mentioned settings was that only cookies fulfilling the two following requirements would be accepted:

    - the cookies must be from the domain in my address bar
    - the cookies must be from a domain on my whitelist

    Apparently this isn't the case. The question is, have I completely misunderstood the settings or is this a bug? And, either way, is there a way to accomplish my desired behavior?


  • ERROR with rpm_check_debug vs depsolve

    - by Frank Thornton
        Transaction Summary
        ==========================================================================================
        Install       9 Package(s)
        Upgrade     227 Package(s)
        Remove        1 Package(s)

        Total size: 252 M
        Downloading Packages:
        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        libasound.so.2()(64bit) is needed by libgcj-4.4.7-4.el6.x86_64
        libasound.so.2(ALSA_0.9)(64bit) is needed by libgcj-4.4.7-4.el6.x86_64
        ** Found 15 pre-existing rpmdb problem(s), 'yum check' output follows:
        alsa-lib-devel-1.0.22-3.el6.x86_64 has missing requires of alsa-lib = ('0', '1.0.22', '3.el6')
        alsa-lib-devel-1.0.22-3.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        alsa-utils-1.0.22-5.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc8)(64bit)
        frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        gstreamer-plugins-base-0.10.29-2.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        libgcj-4.4.7-3.el6.x86_64 has missing requires of libasound.so.2()(64bit)
        libgcj-4.4.7-3.el6.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2()(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2(ALSA_0.9)(64bit)
        1:qt-x11-4.6.2-26.el6_4.x86_64 has missing requires of libasound.so.2(ALSA_0.9.0rc4)(64bit)
        Your transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx-2013-12-23-22-364infzT.yumtx
        root@www1 [~]#

    I did some research, and this seems to be due to a 32-bit binary trying to install itself, or a broken repo?

        root@www1 [~]# yum repolist
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
         * base: centos.mirror.lstn.net
         * extras: mirror.ash.fastserv.com
         * updates: ftp.usf.edu
        repo id      repo name                                          status
        base         CentOS-6 - Base                                    6,284+83
        dag          Dag RPM Repository for Red Hat Enterprise Linux    4,559+91
        extras       CentOS-6 - Extras                                  14
        updates      CentOS-6 - Updates                                 247+39
        repolist: 11,104

    Now I disabled the epel and rpmforge repos and still ended up with the same issues. Ideas?
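
    The rpmdb complaints all point the same way: the 64-bit alsa-lib package has gone missing while other packages still require it. A hedged way to confirm and repair before re-running the transaction (the package operations here are suggestions, not a guaranteed fix):

        # Re-list the pre-existing rpmdb problems yum is complaining about
        yum check

        # From yum-utils: summarize dependency problems
        package-cleanup --problems

        # If the 64-bit alsa-lib really is absent, put it back
        rpm -q alsa-lib
        yum reinstall alsa-lib    # or: yum install alsa-lib.x86_64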


  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array.

    I've had two incidents where performance tanked suddenly, leaving the VM guests and Nexenta SSH/Web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure, however the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

                NAME                        STATE     READ WRITE CKSUM
                vol1                        ONLINE       0     0     0
                  mirror-0                  ONLINE       0     0     0
                    c8t5000C50031B94409d0   ONLINE       0     0     0
                    c9t5000C50031BBFE25d0   ONLINE       0     0     0
                  mirror-1                  ONLINE       0     0     0
                    c10t5000C50031D158FDd0  ONLINE       0     0     0
                    c11t5000C5002C823045d0  ONLINE       0     0     0
                  mirror-2                  ONLINE       0     0     0
                    c12t5000C50031D91AD1d0  ONLINE       0     0     0
                    c2t5000C50031D911B9d0   ONLINE       0     0     0
                  mirror-3                  ONLINE       0     0     0
                    c13t5000C50031BC293Dd0  ONLINE       0     0     0
                    c14t5000C50031BD208Dd0  ONLINE       0     0     0
                  mirror-4                  ONLINE       0     0     0
                    c15t5000C50031BBF6F5d0  ONLINE       0     0     0
                    c16t5000C50031D8CFADd0  ONLINE       0     0     0
                  mirror-5                  ONLINE       0     0     0
                    c17t5000C50031BC0E01d0  ONLINE       0     0     0
                    c18t5000C5002C7CCE41d0  ONLINE       0     0     0
                logs
                  c19t0d0                   ONLINE       0     0     0
                cache
                  c6t5001517959467B45d0     FAULTED      2   542     0  too many errors
                spares
                  c7t5000C50031CB43D9d0     AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well).

    Edit - What's the current best-choice SSD for L2ARC cache applications these days?
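
    Cache devices are the one vdev class that can be removed from a pool online, so once the system is responsive again the faulted L2ARC device can at least be dropped without another reboot. A hedged sketch using the device name from the status output above (the replacement device name is a placeholder):

        # Drop the faulted L2ARC device (cache vdevs are removable)
        zpool remove vol1 c6t5001517959467B45d0

        # Confirm the cache section is gone and the pool is healthy
        zpool status vol1

        # Later, add the replacement SSD back as cache
        zpool add vol1 cache c6t<new-device>d0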


  • NRPE unable to read output, but why?

    - by ticktockhouse
    I have this problem with NRPE: all the stuff I've found so far on the net seems to point me at things I've already tried.

        # /usr/local/nagios/plugins/check_nrpe -H nrpeclient

    gives NRPE v2.12 as expected. Running the command by hand (as defined in nrpe.cfg on "nrpeclient") gives the expected response:

        command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge

        "Expected response"

    But if I try to run the command from the Nagios server I get the following:

        # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
        NRPE: Unable to read output

    Can anyone think of anywhere else I might have made a mistake with this? I've done the same thing on multiple other servers with no problem. The only difference I can think of is that this box is RHEL 5 based, whereas the others are RHEL 4 based. The two bits above that I've tested are what most people seem to suggest when others have had this problem.

    I should mention that I get a weird error in the logs when I restart nrpe:

        nrpe[14534]: Unable to open config file '/usr/local/nagios/etc/nrpe.cfg' for reading
        nrpe[14534]: Continuing with errors...
        nrpe[14535]: Starting up daemon
        nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
        nrpe[14535]: Listening for connections on port 5666
        nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck

    even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it's talking about further down.
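
    "NRPE: Unable to read output" generally means the command ran but produced nothing on stdout, and the most common reason is that it behaves differently under the daemon's user than under root. A hedged set of checks, assuming the daemon runs as user nagios:

        # Run the exact command as the NRPE user, not as root
        sudo -u nagios /usr/lib/nagios/plugins/additional/check_openmanage \
            -s -e -b ctrl_driver=0 bat_charge; echo "exit=$?"

        # Make sure the plugin and config are readable/executable by that user
        ls -l /usr/lib/nagios/plugins/additional/check_openmanage
        ls -l /usr/local/nagios/etc/nrpe.cfg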


  • Parsing the output of "uptime" with bash

    - by Keek
    I would like to save the output of the uptime command into a CSV file in a Bash script. Since the uptime command has different output formats based on the time since the last reboot, I came up with a pretty heavy solution based on case, but there is surely a more elegant way of doing this.

    uptime output:

        8:58AM  up 15:12, 1 user, load averages: 0.01, 0.02, 0.00

    desired result:

        15:12,1 user,0.00 0.02 0.00,

    current code:

        case "`uptime | wc -w | awk '{print $1}'`" in   # Count the number of words in the uptime output
        10) # e.g.: 8:16PM up 2:30, 1 user, load averages: 0.09, 0.05, 0.02
            echo -n `uptime | awk '{ print $3 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $8,$9,$10 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        12) # e.g.: 1:41pm up 105 days, 21:46, 2 users, load average: 0.28, 0.28, 0.27
            echo -n `uptime | awk '{ print $3,$4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $6,$7 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $10,$11,$12 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        13) # e.g.: 12:55pm up 105 days, 21 hrs, 2 users, load average: 0.26, 0.26, 0.26
            echo -n `uptime | awk '{ print $3,$4,$5,$6 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $7,$8 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $11,$12,$13 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        esac
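
    One way to sidestep the word-count case entirely is to anchor on the parts of the line that never move: everything between " up " and the user count is the uptime, and the load averages always follow "load average(s):". A hedged alternative sketch, checked against the three formats above but not against every uptime variant out there:

        uptime | awk '{
            sub(/.* up +/, "")                    # drop everything up to " up "
            match($0, /[0-9]+ users?/)            # locate the "N user(s)" token
            users = substr($0, RSTART, RLENGTH)
            up    = substr($0, 1, RSTART - 3)     # uptime = text before ", N user"
            loads = $0
            sub(/.*load averages?: +/, "", loads) # keep only the load averages
            gsub(/,/, "", loads)                  # "0.01, 0.02, 0.00" -> "0.01 0.02 0.00"
            printf "%s,%s,%s,\n", up, users, loads
        }'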


  • Solaris: detect hotswap SATA disk insert

    - by growse
    What's the method used on Solaris to get the system to rescan for new disks that have been hot-plugged on a SATA controller?

    I've got an HP X1600 NAS which had 9 drives configured in a ZFS pool. I've added 3 disks, but the format command still only shows the original 9. When I plugged them in, I saw this:

        cpqary3: [ID 823470 kern.notice] NOTICE:  Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice]   Hot-plug drive inserted, Port=1I Box=1 Bay=12
        cpqary3: [ID 479030 kern.notice]   Configured Drive ? ....... NO
        cpqary3: [ID 100000 kern.notice]
        cpqary3: [ID 823470 kern.notice] NOTICE:  Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice]   Hot-plug drive inserted, Port=1I Box=1 Bay=11
        cpqary3: [ID 479030 kern.notice]   Configured Drive ? ....... NO
        cpqary3: [ID 100000 kern.notice]
        cpqary3: [ID 823470 kern.notice] NOTICE:  Smart Array P212 Controller
        cpqary3: [ID 823470 kern.notice]   Hot-plug drive inserted, Port=1I Box=1 Bay=10
        cpqary3: [ID 479030 kern.notice]   Configured Drive ? ....... NO

    but I can't figure out how to get the format command to see them, so that I know they've been detected by the system.
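
    A hedged sketch of the usual Solaris rescan sequence is below - with one hardware-specific caveat: those cpqary3 lines say "Configured Drive ? ... NO", and a Smart Array controller only presents drives that belong to a logical drive, so the new disks may first need logical drives created with HP's array configuration utility before any OS-level rescan can see them:

        # List attachment points; newly inserted but unconfigured devices
        # show up as "connected unconfigured"
        cfgadm -al

        # Configure such a connection (the Ap_Id below is a placeholder)
        cfgadm -c configure c0::dsk/c0t10d0

        # Rebuild the /dev disk links, then recheck
        devfsadm -c disk
        format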


  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there:

        mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid

    However, if I try to mount it via fstab I get an error. The fstab line:

        //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0 auto

    The .smbcredentials file looks like this:

        username=username
        password=pass\!word

    Note the ! in my password, which I am escaping in both instances. I also made sure there are no EOL characters in the file, using :set noeol binary (from "Mount CIFS Credentials File has Special Character"). chmod on the .smbcredentials file is 0600, chown is root:root, and the file is under ~/.

    Why is it working the one way but not with fstab? I am running Ubuntu 12 LTS, and mount.cifs -V gives me mount.cifs version: 5.1. Any help and suggestions would be appreciated.

    UPDATE: /var/log/syslog shows the following:

        [26630.509396] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE
        [26630.509407] CIFS VFS: Send error in SessSetup = -13
        [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13

    UPDATE 2: Debugging with strace. Mount through fstab:

        strace -f -e trace=mount mount -a
        Process 4984 attached
        Process 4983 suspended
        Process 4985 attached
        Process 4984 suspended
        Process 4984 resumed
        Process 4985 detached
        [pid  4984] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid  4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied)
        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
        Process 4983 resumed
        Process 4984 detached

    Mount through terminal:

        strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid
        Process 4990 attached
        Process 4989 suspended
        Process 4991 attached
        Process 4990 suspended
        Process 4990 resumed
        Process 4991 detached
        [pid  4990] --- SIGCHLD (Child exited) @ 0 (0) ---
        [pid  4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0
        Process 4989 resumed
        Process 4990 detached
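
    One plausible culprit for the NT_STATUS_LOGON_FAILURE: the backslash in pass\!word is only needed on the command line, where an interactive shell would otherwise expand !word through history expansion. The credentials file is never read by a shell, so the backslash there becomes a literal character of the password, and authentication fails. A hedged sketch of the corrected file (assuming the real password is pass!word):

        username=username
        password=pass!word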


  • Ubuntu 12.04 Server - eth0 1Gbps NIC eth1 10Gbps NIC - all traffic using eth0?

    - by James
    Ubuntu Server 12.04.1 x64. Primary role is an NFS fileserver for Mac OS X clients.

    Hardware:

        Eth0: 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        Eth1: 07:00.0 Ethernet controller: MYRICOM Inc. Myri-10G Dual-Protocol NIC

    Config:

        ifconfig
        eth0      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.150  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:460042020 errors:0 dropped:148 overruns:0 frame:0
                  TX packets:231906707 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:581431978417 (581.4 GB)  TX bytes:259057368617 (259.0 GB)
                  Interrupt:20 Memory:f7d00000-f7d20000

        eth1      Link encap:Ethernet  HWaddr <MACADDRESS>
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:6832208 errors:0 dropped:2 overruns:0 frame:0
                  TX packets:376 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:513826442 (513.8 MB)  TX bytes:33688 (33.6 KB)
                  Interrupt:59

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:507 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:507 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:45057 (45.0 KB)  TX bytes:45057 (45.0 KB)

        nano /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.0.150
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

        # Second network interface
        auto eth1
        iface eth1 inet static
            address 192.168.0.100
            netmask 255.255.255.0
            network 192.168.0.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 192.168.0.1 8.8.8.8

    Currently I am using nfs://192.168.0.100/Volumes/Storage on the OS X clients to mount the NFS share. My problem: why would all the data be going over the slower connection? (I have checked using various monitoring tools - bmon, iftop, glances, etc.)

    Also, after configuring /etc/network/interfaces with the above setup, I always get an error message at bootup, something about waiting for network configuration. Are these connected?
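
    A hedged explanation worth testing: with both NICs in 192.168.0.0/24, the kernel has two routes to the same subnet and simply uses whichever comes first in the routing table - eth0's - no matter which address the clients connected to. Source-based policy routing pins traffic from eth1's address to eth1 (the table name/number below are assumptions). The duplicate gateway line in eth1's stanza is also a classic cause of the "waiting for network configuration" delay, so dropping it is worth trying too:

        # One-off: create a second routing table for eth1
        echo "100 eth1table" | sudo tee -a /etc/iproute2/rt_tables

        # Route the local subnet via eth1 in that table, and send anything
        # sourced from eth1's address through it
        sudo ip route add 192.168.0.0/24 dev eth1 src 192.168.0.100 table eth1table
        sudo ip rule add from 192.168.0.100 table eth1table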


  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

        % dart2js main.dart
        /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

        % cat /usr/local/bin/dart2js
        #!/bin/sh
        # Copyright (c) 2012, the Dart project authors.  Please see the AUTHORS file
        # for details. All rights reserved. Use of this source code is governed by a
        # BSD-style license that can be found in the LICENSE file.

        BIN_DIR=`dirname $0`
        exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

        % file /usr/local/bin/dart
        /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

        % ls -alh /usr/local/bin
        total 4.9M
        drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
        drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
        -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
        -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
        -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
        -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
        -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
        -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

        % uname -rm
        3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
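
    Yes - that is exactly the symptom of a 32-bit ELF binary on an x86_64 system with no 32-bit loader: the "No such file or directory" refers to the missing interpreter /lib/ld-linux.so.2, not to dart itself. A hedged sketch of the usual Arch fix:

        # The binary asks for the 32-bit loader; check whether it exists
        readelf -l /usr/local/bin/dart | grep interpreter   # expect /lib/ld-linux.so.2
        ls -l /lib/ld-linux.so.2                            # probably missing

        # Enable the [multilib] section in /etc/pacman.conf, then install
        # the 32-bit C library (which provides that loader)
        sudo pacman -Sy lib32-glibc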


  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD.

    I can play the whole thing with VLC ("Open Disc > VCD > /dev/sr0 > Play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track, and I can seek to anywhere I want within the 22-minute track.

    If I mount /dev/sr0 /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file.

    VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real time - I guess it dubs it frame by frame. But the file is only 10 minutes long, leaving 12 minutes off of the track.

    Handbrake doesn't detect its track titles, unfortunately. I don't know if I should start getting involved with GNU ddrescue, or if it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head, and if anyone knows how I could get that video track off the thing, feel free to share!

    Edit: I should note that I also have access to a Windows computer.
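
    The I/O error on copy is expected, for what it's worth: VCD video lives in CD-ROM XA Mode 2 Form 2 sectors, so the file the ISO-9660 view shows can't be read with normal filesystem reads. Tools that read the track itself work instead. A hedged sketch of two options on Linux (the track number is an assumption - the main movie is usually track 2 on a VCD):

        # Option 1: let mplayer dump the raw MPEG stream from the track
        mplayer vcd://2 -cdrom-device /dev/sr0 -dumpstream -dumpfile movie.mpg

        # Option 2: rip the raw track with cdrdao for later conversion
        cdrdao read-cd --device /dev/sr0 --datafile vcd.bin vcd.toc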


  • apache pointing to the wrong version of python on ubuntu how do I change?

    - by one
    I am setting up a Flask application on an Ubuntu 12.04.3 LTS EC2 instance, and everything seemed to be working well (i.e. I could get to the webpage via the publicly available URL) until I tried to import a module (e.g. numpy) and realised that the Apache Python differs from the one I used to compile mod_wsgi, and also from the one I am using. I am running apache2.

    The apache2 logs show the warnings (specifically, the last line shows the path hasn't changed):

        [warn] mod_wsgi: Compiled for Python/2.7.5.
        [warn] mod_wsgi: Runtime using Python/2.7.3.
        [warn] mod_wsgi: Python module path '/usr/lib/python2.7/:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib$

    I have tried to set the path in my virtual host conf (my Python is located in /home/ubuntu/anaconda/bin along with all of the other libraries):

        WSGIPythonHome /home/ubuntu/anaconda
        WSGIPythonPath /home/ubuntu/anaconda

        <VirtualHost *:80>
            ServerName xx-xx-xxx-xxx-xxx.compute-1.amazonaws.com
            ServerAdmin [email protected]
            WSGIScriptAlias / /var/www/microblog/microblog.wsgi
            <Directory /var/www/microblog/app/>
                Order allow,deny
                Allow from all
            </Directory>
            Alias /static /var/www/microblog/app/static
            <Directory /var/www/FlaskApp/FlaskApp/static/>
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    But I still get the warnings and the Apache Python path hasn't changed - where do I need to put the relevant directives to point Apache at my Python version and modules (e.g. scipy, numpy etc.)? Separately, could I have avoided this using virtual environments? Thanks in advance.
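
    The pair of warnings ("Compiled for Python/2.7.5 ... Runtime using Python/2.7.3") means mod_wsgi is linked against a different libpython than the one it finds at runtime; WSGIPythonHome/WSGIPythonPath can only adjust the module search path, not which interpreter library the module was compiled against. The usual cure is rebuilding mod_wsgi against the interpreter you actually want. A hedged sketch, with paths assuming the Anaconda install mentioned above and a mod_wsgi source tree already unpacked:

        # Rebuild mod_wsgi against the Anaconda interpreter
        cd mod_wsgi-3.4/     # assumption: unpacked mod_wsgi source directory
        ./configure --with-python=/home/ubuntu/anaconda/bin/python
        make
        sudo make install
        sudo service apache2 restart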


  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging to /var/log/puppet/masterhttp.log? Normal behavior would be to rename the original log file and start writing into a fresh, empty log file, keeping the renamed file as an archive.

        [root@puppetmaster puppet]# ls -al
        total 97520
        drwxr-x---.  2 puppet puppet     4096 Jun 16 03:24 .
        drwxr-xr-x. 12 root   root       4096 Jul  1 09:11 ..
        -rw-r--r--.  1 puppet puppet        0 Jun 16 03:24 masterhttp.log
        -rw-rw----.  1 puppet puppet 99847187 Jul  1 09:19 masterhttp.log-20130616

        [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet
        /var/log/puppet/*log {
            missingok
            notifempty
            create 0644 puppet puppet
            sharedscripts
            postrotate
                pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true
                [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
            endscript
        }

    How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting Puppet doesn't make it log to /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
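
    What the listing shows is the classic signature of a daemon that never reopened its log: logrotate renamed the file and created a fresh masterhttp.log, but the master still holds the old file descriptor, which now points at the -20130616 name. If the postrotate signal isn't making the master reopen its logs, a hedged workaround is copytruncate, which copies the log aside and truncates it in place so the daemon's descriptor stays valid (at the cost of possibly losing a few lines written during the copy):

        /var/log/puppet/*log {
            missingok
            notifempty
            copytruncate
            sharedscripts
        }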


  • Resize Debian in VirtualBox

    - by Poni
    I have a VM with one HD of size 3GB and I'd like to enlarge it to 7GB. So I execute this command on the host (while the guest is shut down):

        VBoxManage modifyhd debian.vdi --resize 7168

    Then I run the guest, Debian 6, and then:

        smith@debian6:~$ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             2.8G  2.6G   60M  98% /
        tmpfs                  61M     0   61M   0% /lib/init/rw
        udev                   57M  160K   57M   1% /dev
        tmpfs                  61M     0   61M   0% /dev/shm

        smith@debian6:~$ sudo parted /dev/sda print
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sda: 3221MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  3035MB  3034MB  primary   ext3            boot
         2      3036MB  3220MB  185MB   extended
         5      3036MB  3220MB  185MB   logical   linux-swap(v1)

        smith@debian6:~$ cat /proc/partitions
        major minor  #blocks  name
           8        0    3145728 sda
           8        1    2962432 sda1
           8        2          1 sda2
           8        5     180224 sda5

    So, no automatic resizing (detection) of the HD/partition (while VirtualBox, on the host, shows it's 7GB now). OK... Then I do:

        smith@debian6:~$ sudo resize2fs /dev/sda1
        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 740608 blocks long.  Nothing to do!

        smith@debian6:~$ sudo parted
        GNU Parted 2.3
        Using /dev/sda
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) select /dev/sda1
        Using /dev/sda1
        (parted) resize
        WARNING: you are attempting to use parted to operate on (resize) a file system.
        parted's file system manipulation code is not as robust as what you'll find in
        dedicated, file-system-specific packages like e2fsprogs.  We recommend
        you use parted only to manipulate partition tables, whenever possible.
        Partition number? 1
        Start? 0
        End?  [3034MB]?

    Here I'm stuck. At the prompt above, parted only offers to resize within the 3GB it can see; no point in that, right? What should I do in order to enlarge this partition?
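
    Two hedged observations. First, parted still reports a 3221MB disk inside the guest, so the enlargement may not be visible at all yet - if the VM has snapshots, VBoxManage modifyhd resizes the base VDI while the guest keeps running on a differencing image, which is worth checking first. Second, once the guest does see 7GB, parted's interactive resize won't help; the usual route is to recreate the partitions around the bigger disk and then grow the filesystem. A sketch, best done from a live/rescue boot with nothing on sda mounted, and only after a backup - deleting and recreating partitions is unforgiving:

        swapoff -a

        # 1. Delete swap (sda5), extended (sda2) and root (sda1), then
        #    recreate sda1 with the SAME start sector and a larger end,
        #    leaving room at the end to recreate swap
        fdisk /dev/sda      # d 5, d 2, d 1, n p 1 <same start> <new end>, w

        # 2. Check the filesystem and grow it into the enlarged partition
        e2fsck -f /dev/sda1
        resize2fs /dev/sda1

        # 3. Recreate extended+swap in the leftover space and re-enable it
        fdisk /dev/sda      # n e ..., n l ...
        mkswap /dev/sda5
        swapon /dev/sda5    # update /etc/fstab if the swap UUID changed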


  • Apache > 2.2.22 rewrite rule not working?

    - by EBAH
    Since yesterday I've been trying to figure out how to fix the following: running phpipam (http://www.phpipam.net/) with WAMP (Windows environment). The problem I am facing is related to RewriteRule functionality, so forget phpipam for a moment and concentrate on a few lines of code.

    Here is the directory structure of my test website that emulates the first steps phpipam does (you can download http://goo.gl/ksvuGc):

        C:\wamp\www\rewrite-tst\
        C:\wamp\www\rewrite-tst\.htaccess
        C:\wamp\www\rewrite-tst\index.php
        C:\wamp\www\rewrite-tst\install
        C:\wamp\www\rewrite-tst\install\index.php

    It seems that the following rewrite rules in C:\wamp\www\rewrite-tst\.htaccess don't work:

        # install
        RewriteRule ^install$ install/ [R]
        RewriteRule ^install/$ index.php?page=install

    When opening C:\wamp\www\rewrite-tst\index.php, the first step checks the URL for an "install" argument. Since the URL is http://localhost/rewrite-tst, no arguments are supplied and the browser is redirected with:

        header("Location: /rewrite-tst/install/")

    At this point the browser opens the page C:\wamp\www\rewrite-tst\install\index.php (http://localhost/rewrite-tst/install). Apache, thanks to C:\wamp\www\rewrite-tst\.htaccess, should intercept this URL and redirect to http://localhost/rewrite-tst/index.php?page=install.

    Here are my tries:

    - Win Apache 2.2.22: works
    - Win Apache 2.4.4: KO
    - Win Apache 2.4.6: KO

    In the attached zip file you can also find two traces from the Apache RewriteLog which I can't understand very well. Why doesn't Apache 2.4 work on Windows? Is it possible that there's a bug in the Windows builds of Apache (2.4.4 and 2.4.6), or am I wrong somewhere? Thanks for your help!!! Evan

    UPDATE 12 Oct 2013: Now I'm really confused! Working on Linux, Kubuntu 13.04:

    - Linux Apache 2.2.22: works
    - Linux Apache 2.4.6: KO

    I guess there's something wrong in my rules at this point, or some change happened from Apache 2.2 to 2.4...
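
    One 2.2-to-2.4 change worth ruling out before blaming mod_rewrite itself: in 2.4 the default for AllowOverride became None, so .htaccess files are silently ignored unless the enclosing <Directory> block re-enables them, and the old Order/Allow access directives were replaced by Require. A hedged httpd.conf sketch matching the WAMP layout above:

        <Directory "C:/wamp/www/rewrite-tst">
            Options Indexes FollowSymLinks
            AllowOverride All        # 2.4 defaults to None, which disables .htaccess
            Require all granted      # 2.4 replacement for Order/Allow
        </Directory>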


  • MySQL binlogs seems incomplete?

    - by warl0ck
    I created a database, a table, and inserted some data, and found this binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the stuff below - it seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1  end_log_pos 107  Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If this warning was the cause ("Warning: this binlog is either in use or was not closed properly."), how do I force-close the log?

    EDIT: After a flush logs command, I see "0 rows" affected and a few new files - binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index - whose contents are nearly the same as binlog.000001. I then dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: The mysqld config:

        [mysqld]
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin         = /var/log/mysql/binlog
        binlog-do-db    = mydb
        bind-address    = 127.0.0.1
        key_buffer         = 16M
        max_allowed_packet = 16M
        thread_stack       = 192K
        thread_cache_size  = 8
        myisam-recover     = BACKUP
        query_cache_limit  = 1M
        query_cache_size   = 16M
        expire_logs_days   = 10
        max_binlog_size    = 100M
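
    A hedged check worth making here: with statement-based logging (the 5.5 default), binlog-do-db=mydb filters on the *default* database, so statements issued without USE mydb - or from a connection with some other database selected - never reach the binlog at all, which would leave exactly this kind of header-only file. Some ways to see what actually got logged, run from the shell:

        # List the binlogs and dump the events recorded in one of them
        mysql -u root -p -e "SHOW BINARY LOGS; SHOW BINLOG EVENTS IN 'binlog.000001';"

        # Check the active logging format and the current log position
        mysql -u root -p -e "SHOW VARIABLES LIKE 'binlog_format'; SHOW MASTER STATUS;"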


  • Sendmail Sends but never Delivers

    - by Jeremy
    I have tried 10 different emails hosted at Google, Yahoo!, GoDaddy, and some that are privately hosted, and each time I get the following errors. I have blocked out sensitive information, but you will be able to see the errors.

        Feb 16 17:06:50 xxxxx sendmail[31824]: o1GM6ovJ031824: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30054, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GM6oJo031825 Message accepted for delivery)
        Feb 16 16:54:19 xxxxx sendmail[31625]: o1GLsJPP031625: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30097, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GLsJah031626 Message accepted for delivery)
        Feb 17 09:05:52 xxxxx sm-mta[10620]: o1H6Z3jM005734: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=07:30:49, xdelay=01:15:36, mailer=esmtp, pri=571331, relay=aspmx3.googlemail.com. [209.85.222.4], dsn=4.0.0, stat=Deferred: Connection timed out with aspmx3.googlemail.com.
        Feb 17 10:35:23 xxxxx sm-mta[12828]: o1HEZwn8011833: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:59:25, xdelay=00:12:36, mailer=esmtp, pri=300353, relay=aln-mailrelay.att.net. [12.102.252.75], dsn=4.0.0, stat=Deferred: Connection timed out with aln-mailrelay.att.net.

    If you take a look, they all send, but then (HOURS later) I get the error "stat=Deferred: Connection timed out with {server}". I'm at my wits' end, because I use this same setup on each of my servers, and they all work.
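
    "Message accepted for delivery" followed hours later by "Connection timed out" against every remote MX is the pattern you get when outbound port 25 is filtered somewhere between this host and the world (many ISPs and cloud providers block it by default). A hedged pair of checks from the sendmail host:

        # Does DNS resolve the destination MXs at all?
        dig +short mx gmail.com

        # Can we open an SMTP connection directly? A timeout here too
        # points at port-25 egress filtering by the host/ISP
        nc -vz aspmx3.googlemail.com 25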

