Search Results

Search found 16894 results on 676 pages for 'block device'.

  • pfsense 2.0.1 Firewall SMB Share not showing up under network

    - by atrueresistance
    I have a FreeNAS NAS with an SMB share running at 192.168.2.2 on a 192.168.2.0/28 network. The gateway is 192.168.2.1. Originally this was running on a switch with my LAN, but now, having upgraded to new hardware, the FreeNAS has its own port on the firewall. Before the switch, the FreeNAS would show up under Network on a Windows 7 box and an OS X Lion box as freenas (WINS) or as CIFS shares on freenas (OS X), so I know it doesn't have anything to do with the FreeNAS itself. Here are my pfSense rules (ID, Proto, Source, Port, Destination, Port, Gateway, Queue, Schedule, Description):
      PASS   TCP  FREENAS net  *  LAN net      139 (NetBIOS-SSN)  *  none  cifs lan passthrough
      PASS   TCP  FREENAS net  *  LAN net      389 (LDAP)         *  none  cifs lan passthrough
      PASS   TCP  FREENAS net  *  LAN net      445 (MS DS)        *  none  cifs lan passthrough
      PASS   UDP  FREENAS net  *  LAN net      137 (NetBIOS-NS)   *  none  cifs lan passthrough
      PASS   UDP  FREENAS net  *  LAN net      138 (NetBIOS-DGM)  *  none  cifs lan passthrough
      BLOCK  *    FREENAS net  *  LAN net      *                  *  none
      BLOCK  *    FREENAS net  *  OPTZONE net  *                  *  none
      BLOCK  *    FREENAS net  *  192.168.2.1  *                  *  none
      PASS   *    FREENAS net  *  *            *                  *  none
      BLOCK  *    *            *  *            *                  *  none
    I can connect if I use \\192.168.2.2 and enter the correct login details; I would just like the share to show up under Network. Nothing in the log seems to be blocked when I filter by 192.168.2.2. What port am I missing for SMB to show up under the network, so I don't have to connect by IP? P.S. Do I really need the LDAP rule?
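
    As a quick check that it's name resolution (rather than the firewall) that's failing, something like the following can be run from a machine on the LAN side; a minimal sketch, assuming a Linux box with the Samba client tools installed:

      # does the NetBIOS name service answer across the subnet boundary?
      nmblookup -A 192.168.2.2
      # can the share list be read once the address is known?
      smbclient -L //192.168.2.2 -U guest

    Browsing (the Network view) rides on NetBIOS broadcasts, which do not cross subnets on their own, so a WINS server or equivalent may be needed regardless of the firewall rules.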

  • Why is mkfs overwriting the LUKS encryption header on LVM on RAID partitions on Ubuntu 12.04?

    - by Starchy
    I'm trying to set up a couple of LUKS-encrypted partitions to be mounted after boot time on a new Ubuntu server which was installed with LVM on top of software RAID. After running cryptsetup luksFormat, the LUKS header is clearly visible on the volume. After running any flavor of mkfs, the header is overwritten (which does not happen on other systems that were set up without LVM), and cryptsetup will no longer recognize the device as a LUKS device.
      # cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/dm-1
      WARNING!
      ========
      This will overwrite data on /dev/dm-1 irrevocably.
      Are you sure? (Type uppercase yes): YES
      Enter LUKS passphrase:
      Verify passphrase:
      # hexdump -C /dev/dm-1 | head -n5
      00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
      00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      00000020  00 00 00 00 00 00 00 00  63 62 63 2d 65 73 73 69  |........cbc-essi|
      00000030  76 3a 73 68 61 32 35 36  00 00 00 00 00 00 00 00  |v:sha256........|
      00000040  00 00 00 00 00 00 00 00  73 68 61 31 00 00 00 00  |........sha1....|
      # cryptsetup luksOpen /dev/dm-1 web2-var
      # mkfs.ext4 /dev/mapper/web2-var
      [..snip..]
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done
      # cryptsetup luksClose /dev/mapper/web2-var
      # hexdump -C /dev/dm-1 | head -n5
      00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      *
      00000400  00 40 5d 00 00 88 74 01  66 a0 12 00 17 f2 6d 01  |.@]...t.f.....m.|
      00000410  f5 3f 5d 00 00 00 00 00  02 00 00 00 02 00 00 00  |.?].............|
      00000420  00 80 00 00 00 80 00 00  00 20 00 00 00 00 00 00  |......... ......|
      # cryptsetup luksOpen /dev/dm-1 web2-var
      Device /dev/dm-1 is not a valid LUKS device.
    I have also tried mkfs.ext2 with the same result. Based on setups I've done successfully on Debian and Ubuntu (but not with LVM or on Ubuntu 12.04), it's hard to see why this is failing.
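
    One thing worth ruling out is that /dev/dm-N numbering is not stable: after luksOpen, the decrypted mapping gets its own dm node, and the number that previously pointed at the LV can move. A minimal sketch of how to check what dm-1 actually is before formatting (the LV name at the end is a hypothetical example):

      # map dm numbers to their stable names
      dmsetup ls
      ls -l /dev/mapper/
      # prefer the stable path over the dm-N alias, e.g.:
      cryptsetup luksFormat /dev/mapper/vg0-web2var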

  • Trying to use Nginx try_files to emulate Apache MultiViews

    - by Samuel Bierwagen
    I want a request to http://example.com/foobar to return http://example.com/foobar.jpg (or .gif, .html, .whatever). This is trivial to do with Apache MultiViews, and it seems like it would be equally easy in Nginx. This question seems to imply that it'd be as easy as try_files $uri $uri/ index.php; in the location block, but that doesn't work. try_files $uri $uri/ =404; doesn't work, nor does try_files $uri =404; or try_files $uri.* =404; Moving it between my location / { block and the regexp which matches images has no effect. Crucially, try_files $uri.jpg =404; does work, but only for .jpg files, and it throws a configuration error if I use more than one try_files rule in a location block! The current server { block:
      server {
          listen 80;
          server_name example.org www.example.org;
          access_log /var/log/nginx/vhosts.access.log;
          root /srv/www/vhosts/example;
          location / {
              root /srv/www/vhosts/example;
          }
          location ~* \.(?:ico|css|js|gif|jpe?g|es|png)$ {
              expires max;
              add_header Cache-Control public;
              try_files $uri =404;
          }
      }
    Nginx version is 1.1.14.
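
    For what it's worth, a single try_files directive accepts several fallbacks, so one way to approximate MultiViews is to list each extension explicitly in one rule rather than using multiple try_files lines; a sketch, assuming the same document root:

      location / {
          # try the bare name, then each extension in order, then give up
          try_files $uri $uri.html $uri.jpg $uri.gif $uri/ =404;
      }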

  • Windows 7 Not Recognizing Camera Nor iPhone as Camera

    - by taudep
    I've been struggling with this one for a few days. I've recently upgraded an older computer to Windows 7 Home Premium. Neither my digital camera (a Canon SD1200IS) nor my iPhone is ever detected as a camera, nor do they ever show up as accessible in Explorer. With the Canon camera, no driver is required; it's supposed to work with the default Windows 7 drivers. However, in the Control Panel's Device Manager, I always see a yellow icon next to the "Canon Digital Camera" device. I've uninstalled the device and let Windows attempt to reinstall, but it can never find a driver to install. With the iPhone, it's very similar. One big difference, though, is that iTunes can see the iPhone and back it up, etc. However, again, when I go to Device Manager there's a yellow icon next to the iPhone. I've uninstalled iTunes, reinstalled, rebooted, deleted drivers, and let Windows try to reinstall the driver, but it can never find it. So there seems to be some correlation: my machine can't detect cameras properly, and it might even be a lower-level driver I'm struggling with. I know, however, that USB does work, because I have an external drive hooked into the machine. I've gone through the web and tried two hours' worth of fixes, without success. I feel like if I can get the Canon camera detected, the iPhone will be on its way to being fixed too. BTW, I couldn't really find anything of use in the Event Viewer. Any and all suggestions welcome.
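
    One low-risk thing worth trying is clearing out stale "ghost" device entries before letting Windows redetect the camera; a sketch using a built-in (if obscure) environment flag, from an elevated command prompt:

      set devmgr_show_nonpresent_devices=1
      start devmgmt.msc
      rem in Device Manager: View > Show hidden devices, uninstall the greyed-out
      rem imaging/portable devices, then Action > Scan for hardware changes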

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:
      [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
      [ 1355.677973] Aborting journal on device sda2-8.
      [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
      [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
      [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119
    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:
      UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1
    System information:
      $ cat /proc/mounts | grep /dev/sd
      /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
      $ cat /etc/lsb-release
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=10.04
      DISTRIB_CODENAME=lucid
      DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
      $ uname -a
      Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux
    I've run memtest for 7 hours and it didn't find any memory errors. Any obvious ideas about what could be going wrong in this case? The most plausible thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should make sure is set correctly? What tools should I use to diagnose the hardware failure? Would it be possible to diagnose the SSD failure without overwriting data?
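
    For non-destructive diagnostics, a read-only filesystem check plus the drive's SMART self-reports are the usual first pass; a sketch (run against an unmounted or read-only-mounted partition, e.g. from a live CD):

      sudo smartctl -a /dev/sda       # SMART health: reallocated sectors, error counters, etc.
      sudo fsck.ext4 -fn /dev/sda2    # -n: report problems but change nothing on disk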

  • Is it possible to detect nearby Wi-Fi enabled devices, not necessarily on the same network? [closed]

    - by Sky
    first question on StackExchange ever. I hope I got the right board. I'm trying to create a device (either from a standard AP or some other unconventional means) that will be able to detect nearby Wi-Fi enabled devices. For example, if a cellular phone (an iPhone, for instance) were carried into the secured area, its MAC address would be logged. A cellular phone is a good example because it's the most common threat that should be detected. Some important points:
      - The detection can be either active or passive, doesn't matter.
      - The detected device might be connected to a different network, or might not be connected to anything at all. I assume most cellular phones actively probe when not connected, but I'm not sure.
      - It is important to not only identify the breach, but also to identify the device (MAC address).
      - Conventional hardware is only optional.
      - Distance of detection is at least 6 meters (20 feet).
      - Handling one device at a time is good enough.
      - Speed of detection is important; under 5 seconds is ideal.
    So my question is, is this even possible? If so, what can I use in order to make this a reality? Thank you for reading!
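
    Passively monitoring 802.11 probe requests is the usual way to log nearby devices without them joining anything; a sketch using the aircrack-ng suite (interface names are assumptions and vary by driver):

      sudo airmon-ng start wlan0    # put the card into monitor mode
      sudo airodump-ng mon0         # logs MAC addresses of probing and associated stations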

  • Kerberos service on win2k dc will not start following disk failure

    - by iwilson68
    Hi, I have a win2k (mixed mode) domain with 4 DCs. One of these also acts as an Exchange 2000 server which uses 2 logical volumes from an MSA 2000 array. AD etc. is stored on local drives. We experienced a problem last week when the RAID array fell back to a redundant controller, which temporarily meant that the two logical drives were not visible to the server for around 5 minutes and a couple of reboots. The log records these events as:
      Event Type: Warning
      Event Source: Disk
      Event Category: None
      Event ID: 51
      Date: 06/11/2009  Time: 11:46:23
      User: N/A
      Computer: server1
      Description: An error was detected on device \Device\Harddisk1\DR1 during a paging operation.
    Following these problems, the server's "Kerberos Key Distribution" service refuses to start with error 31, "A device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running and there are no DNS issues etc. All devices are also functioning, but the two logical MSA disks are now numbered 2 and 4 in the Windows Disk Management MMC; I suspect that they may previously have been identified as disks 1 and 2, and perhaps Windows still sees this as an ongoing failure? Replication has not been affected, but obviously there are many audit failures in the security log relating to users and workstations, presumably linked to the Kerberos issue. Attempting to manually start the Kerberos service generates the following in the system log:
      Event Type: Error
      Event Source: Service Control Manager
      Event Category: None
      Event ID: 7023
      Date: 09/11/2009  Time: 09:46:55
      User: N/A
      Computer: Server1
      Description: The Kerberos Key Distribution Center service terminated with the following error: A device attached to the system is not functioning.
    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the failure of Kerberos only. Any advice would be appreciated.
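
    For narrowing it down from the command line, the service state and the failing dcdiag area can be poked directly; a minimal sketch using tools available on Windows 2000 (dcdiag ships with the Support Tools):

      rem current state of the Kerberos Key Distribution Center service
      sc query kdc
      rem try to start it and note the exact Win32 error returned
      sc start kdc
      rem verbose output for the failing test
      dcdiag /test:services /v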

  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem. The hard drive activity light on my custom-built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598 There was discussion of this issue several months ago: Why does my hard drive LED light blink every second? The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked from that superuser question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this IS a known issue with Windows 7. Some people point at motherboard circuitry tying both CD-ROM and SATA activity to the hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. In fact, disabling the CD/DVD-ROM does stop the blinking, but of course this solution is counterproductive, because I shouldn't have to disable a device entirely to fix this problem. I've tried the following suggestions from that thread:
      - Change the AutoRun registry entry to 0.
      - Completely disable autoplay in the AutoPlay control panel.
      - Disable autoplay in the Local Group Policy Editor.
    None of these stop the blinking - apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this is just an issue we will have to live with. There's no option to disable the auto insert notification on the device within Device Manager (there was in XP), so I have no idea where this option is hidden, or whether there's a registry entry I could change to stop the polling. Anyone have any idea?
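
    For reference, the per-drive polling value the old KB article describes can be set from an elevated prompt; a sketch - this is presumably the same AutoRun change already tried above, shown only to make the step concrete:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\cdrom /v AutoRun /t REG_DWORD /d 0 /f
      rem reboot before checking whether the polling stops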

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" NDMP user configured in the device tab (tried this with the newly created ndmpd backup user and the NetApp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" NDMP device and the restore credentials, and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages; the session is closed immediately after it has been granted:
      16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
      16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
    Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE request from the backup server. The resource credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All", it fails. If I use my regular domain backup account, it is successful. There are no failed or successful logons in the NetApp NDMP log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour than misconfiguration of the NAS. Here is the options ndmp output:
      FAS2020-1> options ndmp
      ndmpd.access                 all
      ndmpd.authtype               challenge
      ndmpd.connectlog.enabled     on
      ndmpd.enable                 on
      ndmpd.ignore_ctime.enabled   off
      ndmpd.offset_map.enable      on
      ndmpd.password_length        16
      ndmpd.preferred_interface    disable
      ndmpd.tcpnodelay.enable      off
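
    One thing that often bites with challenge-style NDMP auth on ONTAP: a non-root user doesn't authenticate with its regular password but with an NDMP-specific one generated on the filer, and that generated string is what belongs in the Backup Exec credentials; a sketch (the user name is an assumption):

      FAS2020-1> ndmpd password backupuser
      password: XXXXXXXXXXXXXXXX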

  • QNAP NAS 509 (Linux) - how to unmount a busy volume and find the physical disk?

    - by Horst Walter
    On my QNAP TS-509 NAS I have a technical issue. I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below.] In order to track the issue down further, I'd need to sort out the physical-disk-to-device relationship. How can I find this out? E.g. md0 is a striped volume across 2 disks, but I need to find out which physical disks. Remark: as you can easily tell from my questions, I am not a Linux expert, but I manage to get along.
      /dev/ram0   124.0M   94.1M   29.8M  76%  /
      tmpfs        32.0M   80.0k   31.9M   0%  /tmp
      /dev/sda4   310.0M  103.9M  206.1M  34%  /mnt/ext
      /dev/md9    509.5M   39.2M  470.2M   8%  /mnt/HDA_ROOT
      /dev/md0      1.8T    1.4T  444.7G  76%  /share/MD0_DATA
      tmpfs        32.0M       0   32.0M   0%  /.eaccelerator.tmp
    -- Added -- QNAP seems to be based on BusyBox. I can't find anything like init / telinit / runlevel. The BusyBox docs say I need to run the commands below, but in /var/service sv is not available. I want to go to single-user mode to unmount the devices.
      # cd /var/service
      # sv d *
      # sv u getty*
    -- Added, thanks A4L -- This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services; after that the drive could be unmounted. The information passed on by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why So the unmount issue is solved, and I'm still looking for the best option to find the physical-disk-to-volume mapping.
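
    For the disk-to-array mapping, mdadm can report exactly which partitions back each md device; a minimal sketch (QNAP firmware ships mdadm, though paths may vary):

      cat /proc/mdstat           # quick overview: md0, md9 and their member partitions
      mdadm --detail /dev/md0    # full detail, e.g. /dev/sda3 and /dev/sdb3 as members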

  • ASUS EAH5450 Graphics Card (ATI Radeon HD5450 - 1 GB DDR3) on Windows 2003? Anybody got it to work?

    - by JJarava
    Hi all! I've just bought an ASUS EAH5450 graphics card (ATI Radeon HD5450, 1 GB DDR3) for my main system, but I haven't been able to make it work under Windows 2003 (my OS on that system). When I plugged in the card, I got a couple of "installing drivers" prompts for things such as "ATI High Definition Audio Device", which sorted themselves out over the Internet, and then a "Standard VGA Graphics Adapter". The CD that came with the card installs something called "ATI Catalyst Install Manager" and .NET 2.0, but no drivers. I've downloaded the latest (WinXP 32-bit) drivers from ATI, and the experience is the same: I don't get any drivers installed. My motherboard is an ASUS A8N-SLI with the nVidia nForce 4 chipset (for an Athlon 64 X2, somewhat old), but my previous card was an ATI Radeon X700, so it has worked with ATI cards before. On POST, during boot, I see a "Display Card" device (vendor ID 1002-68F9-0300) and a "Multimedia Device" (1002-AA68-0403), and when viewing the properties of the "Standard VGA" adapter, they match the device ID. Any hints? I'd really hate having to get rid of the card, and I'm sure it's not that strange what I'm trying to do...

  • Broken fonts in Konsole KDE 4.3.4

    - by depesz
    I have a strange situation - after some upgrades a couple of days ago, fonts in the KDE Konsole broke. To make it more specific - standard fonts look more or less OK, but when I use my national characters (like acelnsózz) they all look broken - like from another font, or badly scaled. The same problem doesn't exist in GNOME Terminal. I usually use the Terminus font, so I used it for demonstration, but the problem shows in other fonts as well - if necessary, I will provide a list.
    Konsole shot:
    GNOME Terminal shot:
    As for my settings:
      =$ cat /etc/X11/xorg.conf
      Section "Device"
          Identifier "Builtin Default intel Device 0"
          Driver     "intel"
      EndSection
      Section "Monitor"
          Identifier "Monitor0"
          VendorName "Monitor Vendor"
          ModelName  "Monitor Model"
      EndSection
      Section "Screen"
          Identifier "Builtin Default intel Screen 0"
          Device     "Builtin Default intel Device 0"
          Monitor    "Monitor0"
      EndSection
      Section "InputDevice"
          Identifier "touchpad"
          Driver     "synaptics"
          Option     "CorePointer"
      EndSection
      Section "ServerLayout"
          Identifier  "Builtin Default Layout"
          Screen      "Builtin Default intel Screen 0"
          InputDevice "touchpad"
      EndSection
      =$ xdpyinfo | grep -E resolution\|dimensions
      dimensions:  1680x1050 pixels (444x277 millimeters)
      resolution:  96x96 dots per inch
    I tried forcing DPI in system settings (to 120), and adding the monitor size to xorg.conf - so far nothing has helped. Any idea what I should do to make it work sanely again?
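
    When only some code points render wrong, it's usually a substitution issue - the missing glyphs get pulled from a different font. fontconfig can show what's actually being picked; a quick sketch:

      fc-list | grep -i terminus     # which Terminus files fontconfig knows about
      fc-match Terminus              # the font that actually gets matched
      fc-match -s Terminus | head    # the substitution chain, in order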

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition; together they formed one volume group. On top of that I had two LVs, one for my / directory and one for my /home directory. Yesterday the HDD holding my / directory failed. I'm trying to recover at least my /home directory. What I've done so far:
      1. Boot a live system
      2. Extract the LVM2 metadata from the working HDD using dd
      3. Copy the metadata to /etc/lvm/backup/vg0
    Now I'm trying to do this:
      pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2
    But I always get:
      Couldn't find device with uuid '[uuid of broken hdd]'.
      Couldn't find device with uuid '[uuid of working hdd]'.
      Device /dev/sdb2 not found (or ignored by filtering).
    I confirmed that /dev/sdb2 exists and I've commented out all filtering settings in /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: What might be the problem? Is it even possible to restore this partition? (As I'm writing this, I'm starting to think it's impossible D:) Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD everything worked 'fine'; I could mount my LVM partitions partially without problems. (I will answer the question in 4 hours. But if anyone still has some hints/tips, please share! :)
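
    For the record, the usual recovery sequence with a metadata backup looks roughly like this; a sketch (on current LVM2 the option is spelled --restorefile, and the UUID must be the one recorded for that PV inside the backup file):

      pvcreate --uuid "<pv uuid from backup>" --restorefile /etc/lvm/backup/vg0 /dev/sdb2
      vgcfgrestore -f /etc/lvm/backup/vg0 vg0    # rewrite the VG metadata
      vgchange -ay vg0                           # activate whatever LVs survive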

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage the mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, puppet will try to change it, and I don't want that. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point? This is what I'm using right now (with the stray syntax errors in the original cleaned up):
      define extra_mount_point(
          $device,
          $location = "/mnt",
          $fstype   = "xfs",
          $owner    = "root",
          $group    = "root",
          $mode     = 0755,
          $seltype  = "public_content_t",
          $options  = "ro,relatime,nosuid,nodev,noexec",
      ) {
          file { "${location}/${name}":
              ensure  => directory,
              owner   => "${owner}",
              group   => "${group}",
              mode    => $mode,
              seltype => "${seltype}",
          }
          mount { "${location}/${name}":
              atboot  => true,
              ensure  => mounted,
              device  => "${device}",
              fstype  => "${fstype}",
              options => "${options}",
              dump    => 0,
              pass    => 2,
              require => File["${location}/${name}"],
          }
      }
      extra_mount_point { "sda3":
          device  => "/dev/sda3",
          fstype  => "xfs",
          owner   => "ciupicri",
          group   => "ciupicri",
          options => "relatime,nosuid,nodev,noexec",
      }
    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
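
    Since puppet only enforces the attributes a resource actually declares, one way to stop it touching permissions is simply to leave owner/group/mode out of the file resource; a sketch of the idea:

      file { "${location}/${name}":
          ensure => directory,
          # no owner/group/mode/seltype here, so puppet creates the directory
          # if it's missing but leaves existing ownership and permissions alone
      }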

  • Use Aladdin eToken with Thunderbird and other tools

    - by Yurij73
    I'm looking for an example of how to set up the eToken PRO Java device to work with Mozilla Thunderbird and with other Linux tools such as PAM logon. I installed the distributed pkiclient-5.00.28-0.i386.RPM from the official eToken PRO product page, but that tool only handles importing/exporting certificates on the device. I had a glance at an old eToken-on-Linux HOWTO, but I couldn't install the pkcs11 library for this device as recommended for Thunderbird to use this crypto device. It seems my USB token isn't listed in the system, even though lsusb shows it, so that is the matter:
      modutil -list -dbdir /etc/pki/nssdb
      Listing of PKCS #11 Modules
      NSS Internal PKCS #11 Module
        slots: 2 slots attached
        status: loaded
        slot: NSS User Private Key and Certificate Services
        token: NSS Certificate DB
      CoolKey PKCS #11 Module
        library name: libcoolkeypk11.so
        slots: 1 slot attached
        status: loaded
        slot: AKS ifdh [Main Interface] 00 00
        token:
    Is my token absent? On the other hand, I don't know which module is suited to the eToken PRO Java - does CoolKey do the whole job? Is the Java token too new a piece of hardware for Linux? Here is an excerpt from /etc/pam_pkcs11.conf:
      # filename of the PKCS #11 module. The default value is "default"
      use_pkcs11_module = coolkey;
      screen_savers = gnome-screensaver,xscreensaver,kscreensaver
      pkcs11_module coolkey {
          module = libcoolkeypk11.so;
          description = "Cool Key";
      }
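
    Thunderbird (via NSS) can load the vendor's PKCS#11 module directly instead of going through CoolKey; a sketch - the library filename and path here are assumptions (check what the pkiclient RPM actually installed, e.g. with rpm -ql pkiclient):

      modutil -dbdir /etc/pki/nssdb -add "eToken" -libfile /usr/lib/libeTPkcs11.so
      modutil -list -dbdir /etc/pki/nssdb    # the new module should now be listed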

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors:
      full backup started for directory /www
      Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
      Xfer PIDs are now 30395
      Read EOF: Connection reset by peer
      Tried again: got 0 bytes
      Done: 0 files, 0 bytes
      Got fatal error during xfer (Unable to read 4 bytes)
      Backup aborted (Unable to read 4 bytes)
      Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)
    First of all, yes, I have my SSH keys set up to allow me to ssh to the target server without a password. While troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the SSH debug messages I get:
      debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
      Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0
    Next I started looking at the rsync flags. I did not recognize --server and --sender, and sure enough, there's nothing about --server or --sender in the rsync man pages. What are those for? Looking at the BackupPC config I have this:
      RsyncClientPath = /usr/bin/rsync
      RsyncClientCmd  = $sshPath -q -x -l root $host $rsyncPath $argList+
    And for the arguments, I have the following listed:
      --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive
    Notice there is no --server, --sender or --ignore-times. Why are these getting added in? Are they part of the problem?
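
    For what it's worth, --server and --sender are rsync's internal options: a client normally adds them itself when it spawns rsync on the remote end (server mode, sending side), so BackupPC injecting them is expected. Given that the manual ssh test hangs, a sketch of how to check that the remote side can run rsync non-interactively at all:

      ssh -l root target /usr/bin/rsync --version   # should print a version banner and exit
      ssh -l root target 'echo ok'                  # rules out shell-startup hangs or prompts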

  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message: All TAP-Win32 adapters on this system are currently in use. Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. Now I've run addtap.bat, which is provided with OpenVPN, but I still don't see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:
      C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
      Device node created. Install is complete when drivers are updated...
      Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
      Drivers updated successfully.
    I've Run As Administrator both the OpenVPN setup and addtap.bat. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7 4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I first got it. Everything else seems to work fine. == UPDATE == The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with 64-bit Windows 7. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because the GUI installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.
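
    Since tapinstall.exe is essentially a rebranded devcon, it can also list and remove the adapters it knows about, which helps confirm what's really installed; a sketch from an elevated prompt (the hardware ID may be tap0901 rather than tap0801 with newer driver packages):

      "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" hwids tap0801
      "C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" remove tap0801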

  • ifcfg-ethx problem

    - by Shahmir Javaid
    Every time I run service network restart, this is what I get:
      Shutting down interface eth0:  Device state: 3 (disconnected)  [ OK ]
      Shutting down interface eth1:  [ OK ]
      Shutting down loopback interface:
      Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
      (the line above is repeated four times)  [ OK ]
      Bringing up loopback interface:
      Error org.freedesktop.NetworkManagerSettings.InvalidConnection: ifcfg file '/etc/sysconfig/network-scripts/ifcfg-lo' unknown
      (the line above is repeated four times)  [ OK ]
      Bringing up interface eth0:
      ** (process:12951): WARNING **: fetch_connections_done: error fetching user connections: (2) The name org.freedesktop.NetworkManagerUserSettings was not provided by any .service files.
      Active connection state: activating
      Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/1
      state: activated
      Connection activated  [ OK ]
    Here is my ifcfg-eth0:
      # Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller
      DEVICE=eth0
      BOOTPROTO=dhcp
      DEFROUTE=yes
      DHCPCLASS=
      HWADDR=xxx
      IPV4_FAILURE_FATAL=yes
      IPV6INIT=no
      ONBOOT=yes
      OPTIONS=layer2=1
      PEERDNS=yes
      PEERROUTES=yes
      TYPE=Ethernet
      UUID=xxx
    And my ifcfg-eth1:
      # Intel Corporation 82541PI Gigabit Ethernet Controller
      DEVICE=eth1
      HWADDR=xxx
      ONBOOT=no
    And my ifcfg-lo:
      DEVICE=lo
      IPADDR=127.0.0.1
      NETMASK=255.0.0.0
      NETWORK=127.0.0.0
      # If you're having problems with gated making 127.0.0.0/8 a martian,
      # you can change this to something else (255.255.255.255, for example)
      BROADCAST=127.255.255.255
      ONBOOT=yes
      NAME=loopback
    Any ideas?
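
    The InvalidConnection errors come from NetworkManager trying to parse ifcfg-lo, which it doesn't understand; a commonly suggested workaround (a sketch, not verified on this exact setup) is to mark the file as not NM-managed:

      echo "NM_CONTROLLED=no" >> /etc/sysconfig/network-scripts/ifcfg-lo
      service network restart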

  • Does Guest WiFi on an Access Point make any sense?

    - by uos??
    I have a Belkin WiFi router which offers a secondary guest-access WiFi network. The idea, of course, is that the guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued cable modem/router device with multiple wired ports but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast router and the Belkin router, so I want to disable the routing functions of the Belkin and let the Comcast router handle them. But if I disable the routing functions of the Belkin device, the guest WiFi network is still available. Is this configuration just as secure as when the Belkin acts as a router? I guess the question comes down to this: Do guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet? 1) would mean that guest WiFi on an access point provides no benefit. 2) would mean that the guest WiFi functionality can work even if the device is just an access point. Or maybe something else entirely?

  • Xmodmap fails to remap modifier keys

    - by ZyX
    When I try to move keys so that I have CapsLock on Escape, Control on CapsLock, and Escape on left Control, I get the following error:
      % xmodmap ~/.Xmodmap
      X Error of failed request:  BadValue (integer parameter out of range for operation)
        Major opcode of failed request:  118 (X_SetModifierMapping)
        Value in failed request:  0x17
        Serial number of failed request:  15
        Current serial number in output stream:  15
    That is the code that fails:
      remove Lock = Caps_Lock
      ! ESC
      keycode 9 = Caps_Lock
      add Lock = Caps_Lock
      remove Control = Control_L
      ! CapsLock
      keycode 66 = Control_L
      add control = Control_L
      ! Control_R
      keycode 37 = Escape
      ! 2*Meta_L
      keycode 148 = Meta_L
      add mod1 = Meta_L
    If I comment out all lines that start with either add or remove, it runs without any errors, but does not do what I want. Program versions (Gentoo x86, stable):
      xorg-server-1.7.6
      xmodmap-1.0.4
      xf86-input-evdev-2.3.2
    Xorg.conf:
      # nvidia-xconfig: X configuration file generated by nvidia-xconfig
      # nvidia-xconfig: version 1.0 (buildmeister@builder63) Fri Aug 14 17:54:58 PDT 2009
      Section "ServerLayout"
          Identifier     "Layout0"
          Screen      0  "Screen0"
          InputDevice    "Evdev Keyboard" "CoreKeyboard"
          InputDevice    "Evdev Mouse" "CorePointer"
      EndSection
      Section "Module"
          Disable "dri"
          Disable "dri2"
          Disable "record"
      EndSection
      Section "InputDevice"
          Identifier "Evdev Keyboard"
          Driver     "evdev"
          Option     "Device" "/dev/input/event2"
          Option     "CoreKeyboard"
          Option     "AutoRepeat" "500 25"
          Option     "XkbRules" "xorg"
          Option     "xkb_rules" "xorg"
          Option     "XkbModel" "yahoo"
          Option     "xkb_model" "yahoo"
          Option     "XkbLayout" "dvp2" # ,ru2
          Option     "xkb_layout" "dvp2" # ,ru2
          # Option   "XkbVariant" "" # ,winkeys
          Option     "XkbOption" "grp_led:scroll,grp:rctrl_toggle,compose:rwin,grp:lwin_switch" # grp:lwin_switch
      EndSection
      Section "InputDevice"
          Identifier "Evdev Mouse"
          Driver     "evdev"
          Option     "CorePointer"
          Option     "Device" "/dev/input/event3"
          Option     "Name" "Genius Ergo Mouse"
          Option     "HWHEELRelativeAxisButtons" "7 6"
          Option     "WHEELRelativeAxizButtons" "4 5"
          Option     "SendCoreEvents" "true"
          Option     "Buttons" "11"
      EndSection
      Section "Files"
          FontPath "/usr/share/fonts/misc"
          FontPath "/usr/share/fonts/Type1"
          FontPath "/usr/share/fonts/100dpi"
          FontPath "/usr/share/fonts/75dpi"
          FontPath "/usr/share/fonts/terminus"
          # FontPath "/usr/share/fonts/intlfonts"
          FontPath "/usr/share/fonts/ttf-bitstream-vera"
          # FontPath "/usr/share/fonts/ttf"
          FontPath "/usr/share/fonts/corefonts"
          FontPath "/usr/share/fonts/paratype"
      EndSection
      Section "Monitor"
          Identifier   "Monitor0"
          VendorName   "Unknown"
          ModelName    "Unknown"
          HorizSync    28.0 - 33.0
          VertRefresh  43.0 - 72.0
          Option       "DPMS"
      EndSection
      Section "Device"
          Identifier "Device0"
          Driver     "nvidia"
          VendorName "NVIDIA Corporation"
      EndSection
      Section "Screen"
          Identifier   "Screen0"
          Device       "Device0"
          Monitor      "Monitor0"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection
      Section "Extensions"
          Option "Composite" "Disable"
      EndSection
      Section "ServerFlags"
          # Option "XkbDisable" "false"
          # Option "AutoAddDevices" "false"
          Option "DontVTSwitch" "false"
          Option "DontZap" "false"
          # Option "DontZoom" "true"
      EndSection
    Everything worked before the update.
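
    For what it's worth, the same remapping is often written with clear instead of paired remove/add lines, which avoids some ordering pitfalls when a keysym is being moved between modifiers; a sketch of the alternative, using the same keycodes as above:

      clear lock
      clear control
      keycode 9   = Caps_Lock
      keycode 66  = Control_L
      keycode 37  = Escape
      keycode 148 = Meta_L
      add lock    = Caps_Lock
      add control = Control_L
      add mod1    = Meta_L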

  • Can't load IA 32-bit .dll on an AMD 64-bit platform

    - by user101425
    I have a Windows 2003 64-bit terminal server from which we run a Java application. The application had always worked, up until 2 days ago. No new updates have been installed on the server in that time frame. I have tried reinstalling 64-bit Java but still get the following error:
      Unexpected exception: java.lang.reflect.InvocationTargetException
      java.lang.reflect.InvocationTargetException
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
          at java.lang.reflect.Method.invoke(Unknown Source)
          at com.sun.javaws.Launcher.executeApplication(Unknown Source)
          at com.sun.javaws.Launcher.executeMainClass(Unknown Source)
          at com.sun.javaws.Launcher.doLaunchApp(Unknown Source)
          at com.sun.javaws.Launcher.run(Unknown Source)
          at java.lang.Thread.run(Unknown Source)
      Caused by: java.lang.UnsatisfiedLinkError: C:\Documents and Settings\administrator\Application Data\Sun\Java\Deployment\cache\6.0\19\625835d3-5826d302-n\swt-win32-3116.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
          at java.lang.ClassLoader$NativeLibrary.load(Native Method)
          at java.lang.ClassLoader.loadLibrary0(Unknown Source)
          at java.lang.ClassLoader.loadLibrary(Unknown Source)
          at java.lang.Runtime.loadLibrary0(Unknown Source)
          at java.lang.System.loadLibrary(Unknown Source)
          at org.eclipse.swt.internal.Library.loadLibrary(Library.java:100)
          at org.eclipse.swt.internal.win32.OS.<clinit>(OS.java:18)
          at org.eclipse.swt.graphics.Device.init(Device.java:563)
          at org.eclipse.swt.widgets.Display.init(Display.java:1784)
          at org.eclipse.swt.graphics.Device.<init>(Device.java:99)
          at org.eclipse.swt.widgets.Display.<init>(Display.java:363)
          at org.eclipse.swt.widgets.Display.<init>(Display.java:359)
          at com.ko.StartKO.main(StartKO.java:57)
          ... 9 more
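
    The UnsatisfiedLinkError says the Web Start app ships a 32-bit SWT native library (swt-win32-3116.dll), which a 64-bit JVM cannot load; launching with a 32-bit JRE sidesteps that. A sketch (the JRE path and jnlp URL are assumptions):

      rem launch with 32-bit Java Web Start so the 32-bit swt dll can load
      "C:\Program Files (x86)\Java\jre6\bin\javaws.exe" http://server/app.jnlp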

  • Dell 2970 - HP 1/8 G2 autoloader keeps falling off LSI 2032 SCSI chain

    - by middaparka
    I've a somewhat irritating problem with a Dell 2970 that has an HP 1/8 G2 autoloader (the Ultrium LTO 2 model) attached to the Dell/LSI 2032 non-RAID SCSI card. In essence, sometimes the autoloader/drive completely fails to appear on the SCSI chain (i.e.: there's neither a media changer nor a tape drive present within the device manager) and sometimes it appears but then subsequently disappears at a seemingly random (yet always inconvenient) time, resulting in backup failures. On most occasions there are simply no errors logged in the system event log, but I did manage to capture a series of LSI_SCSI event ID 11 ("The driver detected a controller error on \Device\RaidPort0") errors followed by an event ID 129 ("Reset to device, \Device\RaidPort0, was issued") error during testing. I've tried two different cables, both with the same effect - sometimes the autoloader appears (for a while), sometimes it's completely absent. There's only one terminator I've been able to try, but as I've since successfully tested the autoloader on multiple occasions (albeit via an Adaptec U160 card on a different machine), my gut feeling is that the issue doesn't lie with the terminator, or indeed the autoloader itself. As such, I'm just wondering if anyone has any ideas? It's most likely not relevant, but this is all under Windows SBS 2008, running Backup Exec 12.5 SBS edition (the Dell version), both fully patched. Additionally, the autoloader is running the latest firmware. It's been a while since I've dealt with anything SCSI, so all suggestions will be gratefully received.

  • How to create a partition when growing RAID5 with mdadm

    - by hometoast
    I have 4 drives: 2x640GB and 2x1TB. My array is made up of four 640GB partitions, one at the beginning of each drive. I want to replace both 640GB drives with 1TB drives. I understand I need to:
      1) fail a disk
      2) replace it with the new one
      3) partition it
      4) add the disk back to the array
    My question is: when I create the new partition on the new 1TB drive, do I create a 1TB "Linux raid autodetect" partition? Or do I create another 640GB partition and grow it later? Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB raid partitions to fill the rest of each 1TB drive? fdisk info:
      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xe3d0900f
         Device Boot  Start     End      Blocks  Id  System
      /dev/sdb1           1   77825   625129281  fd  Linux raid autodetect
      /dev/sdb2       77826  121601   351630720  83  Linux
      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xc0b23adf
         Device Boot  Start     End      Blocks  Id  System
      /dev/sdc1           1   77825   625129281  fd  Linux raid autodetect
      /dev/sdc2       77826  121601   351630720  83  Linux
      Disk /dev/sdd: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x582c8b94
         Device Boot  Start     End      Blocks  Id  System
      /dev/sdd1           1   77825   625129281  fd  Linux raid autodetect
      Disk /dev/sde: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xbc33313a
         Device Boot  Start     End      Blocks  Id  System
      /dev/sde1           1   77825   625129281  fd  Linux raid autodetect
      Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
      2 heads, 4 sectors/track, 468846912 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Disk identifier: 0x00000000
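
    The usual pattern is to give each replacement drive a single full-size raid-autodetect partition up front, then grow the array once every member is large; a sketch (device names follow the fdisk output above):

      mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1   # retire one old member
      # partition the new 1TB disk with one full-size type-fd partition, then:
      mdadm /dev/md0 --add /dev/sdd1                       # rebuild onto the new disk
      # after all members have been replaced and resynced:
      mdadm --grow /dev/md0 --size=max                     # expand to the new capacity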

  • With nginx, how to rewrite only the base URL to https

    - by jchysk
    I'd like only my base domain www.domain.com to be rewritten to https://www.domain.com. By default, my https block reroutes to http:// if the request is neither the base domain ($uri = "/") nor static content:
      server {
          listen 443;
          set $ssltoggle 2;
          if ($uri ~ ^/(img|js|css|static)/) {
              set $ssltoggle 1;
          }
          if ($uri = '/') {
              set $ssltoggle 1;
          }
          if ($ssltoggle != 1) {
              rewrite ^(.*)$ http://$server_name$1 permanent;
          }
      }
    So in my http block I need to do the rewrite to https for the base URL:
      server {
          listen 80;
          if ($uri = '/') {
              set $ssltoggle 1;
          }
          if ($ssltoggle = 1) {
              rewrite ^(.*)$ https://$server_name$1 permanent;
          }
      }
    If I don't have the $uri = '/' if-statement in the http block, then https works fine if I go directly to it, but I won't get redirected if I go to regular http, which is expected. If I do put that if-statement in the http block, then everything stops working within minutes. It might work for a few requests, but it always stops within a minute or so. In browsers I just get a blank page for all requests. If I restart nginx, it continues to not work until I remove both if-statement blocks from the https and http blocks and restart nginx. When I look in the error logs I don't see anything logged. When I look in the access log I see this message: "-" 400 0 "-" "-", which I assume means a 400 error. I don't understand why this doesn't work for me. My end goal is to have the base domain be https-only while all other pages default to http. How can I achieve this?
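
    Exact-match location blocks avoid the if/rewrite machinery entirely, which is usually the less fragile way to express this kind of rule; a sketch of the idea (server names and paths are placeholders):

      server {
          listen 80;
          server_name www.domain.com;
          location = / {
              # only the bare root is pushed to https
              return 301 https://$server_name/;
          }
          location / {
              # everything else is served over plain http as usual
              root /srv/www/domain;
          }
      }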
