Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

  • Disk is spinning down each minute, unable to disable it

    - by lzap
    I played with the spindown and APM settings of my Samsung disks and now they spin down every minute. I want to disable this, but the drive does not seem to accept any of the spindown-time or APM values I set - nothing works, it's all the same. What values would be proper for it? I do not want it to spin down at all.

        /dev/sda:
        ATA device, with non-removable media
            Model Number:       SAMSUNG HD154UI
            Serial Number:      S1Y6J1KZ206527
            Firmware Revision:  1AG01118
        Standards:
            Used: ATA-8-ACS revision 3b
            Supported: 7 6 5 4
        Configuration:
            Logical         max     current
            cylinders       16383   16383
            heads           16      16
            sectors/track   63      63
            --
            CHS current addressable sectors:   16514064
            LBA    user addressable sectors:  268435455
            LBA48  user addressable sectors: 2930277168
            Logical/Physical Sector size:     512 bytes
            device size with M = 1024*1024:     1430799 MBytes
            device size with M = 1000*1000:     1500301 MBytes (1500 GB)
            cache/buffer size = unknown
        Capabilities:
            LBA, IORDY(can be disabled)
            Queue depth: 32
            Standby timer values: spec'd by Standard, no device specific minimum
            R/W multiple sector transfer: Max = 16  Current = 16
            Advanced power management level: 60
            Recommended acoustic management value: 254, current value: 0
            DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 udma7
                 Cycle time: min=120ns recommended=120ns
            PIO: pio0 pio1 pio2 pio3 pio4
                 Cycle time: no flow control=120ns  IORDY flow control=120ns
        Commands/features:
            Enabled Supported:
               *    SMART feature set
                    Security Mode feature set
               *    Power Management feature set
               *    Write cache
               *    Look-ahead
               *    Host Protected Area feature set
               *    WRITE_BUFFER command
               *    READ_BUFFER command
               *    NOP cmd
               *    DOWNLOAD_MICROCODE
               *    Advanced Power Management feature set
                    Power-Up In Standby feature set
               *    SET_FEATURES required to spinup after power up
                    SET_MAX security extension
                    Automatic Acoustic Management feature set
               *    48-bit Address feature set
               *    Device Configuration Overlay feature set
               *    Mandatory FLUSH_CACHE
               *    FLUSH_CACHE_EXT
               *    SMART error logging
               *    SMART self-test
                    Media Card Pass-Through
               *    General Purpose Logging feature set
               *    64-bit World wide name
               *    WRITE_UNCORRECTABLE_EXT command
               *    {READ,WRITE}_DMA_EXT_GPL commands
               *    Segmented DOWNLOAD_MICROCODE
               *    Gen1 signaling speed (1.5Gb/s)
               *    Gen2 signaling speed (3.0Gb/s)
               *    Native Command Queueing (NCQ)
               *    Host-initiated interface power management
               *    Phy event counters
               *    NCQ priority information
                    DMA Setup Auto-Activate optimization
                    Device-initiated interface power management
               *    Software settings preservation
               *    SMART Command Transport (SCT) feature set
               *    SCT Long Sector Access (AC1)
               *    SCT LBA Segment Access (AC2)
               *    SCT Error Recovery Control (AC3)
               *    SCT Features Control (AC4)
               *    SCT Data Tables (AC5)
        Security:
            Master password revision code = 65534
                supported
            not enabled
            not locked
                frozen
            not expired: security count
                supported: enhanced erase
            326min for SECURITY ERASE UNIT. 326min for ENHANCED SECURITY ERASE UNIT.
        Logical Unit WWN Device Identifier: 50024e900300cca3
            NAA       : 5
            IEEE OUI  : 0024e9
            Unique ID : 00300cca3
        Checksum: correct

    I have an identical disk which I did not "tune", and it does not spin down - but I do not know where to read its settings from. hdparm only shows this:

        Advanced power management level: 60
        Recommended acoustic management value: 254, current value: 0

    Edit: It seems the issue was the tuned daemon in RHEL 6. It was too aggressive; once I turned off its disk tuning, the disks no longer spin down.
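
    For reference, a minimal way to ask a drive never to spin down is to clear both the standby timer and APM with hdparm. These are standard hdparm flags, though whether a given firmware honours them is another matter:

        # -S 0 disables the standby (spindown) timeout entirely
        hdparm -S 0 /dev/sda
        # -B 255 turns Advanced Power Management off (254 = highest performance with APM still on)
        hdparm -B 255 /dev/sda

    Settings made this way are lost on power cycle, so they would normally be reapplied from a boot script or udev rule.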

  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:

    - showkey from the built-in kbd package (in a separate VT) - macro keys not detected
    - xev - macro keys not detected
    - lsusb and evdev output
    - this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try (see the evdev sketch below):

    - SnoopyPro + reverse engineering (oh dear)
    - Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I take to be the keyboard is monitored and keys pressed. Might indicate the additional keys are a separate device or need to be initialised somehow. Need to cross-reference that with lsusb output from Linux, in 3 scenarios: standalone, passed through to a Windows VM without the drivers installed, and the same with them. lsusb only detects one device on a standalone Linux install.
    - It might be useful to check if the mice use the same Razer Synapse driver, since that means some variation of razercfg might work (not detected; it only seems to work for mice).

    Things I have worked out:

    - In a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device. And said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse.
    - Mouse driver seen in Linux under evdev and lsusb as well.
    - Single device under OS X, apparently, though I have yet to try an lsusb equivalent on that.
    - The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This should probably indicate that there's some initialisation sequence sent to the keyboard on activation.
    - They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware. I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills but no access to this keyboard.

    The minimum end result I require: I need these keys detected, and usable in any fashion, on any of the current mainstream graphical Ubuntu variants.
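
    A low-level way to check whether the extra keys emit anything at all is to watch the event devices directly, below the X layer - a sketch (the event node number will differ per machine; evtest is in the evtest package):

        # list input devices to find the keyboard's event node(s)
        cat /proc/bus/input/devices
        # watch one node for raw key events while pressing the M keys
        sudo evtest /dev/input/event3
        # inside X, the same check per-device:
        xinput list
        xinput test <device-id>

    If evtest shows nothing for the M keys on any node, that supports the "separate device needing initialisation" theory rather than a keymap problem.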

  • Why can I not get a WDS-originated PXE boot to progress past the first file download?

    - by Jeff Shattock
    I'm trying to work out an automated Windows install process, and thought I'd give WDS a look. After some promising initial progress, I seem to have hit a wall. I imported the boot and install WIMs, and created the capture WIM successfully. However, whenever I try to PXE boot the reference machine against the WDS server, it kinda craps out. It finds the server and downloads WDSNBP.COM successfully, and then gives the message "TFTP download failed." According to Wireshark, the only communication between the WDS box and the client box is the successful TFTP request and download of boot\x86\WDSNBP.COM. No further requests are sent. The WDS log on the server shows the same thing: one successful download and no more activity.

    I've tried every combination of the following, with exactly zero change in behaviour:

    - Win Server 2008R2 vs 2012 vs 2012R2
    - WDS virtualized on KVM, ESXi, VirtualBox, VMware Workstation
    - Client virtualized on KVM, ESXi, VirtualBox, VMware Workstation
    - Every network adaptor type offered by the virtualization platforms
    - "Actual" network vs isolated, virtual network
    - MS DHCP server vs Linux isc-dhcp-server
    - Joined to a domain vs stand-alone

    I tried changing the boot filename in DHCP to pxeboot.com instead, and it has no problem downloading that file, but it then crabs about Boot\BCD being corrupted. Also, with 2012 it doesn't appear that WDSNBP.COM does the architecture detection, or at least doesn't report that it did. 2008 reports that it found x64, and then errors.

    I find myself out of things to check, and I don't see anything immediately wrong. Where do I go from here? WDS server is at 192.168.1.50, DHCP/DNS at 192.168.1.7. Console of the client computer after the boot:

        MAC: 52:54:00:28:94:0E UUID: blah blah
        Searching for server (DHCP).....
        Me: 192.168.1.155, DHCP: 192.168.1.7, Gateway 192.168.1.1
        Loading 192.168.1.50:boot\x86\wdsnbp.com ...(PXE).................done
        Downloaded WDSNCP...
        TFPT download failed

    Interesting parts of /etc/dhcp/dhcpd.conf on the Linux DHCP server:

        allow booting;
        allow bootp;
        option option-60 code 60 = string;
        option option-66 code 66 = string;
        option option-67 code 67 = string;
        subnet 192.168.1.0 netmask 255.255.255.0 {
            range 192.168.1.110 192.168.1.253;
            next-server 192.168.1.50;
            option tftp-server-name "192.168.1.50";
            option option-60 "PXEClient";
            filename "boot\\x86\\wdsnbp.com";
            option bootfile-name "boot\\x86\\wdsnbp.com";
        }
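
    One way to take PXE itself out of the equation is to fetch the next file in the chain by hand from any Linux box with a TFTP client; if WDSNBP's follow-up request is what fails, this often reproduces it. A sketch using the IP and path from above (tftp-hpa syntax; other clients differ):

        # pull the boot manager the same way the NBP would
        tftp 192.168.1.50 -c get 'boot\x86\wdsnbp.com'
        # and watch the initial TFTP requests on the WDS side while a client boots
        # (the data transfer itself moves to high UDP ports after the first packet)
        tcpdump -n -i eth0 udp port 69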

  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine.

    The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:

        Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority

    All that's great and good, and I authenticate. Then I get:

        CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
        Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000

    If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value, and the NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attr apple-user-homeurl to

        <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>

    which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point.

    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam
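
    For reference, the attribute pair described above would look something like this as an LDIF modify. The DN is hypothetical (not from the question); the values mirror the paths given above:

        dn: uid=sam,ou=Users,dc=example,dc=com
        changetype: modify
        replace: apple-user-homeDirectory
        apple-user-homeDirectory: /Network/Servers/172.17.148.186/home/sam
        -
        replace: apple-user-homeurl
        apple-user-homeurl: <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>

    Applied with ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -f file.ldif (bind DN again illustrative).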

  • Sendmail - Multiple Domains, One Box - Blocking One Or Two Domains

    - by TangoOversway
    I have a number of domains hosted at a web hosting service, which uses sendmail to handle incoming email. I have six domains on this service (which we can call aaa.com, bbb.com and so on). Each email account has the same name and one email box: in other words, [email protected], [email protected], [email protected] and all the others go into one box, /var/spool/mail/tango, where the email program on my desktop picks them up.

    I have done very little work in sendmail. I haven't had to, and I've been warned it's a steep learning curve. But now I'm running into an issue. I was in a business situation where, for years, my email address was on the website for aaa.com. (We won't go into why this was necessary - it wasn't my preference and it's in the past.) Now I'm using [email protected] instead of [email protected]. I was getting about 1,000 or more pieces of spam a day, but SpamAssassin and my own email program caught about 75% of that. (Which still left stuff to delete.) Now, after checking, I see that 90% or more goes to [email protected], the address that was on the web for years.

    I'd like to deactivate [email protected], and possibly [email protected] and [email protected], but keep using [email protected]. Remember, email to tango at any of these domains will go into one email box. I've had people tell me that sendmail can be configured so I can deactivate [email protected] (and other domains) and still use [email protected] (and others, if I want to) - in other words, configure sendmail to use this account on some domains and not others. One of the people who was telling me this was in tech support at the hosting service. But I wrote to tech support with a work order to do this and now I'm told it can't be done.

    I can modify config files myself on this account if needed, but I was hoping to just let them do it. (I love delegation -- it means I spend more time doing my stuff.) Is it possible to keep an email account active on one domain and not others with sendmail, when all domains are hosted on the same server? Is there a name for this process or setting? Any information would be helpful - either pointers to instructions so I can do it, or enough info so I can tell tech support, "This is where to look, and it can be done, so please pass my request on to someone who works with sendmail and knows how to do it." Is this something sendmail can do?
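
    The setting being described sounds like sendmail's virtusertable (per-domain virtual user aliasing). A sketch, with purely illustrative addresses standing in for the redacted ones, assuming FEATURE(`virtusertable') is enabled in the sendmail .mc file and the domains are listed in /etc/mail/local-host-names:

        # /etc/mail/virtusertable - keep the account on the new domain,
        # reject it on the old ones
        tango@ddd.com      tango
        tango@aaa.com      error:nouser No such user here
        tango@bbb.com      error:nouser No such user here

        # rebuild the database after editing
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable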

  • AIX: iscsi volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box, with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in its own volume group, and is connected to an IBM DS3300 iSCSI disk array via the secondary onboard ethernet port (i.e., we're not using a dedicated HBA; we're using the second onboard ethernet port for iSCSI exclusively).

    When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting its volumes before starting the networking stack). The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk, which means the volume group (iscsivg) can't go online. If I run cfgmgr -v to redetect the iSCSI volume, it successfully detects the iSCSI target and creates a physical volume reference, but allocates it a different volume ID to what was defined before. E.g.:

    - rootvg contains hdisk0 and hdisk1
    - iscsivg was originally defined with hdisk2 as the physical iSCSI volume
    - after reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk1 and hdisk3

    As there's no hdisk2, I can't varyon the iscsivg volume group. I can't see any existing hdisk2 definition in the ODM, and I can't easily add or change the definition of the physical disk in the iscsivg volume group as it won't varyon. Exporting the volume group deletes it completely; recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way?

    Can I force a specific hdisk designation for an iSCSI target? How do you bring iSCSI volumes online after a reboot? I assume this "just works" with a dedicated HBA instead of a generic ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working - and only with AIX. The iSCSI array works fine with our Linux and Windows servers; i.e. the volumes get detected and remounted after boot without any problems, using generic ethernet adapters.

    Here's some of the config from the AIX box. Defined disks / devices:

        # lsdev
        hdisk0    Available 06-08-01-5,0  16 Bit LVD SCSI Disk Drive
        hdisk1    Available 06-08-01-8,0  16 Bit LVD SCSI Disk Drive
        hdisk3    Available               Other iSCSI Disk Drive
        iscsi0    Available               iSCSI Protocol Device
        scsi0     Available 06-08-00      PCI-X Dual Channel Ultra320 SCSI Adapter bus
        scsi1     Available 06-08-01      PCI-X Dual Channel Ultra320 SCSI Adapter bus
        ses0      Available 06-08-01-15,0 SCSI Enclosure Services Device
        sisscsia0 Available 06-08         PCI-X Dual Channel Ultra320 SCSI Adapter

    iSCSI target definition in /etc/iscsi/targets:

        # IBM DS3300 disk array
        # port 1 on second controller
        10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx

    Physical volumes (after reimporting the volume group):

        # lspv
        hdisk0 0003b08a0d4936b6 rootvg  active
        hdisk1 0003b08aaa5cb366 rootvg  active
        hdisk3 0003b08a032d04bb iscsivg active
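
    One common workaround - an assumption on my part, not something I can point to in IBM docs - is to defer detection and varyon until after networking is up, for example from a late rc script or /etc/inittab entry:

        # re-run device detection for just the iSCSI protocol device once the
        # network is up, then bring the volume group online and mount
        cfgmgr -l iscsi0
        varyonvg iscsivg
        mount /iscsidata    # mount point is illustrative; fstab entry should be noauto

    With the filesystem marked not-to-mount-at-boot, the failed early mount (and the different-hdisk-number problem) is sidestepped because the VG is only imported once, and varyon finds the PVID on whatever hdisk name the disk received.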

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I've got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server, and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files, including the file name, the folder the file was stored in, and the exact date the file was deleted.

    It is easy for me to restore the file from within the DPM console, since we have a recovery point created every day: I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that in the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100's of files restored... it takes all day of redundant mouse clicks.

    Therefore, I want to use a PowerShell script (and I'm a novice at PowerShell) to automate this process. I want to create a script that I pass a file name, a folder, a recovery point date (and a protection group/server name if needed), and simply have the file restored back to its original location, with some sort of success/failure notification. I thought this was a simple, basic task for a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx and tried to follow it, but it doesn't accomplish what I really want to do (it's too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data:

        DPM Server:            BACKUP01
        Protection Group:      Document Repository
        Data Protected Server: FILER01
        File Path:             R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf
        Date Deleted:          8/2/2010 (last recovery point = 8/1/2010)

    Bonus points: if you can help me not only create this script, but also show me how to automate it by providing a text file with the above information that the PowerShell script loops through - or, even better, have it query our SQL server for the needed data - then I would be more than willing to pay for this development.
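
    A rough sketch of the flow with the DPM 2010 cmdlets, using the values above. This is untested; the parameter combinations (especially for Get-RecoverableItem and New-RecoveryOption) are assumptions to verify against each cmdlet's built-in help:

        $pg = Get-ProtectionGroup -DPMServerName "BACKUP01" |
              Where-Object { $_.FriendlyName -eq "Document Repository" }
        $ds = Get-Datasource -ProtectionGroup $pg |
              Where-Object { $_.ProductionServerName -match "FILER01" }
        $rp = Get-RecoveryPoint -Datasource $ds |
              Where-Object { $_.RepresentedPointInTime.Date -eq [datetime]"8/1/2010" }
        # browse down to the deleted file inside the recovery point
        $item = Get-RecoverableItem -RecoverableItem $rp -BrowseType Child |
                Where-Object { $_.UserFriendlyName -eq "filename.pdf" }
        # restore to the original location, overwriting if present
        $opt = New-RecoveryOption -TargetServer "FILER01" -RecoveryLocation OriginalServer `
               -FileSystem -RecoveryType Recover -OverwriteType Overwrite
        Recover-RecoverableItem -RecoverableItem $item -RecoveryOption $opt

    Wrapping that in a function taking file name, folder and date - then looping over Import-Csv output or a SQL query - would cover the bonus-points part.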

  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for reasons outlined in this question: "CPU0 is swamped with eth1 interrupts". But I can't - I see Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to a HOWTO on the matter. (A formal reference on /proc/irq/*/ would be cool as well.)

    Gory details - note that this is a VM inside an Ubuntu-based Xen XCP host:

        $ uname -a
        Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.04
        Release:        11.04
        Codename:       natty

        $ sudo cat /proc/irq/*/smp_affinity
        01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40 20 20 20 20 20 20
        10 10 10 10 10 10 08 08 08 08 08 08 04 04 04 04 04 04 02 02 02 02 02
        02 01 01 01 01 01 01

    Update. The error details:

        $ N=$(grep -c processor /proc/cpuinfo)
        $ echo $N
        8
        $ printf %x $((2**N-1))
        ff
        $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity
        ff
        tee: /proc/irq/288/smp_affinity: Input/output error
        tee: /proc/irq/289/smp_affinity: Input/output error
        tee: /proc/irq/290/smp_affinity: Input/output error
        [... the identical Input/output error repeats for every IRQ through 335 ...]

    Update. irqbalance is running:

        $ sudo service irqbalance status
        irqbalance start/running, process 560
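
    For what it's worth, one quick check is whether these are Xen event-channel interrupts, whose placement is managed by the hypervisor rather than the guest - my assumption for why every write returns EIO:

        # on Xen guests, NIC and timer IRQs typically appear as "xen-dyn-event"
        # or "xen-percpu" lines rather than ordinary IO-APIC entries
        grep -E 'eth|xen' /proc/interrupts

    If the eth1 line is an event channel, the affinity would have to be changed from the XCP host side (or by adding more vCPU queues), not from inside the domU.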

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    I am using Tomcat 6 as the application server, Struts with Hibernate, and MyEclipse 6.0. My application uses the JDBC driver directly, but I should modify it to use a JNDI datasource. I followed the steps described in the Tomcat 6.0 JNDI datasource how-to. I defined my resource in Tomcat's conf:

        <Resource name="jdbc/ats" global="jdbc/ats" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@//localhost:1521/MISDEV"
                  username="TEST" password="TEST"
                  maxActive="20" maxIdle="10" maxWait="-1"
                  validationQuery="SELECT 1 from dual"
                  removeAbandoned="true" removeAbandonedTimeout="30"
                  logAbandoned="false"/>

    I gave the reference in my application's web.xml:

        <resource-ref>
            <description>Oracle Datasource example</description>
            <res-ref-name>jdbc/ats</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>

    And I defined the datasource and dialect in my hibernate.cfg.xml:

        <property name="connection.datasource">java:comp/env/jdbc/ats</property>
        <property name="dialect">org.hibernate.dialect.Oracle9Dialect</property>

    But when I create the Hibernate session, it cannot open the connection:

        09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database!
        org.hibernate.exception.GenericJDBCException: Cannot open connection

    I also tried to set the properties at runtime:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    It does not open a connection either. But when I use the JDBC driver directly, it works:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV");
        configuration.setProperty("hibernate.connection.username", "test");
        configuration.setProperty("hibernate.connection.password", "test");
        configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
        configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    I have been searching for 3 days with no success. What might the problem be?
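
    One observation worth testing: in the runtime variants above, the datasource line is commented out while hibernate.connection.provider_class is set to C3P0ConnectionProvider, which asks C3P0 to build its own pool with no URL/credentials. A sketch of the container-managed variant - when hibernate.connection.datasource is set and no provider_class is given, Hibernate falls back to its datasource-based connection provider and lets Tomcat's pool do the work:

        // assumption: resource is visible to the webapp under java:comp/env/jdbc/ats
        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.show_sql", "true");
        // note: no hibernate.connection.provider_class and no C3P0 settings here
        SessionFactory sessionFactory = configuration.configure().buildSessionFactory();

    The same applies to the XML variant: pairing connection.datasource with a C3P0 provider is a plausible cause of "Connections could not be acquired from the underlying database!".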

  • Can't get Passwordless (SSH provided) SFTP working

    - by Shoaibi
    I have a chrooted sftp setup as below:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        Port 22
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        LogLevel INFO

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin without-password
        StrictModes yes
        AllowGroups admins clients

        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        #Subsystem sftp /usr/lib/openssh/sftp-server

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication.  Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes

        Subsystem sftp internal-sftp

        Match group clients
            ChrootDirectory /var/chroot-home
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    And a dummy user:

        root:~# tail -n1 /etc/passwd
        david:x:1000:1001::/david:/bin/sh

    Now in this case david can sftp using, say, the FileZilla client, and he is chrooted to /var/chroot-home/david/. But what if I want to set up passwordless (key-based) auth? I have tried pasting his key into /var/chroot-home/david/.ssh/authorized_keys, but no use. I tried ssh'ing to the box as david, and it just stops at "debug1: Sending env LC_CTYPE = C" after I supply the password, and there is nothing shown in auth.log - maybe because it can't find the homedir. If I do "su - david" as root I see "No directory, logging in with HOME=/", which makes sense. A symlink doesn't help either.

    I have also tried with:

        Match group clients
            ChrootDirectory /var/chroot-home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    and a dummy user:

        root:~# tail -n1 /etc/passwd
        david:x:1000:1001::/var/chroot-home/david:/bin/sh

    This way, if I don't change /var/chroot-home/david to root:root, sshd complains about bad ownership or permission modes; and if I do, david can no longer upload/delete anything directly in his home while using sftp from FileZilla.
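
    A common pattern for chrooted sftp-only users is to keep the key file outside the chroot, so sshd can read it regardless of the root-owned-chroot rule. A sketch - the key directory path is my choice, not canonical:

        # in sshd_config: look up keys under /etc/ssh/authorized_keys/<username>
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

        # with ChrootDirectory /var/chroot-home/%u, the chroot itself must be
        # root-owned and not group/world-writable; give the user a writable subdir
        chown root:root /var/chroot-home/david
        chmod 755 /var/chroot-home/david
        mkdir -p /var/chroot-home/david/files
        chown david:clients /var/chroot-home/david/files

    That satisfies both constraints at once: StrictModes/chroot ownership checks pass, and the user still has somewhere inside the chroot to upload to.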

  • Directory listing through FTPS (TLS) is not working

    - by Aron Rotteveel
    We recently switched our server to require TLS for every connection. This is working flawlessly so far, but one of our clients is having problems. Some facts:

    - Server uses Pure-FTPd
    - Server has a passive port range configured
    - Server has no firewall limitations regarding the FTP
    - Client uses WS FTP
    - Client is behind a router
    - Client connects to the same IP as every other, using PASSIVE mode
    - All other clients have no trouble connecting

    Because of the TLS requirement, connecting using ACTIVE mode is almost not possible, but PASSIVE is working fine for everyone except this specific client. It seems that he is able to connect, but once a LIST command is performed, things go wrong. Log:

        Finding Host <clienthost> ...
        Connecting to <serverip:21>
        Connected to <serverip:21> in 0.020000 seconds, Waiting for Server Response
        Initializing SSL Session ...
        220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 5 of 50 allowed.
        220-Local time is now 22:14. Server port: 21.
        220-This is a private system - No anonymous login
        220-IPv6 connections are also welcome on this server.
        220 You will be disconnected after 15 minutes of inactivity.
        AUTH TLS
        234 AUTH TLS OK.
        SSL session NOT set for reuse
        SSL Session Started.
        Host type (1): Automatic Detect
        USER <user>
        331 User <user> OK. Password required
        PASS (hidden)
        230-User <user> has group access to: <user>
        230 OK. Current restricted directory is /
        SYST
        215 UNIX Type: L8
        Host type (2): Unix (Standard)
        PBSZ 0
        200 PBSZ=0
        PROT P
        200 Data protection level set to "private"
        PWD
        257 "/" is your current location
        CWD /public_html
        250 OK. Current directory is /public_html
        PWD
        257 "/public_html" is your current location
        TYPE A
        200 TYPE is now ASCII
        PASV
        227 Entering Passive Mode (<serverip>,132,100)
        connecting data channel to <serverip>:132,100(33892)
        Substituting connection address <serverip> for private address <serverip> from PASV
        Using external address <customer ext. ip> instead of local address <customer int. ip> for PORT command
        PORT 82,161,56,225,195,181
        200 PORT command successful
        LIST
        Error reading response from server. It appears that the connection is dead.
        Attempting reconnect...

    Any help is appreciated.
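
    A quick way to separate a TLS problem from a data-channel/NAT problem is to test the control channel from the client's network with OpenSSL (reasonably recent builds support -starttls ftp):

        # confirm the AUTH TLS handshake completes from the client side
        openssl s_client -connect <serverip>:21 -starttls ftp

    If the handshake is fine there, the log's fallback from PASV to a PORT command is the more likely suspect: with PROT P the router can no longer inspect or rewrite the data-channel negotiation, so a client silently switching to active mode behind NAT will hang exactly at LIST.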

  • can not connect through SCP, but SSH connections works

    - by Joe Cabezas
    I am trying to connect to my server to transfer a file using scp:

        $ scp -v -r -P <port> <user>@<host>:~/dir/ dir/

    This is the output:

        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/joe/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to <host> [<host>] port <port>.
        debug1: Connection established.
        debug1: identity file /Users/joe/.ssh/identity type -1
        debug1: identity file /Users/joe/.ssh/id_rsa type -1
        debug1: identity file /Users/joe/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    But connecting via SSH works fine:

        $ ssh <user>@<host> -p <port>
        <user>@<host>'s password:
        <user>@<host>:~$

    OK, so what can be wrong with this? My /etc/ssh/sshd_config file on the host is:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details

        # What ports, IPs and protocols we listen for
        Port <port>
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes

        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768

        # Logging
        SyslogFacility AUTH
        LogLevel INFO

        # Authentication:
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes

        RSAAuthentication yes
        PubkeyAuthentication no
        #AuthorizedKeysFile %h/.ssh/authorized_keys

        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes

        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no

        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no

        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes

        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes

        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes

        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no

        #MaxStartups 10:30:60
        #Banner /etc/issue.net

        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*

        Subsystem sftp /usr/lib/openssh/sftp-server

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication.  Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes
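
    Since the connection is dropped at the banner exchange, before authentication, the server side usually logs why. A couple of quick checks on the host (log path assumes Debian/Ubuntu):

        # watch the daemon's view while reproducing the failing scp
        tail -f /var/log/auth.log
        # ssh_exchange_identification errors are classically TCP-wrapper denials:
        grep -i sshd /etc/hosts.deny /etc/hosts.allow
        # rapid parallel connections can also trip MaxStartups throttling
        grep -i maxstartups /etc/ssh/sshd_config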

  • getfacl command and Linux file permissions - getting 403 error when accessing Wordpress

    - by tommytwoeyes
    I'm configuring Wordpress for a friend, and I just screwed up the Wordpress directory permissions (I suspect) using setfacl. Webfaction doesn't allow sudo or allow me to change the directory group ownership using chown. Now it appears that something I did is causing the entire application to give me 403 errors when I try to access it. The current directory listing looks like this (I set the whole thing to 777 temporarily to try to recover access to it):

        drwxrwsr-x+  6 myusername myusername  4096 Mar  2 07:07 ./
        drwxr-xr-x   3 root       root        4096 Feb 25 19:48 ../
        -rwxrwxr-x+  1 myusername myusername   286 Mar  2 06:33 gzip.php
        -rwxrwxr-x+  1 myusername myusername  4831 Mar  4 20:02 .htaccess
        -rwxrwxr-x+  1 myusername myusername   397 Feb 25 19:49 index.php
        -rw-rw-r--+  1 myusername myusername 15606 Feb 25 19:49 license.txt
        -rw-rw-r--+  1 myusername myusername  9200 Feb 25 19:49 readme.html
        drwxrwsr-x+  6 myusername myusername  4096 Feb 25 19:49 .svn/
        -rwxrwxr-x+  1 myusername myusername  4337 Feb 25 19:49 wp-activate.php
        drwxr-xr-x+ 10 myusername myusername  4096 Mar  4 20:03 wp-admin/
        -rwxrwxr-x+  1 myusername myusername 40283 Feb 25 19:49 wp-app.php
        -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-atom.php
        -rwxrwxr-x+  1 myusername myusername   274 Feb 25 19:49 wp-blog-header.php
        -rwxrwxr-x+  1 myusername myusername  3931 Feb 25 19:49 wp-comments-post.php
        -rwxrwxr-x+  1 myusername myusername   244 Feb 25 19:49 wp-commentsrss2.php
        -rwxrwxr-x+  1 myusername myusername  3485 Feb 25 20:15 wp-config.php
        drwxr-xr-x+  6 myusername myusername  4096 Feb 26 08:52 wp-content/
        -rwxrwxr-x+  1 myusername myusername  1255 Feb 25 19:49 wp-cron.php
        -rwxrwxr-x+  1 myusername myusername   246 Feb 25 19:49 wp-feed.php
        drwxrwxr-x+  9 myusername myusername  4096 Feb 25 19:49 wp-includes/
        -rwxrwxr-x+  1 myusername myusername  1997 Feb 25 19:49 wp-links-opml.php
        -rwxrwxr-x+  1 myusername myusername  2453 Feb 25 19:49 wp-load.php
        -rwxrwxr-x+  1 myusername myusername 27787 Feb 25 19:49 wp-login.php
        -rwxrwxr-x+  1 myusername myusername  7774 Feb 25 19:49 wp-mail.php
        -rwxrwxr-x+  1 myusername myusername   494 Feb 25 19:49 wp-pass.php
        -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rdf.php
        -rwxrwxr-x+  1 myusername myusername   334 Feb 25 19:49 wp-register.php
        -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-rss2.php
        -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rss.php
        -rwxrwxr-x+  1 myusername myusername  9655 Feb 25 19:49 wp-settings.php
        -rwxrwxr-x+  1 myusername myusername 18644 Feb 25 19:49 wp-signup.php
        -rwxrwxr-x+  1 myusername myusername  3702 Feb 25 19:49 wp-trackback.php
        -rwxrwxr-x+  1 myusername myusername  3210 Feb 25 19:49 xmlrpc.php

    The getfacl output looks like this:

        # file: .
        # owner: myusername
        # group: myusername
        user::rwx
        group::r-x
        group:apache:rw-
        mask::rwx
        other::r-x

    I simply wanted to change the ownership to myusername:apache and the file permissions to 755. I have no idea how to fix the permissions now. Any help would be really appreciated!

    Thanks, Tom
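
    Since setfacl is available even where chown isn't, one recoverable path is to strip the extended ACLs entirely (that's what the "+" in the listing indicates) and fall back to plain mode bits - a sketch, run from inside the WordPress directory:

        setfacl -R -b .                      # -b removes all extended ACL entries, recursively
        find . -type d -exec chmod 755 {} \; # directories: rwxr-xr-x
        find . -type f -exec chmod 644 {} \; # files: rw-r--r--

    Whether 644 on the PHP files is enough then depends on which user WebFaction's web server runs as, but removing the ACL mask is usually the first step to making the 403s explainable again.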

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short version: How do I connect from PowerShell to a MySQL ODBC 5.1 driver? I can't seem to find any connection strings that have an accurate "Provider" field for this particular setup. (See the bottom of this question for examples and errors.)

    Long version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box that is configured for remote access, with a user defined for remote access as well. On my Windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector, went to Control Panel > Data Sources, and set up a User DSN connection to the database. The connection, user, and password seem to be correct, because it lists the tables of the database in my Windows control panel.

    Where I'm running into trouble is in 3 places in PowerGadgets:

    1. When selecting a data source, I can select "SQL Server". Inputting the server's IP address does not work and I can't get this option to work at all.
    2. When selecting a data source, I can select "OleDB". This screen has a wizard on it that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers."
    3. When selecting a data source, I can select "ODBC". This screen does not have a wizard and I cannot figure out a connection string that works. Typically it responds with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "Provider" field, and I have no idea what to put in there. The connection string for #2 above contains "SQLOLEDB" as a provider, but upon inputting that value into this connection string I get the same connection error as #2.

    I believe I can solve my problems by figuring out a connection string for #3, but I don't know where to get started. (PowerGadgets also allows for PowerShell support, but I believe I will run into the same problem there.)

    Here's my current PowerShell connection attempt that doesn't work:

        invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts"

    It spits back the error:

        Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'.

    Another string that doesn't work:

        invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts"

    And the error:

        The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc).
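
    As the last error message hints, plain PowerShell can talk to the MySQL ODBC driver directly through System.Data.Odbc, bypassing the OLE DB "Provider" requirement entirely. A minimal sketch (DSN-less; credentials and server name are illustrative):

        $conn = New-Object System.Data.Odbc.OdbcConnection
        $conn.ConnectionString = "Driver={MySQL ODBC 5.1 Driver};Server=HQDB;Database=hq_live;User=someuser;Password=somepass;Option=3;"
        $conn.Open()
        $cmd     = New-Object System.Data.Odbc.OdbcCommand("SELECT * FROM accounts", $conn)
        $adapter = New-Object System.Data.Odbc.OdbcDataAdapter($cmd)
        $table   = New-Object System.Data.DataTable
        [void]$adapter.Fill($table)
        $conn.Close()
        $table | Format-Table -AutoSize

    Whether PowerGadgets' own ODBC screen accepts that Driver= string I can't confirm, but this at least verifies the driver and credentials from the same machine.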

  • Triple monitor setup in linux

    - by Brendan Abel
    I'm hoping there are some xorg gurus out there. I'm trying to get a three-monitor setup working in Linux. I have 2 LCD monitors and a TV, all different resolutions. I'm using 2 video cards: a 9800 GTX and a 7900 GT. I've seen a lot of different posts about people trying to make this work, and in every case they either gave up or Xinerama magically solved all their problems.

    Basically, my main problem is that I cannot get Xinerama to work. Every time I turn it on in the options, my machine gets stuck in a never-ending boot cycle. If I disable Xinerama, I just have three Xorg screens, but I can't drag windows from one to the other. I can get the 2 LCDs on TwinView and the TV on a separate Xorg screen, no problem. But I don't really like this solution. I'd rather have them all on separate screens and stitch them together with Xinerama. Has anyone done this? Here's my xorg.conf for reference.

    p.s. This took me all of 30 seconds to set up in Windows XP!
    p.s.s. I've seen somewhere that maybe randr can solve my problems? But I'm not quite sure how?

        Section "Monitor"
            Identifier "Main1"
            VendorName "Acer"
            ModelName "H233H"
            HorizSync 40-70
            VertRefresh 60
            Option "dpms"
        EndSection

        #Section "Monitor"
        #    Identifier "Main2"
        #    VendorName "Acer"
        #    ModelName "AL2216W"
        #    HorizSync 40-70
        #    VertRefresh 60
        #    Option "dpms"
        #EndSection

        Section "Monitor"
            Identifier "Projector"
            VendorName "BenQ"
            ModelName "W500"
            HorizSync 44.955-45
            VertRefresh 59.94-60
            Option "dpms"
        EndSection

        Section "Device"
            Identifier "Card1"
            Driver "nvidia"
            VendorName "nvidia"
            BusID "PCI:5:0:0"
            BoardName "nVidia Corporation G92 [GeForce 9800 GTX+]"
            Option "ConnectedMonitor" "DFP,DFP"
            Option "NvAGP" "0"
            Option "NoLogo" "True"
            #Option "TVStandard" "HD720p"
        EndSection

        Section "Device"
            Identifier "Card2"
            Driver "nvidia"
            VendorName "nvidia"
            BusID "PCI:4:0:0"
            BoardName "nVidia Corporation G71 [GeForce 7900 GT/GTO]"
            Option "NvAGP" "0"
            Option "NoLogo" "True"
            Option "TVStandard" "HD720p"
        EndSection

        Section "Module"
            Load "glx"
        EndSection

        Section "Screen"
            Identifier "ScreenMain-0"
            Device "Card1-0"
            Monitor "Main1"
            DefaultDepth 24
            Option "Twinview"
            Option "TwinViewOrientation" "RightOf"
            Option "MetaModes" "DFP-0: 1920x1080; DFP-1: 1680x1050"
            Option "HorizSync" "DFP-0: 40-70; DFP-1: 40-70"
            Option "VertRefresh" "DFP-0: 60; DFP-1: 60"
            #SubSection "Display"
            #    Depth 24
            #    Virtual 4880 1080
            #EndSubSection
        EndSection

        Section "Screen"
            Identifier "ScreenProjector"
            Device "Card2"
            Monitor "Projector"
            DefaultDepth 24
            Option "MetaModes" "TV-0: 1280x720"
            Option "HorizSync" "TV-0: 44.955-45"
            Option "VertRefresh" "TV-0: 59.94-60"
        EndSection

        Section "ServerLayout"
            Identifier "BothTwinView"
            Screen "ScreenMain-0"
            Screen "ScreenProjector" LeftOf "ScreenMain-0"
            #Option "Xinerama" "on" # most important option let you window expand to three monitors
        EndSection
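
    For what it's worth, a layout I would try - only a sketch, untested, and the pairing of a G92 and a G71 under one proprietary-driver X server is often the real blocker - gives each screen an explicit index and position and enables Xinerama in one place:

        Section "ServerLayout"
            Identifier "TripleHead"
            Screen 0 "ScreenMain-0"
            Screen 1 "ScreenProjector" RightOf "ScreenMain-0"
            Option "Xinerama" "on"
        EndSection

    Both cards must be driven by the same nvidia driver version for this to have a chance; after one pass through the boot loop, /var/log/Xorg.0.log should show whether the second GPU's screen is what aborts the server.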

  • Configure Postfix to Port other than 25

    - by bwheeler96
    I've done quite a bit of googling on how to reconfigure Postfix to work on a port other than 25, but I still can't find the line(s) people keep talking about in my master.cf. I'm using OS X Mountain Lion, and my ISP blocks traffic both ways on port 25. People have said to look for a line that says:

        smtp      inet  n       -       n       -       -       smtpd

    I can't find it. This is (what I believe to be) unmodified:

        # ==== Begin auto-generated section ========================================
        # This section of the master.cf file is auto-generated by the Server Admin
        # Mail backend plugin whenever mails settings are modified.
        smtp       inet  n  -  n  -  1  postscreen
        smtpd      pass  -  -  n  -  -  smtpd
        dnsblog    unix  -  -  n  -  0  dnsblog
        tlsproxy   unix  -  -  n  -  0  tlsproxy
        submission inet  n  -  n  -  -  smtpd
          -o smtpd_tls_security_level=encrypt
        smtp       unix  -  -  n  -  -  smtp
        # === End auto-generated section ===========================================
        # Modern SMTP clients communicate securely over port 25 using the STARTTLS command.
        # Some older clients, such as Outlook 2000 and its predecessors, do not properly
        # support this command and instead assume a preconfigured secure connection
        # on port 465. This was sometimes called "smtps", but such usage was never
        # approved by the IANA and therefore conflicts with another, legitimate assignment.
        # For more details about managing secure SMTP connections with postfix, please see:
        # http://www.postfix.org/TLS_README.html
        # To read more about configuring secure connections with Outlook 2000, please read:
        # http://support.microsoft.com/default.aspx?scid=kb;en-us;Q307772
        # Apple does not support the use of port 465 for this purpose.
        # After determining that connecting clients do require this behavior, you may choose
        # to manually enable support for these older clients by uncommenting the following
        # four lines.
        #465       inet  n  -  n  -  -  smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n  -  n  -  -  smtp
        pickup     fifo  n  -  n  60     1  pickup
        cleanup    unix  n  -  n  -      0  cleanup
        qmgr       fifo  n  -  n  300    1  qmgr
        #qmgr      fifo  n  -  n  300    1  oqmgr
        tlsmgr     unix  -  -  n  1000?  1  tlsmgr
        rewrite    unix  -  -  n  -      -  trivial-rewrite
        bounce     unix  -  -  n  -      0  bounce
        defer      unix  -  -  n  -      0  bounce
        trace      unix  -  -  n  -      0  bounce
        verify     unix  -  -  n  -      1  verify
        sacl-cache unix  -  -  n  -      1  sacl-cache
        flush      unix  n  -  n  1000?  0  flush
        proxymap   unix  -  -  n  -      -  proxymap
        proxywrite unix  -  -  n  -      1  proxymap
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  -  -  n  -      -  smtp
          -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n  -  n  -      -  showq
        error      unix  -  -  n  -      -  error
        retry      unix  -  -  n  -      -  error
        discard    unix  -  -  n  -      -  discard
        local      unix  -  n  n  -      -  local
        virtual    unix  -  n  n  -      -  virtual
        lmtp       unix  -  -  n  -      -  lmtp
        anvil      unix  -  -  n  -      1  anvil
        scache     unix  -  -  n  -      1  scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
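
    The line people mean does exist in this file - it's the first one, "smtp inet ... postscreen": on this build, postscreen accepts port 25 connections and hands them to the "smtpd pass" service, so there's no literal "smtpd" on that line. To listen on an additional port, the usual approach is to add another smtpd listener; a sketch (2525 is an arbitrary choice):

        # extra listener, since the ISP blocks port 25 in both directions
        2525      inet  n       -       n       -       -       smtpd

    Alternatively, the existing "submission" entry already listens on port 587 with mandatory TLS, which many ISPs leave open precisely for this situation.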

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root - so in a rush I used my name as both user and pass (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I was always logged in as root to perform configurations, etc. A few days or a week passes and I forget to change the password. Then I finally get my web site finished, and I open the port forwarding on my router and point DynDNS to my server at home. I've done this many times in the past without issues, but I usually use a cryptic root password and I guess disabled regular accounts.

    Today I reformat my Windows 7 machine, and after spending all day tweaking and updating to SP1 I look for cloning apps and find Clonezilla, and see it supports SSH cloning, so I go through the process only to discover I need a user. So I log into my web server, see I already have the user 'alex', and realize I don't know the password. I change the password to something cryptic and visit the 'home' directory, only to realize there are contents such as passfile, bengos, etc. My heart sinks - I've been hacked! Sure as hell, there are all sorts of scripts and password files. I run a 'last' command and it seems they last logged in April 3rd.

    Questions:

    - What can I do to see if they did anything destructive? Should I reformat and reinstall?
    - How restrictive is Debian Squeeze in terms of user permissions out of the box? All my personal website stuff was created as 'root', and changing files does not seem to have occurred.
    - How did they determine there was a user 'alex' on the machine? Can you query any machine and figure this out - what the users are?
    - It looks like they tried to run an IP scan... other nodes on the network are running Windows 7, one of which seems a little wonky as of late. Is it possible they buggered up that system?
    - What corrective action can I take to avoid this happening again? And how can I figure out what might have changed or been hacked?

    I'm hoping Debian out of the box is fairly secure and at best they managed to read some of my source code. :p

    Regards, Alex
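
    A few low-effort checks before wiping - with the caveat that if a rootkit replaced these very tools, their output can't be trusted:

        last -f /var/log/wtmp                    # full login history, including the attacker's sessions
        grep -i 'Accepted' /var/log/auth.log*    # which accounts and source IPs got in over ssh
        debsums -c                               # report installed Debian package files whose checksums changed
                                                 # (debsums is in the package of the same name)

    None of that substitutes for a reinstall once a shell account has been compromised, but it helps answer the "what did they touch" question first.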

  • SSL_CLIENT_CERT_CHAIN not being passed to backend server

    - by nidkil
    I have client certificates configured and working in Apache, and I want to pass the PEM-encoded X.509 certificates of the client to the backend server. I tried SSLOptions +ExportCertData. This does nothing at all, while the documentation states it should add SSL_SERVER_CERT, SSL_CLIENT_CERT and SSL_CLIENT_CERT_CHAINn (with n = 0,1,2,...) as headers. Any ideas why this option is not working?

    I then tried setting the headers myself using RequestHeader. This works fine for all variables except SSL_CLIENT_CERT_CHAIN, which shows null in the header. Any ideas why the certificate chain is not being filled?

    This is my first Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                SSLOptions +ExportCertData
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    This is my second Apache configuration:

        <VirtualHost 192.168.56.100:443>
            ServerName www.test.org
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            SSLEngine on
            SSLProxyEngine on
            SSLCertificateFile /etc/apache2/ssl/certs/www.test.org.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.test.org.key
            SSLCACertificateFile /etc/apache2/ssl/ca/ca.crt
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            <Location /carbon>
                ProxyPass http://www.test.org:9763/carbon
                ProxyPassReverse http://www.test.org:9763/carbon
            </Location>
            <Location /services/GbTestProxy>
                SSLVerifyClient require
                SSLVerifyDepth 5
                RequestHeader set SSL_CLIENT_S_DN "%{SSL_CLIENT_S_DN}s"
                RequestHeader set SSL_CLIENT_I_DN "%{SSL_CLIENT_I_DN}s"
                RequestHeader set SSL_CLIENT_S_DN_CN "%{SSL_SERVER_S_DN_CN}s"
                RequestHeader set SSL_SERVER_S_DN_OU "%{SSL_SERVER_S_DN_OU}s"
                RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
                RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN0}s"
                RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN1}s"
                RequestHeader set SSL_CLIENT_VERIFY "%{SSL_CLIENT_VERIFY}s"
                ProxyPass http://www.test.org:8888/services/GbTestProxy
                ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
            </Location>
        </VirtualHost>

    Hope someone can help. Regards, nidkil
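
    Two things stand out, offered as my reading of the mod_ssl docs rather than a verified fix: the second vhost drops the SSLOptions line, and mod_ssl only populates the chain variables when ExportCertData is on; and the documented variable name carries an underscore before the index (SSL_CLIENT_CERT_CHAIN_0). The chain is also only available if the client actually sends intermediate certificates. Combining both configurations would look like:

        <Location /services/GbTestProxy>
            SSLVerifyClient require
            SSLVerifyDepth 5
            SSLOptions +ExportCertData +StdEnvVars
            RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
            # note the underscore before the index; check against your Apache version's docs
            RequestHeader set SSL_CLIENT_CERT_CHAIN0 "%{SSL_CLIENT_CERT_CHAIN_0}s"
            RequestHeader set SSL_CLIENT_CERT_CHAIN1 "%{SSL_CLIENT_CERT_CHAIN_1}s"
            ProxyPass http://www.test.org:8888/services/GbTestProxy
            ProxyPassReverse http://www.test.org:8888/services/GbTestProxy
        </Location>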

    Read the article

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it, but when rebooting the server I get the error message below, and have to enter the root password and comment out the /etc/fstab entry. So the mount of the LUKS partition does not persist across reboots. I have this setup on RHEL6 and am wondering what I could be missing; I want the LV to be mounted on reboot. Later I would also like to replace the device name with the UUID.

    Error message on reboot:

        Give root password for maintenance
        (or type Control-D to continue):

    Here are the steps from the beginning:

        [root@rhel6 ~]# pvcreate /dev/sdb
          Physical volume "/dev/sdb" successfully created
        [root@rhel6 ~]# vgcreate vg01 /dev/sdb
          Volume group "vg01" successfully created
        [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
          Logical volume "lvol1" created
        [root@rhel6 ~]# lvdisplay
          --- Logical volume ---
          LV Name                /dev/vg01/lvol1
          VG Name                vg01
          LV UUID                nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                500.00 MiB
          Current LE             125
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0
        [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1

        WARNING!
        ========
        This will overwrite data on /dev/vg01/lvol1 irrevocably.

        Are you sure? (Type uppercase yes): YES
        Enter LUKS passphrase:
        Verify passphrase:
        [root@rhel6 ~]# mkdir /house
        [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
        Enter passphrase for /dev/vg01/lvol1:
        [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
        mke2fs 1.41.12 (17-May-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        127512 inodes, 509952 blocks
        25497 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67633152
        63 block groups
        8192 blocks per group, 8192 fragments per group
        2024 inodes per group
        Superblock backups stored on blocks:
                8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

        Writing inode tables: done
        Creating journal (8192 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 21 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.
        [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    At this point the filesystem is mounted successfully:

        [root@rhel6 ~]# ls /house/
        lost+found

    My /etc/fstab entry is as follows:

        /dev/mapper/house       /house  ext4    defaults        1 2

    My /etc/crypttab entry is as follows:

        house   /dev/vg01/lvol1 password

    Remounting works too:

        [root@rhel6 ~]# mount -o remount /house
        [root@rhel6 ~]# ls /house/
        lost+found
        [root@rhel6 ~]# umount /house/
        [root@rhel6 ~]# mount -a        -> SUCCESSFUL AGAIN
        [root@rhel6 ~]# ls /house/
        lost+found

    Please let me know if I am missing anything here. Thanks in advance.
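
    A hedged reading of the crypttab above: the third field of /etc/crypttab is a key-file path (or none, to prompt at boot), so the literal word password makes the boot scripts look for a key file of that name, the /dev/mapper/house device is never created, and the fsck pass for /house drops the system to maintenance mode. Two possible fixes, sketched (the key-file path is an example):

        # Option A: prompt for the LUKS passphrase on the console at every boot
        # /etc/crypttab
        house  /dev/vg01/lvol1  none

        # Option B: unlock unattended with a root-only key file
        dd if=/dev/urandom of=/etc/house.key bs=512 count=4
        chmod 0400 /etc/house.key
        cryptsetup luksAddKey /dev/vg01/lvol1 /etc/house.key
        # /etc/crypttab
        house  /dev/vg01/lvol1  /etc/house.key

        # /etc/fstab keeps referring to the mapped device in both cases:
        /dev/mapper/house  /house  ext4  defaults  1 2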

    Read the article

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with phpMyAdmin on my nginx install. When I enter <ServerIP>/phpmyadmin and log in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff>.

    Nginx config file:

        user nginx;
        worker_processes 5;

        error_log /var/log/nginx/error.log warn;
        pid       /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 2;

            #gzip on;

            include /etc/nginx/conf.d/*.conf;
        }

    Default.conf:

        server {
            listen 80;
            server_name _;

            #charset koi8-r;
            #access_log /var/log/nginx/log/host.access.log main;

            location / {
                root  /usr/share/nginx/html;
                index index.php index.html index.htm;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root /usr/share/nginx/html;
                try_files $uri =404;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }

            location /phpmyadmin {
                root /usr/share/;
                index index.php index.html index.htm;

                location ~ ^/phpmyadmin/(.+\.php)$ {
                    try_files $uri =404;
                    root /usr/share/;
                    fastcgi_pass unix:/tmp/php5-fpm.sock;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $request_filename;
                    include fastcgi_params;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                }

                location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
                    root /usr/share/;
                }
            }
        }

    (Any general tips on tidying up those config files are accepted too.)
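
    Two common workarounds for the post-login redirect, hedged - which one applies depends on the phpMyAdmin version, and the paths below are typical rather than verified for this install:

        # 1) Tell phpMyAdmin its public URL explicitly, in config.inc.php
        #    (often /etc/phpmyadmin/config.inc.php or /usr/share/phpmyadmin/config.inc.php):
        $cfg['PmaAbsoluteUri'] = 'http://<ServerIP>/phpmyadmin/';

        # 2) Or drop the split-root setup: symlink phpMyAdmin into the main
        #    document root, so the existing generic "location ~ \.php$" block
        #    serves it with a consistent SCRIPT_FILENAME:
        ln -s /usr/share/phpmyadmin /usr/share/nginx/html/phpmyadmin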

    Read the article

  • Solaris 10 x86 mirror. Making the second disk bootable on failure

    - by Kani
    I made a mirror (RAID1) with Solaris 10 on x86. Everything is OK. Now I'm trying to make the second disk bootable, that is: from GRUB, or in case of failure of disk1. I edited /boot/grub/menu.lst:

        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris 10 9/10 s10x_u9wos_14a X86
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot -s
        module /boot/amd64/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        # Make second disk bootable!!!!!!!
        title alternate boot
        findroot (rootfs1,1,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe

    But it is not working. When I select "alternate boot" in GRUB I get:

        Error 15: File not found

    Also, how do I configure GRUB to boot from disk2 in case of an error on disk1? Additionally, I did the following (though it is not related to GRUB):

        eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    Here is the output of some commands that may help you:

        /sbin/biosdev
        0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0
        0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0

        ls -l /dev/dsk/c1t?d0s0
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

        more /boot/solaris/bootenv.rc
        setprop ata-dma-enabled '1'
        setprop atapi-cd-dma-enabled '0'
        setprop ttyb-rts-dtr-off 'false'
        setprop ttyb-ignore-cd 'true'
        setprop ttya-rts-dtr-off 'false'
        setprop ttya-ignore-cd 'true'
        setprop ttyb-mode '9600,8,n,1,-'
        setprop ttya-mode '9600,8,n,1,-'
        setprop lba-access-ok '1'
        setprop prealloc-chunk-size '0x2000'
        setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a'
        setprop keyboard-layout 'US-English'
        setprop console 'text'
        setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a'

        cat /etc/vfstab
        #device           device            mount              FS      fsck  mount    mount
        #to mount         to fsck           point              type    pass  at boot  options
        fd                -                 /dev/fd            fd      -     no       -
        /proc             -                 /proc              proc    -     no       -
        #/dev/dsk/c1t0d0s1 -                -                  swap    -     no       -
        /dev/md/dsk/d1    -                 -                  swap    -     no       -
        /dev/md/dsk/d0    /dev/md/rdsk/d0   /                  ufs     1     no       -
        /devices          -                 /devices           devfs   -     no       -
        sharefs           -                 /etc/dfs/sharetab  sharefs -     no       -
        ctfs              -                 /system/contract   ctfs    -     no       -
        objfs             -                 /system/object     objfs   -     no       -
        swap              -                 /tmp               tmpfs   -     yes      -

        df -h
        Filesystem            size  used  avail  capacity  Mounted on
        /dev/md/dsk/d0        909G   11G   889G      2%    /
        /devices                0K    0K     0K      0%    /devices
        ctfs                    0K    0K     0K      0%    /system/contract
        proc                    0K    0K     0K      0%    /proc
        mnttab                  0K    0K     0K      0%    /etc/mnttab
        swap                   14G  972K    14G      1%    /etc/svc/volatile
        objfs                   0K    0K     0K      0%    /system/object
        sharefs                 0K    0K     0K      0%    /etc/dfs/sharetab
        /usr/lib/libc/libc_hwcap1.so.1
                              909G   11G   889G      2%    /lib/libc.so.1
        fd                      0K    0K     0K      0%    /dev/fd
        swap                   14G   40K    14G      1%    /tmp
        swap                   14G   28K    14G      1%    /var/run
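
    A hedged sketch of what usually needs to happen for the second half of an SVM root mirror, assuming the metadevice layout implied by the vfstab above: the GRUB stages have to be physically installed on disk2 with installgrub, and the alternate menu.lst entry should load the normal multiboot kernel and boot_archive - the current "alternate boot" entry points at the failsafe miniroot paths, which is a plausible source of Error 15:

        # Put the GRUB stages into the boot area of the second disk,
        # using the device name from the ls -l output above:
        installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

        # menu.lst entry modelled on the working primary entry; with a
        # mirrored root, findroot locates the bootsign signature on
        # whichever half of the mirror carries it:
        title Solaris (second mirror half)
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive

    If disk1 fails outright, the BIOS must also be willing to try the second disk (0x81 in the biosdev output) as a boot device; GRUB on disk1 cannot help once disk1 is gone.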

    Read the article

  • How to set up nginx and a subdomain

    - by Evolutio
    I have GitLab installed on my server and it works on all domains, e.g. git.lars-dev.de, lars-dev.de and *.lars-dev.de. How can I run GitLab only on git.lars-dev.de and another subdomain on files.lars-dev.de?

    My lars-dev conf:

        server {
            listen *:80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /var/www/webdata/lars-dev.de/htdocs;
            index index.html index.htm;

            server_name lars-dev.de;

            location / {
                try_files $uri $uri/ /index.html;
            }

            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #location ~ /\.ht {
            #    deny all;
            #}
        }

    And the GitLab configuration:

        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen *:80;                 # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
            server_name git.lars-dev.de; # e.g., server_name source.example.com;
            server_tokens off;           # don't show the version number, a security best practice
            root /home/git/gitlab/public;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log  /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from the defined root folder;
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

            # if a file which is not found in the root folder is requested,
            # the proxy passes the request to the upstream (gitlab unicorn)
            location @gitlab {
                proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
                proxy_redirect off;

                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;

                proxy_pass http://gitlab;
            }
        }
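
    For what it's worth, nginx routes requests by server_name, and when nothing matches the Host header it falls back to the default server for that port - by default the first block loaded, which with alphabetical conf.d includes is likely the GitLab one. A hedged sketch (the files.lars-dev.de root is an example path): mark the main vhost as the explicit default and give the new subdomain its own block, leaving GitLab bound only to git.lars-dev.de:

        # lars-dev.de becomes the explicit default, catching bare and unknown names
        server {
            listen 80 default_server;
            server_name lars-dev.de www.lars-dev.de;
            root /var/www/webdata/lars-dev.de/htdocs;
            # ...existing locations unchanged
        }

        # the new subdomain gets its own block
        server {
            listen 80;
            server_name files.lars-dev.de;
            root /var/www/webdata/files.lars-dev.de/htdocs;  # example path
            # ...locations for the file site
        }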

    Read the article

  • How is incoming SMTP mail being delivered despite blocked port

    - by Josh
    I set up an MX mail server, and everything works despite port 25 being blocked. I'm stumped as to why I am able to receive email with this setup, and what the consequences might be if I leave it this way. Here are the details:

    - Connections to SMTP over ports 25 and 587 both reliably connect over my local network.
    - Connections to SMTP over port 25 are blocked from external IPs (the ISP is blocking the port).
    - Connections to the submission service on port 587 from external IPs are reliable.
    - Emails sent from Gmail, Yahoo, and a few other addresses are all being delivered. I haven't found an email provider that fails to deliver mail to my MX.

    So, with port 25 blocked, I am assuming other MTA servers fall back to port 587; otherwise I can't imagine how the mail is received. I know port 25 shouldn't be blocked, but so far it works. Are there mail servers this will not work with? Where can I find more about how this is working?

    -- edit

    More technical detail, to validate that I'm not missing something silly. Obviously in the transcript below I've replaced my actual domain with example.com.

        # DNS MX record points to the A record.
        $ dig example.com MX +short
        1 example.com
        $ dig example.com A +short
        <Public IP address>

        # From a public server (not my ISP hosting the mail server)
        # we see port 25 is blocked, but port 587 is open
        $ telnet example.com 25
        Trying <public ip>...
        telnet: Unable to connect to remote host: Connection refused

        # Let's try openssl
        $ openssl s_client -starttls smtp -crlf -connect example.com:25
        connect: Connection refused
        connect:errno=111

        # Again from a public server, we see port 587 is open
        $ telnet example.com 587
        Trying <public ip>...
        Connected to example.com.
        Escape character is '^]'.
        220 example.com ESMTP Postfix
        ehlo example.com
        250-example.com
        250-PIPELINING
        250-SIZE 10485760
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250-DSN
        250-BINARYMIME
        250 CHUNKING
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

    Here is a portion from the mail log when receiving a message from Gmail:

        postfix/postscreen[93152]: CONNECT from [209.85.128.49]:48953 to [192.168.0.10]:25
        postfix/postscreen[93152]: PASS NEW [209.85.128.49]:48953
        postfix/smtpd[93160]: connect from mail-qe0-f49.google.com[209.85.128.49]
        postfix/smtpd[93160]: 7A8C31C1AA99: client=mail-qe0-f49.google.com[209.85.128.49]

    The log shows that a connection was made to the local IP on port 25 (I'm not doing any port mapping, so it is port 25 on the public IP too). Seeing this leads me to hypothesize that the ISP block on port 25 only occurs when a connection is made from an IP address that is not known to be a mail server. Any other theories?
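
    A hedged way to pin this down: the postscreen line above proves that remote MTAs really do reach port 25 (MTAs do not fall back to 587 - the submission port assumes authenticated clients - so if inbound 25 were blocked for everyone, mail from Gmail would bounce rather than arrive). The "block" observed earlier is therefore probably specific to where the test was run from: many providers filter outbound port 25 from their own customers, including the "public server" used for the telnet test, which looks identical to an inbound block. Testing from a few unrelated networks should settle it:

        # Repeat the port test from several unrelated networks
        # (home, mobile tether, a VPS at a different provider):
        nmap -Pn -p 25,587 example.com
        telnet example.com 25

        # On the server, confirm which services are bound where and which
        # port remote MTAs actually hit (log path varies by distro:
        # /var/log/maillog or /var/log/mail.log):
        grep -E '^(smtp|submission)' /etc/postfix/master.cf
        grep 'postscreen.*CONNECT' /var/log/maillog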

    Read the article

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up OpenVPN, and each tutorial gives a different problem... Does anyone know of a decent and helpful website/tutorial I could follow to get it set up? I have been battling with it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via SSH.

    UPDATE: okay, I have been asked to be more clear on the topic. I followed this tutorial (as an example) - http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/ - and had no issues setting it up, except that when I boot into Windows and run the OpenVPN GUI client, it connects and gives this error:

        WARNING: Bad encapsulated packet length from peer (21331), which must be > 0
        and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both
        peers -- this condition could also indicate a possible active attack on the
        TCP link -- [Attemping restart...]

    Here is my server config:

        port 1194                       #- port
        proto udp                       #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        reneg-sec 0
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login #- Co$
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf #- Uncomment$
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 5 30
        comp-lzo
        persist-key
        persist-tun
        status 1194.log
        verb 3

    And my client config:

        client
        dev tun
        proto udp
        remote [server ip] 1194   # - Your server IP and OpenVPN port
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        persist-key
        persist-tun
        ca ca.crt
        auth-user-pass
        comp-lzo
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
        Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Thu Oct 31 11:51:44 2013 LZO compression initialized
        Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ]
        Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ]
        Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2'
        Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943'
        Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef]
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194

        // after this it just hangs, nothing happens

    So I don't know what I am doing wrong, but I am getting a bit impatient, and on each forum where I post this I get stupid/unrelated/unhelpful answers...
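
    A hedged reading of the log above: the client prints "Attempting to establish TCP connection" even though the posted client config says proto udp, and "Bad encapsulated packet length from peer" is the classic symptom of a TCP/UDP protocol mismatch - so the GUI is very likely loading a different or stale .ovpn file than the one shown. The client binary is also OpenVPN 2.0.9 from 2006; installing a current 2.x Windows client is worth doing before anything else. A minimal matched pair to retest with (certificate paths as in the tutorial):

        # server.conf - the transport essentials; everything else can stay as posted
        port 1194
        proto udp
        dev tun

        # client.ovpn - must use the same proto as the server; make sure this is
        # the file the Windows GUI actually loads (typically
        # C:\Program Files\OpenVPN\config\)
        client
        dev tun
        proto udp
        remote <server ip> 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        auth-user-pass
        comp-lzo
        verb 3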

    Read the article

< Previous Page | 755 756 757 758 759 760 761 762 763 764 765 766  | Next Page >