Search Results

Search found 18347 results on 734 pages for 'generate password'.


  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9. I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate. However, the mailbox is empty and nothing gets delivered! I followed many tutorials on Postfix + Dovecot, but they seem to complicate things by using the Dovecot LDA or MySQL. I just want it to be very simple; having Postfix deliver to the virtual mailboxes is fine. I don't need MySQL either. I already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL. I can see from the logs that Postfix is delivering to the system user accounts (the catch-all) instead of the virtual users that I set up in Dovecot. The SMTP + SSL authentication seems to work as well. I can also see from the logs that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes. I have the following settings, which I believe are relevant; let me know what other settings you need to see:

        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444

    Postfix master.cf:

        submission inet n - - - - smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_sasl_type=dovecot
          -o smtpd_sasl_path=private/auth
          -o smtpd_sasl_security_options=noanonymous
          -o smtpd_sasl_local_domain=$myhostname
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject
          -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
          -o smtpd_sender_restrictions=reject_sender_login_mismatch
          -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject

    Dovecot related:

        mail_location = maildir:~/Maildir
        passdb {
          args = /etc/dovecot/users.conf
          driver = passwd-file
        }
        service auth {
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
          }
        }

    The virtual mail user:

        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin

    Here is /var/log/maillog when I try to send something to myself:

        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
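    Two things stand out in the settings above: xxx.com is matched by $mydomain in mydestination, so Postfix's local(8) agent claims it before virtual delivery is consulted (relay=local in the log), and no virtual_mailbox_maps is listed at all. A minimal sketch of the missing pieces, assuming a hypothetical map file /etc/postfix/vmailbox; the mailbox path and address are illustrative:

        # main.cf: keep xxx.com out of mydestination so virtual delivery applies
        mydestination = $myhostname, localhost.$mydomain, localhost
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox

        # /etc/postfix/vmailbox: one line per mailbox, path relative to
        # virtual_mailbox_base; a trailing slash selects maildir format
        #   [email protected]    xxx.com/chloe/

        # rebuild the hash map and reload Postfix
        postmap /etc/postfix/vmailbox
        postfix reload

    A domain should appear in only one of mydestination, virtual_mailbox_domains, or relay_domains; the overlap above is what sends mail to the system catch-all.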


  • Can't get my Raspberry Pi to keep a static IP

    - by JonnyIrving
    I recently got given a Raspberry Pi and I would like to be able to remote into it using PuTTY from my laptop, so I don't have to sit next to my TV with a keyboard and mouse to use it. I am able to get a PuTTY session going when I know the IP address that my router has given the Pi on each session, but it keeps changing on each reboot, as I would expect. So I followed a number of instructions to configure the RPi to keep a static IP address. This involved changing the file '/etc/network/interfaces', which now contains (password removed):

        auto lo
        iface lo inet loopback

        iface eth0 inet static
        address 192.168.1.82
        netmask 255.255.255.0
        gateway 192.168.1.254

        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet dhcp
        wpa-ssid "BeBoxD304BF"
        wpa-psk "**********"

    Despite this, each time I reboot my RPi it still gives me a new dynamic IP address. I also noticed in the 'ifconfig' output below that the details for eth0 don't contain IP details for inet addr, Bcast or Mask, which have been present in all the other examples I have seen online.

        eth0      Link encap:Ethernet  HWaddr b8:27:eb:b5:95:da
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0     Link encap:Ethernet  HWaddr 00:87:c6:00:33:77
                  inet addr:192.168.1.83  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:918 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:277 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000

    Also, I'm not sure if this is relevant, but it can't hurt! The file '/etc/resolv.conf' contains:

        domain config
        search config
        nameserver 192.168.1.254

    ...I heard it might mean something on one of the pages I was looking at. I would be very grateful for any help with this. I have tried everything I can think of and would really like to get this working this weekend so I can use it from work.
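    Note that the static stanza above applies to eth0 (the wired port, which has no address at all), while the ifconfig output shows the address actually in use is on wlan0, which is still configured for DHCP. A sketch of a static wlan0 stanza instead, reusing the values from the eth0 block above (pick an address outside the router's DHCP pool):

        auto wlan0
        allow-hotplug wlan0
        iface wlan0 inet static
        address 192.168.1.82
        netmask 255.255.255.0
        gateway 192.168.1.254
        wpa-ssid "BeBoxD304BF"
        wpa-psk "**********"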


  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with Nagios. I've created the following command definition and shell script, but when checking the service I receive the e-mail below. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with internal
        # /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
            sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path; if it fails, exit with an error.
        if ! cd "$app_root" ; then
            echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
            if ! mkdir -p "$pid_path"; then
                echo "Could not create the path $pid_path needed to store the pids."
                exit 1
            fi
            # If there exists a file which should hold the value of the Unicorn pid: read it.
            if [ -f "$web_server_pid_path" ]; then
                wpid=$(cat "$web_server_pid_path")
            else
                wpid=0
            fi
            if [ -f "$sidekiq_pid_path" ]; then
                spid=$(cat "$sidekiq_pid_path")
            else
                spid=0
            fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
            check_pids
            # If the web server is running kill -0 $wpid returns true, or rather 0.
            # Checks of *_status should only check for == 0 or != 0, never anything else.
            if [ $wpid -ne 0 ]; then
                kill -0 "$wpid" 2>/dev/null
                web_status="$?"
            else
                web_status="-1"
            fi
            if [ $spid -ne 0 ]; then
                kill -0 "$spid" 2>/dev/null
                sidekiq_status="$?"
            else
                sidekiq_status="-1"
            fi
        }

        check_pids
        check_status

        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
            echo "GitLab is not running."
            exit 2
        fi
        if [ "$web_status" != "0" ]; then
            printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
            printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
            printf "GitLab and all its components are \033[32mup and running\033[0m.\n"
            exit 0
        fi
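    The log line shows sudo prompting for a password when the script re-executes itself as the git user, and the nagios account has no way to answer that prompt. A sketch of a sudoers rule that would let the check run unattended, assuming the plugin path above (add it with visudo; note the script's use of sudo -i can complicate command matching, so test it as the nagios user first):

        # /etc/sudoers.d/nagios-gitlab (hypothetical file): allow nagios to run
        # the check as git without a password prompt
        nagios ALL=(git) NOPASSWD: /var/lib/nagios/custom_plugins/check_gitlab.sh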


  • Server 2008/Windows 7/Samba Unspecified error 80004005

    - by ancillary
    I have a Samba share on a LAN with a 2008 PDC/DNS. Samba authenticates with AD, and I have several Win7 machines that can connect fine. I recently added a couple of new computers to the LAN, which were imaged the same way as the other machines (same software, etc.; different hardware, so different drivers), and they have the same policies set. I cannot get the new machines to connect to the Samba share no matter what. I am always met with either Unspecified Error 0x80004005 or Network Path Not Found. I've turned off the firewall; set LANMAN auth to respond to NTLM only / send LM & NTLM responses / use NTLM session security if negotiated in Local Security Policy > Security Options; and tried both the IP and the hostname to connect. The SMB log shows that authentication succeeds, but the connection is then immediately killed by the client. tcpdump shows nothing remarkable except that, when trying to connect from the client via hostname, there is an unknown packet type error. Here are a couple of lines from that error:

        11:18:37.964991 IP 001-client.domain.local.49372 > smb.domain.local.netbios-ssn: P 1670:2146(476) ack 201 win 255 NBT Session Packet: Unknown packet type 0xABData: (41 bytes)
        [000] AA 46 96 FA D5 99 33 75 0C C4 20 CE 26 42 F3 61 \252F\226\372\325\2313u \014\304 \316&B\363a
        [010] F0 8C FB 65 18 17 40 A5 DB 42 BB 94 37 53 92 EC \360\214\373e\030\027@\245 \333B\273\2247S\222\354
        [020] 55 98 7F C4 AE 3D 6B 10 C4 U\230\177\304\256=k\020 \304
        11:18:37.964998 IP smb.domain.local.netbios-ssn > 001-client.domain.local.49372: . ack 2146 win 100

    Here's smb.conf, just in case (though I don't see how it matters, if other machines are working fine):

        [global]
        workgroup = MYDOMAIN
        realm = MYDOMAIN.LOCAL
        server string = domain|smb share
        interfaces = eth1
        security = ADS
        password server = 192.168.1.3
        log level = 2
        log file = /var/log/samba/%m.log
        smb ports = 139
        strict locking = no
        load printers = No
        local master = No
        domain master = No
        wins server = 192.168.1.3
        wins support = Yes
        idmap uid = 500-10000000
        idmap gid = 500-10000000
        winbind separator = +
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = Yes

        [samba-share1]
        comment = SMB Share
        path = /home/share/smb/
        valid users = @"MYDOMAIN+Domain Users"
        admin users = @"MYDOMAIN+Domain Admins"
        guest ok = no
        read only = No
        create mask = 0765
        force directory mode = 0777

    Any ideas what else I could try or look for? Or what might be the problem? Thanks.
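    When comparing a working client against a broken one, it can help to take the new Windows images out of the loop and probe the share directly. A quick hedged sanity check from any Linux box on the LAN, using the share and host names above (the user name is illustrative):

        # list the shares as a domain user (prompts for a password)
        smbclient -L //smb.domain.local -U 'MYDOMAIN+someuser'

        # attempt an actual tree connect to the share and list it
        smbclient //smb.domain.local/samba-share1 -U 'MYDOMAIN+someuser' -c 'ls'

    If both succeed, the server side is likely fine and the difference is in the new clients' SMB/NTLM negotiation.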


  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    Hi all, I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know this DataSource will be available to all the webapps in Tomcat, but that's not a problem here), I've set this Resource:

        <GlobalNamingResources>
          <Resource name="hibernate/HibernateDS"
                    auth="Container"
                    type="javax.sql.DataSource"
                    url="jdbc:oracle:thin:@myserver:1542:foo"
                    username="foo"
                    password="bar"
                    driverClassName="oracle.jdbc.OracleDriver"
                    maxActive="50"
                    maxIdle="10"
                    validationQuery="select 1 from dual"/>

    Then, in the web.xml of my application, I set a resource-ref element:

        <resource-ref>
          <description>Hibernate Datasource</description>
          <res-ref-name>hibernate/HibernateDS</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource:

        <hibernate-configuration>
          <session-factory>
            <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
            ...

    However, when I start my Tomcat server, I get an error saying the connection could not be created:

        INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
        INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
        INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
        INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
        WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
        org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
            at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
            at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
            at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
            ...
        Caused by: java.lang.NullPointerException
            at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
            at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
            at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
            at java.sql.DriverManager.getDriver(DriverManager.java:253)
            at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
            ... 11 more

    Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong in my configuration?
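    One thing that is easy to miss with global resources: a webapp only sees entries under java:comp/env if they are linked into its context, and the resource-ref in web.xml alone does not do that, which would explain the empty driver class and null URL in the stack trace. A sketch of the ResourceLink that exposes the global DataSource to the webapps, assuming it goes in conf/context.xml (or in the application's own context descriptor):

        <!-- conf/context.xml: link the global resource into each webapp's JNDI space -->
        <Context>
          <ResourceLink name="hibernate/HibernateDS"
                        global="hibernate/HibernateDS"
                        type="javax.sql.DataSource"/>
        </Context>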


  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to Server 2008/IIS7 servers), but do not work fully on my development PC or my development server. IIS on both machines was configured through an IIS Web Deployment Tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted with a username/password dialog, and regardless of what you enter, it ultimately results in a 401 (Unauthorized) error.

    I've tried comparing the configuration of these machines against similar live servers (which all work fine), and they seem generally comparable (given that none of the live servers are yet on IIS 7.5, i.e. Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as its identity; this user has similar permissions on the live and development machines. On IIS6 platforms, to enable Kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication). Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with.

    I'm considering the possibility that the deployment might have made changes, or failed to make changes, to the IIS configuration, thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix, and thus understand, the current problem. Any ideas so far?

    I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix... yet. I've discovered that if I access the sites via IP address (rather than via NetBIOS name), I get the same dialog, except that it accepts my credentials and the application works; not quite a fix, but a useful step. More interestingly, I discovered that if I disable kernel-mode authentication (in IIS Manager > Website > Authentication > Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this issue:

        By default, IIS enables kernel-mode authentication, which may improve authentication
        performance and prevent authentication problems with application pools configured to
        use a custom identity. As a best practice, do not disable this setting if Kerberos
        authentication is used in your environment and the application pool is configured
        to use a custom identity.

    Clearly, this is not the way my applications should be working. So what is the issue?
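    For the custom-identity case there is a middle ground between disabling kernel-mode authentication and living with the 401s: telling IIS to use the application pool's credentials, rather than the machine account, for Kerberos ticket decryption. A hedged sketch using appcmd, assuming the site is named "Default Web Site" (substitute the real site name):

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/security/authentication/windowsAuthentication ^
          /useKernelMode:true /useAppPoolCredentials:true /commit:apphost

    This keeps kernel-mode authentication on, which matches the SPNs registered against the custom pool identity.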


  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior seem to be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory is created with drwxr-x-rx permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder has the desired permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming this is caused by the Windows ACLs we're using on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660
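    Since the OSX client appears to issue its own permission change after creating the directory, one hedged avenue is to make Samba refuse client-driven ACL changes entirely. A sketch of share-level options that pin the modes regardless of what the client requests; this is an assumption about how it interacts with the zfsacl vfs module above, so test on a scratch share first:

        [share]
        # ignore NT ACL / chmod requests from clients; force group-writeable modes
        nt acl support = no
        force create mode = 0660
        force directory mode = 0775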


  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else. Most of the footage comes from a Sony Handycam (one of those mini-DVD ones), so it isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded.

    I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced, bottom fields first. When I've finished the editing, I add a "Fields to frames" effect (set to bottom first) to each video track. I render audio and video separately:

        Audio: AC3, 128kbps
        Video: YUV4MPEG stream, video pipe settings:
            ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

        mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

        ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

        nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally the DVD flickered badly, so, as suggested in a guide, I added the "Fields to frames" effect in Cinelerra. Now it doesn't "flicker", but it has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

        Removed "mpeg2video" from the Cinelerra video render pipe.
        Removed +ilme from the render pipe.
        Removed +ildct from the render pipe.
        Removed +ilme from the render audio/video rejoin command.
        Removed +ildct from the render audio/video rejoin command.
        Added -alt to the render pipe.
        Added -alt to the render audio/video rejoin command.
        Tried with and without the "Frames to fields" effect in Cinelerra.

    ...and various combinations of the above. I've also tried this: change the Cinelerra fps to 50, use "Fields to frames" (instead of "Frames to fields"), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a "Frames to fields" effect and then render that output as normal (@25fps), and I still have the same problem.

    Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)
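    Jerky motion on standalone players with smooth playback on PCs is the classic signature of a field-order mismatch (PC software deinterlaces, players display fields directly). Before changing the pipeline further, it may be worth confirming what field order the final MPEG actually carries; a hedged check using MediaInfo, which reports "Scan type" and "Scan order" for the video stream:

        mediainfo FinalMovie.mpg | grep -i scan

    If the reported scan order disagrees with the bottom-field-first setting used in Cinelerra, that mismatch is the first thing to fix.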


  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused reading tutorials and the SAMBA HOWTO, which is a hell of a mess. Could you write, step by step, what events happen upon NetLogon? In particular, I can't get these things:

        1. I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as its superset? What are the other roles of AD, and why is this term nearly a synonym of the term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How is it called (are there any upper-level or lower-level services that use it in the course of NetLogon)?
        2. How do I join a domain? On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it? What's the deal with machine trust accounts? How are they used? Suppose I've just configured a machine to join a domain, created its machine trust, and added its data to the domain controller. How would that machine find a WINS server to query for the Domain Controller's NetBIOS name? Does any computer name ending with the <1C> type correspond to a domain controller?
        3. In what cases are Kerberos and LM/NTLM used for authentication? Where are password hashes stored in, say, a Windows 2000 domain controller? Right in the registry? What is SAM? Is it a service responsible for authentication and for sending/storing those passwords and accompanying information, such as group policies etc.? Who calls it? Does it use Active Directory?
        4. What's the role of NetBIOS apart from the name service? Can you give an example scenario of its usage as a "datagram distribution service for connectionless communication" or a "session service for connection-oriented communication"? (Quotes taken from the description of NetBIOS roles at http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol.)

    Thanks, and sorry for the many questions.


  • Some emails are being delivered, some returned

    - by Tom Broucke
    I have my own VPS where a site is running (control panel: DirectAdmin). When I send mails, some are delivered (hotmail, gmail, [email protected], ...), others are not ([email protected]), and others are delivered after being greylisted ([email protected]). What could be the cause of this? Is the problem on the sender side or the receiver side? From /var/log/exim/mainlog:

    Case 1: [email protected] (delivered)

        2012-06-20 15:02:03 1ShKXr-0005Sc-7g <= [email protected] U=apache P=local S=1319 T="Password reset" from <[email protected]> for [email protected]
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g gmail-smtp-in-v4v6.l.google.com [2a00:1450:8005::1b] Network is unreachable
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g => [email protected] F=<[email protected]> R=lookuphost T=remote_smtp S=1355 H=gmail-smtp-in-v4v6.l.google.com [173.194.67.27] X=TLSv1:RC4-SHA:128 C="250 2.0.0 OK 1340196103 cp4si34336466wib.14"
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g Completed

    Case 2: [email protected] (not delivered)

        2012-06-21 09:57:14 1ShcGQ-0007No-5H <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=740 [email protected] T="hey" from <[email protected]> for [email protected]
        2012-06-21 09:57:14 1ShcGQ-0007No-5H ** [email protected] F=<[email protected]> R=virtual_aliases:
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z <= <> R=1ShcGQ-0007No-5H U=mail P=local S=1546 T="Mail delivery failed: returning message to sender" from <> for [email protected]
        2012-06-21 09:57:14 1ShcGQ-0007No-5H Completed
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z => info <[email protected]> F=<> R=virtual_user T=virtual_localdelivery S=1643
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z Completed

    Case 3: [email protected] (greylisted)

        2012-06-21 15:29:02 1ShhRW-000862-BV <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=782 [email protected] T="testmail squirrel" from <[email protected]> for [email protected]
        2012-06-21 15:29:02 1ShhRW-000862-BV SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b1.one.com [195.47.247.194]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes
        2012-06-21 15:29:02 1ShhRW-000862-BV == [email protected] R=lookuphost T=remote_smtp defer (-44): SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b2.one.com [195.47.247.195]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes

    Notice that the "from" in case 1 differs from case 2: [email protected] vs. [email protected]. Thanks for your time!
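    In case 2 the recipient address is rejected locally by the virtual_aliases router (the `**` line), meaning the bounce is generated on the sending VPS itself, never reaching the remote host; that usually points at Exim treating the recipient domain as local. A quick hedged way to see how Exim routes a given address without sending anything:

        # show which router/transport would handle the address
        exim -bt [email protected]

        # verbose version, showing each router as it is tried
        exim -d -bt [email protected]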


  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

        Cobbler server: openSUSE 12.3, Cobbler 2.2.2 (from the openSUSE repos)
        Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
        Target machines: VMs in a Windows 7 VirtualBox

    System provisioning works OK, but I have some problems. The first one is that Cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, after the reboot the target system continues to PXE boot! The second problem is that the target server is not correctly configured! See my setup:

        cobbler system report --name=test
        Name                           : test
        TFTP Boot Files                : {}
        Comment                        :
        Fetchable Files                : {}
        Gateway                        : 192.168.0.1
        Hostname                       : testcob1.example.com
        Image                          :
        IPv6 Autoconfiguration         : False
        IPv6 Default Device            :
        Kernel Options                 : {}
        Kernel Options (Post Install)  : {}
        Kickstart                      : <<inherit>>
        Kickstart Metadata             : {}
        LDAP Enabled                   : False
        LDAP Management Type           : authconfig
        Management Classes             : []
        Management Parameters          : <<inherit>>
        Monit Enabled                  : False
        Name Servers                   : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path       : []
        Netboot Enabled                : False
        Owners                         : ['admin']
        Power Management Address       :
        Power ID                       :
        Power Password                 :
        Power Management Type          : ipmitool
        Power Username                 :
        Profile                        : RHEL-6.5-x86_64
        Proxy                          : <<inherit>>
        Red Hat Management Key         : <<inherit>>
        Red Hat Management Server      : <<inherit>>
        Repos Enabled                  : False
        Server Override                : <<inherit>>
        Status                         : testing
        Template Files                 : {}
        Virt Auto Boot                 : <<inherit>>
        Virt CPUs                      : <<inherit>>
        Virt Disk Driver Type          : <<inherit>>
        Virt File Size(GB)             : <<inherit>>
        Virt Path                      : <<inherit>>
        Virt RAM (MB)                  : <<inherit>>
        Virt Type                      : <<inherit>>

        Interface =====                : eth0
        Bonding Opts                   :
        Bridge Opts                    :
        DHCP Tag                       :
        DNS Name                       :
        Master Interface               :
        Interface Type                 :
        IP Address                     : 192.168.0.200
        IPv6 Address                   :
        IPv6 Default Gateway           :
        IPv6 MTU                       :
        IPv6 Secondaries               : []
        IPv6 Static Routes             : []
        MAC Address                    :
        Management Interface           : True
        MTU                            :
        Subnet Mask                    : 255.255.255.0
        Static                         : True
        Static Routes                  : []
        Virt Bridge                    :

    So, although I have set up the hostname and the network interface of the target system, after the setup the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.
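    Two hedged things worth checking from the report above. First, pxe_just_once only works when the settings file enables it, the system's netboot flag is re-armed before each install, and the TFTP tree is regenerated afterwards; a sketch, assuming the system name above:

        grep pxe_just_once /etc/cobbler/settings
        cobbler system edit --name=test --netboot-enabled=1   # re-arm before each install
        cobbler sync

    Second, for the hostname and static eth0 settings to reach the installed OS at all, the kickstart used by the RHEL-6.5-x86_64 profile has to consume Cobbler's network metadata (e.g. via the stock network-configuration snippets); if the profile's kickstart is a plain template with a bare "network --bootproto=dhcp" line, the per-system interface data in the report is simply never applied.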


  • SSH: Port Forwarding, Firewalls, & Plesk

    - by Kian Mayne
    I edited my SSH configuration to accept connections on port 213, as it was one of the few ports that my work firewall allows through. I then restarted sshd and everything was going well. I tested the SSH server locally and checked that the sshd service was listening on port 213; however, I still cannot get it to work outside of localhost. PuTTY gives a "connection refused" message, and some of the port-checking sites I tried said the port was closed. To me, this is either firewall or port forwarding, but I've already added inbound and outbound exceptions for it. Is this a problem with my server host, or is there something I've missed? My full SSH config file, as requested:

        # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $

        # This is the sshd server system-wide configuration file. See
        # sshd_config(5) for more information.

        # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

        # The strategy used for options in the default sshd_config shipped with
        # OpenSSH is to specify options with their default value where
        # possible, but leave them commented. Uncommented options change a
        # default value.

        Port 22
        Port 213
        #Protocol 2,1
        Protocol 2
        #AddressFamily any
        #ListenAddress 0.0.0.0
        #ListenAddress ::

        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        #HostKey /etc/ssh/ssh_host_rsa_key
        #HostKey /etc/ssh/ssh_host_dsa_key

        # Lifetime and size of ephemeral version 1 server key
        #KeyRegenerationInterval 1h
        #ServerKeyBits 768

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        SyslogFacility AUTHPRIV
        #LogLevel INFO

        # Authentication:
        #LoginGraceTime 2m
        #PermitRootLogin yes
        #StrictModes yes
        #MaxAuthTries 6

        #RSAAuthentication yes
        #PubkeyAuthentication yes
        #AuthorizedKeysFile .ssh/authorized_keys

        # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
        #RhostsRSAAuthentication no
        # similar for protocol version 2
        #HostbasedAuthentication no
        # Change to yes if you don't trust ~/.ssh/known_hosts for
        # RhostsRSAAuthentication and HostbasedAuthentication
        #IgnoreUserKnownHosts no
        # Don't read the user's ~/.rhosts and ~/.shosts files
        #IgnoreRhosts yes

        # To disable tunneled clear text passwords, change to no here!
        #PasswordAuthentication yes
        #PermitEmptyPasswords no
        PasswordAuthentication yes

        # Change to no to disable s/key passwords
        #ChallengeResponseAuthentication yes
        ChallengeResponseAuthentication no

        # Kerberos options
        #KerberosAuthentication no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        #KerberosGetAFSToken no

        # GSSAPI options
        #GSSAPIAuthentication no
        GSSAPIAuthentication yes
        #GSSAPICleanupCredentials yes
        GSSAPICleanupCredentials yes

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication mechanism.
        # Depending on your PAM configuration, this may bypass the setting of
        # PasswordAuthentication, PermitEmptyPasswords, and
        # "PermitRootLogin without-password". If you just want the PAM account and
        # session checks to run without PAM authentication, then enable this but set
        # ChallengeResponseAuthentication=no
        #UsePAM no
        UsePAM yes

        # Accept locale-related environment variables
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL
        #AllowTcpForwarding yes
        #GatewayPorts no
        #X11Forwarding no
        X11Forwarding yes
        #X11DisplayOffset 10
        #X11UseLocalhost yes
        #PrintMotd yes
        #PrintLastLog yes
        #TCPKeepAlive yes
        #UseLogin no
        #UsePrivilegeSeparation yes
        #PermitUserEnvironment no
        #Compression delayed
        #ClientAliveInterval 0
        #ClientAliveCountMax 3
        #ShowPatchLevel no
        #UseDNS yes
        #PidFile /var/run/sshd.pid
        #MaxStartups 10
        #PermitTunnel no
        #ChrootDirectory none

        # no default banner path
        #Banner /some/path

        # override default of no subsystems
        Subsystem sftp /usr/libexec/openssh/sftp-server
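    Since sshd answers locally but remote clients get "connection refused", the next step is to check whether anything between the NIC and sshd is dropping or rejecting port 213. A hedged checklist for a Plesk/CentOS-style box:

        # confirm sshd is bound on 213 on all interfaces, not just 127.0.0.1
        netstat -tlnp | grep sshd

        # look for an iptables rule covering 213 (Plesk ships its own firewall rules)
        iptables -L -n --line-numbers | grep 213

        # if SELinux is enforcing, allow sshd to bind the non-standard port
        semanage port -a -t ssh_port_t -p tcp 213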


  • How to set up a Git repo on a server with a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repository for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working-dir/non-bare repository. The idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes will be visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
        ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering if this is the right way to set up a Git repo for a website. If so, how do I do this? If not, what is a better way to set up a Git repo for the development of a website?

    EDIT: "you can't push to a non-bare repository". OK, clear. But what's the way to solve my problem then? Create a bare repository on the server and keep a clone of it on the same server in the htdocs folder? That looks a bit clumsy to me: to see the result of a commit I'd have to update that clone each time.
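    A common pattern for exactly this setup is a bare repository plus a post-receive hook that checks the pushed commit out into the web root, so nobody has to pull on the server. A minimal sketch, assuming the web root /home/orangetux/www and a bare repo next to it (the repo path is illustrative):

        # on the server
        git init --bare /home/orangetux/site.git

        # /home/orangetux/site.git/hooks/post-receive
        #!/bin/sh
        # check the pushed master branch out into the web root
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master

        # make the hook executable
        chmod +x /home/orangetux/site.git/hooks/post-receive

    Developers then push to site.git, and the working tree in www/ is updated automatically on every push.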


  • Confused with DKIM, SPF and Exim Configs

    - by 0pt1m1z3
    I've now spent 2 hours trying to figure out this issue, and I am about to give up and go to bed. I've been having issues with Gmail rejecting emails from my VPS because of false spam alerts (probably caused by lfd sending too many emails). So I changed my Exim config to send emails from a different IP (my VPS comes with 3), and that fixed the issue. I also enabled DKIM and SPF on my domains for good measure. But now all my emails appear as "From: Sender Name via server.domain1.com", where server.domain1.com is my VPS hostname. I previously had the same issue in Outlook, and turning off "Set SMTP Sender: headers" solved that problem. But I believe adding DKIM and SPF now makes Gmail add "via server.domain1.com" to my messages. How do I fix this? This is a typical header for a message (as it appears at Gmail):

        Delivered-To: [email protected]
        Received: by 10.60.44.163 with SMTP id f3csp248622oem; Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Received: by 10.50.106.200 with SMTP id gw8mr452788igb.10.1333081398523; Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from domain2.com ([X.X.X.X]) by mx.google.com with ESMTPS id y1si810998igb.3.2012.03.29.21.23.18 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 29 Mar 2012 21:23:18 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) client-ip=X.X.X.X;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates X.X.X.X as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=server.domain1.com; s=default; h=Date:Message-Id:From:Content-type:MIME-Version:Subject:To; bh=wF8bBRgh01EYg4t5DAeVPv1Ps906UVIeRnQCb/HvSYw=; b=k/Pg7lnrO+Ud/z1mOTv+O/3DiJzzQgyBhfIizIaFHM8tF/eNJt5P2k+9yQB224sxYstZIWwVRBJmiqvcM1QhARv1HWqWma0crppZ3JOn+LRHANan634OBi+58SIRA+gu;
        Received: (Exim 4.77) id 1SDTVE-0005HA-9Y for [email protected]; Fri, 30 Mar 2012 00:31:56 -0400
        To: [email protected]
        Subject: Password Reset Request
        MIME-Version: 1.0
        Content-type: text/html; charset=iso-8859-1
        From: Sender Name <[email protected]>
        Message-Id: <[email protected]>
        Date: Fri, 30 Mar 2012 00:31:56 -0400
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - server.domain1.com
        X-AntiAbuse: Original Domain - domain2.com
        X-AntiAbuse: Originator/Caller UID/GID - [507 504] / [47 12]
        X-AntiAbuse: Sender Address Domain - server.domain1.com
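    The header shows why Gmail adds the "via": the DKIM signature's d= is server.domain1.com while the From: is domain2.com, so the two don't align. A hedged sketch of an Exim-style fix is to sign with the sender's domain instead of the hostname; this assumes a per-domain key exists at the illustrative path below:

        # in the remote_smtp transport
        dkim_domain = ${lc:${domain:$h_from:}}
        dkim_selector = default
        dkim_private_key = /etc/exim/dkim/${dkim_domain}.key
        dkim_canon = relaxed

    A matching default._domainkey TXT record would then be published for domain2.com rather than for the server hostname.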


  • PHP-FPM performing worse than mod_php

    - by lordstyx
    Recently the website I maintain has been growing a lot, and I reached the point where I wanted to switch from Apache to nginx, because I kept reading that it performs much better. Now I've done the switch and, I have to say, nginx is keeping up just fine. However, php-fpm is becoming a problem. Where the PHP pages used to take 0.1 seconds to generate, with the same load they now take around 3 seconds! Furthermore, nginx's error.log is being spammed with errors like:

        upstream timed out (110: Connection timed out) while connecting to upstream, client: ...

    I also tried using Unix sockets instead, but those complain about the following:

        connect() to unix:/tmp/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    I've fiddled with settings here and there, but nothing seems to work. Changing pm.max_children doesn't seem to help much either, but at its current value of 350 it seems to be the lesser of all evils. The server has 3 GB RAM (not all of it free, due to a MySQL server also running) along with two dual-core processors (4 cores in total). Am I doing something majorly wrong with the settings here, or is the server simply not capable enough?

    EDIT: Here is the nginx server block:

        server {
            listen 80;
            listen [::]:80 default ipv6only=on;

            root /var/www;
            index index.php index.html index.htm;
            server_name localhost;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location = /50x.html {
                root /usr/share/nginx/www;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                try_files $uri =404;
                # With php5-cgi alone:
                fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                #fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    And the php-fpm pool:

        [www]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9000
        ;listen = /tmp/php5-fpm.sock
        listen.backlog = -1
        pm = dynamic
        pm.max_children = 350
        pm.start_servers = 200
        pm.min_spare_servers = 10
        pm.max_spare_servers = 350
        pm.max_requests = 1536
        rlimit_files = 65536
        rlimit_core = unlimited
        chdir = /
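    With 350 children on a 3 GB box that also runs MySQL, the pool can easily push the machine into swap, which matches the connect timeouts. A hedged starting point is to size pm.max_children from memory instead of guessing: average PHP worker RSS times max_children should fit in the RAM left over after MySQL. A sketch assuming roughly 1.5 GB is free for PHP and each worker uses about 50 MB (measure the real figure, e.g. with `ps -ylC php5-fpm --sort=rss`, and adjust):

        pm = dynamic
        pm.max_children = 30
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15

    Fewer workers that never swap will usually outperform hundreds of workers fighting over memory.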


  • Ubuntu and Postfix Configuration Issues

    - by Obi Hill
    I recently installed Postfix on Ubuntu Natty. I'm having a problem with the configuration. Firstly, here is my Postfix configuration file:

        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        mydomain = $myorigin
        myhostname = mail.nairanode.com
        alias_maps = hash:/etc/postfix/aliases
        alias_database = hash:/etc/postfix/aliases

        # this specifies where the virtual mailbox folders will be located
        virtual_mailbox_base = /var/spool/mail/virtual
        # this is for the mailbox location for each user
        virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf
        # and this is for aliases
        virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf
        # and this is for domain lookups
        virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf
        # this is how to connect to the domains (all virtual, but the option is there)
        # not used yet
        # transport_maps = mysql:/etc/postfix/mysql_transport.cf
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000

        mydestination = $myorigin, $myhostname, localhost.localdomain, , localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        #mynetworks_style = host

        # ADDITIONAL
        unknown_local_recipient_reject_code = 550
        maximal_queue_lifetime = 7d
        minimal_backoff_time = 1000s
        maximal_backoff_time = 8000s
        smtp_helo_timeout = 60s
        smtpd_recipient_limit = 16
        smtpd_soft_error_limit = 3
        smtpd_hard_error_limit = 12

        # Requirements for the HELO statement
        smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit
        # Requirements for the sender details
        smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_$
        # Requirements for the connecting server
        smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.n$
        # Requirement for the recipient address
        smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_do$

        # require proper helo at connections
        smtpd_helo_required = yes
        # waste spammers time before rejecting them
        smtpd_delay_reject = yes
        disable_vrfy_command = yes

    Here is also my /etc/postfix/aliases:

        # See man 5 aliases for format
        postmaster: root

    And my /etc/mailname:

        nairanode.com

    I've also updated my hostname to nairanode.com. However, when I run postalias /etc/postfix/aliases I get the following:

        postalias: warning: valid_hostname: invalid character 47(decimal): /etc/mailname
        postalias: fatal: file /etc/postfix/main.cf: parameter mydomain: bad parameter value: /etc/mailname

    Is there something I'm doing wrong? I noticed that when I replace myorigin = /etc/mailname with myorigin = nairanode.com in my Postfix config, I don't see any errors anymore after calling postalias. Is this a bug or something?
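    The file form of myorigin is resolved only where Postfix expects a file reference; a parameter like mydomain that expands $myorigin as a string gets the literal path instead, and "/etc/mailname" is not a valid hostname (character 47 is the slash). A hedged fix is to stop deriving mydomain from myorigin and set it explicitly:

        # main.cf: keep the Debian-style file reference for myorigin,
        # but give mydomain a real domain name
        myorigin = /etc/mailname
        mydomain = nairanode.com
        myhostname = mail.nairanode.com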


  • I have been told to accept one error with Memtest86+

    - by DustByte
    I bought a new computer back in August with 4x4 GB RAM and had problems with the RAM. They sent me four new sticks, which also generated errors. I singled out four sticks (from the eight I now had) that didn't generate any errors. Then, by coincidence, I discovered a new RAM error last week (this time no BSOD). I contacted the company; according to them there have been issues with a bad stock from last summer, so they sent me two tested 8 GB sticks.

    I've been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours, but so far there is only this one error. I contacted the company where I bought the computer. They wrote back:

        I wouldn't worry about that one fail. We have had similar situations here whereby it
        passes numerous times but then fails once. We think it's an issue with memtest; after
        all, memory is faulty or it isn't, so you can't really have it pass a few times, fail
        the next time around and then pass again! Please trust me on this and continue with
        the memory we sent you, and if your problems continue we'll look at getting it
        replaced again.

    I gather from other forum posts that many people do not accept a single error. What could this single error signify: faulty RAM, or a glitch in the Memtest program (or something else)?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error could occur and be acceptable, but repeated errors at the same address would indicate a malfunction. Memtest has now run for 45 hours and I still have only one error. For everyone's information, I will keep running the test. In less than two days I am going away for a month, and I will most likely leave Memtest running. As I do not have a UPS, there is a risk that a power outage will ruin the experiment. The computer is a desktop, so I cannot bring it with me (which would curiously have exposed it to more cosmic rays, as I will be flying ;)).


  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network, I want to create a number of non-interactive user accounts on a Windows 8 Pro machine, in addition to the existing set of interactive accounts. The users corresponding to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on the welcome screen. In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of the "Users" group, and then including them in the "Deny log on locally" list in the Local Security Policy settings. This always had the desired effect.

    However, my question is whether this is the right/best way to do it. The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from the "Users" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open the "Sign out/Lock" menu). The command list that drops down contains the "Sign out" and "Lock" commands as well as the names of other users (for the "switch user" functionality), and for some reason that list includes the extra users from the "Deny log on locally" list. Interestingly, this happens when the current user belongs to "Users", but not when the current user is from "Administrators".

    For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can log on locally), "A" (from "Users", can log on locally), and "B" (from "Users", denied local logon). When "Administrator" is logged in, he only sees user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he sees both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to a welcome screen that lists only "Administrator" and "A".

    Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering whether going through the "Deny log on locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
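    A hedged complement to the "Deny log on locally" policy is the old SpecialAccounts registry tweak, which hides a named account from the welcome screen and, in most builds, from the user-switching lists as well; a sketch for the hypothetical account "B" from the example above (run from an elevated prompt):

        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v B /t REG_DWORD /d 0

    Setting the value to 1 (or deleting it) makes the account visible again.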


  • SSH Connection Refused - Debug using Recovery Console

    - by olrehm
    Hey everyone, I have found a ton of answered questions about debugging why one cannot connect via SSH, but they all seem to require that you can still access the system, or they say that without such access nothing can be done. In my case, I cannot access the system directly, but I do have access to the filesystem using a recovery console. So this is the situation:

    My provider did a kernel update today and in the process also rebooted my server. For some reason, I can no longer connect via SSH; instead I get:

        ssh: connect to host mydomain.de port 22: Connection refused

    I do not know whether sshd is just not running, or whether something (e.g. iptables) is blocking my SSH connection attempts. I looked at the log files; none of the files in /var/log contain any mention of ssh, and /var/log/auth.log is empty. Before the kernel update I could log in just fine, and I used certificates so that I would not need a password every time I connect from my local machine.

    What I have tried so far: I looked in /etc/rc*.d/ for a link to the /etc/init.d/ssh script and found none, so I expect that sshd is not started properly on boot. Since I cannot run any programs on the system, I cannot use update-rc.d to change this. I tried to make a link manually using

        ln -s /etc/init.d/ssh /etc/rc6.d/K09sshd

    and restarted the server; this did not fix the problem. I do not know whether it is at all possible to do it like this, whether it is correct to create the link in rc6.d, and whether the K09 is correct; I just copied that from apache. I also tried to change my /etc/iptables.rules file to allow everything:

        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *mangle
        :PREROUTING ACCEPT [7468813:1758703692]
        :INPUT ACCEPT [7468810:1758703548]
        :FORWARD ACCEPT [3:144]
        :OUTPUT ACCEPT [7935930:3682829426]
        :POSTROUTING ACCEPT [7935933:3682829570]
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009
        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *filter
        :INPUT ACCEPT [7339662:1665166559]
        :FORWARD ACCEPT [3:144]
        :OUTPUT ACCEPT [7935930:3682829426]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
        -A INPUT -j ACCEPT
        -A FORWARD -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009
        # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009
        *nat
        :PREROUTING ACCEPT [101662:5379853]
        :POSTROUTING ACCEPT [393275:25394346]
        :OUTPUT ACCEPT [393273:25394250]
        COMMIT
        # Completed on Thu Dec 10 18:05:32 2009

    I am not sure this is done correctly or has any effect at all. I also did not find any mention of iptables in any file in /var/log. So what else can I do? Thank you for your help.
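    The manual link above points in the wrong place: on Debian-style systems, rc6.d is the reboot runlevel and links named K* are stop scripts, so K09sshd in rc6.d never starts the daemon. A hedged sketch of start links that can be created purely through the recovery console's filesystem access (S plus a two-digit ordering number, in the normal multi-user runlevels):

        ln -s ../init.d/ssh /etc/rc2.d/S20ssh
        ln -s ../init.d/ssh /etc/rc3.d/S20ssh
        ln -s ../init.d/ssh /etc/rc4.d/S20ssh
        ln -s ../init.d/ssh /etc/rc5.d/S20ssh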


  • Model M Keyboard inputs incorrect characters after logging in to Fedora

    - by mickburkejnr
    I recently bought a 24-year-old IBM Model M keyboard. From what I gather, it had been left on a shelf for the last 5 years, so you can imagine the amount of dust, dirt and crap that was on it. Before cleaning it, I plugged it in to my laptop (running Fedora 17) using a PS/2-to-USB adapter. What I found was that, while it still works, the keys I press don't correspond to what is displayed on the screen. So, for example, when I type S on the keyboard, I get ß displayed on the screen instead. At the time, I put this down to the adapter not working properly.

    Since then, I have stripped the keys off the keyboard and cleaned the whole thing; it looks like it's just come out of the box! I then plugged it in to my computer (also running Fedora 17) via a standard PS/2 plug. The computer loaded up to the login screen, I typed in my password, pressed Enter, and logged straight in to my machine.

    At this point, I opened up a text editor and started typing some stuff. To my horror, the keystrokes I was entering weren't coming up as intended. What came up instead were characters that would map to the pressed keys, but only under a different keyboard language setting. I opened up a program to see which keyboard language had been selected, and the correct one for the keyboard was selected (UK, in my case). I opened up a window that shows which characters map to which keys, pressed every single key on the keyboard, and every corresponding block representing each key lit up. I went back to the text editor to try again, but I was still getting these random characters. What's more, the Backspace key would not work, although in the other utility it would flash when pressed.

    What I know is that at the login screen the keyboard must have entered the correct characters, otherwise I wouldn't have been able to log in. Furthermore, keys that don't respond while using a text editor are still sending signals to the computer, as illustrated in that keyboard utility. The question is, why are random characters displayed when they really shouldn't be? Would this be a hardware fault or a software issue?
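    Since the console login worked but the desktop session produced the wrong characters, it is worth checking what layout X actually has loaded once logged in; S coming out as ß suggests a German-style mapping is being applied somewhere. A hedged check and per-session fix from a terminal:

        # show the layout X currently has loaded
        setxkbmap -query

        # force the UK layout for this session
        setxkbmap gb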


  • Cannot get sound over HDMI in Kubuntu 9.10

    - by user32509
    I have used an HDMI cable to connect my LCD (which is connected to my speakers) to my Nvidia 275 GTX graphics card. I cannot get the sound output to work. The hardware itself is probably working; I tested it under Windows. Currently I am running Kubuntu 9.10 64-bit with Nvidia driver 190.53. The sound output worked fine before I installed the HDMI connection. (The output was in German; it is translated below. I can change the locale if you tell me how :))

        aplay -l
        **** List of PLAYBACK hardware devices ****
        Card 0: Intel [HDA Intel], device 0: ALC889A Analog [ALC889A Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        Card 0: Intel [HDA Intel], device 1: ALC889A Digital [ALC889A Digital]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

        aplay -L
        front:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            Front speakers
        surround40:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=Intel,DEV=0
            HDA Intel, ALC889A Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=Intel,DEV=0
            HDA Intel, ALC889A Digital
            IEC958 (S/PDIF) Digital Audio Output
        null
            Discard all samples (playback) or generate zero samples (capture)
        pulse
            Playback/recording through the PulseAudio sound server

    And I disabled mute in KMix on all channels :)

    Edit: lspci -v

        ...
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
            Subsystem: Giga-byte Technology Device a022
            Flags: bus master, fast devsel, latency 0, IRQ 22
            Memory at ea400000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel <?>
            Capabilities: [130] Root Complex Link <?>
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel
        ...

        cat /proc/asound/version
        Advanced Linux Sound Architecture Driver Version 1.0.20.

        lsmod | grep snd_hda_intel
        snd_hda_intel          31880  2
        snd_hda_codec          87584  2 snd_hda_codec_realtek,snd_hda_intel
        snd_pcm                93160  3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
        snd                    77096  16 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        snd_page_alloc         10928  2 snd_hda_intel,snd_pcm

    I think I am missing the something-hdmi module? Is there such a thing?
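    One hedged observation: NVIDIA cards of the GTX 275 generation typically have no HDMI audio device of their own; audio usually reaches the card over an internal S/PDIF header from the motherboard, so the sound would still go out through the Intel card's digital output rather than a separate "hdmi" device. A quick test using the IEC958 device that already shows up in the aplay -L listing (unmute the S/PDIF channel in alsamixer first):

        aplay -D iec958:CARD=Intel,DEV=0 /usr/share/sounds/alsa/Front_Center.wav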

    Read the article

  • MySQL won't start, reinstall fails on Ubuntu 12.04

    - by Evils
    My problem started yesterday night when I tried to change the my.cnf config on my Ubuntu 12.04 x64 system. I simply tried to change the bind-address parameter from 127.0.0.1 to 0.0.0.0. A simple restart after a reboot gave this error:

    stop: Unknown instance:
    start: Job failed to start

    I then tried to start MySQL by running 'mysqld', which outputs this:

    130701 11:05:59 [Note] Plugin 'FEDERATED' is disabled.
    mysqld: Table 'mysql.plugin' doesn't exist
    130701 11:05:59 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
    130701 11:05:59 InnoDB: The InnoDB memory heap is disabled
    130701 11:05:59 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130701 11:05:59 InnoDB: Compressed tables use zlib 1.2.3.4
    130701 11:05:59 InnoDB: Initializing buffer pool, size = 128.0M
    130701 11:05:59 InnoDB: Completed initialization of buffer pool
    130701 11:05:59 InnoDB: highest supported file format is Barracuda.
    130701 11:05:59 InnoDB: Waiting for the background threads to start
    130701 11:06:00 InnoDB: 5.5.31 started; log sequence number 1595675
    130701 11:06:00 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
    130701 11:06:00 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    130701 11:06:00 [Note] Server socket created on IP: '127.0.0.1'.
    130701 11:06:00 [ERROR] Can't start server : Bind on unix socket: Permission denied
    130701 11:06:00 [ERROR] Do you already have another mysqld server running on socket: /var/run/mysqld/mysqld.sock ?
    130701 11:06:00 [ERROR] Aborting
    130701 11:06:00 InnoDB: Starting shutdown...
    130701 11:06:00 InnoDB: Shutdown completed; log sequence number 1595675
    130701 11:06:00 [Note] mysqld: Shutdown complete

    Meanwhile, I have already tried to reinstall and purge the complete mysql package, which results in another error saying that dpkg can't change the admin's password. Along with that error came another: when trying to install anything new with apt, it always says 'fopen: permission denied' right after it tries to update my man-db. This is my dmesg output:

    [ 6879.687998] type=1400 audit(1372669683.397:36): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=9336 comm="apparmor_parser"
    [ 6881.323215] init: mysql main process (9340) terminated with status 1
    [ 6881.323316] init: mysql respawning too fast, stopped

    Any help will be appreciated, as this is a production server which is useless without MySQL.
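
    The socket bind failure and the 'fopen: permission denied' from apt both point at broken filesystem permissions rather than anything in my.cnf. A minimal first-aid sketch, assuming the socket directory's ownership was clobbered (the AppArmor step is purely diagnostic):

        # Recreate the socket directory with the ownership mysqld expects
        sudo mkdir -p /var/run/mysqld
        sudo chown mysql:mysql /var/run/mysqld
        sudo chmod 755 /var/run/mysqld
        # dmesg shows an AppArmor profile_replace just before the crash;
        # complain mode tests whether the profile is blocking the socket
        sudo aa-complain /usr/sbin/mysqld   # from the apparmor-utils package
        sudo service mysql start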

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that we have available to us before we send the base image in for replication by the rental company. Every year, we have presenters approach us on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes the data is much larger, up to 5 GiB. Our current strategy for pushing these files uses batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy push to drop the torrent into a folder on the remote machines that is monitored by an installed BT client. Often, this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes. We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task. We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has allowed us to have a much faster response time, but the whole batch process can get messy when there are multiple jobs being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this large a scale, plus it is really quite expensive for a once-a-year task like this. EDIT: There is a hard requirement that the remote machines on the floor are running Windows. The control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?
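
    For concreteness, a minimal sketch of the kind of batch/RoboCopy push loop described above; rooms.txt, the payload path and the share names are hypothetical stand-ins, not the real setup:

        @echo off
        rem Mirror the prepared payload to every machine listed in rooms.txt
        rem (one hostname per line); /MT parallelises the copy on Windows 7
        for /f "usebackq delims=" %%H in ("rooms.txt") do (
            robocopy "D:\push\payload" "\\%%H\c$\ConferenceData" /MIR /R:1 /W:1 /MT:16 /LOG+:push.log
        )
        rem Fire-and-forget a command on each machine once the copy lands
        for /f "usebackq delims=" %%H in ("rooms.txt") do (
            psexec \\%%H -d cmd /c "C:\ConferenceData\setup.cmd"
        )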

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide: hosting a MySQL database (for training and testing purposes), an FTP server, a personal mail server, and a home media server. With this in mind I've done some research and found some viable solutions: a standard PC with the appropriate software (either second-hand or new), a non-solid-state mini-ITX system, or a solid-state, fanless mini-ITX system. I've also noted the pros and cons of each system: A standard second-hand PC with old hardware would be the cheapest option, but it could have lacking processing power, not enough RAM and generally faulty hardware, plus huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware that will stay that way for quite some time, so it's a good investment; but again, the main problems are power consumption, heat generation and noise levels. A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware, but it will generate noise and heat, which will be even worse because of the size. A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat; the main disadvantage is the read/write limitations of flash memory. All in all, I'm leaning towards a non-solid-state mini-ITX because of those read/write issues. So, after this overview of what I do know, my questions are: Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong. Are any of these solutions viable? If yes, which one is best for my purposes? If not, what would you suggest? Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently considering four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have any experience with it? Do you recommend it, or something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media: same as the mail service, any recommended software? Thank you very much.

    Read the article

  • Nginx https rewrite turns POST to GET

    - by x7311
    My proxy server runs on IP A, and this is how people access my web service. The nginx configuration redirects to a virtual machine on IP B. For the proxy server on IP A, I have this in my sites-available:

    server {
        listen 443;
        ssl on;
        ssl_certificate nginx.pem;
        ssl_certificate_key nginx.key;
        client_max_body_size 200M;
        server_name localhost 127.0.0.1;
        server_name_in_redirect off;
        location / {
            proxy_pass http://10.10.0.59:80;
            proxy_redirect http://10.10.0.59:80/ /;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 80;
        rewrite ^(.*) https://$http_host$1 permanent;
        server_name localhost 127.0.0.1;
        server_name_in_redirect off;
        location / {
            proxy_pass http://10.10.0.59:80;
            proxy_redirect http://10.10.0.59:80/ /;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    The proxy_redirect was taken from "how do I get nginx to forward HTTP POST requests via rewrite?". Everything that hits the public IP hits 443 because of the rewrite; internally, we forward to port 80 on the virtual machine. But when I run a Python script such as the one below to test our configuration:

    import requests
    data = {'username': '....', 'password': '.....'}
    url = 'http://IP_A/api/service/signup'
    res = requests.post(url, data=data, verify=False)
    print res
    print res.json
    print res.status_code
    print res.headers

    I am getting a 405 Method Not Allowed. In nginx we found that when the request hit the internal server, the internal nginx was receiving a GET request, even though the original request was a POST (as the Python script shows). So it seems like the rewrite has a problem. Any idea how to fix this? When I commented out the rewrite, the request hit port 80 and went through. Since the rewrite was able to talk to our internal server, the rewrite itself works; it just dropped the POST to a GET. Thank you! (This will also be asked on the Nginx forum because this is a critical blocker...)
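
    The redirect, not the proxying, is the likely culprit: a rewrite marked permanent answers with a 301, and most HTTP clients (python-requests included) follow a 301 to a POST by reissuing the request as a GET. A 307 redirect instructs the client to repeat the same method and body. A minimal sketch of the port-80 server block, assuming an nginx build recent enough to return arbitrary 3xx codes:

        server {
            listen 80;
            server_name localhost 127.0.0.1;
            # 307 = repeat the request, method and body intact, at the new URL
            return 307 https://$http_host$request_uri;
        }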

    Read the article
