Search Results

Search found 2361 results on 95 pages for 'incorrect'.

Page 57 of 95

  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new TeamCity build agent machine which, when started up, tries to connect to the build server and fails. It never shows up in the connected, disconnected or unauthorised agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log (I have edited some identifying data out of this excerpt):

      [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration - Registering on server http://10.0.10.16, AgentDetails{Name='my-local', AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14', AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424', PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins], AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
      [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Call http://10.0.10.16/RPC2 buildServer.registerAgent3: org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect status code: 504 Gateway Time-out
      [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Connection to TeamCity server is probably lost. Will be trying to restore it. Take a look at logs/teamcity-agent.log for details (unless you're using custom logging).

    But I can reach the build server. In fact, tracert shows that it is very nearby:

      Tracing route to TEAMCITYSERVER [10.0.10.16] over a maximum of 30 hops:
        1    <1 ms    <1 ms    <1 ms  10.0.2.1
        2    <1 ms    <1 ms    <1 ms  TEAMCITYSERVER [10.0.10.16]
      Trace complete.

    I can see a TeamCity login page if I hit http://10.0.10.16 in the browser. The TeamCity service is logging in as the same (local administrator) account I used to log in and test the network. The build agent is a Windows 2008 Server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled the firewalls on both the Windows and Ubuntu machines. Other VMs with a similar configuration can connect fine and do not report this error. What can possibly be preventing this connection?
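
    A 504 coming back from the registration call usually means some HTTP intermediary (a proxy) is answering instead of the TeamCity server itself. One quick check worth doing on the agent VM is whether a system-wide proxy is configured and whether the agent is pointed straight at the server; this is only a sketch, and the agent install path C:\BuildAgent is an assumption:

      rem Look for a WinHTTP or environment proxy that could intercept the XML-RPC call
      netsh winhttp show proxy
      echo %HTTP_PROXY% %HTTPS_PROXY%

      rem The agent's serverUrl should point directly at the TeamCity server
      findstr serverUrl C:\BuildAgent\conf\buildAgent.properties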


  • Diagnosing permission problems with Cobian Backup to network share

    - by DaveBurns
    I'm running the latest Cobian 11. I have a Synology DS412 NAS. All of my machines (Mac and Windows) access it just fine when I'm logged in and browse to it manually. I have Cobian installed as a service on two Windows machines: WinXP SP3 and Win7 x64. On both machines, the service is set to log on with my user account, which is in the Windows Administrators group.

    Backups on both machines fail with the message: Couldn't create the destination directory "\\nas1\backups\foo\bar\": The filename, directory name, or volume label syntax is incorrect.

    I have tried setting the NAS's share to allow anonymous read/write access, but it made no difference. Although I want the backups to run unattended in the middle of the night, I have tested them by running them manually while I'm logged in, but no luck. Before starting that, I make sure that I can browse to the NAS with Explorer to ensure that any authentication session between Windows and the NAS has not expired. Still no luck. I have tried creating the destination directory on the NAS before the backup, and also deleting it so the backup job could create it with the client's credentials, but no luck. The usual answer in the Cobian support forums is that there is a permission problem. I agree. But at this point, what can I do to diagnose this further?
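
    One diagnostic step that sometimes narrows this down is to establish the connection to the share explicitly, with credentials, in the same session the backup runs in, then retry the job. This is only a sketch; the NAS account name and password here are made up:

      rem Pre-authenticate against the share (no drive letter needed)
      net use \\nas1\backups MySecretPassword /user:nas1\backupuser

      rem Confirm the exact path Cobian is being given actually resolves
      dir \\nas1\backups\foo\bar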


  • Xorg server won't start after fresh install of Debian 5.04, screen goes blank (Intel Atom D510(Pinet

    - by Kamil Zadora
    Hello, I have installed Debian 5.04 Lenny on my new Intel D510MO motherboard. I fixed some issues with incorrect drive mapping (for some reason during installation my hdd was on sdb; after a restart it is under sda - fixed in grub), and now I am struggling with getting the graphical environment up and running. I installed the graphical environment using the Debian installer. I am not a Linux expert by any means; I assume that I need to edit the xorg.conf file. Any hints appreciated!

    UPDATE 1: No change after dpkg-reconfigure xserver-xorg. Here is my current xorg.conf:

      # xorg.conf (X.Org X Window System server configuration file)
      #
      # This file was generated by dexconf, the Debian X Configuration tool, using
      # values from the debconf database.
      #
      # Edit this file with caution, and see the xorg.conf manual page.
      # (Type "man xorg.conf" at the shell prompt.)
      #
      # This file is automatically updated on xserver-xorg package upgrades *only*
      # if it has not been modified since the last upgrade of the xserver-xorg
      # package.
      #
      # If you have edited this file but would like it to be automatically updated
      # again, run the following command:
      #   sudo dpkg-reconfigure -phigh xserver-xorg

      Section "InputDevice"
              Identifier      "Generic Keyboard"
              Driver          "kbd"
              Option          "XkbRules"      "xorg"
              Option          "XkbModel"      "pc105"
              Option          "XkbLayout"     "pl"
      EndSection

      Section "InputDevice"
              Identifier      "Configured Mouse"
              Driver          "mouse"
      EndSection

      Section "Device"
              Identifier      "Configured Video Device"
      EndSection

      Section "Monitor"
              Identifier      "Configured Monitor"
      EndSection

      Section "Screen"
              Identifier      "Default Screen"
              Monitor         "Configured Monitor"
      EndSection

    UPDATE 2: I have installed the vnc4server package. I can connect over VNC from my Windows 7 laptop and I see an empty desktop with a terminal window open. It seems that the X server and gdm are running but they can't talk to my GPU. I am not sure if I can use any GUI tool to configure it over VNC, as all I see is the terminal window, no taskbars etc.

    UPDATE 3: My current Xorg.0.log: http://pastebin.pl/18918 The graphics chipset integrated into the D510 processor is the Intel 945GC.
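
    Since the Device section is empty, X is left to autodetect the driver, and a blank screen on a 945GC often comes down to that. A minimal thing to try (a sketch, assuming the Intel driver package is available on the install media or a mirror) is to install the driver and name it explicitly in the Device section:

      apt-get install xserver-xorg-video-intel

      Section "Device"
              Identifier      "Configured Video Device"
              Driver          "intel"
      EndSection

    After editing xorg.conf, restart gdm (or reboot) and check /var/log/Xorg.0.log to see which driver was actually loaded.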


  • Temp file folder full, but clearing it out doesn't seem to work

    - by Vegar Westerlund
    I got this error on our build server:

      MSB6003: The specified task executable could not be run. MSB5003: Failed to create a temporary file. Temporary files folder is full or its path is incorrect. The directory name is invalid. [C:\Users\swdev_build\bamboo-agent-home\xml-data\build-dir\XXXX-ZZZZZZ-JOB1\build.xml]

    This was when trying to run an MSBuild task to run our test suite. Using the power of Google, it seemed that this should be a problem with the %TEMP% folder running out of temp file names, apparently because a 4-digit hex name is used (for a total of 65,536 temporary files). The problem is that the error persists, even after going into the %TEMP% folder and deleting everything before rebooting the machine.

    Does anyone know how to fix the issue and, more importantly, how to prevent it from happening again? What is the preferred way of cleaning up these temp files? Make it a part of the build process?

    Update: Actually I cleared the TEMP folder of the only local user, but since the build server is running as the SYSTEM user, it probably has its temp folder somewhere else.
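
    It may be worth confirming which account the build agent service actually runs as, and then clearing that account's temp folder: the SYSTEM account's %TEMP% normally resolves to C:\Windows\Temp, while a named service account (the build path above suggests swdev_build) uses its own profile. A sketch, with the exact paths as assumptions:

      rem Temp for the SYSTEM account
      dir /a "C:\Windows\Temp"

      rem Temp for a dedicated build user profile
      dir /a "C:\Users\swdev_build\AppData\Local\Temp"

      rem Once the right folder is identified, clear the stale files
      del /q "C:\Users\swdev_build\AppData\Local\Temp\tmp*.tmp"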


  • imapsync - Authentication failed

    - by Touff
    I've deployed many Google Apps accounts and have used imapsync a number of times to migrate accounts to Google Apps. This time however, no matter what I try, imapsync refuses to work, claiming my credentials are incorrect - I've checked them time and time again and they are 100% correct. On Ubuntu 12, built from source, my command is:

      imapsync --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN

    Full output from the command:

      get options: [1] PID is 21316
      $RCSfile: imapsync,v $ $Revision: 1.592 $ $Date:
      With perl 5.14.2 Mail::IMAPClient 3.35
      Command line used: /usr/bin/imapsync --debug --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN
      Temp directory is /tmp
      PID file is /tmp/imapsync.pid
      Modules version list: Mail::IMAPClient 3.35 IO::Socket 1.32 IO::Socket::IP ? IO::Socket::INET 1.31 IO::Socket::SSL 1.53 Net::SSLeay 1.42 Digest::MD5 2.51 Digest::HMAC_MD5 1.01 Digest::HMAC_SHA1 1.03 Term::ReadKey 2.30 Authen::NTLM 1.09 File::Spec 3.33 Time::HiRes 1.972101 URI::Escape 3.31 Data::Uniqid 0.12 IMAPClient 3.35
      Info: turned ON syncinternaldates, will set the internal dates (arrival dates) on host2 same as host1.
      Info: will try to use LOGIN authentication on host1
      Info: will try to use PLAIN authentication on host2
      Info: imap connexions timeout is 120 seconds
      Host1: IMAP server [SERVER1] port [993] user [USER1]
      Host2: IMAP server [imap.gmail.com] port [993] user [USER2]
      Host1: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
      Host1: SERVER1 says it has CAPABILITY for AUTHENTICATE LOGIN
      Host1: success login on [SERVER1] with user [USER1] auth [LOGIN]
      Host2: * OK Gimap ready for requests from MY-VPS
      Host2: imap.gmail.com says it has CAPABILITY for AUTHENTICATE PLAIN
      Failure: error login on [imap.gmail.com] with user [USER2] auth [PLAIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    I have tried -authmech2 LOGIN as well, which returns:

      Host2: imap.gmail.com says it has NO CAPABILITY for AUTHENTICATE LOGIN
      Failure: error login on [imap.gmail.com] with user [[email protected]] auth [LOGIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    If anyone can shed some light on this I would greatly appreciate it.


  • How to avoid lftp Certificate verification error?

    - by pattulus
    I'm trying to get my Pelican blog working. It uses lftp to transfer the actual blog to one's server, but I always get an error:

      mirror: Fatal error: Certificate verification: subjectAltName does not match 'blogname.com'

    I think lftp is checking the SSL certificate, and the quick setup of Pelican just forgot to include that I don't have SSL on my FTP. This is the code in Pelican's Makefile:

      ftp_upload: $(OUTPUTDIR)/index.html
              lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"

    which renders in the terminal as:

      lftp ftp://[email protected] -e "mirror -R /Volumes/HD/Users/me/Test/output /myblog_directory ; quit"

    What I managed so far is denying the SSL check by changing the Makefile to:

      lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no" "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"

    Due to my incorrect implementation I get logged in correctly (lftp [email protected]:~>) but the one-line feature doesn't work anymore and I have to enter the mirror command by hand:

      mirror -R /Volumes/HD/Users/me/Test/output/ /myblog_directory

    This works without an error or timeout. The question is how to do this with a one-liner. In addition I tried:

      set ssl:verify-certificate/ftp.myblog.com no

    and this trick to disable certificate verification in lftp:

      $ cat ~/.lftp/rc
      set ssl:verify-certificate no

    However, it seems there is no "rc" file in my lftp directory - so this has no chance to work.
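
    For the one-liner: lftp's -e option takes a list of commands separated by semicolons, so the "set" and the "mirror" can go into the same quoted string. A minimal sketch for the Makefile, keeping the existing variables:

      lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "set ftp:ssl-allow no; mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR); quit"

    If the server does support FTPS and only the certificate name is the problem, "set ssl:verify-certificate no" in the same position would keep the encryption while skipping the name check.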


  • Task Scheduler not running .bat or .vbs successfully

    - by Django Reinhardt
    Hi there, got this weird problem, which will hopefully have an obvious solution for some enlightened soul: We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped successfully executing... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than:

      CScript "C:\location\script.vbs" > Log.txt

    But the Task Scheduler fails with the following error:

      0x1: An incorrect function was called or an unknown function was called.

    The Log.txt file says:

      CScript Error: Initialization of the Windows Script Host failed. (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64) and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
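
    The "Not enough storage is available to process this command" message from WSH under Task Scheduler is commonly a symptom of the non-interactive desktop heap being exhausted rather than disk or RAM. One thing worth inspecting (a sketch, not a guaranteed fix) is the SharedSection setting, whose third number is the heap size in KB for non-interactive desktops - the kind scheduled tasks run on:

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows

    In the long value that comes back, look for something like SharedSection=1024,20480,768; raising the third number (for example to 1024 or 2048) and rebooting is the usual mitigation if the heap turns out to be the culprit.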


  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). I expect to see on the 4th attempt:

      Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password.

    The implementation follows the NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth.

      If locking out accounts after a number of incorrect login attempts is required by your
      security policy, implement use of pam_tally2.so. To enforce password lockout, add the
      following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

        auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

      Second, add to the top of the account lines:

        account required pam_tally2.so

    EDIT: I get the error message by resetting pam_tally2 during one of the login attempts:

      user@localhost's password: (bad password)
      Permission denied, please try again.
      user@localhost's password: (bad password)
      Permission denied, please try again.
      (reset pam_tally2 from another shell)
      user@localhost's password: (good password)
      Account locked due to ...
      Account locked due to ...
      Last login: ...
      [user@localhost ~]$


  • BSOD Dump - EXCEPTION_DOUBLE_FAULT - ON Windows 2008 Server 64bit

    - by Mark K
    Hello, my Windows 2008 Server (Datacenter edition) 64-bit has recently produced a series of BSODs in different applications. The error message is in general EXCEPTION_DOUBLE_FAULT. Can anyone please help with the analysis of the dump file below?

    Best regards, Mark

      2: kd> !analyze -v
      * Bugcheck Analysis *
      UNEXPECTED_KERNEL_MODE_TRAP (7f)
      This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel isn't allowed to have/catch (bound trap) or that is always instant death (double fault). The first number in the bugcheck params is the number of the trap (8 = double fault, etc). Consult an Intel x86 family manual to learn more about what these traps are. Here is a portion of those codes:
      If kv shows a taskGate use .tss on the part before the colon, then kv.
      Else if kv shows a trapframe use .trap on that value
      Else .trap on the appropriate frame will show where the trap was taken (on x86, this will be the ebp that goes with the procedure KiTrap)
      Endif
      kb will then show the corrected stack.
      Arguments:
      Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
      Arg2: 0000000080050033
      Arg3: 00000000000006f8
      Arg4: fffff800018b1678

      Debugging Details:
      BUGCHECK_STR: 0x7f_8
      CUSTOMER_CRASH_COUNT: 1
      DEFAULT_BUCKET_ID: DRIVER_FAULT_SERVER_MINIDUMP
      PROCESS_NAME: CustomerService.
      CURRENT_IRQL: 1

      EXCEPTION_RECORD: fffffa6004e45568 -- (.exr 0xfffffa6004e45568)
      ExceptionAddress: fffff800018a0150 (nt!RtlVirtualUnwind+0x0000000000000250)
      ExceptionCode: 10000004
      ExceptionFlags: 00000000
      NumberParameters: 2
      Parameter[0]: 0000000000000000
      Parameter[1]: 00000000000000d8

      TRAP_FRAME: fffffa6004e45610 -- (.trap 0xfffffa6004e45610)
      NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect.
      rax=0000000000000050 rbx=0000000000000000 rcx=0000000000000004
      rdx=00000000000000d8 rsi=0000000000000000 rdi=0000000000000000
      rip=fffff800018a0150 rsp=fffffa6004e457a0 rbp=fffffa6004e459e0
      r8=0000000000000006  r9=fffff8000181e000 r10=ffffffffffffff88
      r11=fffff80001a1c000 r12=0000000000000000 r13=0000000000000000
      r14=0000000000000000 r15=0000000000000000
      iopl=0 nv up ei pl zr na po nc
      nt!RtlVirtualUnwind+0x250:
      fffff800018a0150 488b02 mov rax,qword ptr [rdx] ds:00000000000000d8=????????????????
      Resetting default scope

      LAST_CONTROL_TRANSFER: from fffff800018781ee to fffff80001878450

      STACK_TEXT:
      fffffa6001768a68 fffff800018781ee : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
      fffffa6001768a70 fffff80001876a38 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x6e
      fffffa6001768bb0 fffff800018b1678 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb8
      fffffa6004e44e30 fffff800018782a9 : fffffa6004e45568 0000000000000001 fffffa6004e45610 000000000000023b : nt!KiDispatchException+0x34
      fffffa6004e45430 fffff800018770a5 : 0000000000000000 0000000000000000 0000000000000000 0000000000000001 : nt!KiExceptionDispatch+0xa9
      fffffa6004e45610 fffff800018a0150 : fffffa6004e46638 fffffa6004e46010 fffff80001965190 fffff8000181e000 : nt!KiPageFault+0x1e5
      fffffa6004e457a0 fffff800018a3f78 : fffffa6000000001 0000000000000000 0000000000000000 ffffffffffffff88 : nt!RtlVirtualUnwind+0x250
      fffffa6004e45810 fffff800018b1706 : fffffa6004e46638 fffffa6004e46010 fffffa6000000000 0000000000000000 : nt!RtlDispatchException+0x118
      fffffa6004e45f00 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDispatchException+0xc2

      STACK_COMMAND: kb
      FOLLOWUP_IP: nt!KiDoubleFaultAbort+b8
      fffff800`01876a38 90 nop
      SYMBOL_STACK_INDEX: 2
      SYMBOL_NAME: nt!KiDoubleFaultAbort+b8
      FOLLOWUP_NAME: MachineOwner
      MODULE_NAME: nt
      IMAGE_NAME: ntkrnlmp.exe
      DEBUG_FLR_IMAGE_TIMESTAMP: 4a7801eb
      FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
      BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
      Followup: MachineOwner


  • Task Scheduler not able to execute .vbs scripts successfully

    - by Django Reinhardt
    Apologies if this has a really obvious answer! We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs scripts stopped successfully executing (always timing out)... but could still be executed manually with no problems(!). Not knowing any good reason why the Task Scheduler should start having problems, we thought we'd try a little "creative thinking" and run the .vbs another way: via a .bat file executed by the Task Scheduler. Again we hit weird issues, but with a little more debugging information this time around. The .bat file run by Task Scheduler is nothing more than:

      CScript "C:\location\script.vbs" > Log.txt

    But after an attempt to run it, the Task Scheduler fails with the following error:

      0x1: An incorrect function was called or an unknown function was called.

    The Log.txt (as output from the .bat file above) says:

      CScript Error: Initialization of the Windows Script Host failed. (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64) and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!


  • VsFTPd - pam_mkhomedir

    - by Totor
    I am trying to set up an FTP server that authenticates against an LDAP server. This part is done and works. My server is vsftpd on Ubuntu Server 11.04, but I have to create the home directories for my LDAP users. I am trying to use the pam_mkhomedir module but it is not working: when I add its line to the /etc/pam.d/vsftpd file, my users can not log in to the FTP server anymore. The problem is that I have very little information on what is wrong. vsftpd just responds "530: login incorrect" and I could not find a way to get debug or error messages from pam_mkhomedir. Here are my different configuration files.

    The /etc/pam.d/vsftpd file:

      auth     required  pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
      auth     required  pam_ldap.so
      account  required  pam_ldap.so
      password required  pam_ldap.so
      session  optional  pam_mkhomedir.so skel=/home/skel debug

    The /etc/vsftpd.conf file:

      listen=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      dirmessage_enable=YES
      use_localtime=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      pam_service_name=vsftpd
      rsa_cert_file=/etc/ssl/private/vsftpd.pem
      guest_enable=YES
      session_support=YES
      log_ftp_protocol=YES
      tcp_wrappers=YES

    Permissions on /home and /home/skel:

      root@ftp:/home# ls -al
      total 16
      drwxrwxrwx  4 root root 4096 2011-10-11 21:19 .
      drwxr-xr-x 21 root root 4096 2011-09-27 13:32 ..
      drwxrwxrwx  2 root root 4096 2011-10-11 19:34 skel
      drwxrwxrwx  5 foo  foo  4096 2011-10-11 21:11 foo
      root@ftp:/home# ls -al skel/
      total 16
      drwxrwxrwx 2 root root 4096 2011-10-11 19:34 .
      drwxrwxrwx 4 root root 4096 2011-10-11 21:19 ..
      -rwxrwxrwx 1 root root 3352 2011-10-11 19:34 .bashrc
      -rwxrwxrwx 1 root root  675 2011-10-11 19:34 .profile

    Yes, I know, the permissions are not properly set, but security is not the issue here: I first need to get it to work. So, to recapitulate: without pam_mkhomedir my LDAP users can log in, but they cannot do anything because they are in an empty chrooted jail. If I add pam_mkhomedir, they cannot log in anymore. If anyone has an idea why, or knows how to get more information from the logs, I would be very grateful, thanks.
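
    To get more detail out of the PAM stack itself, one low-risk step is to turn on the debug option for the LDAP modules as well and watch the auth log while a login fails. This is only a sketch; the log locations assume a default Ubuntu syslog setup:

      # in /etc/pam.d/vsftpd, temporarily add debug to the modules under test
      auth     required  pam_ldap.so debug
      session  optional  pam_mkhomedir.so skel=/home/skel debug

      # in another shell, watch what PAM reports during a failed FTP login
      tail -f /var/log/auth.log /var/log/vsftpd.log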


  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors:

      Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
      Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume

    So I backed up all the registry entries with {f612849e-7125-11e0-8772-806e6f6e6963} in them, and deleted them based on some VERY sparse info from R1Soft. I then decided to restore them before I rebooted, and to do a system state backup first using MS Backup - and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured if MS Backup can't back up something, any other program is likely to have problems. Also, now that I realized the servers had FAT32 partitions on them, the error referencing NTFS takes on more weight.

    The partitions on both servers have the label "OS", but on one of the computers it is given a letter, and on the other not. So I am thinking that if I just convert the file systems from FAT32 to NTFS, it may solve the backup problem. So the question is this: Can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.
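
    For reference, the in-place conversion Windows provides is the convert utility; it is one-way (going back to FAT32 means reformatting) and the volume needs a drive letter or mount point to name it, so the partition without a letter would need one assigned in Disk Management first. A sketch, with E: standing in for whichever letter the "OS" partition ends up with:

      rem Confirm which volume the letter maps to before touching it
      vol E:

      rem Convert in place; if the volume is in use, the conversion is scheduled for the next reboot
      convert E: /FS:NTFS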


  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says:

      Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I cannot log in. In the server log I get this entry:

      Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea of what I can do? Thanks in advance!

      Log Name: Security
      Source: Microsoft-Windows-Security-Auditing
      Date: 12/29/2010 7:12:20 AM
      Event ID: 6273
      Task Category: Network Policy Server
      Level: Information
      Keywords: Audit Failure
      User: N/A
      Computer: VPN.domain.com
      Description: Network Policy Server denied access to a user. Contact the Network Policy Server administrator for more information.
      User:
        Security ID: domain\Administrator
        Account Name: domain\Administrator
        Account Domain: domani
        Fully Qualified Account Name: domain.com/Users/Administrator
      Client Machine:
        Security ID: NULL SID
        Account Name: -
        Fully Qualified Account Name: -
        OS-Version: -
        Called Station Identifier: 192.168.147.171
        Calling Station Identifier: 192.168.147.191
      NAS:
        NAS IPv4 Address: -
        NAS IPv6 Address: -
        NAS Identifier: VPN
        NAS Port-Type: Virtual
        NAS Port: 0
      RADIUS Client:
        Client Friendly Name: VPN
        Client IP Address: -
      Authentication Details:
        Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
        Network Policy Name: All
        Authentication Provider: Windows
        Authentication Server: VPN.domain.home
        Authentication Type: EAP
        EAP Type: Microsoft: Secured password (EAP-MSCHAP v2)
        Account Session Identifier: 313933
      Logging Results: Accounting information was written to the local log file.
      Reason Code: 16
      Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.


  • Remote Desktop Mobile mangles barcodes coming from scanner

    - by sfonck
    We have an application here using handhelds to scan barcodes. These handhelds actually make a remote desktop session to a server where the application runs. Works fine. Now we have bought some new Motorola MC55s running Windows Mobile 6.1 Classic, and when using the application over remote desktop it mangles the characters of the barcodes. I have already tried the following things:

    - When scanning a barcode on the MC55 itself, it is displayed correctly.
    - When scanning a barcode via the remote desktop into a Notepad session, it is incorrect.
    - Played with all options of the 'Remote Desktop Mobile' client - no result.
    - Disabled 'autocorrect' and 'suggest words when entering text' in the input settings - no result.

    The strange thing is:

    - a barcode which consists of only numbers gets scanned correctly
    - the mangled characters come through in lower case
    - for some codes \t is mangled in between (it should normally be entered after the barcode), e.g. 'PERIN4' becomes 'ERINp4', 'MGZB' becomes 'GZB m', 'BAK664' becomes 'AK664 b', 'MAGBFA01' becomes 'AGBFmA01', while '5021879949500' gets scanned correctly.

    Final solution: The supplier of the handhelds said the handheld was sending the characters too fast over the remote desktop connection. They changed the handheld to wait 50 ms between sending each character, which produces correct results now. Scanning a barcode became somewhat slower, but it's hardly noticeable to end users.


  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel


  • Side-By-Side Configuration Error VC90.CRT

    - by Swiss
    I keep receiving the following error when trying to run MiKTeX 2.8 or Visual Studio 2008 on 64-bit Windows Vista. It's particularly odd because both programs were working problem-free until a few days ago.

      The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.

    Opening the Application log provides the following information:

      Activation context generation failed for "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\texworks.exe". Error in manifest or policy file "C:\Program Files (x86)\MiKTeX 2.8\miktex\bin\Microsoft.VC90.CRT.MANIFEST" on line 4. Component identity found in manifest does not match the identity of the component requested. Reference is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.4148". Definition is Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.30729.1". Please use sxstrace.exe for detailed diagnosis.

    It looks like the problem is with Microsoft.VC90.CRT.MANIFEST, but I am not sure why or how to fix it. I have tried uninstalling/reinstalling Visual Studio and MiKTeX, as well as uninstalling/reinstalling Microsoft's C++ Redistributable, but nothing seems to be fixing this problem.
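
    Reading the logged identities, the manifest asks for the VC90 CRT at version 9.0.30729.4148 but the machine only resolves 9.0.30729.1, so installing a Visual C++ 2008 redistributable at least as new as the referenced version is one plausible fix. For the detailed trace the event message suggests, sxstrace can be run like this (a sketch of the standard usage on Vista/Windows 7):

      rem Start tracing, reproduce the failing launch, then press Enter to stop
      sxstrace Trace -logfile:sxstrace.etl

      rem Turn the binary trace into readable text
      sxstrace Parse -logfile:sxstrace.etl -outfile:sxstrace.txt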


  • How to use cURL to FTPS upload to SecureTransport (hint: SITE AUTH and client certificates)

    - by Seamus Abshere
    I'm trying to connect to SecureTransport 4.5.1 via FTPS using curl compiled with GnuTLS. You need to use --ftp-alternative-to-user "SITE AUTH" per http://curl.haxx.se/mail/lib-2006-07/0068.html

    Do you see anything wrong with my client certificates? I try with:

      # mycert.crt
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----

      # mykey.pem
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----

    And it says "530 No client certificate presented":

      myuser@myserver ~ $ curl -v --ftp-ssl --cert mycert.crt --key mykey.pem --ftp-alternative-to-user "SITE AUTH" -T helloworld.txt ftp://ftp.example.com:9876/upload/
      * About to connect() to ftp.example.com port 9876 (#0)
      * Trying 1.2.3.4... connected
      * Connected to ftp.example.com (1.2.3.4) port 9876 (#0)
      < 220 msn1 FTP server (SecureTransport 4.5.1) ready.
      > AUTH SSL
      < 334 SSLv23/TLSv1
      * found 142 certificates in /etc/ssl/certs/ca-certificates.crt
      > USER anonymous
      < 331 Password required for anonymous.
      > PASS [email protected]
      < 530 Login incorrect.
      > SITE AUTH
      < 530 No client certificate presented.
      * Access denied: 530
      * Closing connection #0
      curl: (67) Access denied: 530

    I also tried with a pk8 version...

      # openssl pkcs8 -in mykey.pem -topk8 -nocrypt > mykey.pk8
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----

    ...but got exactly the same result. What's the trick to sending a client certificate to SecureTransport?


  • ICMP Redirect Theory VS. Application

    - by joeqwerty
    I'm trying to watch ICMP redirects in a lab using Cisco Packet Tracer (version 5.3.2) and I'm not seeing them, which leads me to believe that either my lab configuration isn't correct, or my understanding of ICMP redirects isn't correct, or Packet Tracer doesn't support/use ICMP redirects. Here's what I believe to be true regarding ICMP redirects - routers send ICMP redirects when all of these conditions are met:

    - The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
    - The subnet or network of the source IP address is the same subnet or network as that of the next-hop IP address of the routed packet.
    - The datagram is not source-routed.
    - The router kernel is configured to send redirects.

    I have the lab set up in Cisco Packet Tracer as displayed in the image and would expect to see an ICMP redirect from Router1 when pinging from PC1 to PC3. I'm not seeing the ICMP redirect, and it looks like Router1 is actually routing all of the packets via Router2. I have IP ICMP debugging enabled on Router1 (and Router2) and I'm not seeing any ICMP redirect activity in either console. I'm also not seeing a route to the PC3 network in the routing table on PC1, which I think confirms that the ICMP redirect is not occurring. I'm using only static routing on Routers 1 and 2. Is my understanding of ICMP redirects incorrect, or is there a problem with my lab configuration, or does Packet Tracer not support/use ICMP redirects?


  • Vista logs into black screen, "Application Error 0xc0000022" for explorer.exe

    - by IMAPC
    Whenever I attempt to log into a Windows Vista computer (the only account on it), I'm presented with a black screen and a cursor. I can open Task Manager, and from there I can launch applications. It seems to be using Aero Basic (instead of the full Aero, which I had set as default before the problem started). When attempting to launch "explorer.exe" I get: explorer.exe - Application Error 'The application failed to initialize properly (0xc0000022). Click OK to terminate the application.' Every now and then I get an error along the lines of "the application has failed to start because its side-by-side configuration is incorrect; please see the application event log for more detail." I can boot into safe mode successfully, but I still get the black screen when I log in normally. I've tried most of the suggestions here, but they did not work. I'm attempting to back up everything right now in case the only fix is to reinstall Windows. Has anyone seen this before?


  • Ubuntu 10.04 preseed unattended install results in faulty partition table

    - by joschi
    I'm currently trying to set up an unattended installation of Ubuntu 10.04 (Lucid Lynx) through preseeding, but whenever I try to create a custom partition scheme, the Debian installer (which Ubuntu is using) produces a faulty partition table. I've taken the partition scheme described in the example preseed file:

      d-i partman-auto/expert_recipe string                      \
            boot-root ::                                         \
                    40 50 100 ext3                               \
                            $primary{ } $bootable{ }             \
                            method{ format } format{ }           \
                            use_filesystem{ } filesystem{ ext3 } \
                            mountpoint{ /boot }                  \
                    .                                            \
                    500 10000 1000000000 ext3                    \
                            method{ format } format{ }           \
                            use_filesystem{ } filesystem{ ext3 } \
                            mountpoint{ / }                      \
                    .                                            \
                    64 512 300% linux-swap                       \
                            method{ swap } format{ }             \
                    .

    Unfortunately it also produces an incorrect partition table on the disk. The installation process itself is working, and the installed system eventually boots and is working, as far as I can tell. But fdisk and cfdisk are still complaining:

      # fdisk -l /dev/sda

      Disk /dev/sda: 17.2 GB, 17179869184 bytes
      255 heads, 63 sectors/track, 2088 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000a1cdd

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1           5       37888   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2               5        2089    16736257    5  Extended
      /dev/sda5               5        2013    16121856   83  Linux
      /dev/sda6            2013        2089      613376   82  Linux swap / Solaris

    cfdisk even refuses to start at all:

      # cfdisk /dev/sda
      FATAL ERROR: Bad primary partition 1: Partition ends in the final partial cylinder

    parted, on the other hand, does not complain about the cylinder boundary of /dev/sda1:

      # parted /dev/sda p
      Model: VMware Virtual disk (scsi)
      Disk /dev/sda: 17.2GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End     Size    Type      File system     Flags
       1      1049kB  39.8MB  38.8MB  primary   ext4            boot
       2      40.9MB  17.2GB  17.1GB  extended
       5      40.9MB  16.5GB  16.5GB  logical   ext4
       6      16.6GB  17.2GB  628MB   logical   linux-swap(v1)

    Since the installed system is working, it shouldn't be a big problem, but I'm afraid that this will mean trouble in the future.


  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand new, clean Windows 7 install. Here are some system specifics:

    - Disk: Western Digital VelociRaptor WD6000HLHX
    - Motherboard: Gigabyte Z77X-UD3H
    - BIOS SATA mode set to AHCI [not RAID], with the disk connected to SATA0 [6Gb/s hi-speed SATA]
    - Windows 7 Enterprise SP1 x64

    The disk is recognized by the BIOS and correctly identified [name and size OK]. The disk is also recognized by Windows on a hardware level, but it won't show up in Explorer. Windows reports the device is working correctly. Windows Disk Manager shows the drive, but says it's uninitialized and has no partitions [which is incorrect]. If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified". [Which file???]

    Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it 2 partitions [MBR, not GPT] and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and replacing it back in the Windows XP machine shows the disk and all data are intact and functional. I'd like to have Windows 7 recognize the disk without having to lose the data and start over. Is this possible? If so, how would I do that? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!


  • Jetty - 401 Unauthorized when using basic authentication

    - by JP.
    I am running Solr on Jetty in Ubuntu (a Bitnami VM, if that helps) and am trying to lock down access to both the admin pages and the update/delete/etc. pages using basic authentication. When I attempt to connect to the admin console via a web browser I am prompted for a user name and password, but the username and password I use simply do not work. For test purposes I am using foo:bar as the credentials, but I receive a '401 Unauthorized' response. I see the following in my request log:

      127.0.0.1 - - [10/Nov/2013:05:35:46 +0000] "GET /solr/ HTTP/1.1" 401 1376

    Am I doing something wrong and/or is there anything obviously incorrect with the below configuration? Any help is greatly appreciated.

    jetty.xml:

      <Call name="addBean">
        <Arg>
          <New class="org.eclipse.jetty.security.HashLoginService">
            <Set name="name">solr</Set>
            <Set name="config"><SystemProperty name="jetty.home" default="."/>/etc/realm.properties</Set>
            <Set name="refreshInterval">5</Set>
          </New>
        </Arg>
      </Call>

    /etc/realm.properties:

      foo: bar, solr_admin

    webdefault.xml:

      <security-constraint>
        <web-resource-collection>
          <url-pattern>/</url-pattern>
        </web-resource-collection>
        <auth-constraint>
          <role-name>solr_admin</role-name>
        </auth-constraint>
      </security-constraint>
      <login-config>
        <auth-method>BASIC</auth-method>
        <realm-name>solr</realm-name>
      </login-config>
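
    Two things that can help narrow this down (both are suggestions, not confirmed fixes). First, take the browser out of the equation and test the same credentials directly against Jetty; the port below assumes the stock Solr/Jetty port:

      curl -v -u foo:bar http://localhost:8983/solr/

    Second, note that the realm path in jetty.xml is built from the jetty.home system property plus /etc/realm.properties. If Jetty is started with a different jetty.home (or from an unexpected working directory, since the default is "."), the HashLoginService may be reading a missing or empty realm file, which also yields a 401 for every login attempt; a warning about the realm file in Jetty's startup log is a cheap way to rule that out.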


  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found". But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK.

    EDIT: VPN seems to be the cause: It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc.) or the fully qualified Active Directory name (e.g. \\server.domainname.local), but not if I use the server name on its own or a mapped network drive originally created from the "short" name. Oddly though, my machine has no issue resolving the server's DNS name, as I can ping the machine name OK and it immediately comes back with the IP; however, nslookup fails. It seems to be a problem with how Windows looks up machine names when connected to VPNs. When I'm connected to a VPN, Windows seems to use the DNS associated with the VPN and not the one on the domain controller. This behavior seems incorrect to me, as surely that would mean connecting to any VPN would break the ability to look up local machine names for servers and printers etc. So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.


  • Certain banking pages not loading

    - by Joseph Lee
    For some unknown reason, I am suddenly unable to access my accounts at several banking and credit sites. I have been a registered user at each site for several years and know I am using the correct user ID and password. Yet, after entering the data, answering security questions, and clicking the submit button, I land on a page with an error message saying there is a technical problem preventing me from accessing my account. On one site, I end up at the sign-in page repeatedly. I am never told that my ID/password are incorrect. I believe this may be firewall related. Windows Firewall was damaged after a recent malware attack. I am now using a third-party firewall (Fort Knox). I am not seeing a pop-up indicating sites are blocked or asking me to indicate yes or no. I am using Windows 7 Home Premium. I get the same result regardless of the browser. I switched to Maxthon last night and am getting the same result. This is not happening at other sites, and I am able to access some banking sites normally. This is frustrating because I need to make payments and have gone paperless. Any feedback will be appreciated. ---- Joe ----


  • SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database

    - by eulerfx
    I am running SQL Server 2008 Express Edition on Windows Server 2008 with an ASP.NET application which must access the server. The ASP.NET application is associated with an application pool that runs under the NetworkService account. This account in turn has a Login and User record on SQL Server in the required database. When I attempt to run the ASP.NET website I get a blank page, and in the error log I see this information event record:

      Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Reason: Failed to open the explicitly specified database. [CLIENT: myLocalMachine]

    The connection string has Trusted_Connection=True; and the required database specified. When I explicitly specify the user name and password I get another login error stating the password is incorrect, even though the same username/password combination works through SQL Server Management Studio. The NETWORK SERVICE account seems to have all the required privileges for the database. Also, I made a test ASP.NET website project which does a simple select from a table in that database, and using the same config file I am not getting the error and it seems to work. Is it something to do with trust levels then, because the original ASP.NET web app references various DLLs including open source libraries? Also, the application does not seem to be able to write to the event log itself, throwing a security exception, even though everything in the config files, including machine.config, states the app is in full trust.
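
    Since the failure is specifically "Failed to open the explicitly specified database", it is worth re-checking that the login is mapped to a user inside that exact database and that the database name in the connection string matches. A T-SQL sketch - the database name here is a placeholder:

      -- Create the Windows login if it does not already exist
      CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
      GO
      USE MyAppDb;   -- substitute the database named in the connection string
      GO
      -- Map the login into the database and grant basic read/write
      CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
      EXEC sp_addrolemember N'db_datareader', N'NT AUTHORITY\NETWORK SERVICE';
      EXEC sp_addrolemember N'db_datawriter', N'NT AUTHORITY\NETWORK SERVICE';
      GO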

