Search Results

Search found 3695 results on 148 pages for 'failure'.


  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Win XP PCs can access the p/w-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a "Logon failure: unknown user name or bad password." message when I gave the correct credentials. So I followed the suggestions in Windows 7, connecting to Samba shares (also checked here but found LmCompatibilityLevel was already 1). This got me a little further. If I click on the Linux box's icon in 'Network' from this machine, I now see icons for the shared directories. But when I click on one of these, I get "\\LX\share is not accessible. You might not have permission..." etc. I tried making the Win 7 password the same as my Samba p/w (the user name was already the same). Same result. The Linux box does part of what I need for ecommerce (the in-house part); it's not accessible to the Internet. As my Linux Fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Win 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks. Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to "Send LM & NTLM - use NTLMv2 session security if negotiated" based on the advice in Windows 7, connecting to Samba shares and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values. I have now tried the following: Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted the machine. Still says "\\LX\share is not accessible. You might not have permission..." etc. Network security: Restrict NTLM: Add remote server exceptions for NTLM Authentication (added LX). Restarted the machine. Still says "\\LX\share is not accessible. You might not have permission..." etc. I can't see any other Network security settings that might affect this. Any other ideas please? Thanks Roy
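    A quick way to take Explorer out of the picture is to test the share and the NTLM setting from an elevated command prompt. This is only a sketch: the share name comes from the post, USERNAME is a placeholder for the Samba account, and the registry value shown is the one behind the "LAN Manager authentication level" policy (1 = "Send LM & NTLM - use NTLMv2 session security if negotiated").

        :: map the share explicitly with the Samba credentials (* prompts for the password)
        net use Z: \\LX\share /user:USERNAME *

        :: check, and if necessary set, the LM compatibility level directly
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel
        reg add   "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    If net use succeeds where Explorer fails, the problem is more likely stored credentials (Credential Manager) than NTLM negotiation.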

    Read the article

  • MBPro, mid 2010 can't see Dlink DIR655 signal after sleep etc

    - by user88114
    This is my son's MBP 7,1 running Snow Leopard 10.6.7. Router signal is fine since iPad, Wintel on same table 20 feet from router are fine. the MBP however frequently wakes and fails to find the internet. iStumbler can see 1 neighbours hub and my garden hub are there but can't get to the normal DIR655 wifi... no ping no en0 or en1 device seems to exist. Airport off and on does not help. He just resets router and it all works but this does not please me! I must admit the winter sometimes seems to loose connect too, but less so. The DIR655 (hardware rev A3) is on the original EU firmware 1.10, I'm cautious about jumping to latest 1.31EU since no downgrade seems to be possible and that feels a bit risky as so much is set up and working fine. If I use the DIR655 admin web and release the lease the MBP has then wake it all worked OK. So I suspect lease timing/locking issue but unsure how to check up, plus why iStumbler seems to say the network is not visible at all when I sit on the iPad right next to it just fine.. I do not think there are any channel overlaps and we also have RFquiet DECT phones (Orchid) that are silent until lifted or called. Anyway signals all show low interference and high throughput except for this failure to connect. Just walked the MBP to the garden office and iStumbler now sees the more distant DIR655 signal although it will not connect to it (does not show under Sys Prefs NetNetwork names) even after airport off & on... It also refuses to connect to my garden network (an old Belkin acting as AP wired to DIR655), the signal it can see and even net name in Sys Prefs NetNetwork names (2 mins later):NOW both names ARE visible, but both fail to accept the correct WPA2 password and keep asking again after failing to connect. IT ALL MAKES NO SENSE TO ME. Just revoked the lease for the MBP on DIR655 and no changes although this seemed to help MBP wake into connection 1 hour ago. OK a bit of walking about to report. Carried MBP across garden towards DIR655, a few other wifis show up on iStumbler, low signals all channel 1. Right next to DIR655 but iStumbler not showing it, although most other wifi's have gone. I'd say iStubler is suffering timeouts&hangs but hard to be sure. Lots of attempts to Airport on/off, join other etc and suddenly I get to connect, get given new IP (I revoked), can browse. Walk away, connection drops quite soon at 30 feet then reconnected briefly then died again. MUST ATTEND ELSEWHERE FOR A BIT...

    Read the article

  • Generating/managing config files for hosted application

    - by mfinni
    I asked a question about config management, and haven't seen a reply. It's possible my question was too vague, so let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application : how would you manage this? I'm leaning towards a Perl script to populate templates to generate shell scripts, config files, XML config files, etc. Looking briefly at CFengine and Chef, it seems like they're not going to reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool. Doesn't seem to be much of a gain over touching the config files directly. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that defines the instance (customer) name the TCP listener port for this instance (not one currently used) the DB2 database name (serial numeric identifier, already exists, they get prestaged for us by the DBAs) three sub-config files, by name - they need to be created from 3 templates and be named after the instance The sub-config files define: The filepath for the DB2 volumes The filepath for the storage of objects The filepath for just one of the DB2 volumes (yes, redundant to the first item. We run some application commands, start the instance We do some LDAP thingies (make an OU for the instance, etc.) We add a stanza to the config file for our security listener that acts as a passthrough to LDAP instance name LDAP OU TCP port for instance DB2 database name We restart the security listener (off-hours), change the main config file from item 1, stop and restart the instance. It is now authenticating via LDAP. We add the stop and start commands for this instance to the HA failover scripts. We import an XML config file into the instance that defines things for the actual application for the customer - user names, groups, permissions, and business rules. The XML is supplied by the implementation team. Now, we configure the dataloading application We add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes: the instance (customer) name the DB2 database name arbitrary number of sub-config files, by name Each of the sub-config files defines: filepaths to the directories for ingestion, feedback, backup, and failure those filepaths have a common path to a customer-specific folder, and then one folder for each sub-config file Each of those filepaths needs to be created We need to add this customer instance to our monitoring scripts that confirm the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc. There's also a reporting application that needs to be configured for the new instance. You get the idea. There's also XML that is loaded into WAS by the middleware team. We give them the values for them to plug into the XML - they could very easily hand us the template and we could give them back completed XML.
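    For the templating step itself, a plain shell/sed sketch may be all that's needed; the placeholder tokens, template name, and output paths below are invented for illustration and would have to match the real config layout.

        #!/bin/sh
        # Fill one sub-config template for a new instance (hypothetical layout).
        INSTANCE="$1"; PORT="$2"; DBNAME="$3"
        sed -e "s/@INSTANCE@/$INSTANCE/g" \
            -e "s/@PORT@/$PORT/g" \
            -e "s/@DBNAME@/$DBNAME/g" \
            templates/subconfig.tmpl > "conf/${INSTANCE}.conf"
        # Create the per-instance ingestion/feedback/backup/failure directories.
        for d in ingestion feedback backup failure; do
            mkdir -p "data/${INSTANCE}/${d}"
        done

    The same pattern (one template per generated file, one substitution per instance value) works equally well from Perl; the point is that the instance values are supplied once and fan out into every generated file.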

    Read the article

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact, and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, making the idea of just booting a virtual image attractive. Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool? Edit: Some more details.... The system was running Windows Me, but upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind. Licenses were probably tied to the disk serial number. There were two HDs, both IDE. The boot disk is about 6GB, and the spare data disk is 12GB, but nearly empty. I have a small bias in favor of VirtualPC just because it's free and I've used it successfully in the past. But this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely on that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible. Edit 2: Well, I'm closer than I was when I asked... so thanks to all for helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft Lantastic 8.0 (anyone else remember them?) that is still installed for networking with even older PCs that mostly don't exist any more is the culprit there. In my infinite free time, I will try to resolve the differences between a Safe Mode boot and a normal boot, and feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black and white a question as the one-accepted-answer convention assumes.

    Read the article

  • Hard freeze on new computer

    - by mphair
    OCZ Gold 3x2GB 240-Pin DDR3 SDRAM PC312800 Palit NE5T240SFHD01 GeForce GT240 1GB 128-bit GDDR5 ASUS P7P55D-E LGA 1156 P55 SATA 6Gb/s USB 3.0 ATX Intel Motherboard Intel Core i7-860 Lynnfield 2.8GHz 8MB L3 Cache LGA 1156 95W Quad-Core Processor SAMSUNG 22X Optical drive (DVD+-R/RW) CORSAIR CMPSU-620HX 620W ATX12V V2.2 Windows 7 Ultimate x64 Brand new system (got it from newegg two days ago) and it booted up and installed windows and ran for a day just fine. Yesterday, I boot it up in the afternoon and run various games at full graphics for most of the day. I turn on WoW and play for a few hours and it hard stalls. No numlock switching, no mouse feedback but nothing going wrong on the screen. No BSOD. I wait a bit to see if the stall is just a temporary one, but then force shutdown the computer. Upon reboot, everything seems fine, windows sees that it didn't shut down properly but I go into normal boot and restart WoW. I'm able to load it up and start running around when it freezes again. This time when I restart, it doesn't even get to BIOS. It starts (power goes on) and it just hangs with no output to the monitors. I shut it off and went to bed. This morning, I turned it on and went into BIOS setup. I'm not terribly experienced with messing with BIOS settings but I checked over them the best I could. Everything seems fine so I boot into windows and browse the internet for a bit looking for a solution, hard freeze within 10 minutes. I restart and go into BIOS and check the CPU temperature, 40c. I'm kinda stumped here. Some people say it might be a memory issue, but why would it take so long for it to come up? Could it have been slowly accessing one memory stick at a time and then it just got to a bad one and that's what is causing it to fail? It seems odd that I don't get a BSOD from a hardware failure. Having the screen just halt with no input or output change seems like a software thing to me. Any thoughts?

    Read the article

  • Hang while starting several daemons

    - by Adrian Lang
    I’m running a Debian Squeeze AMD64 server. Target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2 with several symptoms: the system hangs at trying to start rsyslogd; booting into runlevel 1 works, and login from the console then works; starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs; starting runlevel 2 with rsyslogd disabled works, but then logging in via console fails (I get the motd, and then nothing); starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even this. Trying to offer a public key seems to annoy the sshd enough to not talk to me any further. When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 hangs as well. And that’s just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue. Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1: rsyslogd 4.6.4 startup, compatibility mode 4, module path '' caller requested object 'net', not found (iRet -3003) Requested to load module 'lmnet' loading module '/user/lib/rsyslog/lmnet.so' module of type 2 being loaded conf.c requested ref for 'lmnet', refcount 1 rsylog runtime initialized, version 4.6.4, current users 1 syslogd.c requested ref for 'lmnet', refcount now 2 I can then kill rsyslogd with Ctrl+C. /var/log shows none of the configured log files, though. Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I’m now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script containing the single line /usr/sbin/apache2ctl configtest > /dev/null 2>&1 hangs, while the same line executed in an interactive shell works. I was not able to reduce this line any further, i.e. every single part (the stream redirections and the command itself) is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here: wait4(-1, ...) for the init scripts, and futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL) for the rsyslogd / apache2 binaries called by the init scripts. The system was installed as a Debian Lenny by my hoster in autumn 2011, I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I never tried to reboot the system before.
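    One way to get past that "shallow hint" is to trace the known-bad command with timestamps and follow its child processes; the strace options below are standard, only the output path is arbitrary.

        # run the hanging init script under strace and see exactly where it blocks
        strace -f -tt -o /tmp/apache2-stop.trace /etc/init.d/apache2 stop
        # once it hangs, inspect the last syscalls of each traced pid
        tail -n 30 /tmp/apache2-stop.trace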

    Read the article

  • Building an SSL server farm

    - by dan
    I'm interested in building the the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, that 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% https. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer 7 load balancer to be able to route. However for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using apache with mod-ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
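    As a rough illustration of one node in such a farm, here is a minimal SSL-terminating nginx server block that forwards decrypted traffic to an inner layer-7 balancer; the hostname, certificate paths, and the 10.0.0.10 balancer address are placeholders, not part of the setup described above.

        # one SSL-terminating worker; plain HTTP goes on to the layer-7 balancer
        server {
            listen 443 ssl;
            server_name www.example.com;
            ssl_certificate     /etc/nginx/ssl/example.crt;
            ssl_certificate_key /etc/nginx/ssl/example.key;
            location / {
                proxy_pass http://10.0.0.10:80;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }

    Because each node is stateless apart from its SSL session cache, scaling the farm horizontally is mostly a matter of adding identical nodes behind the layer-4 balancer.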

    Read the article

  • How to Reinstall MSSQL Server 2008 with SP1? (Windows 7)

    - by user23884
    I am using Windows 7 Ultimate x64. I had earlier installed SQL server 2008 with SP1 with Visual Studio 2008 Team System with sp1. Now that VS2010 is out I wanted to install it so I uninstalled visual studio then MSSLQ Server 2008 SP1 and then SQL Server 2008 as suggested here: h**p://mark.michaelis.net/Blog/SQLServer2008InstallNightmare.aspx But now when I try to reinstall it I am unable to get it right I am getting the ERROR: “Attempted to perform an unauthorized operation.” (Following is part of the log file): 2010-04-16 04:54:57 Slp: Sco: Attempting to replace account with sid in security descriptor D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be processed: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: ReplaceAccountWithSidInSddl -- SDDL to be returned: D:(A;CI;KR;;;S-1-5-21-2213424280-2581054173-1939225444-1027) 2010-04-16 04:54:57 Slp: Prompting user if they want to retry this action due to the following failure: 2010-04-16 04:54:57 Slp: ---------------------------------------- 2010-04-16 04:54:57 Slp: The following is an exception stack listing the exceptions in outermost to innermost order 2010-04-16 04:54:57 Slp: Inner exceptions are being indented 2010-04-16 04:54:57 Slp: 2010-04-16 04:54:57 Slp: Exception type: Microsoft.SqlServer.Configuration.Sco.ScoException 2010-04-16 04:54:57 Slp: Message: 2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation. 2010-04-16 04:54:57 Slp: Data: 2010-04-16 04:54:57 Slp: WatsonData = Microsoft SQL Server 2010-04-16 04:54:57 Slp: DisableRetry = true 2010-04-16 04:54:57 Slp: Inner exception type: System.UnauthorizedAccessException 2010-04-16 04:54:57 Slp: Message: 2010-04-16 04:54:57 Slp: Attempted to perform an unauthorized operation. 
2010-04-16 04:54:57 Slp: Stack: 2010-04-16 04:54:57 Slp: at System.Security.AccessControl.Win32.GetSecurityInfo(ResourceType resourceType, String name, SafeHandle handle, AccessControlSections accessControlSections, RawSecurityDescriptor& resultSd) 2010-04-16 04:54:57 Slp: at System.Security.AccessControl.NativeObjectSecurity.CreateInternal(ResourceType resourceType, Boolean isContainer, String name, SafeHandle handle, AccessControlSections includeSections, Boolean createByName, ExceptionFromErrorCode exceptionFromErrorCode, Object exceptionContext) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity..ctor(ResourceType resourceType, SafeRegistryHandle handle, AccessControlSections includeSections) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.SqlRegistrySecurity.Create(InternalRegistryKey key) 2010-04-16 04:54:57 Slp: at Microsoft.SqlServer.Configuration.Sco.InternalRegistryKey.SetSecurityDescriptor(String sddl, Boolean overwrite) 2010-04-16 04:54:57 Slp: ---------------------------------------- 2010-04-16 10:37:19 Slp: User has chosen to cancel this action 2010-04-16 10:37:19 Slp: Watson Bucket 2 Original Parameter Values 2010-04-16 10:37:19 Slp: Parameter 0 : SQL2008@RTM@ 2010-04-16 10:37:19 Slp: Parameter 2 : System.Security.AccessControl.Win32.GetSecurityInfo 2010-04-16 10:37:19 Slp: Parameter 3 : Microsoft.SqlServer.Configuration.Sco.ScoException@1211@1 2010-04-16 10:37:19 Slp: Parameter 4 : System.UnauthorizedAccessException@-2147024891 2010-04-16 10:37:19 Slp: Parameter 5 : SqlBrowserConfigAction_install_ConfigNonRC 2010-04-16 10:37:19 Slp: Parameter 7 : Microsoft SQL Server 2010-04-16 10:37:19 Slp: Parameter 8 : Microsoft SQL Server 2010-04-16 10:37:19 Slp: Final Parameter Values I have googled around for the error given error but all I could find is to regedit and reset permissions on certain reg keys but I don’t see any reg keys with access problem in the log file the log file can be download here: http://www.mediafire.com/?dznizytjznn. Please guys help me out here I am a developer and I cannot afford an OS reinstallation! Thanks in advance…

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using Intel Matrix Storage RAID solution that allowed me to use my 5 1TB drives for two RAID volumes. First one a 1TB RAID 0 striped across all 5 drives and second one a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for OS, applications and games while the RAID 5 I use as a more-permanent type storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware realtime backup application that can backup a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements: Low CPU and RAM usage. Realtime Backups - as soon as a file is modified in the source folder, it is added to the backup queue which will be processed with the lowest priority when the CPU is idle. This backup queue should persist in cases of computer restarts (ie: the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue). Incremental Backups - if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes. Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites). Ability to run as a service (no need for any user to log-in to have the app started). Optional requirements: Compression of the destination into a well-known format (RAR, Zip) that can be directly read without the use of the application. Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc). The idea is to use RAID 0 array as "semi-persistent RAM-like" storage which in case of a failure can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves, favorites from the RAID 5. I'm also thinking of taking this RAID 0 as RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.

    Read the article

  • SSL connection error during handshake on Windows Server 2008 R2

    - by Thomas
    I have a Windows 2008 R2 Server that runs a HTTPS Tunneling service. The software uses a certificate that is provided via the Windows certificate store. The certificate is located in the local computer private certificates. It supports server and client authentication with signing and keyencipherment. Cert chain The certificate chain looks fine. It's a Thawte SSL123 certificate. Thawte Premium Server CA (SHA1) [?e0 ab 05 94 20 72 54 93 05 60 62 02 36 70 f7 cd 2e fc 66 66] thawte Primary Root CA [?1f a4 90 d1 d4 95 79 42 cd 23 54 5f 6e 82 3d 00 00 79 6e a2] Thawte DV SSL CA [3c a9 58 f3 e7 d6 83 7e 1c 1a cf 8b 0f 6a 2e 6d 48 7d 67 62] Server certificate Issues Most browsers accept the certificate without any warning. But IE 7 on Windows XP SP3 and Opera 12 on OSX just report an connection error. Opera complains: Secure connection: fatal error (552) https://www.example.com/ Opera was not able to connect to the server, because the server does not communicate via any secure protocol known to Opera. A connection test using openssl s_client -connect www.example.com:443 -state says: CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A 52471:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-35.1/src/ssl/s23_lib.c:182: ssldump -aAHd host www.example.com during curl https://www.example.com/ reports: New TCP connection #1: localhost(53302) <-> www.example.com(443) 1 1 0.0235 (0.0235) C>SV3.1(117) Handshake ClientHello Version 3.1 random[32]= 50 77 56 29 e8 23 82 3b 7f e0 ae 2d c1 31 cb ac 38 01 31 85 4f 91 39 c1 04 32 a6 68 25 cd a0 c1 cipher suites Unknown value 0x39 Unknown value 0x38 Unknown value 0x35 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA Unknown value 0x33 Unknown value 0x32 Unknown value 0x2f Unknown value 0x9a Unknown value 0x99 Unknown value 0x96 TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_RC4_128_MD5 TLS_DHE_RSA_WITH_DES_CBC_SHA TLS_DHE_DSS_WITH_DES_CBC_SHA TLS_RSA_WITH_DES_CBC_SHA TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_DES40_CBC_SHA TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5 TLS_RSA_EXPORT_WITH_RC4_40_MD5 Unknown value 0xff compression methods unknown value NULL 1 0.0479 (0.0243) S>C TCP FIN 1 0.0481 (0.0002) C>S TCP FIN Thawte provides two Java based SSL Checkers. The Legacy Thawte SSL Certificate Installation Checker and the sslToolBox. Both validate the certificate under Windows XP but report connection errors under OSX and Windows 2008 R2.
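    Since the ssldump capture shows the server closing the connection right after the ClientHello, it may help to probe which protocol versions and legacy ciphers the server will actually complete a handshake for; these are standard openssl s_client options, and the hostname is the placeholder used in the post.

        # force a specific protocol version or an older cipher and compare results
        openssl s_client -connect www.example.com:443 -tls1
        openssl s_client -connect www.example.com:443 -ssl3
        openssl s_client -connect www.example.com:443 -cipher RC4-SHA

    If only some of these complete, the failing clients (IE 7 on XP, older Opera) are probably offering a protocol/cipher combination that Schannel on the server has disabled.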

    Read the article

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked out the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up? EDIT: The client system is using the command rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv The server is using the command rsync.exe --daemon --no-detach --config "rsyncd.conf" The contents of rsyncd.conf: use chroot = false strict modes = false hosts allow = 192.168.100.9 log file = c:/rsyncd.log uid=0 gid=0 [Drives] path = /cygdrive read only = yes EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB. EDIT: Stranger... It looks like the process is reliably erroring out at about 175,000 files. Rsync runs fine when I pick the directories it has problems with and run them one at a time. EDIT: rsync version 3.0.9 protocol version 30 Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints, no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append, ACLs, xattrs, iconv, symtimes A similar failure occurred when going from the same set of files with Cygwin to a Linux install. It didn't happen until several hours later than normal, however.
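    As a stopgap while the root cause is unknown, the same transfer can be wrapped in a retry loop with an I/O timeout, so a silent stall aborts and resumes instead of sitting idle forever. This is only a sketch using the paths from the post; --timeout and --partial are standard rsync options.

        # abort after 120s without I/O, keep partial files, and retry automatically
        until rsync --archive --partial --timeout=120 -P \
              server::Drives/f/Repo/ /cygdrive/T/Repo; do
            echo "rsync stalled or failed, retrying in 60 seconds" >&2
            sleep 60
        done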

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor) Here's the setup, a brand new dedicated server (8GB RAM, some 140+ GB disk, Raid 1 via HW controller, 15000 RPM) it's a production web server (with MySQL in it, too, not just serving web requests); not a personal desktop computer or similar. Ubuntu Server 64bit 10.04 LTS We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to backup our data to Amazon's S3. The main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database' and other files, and uploading to amazon via S3 API commands, or to an EC2 instance, or something. do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? would then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of a XFS system that is already an EBS volume. My intention is to do it in a "real"/metal dedicated server. Update: I just saw this on Wikipedia: XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager. I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it: With XFS: 1) do xfs_freeze; 2) copy the frozen files via, eg, rsync; 3) unfreeze xfs With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow backup the LVM snapshot. Thanks a lot in advance, Let me know if I need to clarify something.
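    A minimal sketch of approach 1 follows, assuming the data lives on an XFS filesystem mounted at /data and the copy target is on a different filesystem (both paths are placeholders); it simply mirrors the freeze/copy/unfreeze sequence described above, with lvcreate -s as the LVM variant.

        xfs_freeze -f /data                        # flush and suspend writes
        rsync -aH --delete /data/ /mnt/backup/     # or: lvcreate -s ... if /data sits on LVM
        xfs_freeze -u /data                        # thaw as soon as the copy (or snapshot) is taken
        # the copy under /mnt/backup can then be tarred and uploaded to S3, e.g. with s3cmd

    Note that writes to /data block for as long as the filesystem stays frozen, which is why the LVM-snapshot variant (frozen only for the instant of lvcreate) is usually preferred on a busy production box.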

    Read the article

  • Clear OS always showing "Operation too slow. Less than 1 bytes/sec"

    - by Blue Gene
    Have been trying to install clear os addon but nothing is working as i am facing this error on every mirror in the .repo file. Yum install squid http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'O*peration too slow. Less than 1 bytes/sec transfered the last 30 seconds'*) Trying other mirror. mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds') Trying other mirror. Error: failure: repodata/primary.sqlite.bz2 from clearos-core: [Errno 256] No more mirrors to try. How can i fix this.i am able to access repo through web,and it seems nothing wrong with the repo.Where can be the problem. Tried yum clean all but it also didnt help. Is there a way to fix it as i am not able to install any package in it.
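    To tell whether this is a yum problem or a raw-network problem, it may be worth pulling the same file outside of yum first; the commands below are standard and use one of the mirrors from the error output.

        # fetch the exact file yum is timing out on and watch the transfer rate
        curl -v -o /dev/null http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2
        # basic reachability check for the same mirror
        ping -c 3 mirror2-dallas.clearsdn.com

    If the direct download runs at normal speed, the slowdown is specific to yum (proxy settings, IPv6 fallback, or its low-speed timeout) rather than to the mirrors themselves.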

    Read the article

  • Add user in CentOS 5

    - by Ron
    I created a new user in my CentOS web server with useradd. Added a password with passwd. But I can't log in with the user via SSH. I keep getting 'access denied'. I checked to make sure that the password was assigned and that the account is active. /var/log/secure shows the following error: Aug 13 03:41:40 server1 su: pam_unix(su:auth): authentication failure; logname= uid=500 euid=0 tty=pts/0 ruser=rwade rhost= user=root Please help, Thanks Thanks for the responses so far: I should add that it is a VPS on a remote computer, fresh out of the box. I can log in as the root user quite fine. I can also su to the new user, but I cannot log in as the new user. Here is my sshd_config file: # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. #Port 22 #Protocol 2,1 Protocol 2 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh/ssh_host_rsa_key #HostKey /etc/ssh/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 768 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH SyslogFacility AUTHPRIV #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes #StrictModes yes #MaxAuthTries 6 #RSAAuthentication yes #PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no PasswordAuthentication yes # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes ChallengeResponseAuthentication no # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no GSSAPIAuthentication yes #GSSAPICleanupCredentials yes GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication mechanism. # Depending on your PAM configuration, this may bypass the setting of # PasswordAuthentication, PermitEmptyPasswords, and # "PermitRootLogin without-password". 
If you just want the PAM account and # session checks to run without PAM authentication, then enable this but set # ChallengeResponseAuthentication=no #UsePAM no UsePAM yes # Accept locale-related environment variables AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT AcceptEnv LC_IDENTIFICATION LC_ALL #AllowTcpForwarding yes #GatewayPorts no #X11Forwarding no X11Forwarding yes #X11DisplayOffset 10 #X11UseLocalhost yes #PrintMotd yes #PrintLastLog yes #TCPKeepAlive yes #UseLogin no #UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #ShowPatchLevel no #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner /some/path # override default of no subsystems Subsystem sftp /usr/libexec/openssh/sftp-server
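    Note that the quoted /var/log/secure line comes from su, not sshd, so it may not describe the SSH failure at all. A few quick checks on the server, using the user name that appears in that log line, can narrow things down; these are standard commands.

        grep '^rwade:' /etc/passwd      # account exists; check the shell field is not /sbin/nologin
        passwd -S rwade                 # "PS" = usable password, "LK" = locked
        tail -f /var/log/secure         # retry the SSH login and look for the sshd entry it produces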

    Read the article

  • SCCM SP2 - OOB Management Certificates Problems

    - by Achinoam
    Hi experts, I have a vPro client computer with AMT 4.0. It was importeed successfully via the Import OOB Computers wizard, and after sending a "Hello- packet" it became provisioned. (The SCCM GUI displays AMT Status: Provisioned). But when I try to perform power operations on this machine, they always fail with the following lines in the log: AMT Operation Worker: Wakes up to process instruction files 7/29/2009 10:59:29 AM 2176 (0x0880) AMT Operation Worker: Wait 20 seconds... 7/29/2009 10:59:29 AM 2176 (0x0880) Auto-worker Thread Pool: Work thread 3884 started 7/29/2009 10:59:29 AM 3884 (0x0F2C) session params : https:/ / amt4.domaindemo.com:16993 , 11001 7/29/2009 10:59:29 AM 3884 (0x0F2C) ERROR: Invoke(invoke) failed: 80020009argNum = 0 7/29/2009 10:59:31 AM 3884 (0x0F2C) Description: A security error occurred 7/29/2009 10:59:31 AM 3884 (0x0F2C) Error: Failed to Invoke CIM_BootConfigSetting::ChangeBootOrder_INPUT action. 7/29/2009 10:59:31 AM 3884 (0x0F2C) AMT Operation Worker: AMT machine amt4.domaindemo.com can't be waken up. Error code: 0x80072F8F 7/29/2009 10:59:31 AM 3884 (0x0F2C) Auto-worker Thread Pool: Warning, Failed to run task this time. Will retry(1) it 7/29/2009 10:59:31 AM 3884 (0x0F2C) After investigation, I've seen that the problem occurs already on the 2nd stage of the provisioning: Start 2nd stage provision on AMT device amt4.domaindemo.com. 8/2/2009 4:55:12 PM 2944 (0x0B80) session params : https: / / amt4.domaindemo.com:16993 , 11001 8/2/2009 4:55:12 PM 2944 (0x0B80) Delete existing ACLs... 8/2/2009 4:55:12 PM 2944 (0x0B80) ERROR: Invoke(invoke) failed: 80020009argNum = 0 8/2/2009 4:55:14 PM 2944 (0x0B80) Description: A security error occurred 8/2/2009 4:55:14 PM 2944 (0x0B80) Error: Cannot Enumerate User Acl Entries. 8/2/2009 4:55:14 PM 2944 (0x0B80) Error: CSMSAMTProvTask::StartProvision Fail to call AMTWSManUtilities::DeleteACLs 8/2/2009 4:55:14 PM 2944 (0x0B80) Error: Can not finish WSMAN call with target device. 1. Check if there is a winhttp proxy to block connection. 2. Service point is trying to establish connection with wireless IP address of AMT firmware but wireless management has NOT enabled yet. AMT firmware doesn't support provision through wireless connection. 3. For greater than 3.x AMT, there is a known issue in AMT firmware that WSMAN will fail with FQDN longer than 44 bytes. (MachineId = 17) 8/2/2009 4:55:14 PM 2944 (0x0B80) STATMSG: ID=7208 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_AMT_OPERATION_MANAGER" SYS=JE-DEV-MS0 SITE=JR1 PID=1756 TID=2944 GMTDATE=Sun Aug 02 14:55:14.281 2009 ISTR0="amt4.domaindemo.com" ISTR1="amt4.domaindemo.com" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0 8/2/2009 4:55:14 PM 2944 (0x0B80) This error is consistent with all the other 2nd stage provisioning tasks. (Add ACLs, Enable Web UI, etc.) I've opened the certification authority, and I see that the certificates were issued to the SCCM Site server instead of the AMT client! What could be the reason for this failure? What is the problematic definition for the certificate? Thank you in advance!!!

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.
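    As a starting point, the recovery can be scripted with the DPM Management Shell cmdlets. This is only a hedged sketch built from the values listed above: the Where-Object filters are assumptions about how the protection group and datasource are named, and the exact parameter sets of New-SearchOption and New-RecoveryOption should be verified against the DPM 2010 documentation before use.

        # Sketch: restore one file to its original location (verify cmdlet parameters first)
        $pg   = Get-ProtectionGroup -DPMServerName "BACKUP01" |
                    Where-Object { $_.FriendlyName -eq "Document Repository" }
        $ds   = Get-Datasource -ProtectionGroup $pg |
                    Where-Object { $_.Name -like "*FILER01*" }   # assumption: datasource named after server/volume
        $so   = New-SearchOption -FromRecoveryPoint (Get-Date "8/1/2010") -ToRecoveryPoint (Get-Date "8/2/2010") `
                    -SearchDetail FilesFolders -SearchType Contains -SearchString "filename.pdf" `
                    -Location "R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24"
        $item = Get-RecoverableItem -Datasource $ds -SearchOption $so
        $opt  = New-RecoveryOption -TargetServer "FILER01" -RecoveryLocation OriginalServer `
                    -FileSystem -RecoveryType Recover -OverwriteType Overwrite
        Recover-RecoverableItem -RecoverableItem $item -RecoveryOption $opt

    Wrapping the middle section in a foreach over the rows of a CSV (file name, folder, recovery date) would cover the batch case; querying the SQL server instead of reading a CSV only changes where those three values come from.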

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need a software (Windows 7 32bit) to help me in this process: I have my documents, music, video clips, movies, pictures in my hard disk. These will not be scattered around the system. But will be inside C:\Senthil\ At the end of every week, I want to plug in an external hard disk and run a software that should make sure whatever is inside C:\Senthil\ is also present in the external disk. Files deleted from C:\Senthil\ should be deleted there, and new files should be copied etc... at the end of the process, every bit inside the source folder in my internal disk should be inside my external disk. A couple of important requirements and points: I do NOT need multiple versions or historic versions. I don't need the previous versions of my files. I only want the latest copy to be present in my "backup". Incremental backup makes sense. If files were not touched since the last backup, it need not copy. The size of my folder will run into GBs and in a year or two will go into TBs. But I will make sure the size of the external HDD is equal to or bigger than my source folder. I do not want it to run automatically because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up and I should be able to pick something from the backup and restore it to the source folder in the above situation. Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is, I am a software developer and I can write this software myself. But I am a little constrained by time at the moment and I want to know if there is an existing program that can do this. Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument because: I will have bigger things to worry about than my holiday memories. I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have. Not to protect them against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete or teens who hit Shift + Delete or myself getting a little careless at times! In short: Is there a file/folder syncing software that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
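    For what it's worth, the manual weekly mirror described here is close to what robocopy (already included in Windows 7) does with /MIR. This is only a sketch: the E: drive letter and log path are placeholders, and /MIR will of course also delete from the backup anything deleted from the source, exactly as described above.

        :: mirror C:\Senthil to the external disk, retrying briefly on locked files
        robocopy C:\Senthil E:\Senthil /MIR /FFT /R:2 /W:5 /LOG:C:\Senthil-backup.log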

    Read the article

  • Mysql innoDB corruption after server crash

    - by Ward Loockx
    Yesterday my server died because of an outage in the data center. Today it's back up, but I'm having some problems with MySQL. First of all, my MySQL server was not able to start. For this reason I deleted the files ib_logfile0 and ib_logfile1 in the /var/lib/mysql folder (I still have the old failing files). After this my server was able to start up again. But now I see a lot of issues in the MySQL log file. Sep 1 09:43:55 * mysqld: 120901 9:43:55 InnoDB: Error: page 70944 log sequence number 8 1483471899 Sep 1 09:43:55 * mysqld: InnoDB: is in the future! Current system log sequence number 5 612394935. Sep 1 09:43:55 * mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB Sep 1 09:43:55 * mysqld: InnoDB: tablespace but not the InnoDB log files. See Sep 1 09:43:55 * mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html When I checked the docs on mysql.com, I found that I need to recover my database from backups. I have a backup, but I'm not sure of the best way to import it. Or is there a way to recover without having to re-import the database again? So if I'm correct, I need to set innodb_force_recovery to 4 in MySQL, delete all current data, and re-import? Is there a way to do this without having downtime? I also have one slave running. This slave has the current status now: Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. How can I totally reset the slave after the new import on the master has happened? Hopefully we can find a solution without too much downtime. Thanks!
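    A hedged sketch of the dump-and-rebuild path follows; the paths assume a stock Debian/Ubuntu MySQL layout, and a dump/re-import of this kind cannot be done with zero downtime on the master, only minimised (the slave can keep serving reads in the meantime).

        # 1) add "innodb_force_recovery = 4" under [mysqld] in my.cnf, restart mysqld, then dump:
        mysqldump --all-databases --routines --events > /root/all-databases.sql
        # 2) stop mysqld, remove the innodb_force_recovery line, move the old InnoDB files aside:
        mkdir -p /root/innodb-old
        mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile* /root/innodb-old/
        # 3) start mysqld (it recreates fresh InnoDB files) and re-import:
        mysql < /root/all-databases.sql
        # 4) rebuild the slave from a fresh dump instead of repairing the relay log:
        mysqldump --all-databases --master-data=1 --single-transaction > /root/for-slave.sql
        # on the slave: STOP SLAVE; RESET SLAVE; import the dump; the CHANGE MASTER coordinates
        # embedded by --master-data then let you START SLAVE cleanly.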

    Read the article

  • Error sending email to alias with Postfix

    - by Burning the Codeigniter
    I'm on Ubuntu 11.04, 64-bit. I'm trying to set up Postfix on my VPS, which has been configured, but when I send an email to an alias, e.g. [email protected], it should then be delivered to [email protected]. When I sent the email from my GMail account, I got this bounce back:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 #5.1.0 Address rejected [email protected] (state 14).

        ----- Original message -----
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
            h=mime-version:date:message-id:subject:from:to:content-type;
            bh=R1WtjVRWywfkWCR2g4QKbSjAfUaU9DAAMKbg9UAWqvs=;
            b=FiSfdhEaV4pEq/76ENlH4tvOgm35Ow3ulRg06kDYrIQTaDf3eOEgfSEgH25PjZuAj/
            7Hg1CL++o6Rt/tl80ZiR2AWekhA0zIn2JkqE7KssMG7WbBmMmbf8V9KDo2jOw+mZv+C/
            KDKsQ65AudBZ/NYLDDpTT7MkKf8DzqeGCKj9MAct6sHDoC0wCciXYxNfTf+MKxrZvRHQ
            oICTkH5LOugKW9wEjPF2AoO8X0qgYmTLYeSUtXxu46VeNKRBGmdRkkpPOoJlQN9ank7i
            SW6kU6M9bk2bYOgKwV/YPsaantmYlu1XdmYx+kWeJkNJAyYOfXfZZ8WUJhbbFFD9bZCi
            m/hw==
        MIME-Version: 1.0
        Received: by 10.101.3.5 with SMTP id f5mr783908ani.86.1334247306547; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Received: by 10.236.73.136 with HTTP; Thu, 12 Apr 2012 09:15:06 -0700 (PDT)
        Date: Thu, 12 Apr 2012 17:15:06 +0100
        Message-ID: <CAN+9S2aB=xjiDxVZx3qYZoBMFD4XuadUyR_3OYWaxw1ecrZmOQ@mail.gmail.com>
        Subject: Test Email
        From: My Name <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=001636c597eabfd21504bd7da8fd

    I don't understand why it isn't working; my aliases are set up correctly, and I see no error messages in /var/log/mail.log or any other mail log, which makes this harder to debug. This is my Postfix configuration (postconf -n):

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = $mydomain, $myhostname, localhost, localhost.localdomain, localhost
        mydomain = domain.com
        myhostname = localhost
        mynetworks = 192.168.1.0/24 127.0.0.0/8
        readme_directory = no
        recipient_delimiter = +
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Does anyone know how to solve this specific issue?
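
    For reference, the usual way this kind of local alias is wired up (the names below are placeholders standing in for the redacted addresses above, not a diagnosis of this particular 550) is an entry in /etc/aliases followed by rebuilding the alias database and reloading Postfix:

        # /etc/aliases: map a local alias to the destination mailbox (placeholder names)
        alias1: realuser@externaldomain.com

        newaliases        # rebuilds /etc/aliases.db from /etc/aliases
        postfix reload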

    Read the article

  • Trouble creating FTP in Server 2008

    - by Saariko
    I have been trying to create an FTP server on my new Server 2008 machine, following both of these (very detailed and frequently recommended) guides: http://learn.iis.net/page.aspx/321/configure-ftp-with-iis-7-manager-authentication/ for setting it up with IIS Manager authentication, and http://www.trainsignaltraining.com/windows-server-2008-ftp-iis7 for anonymous FTP.

    I am able to log in as an anonymous user. My need is to use a named user, so I need to use IIS Manager authentication, and I get error 530 when trying to log in as a user:

        Connected to 127.0.0.1.
        220 Microsoft FTP Service
        User (127.0.0.1:(none)): ftpmanager
        331 Password required for ftpmanager.
        Password:
        530-User cannot log in. Win32 error: Logon failure: unknown user name or bad password.
        Error details: Filename: Error: 530 End
        Login failed.
        ftp>

    I can't learn anything from this message. My password is set to 1234 (so I don't think I'm mistyping it; this is for testing purposes only, of course). Thank you.

    Note: I went over other posts on SE but couldn't get a result from them: "IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in", "FTP Error 530, User cannot log in, home directory inaccessible", "Having trouble setting up FTP server on Windows Server 2008".

    EDIT: I think I found some errors with the physical path. Going to Basic Settings and running Test Connection on the physical path gave me the following error:

        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    I am not sure which user or group should get access to the root folder. I want to point out that I managed to log in with a domain user (after changing the authorization and authentication methods), but that is NOT the requested solution. I checked to make sure that the FTP site, folders and access are working properly. I am a bit lost here.

    More tries: I have enabled another Allow rule for All Users. I still get the same error. It seems that it doesn't matter whether I use a correct or wrong password, I still get error 530.
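
    As an illustration of the permissions check the Test Connection error is pointing at (the path and accounts below are assumptions, not values from the question), read access on the FTP root for the built-in IIS group and the pass-through identity can be granted with icacls:

        icacls "C:\inetpub\ftproot" /grant "IIS_IUSRS:(OI)(CI)R"
        icacls "C:\inetpub\ftproot" /grant "NT AUTHORITY\Network Service:(OI)(CI)R"

    The (OI)(CI) flags make the Read grant inherit to files and subfolders under the FTP root.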

    Read the article

  • weird SSH connection timed out

    - by bran
    This problem started when I tried to log in to my brand new VPS server. On my first SSH attempt I actually got prompted for the password several times, which suggests there is no port blocking from my ISP. The password didn't work for me (for some reason), so I had a lot of authentication failures. After that, attempts to log in to the server just timed out. I did the same at Media Temple (where SFTP used to work for me), put in a wrong password, and now trying to SSH (or even SFTP) there gives me a timeout as well.

    So some kind of security feature is preventing me from trying too many times to log in, either on my side or on the server side. Any idea what it could be? Traceroute and ping to the IPs work. I am using a Zyxel WiMAX modem (MAX-206M1R, if that's relevant).

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.136 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 109.169.7.131 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected]
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe [email protected] -vv
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 87.117.249.227 [87.117.249.227] port 22.
        debug1: connect to address 87.117.249.227 port 22: Connection timed out
        ssh: connect to host 87.117.249.227 port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        Could not create directory '/home/pavs/.ssh'.
        The authenticity of host 's122797.gridserver.com (205.186.175.110)' can't be established.
        RSA key fingerprint is 33:24:1e:38:bc:fd:75:02:81:d8:39:42:16:f6:f6:ff.
        Are you sure you want to continue connecting (yes/no)? yes
        Failed to add the host to the list of known hosts (/home/pavs/.ssh/known_hosts).
        Password:
        Password:
        Password:
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Received disconnect from 205.186.175.110: 2: Too many authentication failures for pavs

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out

        c:\Program Files (x86)\OpenSSH\bin>ssh.exe s122797.gridserver.com
        ssh: connect to host s122797.gridserver.com port 22: Connection timed out
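
    One way to test gently without tripping server-side lockouts again (which is only a guess at the cause here) is to limit the client to a single password attempt and put a short timeout on the connection; the host below is taken from the transcript and the options are standard OpenSSH client flags:

        ssh -o NumberOfPasswordPrompts=1 -o ConnectTimeout=10 -o PreferredAuthentications=password [email protected]

    If this times out from one network but connects from another (e.g. a phone tether), the block is most likely on the server or hosting firewall rather than on the client side.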

    Read the article

  • Why won't IE let users login to a website unless in In Private mode?

    - by Richard Fawcett
    I'm not entirely sure this belongs on SuperUser.com; I also considered ServerFault.com and StackOverflow.com, but on balance I think it belongs here.

    We host a website which has the same code responding to multiple domain names. On 28th December (without any changes deployed to the website) a percentage of users suddenly could not log in, and the blank login page was just rendered again even when the correct credentials were entered. The issue is still ongoing. After remote controlling an affected user's PC, we've found the following:

    - The issue affects Internet Explorer 9.
    - The user can log in from the same machine in Chrome.
    - The user can log in from an In Private browser session using IE9.
    - The user can log in if the website is added to the Trusted Sites security zone.
    - The user can NOT log in from an IE session in safe mode (started with iexplore -extoff).
    - Only one hostname that the website responds to prevents login; the same user account on the other hostname works fine (note that this is identical code and database running server side), even though that site is not in the Trusted Sites zone.

    Series of HTTP requests in the failure case:

    1. GET request to protected page, returns a 302 FOUND response to the login page.
    2. GET request to login page.
    3. POST to login page, containing credentials, returns redirect to protected page.
    4. GET request to protected page... for some reason auth fails and the browser is redirected to the login page, as in step 1.

    Other information: the operating system is Windows 7 Ultimate Edition and the AV system is AVG Internet Security 2012. I can think of lots of things that could be going wrong, but in every case one of the findings above is incompatible with the theory. Any ideas what is causing login to fail?

    Update 06-Jan-2012: Enhanced logging has shown that the .ASPXAUTH cookie is being set in step 3. Its expiry date is 28 days in the future, its path is /, the domain is mysite.com, and its value is an encrypted forms ticket, as expected. However, the cookie is not being received by the web server during step 4. Other cookies are being presented to the server during step 4; it's just this one that is missing. I've seen that cookies are usually set with a domain starting with a period, but mine isn't. Should it be .mysite.com instead of mysite.com? However, if this was wrong, it would presumably affect all users?
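
    For what it's worth, the cookie domain the update asks about is controlled by the forms element in web.config; a minimal sketch (the attribute values below are assumptions for illustration, not taken from the site's actual configuration) looks like:

        <system.web>
          <authentication mode="Forms">
            <!-- domain controls the Domain attribute of the .ASPXAUTH cookie; 40320 minutes = 28 days -->
            <forms name=".ASPXAUTH" loginUrl="~/Login.aspx" domain=".mysite.com" timeout="40320" />
          </authentication>
        </system.web>

    If the domain attribute were the culprit, though, it would presumably affect every user rather than a subset, as the question already notes.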

    Read the article

  • Synchronize the same set of files to 2 different locations with 2 different programs for 2 different purposes

    - by Hedgetrimmer
    Because of stupid questionable IT policies at my not-to-be-named place of occupation, I have been (and will be, for the foreseeable future) carrying on an external hard drive a unison-synchronized copy of all of my documents and code, including code that resides in some of my "dotfiles" and other code that resides in ~/bin (things I've made are there because ~/bin is in my $PATH), along with some cruft generated (and to be generated) by conscript and its related "giter8" templating system for Scala project boilerplates. Despite this, I use a symlinking program to store all of my important dotfiles in a subdirectory. Thanks to that somewhat complicated setup, I have resorted to making a directory full of symlinks to every directory (or file, as is the case with stuff under ~/bin) that I want synchronized, and then "follow = True" is in my unison profile.

    It happens that this collection of odds and ends, plus an automatically-generated text file containing every package installed on my system, is everything under ~ that needs to be backed up to a remote (rsync-over-ssh) host with client-side encryption and signing from GPG. I already believe that duplicity is the most appropriate program to do that. What isn't as clear-cut is how to make duplicity use the exact same set of files when it runs a backup; it would be simple if duplicity would follow symlinks, but it does not, and the manpage lists no option for enabling any such behaviour. Comparing unison's file selection algorithm to duplicity's, I don't think I can write a program that could compute a ruleset for one program given one for the other. For the record, I would rather not keep the symlinks manually synchronized with duplicity file-selection rules, as they can change thanks to the above-mentioned complications regarding ~/bin.

    I don't think running duplicity on the external hard disk is such a good idea either; I usually keep that hard disk unmounted and unplugged in case of a power failure or other physical problem with the computer, and I'm also not sure about duplicity's performance given that:

    - the hard disk is NTFS-formatted in order to be usable at my Windows-imprisoned place of occupation;
    - despite being a USB 3.0 disk, my computer has no USB 3.0 ports, so it acts as a USB 2.0 disk.

    How can I have duplicity (or is there a better program that I have overlooked?) back up the exact same set of files that is bidirectionally synchronized with my external hard disk?
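
    One common workaround when a backup tool won't follow symlinks is to dereference the symlink farm into a staging tree first and back that up; a minimal sketch under that assumption (the paths, remote URL and GPG key ID below are placeholders, not taken from the question) would be:

        # materialize the symlink directory into a staging tree, following links
        rsync -a --copy-links --delete ~/sync-links/ ~/.backup-staging/

        # encrypted, signed, incremental backup of the staging tree to the remote host
        duplicity --encrypt-key ABCD1234 --sign-key ABCD1234 ~/.backup-staging sftp://user@backuphost//backups/home

    The staging copy costs extra disk space and a copy pass, but it keeps unison's symlink-following view and duplicity's file set identical without maintaining two selection rulesets.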

    Read the article
