Search Results

Search found 7019 results on 281 pages for 'adaptive systems'.


  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    - by dgerman
    Today (6/19/12), attempting to ssh to the same host as usual, ssh replied:

        ssh_exchange_identification: Connection closed by remote host

    Two additional attempts failed:

        ssh -v $RWS
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    ping to the host was successful, ftp to the host was successful, and ssh is now successful:

        ssh -v $RWS
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to Real-World-Systems.com [174.127.119.33] port 22.
        debug1: Connection established.
        debug1: identity file /Users/dgerman/.ssh/id_rsa type 1
        debug1: identity file /Users/dgerman/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa type -1
        debug1: identity file /Users/dgerman/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'real-world-systems.com' is known and matches the RSA host key.
        debug1: Found key in /Users/dgerman/.ssh/known_hosts:5
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/dgerman/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /Users/dgerman/.ssh/id_dsa
        debug1: Next authentication method: password

    What gives?

    Mac OS X 10.4.7, OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011:

        /Users/dgerman/.ssh > ls -la
        total 24
        drwx------    7 dgerman staff   238 Jun 19 15:46 .
        drwxr-xr-x  389 dgerman staff 13226 Jun 19 15:46 ..
        -rw-------    1 dgerman staff  1766 Feb 26 18:25 id_rsa
        -rw-r--r--    1 dgerman staff   400 Feb 26 18:25 id_rsa.pub
        -rw-r--r--    1 dgerman staff    67 Feb 26 18:27 keyfingerprint
        -rw-r--r--    1 dgerman staff  6215 May  1 08:11 known_hosts
        -rw-r--r--    1 dgerman staff   220 Feb 26 18:26 randomart
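    An intermittent failure this early, before any key exchange, usually means sshd itself closed the TCP connection: common culprits are tcp_wrappers rules in /etc/hosts.deny, the sshd MaxStartups connection-rate limit, or a fail2ban-style blocker on the server. A hedged sketch of server-side checks, assuming shell access to the host (log paths vary by distribution):

        # Is tcp_wrappers denying this client?
        grep -i sshd /etc/hosts.allow /etc/hosts.deny

        # Too many concurrent unauthenticated connections makes sshd
        # drop new ones at random
        grep -i maxstartups /etc/ssh/sshd_config

        # Watch the server-side log while reproducing the failure
        tail -f /var/log/auth.log   # /var/log/secure on Red Hat systems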

  • Is there still a place for tape storage?

    - by Jon Ericson
    We've backed up our data on LTO tapes for years and it's a real comfort to know we have everything on tape. A sister project and one of our data providers have both moved to 100% disk storage because the cost of disk has dropped so much. When we propose systems to potential customers these days we tend to downplay or not mention our use of tape systems for data storage since it might seem outdated. I feel more comfortable with having data saved in two separate formats: disks and tape. In addition, once data is securely written to tape, I feel (perhaps naively) that it's been permanently saved. Not having to rely on a RAID controller to be able to read back data is another plus for me. Do you see a place for tape backup these days?

  • Which CMS for a mobile app? No HTML, just XML or JSON

    - by Sascha
    I am a newbie with content management systems. I need a CMS that can deliver content to a client as XML or JSON. It is OK if the CMS can also manage HTML websites, but the priority is on data delivery over a web service. Which is the best CMS to use here? I want to avoid spending endless hours learning all the big CMS systems just to find out that they don't support this feature or that it's badly integrated. Thanks.
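    To make the requirement concrete: what is wanted is a CMS that exposes content over a web service as structured data rather than rendered pages. A hedged sketch of the kind of call the client would make (the endpoint URL and paths are invented for illustration; the real shape depends entirely on the CMS chosen):

        # Hypothetical content endpoint returning JSON or XML
        curl -H "Accept: application/json" https://cms.example.com/api/articles/42
        curl -H "Accept: application/xml"  https://cms.example.com/api/articles/42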

  • Automation of software installation - should I ask for text or file?

    - by Denis
    I am preparing a software installation in a Windows environment for my application. During installation it asks for a Subscriber ID, which must be entered into a text field. I am wondering whether that is the best approach for mass installations. I know that for mass installations IT teams use systems like Microsoft System Center, which automate deployment, but I do not know much about the capabilities of such systems. Can they automate data entry into text fields? Would it be better to change the installation process to ask for a file containing the Subscriber ID rather than typed text? By the way, I am looking for beta testers for my software, which lets users view Microsoft Project files without having Microsoft Project installed.
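    If the installer is an MSI, the usual way to make such a field automatable is to expose it as a public property, which deployment tools like System Center can pass on the command line instead of typing into the text field. A minimal sketch, where ProjectViewer.msi and SUBSCRIBERID are hypothetical names:

        rem Silent install, supplying the Subscriber ID as a public MSI property
        msiexec /i ProjectViewer.msi /qn SUBSCRIBERID=ABC12345

    A file-based prompt would work too, but a command-line property covers both cases: interactive installs can still show the text field, while mass deployments script it.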

  • Preventing SSH RSA host key warnings for change of key vs IP address

    - by Adam M-W
    I have a network with DHCP enabled, and a computer that dual boots two operating systems, each with its own SSH host keys (and yes, I would like to keep different keys on each rather than copying the same identity/private key to both). The IP address does not change between operating systems because the MAC address is the same, so when connecting over ssh, even by hostname via DNS/mDNS rather than by IP address, I get the warning:

        Warning: the RSA host key for 'hostname' differs from the key for the IP address '192.168.1.172'
        Offending key for IP in /Users/user/.ssh/known_hosts:37
        Matching host key in /Users/user/.ssh/known_hosts:38
        Are you sure you want to continue connecting (yes/no)?

    How can I suppress the warning when the hostname differs from the IP address for that hostname, but retain the ability to check that host keys stay the same for each hostname? (Each OS has a unique hostname.)
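    OpenSSH verifies the host key against both the hostname and the IP address; since two operating systems legitimately share one IP here, the per-IP half of the check can be disabled for just these hosts. A minimal sketch for ~/.ssh/config (the hostnames are examples):

        # ~/.ssh/config: keep per-hostname key checking, skip the
        # IP-address cross-check for the dual-boot machine only
        Host linuxbox.local winbox.local
            CheckHostIP no

    Per-hostname keys are still verified against known_hosts, so a changed key for either OS would still trigger a warning.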

  • Windows Login Failure

    - by Chris Bateson
    I'm getting an error in the Event Viewer, which is also generating a lot of Logon Failure messages on our syslog server, and I'm pretty much stuck on how to resolve it:

        EventID: 536
        Logon Type: 3
        Reason: The NetLogon component is not active

    This is on a Windows Server 2003 system. We're using Shavlik Protect 9 to scan and deploy patches. Shavlik stores the credentials for the systems and uses those stored credentials to deploy patches. It is able to scan and deploy to other systems on the network using those credentials with no errors generated; the error only appears when installing to the local system Shavlik is physically on. What's interesting is that it isn't generated during a scan, and the patches install fine. We contacted Shavlik, only to be told they are unable to help since it's a Microsoft error. Has anyone seen this?
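    The stated reason points at the Netlogon service on the target machine, so a first hedged check is whether that service is actually running when Shavlik deploys to the local system:

        rem Is the Netlogon service running on this box?
        sc query netlogon

        rem Start it if it is stopped
        net start netlogon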

  • Prevalence of WMI enabled in real Windows Server networks

    - by TripleAntigen
    Hi, I would like to get opinions from systems administrators on how common it is for WMI functionality to actually be enabled in corporate networks. I am writing an enterprise network application that could benefit from the features of WMI, but I noted after creating a virtual network based on Server 2008 R2 that WMI seems to be disabled by default. Do systems admins in practical corporate networks enable WMI, or is it usually disabled for security purposes? What is it used for when it is enabled? Thanks for any advice! MORE INFO: I should have said that I really need to be able to query the workstations, but I understand that by default the WMI ports are blocked by the Win7 and XP firewalls (at least). Do you use some sort of group policy or other method to leave a hole open for WMI on the workstations, or is it just the servers that are of interest? Thanks for the responses!
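    For the workstation side, the Windows firewall on Vista/7 and later ships a built-in rule group for WMI, so the hole can be opened per machine or pushed domain-wide via Group Policy (XP's older netsh firewall syntax differs). A hedged sketch:

        rem Enable the built-in WMI firewall rule group (run elevated)
        netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes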

  • Video codec that can be played on clean installs of Windows, OS X and Ubuntu

    - by fmercille
    I have to make a video that will need to be watched on different operating systems. Is there a "universal" video codec that can be played on Windows, OS X and Linux without requiring additional plugins or players beyond those that come with a default clean install of each of those systems? Compression is not an issue; I'm merely looking for compatibility (e.g. for audio, I would use WAV as a universal format). Note: I must assume that the video will be distributed in countries where software patents are enforced, and therefore I can't rely on the user to install non-free codecs on Linux. Thanks.

  • Mass deploy Oracle patches with OEM across different OSes

    - by bobsmith12
    I have not been able to get this to work. We are running our OEM Grid Control database/OMS on Red Hat 5 (32-bit), but our databases are on Solaris x86-64. I could not mass deploy agents since the operating systems are different, and when I download patches it is by OS. Is there a way to mass deploy to multiple operating systems? I have a lot of databases. I was given the Red Hat server for OEM because it was available. We have 10.1, 10.2, and 11.1 databases; the OEM DB is 10.2.0.5.

  • top process state column under FreeBSD

    - by Eric DANNIELOU
    When running top interactively, I can see various words in the state column: nanslp, biord, select, uwait, lockf, pause, kqread, piperd, sbwait... Some, like nanslp or kqread, are self-explanatory; others are not. I tried the man pages:

        STATE is the current state (one of "START", "RUN" (shown as "CPUn" on SMP
        systems), "SLEEP", "STOP", "ZOMB", "WAIT", "LOCK" or the event on which the
        process waits), C is the processor number on which the process is executing
        (visible only on SMP systems)

    I also tried search engines, Stack Overflow, and mailing list archives. Where may I get a complete list of the possible process states under FreeBSD 9, and their meanings?
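    Beyond the fixed states the man page lists, the lowercase words are kernel wait-channel messages: short strings passed to the kernel sleep routines (tsleep()/msleep()), so the complete set lives in the kernel source rather than in any man page. A hedged way to chase one down on a system with /usr/src populated:

        # Find where a given sleep state is set in the kernel
        grep -rn '"biord"' /usr/src/sys
        grep -rn '"sbwait"' /usr/src/sys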

  • Workflow Automation software for SVN

    - by KyleMit
    We're currently using IBM's ClearQuest for task management and ClearCase for change management, and they plug and play very well with each other: users can create tasks in ClearQuest as defects and enhancements, and developers can use those tasks to check out and modify code in source control. We're looking to upgrade to a better, more modern source control system, like SVN, although we're not married to that choice. There are loads of source control systems out there, but I'm having difficulty finding one that also includes the ability for users to enter and track tasks, especially in a way native to the source control system itself. Are there any products that replace ClearQuest for systems like SVN? Are there any other cheap / open source application pairs that handle both sides of the coin?
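    Trac and Redmine are the usual open source companions here: both give users a ticket tracker and can tie tickets to Subversion commits. The glue is typically a log-message convention enforced by a hook; a minimal sketch of a pre-commit hook requiring a "refs #NNN" reference (the convention and paths are assumptions):

        #!/bin/sh
        # Subversion pre-commit hook: reject commits whose log message
        # does not reference a tracker issue as "refs #NNN"
        REPOS="$1"
        TXN="$2"
        if ! svnlook log -t "$TXN" "$REPOS" | grep -Eq 'refs #[0-9]+'; then
            echo "Commit message must reference an issue (refs #NNN)." >&2
            exit 1
        fi
        exit 0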

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock in constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data-centre)? If so, would you be willing to share any details?

  • What should a hosting company do to prepare for IPv6?

    - by Josh
    At the time of writing, The IPv4 Depletion Site estimates there are 300 days remaining before all IPv4 addresses have been allocated. I've been following the depletion of IPv4 addresses for some time and realize the "crisis" has been going on for many years and IPv4 addresses have lasted longer than expected. However, as the systems administrator for a small SaaS / website hosting company, what steps should I be taking to prepare for IPv6? We run a handful of CentOS and Ubuntu Linux systems on managed hardware in a remote datacenter. All our servers have IPv6 addresses, but they appear to be link-local addresses. Our primary business function is website hosting on a proprietary website CMS system. One of my concerns is SSL certificates: at the moment every customer with an SSL certificate gets a dedicated IPv4 address. What else should I be concerned about, and what action should I take to be prepared for IPv4 depletion?
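    Link-local (fe80::) addresses mean the servers have no routable IPv6 yet, so the first step is asking the datacenter for a routed block. Once that exists, IPv6 actually eases the SSL problem: addresses stop being scarce, so every certificate can keep a dedicated address. A hedged nginx-style sketch of a dual-stack vhost (2001:db8::/32 and 203.0.113.0/24 are documentation ranges standing in for real assignments):

        server {
            listen 203.0.113.10:443 ssl;
            listen [2001:db8::10]:443 ssl;
            server_name customer.example.com;
            ssl_certificate     /etc/ssl/customer.example.com.crt;
            ssl_certificate_key /etc/ssl/customer.example.com.key;
        }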

  • 8-character device names

    - by Lee Harrison
    Is there any reason to still use only 8 characters in a device name? My boss still uses this rule for printers, computers, routers, servers... basically any device connected to our network. This leads to massive confusion among users, especially when it comes to printers. It also leads to confusion from an administration standpoint, because every device is named vaguely and similarly (it's only 8 characters!). I understand the history behind this and the compatibility with older systems, but none of our legacy systems will ever make use of PS-printers and Wi-Fi networks. Is there any reason to still do this, and what is everyone else doing when it comes to naming network devices at an enterprise level?

  • Debian/Ubuntu: Enabling "dist-upgrade" behavior for unattended-upgrades?

    - by Mark Renouf
    We've got a customized distribution of Ubuntu, a repository with some custom packages, and we run unattended-upgrades on a number of systems. What we want to be able to do is supply an update of one of our packages that has a new dependency which is not yet installed. I understand apt normally prevents that from happening automatically, and using dist-upgrade would permit it. How can I get that behavior so our unattended upgrades work the same way? Ideally we'd only want new packages installed if one of our packages requires it (either as a direct dependency or a child, etc.). Should I be aware of any potential problems or increased risk of breakage? The systems are generally not easily accessed via the console, so anything causing a problem that requires manual intervention would be very bad!
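    If unattended-upgrades itself cannot be coaxed into installing new dependencies, one hedged fallback is a scheduled non-interactive dist-upgrade restricted by normal apt pinning; all the flags below are standard apt/dpkg options:

        # Cron sketch: pull in new dependencies like a manual dist-upgrade
        # would; the Dpkg options keep conffile prompts from blocking
        apt-get update
        DEBIAN_FRONTEND=noninteractive apt-get -y \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            dist-upgrade

    The risk profile is exactly what the question fears: anything any configured repository publishes as an upgrade gets installed, so this is only sane if the custom repository is the sole source of new packages, or pinning restricts the rest.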

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website infrastructure is very complicated and fully distributed (probably like most large web companies). Am I right in thinking that, although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine will be the one physically associated with the IP address. I ask because I need to know whether, in places where distributed systems exist, there is still a single point of failure: usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.
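    In fact there is no such rule: one hostname can map to several machines. DNS can hand out multiple A records (round-robin), and with anycast the same IP address is announced from several routers at once, so even "the IP address" need not be one box. A quick way to see the multi-A-record case for yourself (output varies by site and resolver):

        # A name served by round-robin DNS returns several addresses
        dig +short www.example.com A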

  • What router hardware or software should be used when multiple public IPs are routed into the same LAN?

    - by lcbrevard
    I am looking for recommendations to replace a set of consumer-grade (Linksys, Netgear, Belkin) routers with something that can handle more traffic while routing more than one static public IP into the same LAN address space. We have a block of static public IPs (5 usable) with Comcast Business. Currently four of them are in use for:

    1. General office access
    2. Web server
    3. Mail and DNS servers
    4. Download and backup web server for a separate business

    All systems (a mixture of physical and virtual) are in the same LAN address space (10.x.y.0/24) to enable easy access between them inside the office. There are 30 or more systems in use depending on which virtual machines are currently active, with a mixture of Windows, Linux, FreeBSD, and Solaris. Currently a separate consumer-grade router is used for each of the four static addresses, with its WAN address set to the specific static address and a different gateway address for each:

    1. Uses 10.x.y.1 - various ports are forwarded to various LAN IPs on systems with gateway 10.x.y.1
    2. Uses 10.x.y.254 - port 80 is forwarded to a server with gateway 10.x.y.254
    3. Uses 10.x.y.253 - ports for mail and DNS are forwarded to a server with gateway 10.x.y.253
    4. Uses 10.x.y.252 - ports as needed are forwarded to a server with gateway 10.x.y.252

    Only router 1 is allowed to serve DHCP, and address reservation based on MAC is used for most of the internal "server" IP addresses so they stay at fixed values. [Some are set static due to limitations in the address reservation capabilities of router 1.] And, yes, this really does work! But I am looking for:

    - better DHCP with more capable address reservation
    - higher capacity, so I don't have to periodically power cycle the routers

    One obvious improvement would be to have a real DHCP server and not use a consumer-grade router for that purpose. I am torn between buying a "professional" router such as Cisco, Juniper, or SonicWall versus learning to configure some spare hardware to perform this function. The price goes up extremely rapidly with capabilities for commercial routers! Worse, some routers require licensing based on the number of clients: a disaster in our environment with so many virtual machines. Sorry for such a long posting, but I am getting tired of having to power cycle routers and deal with shifting IP addresses afterwards!
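    A common DIY replacement is a single Linux box as the router: all five statics bound to the WAN interface, per-address DNAT rules replacing the four consumer routers, and a real DHCP server (e.g. ISC dhcpd) behind it. A hedged iptables sketch, with 203.0.113.0/24 as documentation space standing in for the real static block and 10.0.0.0/24 for the LAN:

        # eth0 = WAN (all statics bound), eth1 = LAN
        iptables -t nat -A PREROUTING -d 203.0.113.11 -p tcp --dport 80 \
                 -j DNAT --to-destination 10.0.0.254:80
        iptables -t nat -A PREROUTING -d 203.0.113.12 -p tcp --dport 25 \
                 -j DNAT --to-destination 10.0.0.253:25
        # Give each internal server a matching source address outbound
        iptables -t nat -A POSTROUTING -s 10.0.0.254 -o eth0 \
                 -j SNAT --to-source 203.0.113.11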

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are sysadmins' thoughts on mitigating the 'firesheep' attack for servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites, and it is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and screws with site indexing, assets, and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on StackOverflow; however, it would be interesting to see what the systems engineers think.
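    The core mitigation is ensuring the session cookie never crosses the wire in clear: force HTTPS for everything that carries it, and mark the cookie Secure (the cookie flag is set by the application or framework, not the web server). A minimal nginx-style sketch of the redirect half:

        # Send all cleartext traffic to HTTPS so the session cookie is
        # never exposed to a sniffer on the local network
        server {
            listen 80;
            server_name www.example.com;
            return 301 https://$host$request_uri;
        }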

  • When should I upgrade to Ubuntu 10.04 (Lucid Lynx)? [closed]

    - by Emyr
    I'm a web developer for a small non-IT firm. When 9.10 came out, I was using it with no adverse effects from about a month before release (iirc, first beta), initially as an upgrade but as a clean install later to ensure my system would be consistent with most other 9.10 systems. The last alpha of 10.04 came out last week, with another 2 weeks before beta. I'm quite eager to do it today, but obviously the usual "not for production systems" notice is still in place. When should I upgrade? Do I need to worry about software installed from source? (./configure, make, make install etc) Is the attraction of a non-brown theme really this tempting for you?
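    For trying a pre-release Ubuntu, the supported path is the release upgrader's development flag; a hedged sketch (back up first, since pre-release upgrades can and do break):

        # Upgrade to the current development release (10.04 before GA)
        sudo do-release-upgrade -d

    Software installed from source under /usr/local is left alone by the upgrader, but it may need rebuilding if the libraries it links against change version.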

  • Is there a filesystem that is "friendly" to both windows and Linux?

    - by Somebody still uses you MS-DOS
    I'm planning to install Ubuntu 10.04 alongside Windows 7. (I'm new to Linux; I have to use it at work, so I'm planning to install it at home to learn more.) I plan to use a partition for my Windows system files (C:), a partition for my personal files that already exists (D:), and a new partition for Linux. What I want is a partition for my personal files that works across both systems, so whether I boot into Windows or Linux, there are the same "Videos", "Pictures", "Projects" folders. Is it possible? Is there a hard-disk filesystem capable of handling writes from both systems without too much risk of corruption? (It can't be FAT32; I need to store 4 GB files.) I've read some horror stories of corruption, and would like to know, from a sysadmin POV, all the risks involved in such a scenario.
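    In practice NTFS is the usual answer: Windows 7 uses it natively, files larger than 4 GB are fine, and Ubuntu 10.04 ships the ntfs-3g driver for read/write access. A hedged /etc/fstab sketch for the Linux side (the device name and mount point are assumptions):

        # /etc/fstab: shared NTFS data partition, writable by uid/gid 1000
        /dev/sda5  /media/shared  ntfs-3g  defaults,uid=1000,gid=1000  0  0

    The main corruption risk is mounting a partition Windows did not cleanly unmount, so avoid hibernating Windows while the shared partition is in use.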

  • Straightforward to virtualise with XEN or KVM on IBM Server System x3650 M4 791562G?

    - by ChrisZZ
    I want to build a virtualised server environment using XEN or KVM. The virtual machines should be purely Debian systems, so XEN or KVM should be a sane choice. Now, while buying servers, I am confronted with the fact that the vendors obviously only support commercial solutions. I think one should be able to install non-commercial software on a good server as well, but of course systems sometimes have hardware that requires drivers not found in the OS community. So I am asking: is it straightforward to run Debian on an IBM Server System x3650 M4 791562G, or even to virtualise the IBM Server System x3650 M4 791562G using XEN or KVM? I am sure there will always be a way to achieve this goal, but that way might have high mileage, so I am not asking whether this is theoretically possible, but whether it should be straightforward and practically easy to do, with no major headaches to be expected.
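    On the Debian side both hypervisors are in the standard repositories, and for KVM this class of Xeon hardware only needs VT-x. A hedged sketch of a first KVM guest (package names match Debian/Ubuntu of this era; the guest name and sizes are examples):

        # Confirm the CPU exposes hardware virtualisation
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Install KVM plus libvirt and create a Debian guest
        apt-get install qemu-kvm libvirt-bin virtinst
        virt-install --name debian-guest --ram 2048 \
            --disk path=/var/lib/libvirt/images/debian-guest.img,size=20 \
            --cdrom /srv/iso/debian-netinst.iso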

  • Conficker keeps coming back

    - by PHLiGHT
    I hadn't run into anyone who had actually caught this virus until recently, when dealing with a new client that didn't believe in patching their systems and has consequently been hit with this pest. I was under the impression that if you have KB958644 installed and run the latest Malicious Software Removal Tool, Conficker would be squashed. I have several systems that are fully patched and MSRT has removed the virus, yet the bugger keeps coming back. This has even happened to a file server and a Domain Controller. What am I missing here? They are running AVG, which I used to recommend, but I have been doubting its effectiveness over the past year or so.
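    Conficker re-propagates over the network (MS08-067, writable admin shares with weak passwords, and autorun), so a single unpatched or still-infected peer can keep reinfecting cleaned machines. A hedged spot-check that the MS08-067 fix really is present on a given host:

        rem Verify KB958644 (MS08-067) is installed on this machine
        wmic qfe get hotfixid | findstr /i KB958644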

  • Which open source/free CMSs allow for staging content changes before putting live?

    - by elliot100
    I'm not sure that I've phrased the question all that well. What I'm really looking for is a CMS feature where content changes are made on a restricted-access 'staging/preview' site before being published to the live external site. The open source/free CMSs I've looked at so far (Textpattern, WordPress, Movable Type) don't seem to allow this, as far as I can see. Although they allow new content to be saved as draft/pending, viewable by users with appropriate privileges, this doesn't work for changes to existing content: a post/page can't be live and also have a new version pending. (Do correct me if I'm wrong.) I realise it should be possible to do this by making all changes on a staging site and then replicating the contents of that database to a separate live site manually, but I am looking for something a little more elegant. Edit: Just to clarify, both systems which involve synchronising a live database with a staging database, and systems which offer live/staging views of a single database, would be of interest. I am sure I have seen both approaches in commercial/proprietary CMSs.
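    For the manual staging-to-live route the question mentions, the replication itself scripts down to a couple of steps; a hedged sketch assuming a MySQL-backed CMS with rsync-able assets (hostnames, paths, and database names are examples):

        # Push the staging database and uploaded assets to the live site
        mysqldump -u cms -p cms_db > /tmp/staging_dump.sql
        mysql -h live.example.com -u cms -p cms_db < /tmp/staging_dump.sql
        rsync -az /var/www/staging/assets/ live.example.com:/var/www/site/assets/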
