Search Results

Search found 5723 results on 229 pages for 'turing machines'.

Page 55/229

  • Getting list of opened ssh connections by name

    - by lyrae
    I have a config file in my .ssh directory that looks like this:

      Host somehostA
          HostName 123.45.67.89
          User katsh

    So from my local machine I can ssh into multiple machines by the name given in the config file: ssh somehostA, ssh somehostB, ssh somehostC, and so on. Is it possible to get a list of all machines I am currently connected to, by those names? I know I can run lsof -i tcp -n | grep '\<ssh\>' and get something like:

      ssh 9871  katsh 3u IPv4 400199 0t0 TCP 987.654.2.2:47329->987.654.2.2:47329:ssh (ESTABLISHED)
      ssh 20554 katsh 3u IPv4 443965 0t0 TCP 123.456.7.8:41923->123.456.7.8:ssh (ESTABLISHED)

    but that only lists IPs, not names.
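
    One approach (not from the original thread, just an illustrative sketch) is to map each established connection's peer IP back to the HostName entries in ~/.ssh/config. The Python sketch below assumes lsof is available and that every remote you care about appears as a literal IP in a HostName line; anything it cannot resolve is printed as the raw IP.

      #!/usr/bin/env python3
      """Rough sketch: list open ssh connections by their ~/.ssh/config alias."""
      import os
      import re
      import subprocess

      def config_aliases(path=os.path.expanduser("~/.ssh/config")):
          """Map HostName values to their Host alias from the ssh config file."""
          aliases, current = {}, None
          with open(path) as fh:
              for line in fh:
                  parts = line.split()
                  if len(parts) >= 2 and parts[0].lower() == "host":
                      current = parts[1]
                  elif len(parts) >= 2 and parts[0].lower() == "hostname" and current:
                      aliases[parts[1]] = current
          return aliases

      def established_ssh_peers():
          """Yield remote IPs of ESTABLISHED connections opened by the ssh client."""
          out = subprocess.run(["lsof", "-i", "tcp", "-n", "-P"],
                               capture_output=True, text=True).stdout
          for line in out.splitlines():
              if line.startswith("ssh") and "ESTABLISHED" in line:
                  m = re.search(r"->([0-9.]+):\d+", line)
                  if m:
                      yield m.group(1)

      if __name__ == "__main__":
          aliases = config_aliases()
          for ip in established_ssh_peers():
              print(aliases.get(ip, ip))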

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to optimise it to handle 1 Gbps of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large, with 4 other machines hitting it using ab with -i (HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000 -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s over the network. Are there any tools or approaches I could use to load test nginx that would generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it throws away responses? I'm hitting a very small (40-byte) static asset, and have peaked at handling 50k concurrent connections and 25k req/s when using a single load-generator machine.
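
    For comparison with the ab invocation above, here is a minimal, stdlib-only Python sketch of a keep-alive HEAD-request generator. It is illustrative only: the host, path, thread count and request count are placeholder values, and a single pure-Python client will not come close to saturating a gigabit link, so treat it as a way to experiment with request patterns rather than a replacement for a dedicated load tool.

      #!/usr/bin/env python3
      """Illustrative keep-alive HEAD-request load generator (placeholder values)."""
      import http.client
      import threading
      import time

      TARGET_HOST = "10.0.0.5"      # placeholder: nginx instance under test
      PATH = "/tiny.html"           # placeholder: the small static asset
      THREADS = 50
      REQUESTS_PER_THREAD = 10000

      def worker(results, idx):
          # One persistent (keep-alive) connection per thread, reopened on error.
          conn = http.client.HTTPConnection(TARGET_HOST, timeout=10)
          ok = 0
          for _ in range(REQUESTS_PER_THREAD):
              try:
                  conn.request("HEAD", PATH)
                  resp = conn.getresponse()
                  resp.read()        # drain so the connection can be reused
                  ok += resp.status == 200
              except (OSError, http.client.HTTPException):
                  conn.close()
                  conn = http.client.HTTPConnection(TARGET_HOST, timeout=10)
          results[idx] = ok

      if __name__ == "__main__":
          results = [0] * THREADS
          threads = [threading.Thread(target=worker, args=(results, i))
                     for i in range(THREADS)]
          start = time.time()
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          elapsed = time.time() - start
          total = sum(results)
          print(f"{total} successful requests in {elapsed:.1f}s "
                f"({total / elapsed:.0f} req/s)")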

    Read the article

  • Where do dsrm, dsadd and dsmove come from?

    - by Ben
    I am writing a script to join a machine to the domain after it has been imaged. (I don't want to do it in Sysprep.) On the machine I am writing the script on (a battered, world-weary IT workhorse with all sorts of crud on it) the script works fine. However, on one of my shiny new test machines it doesn't find dsrm and dsadd. I can only assume I inadvertently installed these on my machine in the past. I only want them for the purpose of joining machines to the domain, so I don't want any full-blown admin-ware installed. Where do I get / turn on dsrm, etc.?

    Read the article

  • How to move a Windows machine properly from RAID 1 to RAID 10? [migrated]

    - by goober
    Goal: I would like to add two more hard drives to my current RAID 1 setup and create a RAID 0 setup on top of the two RAID 1 setups (which I believe is referred to as "RAID 10").

    Components involved:
      • Intel P68 chipset motherboard
      • 4 SATA ports that can be configured for RAID
      • An Intel SSD cache that sits in front of the RAID, and a 64 GB SSD configured in that manner
      • Two 1 TB HDDs configured in RAID 1
      • OS: Windows 7 Professional

    Resources consulted so far: I found a great resource on LinuxQuestions.org describing a good "best practices" process for Linux machines, but I'd like to develop a similar process that I know works on Windows machines.

    Read the article

  • Win Server 2008 R2 - Mapped shared folder hanging?

    - by M-Tech
    I have recently built a Windows Server 2008 R2 machine. It is purely for file-server purposes and is very much a basic build: all Windows updates installed and joined to the domain. I have set up a shared folder on the C: drive and added permissions for domain users as co-owners. The client machines run XP SP3 and are also part of the domain. We have a few servers running the same setup on a few of our sites, but this one in particular crashes users' machines (explorer.exe hangs for at least a few minutes) when they attempt to access the shared folder. I have turned off the power-save option on the network card as well; still no change. Any help is very much appreciated and I look forward to hearing from you ;)

    Read the article

  • Ubuntu live CD and installing new applications onto a USB drive

    - by bikesandcode
    Background: I am a programmer who occasionally has access to other computers when on vacation or similar. These are generally the machines of friends or family, so randomly installing Ubuntu on them wouldn't be terribly polite, and I would like to avoid the target machine's hard drive completely. Not all of these machines can boot from USB either, so that simple solution is out. What I want to be able to do is boot an Ubuntu live CD, plug in a USB drive, and then grab various updates and other applications, installing them to the USB drive. Later, on another machine, I put in the live CD, and after boot I put in the USB drive and, magically, I have all of the updates/applications/data/etc. that I've tossed onto the drive. I suspect it should be possible to mount /home, /var, /usr, and maybe a couple of other locations from the USB drive, or something along those lines. Is this possible, and what do I need to do?

    Read the article

  • How to trigger chef-client on all nodes from my workstation

    - by divyanshm
    I have 5 nodes and all of them have one setup cookbook in common. Now I would like to add another task to this common cookbook that configures SQL Server for me on all the nodes. Is there a way/command to manually trigger this change across all clients right away? I use Azure VMs; all the nodes are Windows Server 2012 machines. I could run knife winrm machine-name chef-client -m -x username -P password on each of the machines, but I'm sure there must be a better way of doing this. I'm new to Chef, so I might be missing a very basic command here.
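
    Until a better mechanism turns up, one workstation-side option is simply to drive the same knife winrm command over every node from a small script. The sketch below is illustrative only: the node names and user are placeholders, knife is assumed to be already configured on the workstation, and the knife winrm arguments are exactly the ones quoted in the question.

      #!/usr/bin/env python3
      """Illustrative sketch: run chef-client on each node via knife winrm."""
      import getpass
      import subprocess

      NODES = ["node01", "node02", "node03", "node04", "node05"]  # placeholder names
      USER = "adminuser"                                          # placeholder WinRM user

      def run_chef_client(node, user, password):
          # Same invocation as in the question: knife winrm <node> chef-client -m -x <user> -P <pass>
          cmd = ["knife", "winrm", node, "chef-client", "-m", "-x", user, "-P", password]
          print(f"--- {node} ---")
          return subprocess.run(cmd).returncode

      if __name__ == "__main__":
          password = getpass.getpass("WinRM password: ")
          failed = [n for n in NODES if run_chef_client(n, USER, password) != 0]
          if failed:
              print("chef-client failed on:", ", ".join(failed))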

    Read the article

  • Best practice for administering a (hadoop) cluster

    - by Alex
    Dear all, I've recently been playing with Hadoop. I have a six-node cluster up and running with HDFS, and I have run a number of MapReduce jobs. So far, so good. However, I'm now looking to do this more systematically and with a larger number of nodes. Our base system is Ubuntu, and the current setup has been administered using apt (to install the correct Java runtime) and ssh/scp (to propagate the various conf files). This is clearly not scalable over time. Does anyone have experience of good systems for administering (possibly slightly heterogeneous: different disk sizes, different numbers of CPUs on each node) Hadoop clusters automagically? I would consider diskless boot, but I imagine that with a large cluster, getting it up and running might be bottlenecked on the machine serving the OS. Or some form of distributed Debian apt to keep the machines' native environments synchronised? And how do people successfully manage the conf files across a number of (potentially heterogeneous) machines? Thanks very much in advance, Alex

    Read the article

  • Hyper-V Server 2008 Configuration

    - by Eternal21
    I need to set up Lync on a Server 2008 machine. The problem is that Lync cannot be installed on a domain controller, which means I need one Server 2008 instance that is a domain controller and another Server 2008 instance running Lync. I figured the best way would be to host both on a single physical machine using virtual machines. I have installed Server 2008, but now my question is this: do I add two virtual machines (domain controller and Lync), or do I add only one virtual machine for Lync and let the 'parent' Server 2008 act as the domain controller?

    Read the article

  • How to troubleshoot Application Popup issues 0XC0000142 and 0XC000009a

    - by DotDot
    I randomly run into one of these popups when our application runs. The machines range from 8 GB/8-core to 24 GB/24-core and run Windows Server 2008 R2. The application is a bunch of Perl scripts and exes that are expected to utilize the server well. The process tree can be quite deep (5-6 child levels) and quite broad (60-70 level-1 processes). We hit this issue in roughly 1% of runs, on random machines. The application stalls on the popup unless someone clicks the damn button. The event log reads: cmd.exe - "Failed to initialize app. Click OK to close app". How can I reliably repro these issues?

    Read the article

  • Ways to deduplicate files

    - by User1
    I simply want to back up and archive the files on several machines. Unfortunately, among them are some large files that are the same file but stored in different places on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through them, recognize duplicate files, and give me a list, or even delete one copy of each duplicate?
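
    Dedicated deduplication tools exist (fdupes is a common choice on Linux), but the underlying idea is simple enough to sketch: group files by size, then by a content hash, and report every group with more than one member. The Python snippet below is an illustrative sketch only; the repository path is a placeholder, and it only reports duplicates rather than deleting them.

      #!/usr/bin/env python3
      """Illustrative sketch: report duplicate files by size, then SHA-256."""
      import hashlib
      import os
      from collections import defaultdict

      ROOT = "/path/to/repository"   # placeholder: the common file repository

      def sha256(path, chunk=1 << 20):
          """Hash a file in 1 MiB chunks so large photos don't need to fit in RAM."""
          h = hashlib.sha256()
          with open(path, "rb") as fh:
              while True:
                  block = fh.read(chunk)
                  if not block:
                      break
                  h.update(block)
          return h.hexdigest()

      def find_duplicates(root):
          by_size = defaultdict(list)
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  p = os.path.join(dirpath, name)
                  try:
                      by_size[os.path.getsize(p)].append(p)
                  except OSError:
                      pass               # unreadable file; skip it
          for paths in by_size.values():
              if len(paths) < 2:
                  continue               # a unique size cannot be a duplicate
              by_hash = defaultdict(list)
              for p in paths:
                  by_hash[sha256(p)].append(p)
              for group in by_hash.values():
                  if len(group) > 1:
                      yield group

      if __name__ == "__main__":
          for group in find_duplicates(ROOT):
              print("Duplicates:", *group, sep="\n  ")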

    Read the article

  • Connecting 2 Ubuntu computers as a LAN

    - by Brendan Cutjar
    Hi, I am trying to connect 2 Ubuntu computers as a LAN. In my current setup I have one machine running Ubuntu 11.10 with a Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller, while the other machine is running Ubuntu 12.04 with an Atheros Communications Inc. AR8152 v1.1 Fast Ethernet controller. The two machines are connected via a switch (Edimax ES-3205P). Unfortunately, I still cannot connect the two machines to each other. Can somebody please show me what to do and how to go about solving this issue?

    Read the article

  • Snow Leopard Windows 7 File sharing issue

    - by nsiggel
    Hello, I'm wondering if anyone has suggestions on how to fix this file-sharing issue between Snow Leopard and Windows 7. On my home network I have a system running Windows 7 that acts as a file server: \\WINDOWS7-A\SHARE. I also have an iMac which can access the file share most of the time. Now comes the problem: as soon as I introduce my Windows 7 laptop and copy files from it onto the file server (\\WINDOWS7-A), the file server is no longer accessible from the Snow Leopard machine. I assume this somehow has to do with elevated security on the network share having been negotiated between the two Windows 7 machines, which no longer allows the Mac to see the Windows machines, but I'm not sure how to disable this; the only way I can restore communication between the Mac and the Windows 7 machine is to restart it, which is less than ideal. Any suggestions would be welcome.

    Read the article

  • ZFS on top of iSCSI

    - by Solipsism
    I'm planning to build a file server using ZFS and BSD, and I was hoping to make it more expandable by attaching drives housed in other machines in the same rack via iSCSI (e.g., one machine runs ZFS, and the others expose iSCSI targets that the ZFS box connects to and adds to zpools). Searching for other people who have tried this has mostly led me to resources about exposing iSCSI shares on top of ZFS, but nothing about the reverse. Primarily I have the following questions: Is iSCSI over gigabit Ethernet fast enough for this purpose, or would I have to switch to 10GbE to get decent performance? What would happen when one of the machines hosting iSCSI targets disconnects from the network? Is there a better way to do this that I am just not clever enough to have realized? Thanks for any help.

    Read the article

  • cd (change directory) to my home directory on Windows [closed]

    - by deostroll
    Possible Duplicate: Is there a shortcut command in the Windows command prompt to get to the current user's home directory, like there is in Linux? Is there any short way to cd to the user-specific directory in the command prompt? For example, in a Linux shell (Debian-based) we do cd ~ and it instantly takes us to the current logged-in user's directory, /home/<username>. Is there anything to this effect on Windows? P.S.: I'm currently trying to do this on XP machines; if it differs for other machines, mention that too.
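
    On Windows the home directory is exposed through the %USERPROFILE% environment variable (present on XP as well), so cd /d "%USERPROFILE%" in cmd is the usual counterpart to cd ~. As a small cross-check, the Python sketch below prints the same location in a portable way; it is just an illustration, not part of the original question.

      #!/usr/bin/env python3
      """Tiny illustration: resolve the current user's home directory portably."""
      import os

      # expanduser("~") works on both Windows and Linux; USERPROFILE is the
      # cmd-style variable that holds the same path on Windows (HOME on Linux).
      print(os.path.expanduser("~"))
      print(os.environ.get("USERPROFILE") or os.environ.get("HOME"))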

    Read the article

  • Where do I find a free (open source preferably) VNC management tool?

    - by thenior
    Hello, I am trying to set up remote access internally for our business. Basically, I just want to remote-desktop into any computer on the network. I don't want to use LogMeIn, because for security I only want it to be internal. Essentially, I am looking for a way to install VNC clients on all the machines and, on my machine, have a centralized manager for all the machines connected to it. It doesn't have to be VNC - it just needs to work and be free. All systems are running Windows 7 64-bit.

    Read the article

  • How do I remotely run a Powershell workflow that uses a custom module?

    - by drawsmcgraw
    I have a custom PowerShell module that I wrote for various tasks. Now I want to craft a workflow whose activities use commands from the module. Here's my test workflow:

      workflow New-TestWorkflow {
          InlineScript {
              Import-Module custom.ps1
              New-CommandFromTheModule
          }
      }

    Then I run the workflow with New-TestWorkflow -PSComputerName remoteComputer. When I do this, the import fails because it can't find the module. I imagine this is because the workflow executes on the remote machine, where my module does not exist. I can see myself running this across many machines, so I'd really rather not have to install the module and maintain it on all of them. Is there some way to keep my module in a central place and use it in workflows?

    Read the article

  • Deployed Web Application Requests for User Name and Password

    - by user43175
    I recently deployed a .NET web application onto the server. The authentication mode is set to Windows (since the application is accessible only to intranet users). On some machines the application loads up properly; on other machines, a logon dialog appears asking for a user name and password. These dialogs are the ones you normally see when trying to log into a Windows domain. Any idea why this happens randomly? Thanks.

    Read the article

  • Can an ESX server under heavy load cause cpu spikes on guest VM's?

    - by ReferentiallySeethru
    We have a number of VMs running on an ESX 4.1 server for product testing, and the ESX server is at times under heavy load. We've been seeing high CPU levels during some use cases, but we can't always reproduce them. If the ESX server as a whole is under heavy load, could this cause guest machines to show high CPU usage? To ask it a different way: if the guest machines require more CPU resources than the server has, how does this affect the CPU usage reported by the guest OS and its processes?

    Read the article

  • Boot Linux from DOS (with loadlin.exe etc)

    - by dreamlax
    I have been using the latest version of loadlin.exe (version 1.6e). It works on some machines, but on others I get "no place after kernel for initrd". The kernel is about 5 MB in size (non-modular) and my initrd image (decompressed) is about 8 MB. One route I could take is to enable module support and offload some of the kernel's weight into the initrd image, but I'm not confident this will rectify the issue. Are there any alternatives to loadlin.exe that are capable of loading Linux from a booted DOS session? I basically have a series of DOS tools that I'd like to run one after another and then boot into Linux, which loadlin.exe works very well for, except on some machines.

    Read the article

  • How can I re-create Microsoft Cluster Service resource groups on a different cluster?

    - by PersonalNexus
    I use Microsoft Cluster Service on a cluster of Windows Server 2003 machines containing several dozen resource groups. In the process of migrating to newer hardware, I would like to move resources to the new machines one resource group at a time, spread out over a few days, to ease the migration and minimize risk. I was wondering if there is a smarter way to do this than manually re-creating resources on the new cluster and then deleting them on the old one. The new cluster has already been set up properly; the only thing missing is the resource groups and the resources they contain (IPs, network names, services...). I have looked through the options of the cluster admin GUI and cluster.exe's command-line options, but haven't found anything like an import/export feature to copy over the configuration of a resource or an entire resource group. Does something like this exist?

    Read the article

  • Access server by hostname without domain

    - by projectshave
    I want to access services on other machines on my home network with just their hostname. In every browser, "http://machine" fails, but adding a period, as in "http://machine./", works. Is there a way to avoid adding that extra period? My setup is a router running DD-WRT with DNSmasq turned on, Win7 machines, and several Ubuntu VMs. nslookup works fine with just the hostname. Remote Desktop works, but TightVNC needs the extra period; ssh needs the period; and, as I said, all my browsers need the extra period. I'd prefer a solution that doesn't require manually maintaining the hosts file. Thanks.

    Read the article

  • How can I log in to a malfunctioning domain controller?

    - by Billy ONeal
    Hello :) I have a setup here with a single domain controller and 4 servers which were within its domain. The servers were brought down and are being repurposed, but we would like to keep backups of the machines around. I am going through them one by one and taking the backups, which requires that I log in to these machines. I've been able to log in to all the servers except the domain controller. The domain controller itself seems not to have started all of its Active Directory services, and when one tries to log in, it complains that the system cannot log you on now because the domain XXXXX is not available. How can I log in to this box? Billy3

    Read the article

  • Kernel compiling with -j2+ parameter ends prematurely with no error message or output bzImage

    - by Minix
    I noticed quite a while ago that compiling a kernel with the -j parameter set to 1 or more doesn't produce a bzImage. Instead, the build ends prematurely without any warning. I have reproduced the same behavior on both my netbook and my home server. As far as I can tell, the point where the compilation stops is random: compiling twice with the same parameters will probably stop at different files. However, when I run make with no -j* parameter, the compilation finishes just fine and outputs a working bzImage. Both machines run Intel Atom CPUs (N270 on the netbook and 330 on the server) and I've compiled for these processors. If I recall correctly, I've tried compiling both with Atom and with generic x86_64 options. The kernel version I'm building is 2.6.34.1. I've always compiled fine with those options on my Core 2 Duo and Pentium Dual-Core machines. Has anyone experienced this issue? Any idea why this happens? Is there a fix or workaround?

    Read the article

  • Managing Windows 7 Workstations

    - by ethamoose
    There was a similar thread to this a few years back, but without any solutions, so I'm just wondering whether things have moved on since then. I'm primarily a Mac admin working in a college, but I have recently taken over responsibility for about 30 student workstations running Windows 7. For the Macs I have Apple Remote Desktop, where I can log on to machines in a session, check for any nefarious student activity and, if necessary, log off or lock users out of machines. Could anyone suggest an equivalent Windows system manager where I can do these tasks - e.g. not just a VNC client but one with more management options?

    Read the article
