Search Results

Search found 5723 results on 229 pages for 'turing machines'.


  • How to prevent people taking software home?

    - by Robert MacLean
    Most companies I have worked at have had either a collection of disks or a network share with the installers of the commonly used software, so that the IT department and skilled users can easily install what they need on their work machines. However, some users see this as an opportunity to get "free" software for their home machines. I've seen the draconian approach of locking the machines down completely, but that does not work well (in my view - if you disagree, feel free to comment) because you add so much extra work for IT, and users get that Big Brother feeling. So how do you prevent users from taking software home while still allowing them to install what they need? You can assume that most of the users in the organisations I work in are smart enough to install software; I'm not worried about the tea lady here.

    Read the article

  • Routing using Linux with 2 NIC cards

    - by Kevin Parker
    I configured Clear OS to be in gateway mode on a machine with two NIC cards. eth0: 192.168.2.0/24, IP 192.168.2.27, connected to a modem and thus has internet connectivity. eth1: 192.168.122.0/24, IP 192.168.122.10, connected to the other machines in the LAN through a switch. The LAN machines on the 192.168.122.0 network are not getting internet. How can they get internet through the Clear OS gateway? I have enabled packet forwarding in Clear OS using "ip_forward=1". What am I missing? Can you please help me with this? The following is the static route I have added on LAN machine 1 with IP address 192.168.122.11: ip route add 192.168.2.0/24 via 192.168.122.10 dev eth0. ip route show gives: 192.168.2.0/24 via 192.168.122.10 dev eth0; 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.11. But the 192.168.2.0/24 network is still not reachable. Where could the problem be?
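
    One thing the description above does not mention is NAT: ip_forward=1 only turns on routing, and the upstream modem has no return route to 192.168.122.0/24, so the gateway usually also needs to masquerade the LAN traffic. A minimal iptables sketch, assuming eth0 is the internet-facing NIC and eth1 the LAN NIC as described in the question:

      # NAT everything leaving via the internet-facing interface
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      # allow forwarding between the two NICs
      iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT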

    Read the article

  • Is it possible to push DNS search suffixes from the DNS server to clients?

    - by Mark
    Our (Active Directory, Windows-server-based) intranet used to be called "intranet", and DNS worked fine for Windows machines and iPads/Android devices. We have changed it to "apps.intranet", and it still works for Windows machines, but no longer for iPads/Android devices. I think this is because our Windows clients are configured to append .company.com when searching DNS, to make it a fully qualified lookup (this search suffix list is pushed to the PCs via AD group policies). I must admit, though, I don't know why it worked with just "intranet"! Does anyone know if it's possible to get DNS to "tell" the iPads/Android devices to append .company.com, or how we can make it work some other way (but still using the multi-label, non-qualified DNS names)? Thanks!
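
    For reference: plain DNS has no mechanism for pushing search suffixes; the usual transport is DHCP option 119, the domain search list from RFC 3397, which iOS honours (Android support varies by version). A hedged sketch of what that looks like in dnsmasq - assuming a dnsmasq DHCP server, which the question does not say is in use:

      # send a domain-search list (DHCP option 119) to clients
      dhcp-option=option:domain-search,company.com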

    Read the article

  • Amazon EC2: how to find out detailed CPU usage?

    - by j0nes
    I am running several EC2 instances, and I want to know exactly what work my CPU is doing. On "normal" machines I do this with Munin and its CPU plugin, which looks at the statistics provided by /proc/stat. On my EC2 machines, however, I get incorrect graphs. The machine has two cores, so the max CPU usage should be 200% - yet it gets as high as 400%. I know that I should use Amazon CloudWatch to see the total CPU usage (and this is the official way recommended by Amazon), but I am specifically looking at how the CPU usage is spent (e.g. system, user, iowait). Is there a way to get detailed CPU usage statistics on EC2 instances?
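
    One detail that helps here: on Xen-based EC2 instances /proc/stat carries a steal column, and counting steal time is a common reason graphs exceed the visible core count. The per-category breakdown can be read directly; a small sketch (the field order in /proc/stat is fixed: user nice system idle iowait irq softirq steal):

      # cumulative jiffies per core and category; graph the deltas over time
      awk '/^cpu[0-9]/ { printf "%s user=%s system=%s iowait=%s steal=%s\n", $1, $2, $4, $6, $9 }' /proc/stat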

    Read the article

  • How to make WinServer's AD work with Linux DNS/DHCP on VMware?

    - by Borald
    Hope you're fine. I have 2 virtual machines: Windows Server 2008 with Active Directory installed, and a Linux machine that works as a DNS and DHCP server. I need to make them work together, but I don't know if this is going to be possible, because VMware is sharing the NIC with the other virtual machines and the computer itself. I've assigned different static IP addresses to the servers. Is there a way for me to get these things interconnected and test them on some virtual clients? Any help will be much appreciated (useful links, tutorials, ...). Thanks in advance!
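
    For what it's worth, AD clients locate the DC through SRV records inside the AD DNS zone, so the common pattern is to leave that zone on the Windows server and have the Linux DNS forward it. A sketch for dnsmasq, assuming dnsmasq is the Linux DNS in use; the domain and DC address are placeholders:

      # forward all queries for the AD domain (including _ldap._tcp SRV lookups) to the DC
      server=/corp.example.com/192.168.1.10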

    Read the article

  • Secure copy uucp style

    - by Alexander Janssen
    I often find that I have to make several hops to reach a remote host, because there is no direct routing between my client and that host. When I need to copy files from a remote host two or more hops away, I always have to: client$ ssh host1; host1$ ssh host2; host2$ scp host3:/myfile .; host2$ exit; host1$ scp host2:myfile .; host1$ exit; client$ scp host1:myfile . Back when uucp was still being used, this would be as simple as uucp host1!host2!host3 /myfile . I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without the need to set up a lot of tunnels or deploy new software to the remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts and does the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding. Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something that already does it. Minimally invasive changes to the hosts on the bangpath, simple invocation from the client. Edit 2: To give you an impression of how it's been done properly for interactive sessions, have a look at the GXPC clustershell. This is basically a Python script which spawns itself over to all remote hosts that have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA" and it just works. I want to have this for scp.
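
    Absent uucp, ssh itself can chain the hops in a single invocation, since stdout is relayed back through the chain, and scp's -o option needs no config changes. Two hedged sketches, assuming agent forwarding as stated (the second form also assumes nc is available on the middle hop):

      # stream the file back through the chain
      ssh -A host1 ssh -A host2 'cat /myfile' > myfile

      # or keep scp semantics by tunnelling the connection through the hops
      scp -o 'ProxyCommand=ssh -A host1 ssh -A host2 nc %h %p' host3:/myfile .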

    Read the article

  • How do I PXE boot only when I want to, without user intervention?

    - by troz123
    I would like to set up a PXE environment where I can re-image machines remotely without any user intervention. The only problem is that when the re-imaging is completed, it will do the re-imaging again and again and again. If I remove the MAC address file, I just get an error saying it can't find the MAC address file, and the system stops. I also tried turning off the TFTP server, and I get an error stating it can't find the TFTP server. How can I make client machines PXE boot only once, so that after the re-imaging they boot into Windows and everything is happy - and only PXE boot when I want them to? I'm using TFTPD32 to serve the files, and a Windows 2003 DHCP server that points to pxelinux.0.
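
    The standard pxelinux pattern for this is a default config that boots the local disk, plus a per-MAC config that is created only when a machine should be reimaged and deleted by the imaging job afterwards; a machine with no per-MAC file then falls straight through to Windows. A sketch of pxelinux.cfg/default:

      DEFAULT local
      PROMPT 0
      TIMEOUT 10
      LABEL local
          LOCALBOOT 0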

    Read the article

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OSX machine on an internal network: $~ cat /etc/resolv.conf nameserver 10.102.120.7 nameserver 10.102.120.2 From the same machine: $~ dig @10.102.120.7 in.local <snip> ... ;; QUESTION SECTION: ;in.local. IN A ;; ANSWER SECTION: in.local. 43200 IN A 10.102.123.30 <snip> ... And yet, this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up. (The 2 OSX machines I know of fail to resolve in.local - but other machines on the network can.) I have also checked their /etc/hosts to see if anything there might interfere. Not sure what else to check...
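
    A plausible culprit worth checking: dig talks straight to the listed nameserver and bypasses the OS X resolver, which hands the .local TLD to Bonjour/mDNS - so unicast names under .local can fail even when dig succeeds. Two diagnostic commands that exercise the system resolver path instead:

      # how the resolver routes queries (look for an mdns entry claiming "local")
      scutil --dns
      # resolve through the system resolver rather than straight DNS
      dscacheutil -q host -a name in.local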

    Read the article

  • MS Windows issue - "Filename or extension is too long"

    - by Daniel
    I run Microsoft Windows on a few of my machines. I don't know if many people know about this issue in the OS, but you can't have very long filenames; from what I know, Linux can have longer names, and I have never run into this issue on my Linux machines. Anyway, I run into issues whenever I copy folders and files to backup drives. I back up my data manually, finding and renaming files that are too long, which is very tedious. Is there a software tool that shortens folder or file names found to be too long on Windows? I have drive-image duplication software, which does the job but in a way that I don't like; plus, moving files can become a hassle at times if the names are too long to copy.
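
    Background that may help: the classic Win32 MAX_PATH limit is 260 characters for the full path, so it is usually the folder depth rather than a single name that breaks a copy. robocopy (built into Vista and later, or from the Server 2003 Resource Kit) copies past that limit without renaming anything; a hedged sketch with placeholder paths:

      # mirror a tree to a backup drive, tolerating long paths
      robocopy D:\data E:\backup /E /R:1 /W:1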

    Read the article

  • What hardware factors may be considered bottlenecks on a Hyper-V virtual server during load testing?

    - by sean
    Our organization is load testing our application using virtual servers via Hyper-V, to see what user load a single-box setup on fair equipment can handle. The developer group questioned the validity of the tests, given the normal use of the box by the other virtual machines. The IT admins answered that it is an acceptable platform to load test on, because the VM has its own CPUs, memory and disks allocated. Is their answer mostly correct? What hardware factors may be considered bottlenecks, given the other virtual machines, when testing our application? For example, would bus speed be a concern, or network IO? The application consists of a Windows service written using the .NET Framework 4.0, and SQL Server 2008 R2.
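
    One concrete way to settle the argument: CPU counters measured inside a guest are unreliable, so measure on the Hyper-V host with the hypervisor's own counters during a test run. A sketch, assuming the counter name as commonly documented for Server 2008 R2:

      typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -si 5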

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here; I've asked why ["it just didn't work"], and I don't have enough time to make it work before the window that's already scheduled. The usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share, and then yum update /mnt/cifs_share/*.rpm, which just does not look right to me, since not all of these machines have the same set of installed packages. The method I tried today was mounting the share at /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/, which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
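
    yum deletes downloaded packages after a successful transaction unless keepcache is set, which is why the mounted-cache trick unravelled. A hedged sketch of /etc/yum.conf for this kludge (the cache path is a placeholder; note that many machines writing into one shared cache directory concurrently may collide):

      [main]
      keepcache=1
      cachedir=/mnt/cifs_share/yum-cache/$basearch/$releasever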

    Read the article

  • slow disk writes between host and guest

    - by Jure1873
    I've got Ubuntu (server kernel) on an AMD X4 with 4 GB RAM and 2x Seagate SATA 1 TB disks, for testing virtual machines, and the write performance is very slow. The two disks are in a software RAID1 array, with one small ext3 boot partition, a 10 GB system partition, and the rest an XFS partition (about 980 GB) for data (virtual machines). If I copy files from a virtual machine to the host with rsync or scp, the copy frequently stalls or goes at about 1 MB/s. What's wrong? I've tried disabling barriers on XFS and increasing logbufs and allocsize, but it seems nothing helps. The strange thing is that await (for example during copying) for sda is usually under 100, while for sdb it is around 400. Any ideas on what could be wrong, or what I could do to improve this setup?
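
    Given that await on sdb is four times that of sda, the two mirror members behave very differently, which points at a drive or rebuild problem rather than an XFS tuning problem. A few diagnostic commands worth running first (smartctl assumes smartmontools is installed):

      cat /proc/mdstat              # is the array degraded or resyncing?
      iostat -x sda sdb 1           # compare await/%util between the members
      smartctl -a /dev/sdb          # look for reallocated or pending sectors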

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to terminal server

    - by imagodei
    SITUATION: A larger company acquires a smaller one. The IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). The mail solution is a hosted Exchange. There are approx. 35 desktops and laptops in the company. IT infrastructure unification: there are 2 merging proposals. 1.) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location in the larger company. Similarly with the backup - it remains local, and if needed it should be replicated to a centralized location. Licensing is managed by the smaller company. 2.) All servers are moved to a centralized location in the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a terminal server at the same central location; Citrix solutions will be used. Only the router and a site-to-site VPN connection remain at the smaller company. A backup internet line to ensure near-100% availability is needed. Licensing is mainly managed by the larger company; only specialized software for PCs that will not be virtualized is managed by the smaller company. I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable, and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • Windows Task Scheduler

    - by Zulakis
    I am trying to deploy an auto-starting program with Administrator privileges on our XP SP1 machines. For this, I am using the Windows Task Scheduler. Since most of our machines are deployed using a PXE imaging system, the task fails because the Administrator user entered is, for example, r126\Administrator. If I only enter Administrator, it automatically changes to machinename\Administrator. Since the machine names are automatically changed by the imaging system, the tasks fail to run. Any ideas on how to fix that?
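
    One way to sidestep the machine-qualified account name entirely is to run the task as the local SYSTEM account, which needs no password and is unaffected by renaming or reimaging. A hedged sketch (the task name and program path are placeholders):

      schtasks /create /tn "AutoStartApp" /tr "C:\path\to\app.exe" /sc onstart /ru SYSTEM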

    Read the article

  • NATing with a single-homed machine possible?

    - by Harry
    I have the following setup: a) a single-homed machine, A, that can see the internet; b) other machines B, C, and D that cannot see the internet; c) A, B, C, and D can see each other; d) all machines are running either RHEL 5.3 or Fedora 16. Question: is it possible to have B, C, and D share the internet connection of A somehow? Note, again, that machine A does not have a second NIC installed. (The solutions that I am finding on the net assume A is a dual-homed system!) Also, could you please recommend books or online resources with current and in-depth coverage of iptables, for people with only a basic knowledge of TCP/IP?
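
    In principle yes: NAT needs forwarding and masquerading on A, not a second NIC - B, C and D just have to use A as their default gateway on the shared LAN. A minimal sketch, assuming eth0 is A's single interface and 192.168.1.1 is A's address (both placeholders):

      # on A: enable forwarding and NAT everything leaving eth0
      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

      # on B, C and D: route non-local traffic via A
      ip route add default via 192.168.1.1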

    Read the article

  • windows printer spool gets stuck with file at 64kb from linux and mac

    - by Juan Diego
    Hi, I have two printers: one on a file server with Windows 2003, and the other on Windows XP. The thing is that when I try to print from my machine, my file stays in the queue forever; it says 64 KB out of whatever the size of the file I sent is. I have seen similar problems with some machines that run Mac OS X. The Windows machines apparently have no problems printing. They are not connected through Active Directory, just the network. In the past I have seen people install non-Microsoft print servers on Windows, but I don't remember the names of any of the programs. I have been googling a lot and have not found anything to replace the Microsoft print spooler service, but maybe I am mistaken. Every day I have to restart the print spooler service - I even created a .bat file for it. I am out of ideas here.

    Read the article

  • How to configure a static wildcard subdomain with dnsmasq.

    - by Prody
    I have a network behind a NAT with a few machines. The machines are: router (NAT, dnsmasq, forwarding, directly connected to the inet), server (which runs ssh, www and some other stuff), and clients (which do stuff on server). I also have mydomain.com. server.mydomain.com points to my connection's IP (a single IP), which is the router, which forwards ports to server. The server has an httpd running, which serves different sites based on vhosts, so I have site1.server.mydomain.com, site2, and so on. The problem is that all the traffic goes through the router, and when I check the logs I always see the router's IP for everything (so it's hard to see who is running the script with the while(1)). I would just ServerAlias site1.server.local, but most of the sites have a root URL saved somewhere, on top of which other URLs are built, so I can't do that. The solution for me would be telling dnsmasq somehow to answer *.mydomain.com with the server's IP. Is this possible somehow?
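
    dnsmasq's address directive does exactly this kind of wildcard mapping - it answers for a domain and everything under it. A sketch, with the server's LAN address as a placeholder:

      # answer server.mydomain.com and every name under it with the LAN address
      address=/server.mydomain.com/192.168.0.2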

    Read the article

  • Problems when looping over a series of ssh-ed commands

    - by Jack Medley
    I have a series of server machines on which I want to run the same command. Each command takes hours, and (even though I am running the commands using nohup and setting them to run in the background) I have to wait for each one to finish before the next starts. Here is roughly how I have set it up. On the host machine: for i in {1..9}; do ssh RemoteMachine${i} ./RunJobs.sh; done. Where RunJobs.sh on each remote machine is: source ~/.bash_profile; cd AriadneMatching; for file in FileDirectory/Input_*; do nohup ./Executable ${file} & done; exit. Does anyone know of a way such that I don't have to wait for each job to finish before the next starts? Or, alternatively, a better way of doing this? I have a feeling what I am doing is fairly sub-optimal. Cheers, Jack
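
    The likely cause of the waiting: ssh keeps the session open until the remote command's stdout and stderr are closed, and nohup alone does not detach them. Redirecting all three streams lets each ssh return immediately; a hedged tweak of the loop inside RunJobs.sh (the log path is an assumption):

      for file in FileDirectory/Input_*; do
          nohup ./Executable "${file}" > "${file}.log" 2>&1 < /dev/null &
      done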

    Read the article

  • cisco vpn client randomly disconnects with pfSense

    - by Andre
    My network has two gateways: one is a pfSense box that everyone uses; the other is a TP-Link firewall, essentially for tests. Some machines inside my network need to access a VPN through the Cisco VPN client. If one of those machines uses the pfSense box as its gateway, I experience random connection drops on the VPN. If I use the TP-Link gateway, that doesn't happen. I've tried changing the MTU on the pfSense box, which improved things a little but didn't really solve the problem. I also followed the guidelines for traffic shaping in pfSense, and the connections still drop quite often. Ideas?

    Read the article

  • Disable OS X Portable Home Directories for specific hosts for all users, not just individuals?

    - by Paul Nendick
    Would it be possible to block any and all Portable Home Directory services for specific hosts? Something like MCX's "MobileAccountNeverAsk-" but for the whole workstation? We have a network with both portable and stationary machines. I'd like our users to be able to use all machines, going portable on the MacBooks but not being bothered with syncing when logged into stationary iMacs or Mac Pros. The Open Directory servers are running Snow Leopard (for now) and all clients are running Lion. Thanks! Paul

    Read the article

  • Deactivate SYN flooding mechanism

    - by mlaug
    I am running a server with a service on port 59380. There are more than 1000 machines out there connecting to that service. Whenever I need to restart the service, all those machines reconnect at the same time. That caused some trouble, as I saw this entry in kern.log: TCP: Possible SYN flooding on port 59380. Sending cookies. Check SNMP counters. So I changed the sysctl net.ipv4.tcp_syncookies to 0, because the endpoints do not handle TCP SYN cookies correctly. Finally, I restarted my network to get the changes into production. The next time I had to restart the service, the following message was logged: TCP: Possible SYN flooding on port 59380. Dropping request. Check SNMP counters. How can I prevent the system from doing this? All necessary countermeasures are handled by iptables...
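
    Instead of disabling syncookies, the usual approach is to enlarge the SYN and accept backlogs so a reconnect stampede fits without triggering the flood heuristic. A hedged sysctl sketch (the values are illustrative; the service's own listen() backlog also caps the effect):

      # /etc/sysctl.conf - absorb reconnect bursts on restart
      net.ipv4.tcp_max_syn_backlog = 8192
      net.core.somaxconn = 4096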

    Read the article

  • Clustering with vSphere and Apache load balancing

    - by Anonymous2011
    I was looking at vSphere to automatically load balance my Apache web servers and MySQL servers, but will this actually do the job? I know it says it does automatic load balancing, but I'm not sure that means what I want to achieve. Is there an easier way to set up a PHP/Apache and MySQL cluster within virtual machines? Or are there any guides to clustering? (I have tried googling without much luck.) Any help towards understanding/setting up clustering and load balancing within virtual machines is appreciated.

    Read the article

  • overload environment

    - by Richo
    I've recently switched to keeping my home directory across all my machines in an svn repo, meaning that my utility scripts, configuration (irssi, vim, zsh, screen etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, where needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared with my default environment from within an X session, before any of my personal configuration, I have not even increased it by 50% - but some of the machines I work on are low-resource. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternatively writing a single module for storing config that all of my utilities can inherit.
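
    For comparison, one low-overhead middle ground is to keep .profile small, source the site override only when present, and pull heavier per-tool config in on demand rather than exporting everything up front. A sketch (the file names are placeholders):

      # in .profile: site overrides stay optional and external
      [ -f "$HOME/.profile.local" ] && . "$HOME/.profile.local"

      # lazy-load tool-specific settings only when a utility needs them
      load_cfg() { [ -f "$HOME/.cfg/$1" ] && . "$HOME/.cfg/$1"; }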

    Read the article

  • How Do I Enable CUPS Browsing Across A Network?

    - by David Mackintosh
    I have a CUPS server with two print queues defined. Once these were defined, all the CUPS clients on the same subnet could see the two print queues automatically, no problem. Now I have a collection of machines on a separate subnet, reachable from the first subnet via a router. How do I enable CUPS browsing on the second set of machines so that they can see the print queues defined on the first machine? Let's call the server A.B.C.7. The first subnet is A.B.C.0/24. The second subnet is A.B.D.0/24, and there is a router with arms on both networks.
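
    CUPS browse announcements are UDP broadcasts, which the router will not forward between the subnets. With the CUPS 1.x directives, either the server targets the remote subnet's broadcast address, or (more reliably, since directed broadcasts are often blocked) each remote client polls the server. A sketch for cupsd.conf using the question's placeholder addresses:

      # on the server A.B.C.7: push browse info toward the second subnet
      BrowseAddress A.B.D.255

      # or on each A.B.D.0/24 client: poll the server directly
      BrowsePoll A.B.C.7:631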

    Read the article

  • Can I use dissimilar hardware for Win2008R2 DFS-R?

    - by cwheeler33
    The setup: Windows 2008 R2 Enterprise on two machines. The roles on each server will include file server and DC. The machines come from two different vendors (Dell/HP); the Dell has an Athlon and the HP an Intel. Both have roughly the same CPU speed and 8 GB of RAM. They have different RAID controllers, and more or less the same amount of disk space (roughly 6 TB). Can the servers use different types of hardware? Is there any documentation about this? My last question is about the network: can DFS-R be forced to use a different subnet from the regular network?

    Read the article
