Search Results

Search found 6852 results on 275 pages for 'ascension systems'.

Page 219/275

  • Network Security Device/Software

    - by Campo
    We currently run Symantec Antivirus Corporate 10.2. The software is really easy to manage on a network, and the virus detection isn't bad, but the malware detection is crap. We were recently infected with an email bot that got us put on some block lists; that has been resolved, but I cannot let it happen again. I would like to find a program as easy to manage as Symantec that I can install on all the users' workstations as well as the servers. We run a Windows 2003 domain and have a couple of 2008 test servers in the environment. Most of the workstations are XP, though I am using Windows 7, and Symantec is not compatible with that OS... so we need a solution that covers all those operating systems. If it could be installed on Macs too, that would be a bonus, though not necessary at all. This software must detect both viruses and malware. I am looking for something that combines the features of anti-malware programs like Malwarebytes or Spybot with an antivirus program like Symantec or AVG. Alternatively, if there is a piece of hardware that acts as a firewall, router, and packet inspector for viruses/spam, that would be the most ideal solution; I could then supplement it with a piece of software to pick up what the hardware misses. Thank you for your suggestions.


  • How do I configure VMware View location-based printing to use Active Directory Groups?

    - by Jason Pearce
    I am attempting to configure VMware View 4.5's Location-Based Printing, which leverages an included OEM version of ThinPrint, to assign printers to Active Directory groups. The location-based printing feature maps printers that are physically near client systems to VMware View desktops. I am using the Active Directory group policy setting "AutoConnect Location-based Printing for VMware View", which is located in the Microsoft Group Policy Object Editor in the Software Settings folder under Computer Configuration. The AutoConnect setting appears to be just a name translation table. It permits me to assign a specific printer or printers to an IP range, client name, MAC address, user, or user group. I'm attempting to assign printers to Active Directory user groups. I have created a new Active Directory group for each printer that I intend to use in VMware View desktop pools, and I will then assign Active Directory users to the groups that represent each network printer. Example: doej is a member of the PTR-FLOOR2-NORTH-ROOM255 Active Directory group. Using AutoConnect, I assigned the group to receive a network printer by adding PTR-FLOOR2-NORTH-ROOM255 in the User/Group column. Problem: when doej logs in to his VDI session, the printer is not present. However, if I use a wildcard "*" in the User/Group column instead of the specific PTR-FLOOR2-NORTH-ROOM255 group, the printer is present and functions as designed. Alternatives: I have tried assigning printers to Active Directory groups within AutoConnect in the following ways, all unsuccessful: PTR-FLOOR2-NORTH-ROOM255, domainexample\PTR-FLOOR2-NORTH-ROOM255, domainexample.local\PTR-FLOOR2-NORTH-ROOM255. Confirmation: the information used to map the printer to the VMware View desktop is stored in a registry entry on the View desktop under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\thinprint\tpautoconnect. For each of these examples, I have reviewed the registry entry and can confirm that the desktop is receiving the information from the AutoConnect translation table. Summary: can anyone provide an example of how to configure VMware View 4.5's Location-Based Printing so that I may assign network printers to Active Directory groups via the included AutoConnect tool? I would welcome a clear example of a working configuration. Thank you.
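
    For what it's worth, the registry check described under "Confirmation" can be scripted, which makes it easy to compare what different AutoConnect entries actually deliver to the desktop. The snippet below is just that check automated, not a fix; it is a sketch that assumes Python is available inside the View desktop and uses only the standard-library winreg module (the key path is the one quoted above; on some builds it may sit under a Wow6432Node variant instead).

        import winreg

        KEY_PATH = r"SOFTWARE\Policies\thinprint\tpautoconnect"

        # Dump whatever values ThinPrint AutoConnect wrote for this session, so the
        # result of a group-based entry can be compared against the "*" wildcard.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values under the key
                print(f"{name} = {value!r}")
                index += 1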


  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people. Thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but with manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, with an installation deadline of "as soon as possible"; but then I also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). Also, I've configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as the new updates are processed by the ADRs, they get automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving like it should?


  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, db server) to remote Windows systems, and I don't know what toolchain would be the equivalent of ssh, scp, bash and the like. So, basically, what I need to be able to do is the following:
    - access the remote Windows machine with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or graphical interface on Windows)
    - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
    - copy files to the remote machine (the application we're deploying)
    - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
    - automate tasks like downloading files from a designated place
    The main question is probably how I get onto the machine securely in the first place; the rest is general Windows admin knowledge, which is probably too broad a scope to fit into one question. I have years of experience maintaining Linux boxes and have used tools of varying sophistication on them, ranging from plain scp-ing of PHP files to deployment of Java application containers and even full VMs with Vagrant. On Windows, I'm a complete noob, and I don't even know where to start. I have installed Apache, MySQL and PHP on a desktop machine maybe twice in my life, and that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.
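
    On the "how do I get onto the machine securely" point, the closest scripted analogue to ssh on Windows is WinRM (Windows Remote Management), which also has clients that run from Linux. Purely as a sketch of the idea, and assuming WinRM has been enabled on the target and that the third-party pywinrm package is installed on the Linux side, something along these lines runs remote commands; the hostname and credentials are placeholders:

        import winrm  # third-party: pip install pywinrm

        # Connect over WinRM (HTTP on port 5985 by default; an HTTPS listener on
        # 5986 is the option to prefer for anything crossing a real network).
        session = winrm.Session('win-box.example.com', auth=('Administrator', 'secret'))

        # Run an ordinary cmd.exe command on the remote machine...
        result = session.run_cmd('ipconfig', ['/all'])
        print(result.status_code)
        print(result.std_out.decode('utf-8', errors='replace'))

        # ...or a PowerShell snippet, which is where installs and scheduled
        # tasks would normally be scripted.
        ps = session.run_ps('Get-Service W3SVC | Select-Object Status, Name')
        print(ps.std_out.decode('utf-8', errors='replace'))

    For interactive (graphical) access the usual answer is RDP instead, and file copies can ride over the same WinRM/PowerShell channel or an SMB share; none of that is shown here.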


  • How can I monitor VNC via Nagios?

    - by atroon
    I have a number of remote sites which have VNC running on a few computers for support purposes. They are (obviously) only available on our internal network. I am using Nagios to keep track of all the systems in the network, and I want to have it check to make sure the VNC server is running on the appropriate hosts. There is a 'check_vnc' plugin available here, but it relies on VNC Snapshot, which I don't want to use. Certainly I could use it, but it adds more complexity and dependencies, which I want to avoid. It seems simpler to just use check_tcp to make sure I get the proper response to a connection request for VNC: e.g. connect to port 5900, send a connect string, and get back framebuffer info. My real question, I suppose, is this: what is the 'proper' generic connect string for VNC (I use both UltraVNC and RealVNC), and what is the expected response? If it's really easier to use VNC Snapshot and check_vnc, let me know. I just can't imagine that a string of text isn't easier, faster, and less bandwidth-intensive to monitor.
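
    For reference, this is the detail the check hinges on: in the RFB protocol that both RealVNC and UltraVNC speak, the server talks first, sending a 12-byte ProtocolVersion banner such as "RFB 003.008\n" as soon as the TCP connection opens, so an expect-only check with no send string should be enough (something along the lines of check_tcp -H host -p 5900 -e "RFB"). A rough equivalent of that probe in Python, with the host as a placeholder, looks like this:

        import socket

        HOST, PORT = "vnc-host.example.com", 5900   # placeholder target

        # RFB/VNC servers speak first: a 12-byte version banner like
        # b"RFB 003.008\n" arrives right after the TCP handshake.
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            banner = sock.recv(12)

        if banner.startswith(b"RFB "):
            print("OK - VNC banner:", banner)
        else:
            print("CRITICAL - unexpected response:", banner)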


  • How to manage unprivileged administration of system services using Debian?

    - by ypnos
    At our lab, we have several services handled by different PhD students (like myself). Turnover is high, and people do this work alongside their research duties. Until now, services were running on different machines with different OS setups, which can turn into administration hell quickly. We want to consolidate our service setup. Our main idea is that the people responsible for the services should not meddle with the underlying system any more. Apart from core systems like NFS and Kerberos, a typical service is able to run as non-root already; I'm talking about Apache, MySQL, Subversion, mail with Open-Xchange, and so on. Redirecting privileged ports is also no issue (source). What is left is the configuration of the service and its payload. One scenario we envisioned is that every service gets its own user and home directory, accessible by the corresponding admins. Backup and fallback of the service is easy, as everything the service needs to run is found in one place. Are there established ways to create such a setup? Is there a reasonably uniform way to make services find their files (other than in the system directories) while still using the corresponding Debian packages? Are there any catches in our idea that we may have overlooked? Would you maybe claim that virtualization is the answer to our problem? (From our point of view, it wouldn't help us keep the system setup strictly separated from the service setup.) Thank you for any advice!


  • which virtualization technology is right for me?

    - by Chris
    I need a little help getting this sorted out. I want to set up a Linux virtual server that I can use to run both server and desktop systems. I want a Linux host that is minimalist in nature, as all the main OS will be doing is acting as a hypervisor. The system I'm trying to set up will be running a file server, Windows 7, Ubuntu 10.04, Windows XP and a firewall/gateway security system. All the client OSes will access and store files on the file server, and all network traffic will be routed through the gateway guest OS. The file server will need direct disk access, while the other guests can run on disk images. All of this will be running on the same computer, so I won't be remoting in to access the guest OSes. Also, if possible, I would like to be able to use my triple-head setup in the guest OSes. I've looked at Xen, KVM and VirtualBox, but I don't know which is the best for me. I'm really debating between KVM and VirtualBox, as KVM seems to support direct hardware access.


  • Installing 64-bit Ubuntu Server 12.04 LTS, on a VM with VMWare Player, on a 64-bit Windows 7 PC

    - by WannaBeAGeek
    I'm trying to create a VM, using VMware Player, with an ISO image of Ubuntu Server 12.04 (LTS). The machine I'm doing the installation on has an Intel(R) Core(TM) i5 CPU and runs 64-bit Windows 7. I managed to create the VM (gave username, password, configured network etc.), but I can't install Ubuntu Server. First I get this alert:

        Binary translation is incompatible with long mode on this platform. Disabling long mode. Without long mode support, the virtual machine will not be able to run 64-bit code. For more details see http://vmware.com/info?id=152.

    When I click OK, I get another alert:

        This virtual machine is configured for 64-bit guest operating systems. However, 64-bit operation is not possible. This host supports Intel VT-x, but Intel VT-x is disabled. Intel VT-x might be disabled if it has been disabled in the BIOS/firmware settings or the host has not been power-cycled since changing this setting. (1) Verify that the BIOS/firmware settings enable Intel VT-x and disable 'trusted execution.' (2) Power-cycle the host if either of these BIOS/firmware settings have been changed. (3) Power-cycle the host if you have not done so since installing VMware Player. (4) Update the host's BIOS/firmware to the latest version. For more detailed information, see http://vmware.com/info?id=152.

    Then, when I click OK, my VM exits, and I get back to the VMware Player home screen. I don't know much about hardware and virtualisation, so there might be some necessary info I'm not giving. Please don't hesitate to let me know what is missing in my post, for finding solutions. Thanks :)


  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS, XtreemFS. The key properties that our system should exhibit:
    - open source and liberally licensed, yet production ready, i.e. a mature, reliable, community- and commercially-supported solution;
    - ability to run on commodity hardware, preferably designed for it;
    - high availability of the data, with the main focus on reads;
    - high scalability, so operation over multiple data centres, possibly on a global scale;
    - removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.
    The sensitivity points that were identified, and resulted in the following questions, are:
    - transparency to the processing layer/application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast processing: how can this high performance be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    - POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here is mainly: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
    - what about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferred for reliability, it also imposes certain limitations with respect to scalability. What would be, in your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren


  • Ubuntu 12.10 64bit host reboots when trying to install any guest system using VirtualBox

    - by gts123
    I am having a really nasty problem with VirtualBox: every time I try to install any guest OS (using an ISO file as the CD installation media), the installation starts normally, but as soon as it is about to start installing to the virtual hard drive or loading (e.g. as a live filesystem), it causes the host system to reboot abruptly. The configuration is as below.
    Host system: Ubuntu 12.10 64bit - Intel® Core i7-2640M CPU @ 2.80GHz × 4
    VirtualBox version: 4.1.18_Ubuntu r78361
    Guest OS systems tried: 32bit versions of FreeBSD 9, Debian 6, Tails 0.14
    VM setup: I tried to keep each VM's configuration as minimal as possible in the hope of avoiding conflicts, and tried different values and combinations of the settings below, but the problem still persists:
    - Shared Clipboard: Disabled
    - Show in fullscreen/seamless: Disabled
    - Remember runtime changes: Disabled
    - Base Memory: 2048 MB
    - Chipset: PIIX3
    - IO APIC: Disabled
    - EFI: Disabled
    - Absolute pointing device: Disabled
    - Processor(s): 1 CPU
    - PAE/NX: Disabled
    - VT-x/AMD-V: Disabled
    - Video Memory: 12 MB
    - 3D/2D acceleration: Disabled
    - Storage IDE controller: PIIX3 (same as chipset, instead of PIIX4)
    - Use host I/O cache: No
    - Audio: Disabled
    - Network adapter: NAT
    - USB controller: Disabled
    - No shared folders
    Another side effect of the reboot is that it appears not to log any information in the error log files, which doesn't make things any easier. Please help.


  • Anonymous Login attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks; somehow they managed to get access to a Windows account called 'services'. I took that server offline, as it was our SMTP server and we no longer need it (a 3rd-party system is in place now). Now some of our other servers are getting ANONYMOUS LOGON attempts in the Event Viewer from IP addresses in China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want, but they just keep hitting the servers. How can I prevent this? I don't want our servers compromised again; last time our host took our entire hardware node off the network because it was attacking other systems, causing our services to go down, which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were its own server, of course...
    UPDATE: New login attempts are still happening, now tracing back to Ukraine... WTF. Here is the event:

        Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB4FEB30C)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: REANIMAT-328817
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 94.179.189.117
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Here is one from France I found too:

        Event Type: Success Audit
        Event Source: Security
        Event Category: Logon/Logoff
        Event ID: 540
        Date: 1/20/2011
        Time: 11:09:50 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: QA
        Description: Successful Network Logon:
        User Name:
        Domain:
        Logon ID: (0x0,0xB35D8539)
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: COMPUTER
        Logon GUID: -
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: 82.238.39.154
        Source Port: 0
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


  • What is going on when I can't access an SMB server share (not accessible error) until I run cmdkey to delete the credential?

    - by Warren P
    I have a network share connection issue. The first connection works, and seems to stay connected for at least a few hours. However, after each time my Windows 7 PC reboots, it can no longer form a network connection to the shared folder, nor browse to it, until I not only unmap and remap the mapped drive, but also use cmdkey to delete the stored credentials, like this: cmdkey /delete:Domain:target=HOSTNAME. My work PC is on a domain, and I am not the IT administrator, but I'm curious whether there is anything I can do to investigate this issue. Are there any settings in the registry or group policy that I could examine to see why the first connection works, but each subsequent attempt (once a stored credential exists) to browse or use the connection fails with a connection error saying it is "not accessible"? I do not even get any error until at least several minutes go by; the first thing I see is a window frozen and empty, and then I get that error. This has happened when connecting to a share on a DROBO device, and on a share which is not on the domain but which was a Microsoft Home Server. I wonder if there's something broken in Windows 7 Professional with regard to connecting to non-domain shares when an Active Directory domain controller exists and the workstation is joined to a domain? The problem only occurs if I click "remember credentials". It is not fixed by any amount of working with net use; using cmdkey to delete all stored credentials for the host is the only way to get back in, and it affects all non-domain shared folders.
    Update: I'm hoping there are some registry locations I could check that might be misconfigured in a way that explains why SMB/CIFS stored credentials for non-domain systems seem to be auto-invalidated in this weird way. Knowing how whacko Microsoft Windows domain and security handling is sometimes, this could be some kind of stupid "feature".
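
    Not an answer to the underlying question, but since the workaround is always the same few commands, it can at least be scripted so it runs in one go after a reboot. A minimal sketch, where the host name, share path and drive letter are placeholders; it just wraps the same cmdkey and net use commands mentioned above:

        import subprocess

        HOST = "HOSTNAME"                 # placeholder: server holding the share
        SHARE = r"\\HOSTNAME\shared"      # placeholder UNC path
        DRIVE = "Z:"

        # Drop the stale stored credential that blocks reconnection after a reboot.
        subprocess.run(["cmdkey", f"/delete:Domain:target={HOST}"], check=False)

        # Unmap and remap the drive; /persistent:no avoids re-storing a credential
        # that would go stale again at the next reboot.
        subprocess.run(["net", "use", DRIVE, "/delete"], check=False)
        subprocess.run(["net", "use", DRIVE, SHARE, "/persistent:no"], check=True)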


  • why is Mac OSX Lion losing login/network credentials?

    - by Larry Kyrala
    (Moved from Stack Overflow.)
    Symptoms: At work we have OS X 10.7.3 installed, and every once in a while I will see the following behaviours: 1) if the screen is locked, then multiple tries of the same user/pass are not accepted; 2) if the screen is unlocked, then opening a new bash terminal may yield prompts such as `I have no name$`, or:
        lkyrala$ ssh lkyrala@ah-lkyrala2u
        You don't exist, go away!
    Even when our Macs are working normally, everyone here has to log in twice. The first time after boot always fails, but the second time (with the same password, not changing anything, just pressing enter again) succeeds. Weird?
    Workarounds: There are some workarounds that resolve the immediate problem, but don't prevent it from happening again: a) wait (maybe an hour or two) and the problems sometimes go away by themselves; b) kill 'opendirectoryd' and let it restart (from https://discussions.apple.com/thread/3663559); c) hold the power button to reset the computer.
    Discussion: The evidence above points me to something screwy with Open Directory and login credentials. Some other people report having these login problems, but it's hard to determine where the actual problem is (Mac, or network environment?). I should add that most of the network is Windows machines, but we have quite a few Macs and Linux machines as well, and I'm not sure of the details of how the network auth is mapped from various domains to others... all I know is that our network credentials work in Windows domains as well as Mac and Linux logins -- so something is connecting the separate systems, or they are using the same global auth system.


  • System occasionally hangs boot process with SLES 11

    - by ThaMe90
    I have several (new) systems on which I had to install SLES 11. However, after a few reboots (though not every one), the system hangs during the boot sequence, and it will only continue after I physically press a key on the keyboard. Here is what I found in the dmesg log from a failed boot:

        [ 22.170276] sd 0:0:0:0: [sda] Mode Sense: b7 00 00 08
        [ 22.171155] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 22.182760] sda: sda1 sda2 sda3
        [ 22.383424] sd 0:0:0:0: [sda] Attached SCSI disk
        [ 22.545372] PM: Marking nosave pages: 000000000009a000 - 0000000000100000
        [ 22.545377] PM: Marking nosave pages: 00000000bf780000 - 0000000100000000
        [ 22.546217] PM: Basic memory bitmaps created
        [ 22.590380] PM: Basic memory bitmaps freed
        [ 22.596284] PM: Starting manual resume from disk
        [ 22.602319] PM: Resume from partition 8:1
        [ 22.602321] PM: Checking hibernation image.
        [ 22.602479] PM: Error -22 checking image file
        [ 22.602481] PM: Resume from disk failed.
        [ 22.718727] kjournald starting. Commit interval 15 seconds
        [ 22.718960] EXT3-fs (sda3): using internal journal
        [ 22.718964] EXT3-fs (sda3): mounted filesystem with ordered data mode
        [ 1555.644404] udevd version 128 started
        [ 1555.697664] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [ 1555.707961] ACPI: Power Button [PWRB]

    I've looked around the internet for the "PM: Resume from disk failed." message, but this only seems to matter when restoring the system after a hibernate, i.e. a resume from the hard disk, and that is not my situation; I only get this after a reboot, as I said before. The timestamp [ 1555.xxxxxx] is only the result of me pressing a key on the keyboard. Any suggestions on how to proceed? I am stuck on this issue.


  • How to monitor bandwidth use of each device on wifi network

    - by GWLlosa
    I have in my home a standard Comcast cable internet connection. It goes from the wall to a cable modem, and from the modem to a late-series Linksys router, which provides wired and wireless networking. The vast majority of the users are wireless connections. For day-to-day tasks, this connection is fully sufficient for all my needs. However, on regular occasions, we have social gatherings that involve many people bringing laptops and other PCs and using the network and internet simultaneously, frequently for gaming. I have no administrative oversight over these machines; they have been known to be riddled with spyware and/or bloatware, or to be running torrents, legal or otherwise. The only reason I care is that on a regular basis, one of the machines will flatline my internet bandwidth and consume it all in order to upload/download/spam people/whatever. When this happens, the latency of the connections for gaming and the like becomes unacceptable, and everyone suffers. My question is: is there a system I can set up whereby I can easily monitor the various systems connected to my wireless connection, see how much bandwidth each one is using, and for what ends? That way, at a glance, I can spot the offending machine and kick it from the connection, without having to go from machine to machine, checking each one's "bandwidth used" properties manually and dealing with the owner's indignant protests all the while. I understand this will likely involve 3rd-party software and/or hardware; my issue is I don't even know where to begin.
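
    One thing worth knowing up front: per-device accounting only works from a vantage point that actually sees the traffic, which on a consumer Linksys usually means either replacement firmware with per-IP counters on the router itself (DD-WRT/Tomato-style) or a PC positioned as the gateway or on a mirrored port. Purely to illustrate the idea of counting bytes per device, here is a rough sketch using the third-party scapy library from such a vantage point; the interface name is a placeholder and it needs root:

        from collections import Counter
        from scapy.all import Ether, sniff   # third-party: pip install scapy

        bytes_per_mac = Counter()

        def account(pkt):
            # Attribute each frame's size to the source MAC address.
            if Ether in pkt:
                bytes_per_mac[pkt[Ether].src] += len(pkt)

        # Sample 60 seconds of traffic on the LAN-facing interface.
        sniff(iface="eth0", prn=account, store=False, timeout=60)

        for mac, nbytes in bytes_per_mac.most_common(10):
            print(f"{mac}  {nbytes / 60 / 1024:8.1f} KiB/s average")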


  • I need a reverse proxy solution for SSH

    - by Bond
    Hi, here is the situation: I have a server in a corporate data center for a project. I have SSH access to this machine on port 22. There are some virtual machines running on this server, and behind everything many other operating systems are at work. Since I am behind the data center's firewall, my supervisor asked me whether I can do something to give many people on the internet access to these virtual machines directly. I know that if I were allowed to receive traffic on a port other than 22, I could do port forwarding, but since I am not allowed that, what can a solution be in this case? The people who would like to connect might be complete novices, who may be happy just opening PuTTY on their machines, or maybe even FileZilla. I have configured an Apache reverse proxy for redirecting internet traffic to the virtual machines on these hosts, but I am not clear on what I can do for SSH. So, is there something equivalent to an Apache reverse proxy that can do similar work for SSH in this situation? I do not have the firewall in my hands or any port other than 22 open, and in fact even if I request it, they won't open more. SSHing twice (once to the host, then again to a VM) is not something my supervisor wants.
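
    For context on what such a setup ends up doing under the hood: with only port 22 available, the usual pattern is to let sshd on the reachable host act as a jump host and carry each incoming session through to the right internal VM, which is what OpenSSH's ProxyCommand/-J options automate on the client side. A rough sketch of the same idea in Python with the third-party paramiko library (hostnames, addresses and credentials are placeholders):

        import paramiko

        GATEWAY = "datacenter-host.example.com"   # the only machine reachable on port 22
        INNER_VM = "10.0.0.12"                    # a VM behind it (placeholder address)

        # First hop: the publicly reachable server.
        gw = paramiko.SSHClient()
        gw.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        gw.connect(GATEWAY, port=22, username="user", password="secret")

        # Tunnel a TCP channel from the gateway to the inner VM's SSH port.
        channel = gw.get_transport().open_channel(
            "direct-tcpip", dest_addr=(INNER_VM, 22), src_addr=("127.0.0.1", 0)
        )

        # Second hop: an SSH session to the inner VM, carried over that channel.
        vm = paramiko.SSHClient()
        vm.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        vm.connect(INNER_VM, username="vmuser", password="vmsecret", sock=channel)

        _, stdout, _ = vm.exec_command("uname -a")
        print(stdout.read().decode())

    For non-technical users who will only ever open PuTTY or FileZilla, the same hop can usually be hidden in a saved session or a client-side proxy setting, so from their point of view they still connect to a single host name on port 22.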


  • Linux Software RAID1: How to boot after (physically) removing /dev/sda? (LVM, mdadm, Grub2)

    - by flight
    A server was set up with Debian 6.0/squeeze. During the squeeze installation, I configured the two 500GB SATA disks (/dev/sda and /dev/sdb) as a RAID1 (managed with mdadm). The RAID holds a 500 GB LVM volume group (vg0). In the volume group, there's a single logical volume (lv0). vg0-lv0 is formatted with ext3 and mounted as the root partition (no dedicated /boot partition). The system boots using GRUB2. In normal use, the system boots fine. Also, when I tried removing the second SATA drive (/dev/sdb) after a shutdown, the system came up without problems, and after reconnecting the drive, I was able to --re-add /dev/sdb1 to the RAID array. But: after removing the first SATA drive (/dev/sda), the system won't boot any more! A GRUB welcome message shows up for a second, then the system reboots. I tried to install GRUB2 manually on /dev/sdb ("grub-install /dev/sdb"), but that doesn't help. Apparently squeeze fails to set up GRUB2 to launch from the second disk when the first disk is removed, which seems to be quite an essential feature when running this kind of software RAID1, isn't it? At the moment, I'm lost as to whether this is a problem with GRUB2, with LVM or with the RAID setup. Any hints?


  • Which revision control system for single user

    - by G. Bach
    I'm looking to set up a revision control system with me as the single user. I'd like to have access (read and write) protected using SSL, little overhead, and preferably a simple setup. I'm looking to do this on my own server, so I don't want the option of registering with some professional provider of such a service (I like having direct control over my data; also, I'd like to know how to set up something like that). As far as I'm aware, the kind of project I want to put under revision control doesn't really matter, but just for completeness' sake: I'm planning on using this for Java projects, some HTML/CSS/PHP stuff, and in the future possibly as a synchronizing tool for small databases (ignore that last one if it doesn't fit the paradigm of revision control). My questions primarily arise from the fact that I've only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there, what fits better for which needs, etc. So far I've heard of Subversion, Git and Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and, if you know of any particularly useful ones, are there tutorials on setting up the system I should choose that you could recommend?


  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: a hosted machine (typically a VPS) serving a wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc. privately to fewer than 20 people. People not on the team are not allowed access. I'm seeking VPN-restricted access to said server. I have good user experience with OpenVPN-based servers/clients, but have yet to server-admin such systems; otherwise, I'm an experienced Linux sysadmin. Target system: Ubuntu, probably 12.04. I'm seeking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (I can easily acquire additional IP addresses as needed for this setup.) Option: if absolutely needed, I can employ an additional, dedicated "VPN server" VPS simply to be my VPN front end, but I prefer to have all server processes (the VPN server plus the other server apps) running on the same machine, if possible. I will consider it further if a dedicated-VPN-machine setup 1. enables easier installation/administration, 2. gives a better/easier end-user experience, and/or 3. makes the system significantly more secure. Is any of the above feasible? The main intention: create a VPN from purely hosted resources, and not spend all the effort to make a non-VPN, secured site, which typically means "SSL wrapping" plus all the continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing said security "down" into the other apps/Apache.


  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than four software devs setting up their own environments. Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, the android-sdk, etc. It's a giant PITA that, when repeated four times by four developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling, or the GUI-desktop development that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian with OpenBox and must allow a mix of 3rd-gen Core i7 notebooks with 8GB minimum to work both single- and multi-head. Importantly, the lappies are not the same, but a mix of 2012 MacBooks and PCs. So:
    - virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware, or annoying?
    - ghosting: will laptops from different manufacturers make this impractical?
    - DIY distro: short of scripting a bunch of package installs (see the sketch below), I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.
    Any advice?
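
    On the "scripting a bunch of package installs" route, the script itself is usually the small part; something along these lines (a sketch only, with a placeholder package list, assuming a Debian/apt base and root privileges) is where the in-house-image approach tends to start, before graduating to preseed files or configuration management:

        #!/usr/bin/env python3
        import subprocess

        # Placeholder package set for the team's standard development image.
        PACKAGES = [
            "openbox", "apache2", "mysql-server", "couchdb",
            "default-jdk", "maven", "python", "git", "vim",
        ]

        env = {"DEBIAN_FRONTEND": "noninteractive", "PATH": "/usr/sbin:/usr/bin:/sbin:/bin"}

        subprocess.run(["apt-get", "update"], check=True, env=env)
        subprocess.run(
            ["apt-get", "install", "-y", "--no-install-recommends"] + PACKAGES,
            check=True,
            env=env,
        )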


  • Doesn't VirtualBox 4.0 support drag-drop file copy yet?

    - by Benjamin
    Version 4.0.0 will be a new major release. The following major new features were added:
    - New settings/disk file layout for VM portability; see the manual for more information.
    - Open Virtualization Format Archive (OVA) support; see the manual for more information.
    - VMM: support more than 1.5/2 GB guest RAM on 32-bit hosts
    - Language bindings: uniform Java bindings for both local (COM/XPCOM) and remote (SOAP) invocation APIs
    - Chipset: added support for the Intel ICH9 chipset with 3 PCI buses, PCI Express and Message Signaled Interrupts (MSI)
    - Audio: Intel HD Audio is now available as guest hardware, for better support with modern guest operating systems (e.g. 64-bit Windows; bug #2785)
    - GUI: redesigned user interface with guest window preview
    - GUI: new display mode with downscaled guest display
    - Resource control: added support for limiting a VM's CPU time and IO bandwidth
    - Storage: support asynchronous I/O for iSCSI, VMDK, VHD and Parallels images
    - Storage: support for resizing VDI and VHD images
    - Windows Additions: support for automatically updating the Guest Additions (requires installed Windows Guest Additions 4.0 or later)
    - Guest Additions: support for copying files into the guest file system
    What does the last item mean? I thought this was a drag-and-drop file copy feature like VMware's, but I tried it and I couldn't copy by drag-and-drop or Ctrl-C/Ctrl-V either. Edit: I mean VirtualBox 4.0 beta, not 3.x. The release note is here. The download link is here.


  • How To Monitor Home Wireless Network Connected Devices Bandwidth

    - by GWLlosa
    (Originally posted on SuperUser; not sure if it might be better suited here.) I have in my home a standard Comcast cable internet connection. It goes from the wall to a cable modem, and from the modem to a late-series Linksys router, which provides wired and wireless networking. The vast majority of the users are wireless connections. For day-to-day tasks, this connection is fully sufficient for all my needs. However, on regular occasions, we have social gatherings that involve many people bringing laptops and other PCs and using the network and internet simultaneously, frequently for gaming. I have no administrative oversight over these machines; they have been known to be riddled with spyware and/or bloatware, or to be running torrents, legal or otherwise. The only reason I care is that on a regular basis, one of the machines will flatline my internet bandwidth and consume it all in order to upload/download/spam people/whatever. When this happens, the latency of the connections for gaming and the like becomes unacceptable, and everyone suffers. My question is: is there a system I can set up whereby I can easily monitor the various systems connected to my wireless connection, see how much bandwidth each one is using, and for what ends? That way, at a glance, I can spot the offending machine and kick it from the connection, without having to go from machine to machine, checking each one's "bandwidth used" properties manually and dealing with the owner's indignant protests all the while. I understand this will likely involve 3rd-party software and/or hardware; my issue is I don't even know where to begin.


  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance):
    - It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free).
    - It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC.)
    - It must work in all directions (changes made on /any/ machine should be pushed to the others).
    - All data sent must be encrypted (I have ssh keypairs set up already).
    - It must work even when not all machines are available (changes should be pushed to a machine when it comes back online).
    - It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted). This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted.
    - It must be immediate (using an event-based system, not a periodic one like cron).
    - It must be flexible (if more machines are added, I should be able to incorporate them easily).
    I also have some preferences that I would like to be fulfilled, but do not have to be:
    - It should notify me somehow if there are conflicts or other errors.
    - It should recognize symbolic and hard links and create corresponding ones.
    - It should allow me to create a list of exceptions (subdirectories which will not be synced at all).
    - It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding.
    If you have a solution that meets some, but not all, of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work:
    - rsync and lsyncd do not support bidirectional synchronization
    - csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs
    - DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X


  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question What is your oldest hardware that still works?, I'd like to ask: What is the oldest hardware you know that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)? Most organizations will retire / upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes, because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since software installation was undocumented, any significant system changes or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants. Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired


  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two; that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and sadly myself, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important and what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I would still need to bring my own highly trained monkey to the party, which I am several years away from having.
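
    No framework I know of ships the "suggested solutions" part out of the box, which is why most setups end up with a thin layer of glue between the syslog server and the mailer. Purely as an illustration of that glue (the log path, patterns and mail addresses are placeholders, and a real setup would map messages to runbook links rather than just counting them), a minimal sketch:

        import re
        import smtplib
        from collections import Counter
        from email.message import EmailMessage

        LOGFILE = "/var/log/syslog"            # placeholder path on the central server
        PATTERN = re.compile(r"\b(emerg|alert|crit|error|fail(?:ed|ure)?)\b", re.I)

        # Count each distinct message body so repeated problems collapse
        # into a single summary line with an occurrence count.
        problems = Counter()
        with open(LOGFILE, errors="replace") as fh:
            for line in fh:
                if PATTERN.search(line):
                    problems[line.split(": ", 1)[-1].strip()] += 1

        if problems:
            body = "\n".join(f"{n:5d}  {msg}" for msg, n in problems.most_common(50))
            mail = EmailMessage()
            mail["Subject"] = "syslog problem summary"
            mail["From"] = "syslog@example.com"     # placeholder addresses
            mail["To"] = "admins@example.com"
            mail.set_content(body)
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(mail)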

