Search Results

Search found 16894 results on 676 pages for 'private members'.

Page 518 of 676

  • Repeated installation of malicious software used for outbound DDoS attacks [duplicate]

    - by user224294
    This question already has an answer here: How do I deal with a compromised server? (12 answers)
    We have an Ubuntu Virtual Private Server hosted by a Canadian company. Our VPS was compromised and used to run an "outbound DDoS attack", as reported by the host's security team. There are 4 files in /boot that look like iptables files; note the capital letters "I" and "L":

      VPS:/boot# ls -lha
      total 1.8M
      drwx------  2 root root 4.0K Jun  3 09:25 .
      drwxr-xr-x 22 root root 4.0K Jun  3 09:25 ..
      -r----x--x  1 root root 1.1M Jun  3 09:25 .IptabLes
      -r----x--x  1 root root 706K Jun  3 09:23 .IptabLex
      -r----x--x  1 root root   33 Jun  3 09:25 IptabLes
      -r----x--x  1 root root   33 Jun  3 09:23 IptabLex

    We deleted them, but after a few hours they appeared again and the attack resumed. We deleted them again; they resurfaced again, and so on. Finally we had to disable our VPS. How can we find the malicious script on the VPS that automatically reinstalls this attacking software? Thanks.
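
    The linked duplicate's advice is to rebuild a compromised host rather than clean it, but for locating the persistence mechanism the question asks about, a minimal checklist might look like this (run as root; these are the usual hiding spots, not paths specific to this malware):

      ps auxf                                  # unfamiliar processes and their parents
      ls -la /etc/init.d/ /etc/rc*.d/          # init scripts the malware may have added
      crontab -l; ls -la /etc/cron*            # cron jobs that reinstall the files
      lsattr /boot/.IptabLes /boot/.IptabLex   # immutable bits that defeat deletion
      netstat -plant                           # which process holds the outbound flood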

    Read the article

  • ESX 3.5 refuses to update

    - by Speeddymon
    I have a set of ESX 3.5 servers in 2 different datacenters: one is DR, one is production. They are on the same VLAN, so I can reach any of them on the private network from my vCenter server. Last month, as a learning experience (I hadn't dealt with ESX much before), I updated the DR server. Other than finding out that a couple of bundles had to be installed manually in order to get the rest to install from vCenter, it went off without a hitch. Now I'm trying to do the same for our production servers and it is not working. I've googled the error I get during the scan and investigated loads of different solutions (editing the integrity file, checking DNS, etc.) -- and I did already install the 2 bundles that have to be installed manually -- but scanning from vCenter is just not working. Side note: I just scanned the DR server again and that scan works fine, so this shouldn't be a problem that recently cropped up in vCenter; it has to be something else. The error I get is:

      Patch metadata for (servername) missing. Please download updates metadata first.
      Failed to scan (servername) for updates.

    I'm all out of ideas on how to make this work, so any help would be hugely appreciated.

    Read the article

  • Prevent 'Run-time error 7: out of memory' in Excel when running a macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my Excel file. Is there any way I can prevent this? My code is below. Debugging highlights the following line as the issue:

      ActiveSheet.Shapes.SelectAll

    My macro:

      Private Sub Save()
          Dim sh As Worksheet
          ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet.
          Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook.
          sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date.
          With sh.UsedRange.Cells
              .Value = .Value 'eliminate all formulas
              .Validation.Delete 'remove all validation
              .FormatConditions.Delete 'remove all conditional formatting
              ActiveSheet.Buttons.Delete
              ActiveSheet.Shapes.SelectAll
              Selection.Delete
              lrow = Range("I" & Rows.Count).End(xlUp).Row 'find the last row containing data in column I
              Rows(lrow + 1 & ":" & Rows.Count).Delete 'delete rows with no data in column I
              Application.ScreenUpdating = False
              .Range("A410:XFD1048576").Delete Shift:=xlUp 'delete all cells outwith report range
              Application.ScreenUpdating = True
              Dim counter
              Dim nameCount
              nameCount = ActiveWorkbook.Names.Count
              counter = nameCount
              Do While counter > 0
                  ActiveWorkbook.Names(counter).Delete
                  counter = counter - 1
              Loop 'remove named ranges from workbook
          End With
          ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet.
          ActiveWorkbook.Close False 'Close the new workbook.
          MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Inform the user that the report has been saved.
      End Sub

    I'm not sure how to make this more efficient or how to prevent this error.
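
    Since Shapes.SelectAll followed by Selection.Delete is the flagged line, one hedged guess at a fix (not a confirmed one) is to delete the shapes individually, by index and in reverse, which skips the selection step entirely and does nothing when the sheet has no shapes:

      Dim i As Long
      For i = sh.Shapes.Count To 1 Step -1
          sh.Shapes(i).Delete 'delete each shape directly; no Select/Selection round-trip
      Next i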

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to safeguard my group's shared passwords, projects, work, etc. Our network has Active Directory with public/groups/users shares and NTFS permissions, under Windows Server 2003, which will soon migrate to Windows Server 2008 R2. Our IT crowd is small: 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content users have stored in the group shares, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 atm) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give the idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption has gone wild, so reporting them is likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to denounce them.
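
    For what it's worth, one "individual" protection along the lines asked about is NTFS EFS, sketched below. This is an assumption about the environment, not a confirmed fix: EFS only locks out local admins if no data recovery agent covering them is configured, and the user's certificate/key must be backed up or the files are lost with the profile.

      rem Encrypt a folder and everything under it for the current user only.
      cipher /e /s:D:\Projects\Shared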

    Read the article

  • Can't ping between machines on a home wireless network

    - by Naunidh
    Hello. This may look like every other ping problem, but I have tried a lot before posting it here. I have a Linksys WRT54G, firmware v8.00.8, and two laptops, one Windows Vista (192.168.1.99) and one Windows XP (192.168.1.13), connected over WiFi. The router's IP address is 192.168.1.4, and the default gateway is the ADSL modem (192.168.1.1), connected by wire. The problem is that the laptops cannot ping each other; they can ping the gateway and the Linksys router, and both can access the internet. The following has been tried (I am pinging from the XP machine to the Vista machine): I saw that ARP entries for the Vista machine were not being populated, so I added a static ARP entry:

      192.168.1.99 00-19-7e-70-d0-4e static

    I checked in Ethereal that an ICMP packet for the MAC address of the Vista machine does go out from the XP machine towards the Vista machine, but never reaches it. So it gets eaten by the router? I added the Vista machine to the DMZ in my Linksys router so that all ports are open (in case that was an issue). Firewalls, antivirus etc. were turned off, echo was enabled explicitly on Vista, file sharing and network discovery were turned on, and the network type was set to private. I unchecked everything in the router's firewall, even though those rules are only meant for WAN requests. Is there anything else that I should try? Thanks.
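
    Two notes that may help. First, on a WRT54G the "AP Isolation" setting (under the advanced wireless options) blocks exactly this kind of wireless-client-to-wireless-client traffic while leaving the gateway reachable, so it is worth checking. Second, for reference, the static-ARP step above corresponds to commands like these on the XP machine (the MAC address is the one from the question):

      rem add the static entry from a command prompt
      arp -s 192.168.1.99 00-19-7e-70-d0-4e
      rem confirm the entry is listed with type "static"
      arp -a
      ping 192.168.1.99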

    Read the article

  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php the request does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

      server {
          listen 80;
          server_name mydomainname.com;
          root /var/www/domain/current;
          # index index.php;
          error_log /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log main;

          location / {
              # Check if a file or directory index file exists, else route it to index.php.
              try_files $uri $uri/ /index.php;
          }

          location ~* \.php {
              fastcgi_pass backend;
              include fastcgi.conf;
              fastcgi_buffer_size 128k;
              fastcgi_buffers 4 256k;
              fastcgi_busy_buffers_size 256k;
              fastcgi_read_timeout 500;
              #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
              add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
              add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
              add_header Pragma no-cache;
              add_header X-Served-By $hostname;
          }

          location ~* ^.+\.(css|js)$ {
              expires 7d;
              add_header Pragma public;
              add_header Cache-Control "public";
          }

          # set expiration of assets to MAX for caching
          location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
              expires max;
              log_not_found on;
          }
      }

    I need my URLs to keep the index.php -- please help.
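
    Not the confirmed fix, but a hedged sketch of a PHP location block that keeps /index.php/controller/method URLs intact by splitting the script name from the trailing path info; the directives are stock nginx, and backend is the upstream from the config above:

      location ~ ^(.+\.php)(/.*)?$ {
          fastcgi_split_path_info ^(.+\.php)(/.*)$;
          include fastcgi_params;   # stock param file that does not set SCRIPT_FILENAME
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param PATH_INFO $fastcgi_path_info;
          fastcgi_pass backend;
      }

    With this, PHP executes index.php while the remainder of the URI is handed to CodeIgniter as PATH_INFO for routing.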

    Read the article

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    Hi, I have a Windows Server 2008 system with 12GB of RAM. If I list all processes in Task Manager and SUM() the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5GB that should be "in use". However, Task Manager reports via the "Performance" tab that this server has 11GB in use. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the applications, but I'm struggling to find out which one. The server's memory keeps filling up, eventually forcing us to reboot the machine to clear it. I've been reading up on how RAM assignments work on Windows Server: RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427 What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786 Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html But I fear I'm stuck without ideas at the moment.
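
    As a point of comparison, the per-process sum described above can be scripted; the gap between this number and the Performance tab is whatever lives outside process working sets (driver and nonpaged pool, mapped files, cache), which Sysinternals RAMMap can break down further. A hedged PowerShell sketch:

      # Sum working sets across all processes and print the total in GB.
      Get-Process | Measure-Object WorkingSet -Sum |
          ForEach-Object { "{0:N1} GB" -f ($_.Sum / 1GB) }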

    Read the article

  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:

      - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
      - Some unused disk space
      - Some friends in need of hosting

    The problem: I would now like to do the following:

      - Host one or several domains per friend
      - My friends should have full access to their domains, including running PHP scripts, for example
      - My friends should not be able to poke around in other directories
      - The security of my server should not be compromised by faulty PHP scripts

    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access. I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario? Partial solution: I already came up with the following plan:

      - Add chrooted SSH users for my friends
      - Add Apache vhosts per user (pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.)

    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
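
    On that final question: short of a real chroot, a per-vhost open_basedir line is one minimal way to confine each friend's PHP scripts to their own tree. A sketch assuming mod_php (the names and paths are the ones from the plan above):

      <VirtualHost *:80>
          ServerName example.com
          DocumentRoot /home/alice/example.com
          # PHP in this vhost may only touch files under these paths
          php_admin_value open_basedir "/home/alice/example.com/:/tmp/"
      </VirtualHost>

    Going further, suEXEC/suPHP or per-user PHP-FPM pools run the scripts as the owning user, so filesystem permissions do the isolating instead of a PHP directive.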

    Read the article

  • Follow-through - How to set up the equivalent of the USVIDEO.ORG DNS proxy on Linux

    - by DNSDC
    I'm quite keen to set up a similar service (but FREE), and it seems you know how to do this. The description I'm working from:

      "you need to run your own private dns with artificial records for example pandora.com you also need a real dns to fall back on. now that all requests for these sites are going to your US located box you can open up port 80 on squid and listen for the traffic. your cache_peer settings should allow you to map each domain to their real ip. The trafic now flows initially from your US located box to the service but then the server responds it responds directly to the host. no magic here. I won't share the fine details as it probably best serves all to not over exploit this."

    Did you mean we need to:

      1. Set up a forward-only DNS server on a US-based server/IP?
      2. Set up cache_peer and cache_peer_domain in Squid? (I got this.)
      3. Add any iptables PREROUTING/POSTROUTING rules to accomplish this?

    Appreciate your expert advice. Cheers, Don
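
    A hedged reading of those steps in config form (all IPs are placeholders; the real origin address would come from a one-off lookup on the US box). Note that in this accelerator design question 3 largely answers itself: no iptables redirection is needed, because the doctored DNS already points clients straight at Squid's port 80.

      # /etc/dnsmasq.conf on the US box -- step 1: answer the target domain
      # with this box's own public IP, forward everything else upstream.
      address=/pandora.com/203.0.113.10
      server=8.8.8.8

      # squid.conf -- steps 2-3: accept the hijacked port-80 traffic as an
      # accelerator and relay it to the real origin server.
      http_port 80 accel vhost
      cache_peer 198.51.100.25 parent 80 0 no-query originserver name=pandora_origin
      cache_peer_domain pandora_origin .pandora.com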

    Read the article

  • GPG - why am I encrypting with subkey instead of primary key?

    - by khedron
    When encrypting a file to send to a collaborator, I see this message:

      gpg: using subkey XXXX instead of primary key YYYY

    Why would that be? I've noticed that when they send me an encrypted file, it also appears to be encrypted towards my subkey instead of my primary key. For me this doesn't appear to be a problem; gpg (1.4.x, macosx) just handles it and moves on. But for them, with their automated tool setup, this seems to be an issue, and they've requested that I be sure to use their primary key. I've tried to do some reading, and I have Michael Lucas's "PGP & GPG" book on order, but I'm not seeing why there's this distinction. I have read that the key used for signing and the key used for encryption would be different, but I assumed at first that was about public vs. private keys. In case it was a trust/validation issue, I went through the process of comparing fingerprints and verifying that, yes, I trust this key. While I was doing that, I noticed the primary key and subkey had different "usage" notes:

      primary: usage: SCA
      subkey:  usage: E

    "E" seems likely to mean "Encryption", but I haven't been able to find any documentation on this. Moreover, my collaborator has been using these tools and techniques for some years now, so why would this only be a problem for me?
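
    For reference, the usage letters are S = sign, C = certify, A = authenticate and E = encrypt, and gpg encrypts to a key carrying the E flag -- which is why a primary key whose usage is SCA gets substituted with the encryption subkey; such a primary cannot be encrypted to at all. The flags can be inspected, and a specific key pinned, like this (addresses and key IDs are placeholders):

      # Show the primary key and subkeys with their usage flags:
      gpg --edit-key collaborator@example.org
      gpg> list
      gpg> quit

      # Appending '!' to a key ID forces gpg to use exactly that key:
      gpg -e -r 'YYYY!' file.tar.gz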

    Read the article

  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows:

      conf/ db/ hooks/ locks/

    although clicking any of those links gives an empty XML document like:

      <D:error>
        <C:error/>
        <m:human-readable errcode="2">
          Could not open the requested SVN filesystem
        </m:human-readable>
      </D:error>

    I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password; I give it, then I get the (useless) error message:

      svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)

    The command I'm using is:

      svn checkout https://svn.mysite.com/ svn.mysite.com

    Subversion was installed using Ubuntu's package manager. It's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration:

      <VirtualHost 123.123.12.12:443>
          ServerAdmin [email protected]
          ServerName svn.mysite.com

          <Location />
              DAV svn
              SVNParentPath /var/svn/repos
              SVNListParentPath On
              AuthType Basic
              AuthName "Subversion Repository"
              AuthUserFile /etc/subversion/passwd
              Require valid-user
          </Location>

          # Setup The SSL Certificate Paths
          SSLEngine On
          SSLCertificateFile /etc/ssl/certs/mysite.com.crt
          SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
      </VirtualHost>
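
    Before digging into the DAV side, a hedged first check is whether TLS itself is closing early, which is what "Secure connection truncated" usually suggests; certificate and handshake problems show up here, DAV problems do not:

      # Watch the handshake and see whether the server drops the
      # connection prematurely after the certificate exchange.
      openssl s_client -connect svn.mysite.com:443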

    Read the article

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university, maintaining the computer science department's server, and I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server runs a domain, though no students physically log into a terminal on it (no computers are part of my domain). Creating the accounts is a manual process. I did write a PHP script to query the university's AD, copy the information and write it to my AD. I then have to create, basically, the user's home directory. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to: User - full, Instructors (group) - full, Users (group) - read, IUSER - read. Inside the user's folder there is a "Private" folder with permissions: User - full, Instructors (group) - full. Next step is IIS: I create a virtual directory in the default web site pointed at the user's home directory, so they have a website. The same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. For MySQL, I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructors' account so they can reach the student's database. All of this is done manually and takes me a week. The closest description is maybe a shared hosting environment. Is there a better way to do this -- scripting-wise, or a better structural setup?
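
    Each of those manual steps has a command-line handle on Server 2003, so the whole account build could collapse into one batch script per student. A hedged sketch; every name, path and password below is a placeholder, and the IIS helper scripts (iisvdir.vbs, iisftpdr.vbs) are the ones that ship with IIS 6:

      rem Create the AD user (dsadd ships with Server 2003).
      dsadd user "CN=jsmith,OU=Students,DC=cs,DC=example,DC=edu" -pwd P@ssw0rd -disabled no

      rem Home directory plus the private subfolder, with the stated permissions.
      mkdir D:\students\jsmith\private
      cacls D:\students\jsmith /E /G jsmith:F Instructors:F
      cacls D:\students\jsmith /E /G Users:R IUSR_CS1:R

      rem Web and FTP virtual directories.
      cscript %SystemRoot%\system32\iisvdir.vbs /create "Default Web Site" jsmith D:\students\jsmith
      cscript %SystemRoot%\system32\iisftpdr.vbs /create "Default FTP Site" jsmith D:\students\jsmith

      rem MySQL schema and grants for the student and the instructors' account.
      mysql -u root -p -e "CREATE DATABASE jsmith; GRANT ALL ON jsmith.* TO 'jsmith'@'localhost' IDENTIFIED BY 'dbpass'; GRANT ALL ON jsmith.* TO 'instructors'@'localhost';"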

    Read the article

  • Tunnel network requests with Windows 7

    - by mark
    I have a Windows 7 64-bit Pro client on a private LAN behind a Netgear WGR614v7 router, and a remote Debian server outside. I'd like to tunnel all traffic (or specified ports/protocols) through this outside server, so that when I'm on the Windows machine and request serverfault.com, the request appears to come not from the WGR614v7's public IP but from the server. It's not only about HTTP traffic; it's basically about everything: other TCP ports, even UDP, etc. It must be transparent to the applications, i.e. they shouldn't be aware of it; all their requests just appear to come from the server, and the tunnel between the two machines takes care of the packets. I'm aware of e.g. PuTTY and forwarding individual ports or using it as a SOCKS proxy, but not many applications support this, and the support in Windows itself looks non-existent to me. I might add it should be reasonably easy to set up. I've heard about PPTP, but I'm unsure about its security implications (by design). Should I go for a VPN? There seem to be two common solutions for Linux (Openswan and strongSwan); why would I pick one over the other? I also fear that setting up a VPN might be quite complex; on the other hand, maybe it's the only sane way to do things right? Or is OpenVPN sufficient? I'm looking for open (source) solutions. What other options do I have, and which direction should I head?
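
    Given the transparency requirement (all protocols, no per-application configuration), OpenVPN with a redirected default gateway is probably the lightest fit. A hedged client-side sketch for the Windows box; the server name and certificate files are placeholders, and the Debian end needs a matching server config plus a NAT rule (e.g. iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE) so traffic leaves with the server's address, which is exactly the "appears from the server" behavior described:

      # client.ovpn
      client
      dev tun
      proto udp
      remote vpn.example.org 1194
      # send all traffic, not just VPN-subnet traffic, through the tunnel
      redirect-gateway def1
      ca ca.crt
      cert client.crt
      key client.key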

    Read the article

  • Setting up a VPN server

    - by Lock
    I need help visualising how to set up our VPN box when we move to our new network with Telstra. We have a Safe@Office 500P, which has a public IP and a private IP of 192.168.19.2. It is physically connected to our router, which has 4 different interfaces, one being 192.168.19.1. On the VPN box we have a static route to forward everything to 192.168.19.1, the router, and from there it works out where to go. Now we are moving to a Telstra WAN, and things are set up a little differently. Our head office router has only 3 interfaces: one for the link to the switch that has the fibre connection (our route to the internet and the other branches), one for our 10.10.20.x network, and one for the local branch network. I really have no idea how to set this up, as in the new arrangement there will be no spare router port for the VPN box to plug into. Could I just plug it into the 10.10.20.x network? Would I have to give it a public IP, or can we just forward the ports it would use? Another suggestion was to VLAN our switch into two networks -- one for the 10.10.20.x network and one for the network the VPN box currently sits on (192.168.19.x) -- and set up the router to trunk between the port and the switch. Not sure how to do this. Sorry, VPNs are definitely not my strong suit. Any advice appreciated!
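
    The VLAN suggestion at the end, sketched in generic IOS-style syntax purely as an assumption about the switch (VLAN IDs and port numbers are placeholders); the router side then needs a subinterface per VLAN ("router on a stick"), with 192.168.19.1 living on the VPN VLAN:

      ! Two VLANs: one for the 10.10.20.x network, one for the VPN box's
      ! 192.168.19.x network; the router-facing port carries both as a trunk.
      vlan 20
       name LAN-10-10-20
      vlan 30
       name VPN-192-168-19
      interface FastEthernet0/1
       description uplink to router
       switchport mode trunk
       switchport trunk allowed vlan 20,30
      interface FastEthernet0/2
       description Safe@Office VPN box
       switchport mode access
       switchport access vlan 30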

    Read the article

  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OSX server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac Mini OSX server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:

      - Groups/users managed on the server
      - Shared folders and private folders for users/groups
      - Access to activated services
      - Server-hosted software for the users (developer tools etc.) - similar to Windows Terminal Server
      - A virtual desktop environment (both local and over internet/VPN)
      - Access from both Mac and Windows

    The reason I am looking at OSX server is that my employees work almost exclusively in an OSX environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their OSX work environment and software from their Mac or PC, from anywhere they might be -- instead of maintaining multiple setups and spending a lot of time installing and configuring software on every client. This is a small business where some work on the local network and others from the internet, preferably through VPN. A terminal-server solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did and your experiences with it.

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple -- I just use the private domain names/IPs -- but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to script the configuration with the EC2 tools just yet. As I see it, I have two options:

      1. Two Nagios installations (overkill for the small number of servers I'm dealing with).
         Pros: I don't have to alter the group permissions, nor pay for the traffic; redundancy in the monitoring solution - I could monitor the Nagios servers.
         Cons: two installations to deal with, and I'd need to run another server instance.

      2. Have the single installation monitor both regions.
         Pros: one installation to deal with.
         Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
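
    For the record, the security-group change option 2 depends on is a one-liner with the classic EC2 API tools of that era (group name and source address are placeholders):

      # Allow the US East Nagios host to reach NRPE on the EU West instances.
      ec2-authorize eu-monitored-servers -P tcp -p 5666 -s 203.0.113.10/32

    A common alternative worth weighing is tunnelling the checks over SSH (check_by_ssh), so nothing beyond port 22 needs to be opened between regions.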

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

      rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout to a file, but I don't really want to know about every file -- I just want to know whether it all worked or not. Thanks.
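
    Since the rsync on the backup host predates --log-file, one hedged approach is a small wrapper that records only the exit status, keeping full output just for failing runs (log path is a placeholder; the remote spec stays as in the cron entry, and note rsync exit code 24, files vanished mid-transfer, is often treated as success too):

      #!/bin/sh
      # /etc/cron.daily/backup -- one-line verdict per run.
      OUT=$(rsync -az -e ssh user@remote:/ /mybackup/ 2>&1)
      RC=$?
      if [ $RC -eq 0 ]; then
          echo "$(date '+%F %T') rsync OK" >> /var/log/rsync-backup.log
      else
          echo "$(date '+%F %T') rsync FAILED rc=$RC" >> /var/log/rsync-backup.log
          echo "$OUT" >> /var/log/rsync-backup.log
      fi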

    Read the article

  • Website is not accessible from a server that is behind a proxy

    - by Bhoot
    I hosted a website on a Windows 2008 R2 server that runs in a private domain. I set up bindings for ports 80 and 443, for HTTP and HTTPS respectively, and created inbound rules for ports 80 and 443 in Windows Firewall. After doing all this, I am still not able to access my website from a remote machine. IE says "Internet Explorer cannot display the webpage"; Chrome says "Oops! Google Chrome could not find xxxxxx". I tried accessing the website by IP address, but no luck. I tried to ping the server, but it says "TTL expired in transit". I then found advice online to check whether the server sits behind some kind of proxy: www.getip.com reports one IP address for me, but ipconfig /all gives a different one. Is it really a problem if we use a proxy? I am not sure I have concluded this correctly, but is there any way to resolve the issue? Update: I figured it out. I have to reach the website via the external IP address; because of the proxy settings, I was not able to reach it by the server's own IP or machine name.

    Read the article

  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMware Workstation. I also have a set of virtual private machines that run Squid and therefore provide me with HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide internet access to those machines, and if I need the proxies, I can set up Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem. But I got to thinking: wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.) To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel -- all traffic from that VM is already piped through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies set up through Squid. Any ideas on how to make this happen? Thanks a ton.
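
    One hedged way to get that instant-on binding inside a Linux guest is transparent redirection into the proxy, so nothing is configured per application: redsocks speaks HTTP CONNECT to the upstream Squid, and iptables feeds it all outbound TCP. A sketch, with proxy address and credentials as placeholders:

      // /etc/redsocks.conf -- relay locally redirected TCP into the
      // Squid CONNECT proxy.
      redsocks {
          local_ip = 127.0.0.1;
          local_port = 12345;
          ip = 198.51.100.7;
          port = 3128;
          type = http-connect;
          login = "user";
          password = "secret";
      }

      # Inside the VM, push all outbound TCP (except traffic to the proxy
      # itself) through redsocks:
      iptables -t nat -A OUTPUT -p tcp ! -d 198.51.100.7 -j REDIRECT --to-ports 12345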

    Read the article

  • I have just created a subnet for a local network, connecting to a standalone server on another network; now I cannot connect to the internet

    - by Seth
    I am just learning some new aspects of servers and networking. We have a network of 5 subnets that all interconnect with each other. In order to get two computers onto the subnet we were setting up, I changed their IPs from the subnet the standalone server is on (where they used to be set up) to the local subnet we are remotely hooking up, and likewise changed the gateway to match the new subnet. The only problem is that since doing this, I have been unable to establish a connection to the internet. I can ping the server and the corresponding gateway and DNS server, but cannot get connected to the internet. We do have a dumb switch (non-programmable) connected that receives both the internet and private-network inputs and distributes them (or should do so) to about 5 other computers. Bottom line: I cannot currently connect to the internet and am wondering what could be causing this. It is likely something very obvious, and pardon me for being vaguer than I probably should be, but I could use some help resolving this! Thanks for any help!
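
    Assuming Windows clients (the question doesn't say), a quick way to narrow it down from one of the re-addressed machines is sketched below. If the trace dies at the new gateway, the usual suspect is that the upstream router has no route back to the new subnet, or no NAT rule covering it.

      rem confirm the new address, mask, gateway and DNS actually took effect
      ipconfig /all
      rem watch where packets die: at the new gateway, or somewhere past it
      tracert 8.8.8.8
      rem rule DNS failure in or out separately from routing failure
      nslookup google.com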

    Read the article

  • What is the best time to set the IP address for a server headed to a server colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on and then send to a server colocation facility. The server is going to have internet-facing services (www, email, etc.). I can set up a "fake" IP address during install (by fake I mean private, as in RFC 1918) and change the "fake" IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service, wait for them to give me the "real" IPs, and use those during the OS install. The ramifications are that if I use "fake" IPs during install, I will have to wait before I set up things like SSL certs; if I wait for IPs from the colocation provider, then I can set up SSL certs that use the "correct" (as in "real") IP addresses, with no changes to the certs until they expire. Do the "gotchas" of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using "fake" IPs is that I could make a mistake when I go through the various files to change the IP address to the "live" one. Server OS: CentOS 6.2 or CentOS 6.3, 64-bit. Apps: Apache 2.4.x httpd, MySQL 5.x (will eventually use replication).
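
    If the install-with-private-IPs route wins, the error-prone re-addressing step can at least be made mechanical; a hedged sketch for CentOS, where the RFC 1918 address is a placeholder for whatever was used during install. (And if the SSL certificates name hostnames rather than IP addresses, they need not wait for the address at all.)

      # Find every config file that hard-codes the temporary address...
      grep -rl '192\.168\.1\.50' /etc /var/www 2>/dev/null

      # ...and rewrite it in place once the real IP is known.
      grep -rl '192\.168\.1\.50' /etc /var/www 2>/dev/null | \
          xargs sed -i 's/192\.168\.1\.50/203.0.113.80/g'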

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; then configuring a new system compatible with the old one is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup, but it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there some better methods or tools that I'm missing?
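
    For the /etc-under-git half specifically, this problem is what etckeeper was built for: it stores file ownership and permission metadata alongside the repository and hooks the package manager, so installs and upgrades are committed automatically. A minimal sketch:

      # Debian-style; the package is also called etckeeper on RPM systems.
      apt-get install etckeeper
      etckeeper init                        # puts /etc under version control, metadata included
      etckeeper commit "baseline after install"

    For the broader goal -- packages, services and files as one replayable description -- declarative configuration management tools such as Puppet or CFEngine are the standard answer.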

    Read the article

  • How to set up a VPN tunnel (IPsec) connection

    - by Alfwed
    I'm working with a client who wants to set up a VPN tunnel between their network and ours. They're in charge of the tunnel, and to give us access they are asking me for my public IP and my LAN IP. This is what I get when I run ifconfig on the server I will use to connect to the VPN:

      $ ifconfig
      eth0      Link encap:Ethernet  HWaddr d4:ae:52:cd:xx:xx
                inet adr:62.210.xxx.xxx  Bcast:62.210.xxx.xxx  Masque:255.255.255.0
                adr inet6: fe80::d6ae:52ff:xxxx:xx/64 Scope:Lien
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                Packets reçus:55255032 erreurs:0 :779628 overruns:0 frame:0
                TX packets:5419527 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:1000
                Octets reçus:5598164393 (5.5 GB)  Octets transmis:1034297288 (1.0 GB)
                Interruption:16 Mémoire:c0000000-c0012800

      lo        Link encap:Boucle locale
                inet adr:127.0.0.1  Masque:255.0.0.0
                adr inet6: ::1/128 Scope:Hôte
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                Packets reçus:45923382 erreurs:0 :0 overruns:0 frame:0
                TX packets:45923382 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 lg file transmission:0

    The inet adr 62.210.xxx.xxx is my public IP, but it seems I don't have any LAN IP. Can the connection work without a LAN IP, or should I somehow create a private network?
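
    Many IPsec setups can use the public address itself as the "LAN" side when the host is the only machine behind the tunnel, but if the peer insists on a private address, a hedged workaround is to give the box one: a secondary alias address that exists only for the tunnel definition (the subnet is a placeholder to agree on with the client):

      # Add a secondary RFC 1918 address on eth0 for the IPsec policy to match.
      ip addr add 192.168.250.1/24 dev eth0 label eth0:vpn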

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school. This makes a regular Linux server appear as a domain controller. There are two relevant benefits to this: I have a Samba share for people's documents, and I use the Redirected Folders feature to allow any employee to sit down at any PC, log in with their domain credentials, and have their My Documents point to network storage. Everyone has a mapped drive (via Group Policy Preferences) to a share specific to their account type: students can access one share (one share for all students), teachers have another, and office staff have another. However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want it. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical, if only because only professional editions of Windows support it. Ideally any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.) The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions; writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba 4. Right now I am completely flexible -- I am even fine with scrapping the whole domain and using some other software for the two features above. How can I allow school employees and students the freedom to securely share files without requiring everyone to have specific editions of Windows?
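
    On the Windows side at least, the shares themselves don't require domain joining: any edition can attach by passing domain credentials explicitly, which keeps the existing Samba/NTFS ACLs in force. A sketch, with server, share and domain names as placeholders:

      rem Map the students share with explicit domain credentials;
      rem the trailing * prompts for the password.
      net use S: \\dc1\students /user:SCHOOL\jsmith *

    Folder Redirection is the piece that genuinely needs a domain member, so BYOD users would browse the share rather than get an automatic My Documents.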

    Read the article
