Search Results

Search found 15651 results on 627 pages for 'setup'.


  • Can I have a single solid state drive and a RAID array on the same machine? [closed]

    - by jaminto
    Hi - To summarize, I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive, and was planning on using this as my primary drive and keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives are showing up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

    Read the article

  • Setting up a virtual ftp directory that points to another computer

    - by AngryHacker
    I have IIS 5 sitting on an old Windows 2000 Professional box. It has an FTP site there that allows me to access files. It works great, no problem at all. However, now I need to set up a virtual directory that points to a share on another computer on the network (running Windows XP Tablet Edition). The share requires a user name and password. The network is a simple workgroup (I don't have any domains or any of that). What is the correct procedure for that? I've tried setting a share via UNC and typing in the user ID/password when asked. But when I finished, the virtual directory showed up with an error in the IIS Manager and I couldn't access it. I mapped the share onto a drive and then tried to set up a virtual directory with this drive. Same result. Is there something simple I am missing? Would upgrading any part of the picture help at all?
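
    In case it helps, a hedged sketch of scripting the same thing on IIS 5 with adsutil.vbs; the site number (MSFTPSVC/1), the virtual directory name TabletShare, and the share path/credentials are all placeholder assumptions here, and the UNCUserName/UNCPassword metabase properties are what hold the account IIS uses to reach the remote share (worth verifying against your IIS 5 FTP service before relying on it):

      cd %SystemDrive%\Inetpub\AdminScripts
      rem create an FTP virtual directory and point it at the UNC path
      cscript adsutil.vbs CREATE MSFTPSVC/1/ROOT/TabletShare "IIsFtpVirtualDir"
      cscript adsutil.vbs SET MSFTPSVC/1/ROOT/TabletShare/Path "\\TABLETPC\Share"
      rem account IIS should use when reading the share (must exist on the XP box)
      cscript adsutil.vbs SET MSFTPSVC/1/ROOT/TabletShare/UNCUserName "TABLETPC\shareuser"
      cscript adsutil.vbs SET MSFTPSVC/1/ROOT/TabletShare/UNCPassword "secret"

    The same settings correspond to the "A share located on another computer" / "Connect as..." options in the virtual directory wizard in the IIS snap-in.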

    Read the article

  • Is it possible to command a common router without using the web interface?

    - by MDeSchaepmeester
    Some background: The internet arrangement in my student home is really weird. There is one ethernet outlet and several wifi hotspots. Either way requires a login through a web site to get internet access. This is annoying, as each device needs to log in separately, and with a PS3 for example it is impossible to get connected at all, since the web login procedure doesn't work. Therefore I have installed a D-Link DIR-635 router which is connected to the ethernet outlet. It has DHCP enabled so it uses NAT, but whatever it is connected to also uses NAT, and I've read this should not work. A fellow student tried it with an Apple AirPort but that keeps giving errors related to NAT behind NAT. Anyway, my setup does work, so bonus points if you can clarify this. I need to log in to the web site I mentioned earlier with any device, after which all devices in my LAN have connectivity. This is great. Except... In short: from time to time I lose internet connectivity and my D-Link DIR-635 router needs to do a DHCP renew. I can do this via the web interface, but my life would be easier if I could just run a cmd file which tells my router to do this without all the hassle. This would set up a connection to my router and execute the proper command. I have tried googling but couldn't find much helpful stuff.
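
    For what it's worth, a hedged sketch of the usual workaround: script the router's own web interface rather than look for a separate command channel. Everything below (router IP, credentials, page path and form field) is a hypothetical placeholder; the real request has to be captured from the DIR-635's renew button, e.g. with the browser's network inspector, and then replayed with curl or wget:

      rem renew-wan.cmd -- URL, path and form field are hypothetical; capture the real ones first
      curl -s -u admin:password -d "action=dhcp_renew" "http://192.168.0.1/wan_connect.cgi"

    Some router firmwares also expose telnet or SNMP, which would be cleaner if available, but that varies by model and firmware revision.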

    Read the article

  • Distributing Microsoft Office Template or Macro over the network

    - by zfranciscus
    We have around 400 users who use Word, and we want to make their lives easier by distributing templates and macros over the network. The easiest way to do this, of course, is to set up a shared network folder and let them get the appropriate templates and macros. Of course, each user has to know where to copy these files to on their local PC, and we have to rely on constant email communication to let them know about newer versions of the macros and templates. The next alternative is to ask them to configure Word to point to this network folder. But of course any disruption to the network means disruption to their work. We are thinking of setting up a synchronization mechanism that downloads new templates to their local machine. We are also thinking of making this sync tool prompt users that it will download new templates - you know, just to give them visibility that they are receiving changes. We are wondering what is the best approach that people usually use in their workplaces? Are there any specific tools that can make this task easier?
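
    A hedged sketch of the simplest version of that sync mechanism: a logon script that mirrors a template share into a dedicated local folder (so the mirror cannot delete anything personal), with Word's "Workgroup templates" file location pointed at the local copy so Word itself never touches the network. The share path and local folder name are placeholders:

      rem sync-templates.cmd -- run at logon; \\FILESRV\WordTemplates is a placeholder share
      robocopy "\\FILESRV\WordTemplates" "%APPDATA%\Microsoft\Corporate Templates" /MIR /R:1 /W:2 /NP /LOG:"%TEMP%\template-sync.log"

    robocopy ships with Vista/7 and is in the Resource Kit for XP; the /MIR switch also removes retired templates, which covers versioning without any email announcements.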

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config so that traffic is routed by hostname as well. The following is the current setup, assuming my Apache webserver is running at example.com:8001 and my bunch of app servers at 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083:

      global
          log 127.0.0.1 local0
          log 127.0.0.1 local1 notice
          maxconn 4096
          debug
          #quiet
          #user haproxy
          #group haproxy

      defaults
          log global
          mode http
          option httplog
          option dontlognull
          retries 3
          redispatch
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000

      listen appservers 0.0.0.0:80
          mode http
          balance roundrobin
          option httpclose
          option forwardfor
          #option httpchk HEAD /check.txt HTTP/1.0
          server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
          server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
          server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
          server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
          capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep the above as it is, but in addition, if the hostname is myspecialtopleveldomain.com, I would like to route that traffic to example.com:8001.
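
    A hedged sketch of one way to do the host-based split: replace the single listen section with a frontend plus two backends, using a Host-header ACL. The frontend and backend names are made up here, the example.com:8001 address comes from the question, and the existing server/option lines would move unchanged into the appservers backend:

      frontend http-in
          bind 0.0.0.0:80
          acl is_special hdr(host) -i myspecialtopleveldomain.com
          use_backend apache_special if is_special
          default_backend appservers
          capture cookie vgnvisitor= len 32

      backend apache_special
          server apache1 example.com:8001 check inter 2000 fall 3

      backend appservers
          balance roundrobin
          option httpclose
          option forwardfor
          server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
          # ... inst2-inst4 as in the existing listen block

    hdr(host) matches the exact Host header; hdr_dom(host) or hdr_end(host) can be used instead if www. and bare-domain requests should both match.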

    Read the article

  • Trouble with Remote Desktop pulling through printers. Drive redirection works and the ports are created, but not the printers

    - by Windex
    I've run out of things to look into. All the support documents have been gone through and still provide no resolution. I've checked the service permissions (sc sdshow spooler); they all match up with other systems and with what is shown in the support documents. I'm nearly positive the issue can't be permissions anyway, as the software requires all users to be an administrator, so all users are a local administrator. (I haven't looked into why yet, but it's on the list; I was just recently brought into this team and we've put procedures in place for quick recovery.) We've applied hotfixes relating to RDS and printing, though I'm not sure which ones they were. I've combed through group policy and nowhere is printer redirection disabled. It's set up with all default values regarding the use and redirection of printers, and a quick install of W2k8 R2 shows that it works by default. This dev install was joined to the same domain, placed in the same OU, shows the same policies applied, etc. The server generates all the correct redirected ports but no printers are created. It will also redirect drives without issue, which would seem to rule out the usermode service that handles redirects being broken. No related events are logged, and there are no events from the TerminalServices-Printer source. There were local printers set up; I didn't think it would matter, but as I was running out of ideas I tried deleting them all, with no change. The TS was configured for the software it will be running before we checked out the redirection of printers, so the other team responsible for setting up new servers wants to find a fix instead of reloading a new server. I'm not sure where or what else to look for. Any ideas?

    Read the article

  • Remotely port forward/launch process or a client-less remote desktop app?

    - by DC177E
    I have an XP box running LogMeIn at a remote location behind a Linksys router, which was running well for all of four days, until we had a power failure. Our ISP gave us a new IP, the machine restarted, and LogMeIn did not autorun (or, at least, it did not automatically sign in), and our service (which may or may not be a Minecraft server with non-backed-up save files) also did not run upon startup. LogMeIn does not register the new IP (it still displays the old one). I have a DDNS updater service, so I do know the new dynamic address. I have tried using the built-in XP remote desktop service, but, as with almost all non-cloud-based remote desktop services, it requires a port forward. Thus, I would appreciate it if anyone has any ideas as to: A: Any way of accessing our router remotely to forward the remote desktop port. I've seen the Remote Management option (forwarding the setup page to port 8080), but I do not have it enabled. I've tried UPnP, but again, the setup page for our router is not forwarded. B: Any way of remotely launching a process that does not require port forwarding (or uses ports 255XX, 18XXX, or 9000), such as a remote console service built into XP. I realize this is a near impossibility. C: A way to remotely start LogMeIn and sign in, which is likely a definite impossibility. Sorry if this is too specific for Stack Exchange, or if I've put it into the wrong section (is Super User the correct place for this?). Ideas would, again, be much appreciated, shot-in-the-dark as this may be.

    Read the article

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    Just bought a used beast of a desktop PC. The system drive is set up as RAID 0 of two Intel 510 SSDs, 128 GB each. I will probably not have too many programs beyond Office, and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD. My question is about setting up TrueCrypt with my configuration. I have not previously done full disk encryption, but I feel that it's probably a good idea. I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While there is a huge hit with the SSDs and TrueCrypt, they still outperform the HDD on its own by a good margin, so I think I will be okay for my needs with TrueCrypt. I have seen in a few places that they recommend partitioning the drive and leaving some of the SSD outside TrueCrypt; does this really make a difference? If so, how much should I leave? Will there be any issue with the RAID 0 configuration? I am not really concerned about the wear-leveling issue (I would rather risk losing data and stay secure), but since I don't necessarily need all that space, I would like to optimize my setup for security and speed.

    Read the article

  • Changing farm account in Sharepoint 2010

    - by user55709
    After changing the farm account to a domain user account, I get the following error when trying to access the Central Administration page: "Cannot connect to configuration database". After I realized the headache may not be worth it, I decided to do a reinstall using the following SP user account guidelines: http://technet.microsoft.com/en-us/library/ee662513.aspx After getting everything up, I am getting an error when using the designated farm account under the Central Admin website's Manage Service Accounts: "Access denied". If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account it lists me as a "system account", not the domain account which I used to log in. Am I missing something or is this normal behavior? Am I not supposed to log in with the farm account? When I log in with the setup account (also a domain account) I can access everything with no errors on the site. The only difference between the two accounts is that one has local admin privileges on the SharePoint farm server, which is the setup account. If you notice, those privileges are not necessary for the farm account according to the article cited.

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have Active Directory, DNS Server, DHCP Server and WINS Server set up and configured. I have used all the same settings as the old server (except IP address and name). I can access Active Directory but I can't do anything; add, edit and delete all return "the directory service is unavailable". No one can log in on any of the computers on site 2 and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions of where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.
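
    A hedged sketch of the standard first-pass diagnostics to run on the new box (dcdiag, netdiag, repadmin and netdom come from the Windows Server 2003 Support Tools if they are not already installed); replication and DNS problems are the usual culprits behind "the directory service is unavailable" on a freshly promoted DC:

      dcdiag /v > c:\dcdiag.txt
      netdiag /v > c:\netdiag.txt
      repadmin /showrepl
      netdom query fsmo

    In particular, check that the new server's NIC points at a DNS server that actually hosts the AD zone (typically itself plus the site 1 DC) and that the SYSVOL and NETLOGON shares exist on it.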

    Read the article

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET/SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing, and having to reboot it, a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure however I like. We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below. Any suggestions on what may be causing the freezing issue, as well as suggestions on a new setup, would be great. My main questions are: do SSDs generally have problems running the OS and SQL Server on the same RAID array? And are the new SSDs still unrefined enough to be running in a production environment? Thanks
    Current: Xeon Quad Core E3-1270 3.40 GHz; 16 GB DDR3-1333 ECC SDRAM; First Hard Drive: 120 GB Intel SSD; Second Hard Drive: 120 GB Intel SSD; Third Hard Drive: 120 GB Intel SSD; Fourth Hard Drive: 120 GB Intel SSD; SAS 4 Port RAID Card; Windows 2012 Standard Edition - 64 Bit; MSSQL 2008 Web Edition
    Possible causes: running SQL Server and the OS on the same RAID array; OS software issues; using SSDs; CPU underpowered; not enough RAM
    Option 1: 2x Xeon Quad Core E5-2603 1.80 GHz; 16 GB DDR3-1333 ECC SDRAM; 1 x 240 GB Intel SSD - OS; 3 x 1 TB SATA HDD (7200 RPM) - SQL Server; SATA 4 Port RAID Card; Windows 2012 Standard Edition - 64 Bit
    Option 2: Dell PowerEdge E3-1270v2 3.5 GHz 4 Cores; 16 GB DDR3-1600 UDIMM; 4 x 128 GB Samsung 840 Pro SSD; Add-in H200 (SAS/SATA Controller), 4 Hard Drives - RAID 10; Windows 2012 Standard Edition - 64 Bit

    Read the article

  • USB Hub vs. Docking Station USB vs. Laptop USB

    - by Will
    I recently had thoughts about my current setup in my office, especially about the USB port distribution. Here's my setup: I have a Lenovo T410 docked to a Lenovo Docking Station Series 3, which provides me with 6 USB ports, all of which I use (3 ext. drives, mouse, keyboard, the monitor's USB hub). The USB hub on my ext. monitor (most probably powered by the ext. monitor's power supply) provides me with 2 USB ports, where I use one for my webcam and another for USB sticks. On my T410 itself I have 4 USB ports that are usually not used, as I don't want to mess with USB plugs when undocking my laptop; now and then I plug my printer into one of these, just because I don't have any USB ports left. Now I'm wondering how fast each of these ports is: I assume that all 6 USB ports from the docking station somehow go through the docking connector on the bottom of my laptop. Does this connector have enough bandwidth for all 6 USB ports to perform as if they were dedicated ports like the 4 on my laptop? Also, how is the performance of USB hubs in general (like the one on my ext. monitor)?

    Read the article

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Web Server 2008 R2 VM in VirtualBox, currently using a bridged network. I can remote desktop to the machine hosting the VM (10.0.0.183) but cannot remote desktop to the VM itself (10.0.0.195). The remote desktop port on the VM is set to 5003, and the VM is set up to accept remote connections (on the Windows side). We also use a proxy box (CentOS 5) for our internet, and I added these rules under NAT on it:
      -A INPUT -p tcp --dport 3389 -j ACCEPT
      -A REROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
      -A FORWARD -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS points to our external IP address). If we don't get this right we will have to purchase a PPPoE connection from an ISP to connect to this VM remotely, but I know that there is an alternative route if I can just get this port forwarding right!
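
    Not necessarily the whole fix, but a hedged sketch of how this kind of forward is usually written: REDIRECT only re-targets traffic to the proxy box itself, so sending port 3389 on to another host's port 5003 is normally done with DNAT in the nat table's PREROUTING chain (interface name and addresses below are taken from the question):

      # forward incoming RDP on the public interface to the VM
      iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 3389 -j DNAT --to-destination 10.0.0.195:5003
      # allow the forwarded traffic through
      iptables -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
      # and make sure forwarding is enabled at all
      sysctl -w net.ipv4.ip_forward=1

    If the VM's default gateway is not the proxy box, an SNAT/MASQUERADE rule on the LAN side is also needed so the replies come back along the same path.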

    Read the article

  • "No such file or directory"?

    - by user1509541
    Ok, so I have a VDS lying around, and I thought I would turn it into a TF2 game server. I connect to my server through PuTTY and use wget to download the package hldsupdatetool.bin from Steampowered.com. When I go to run it, it says "No such file or directory". When I use ls to see what files are in the directory, it lists hldsupdatetool.bin as being in the directory. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:
      root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
      --2012-07-08 06:04:49-- http://www.steampowered.com/download/hldsupdatetool.bin
      Resolving www.steampowered.com... 208.64.202.68
      Connecting to www.steampowered.com|208.64.202.68|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 3513408 (3.4M) [application/octet-stream]
      Saving to: “hldsupdatetool.bin.3”
      100%[======================================>] 3,513,408 2.45M/s in 1.4s
      2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]
      root@10004:~# chmod +x hldsupdatetool.bin.3
      root@10004:~# ./hldsupdatetool.bin.3
      -bash: ./hldsupdatetool.bin.3: No such file or directory
    More:
      root@10004:~# ls
      ffmpeg-packages     hldsupdatetool.bin.1  hldsupdatetool.bin.3
      hldsupdatetool.bin  hldsupdatetool.bin.2  setup.sh
      root@10004:~# ls -la
      total 13828
      drwx------  4 root root    4096 Jul  8 06:04 .
      drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
      -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
      -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
      -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
      drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
      ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
      -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
      -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
      -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
      -rw-r--r--  1 root root     140 Nov 19  2007 .profile
      -rw-------  1 root root    1024 Jul  2 19:49 .rnd
      -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
      drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
      root@10004:~#
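
    For reference, a hedged note: the classic cause of ./file reporting "No such file or directory" even though ls shows the file is a 32-bit ELF binary on a 64-bit system that has no 32-bit loader/libraries installed (hldsupdatetool.bin is a 32-bit executable), where the error actually refers to the missing loader rather than the file itself. A quick check, and the era-appropriate fix on Debian/Ubuntu:

      # confirm what the binary actually is
      file hldsupdatetool.bin.3
      # if it reports "ELF 32-bit LSB executable" on an x86_64 host:
      apt-get install ia32-libs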

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL, so we have virtual hosts set up for both of them. They both reference their own domain.key, domain.crt and domain.intermediate.crt files. Each CSR and certificate file for each site was set up using its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate gets served, even when we're visiting the second site. So for example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate installation, it references Cadbury's certificate. We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server. Both certificates are provided by Thawte.com. I came across a few potential solutions with no degree of success. I'm hoping someone else has a decent solution? Thanks. Edit: The only other solution that seems to have provided success to some people is using SNI with Apache. However, the setups here didn't seem to coincide with our setup at all.
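
    A hedged sketch of what a working name-based (SNI) SSL setup usually looks like on Apache 2.2 / Ubuntu 12.04, with both vhosts on the same IP; the ServerName values and paths are placeholders. The symptom described (the first enabled vhost's certificate served for every site) is exactly what happens when NameVirtualHost *:443 is missing or the ServerName does not match the requested host:

      # /etc/apache2/ports.conf (or equivalent)
      NameVirtualHost *:443
      Listen 443

      <VirtualHost *:443>
          ServerName www.cadbury.example
          SSLEngine on
          SSLCertificateFile      /etc/ssl/cadbury/domain.crt
          SSLCertificateKeyFile   /etc/ssl/cadbury/domain.key
          SSLCertificateChainFile /etc/ssl/cadbury/domain.intermediate.crt
          DocumentRoot /var/www/cadbury
      </VirtualHost>

      <VirtualHost *:443>
          ServerName www.nestle.example
          SSLEngine on
          SSLCertificateFile      /etc/ssl/nestle/domain.crt
          SSLCertificateKeyFile   /etc/ssl/nestle/domain.key
          SSLCertificateChainFile /etc/ssl/nestle/domain.intermediate.crt
          DocumentRoot /var/www/nestle
      </VirtualHost>

    SNI needs Apache 2.2.12+ and OpenSSL 0.9.8k+, which Ubuntu 12.04 satisfies; very old clients (e.g. IE on Windows XP) will still fall back to the first certificate.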

    Read the article

  • Fedora 17 transparent Ethernet Bridge not forwarding IP traffic

    - by mcdoomington
    I am running Fedora 17 with the latest ebtables and have been trying to set up a transparent bridge. Using the following script, I send a ping through the bridged host and only see the requests on the bridge (among other traffic from eth0), BUT ARPs and ARP replies are making it through. My host is set up as: Client 192.168.1.10 <-- eth0 -- eth2 192.168.1.20. The bridge script:
      #!/bin/sh
      brctl addbr br0
      brctl stp br0 on
      brctl addif br0 eth0
      brctl addif br0 eth2
      (ifdown eth0 1>/dev/null 2>&1;)
      (ifdown eth2 1>/dev/null 2>&1;)
      ifconfig eth0 0.0.0.0 up
      ifconfig eth2 0.0.0.0 up
      echo "1" > /proc/sys/net/ipv4/ip_forward
      ebtables -P INPUT DROP
      ebtables -P FORWARD DROP
      ebtables -P OUTPUT DROP
      ebtables -A FORWARD -p ipv4 -j ACCEPT
      ebtables -A FORWARD -p arp -j ACCEPT
    Any assistance would be great!
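
    For reference, a hedged check that often explains "ARP passes but IP doesn't" on a Linux bridge: when the br_netfilter hooks are active, bridged IPv4 frames are also run through the iptables FORWARD chain, so a default DROP policy there silently eats them while ARP is unaffected:

      # is bridged IPv4 traffic being handed to iptables?
      sysctl net.bridge.bridge-nf-call-iptables
      # if it is 1, either allow the bridged traffic in iptables...
      iptables -L FORWARD -nv
      # ...or keep iptables out of the bridge path entirely
      sysctl -w net.bridge.bridge-nf-call-iptables=0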

    Read the article

  • Windows Server 2008 - Non-Domain users can see my server shares

    - by ManovrareSoft
    Windows Server 2008 - server machine; Windows 7 Professional - client machine. I have a domain; it was set up by the client. The shares on the server are restricted correctly when a user logs on to the domain and uses their workstation; I have a few groups set up to restrict some access, but the groups are at their core "Domain Users". The problem I am having is that when a user brings in a laptop with Windows 7 Pro on it, they can type the name of the server into the "Run" dialog on the Start menu, like "\\SERVERNAME\", and access all of the shares freely... because they are not logged in to the domain, there are no restrictions, it seems. I have reviewed the permissions on the folders and they all have to be "Domain Users", and I have removed "Everyone" from the list of people able to see it. Guest access is also disabled... What am I doing wrong? The only group in the list is "Domain Users"; isn't a domain user a user that is logged in to the domain? How do I stop non-domain users from seeing the shared folder? I noticed this on Windows Server 2003 too at another time. I assume they both had similar security issues, and neither was set up by me, so I am not sure what could have been enabled or specifically deactivated that makes this issue appear.
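
    A hedged sketch of how to audit where the access is actually being granted, since share-level permissions and NTFS permissions are evaluated separately and the share tab often still carries its default "Everyone" entry even after the NTFS ACL was tightened. The share name and path are placeholders:

      rem share-level permissions (look for Everyone / Guest here)
      net share
      net share DataShare
      rem NTFS permissions on the underlying folder
      icacls D:\Shares\DataShare

    Another common explanation is that the laptop user is simply prompted for credentials and enters a valid domain account, in which case the connection is authenticated as a domain user even though the machine itself is not domain-joined.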

    Read the article

  • Gitosis problems

    - by user49884
    I've spent the last 14 days on git and gitosis problems. I did always find a way around my problems, but now I'm stuck. To briefly summarize the situation: I have set up gitosis and created a project, and I can check in and out of it. Then I added another user, giving him access to the project by adding him to gitosis.conf, but he cannot even clone the project. Then I added yet another user for the same project (following the same procedure); he has access to everything (clone, pull and push). Finally, I added one more user who cannot do anything either. I could live with all of this, because I have access to work on the project. Now I have added a new project - or have I? To the best of my belief, I have done everything exactly the same way as with the first project. I do not get a repository in the repository folder on my server (when doing "git remote add..." and push). I have tried following ALL the guides Google gave me on "how to create a new repository gitosis" (I am up to page 7 of results with nearly every hit marked as visited). I have also tried to follow a different path, starting with "git init --bare" on the server and then trying to clone it. That didn't work either. I get the following error no matter what I try:
      ERROR: gitosis.serve.main: Repository read access denied
      fatal: The remote end hung up unexpectedly
    (But it works fine for accessing gitosis-admin and my first project.) Then I read about debugging gitosis. I have tried with -v, --verbose and adding LogLevel = DEBUG in gitosis.conf; none of these give me extra information. Project setup in gitosis.conf:
      [group project]
      writable = project
      members = me
      LogLevel = DEBUG
    To the best of my belief, everything is done exactly the same way as when I set up my first project. I'm really stuck; how do I proceed now?
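
    For comparison, a hedged sketch of the usual gitosis flow for adding a second project; repository and user names are placeholders, and the key point is that the bare repository is only created on the server when a member listed as writable in gitosis.conf pushes to it for the first time:

      # 1. in your clone of gitosis-admin: declare the project, then commit and push
      #    (gitosis.conf)
      #    [group project2]
      #    writable = project2
      #    members = me otheruser
      git commit -am "add project2" && git push

      # 2. in the new project's working copy
      git init
      git add . && git commit -m "initial import"
      git remote add origin git@yourserver:project2.git
      git push origin master

    A common cause of "Repository read access denied" is a mismatch between the names in members = and the key file names in gitosis-admin/keydir (user.pub), so it is worth double-checking those for the users that fail.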

    Read the article

  • How do I install MySQLdb on a Python 2.6 source build, on Debian Lenny?

    - by nbolton
    I've installed Python 2.6 from source on my Debian Lenny server, as Lenny does not have a python2.6 package. My Python 2.5 has MySQLdb installed and working just fine, because I installed the python-mysqldb package. I figured I could just install MySQLdb from source, but because I have the Lenny python-dev package, it builds against 2.5:
      # python setup.py build
      running build
      running build_py
      copying MySQLdb/release.py -> build/lib.linux-x86_64-2.5/MySQLdb
      running build_ext
      building '_mysql' extension
      gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/usr/include/mysql -I/usr/include/python2.5 -c _mysql.c -o build/temp.linux-x86_64-2.5/_mysql.o -DBIG_JOINS=1 -fPIC
      gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-x86_64-2.5/_mysql.o -L/usr/lib/mysql -lmysqlclient_r -o build/lib.linux-x86_64-2.5/_mysql.so
    I don't want to run python setup.py install, because I'm afraid it's going to screw up MySQLdb on 2.5 -- should I? I imagine it'd just overwrite 2.5 and do nothing for 2.6 -- maybe there's an argument I can use to install to 2.6? I imagine that I would also need to build against 2.6, so how do I do this?
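
    A hedged sketch of the usual fix: distutils builds and installs against whichever interpreter runs setup.py, so invoking the 2.6 binary built from source leaves the packaged 2.5 install untouched. The /usr/local prefix is an assumption (the default for a source build), and the MySQL client headers already used for the 2.5 build are reused:

      cd MySQL-python-1.2.3
      /usr/local/bin/python2.6 setup.py build
      /usr/local/bin/python2.6 setup.py install
      # sanity-check each interpreter separately
      /usr/local/bin/python2.6 -c "import MySQLdb; print MySQLdb.__version__"
      python2.5 -c "import MySQLdb; print MySQLdb.__version__"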

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old is a process of trial and error. Furthermore, if at some point the system is damaged the only option is to go back to the most recent full backup, and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile. Are there some better methods or tools that I'm missing?
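
    Without endorsing one tool, a hedged sketch of etckeeper, which targets several of the pain points listed: it runs the VCS as root, hooks the package manager so installs and upgrades are committed alongside the /etc changes they cause, and can be told to ignore files holding secrets:

      apt-get install etckeeper      # packaged for Debian/Ubuntu; other distros vary
      etckeeper init                 # puts /etc under version control (git by default)
      etckeeper commit "initial state"
      # package operations are auto-committed via the apt hooks; manual edits are committed explicitly:
      etckeeper commit "sshd_config: disable password authentication"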

    Read the article

  • Is it possible (and how, if it is) to dump two concatenated disks onto a new disk using dd?

    - by pedromarce
    Hi, I have a LaCie enclosure set up with 2 x 500 GB disks configured as one 1 TB drive; the only partition created on the whole drive is HFS+ journaled. The controller in the enclosure is gone, so the drive refuses to mount anymore. I have been able to remove the two disks from the enclosure, connect them over USB, and check with a program called R-Studio (a RAID recovery program) that the setup the enclosure's controller was using was both disks concatenated (not striped). By configuring that option in R-Studio I would be able to get back all the information. But rather than buying an R-Studio license for just one use, I would rather buy a new 1 TB disk and write all the information from those two disks onto the new one. I can use Mac or Linux machines to do it, and I think it should be OK to use the dd command in Linux to concatenate those two drives onto the new one, in the right order, to get it working again on the new disk; I would then reformat the old ones. But I am not sure. So, is it possible in this scenario to write both disks onto a new one using dd? Any hints on how the command would look? Thanks,
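
    A hedged sketch of the copy itself, assuming /dev/sdb is the first half of the concatenation, /dev/sdc the second, and /dev/sdd the new disk of at least 1 TB; the device names are placeholders and must be double-checked (e.g. with fdisk -l or dmesg) before running anything, since writing to the wrong device is destructive:

      # stream both halves, in order, onto the new disk
      cat /dev/sdb /dev/sdc | dd of=/dev/sdd bs=4M
      # then see whether the partition map / HFS+ volume shows up
      partprobe /dev/sdd
      fdisk -l /dev/sdd

    cat preserves the byte order and dd just provides a sensible write block size; the same thing can be done with two dd invocations using seek, but getting the offset exactly right is where mistakes tend to happen.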

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux; we also do not have a router as of yet and were looking into making one of the servers act as our router/DHCP etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network, it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up. So I'm reaching out to see what everyone knows as far as how groups have handled initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel since this is a common challenge.
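
    A hedged sketch of one common pattern for Red Hat-style machines with no internet: keep a local yum repository on the central server, feed it from the external drives, and point every node's .repo file at it. Paths, the repo name and the hostname are placeholders:

      # on the central server: drop RPMs from the external drive into the repo and (re)index it
      cp /mnt/external/rpms/*.rpm /srv/repo/Packages/
      createrepo /srv/repo
      # serve /srv/repo over http or nfs, then on each node create /etc/yum.repos.d/local.repo:
      #   [local]
      #   name=Local offline repo
      #   baseurl=http://central/repo
      #   gpgcheck=0
      yum clean all && yum update

    Kickstart for the initial installs and something like Puppet (already mentioned) for ongoing configuration both work fine against a repository like this, since neither strictly needs internet access.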

    Read the article

  • 3 Servers, is this a cluster scenario?

    - by HornedBeast
    Hello, at the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and mail. I also have 2 other servers of exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster with load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Windows Server 2008 (x64): Won't boot past BIOS page

    - by WebSolProv
    Happy New Year. A month or so ago I inherited responsibility for small-network administration, for my sins. The domain controller (yes, there is only one, and yes, I know it is best practice to have two even in a small domain setup) went down overnight, and I have been trying all day to get it back up and running. Unfortunately this machine also administers our entire Active Directory setup.
    1) It goes through the BIOS without any errors, nothing whatsoever.
    2) It gets to the "safe mode, safe mode with networking, normal" selection screen; if you select either of the safe mode options it loads a few files then reboots. If you select normal it just runs for a bit (doesn't get to the Windows splash screen) and then reboots again.
    3) If you select Windows repair, it asks for an image to repair to; however, it would appear that none was taken that can be used (!!) or one is not being shown.
    4) I have tried repairing the boot sector and the boot configuration using Bootrec.exe; both say they completed successfully, but still it doesn't work.
    5) I have tried swapping the drives into another server to rule out hardware, and that didn't work either, so clearly it's the OS.
    6) I have tried running chkdsk, which ran fine, and also a memory check, which was also fine.
    We do have another machine on the network that was installed as a DC for when we decommission the current infrastructure, but when I try to "promote" this to the lead DC I get "you cannot modify domain or trust information because a PDC emulator cannot be contacted", so I am unable to replicate the Active Directory details. If anyone can think of any direction I should follow it would be greatly appreciated. Thanks, Alex
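
    If the old DC really cannot be recovered, a hedged sketch of the standard last resort: seize the FSMO roles (including the PDC emulator) onto the surviving DC with ntdsutil, run on that DC. Only do this if the old machine will never rejoin the network, and its account should afterwards be removed with a metadata cleanup. The server name is a placeholder, and the lines starting with # are annotations, not ntdsutil input:

      ntdsutil
        roles
        connections
          connect to server SURVIVING-DC
          quit
        seize pdc
        # repeat for the other roles if they also lived on the dead DC:
        # seize rid master / seize infrastructure master / seize naming master / seize schema master
        quit
      quit

    Seizing roles is disruptive and worth reading up on first (Microsoft KB 255504 covers the procedure), but it is what the "PDC emulator cannot be contacted" error is pointing at.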

    Read the article

  • Is there any equivalent of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to our Subversion repository. The Subversion repository is a single large repository which consists of several related projects:
      svn-root
      |-- project1
      |   |-- branches
      |   |-- tags
      |   `-- trunk
      |-- project2
      |   |-- branches
      |   |-- tags
      |   `-- trunk
      `-- project3
          |-- branches
          |-- tags
          `-- trunk
    Because files sometimes need to move between different projects, I don't want to break the repository into separate ones. I'm going to use git-svn to set up a git front-end, but I don't see how exactly to map the svn structure to git. The two systems treat branches and tags very differently and I doubt it is possible. To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories just sit there. But this will definitely result in too many files in the branches and tags directories. In Subversion, it's easy to just set the checkout depth to immediates, which will only check out the branch/tag names without the directory contents, but I don't know if this can be done in git. git-svn has me confused. I hope there's a more elegant solution.
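
    As far as I know git has no direct counterpart to --depth immediates, but a hedged sketch of the usual compromise: clone each project with git svn's layout options so its branches and tags become remote refs (names only) instead of checked-out directories. The URL is a placeholder, and one git repository per project is assumed:

      git svn clone http://svn.example.com/repo \
          --trunk=project1/trunk \
          --branches=project1/branches \
          --tags=project1/tags \
          project1-git
      cd project1-git
      git branch -r    # former svn branches and tags, listed by name, no working copies

    Moves between projects would still be done on the Subversion side, which keeps the single svn repository intact as asked.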

    Read the article
