Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.). The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx. A php-fpm pool configuration for a host looks like this:

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    'x' is incremented for each pool. The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html;
            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
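
    A plausible culprit, offered as a hedged sketch rather than a confirmed fix: with chroot = /var/www/domain.name, the PHP-FPM worker resolves SCRIPT_FILENAME inside the jail, so the host path /var/www/domain.name/public/... that nginx sends does not exist from the worker's point of view. Passing a path relative to the chroot root would look like this:

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_index index.php;
            # /public is the document root as seen from inside the chroot
            fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9001;
        }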

    Read the article

  • Dual Monitor Setup with ATI Radeon HD 5700 results in unusable desktop (Win7)

    - by NorthPole
    I have an ATI Radeon HD 5700 card which I've been using under fully updated Windows 7 with its latest drivers. My monitor is a 2004 NEC LCD1703M which, despite being pretty old, runs fine. A friend gave me an IIYAMA ProLite E1900WS monitor (2009 or 2010). Both monitors are VGA only. I've been using a DL-DVI to VGA adapter to connect my old monitor, and tested the same adapter with the new monitor, where it worked fine. So I bought an HDMI to VGA adapter with the purpose of having a dual monitor setup. But when both screens are connected to the card, the following problem occurs: the monitor connected to the HDMI port cycles between sleep and a black screen, while the other shows the operating system for about two seconds before going black for another two seconds. I can "use" the computer (move the mouse, click, type, etc.) while this happens, but it's not something pleasant. I tried reinstalling the driver and booting with both screens connected (in which case the power-up messages and the BIOS are mirrored on both screens until I get to the login screen, where everything falls apart). Funny thing is, everything works if I disconnect the ATI graphics card and use the onboard Intel one. So, any suggestions as to what might be the problem and how I can fix it?

    Read the article

  • Remote installation and configuration software, preferably open source

    - by Brimstedt
    Hello, I'm looking for some remote-installation software. I've looked briefly at Unattended, OPSI and a bunch of other stuff, but there is nowhere near enough time to evaluate them all, and they are rather complex to set up, so some insight would be very appreciated. This is foremost for Windows clients, but Linux support would be good. Something like apt-get would've been great.

    Requirements:

    - Simple to set up and use
    - Set up groups of users (developers, management, sales, etc.)
    - Choose which software is installed for different groups
    - Add new software to a group and have it automatically installed on the clients
    - Dependencies between software

    Nice-to-have:

    - Linux client support
    - OS-unattended installation

    Thanks in advance

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There's a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client. (Hopefully Mac OS X soon...) I want to set up encryption for home dirs on the NAS so that:

    - it does not interfere with the boot process (since the NAS is tucked away in a cupboard),
    - the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
    - it is easy to use by 'normal' people (so it does not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally unmounting the partition after being done; I can't explain that to my mom, or in fact to anyone),
    - it does not store the encryption key on the NAS itself,
    - it encrypts file metadata and content (i.e. it is safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).

    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the plain, and hence I can't configure PAM to use the incoming password to mount the encrypted partition. So... anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?
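
    For reference, the usual tool for "mount an encrypted volume with the user's login password" is pam_mount; a minimal sketch of a volume entry follows (the device path and per-user naming are assumptions, and, as the question itself notes, this only helps for services that hand PAM the plaintext password, which stock SMB challenge-response authentication does not):

        <!-- /etc/security/pam_mount.conf.xml -->
        <volume user="*" fstype="crypt"
                path="/dev/vg0/home_%(USER)"
                mountpoint="/home/%(USER)" />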

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section:

        # Since our users may be connected for more than 24 hours at a time we keep
        # this in here, it will reset some attributes daily so that the accounting
        # packets work correctly
        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data?

    EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration. If the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced due to the always module. Is there anything special I need to configure in the Mikrotik APs or freeradius 2 for clients connected 24/7?
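
    One hedged pointer, based on the stock FreeRADIUS 2 layout (paths vary by distro): in version 2 the module definitions moved out of radiusd.conf into separate files under raddb/modules/, so the always instances typically still exist, just in a file of their own:

        # e.g. /etc/freeradius/modules/always (Debian-style path; an assumption)
        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
        }

    They are utility modules that do nothing except return a fixed result code, so by themselves they are unlikely to explain missing accounting rows.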

    Read the article

  • Gerrit ssh key setup on windows server

    - by hotpotato
    I am attempting to configure Google's 'Gerrit' code review web app on a Windows Server 2008 virtual machine on our internal network. We are using Apache Tomcat (6.0.36) to host the web app and have deployed the gerrit.war to Tomcat's webapp folder, and set up the context.xml, web.xml etc. for the web app correctly, I believe. However, when I start Tomcat using $CATALINA_HOME/bin/startup.bat I get the following message in the Tomcat logs:

        Dec 07, 2012 1:03:54 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class com.google.gerrit.httpd.WebAppInitializer
        com.google.inject.CreationException: Guice creation errors:

        1) No SSH keys under C:\Gerrit\config\etc
           while locating com.google.gerrit.sshd.HostKeyProvider
           at com.google.gerrit.sshd.SshModule.configure(SshModule.java:90)

    I have created an id_rsa.pub SSH key and placed it in the specified directory, to no avail. I have been googling this for about a week now and can't seem to find any information about the file or format it is expecting... documentation on setting Gerrit up on Windows seems hard to come by! Can anyone provide useful information about how to correctly configure a host SSH key in this context?
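
    A hedged pointer, inferred from the HostKeyProvider in the error rather than from Windows-specific documentation: Gerrit is looking for the server's own host key pair (private key included) in the site's etc/ directory, not a user public key, and the stock file names are ssh_host_rsa_key / ssh_host_dsa_key. Something along these lines should satisfy it (the path assumes the site directory is C:\Gerrit\config):

        ssh-keygen -t rsa -N "" -f C:\Gerrit\config\etc\ssh_host_rsa_key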

    Read the article

  • Internet slowed down because of SQUID Server setup

    - by Ranjith Kumar
    Recently I set up a squid server for our office. I have computer (A) with two ethernet cards, one for the internet and the second one for the local network. It has Ubuntu Server OS with squid-server and dhcp3-server installed. I have added a few iptables rules to work like a router and redirect all http traffic to port 3128. This link is my reference. Everything worked fine for 2 days. All of a sudden internet speed went down drastically. When I connected the internet cable to my laptop to test the internet speed it was fine. Again when I reconnected it back to computer A everything was normal. This happened 4 times in a week. Could anyone here please help me figure out why the internet speed is going down, and why it becomes normal when I reconnect the cable?

    EDIT: Rebooting the system (computer A) didn't make a difference. I have changed iptables so that http traffic doesn't redirect to port 3128 any further, and there is still no change in the internet speed. I think the problem is not with squid but with something else. Here are my iptables rules:

        SQUID_SERVER="10.1.1.1"
        INTERNET="eth1"
        LAN_IN="eth0"
        SQUID_PORT="3128"
        PROXYSERVERS=(Atlanta Baltimore Boston Chicago Dallas Denver Houston KansasCity LosAngeles Miami NewYork Philadelphia Phoenix SanAntonio SanDiego SanJose Seattle Washington)
        SERVERLEN=${#PROXYSERVERS[*]}
        I=0
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT
        while [ $I -lt $SERVERLEN ]; do
            iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d ${PROXYSERVERS[$I]}.wonderproxy.com --dport 80 -j ACCEPT
            let I++
        done
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        iptables -A INPUT --protocol tcp --dport 80 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 443 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 22 -j ACCEPT
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP
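
    One hedged thing worth checking when a NAT/proxy box degrades gradually and recovers when the link is cycled (a suggestion, not a confirmed diagnosis): connection-tracking table exhaustion, since every forwarded flow occupies a conntrack slot.

        # compare current tracked connections against the limit
        cat /proc/sys/net/netfilter/nf_conntrack_count
        cat /proc/sys/net/netfilter/nf_conntrack_max
        # on older kernels the files live under /proc/sys/net/ipv4/netfilter/ instead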

    Read the article

  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP). I would like to be able to send email from each site's contact page through postfix to my Google Apps hosted email (different accounts for each site), so that Apps can then send out an auto email to the person filling in the contact form, etc. I have a bare-bones postfix installation with the following added into the main.cf config file, from using this guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (Both of these files have been converted into .db files using postmap.) I have configured DNS correctly for each site and set up SPF records. (I'm aware reverse DNS will still reference my actual hostname, not the domain name, and cause a possible spam issue, but one thing at a time.) I can telnet localhost and helo localhost, and send a command-line email from an address in virtual_alias_domains to an email in the virtual_alias_maps file; it sends without giving an error, but it is delivered to my local Linux account, not the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible to do in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
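
    For reference, a minimal sketch of what the two lookup tables might contain (domains and addresses are placeholders, not taken from the question):

        # /etc/postfix/virtual_alias_domains
        site-one.example    OK

        # /etc/postfix/virtual_alias_maps
        contact@site-one.example    user@apps-domain.example

        # rebuild the hashes and reload after any edit
        postmap /etc/postfix/virtual_alias_domains
        postmap /etc/postfix/virtual_alias_maps
        postfix reload

    If the right-hand side of a mapping resolves to a domain listed in mydestination, postfix will deliver it locally instead of relaying, which would match the symptom described.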

    Read the article

  • Postfix / Dovecot email setup not storing email

    - by Nick Duffell
    I'm trying to set up postfix / dovecot on my Debian server to use it as a mail server. I set everything up according to a tutorial on the net, and it all seemed OK. I can send emails from it, so SMTP is not a problem; however, I cannot receive emails. Looking into the files in /home/nick/mail/ I can see that if I send an email to myself (from the server, to itself) the emails are there, but are put straight into the Deleted Messages folder. I don't know why this is. When I send an email from another mail account (not on this server), the emails are nowhere to be found. Also, looking at the log file /var/log/mail.log all seems to be OK; I get the following when I receive an email, which looks OK to me:

        Nov 7 22:47:22 nickduffell postfix/local[17825]: 05B1173581A6: to=, relay=local, delay=0.37, delays=0.31/0.02/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox)

    Any ideas? Thanks

    EDIT: I should also add that although the emails I send myself are in the Deleted Messages folder, and in my mail client I can see that "Trash" has 3 items, I cannot download them in my mail client...
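
    One hedged thing to rule out (an assumption about a common misconfiguration, not a diagnosis from the log above): postfix and dovecot disagreeing about where the mailbox lives, so mail is "delivered to mailbox" in one place while dovecot serves folders from another. The two settings that have to line up look roughly like this:

        # /etc/postfix/main.cf -- deliver into a Maildir under the home directory
        home_mailbox = mail/

        # dovecot config (dovecot.conf in 1.x, conf.d/10-mail.conf in 2.x)
        mail_location = maildir:~/mail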

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains atm (and hopefully will grow larger) and hosts both production and development sites, with this directory structure:

        domains/
        domains/domain.ext/                                  # FTPS chroot for user domain.ext
        domains/domain.ext/public                            # the DocumentRoot of http://domain.ext
        domains/domain.ext/logs
        domains/domain.ext/subdomains/sub.domain.ext
        domains/domain.ext/subdomains/sub.domain.ext/public  # DocumentRoot of http://sub.domain.ext

    Each domain.ext vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this:

        ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log"
        CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined

    Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain error log, and I don't really want to give them admin rights to look into /var/logs. Having the logs available in the FTP account is REALLY handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about 3 security issues:

    - Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing?
    - Log DoS: the logs are on the same partition as all the domains. We've got hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice?
    - File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty enough to handle 2 log files per subdomain. Is it? Am I missing something?

    I hope to read some thoughts on the matter!
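
    On the logrotate point, a minimal sketch of a short-cycle policy (the path and limits are assumptions; copytruncate keeps the piped tee processes writing to the same file without an Apache restart, since tee -a opens in append mode):

        # /etc/logrotate.d/vhost-logs
        /path/to/domains/*/logs/*.log {
            daily
            rotate 14
            compress
            missingok
            notifempty
            copytruncate
        }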

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4 bedroom two-story unit. It's roughly 2,200 sq ft. I want the absolute max throughput possible to be achieved in all focal points. We're all in internet-related industries. Between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:

    1) Garage (office), downstairs
    2) Each bedroom x4, upstairs
    3) Living room, downstairs

    The fastest line we can get is Comcast 50 Mb down / 5 Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we also may bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too.

    My concerns: If we do only wireless there would be too much latency for gaming. I don't know if placing one D-Link DGL-4500 on the top floor would be enough; I currently own one (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals transfer best top-down. Would this wireless router be enough on the top floor, and that's it? My second strategy was a combination of wiring and wireless, but I'm not sure what's the easiest way to do this. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent... if we can't drill through a wall we can probably "stitch" the cables across the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took:

    Server: Gentoo
    Client: Mac OS X

    1) git install:

        emerge dev-util/git

    2) gitosis install:

        cd ~/src
        git clone git://eagain.net/gitosis.git
        cd gitosis
        python setup.py install

    3) added git user:

        adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

    In /etc/shadow now:

        git:!:14665::::::

    4) On the local computer (Mac OS X; local login is ipx, server login is expert):

        ssh-keygen -t dsa

    got 2 files:

        ~/.ssh/id_dsa.pub
        ~/.ssh/id_dsa

    5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub and added the content of ~/.ssh/id_dsa.pub into the file ~/.ssh/authorized_keys:

        cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
        sudo -H -u git gitosis-init < /tmp/id_rsa.pub
        sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

    6) Added 2 params to /etc/ssh/sshd_config:

        RSAAuthentication yes
        PubkeyAuthentication yes

    Full sshd_config:

        Protocol 2
        RSAAuthentication yes
        PubkeyAuthentication yes
        PasswordAuthentication no
        UsePAM yes
        PrintMotd no
        PrintLastLog no
        Subsystem sftp /usr/lib64/misc/sftp-server

    7) Local settings in file ~/.ssh/config:

        Host myserver.com.ua
        User expert
        Port 22
        IdentityFile ~/.ssh/id_dsa

    8) Tested:

        ssh [email protected]

    Done!

    9) Next step. There I have a problem:

        git clone [email protected]:gitosis-admin.git
        cd gitosis-admin

    SSH asked for the password of user git. Why is ssh asking me to log in as user git with a password? The git user doesn't have a password. The ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?
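
    A hedged observation rather than a confirmed answer: a password prompt on git clone usually means sshd rejected public-key authentication for the git user and fell back. Note that step 5 copies id_dsa.pub to /tmp but pipes /tmp/id_rsa.pub into gitosis-init; if that is not just a transcription slip, gitosis never stored the real key. Connecting as git is actually correct for gitosis, since the key, not the login name, identifies the user; under that assumption, the fix and test would look like this (host name as in the question):

        # re-seed gitosis with the key that actually exists
        sudo -H -u git gitosis-init < /tmp/id_dsa.pub

        # then clone as the git user, authenticated by that key
        git clone git@myserver.com.ua:gitosis-admin.git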

    Read the article

  • Does it actually matter whether you have open applications when installing new software?

    - by Dan
    It seems the norm these days is for installers/setup programs to request that you close all open applications before initiating the install process for a piece of new software. I used to obediently follow these directions without fail, even though it could sometimes be frustrating having to close open documents and stop working on things just to get a new, seemingly unrelated application installed. Then at some point I simply stopped bothering. Nowadays if I have a lot of stuff going on I might even run multiple installers at the same time; I can't even recall a time it has ever posed a problem. Why do setup programs even make this request in the first place, then, when it appears to be unnecessary? Is this just to simplify troubleshooting for companies' support people? Has anyone else ever run into problems as a result of trying to install an app while other apps were open?

    Read the article

  • simple apache2 reverse proxy setup not working

    - by Nick
    I know what a proxy is (very high level); it's just that I have never set one up, and it feels like I might be missing some big fat point here. My setup:

    - client
    - server (static IP), runs apache on port 80
    - proxy (has 2 network cards, one on the clients' network, the other with a static IP on the server network), runs apache on port 80

    I am trying to configure these three machines so that when the client requests http://proxy/machine1, it gets served the server's pages at the server root URL, i.e. http://server/. I can access client pages just fine. However, when I try accessing a page from the client machine, it simply gets redirected to the server's IP address, which it clearly can't access since they are not on the same network:

        ...
        <meta http-equiv="REFRESH" content="0;url=http://server/machine1"></meta>
        <title>Redirect</title>
        ...

    My apache2 config is:

        LoadModule proxy_module /modules/mod_proxy.so
        LoadModule proxy_http_module /modules/mod_proxy_http.so

        ProxyRequests off

        <Proxy *>
            Order Allow,Deny
            Allow from all
        </Proxy>

        ProxyPass /machine1 http://server:80

        <Location /machine1>
            ProxyPassReverse /
        </Location>

    What gives? Thanks!
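
    A hedged sketch of the usual fix, assuming the backend is reachable from the proxy as http://server/: give ProxyPassReverse the backend URL so redirect headers are mapped back to the proxy path, and keep the trailing slashes consistent:

        ProxyPass        /machine1/ http://server:80/
        ProxyPassReverse /machine1/ http://server:80/

    One caveat: ProxyPassReverse only rewrites HTTP response headers (Location, Content-Location). The redirect shown above is a <meta http-equiv="REFRESH"> tag inside the HTML body, which mod_proxy will not touch; rewriting that needs something like mod_proxy_html, or the backend has to emit relative URLs.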

    Read the article

  • Using Round Robin DNS on simple VPN setup

    - by dannymcc
    We have two internet connections which are load-balanced to share the load between the two. We set this up after one of the internet providers proved to be less than reliable, despite being great speed- and latency-wise when it is working. We'd rather utilise both connections as much as possible than leave one idle until the other drops out. We have a number of remote workers who occasionally need to connect via VPN from their laptops or iPads; we also have a small number of permanent LAN-to-LAN tunnels running from smaller branches. Originally we only had one internet connection and used one of our static IP addresses for all VPN users. Now that we have two internet connections running all of the time, I am trying to make sure that the VPN is available to our team regardless of which connection drops. So my solution is to create two A records for our domain with the name vpn. and the two static IP addresses, one from each peer. Is this a sensible way of achieving this? Should I expect higher latency due to packets being lost if one peer fails and some packets still get routed to it anyway? A brief mockup of the setup I have: [network diagram not reproduced here]
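
    For illustration, a sketch of what the proposed round-robin records would look like (the zone name and addresses are placeholders; a short TTL matters here, since it bounds how long clients keep resolving the dead peer after a failure):

        vpn.example.com.    300    IN    A    203.0.113.10
        vpn.example.com.    300    IN    A    198.51.100.20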

    Read the article

  • Setup Version Control on Dreamweaver

    - by John Isaacks
    I have a Windows computer on the network called WIN2K8FS1. I have TortoiseSVN on a Windows computer, and when I go to check out a repository with Tortoise it asks me for the URL of the repository. I put in:

        file://WIN2K8FS1/Media/SVN_repo

    and it creates the working copy. I am trying to set up Dreamweaver CS5 to work with Subversion. I create a new site, go to the Version Control tab, and it asks for a lot of info:

    - First is Access. I choose Subversion, since that is the only option.
    - Second is Protocol. Not sure which I need, so I go with HTTP?
    - Third is Server Address. I am assuming this is the name of the computer with the repository, so I put in \\WIN2K8FS1\
    - Fourth is Repository Path. I put in /Media/SVN_repo
    - Fifth is Port, which I leave at the default of 80.

    Then it asks for a user name and password. I never set one up for anything, so I put in my domain username and password. I click Test and it tells me: "Server and project are not accessible!" I am not sure what I am doing wrong. I am not the server admin, but I did create the repository and have access to it via Tortoise. So I am not sure what I am doing wrong in Dreamweaver.
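
    A hedged note on the likely mismatch: file:// access needs no server process at all, but Dreamweaver's HTTP option expects the repository to be published by a server (Apache with mod_dav_svn, or svnserve for the SVN protocol). A minimal sketch of the svnserve route (the local path behind the share is an assumption):

        rem on WIN2K8FS1, serve the existing repository over svn:// (port 3690)
        svnserve -d -r D:\Media\SVN_repo

    Dreamweaver would then use protocol SVN, server WIN2K8FS1, repository path /, and port 3690.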

    Read the article

  • Setup 2003 R2 RADIUS server to work with Vista/Seven

    - by Fox
    Hi all, I'm currently trying to configure my 2003 R2 server's RADIUS module to let WiFi clients authenticate through my Active Directory. The RADIUS server uses MS-CHAP v2 as the encryption method. I've got several access points running DD-WRT, configured to use WPA2-Enterprise security against the RADIUS server. Everything is set up, and almost working. When I say almost working, I mean I can log in using my AD credentials on my iPod, or even on a MacBook running OS X; Windows XP also works with a few little tweaks in the connection properties. The problem is Windows Vista or Windows Seven client computers that are not inside the domain. It doesn't work at all; it doesn't even prompt for user/password/domain. I already installed the patch for IAS to make certsrv compatible with Vista and Seven, but it still doesn't work. Has anyone encountered the same issue? I've been searching for a solution for several days already and still haven't found anything; it looks like many people have the same issue too. Thanks all for your eventual answers.

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I'm looking to build a multi-purpose home development server. In this post I'm looking to outline what I want from such a system, the 'why's of it to some limited extent, and finally some rudiments of how I'm looking to go about that. I'm mostly a developer, with just about some sysadmin familiarity. So please excuse, correct me, and advise on any ignorance which would come across in the following ;-)

    It will serve the following goals to start with:

    1. NAS (looking at using ZFS)
    2. Source control repo, e.g. a Git server
    3. Database, e.g. a MySQL server
    4. Continuous integration, e.g. a Hudson server
    5. Other stuff as and when it comes up, e.g. RabbitMQ etc.
    6. A development sandbox to play around with new stuff

    I want to achieve as high an uptime for 2-5 as possible. They should run as independent services and with minimal maintenance (e.g. TurnKey Linux appliances). I'm thinking of running them as individual Xen DomUs. Then maybe the NAS can be a Dom0 and 6 can be another DomU. The user for this would be mostly me. I can see 2-4 being sometimes used by 2-3 users, but that would be infrequent. I'm looking for a repeatable setup. Ideally I'd like to automate this setup through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN or on the go.

    My queries are:

    - Is putting 1-6, all of them, on a single box a good idea? If so, what kind of hardware should I be looking at for a low-cost, low-power setup?
    - Although not at present, in future I might be looking at adding audio/media servers to the mix. Would that impact the answer to the first question?
    - I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use?
    - I had a look at the SheevaPlug and was wondering if I could split off the NAS on its own using that, but ruled it out preliminarily due to its reported heating issues. Is it something I should still consider?

    Thanks in advance

    Read the article

  • Dual monitor setup issues between laptop and external LED monitor

    - by Julian
    I have two challenges:

    1. The monitor will not connect to the laptop through HDMI.
    2. Watching HD video content sometimes causes the laptop to turn off, especially when I'm streaming from tekzilla.com.

    Setup: I got my new HP 2311x LED LCD monitor this week and I have it running as the main monitor, extended by the 15-inch screen on my Dell Studio 1558. Right now I have to connect the external monitor through VGA. For the HDMI connection issue, I suspect that the appropriate drivers are not installed, because I don't see any HDMI device in the device manager. I've checked, and I don't see any HDMI-specific drivers listed online. For the shutdown issue, I suspect the laptop might be overheating. Not sure why it would; it never did that while I watched movies on my laptop's built-in screen.

    My laptop configuration:

    - 15" LED LCD screen at 1366 x 768
    - Intel i5 processor
    - integrated graphics card
    - 4 GB DDR3 RAM
    - 500 GB hard drive

    I've tried everything from switching the source on my monitor to HDMI to various start-up combinations, and nothing has worked. What could be the issue and how do I solve it?

    Read the article

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers, and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get some magical cloud setup running using StackOps and Crowbar, but it started to look like they were offering way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 server + KVM + Ganeti + DRBD, unless you can come up with a suggestion for a better solution that we have missed.

    Requirements:

    - Installation should be simple, or at least understandable without being in the dev team
    - A browser interface for creating and managing VMs is a nice bonus
    - A single node's hardware failure should cause minimal downtime for the VMs that were running on that node
    - Adding more nodes should be possible without shutting down the VMs
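
    For a sense of what the Ganeti + DRBD route looks like in practice, a hedged sketch (instance and node names are placeholders, and exact flags vary between Ganeti releases):

        # create a VM whose disk is DRBD-mirrored between two nodes
        gnt-instance add -t drbd --disk 0:size=20G -o debootstrap+default \
            -n node1:node2 vm1

        # after node1 dies, restart the VM from the mirror on node2
        gnt-instance failover --ignore-consistency vm1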

    Read the article

  • Connection refused in ssh tunnel to apache forward proxy setup

    - by arkascha
    I am trying to set up a private forward proxy on a small server. I mean to use it during a conference to tunnel my internet access through an ssh tunnel to the proxy server. So I created a virtual host inside apache-2.2 running the proxy, the proxy_http and the proxy_connect modules. I use this configuration:

        <VirtualHost localhost:8080>
            ServerAdmin xxxxxxxxxxxxxxxxxxxx
            ServerName yyyyyyyyyyyyyyyyyyyy

            ErrorLog /var/log/apache2/proxy-error_log
            CustomLog /var/log/apache2/proxy-access_log combined

            <IfModule mod_proxy.c>
                ProxyRequests On
                <Proxy *>
                    # deny access to all IP addresses except localhost
                    Order deny,allow
                    Deny from all
                    Allow from 127.0.0.1
                </Proxy>
                # The following is my preference. Your mileage may vary.
                ProxyVia Block
                ## allow SSL proxy
                AllowCONNECT 443
            </IfModule>
        </VirtualHost>

    After restarting apache I create a tunnel from client to server:

        #> ssh -L8080:localhost:8080 <server address>

    and try to access the internet through that tunnel:

        #> links -http-proxy localhost:8080 http://www.linux.org

    I would expect to see the requested page. Instead I get a "connection refused" error. In the shell holding open the ssh tunnel I get this:

        channel 3: open failed: connect failed: Connection refused

    Anyone got an idea why this connection is refused?
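
    A hedged first thing to check (an assumption, since the rest of the config looks plausible): "connect failed: Connection refused" reported by ssh means nothing on the server is accepting connections on port 8080 at all, which points at Apache not listening there rather than at the proxy rules. The <VirtualHost> only selects among addresses Apache already listens on, so ports.conf (or httpd.conf) needs a matching directive:

        Listen 127.0.0.1:8080

    A quick way to verify on the server: netstat -tlnp | grep 8080 (or ss -tlnp), then curl -x localhost:8080 http://www.linux.org locally before testing through the tunnel.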

    Read the article

  • My computer resets when I try to install Windows 7 (USB/DVD)

    - by Ranhiru
    I had a troublesome piece of software starting up when Windows 7 was starting, and I had no way to remove it because I couldn't log in to Windows. Not even safe mode. I used Hiren's Boot CD to edit the registry to stop that software from starting up... but my efforts were not useful, as Windows kept going to an ultimate freeze just before the login screen should have been shown. It just freezes and I cannot do anything. So I decided, heck with it, I now want to format the disk and reinstall a fresh copy of Windows 7. But now, regardless of whether I use USB or DVD to boot the setup, it loads fine until you see the animating Windows logo, and then it just resets the laptop! Just plain resets! Hiren's Boot CD and Mini XP, strangely enough, still work :( Any ideas what might be causing the laptop to reset in the Windows 7 setup?

    Read the article

  • Recovering drive via boot to Win7 setup command prompt

    - by Valamas
    I am trying to recover data from two old IDE drives. Drive1 has been successful, but something is wrong with Drive2: it does not appear as a drive letter. Due to limited legacy hardware, the only way I can see these drives is to boot using the Windows 7 setup disc and go to the command prompt. Without going further into why, my question is how I can access the data from this command prompt. I discovered the DISKPART command, and while I am a first-time user, it looked like something that could fix my problem. Below are the results of my diskpart commands; at the bottom is an image of the commands taken with a camera. Drive2 is present, because when using diskpart I can see it. How can I copy the information using a robocopy script if the drive letter is not available? How can I assign a drive letter? Is there any repair command I need to execute?

    When I execute DISKPART, the following is what I see:

        DISKPART> LIST DISK

          Disk ###  Status   Size   Free
          Disk 5    Online   37 GB  2048 KB

    So then I select disk 5:

        DISKPART> SELECT DISK 5

        Disk 5 is now the selected disk.

    When I list partitions:

        DISKPART> LIST PARTITION

          Partition ###  Type     Size
          Partition 1    Primary  101 MB
          Partition 2    Primary  37 GB

    So I select partition 2:

        Partition 2 is now the selected partition.

    I then try to assign a drive letter:

        DISKPART> ASSIGN LETTER=G

        There is no volume specified.
        Please select a volume and try again.

    When I list volumes, the drive is not present:

        DISKPART> LIST VOLUME

    [Photo of the commands and their output, taken with a camera, not reproduced here.]
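
    For reference, a sketch of the sequence that normally works when the volume does show up (the volume number is a placeholder; DISKPART's ASSIGN operates on a selected volume, not a selected partition, which is what the error above is saying):

        DISKPART> LIST VOLUME
        DISKPART> SELECT VOLUME 3
        DISKPART> ASSIGN LETTER=G

    That the partition exists but no corresponding volume is listed suggests the file system itself is damaged or unrecognized, in which case assigning a letter alone will not expose the data.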

    Read the article
