Search Results

Search found 18728 results on 750 pages for 'setup deployment'.


  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple). On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as the NAT gateway for the handful of clients in 192.168/24. The machine has two NICs: eth0 is internet-facing; eth1 is LAN-facing, 192.168.0.1, the default gateway for 192.168/24. The routing table is the two-NIC default without manual changes:
        Destination   Gateway       Genmask         Flags Metric Ref Use Iface
        192.168.0.0   0.0.0.0       255.255.255.0   U     0      0   0   eth1
        (externalNet) 0.0.0.0       255.255.252.0   U     0      0   0   eth0
        0.0.0.0       (externalGW)  0.0.0.0         UG    0      0   0   eth0
    NAT is then enabled only and merely by these actions; there are no other iptables rules:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # (all iptables policies are ACCEPT)
    This does the job, but I miss several things here which I believe could be a security issue: there is no restriction on allowed source interfaces or source networks at all, and there is no firewalling part such as:
        (set policies to DROP)
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    And thus, the questions of my sleepless nights are: Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently than an incoming internal connection (via eth1) as long as the output interface is eth0 - and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me if that's right, or why it is not. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know whether not using this option was indeed a security issue, or just irrelevant thanks to some mechanism I am not aware of. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above, or does the lack of them not affect security? Thanks for clarification!
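    Assuming the interface names and subnet described above, a minimal hardened variant might look like the following sketch (not a complete firewall, just the source restriction and stateful forwarding the question asks about):

        # Restrict masquerading to traffic that really originates in the LAN
        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

        # Default-deny forwarding, then allow LAN -> internet and replies back in
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/24 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT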

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100 MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged onto a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla - which looks difficult to install and invasive - and Mondo, which only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup packages? Any ideas? This must be a pretty standard problem for which Googling doesn't give an obvious answer.
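    A rough sketch of one low-tech approach, assuming SSH access as root and an external USB drive mounted on the backup host. Note that imaging a mounted, running filesystem this way is not crash-consistent, so it is safest on a quiet system or after stopping the databases; hostnames and device names below are examples only.

        # Pull a compressed raw image of one server's system disk over SSH
        ssh root@server01 "dd if=/dev/sda bs=64K conv=sync,noerror | gzip -c" \
            > /mnt/usb/server01-sda.img.gz

        # Restore later by booting the target into a live environment with sshd and reversing the pipe:
        # gunzip -c /mnt/usb/server01-sda.img.gz | ssh root@server01 "dd of=/dev/sda bs=64K"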

    Read the article

  • Routing between 2 different subnets on 2 different interfaces in SonicOS

    - by Chris1499
    I'm having a bit of a problem allowing traffic between two of my subnets. Here's the structure I've built. The X0 interface has our Windows server on it, which handles DHCP/DNS, etc. X1 has the WAN connection. The SonicWall is handling DHCP on X2. The X3 interface is connected to a different VLAN on the 48-port switch, and the SonicWall is handling DHCP on this network as well. So here's what I want to do. The network on X2 is for our guest wireless; I don't want it to be able to access any of the other networks, just the internet, so I have that all blocked in the firewall. No issues there. The X3 network is going to be for programmable controllers, and needs to be able to access the X0 network where our computers are. This is where my problem is. I'm not able to get between the 192.168.2.xxx and the 192.168.1.xxx networks on interfaces X0 and X3 respectively. I have these rules set up in the firewall. The LAN Primary Subnet is 192.168.2.0 on X0. So if I'm not mistaken, this will allow traffic between the two through the firewall. Now this is where I'm a little confused. Do I need to use NAT to get the traffic from X0 to go to X3 (and vice versa), or a static route, or both? Currently I have both, though I doubt they're done correctly (also in the screenshot). I've tried to ping between the two without luck. Any advice, or if you see what's wrong with my setup, is much appreciated. If you need more information, let me know. Thanks all! EDIT: So I found that I don't need either NAT or a static route; the setting in the firewall is enough. I can now ping from the 192.168.1.xxx network, however I can't access the server on the 192.168.2.xxx network. When I try to access it I get "An error occurred while reconnecting Z: to server Microsoft Windows Network: The local device name is already in use. This connection has not been restored." What am I missing?

    Read the article

  • Secure copy uucp style

    - by Alexander Janssen
    I often have the case that I have to make a lot of hops to reach the remote host, just because there is no direct routing between my client and the remote host. When I need to copy files from a remote host two or more hops away, I always have to:
        client$ ssh host1
        host1$ ssh host2
        host2$ scp host3:/myfile .
        host2$ exit
        host1$ scp host2:myfile .
        host1$ exit
        client$ scp host1:myfile .
    Back when uucp was still being used, this would be as simple as uucp host1!host2!host3 /myfile . I know that there's uucp over ssh, but unfortunately I don't have the proper privileges on those machines to set it up. Also, I'm not sure I really want to fiddle around with customers' machines. Does anyone know of a method for doing this task without the need to set up a lot of tunnels or deploy new software to the remote hosts? Maybe some kind of recursive script which clones itself to all the remote hosts and does the hard work for me? Assume that authentication takes place with public keys and that all hosts do SSH agent forwarding. Edit: I'm not looking for a way to automatically forward my interactive session to the next-hop host. I want a solution to copy files bangpath-style using scp via multiple hops, without the need to install uucp on any of those machines. I don't have the (legal) rights or the privileges to make permanent changes to the ssh config. Also, I'm sharing this username and these hosts with a lot of other people. I'm willing to hack up my own script, but I wanted to know if anyone knows of something which already does it. Minimally invasive changes to hosts on the bangpath, simple invocation from the client. Edit 2: To give you an impression of how this is properly done for interactive sessions, have a look at the GXPC cluster shell. This is basically a Python script which spawns itself onto all remote hosts which have connectivity and where your ssh key is installed. The great thing about it is that you can tell it "I can reach HostC via HostB via HostA." It just works. I want to have this for scp.
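    One client-side approach, sketched below, needs no changes on the intermediate hosts, only your own invocation or a personal ~/.ssh/config; it assumes a reasonably recent OpenSSH client with ProxyJump support and lets the ssh client chain the hops itself so scp copies straight from host3:

        # One-off: copy directly from host3, hopping through host1 and host2
        scp -o 'ProxyJump host1,host2' host3:/myfile .

        # Or make it permanent in ~/.ssh/config so plain "scp host3:/myfile ." works:
        # Host host2
        #     ProxyJump host1
        # Host host3
        #     ProxyJump host2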

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I setup a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in the question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This will persist until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice...any ideas? Update I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the operation not permitted error...
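    One workaround to consider, sketched below under the assumption that the Parallels share allows symlinks and that the project lives at a path like /media/psf/Home/code/myapp (hypothetical path): keep the source on the OS X share, but point Rails' tmp/ directory at a native directory inside the VM, so the asset cache directories get normal Linux permissions.

        # Run inside the Debian VM; paths are examples only
        mkdir -p ~/rails-tmp/myapp
        mv /media/psf/Home/code/myapp/tmp ~/rails-tmp/myapp-old 2>/dev/null
        ln -s ~/rails-tmp/myapp /media/psf/Home/code/myapp/tmp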

    Read the article

  • New hard drives failing within weeks

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me. The system was running Win XP on an Asus P5W-DH Deluxe, and I had set up a RAID-1 array. I started out with 2 x 500 GB 7200 RPM Western Digital drives. One died. I took it out to RMA it. On the same day, the router was fried. I assumed a power surge had occurred and connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it and the RAID array was rebuilt. A few days later, the other one died. I assumed the rebuild caused it to fail and took it out for RMA. Before the replacement arrived, the remaining one died. I then discovered I could re-enable them using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again. I got two new 1.5 TB 7200 RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and power supply. They both died again. The voltage on the plug is stable between 120 and 122 V as per the UPS. None of the other devices have had any problems (monitors, etc.). At this point, I see two options: a) an electrical issue in the house that was, for some reason, not blocked by the UPS; b) something else inside the system causing surges - the motherboard? the onboard RAID controller? Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I've just gotten a new computer (Core i7) to replace it. If it is stable, I can determine that b) was the problem. If it fries its hard drive again, I can determine that it is an electrical issue in the house. Do you have any other thoughts? Any tools I can run on the failed drives to get more information about the original SMART event history?
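    For reading the SMART history off the failed drives (for example from a Linux live CD with the drive attached), a sketch using smartmontools; the device name is an example:

        smartctl -a /dev/sda           # identity, attributes and the logged error history
        smartctl -t long /dev/sda      # start an extended offline self-test
        smartctl -l selftest /dev/sda  # review the self-test results afterwards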

    Read the article

  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain. On a different primary domain (same server), everything works ok. For example, after logging out, the site is directing to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts: BROKEN SUBDOMAIN <VirtualHost *:80> ServerName www.sub.domain.com ServerAlias sub.domain.com Redirect permanent / https://www.sub.domain.com </VirtualHost> <VirtualHost *:443> ServerAdmin [email protected] ServerName www.sub.domain.com ServerAlias sub.domain.com RailsEnv production # SSL Engine Switch SSLEngine on # SSL Cipher Suite: SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL # Server Certificate SSLCertificateFile /path/to/server.crt # Server Private Key SSLCertificateKeyFile /path/to/server.key # Set header to indentify https requests for Mongrel RequestHeader set X_FORWARDED_PROTO "https" BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 DocumentRoot /home/usr/www/www.sub.domain.com/current/public/ <Directory "/home/usr/www/www.sub.domain.com/current/public"> AllowOverride all Allow from all Options -MultiViews </Directory> WORKING PRIMARY DOMAIN <VirtualHost *:80> ServerName www.diffdomain.com ServerAlias diffdomain.com Redirect permanent / https://www.diffdomain.com </VirtualHost> <VirtualHost *:443> ServerAdmin [email protected] ServerName www.diffdomain.com ServerAlias diffdomain.com ServerAlias *.diffdomain.com RailsEnv production # SSL Engine Switch SSLEngine on # SSL Cipher Suite: SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL # Server Certificate SSLCertificateFile /path/to/server.crt # Server Private Key SSLCertificateKeyFile /path/to/server.key # Set header to indentify https requests for Mongrel RequestHeader set X_FORWARDED_PROTO "https" BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 DocumentRoot /home/usr/www/www.diffdomain.com/current/public/ <Directory "/home/usr/www/www.diffdomain.com/current/public"> AllowOverride all Allow from all Options -MultiViews </Directory> </VirtualHost> Please let me know if there's anything else I could provide that would help determine what's wrong here. UPDATE tried adding a trailing slash to the redirect command, but still no luck.
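    Two things worth checking here: browsers cache 301s aggressively, so the trailing-slash fix is best retested with curl or a fresh browser profile; and as an alternative, a mod_rewrite version of the port-80 vhost (sketch only, assuming mod_rewrite is enabled) forces the slash and preserves the requested path:

        <VirtualHost *:80>
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            RewriteEngine On
            RewriteRule ^/?(.*)$ https://www.sub.domain.com/$1 [R=301,L]
        </VirtualHost>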

    Read the article

  • Can I use one virtualbox disk for multiple machines?

    - by mxp
    I'm not sure what search term to use, and skimming through the VirtualBox manual didn't help me either, so I'll ask my two questions here. My setup is this: a PC with dual boot into Windows 7 and Debian (both 64-bit). I've created a virtual machine (Kubuntu, 64-bit) under Windows and put its VDI file on an SMB share on my NAS. Then I created a VM under Linux using the same settings for memory etc. and assigned the existing VDI file to it. My idea was that I could use that virtual machine from both Windows and Linux. (1) Is this generally something that should work without problems? I noticed that snapshots get me into trouble because they appear not to be visible from the other operating system: the snapshots I took after installing the guest system are not visible under Linux. That's why I shut down the VM after use and don't save its state while it's running. My current problem is this: I used the VM under Windows first, then under Linux. Now it will only start on Linux. When trying this on Windows, the guest OS detects some kind of hard disk error and fails to boot because it cannot mount its drive. Obviously the virtual hard disk itself can't have failed, so it must have something to do with me using it under Linux. (2) How can I fix that? Update: It also looks like any changes I made in the VM under Linux have been reset by trying to boot it under Windows. It looks like it's back to the latest snapshot. I'm confused... Update: The answer to my first question can be found below. In short: it works, as long as you don't use snapshots. The answer to my second question is this: under Windows, set the VM back to the latest snapshot and then discard the snapshot so it gets merged. There should be no snapshots left at the end. If you have multiple snapshots, discard the earliest ones first (Snapshot 1, then 2, 3, ...). I'm not sure what happens if you start at the end (..., 3, 2, 1). This of course leads to some data loss, since you revert all changes since the last snapshot, but at least the VM is usable again.
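    The snapshot cleanup described in the update can also be done from the command line; a sketch with an example VM and snapshot name:

        VBoxManage snapshot "Kubuntu" list                 # see what snapshots exist
        VBoxManage snapshot "Kubuntu" restore "Snapshot 1" # optional: roll back first
        VBoxManage snapshot "Kubuntu" delete  "Snapshot 1" # deleting merges it into the base VDI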

    Read the article

  • Share the same subnet between Internal network and VPN Clients

    - by Pascal
    I would like to set up a configuration where VPN clients connecting to my Forefront TMG can access all the resources of my Internal network without having to use the option "Use default gateway on remote network" in the VPN's TCP/IPv4 Advanced Settings. This is important to me, since they can then use their own internet connection while accessing my network through the VPN (the security implications of this are acceptable in my scenario). My Internal network runs on 10.50.75.x, and I set up Forefront TMG to relay the DHCP of my Internal network to the VPN clients, so they get IPs from the same range as the Internal network. This setup initially works: the VPN clients use their own internet and can access anything that is on the internal network. However, after a while, HTTP proxy traffic from the Internal network starts getting routed to the IP of the RRAS Dial-In interface instead of the IP of the Internal network's gateway. When this happens, the HTTP proxy starts getting denied, for obvious reasons. My first question is: does this happen because Forefront TMG wasn't designed to handle the scenario I described above, and it "loses itself"? My second question is: is there any way to solve this problem, either through configuration or firewall policies? My third question is: if there's no way it can work with the scenario above, is there another setup that will solve my problem and do what I'd like it to do properly? Below are my network routes:
        1 => Local Host Access => Route => Local Host => All Networks
        2 => VPN Clients to Internal Network => Route => VPN Clients => Internal
        3 => Internet Access => NAT => Internal, Perimeter, VPN Clients => External
        4 => Internal to Perimeter => Route => Internal, VPN Clients => Perimeter
    Thanks!

    Read the article

  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in urls

    - by Tamerax
    Hey! I have 2 questions for nginx users. 1) I'm trying to set up my Joomla server on my new Linode running nginx, and after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using an Apache system on the old server; it used mod_rewrite and life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works, BUT it constantly leaves index.php in the URLs. Ex: http://mysite.com/index.php/forum - I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla as far as I'm aware. I know in WordPress it IS possible, but I have to use a plugin. Here is my config file:
        server {
          listen 80;
          server_name mysite.com www.mysite.com;
          access_log /home/public_html/mysite.com/log/access.log;
          error_log /home/public_html/mysite.com/log/error.log;
          root /home/public_html/mysite.com/public/;
          large_client_header_buffers 4 8k; # prevent some 400 errors
          index index.php index.html;
          fastcgi_index index.php;
          location / { expires 30d; error_page 404 = @joomla; log_not_found off; }
          # Rewrite
          location @joomla { rewrite ^(.*)$ /index.php?q=last; }
          # Static Files
          location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ { access_log off; expires 30d; }
          # PHP
          location ~ \.php {
            keepalive_timeout 0;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /usr/local/nginx/conf/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
          }
        }
    2) The second question should be easy, but I can't get it to work. I want to use the same config I posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page with or without the www, it all gets forwarded to a subdirectory on the server I called portal. I have tried several variations of:
        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;
    but it usually ends with Firefox telling me there is some crazy loop happening, with the address bar saying something like mysite.com/portalportalportalportalportal... on and on. So, any help on either of these issues would be awesome!! Thanks!!
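    A sketch of the two pieces, untested against this exact site: the usual Joomla SEF front controller (it also needs "Use URL rewriting" enabled in Joomla's global configuration) plus an exact-match redirect for the front page only, so the redirect can never re-match /portal and loop:

        # front page only -> /portal/
        location = / {
            return 301 /portal/;
        }

        # Joomla SEF URLs without index.php in the path
        location / {
            try_files $uri $uri/ /index.php?$args;
        }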

    Read the article

  • PHP `virtual()` with Apache MultiViews not working after upgrade to Ubuntu 12.04

    - by Izzy
    I use PHP's virtual() directive quite a lot on one of my sites, including central elements. This worked fine for the last ~10 years -- but after upgrading (or rather moving, as it is on a new machine) to Ubuntu 12.04 it somehow got broken. Example setup (simplified) To make it easier to understand, I simplify some things (contents). So say I need a HTML fragment like <P>For further instructions, please look <A HREF='foobar'>here</P> in multiple pages. 10 years ago, I used SSI for that, so it is put into a file in a central place -- so if e.g. the targeted URL changes, I only need to update it in one place. To serve multiple languages, I have Apache's MultiViews enabled -- and at $DOCUMENT_ROOT/central/ there are the files: foobar.html (English variant, and the default) foobar.html.de (German variant). Now in the PHP code, I simply placed: <? virtual("/central/foobar"); ?> and let Apache take care to deliver the correct language variant. The problem As said, this worked fine for about 10 years: German visitors got the German variant, all others the English (depending on their preferred language). But after upgrading to Ubuntu 12.04, it no longer worked: Either nothing was delivered from the virtual() command, or (in connection with framesets) it even ended up in binary gibberish. Trying to figure out what happens, I played with a lot of things. I first thought MultiViews was (somehow) not available anymore -- but calling http://<server>/central/foobar showed the right variant, depending on the configured language preferences. This also proved there was nothing wrong with file permissions. The error.log gave no clues either (no error message thrown). Finally, just as a "last ressort", I changed the PHP command to <? virtual("central/foobar.html"); ?> -- and that very same file was in fact included. So PHP's virtual() function basically worked -- but the language dependend stuff obviously did no longer work together with it as it did before. Of course I tried to find some change (most likely in PHP's virtual() command), using Google a lot, and also searching the questions here -- unfortunately to no avail. Finally: The question Putting "design questions" aside (surely today I would design things differently -- but at least currently I miss the time to change that for a quite huge amount of pages): What can be done to make it work again? I surely missed something -- but I cannot figure out what...

    Read the article

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal dropbox for our customers on a CentOS 6.3 machine. The server will be accessible thru SFTP and a proprietary http service base on PHP. This machine will be in our DMZ so it has to be secure. Because of this I have apache running as an unprivileged user, hardened the security on apache, the OS, PHP, applied a lot of filtering in iptables and applied some restrictive TCP Wrappers. Now you might have suspected this one was coming, SELinux is also set to enforcing. I'm setting up PAM to use MySQL so my users in the web application can login. These users will all be in a group that can use SSH only for SFTP and users will be chrooted to their own 'home' folder. To allow this SELinux wants the folders to have the user_home_t tag. Also the parent directory needs to be writable by root only. If these restrictions are not met SELinux will kill the SSH pipe immediately. The files that need to be accessible thru both http and SFTP so I have made a SELinux module to allow Apache to search/attr/read/write etc. to directories with the user_home_dir_t tag. As sftp users are stored in MySQL I want to setup their home dirs upon user creation. This is a problem since Apache has no write access to the /home dir, it's only writable by root since it's required to keep SELinux and OpenSSH happy. Basically I need to let Apache do only a few tasks as root and only within /home. So I need to somehow elevate the privileges temporarily or let root do these tasks for apache instead. What I need to have apache do with root privileges is the following. mkdir /home/userdir/ mkdir /home/userdir/userdir chmod -R 0755 /home/userdir umask 011 /home/userdir/userdir chcon -R -t user_home_t /home/userdir chown -R user:sftp_admin /home/userdir/userdir chmod 2770 /home/userdir/userdir This would create a home for the user, now I have an idea that might work, cron. That would mean the server needs to check for users that have no home every minute, then when creating users the interface would freeze for an average of 30 seconds before the account creation can be confirmed which I do not prefer. Does anybody know if something can be done with sudoers? Or any other idea's are welcome... Thanks for your time!
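    One common pattern, sketched here with example paths and names: put the privileged steps into a single root-owned script that validates its input, and allow the apache user to run only that script through sudo (on CentOS you may also need to relax requiretty for that user).

        # /usr/local/sbin/make-sftp-home.sh  (owned root:root, mode 0700)
        #!/bin/bash
        set -eu
        user="$1"
        # refuse anything that doesn't look like a plain username coming from the web app
        [[ "$user" =~ ^[a-z_][a-z0-9_-]*$ ]] || exit 1
        mkdir -p "/home/$user/$user"
        chmod 0755 "/home/$user"
        chcon -R -t user_home_t "/home/$user"
        chown -R "$user:sftp_admin" "/home/$user/$user"
        chmod 2770 "/home/$user/$user"

        # /etc/sudoers.d/apache-sftp (edit with visudo -f)
        # Defaults:apache !requiretty
        # apache ALL=(root) NOPASSWD: /usr/local/sbin/make-sftp-home.sh

        # PHP would then call something like:
        # exec('sudo /usr/local/sbin/make-sftp-home.sh ' . escapeshellarg($username));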

    Read the article

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive it) for notifications and whatnot. What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive. What I believe the problem is: when I attempt to send an email, I get the following in my mail log:
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.
    You can see where the message is accepted for delivery by my sendmail server, then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what my SMART_HOST directive is set to. This is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network, and must instead use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain. What I have tried: setting a hosts entry so that the pmx0 server points to the mail server's IP address. This doesn't work, which makes sense: sendmail is obviously doing an MX query to find the server, which returns the IP directly, so a 'normal' DNS lookup is never needed and the hosts file never gets involved.
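    One thing to check, sketched below: in sendmail, putting the smart host in square brackets suppresses the MX lookup and makes it connect to that host directly, which is usually what you want for an ISP relay.

        # In sendmail.mc:
        #   define(`SMART_HOST', `[mail.bresnan.net]')dnl
        # then rebuild the config and restart:
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart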

    Read the article

  • Two DHCP Servers, Block Clients for one of them?

    - by Rilindo
    I am building out a kickstart network that resides on a different VLAN and uses its own DHCP server. For some reason, my kickstart clients keep getting assigned IPs from my primary DHCP server. The way I have it set up is that I have a primary DHCP server on this router: 192.168.15.1. Connected to that DHCP server is a switch with the IP of 192.168.15.2. My kickstart (Scientific Linux) server is connected to that switch on two ports. Port 2 is where the kickstart server communicates with the rest of the production network via eth0; the details are:
        Interface: eth0  IP: 192.168.15.100  Netmask: 255.255.255.0  Gateway: 192.168.15.1
    Port 7 has its own VLAN ID (along with port 8). The kickstart server is connected to that port via eth1; again, the details are:
        Interface: eth1  IP: 172.16.15.100  Netmask: 255.255.255.0  Gateway: none
    The kickstart server runs its own DHCP server and assigns addresses over eth1. Most of the kickstarts are built over the kickstart VLAN through port 8. To prevent the kickstart DHCP server from assigning addresses over the production network, I have a route set up like so:
        route add -host 255.255.255.255 dev eth1
    At this point, the clients keep getting assigned IPs from the 192.168.15.1 DHCP server. I need to figure out a way to block client requests from reaching that DHCP server. It should be noted that I also build KVM hosts on the kickstart server, so I need those KVMs to be able to get DHCP leases from the 192.168.15.1 DHCP server via the bridged network once I have resolved this particular problem. (Currently, they communicate via NAT.) So what should be done to resolve this? Something through iptables, or some sort of routing I need to put in? I tried to limit the requests via iptables on that interface, allowing DHCP requests for the 172.16.15.x network:
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 69 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 68 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i eth1 -s 172.16.15.0/24 -p tcp -m tcp --dport 67 -j ACCEPT
    and rejecting assignments on eth1 from the 192.168.15.x network:
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 69 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 68 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p udp -m udp --dport 67 -j REJECT
        -A FORWARD -o eth1 -s 192.168.15.0/24 -p tcp -m tcp --dport 67 -j REJECT
    Nope. :(

    Read the article

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4 GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders to, and we'd like the ability to keep it up as much as possible. What we require:
    - It needs to always be functional from 8am to 4pm for our data entry person to use it and for others to do other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.
    Some of the major problems include: the server is old and may have memory issues; we don't know when one of the hard drives could fail; and our power goes out here once in a while. We have a battery backup, but that's pretty much it, and it's not for the long term. If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering, what are our options? These are the things we thought of, sort of: Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server? We could buy an extra server or two off eBay for $100, the same model - is that practical, or should we get something else? Should we buy a PC or another, better server and host off that, because it is if anything easier to exchange parts? Should we keep extra parts handy in case it implodes? Should we buy/use backup software? We hear Drobos are cool, but suck. Perhaps there is a software solution to this problem that backs up to another machine or gets us up and running again quickly. Also, if we are to purchase hardware, what is decent? Does anybody know of something suitable for Arch Linux/Linux? We both know a ton about computers, but we're kind of unsure what step to take with this, especially with this type of server. Thanks
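    One low-cost option to consider (a sketch only, with example hostnames and paths): a nightly pull-style rsync from one of the spare boxes, using hard-linked snapshots so each night only stores the files that changed. It won't give a bare-metal image like dd, but it covers the data and configs and can be started remotely over SSH.

        # rotate yesterday's snapshot, then pull a new one
        rm -rf /backup/dell2650/daily.6
        for i in 5 4 3 2 1 0; do mv /backup/dell2650/daily.$i /backup/dell2650/daily.$((i+1)) 2>/dev/null; done
        rsync -aAXH --delete \
            --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp \
            --link-dest=/backup/dell2650/daily.1 \
            root@dell2650:/ /backup/dell2650/daily.0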

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start with saying that I'm a total noob wrt. to server virtualization. That is, I use VMs often during development, but they're simple desktop machine things for me. Now to my problem: We have two (physical) build servers, one master, one slave running Jenkins to do daily tasks and build (Visual C++ Builds) our release packages for our software. As such these machines are critical to our company, because we do lot's releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to setup them again should they go bust. (But setting up backup that I'd know would work in case of HW failure would even be more pain, so we have skipped that until now.)) Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each Build-Server (master or slave) is a fully configured (installs, licenses, shares in case of the master, ...) Windows Server box. I would now ideally like to just convert the (two) existing physical nodes to VM images and run them. Later add more VM slave instances as clones of the existing ones. And here begin my questions: Should I go for one VM per one hardware-box or should I go for something where a single hardware runs multiple VMs? That would mean a single point of failure hardware wise and doesn't seem like a good idea ... or?? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build-node per hardware doesn't seem to make much sense?? Wrt. to hardware options, does it make any difference which VM software we use (VMWare, MS, Virtualbox, ... ?) (We're using Windows exclusively for our builds.) Regarding budget: We have a normal small company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$ it's going to cost. If it's free - the better. I strongly prefer solutions where there's no multi-k$ maintenance costs per year.

    Read the article

  • nginx rewrite rule to convert URL segments to query string parameters

    - by Nick
    I'm setting up an nginx server for the first time, and having some trouble getting the rewrite rules right for nginx. The Apache rules we used were: See if it's a real file or directory, if so, serve it, then send all requests for / to Director.php DirectoryIndex Director.php If the URL has one segment, pass it as rt RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA] If the URL has two segments, pass it as rt and action RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA] My nginx config file looks like: server { ... location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } How do I get the URL segments into Query String Parameters like in the Apache rules above? UPDATE 1 Trying Pothi's approach: # serve static files directly location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ { access_log off; expires 30d; } location / { try_files $uri $uri/ /Director.php; rewrite "^/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1" last; rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1&action=$2" last; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } This produces the output No input file specified. on every request. I'm not clear on if the .php location gets triggered (and subsequently passed to php) when a rewrite in any block indicates a .php file or not. UPDATE 2 I'm still confused on how to setup these location blocks and pass the parameters. location /([a-zA-Z0-9\-\_]+)/ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME ${document_root}Director.php?rt=$1{$args}; include fastcgi_params; } UPDATE 3 It looks like the root directive was missing, which caused the No input file specified. message. Now that this is fixed, I get the index file as if the URL were / on every request regardless of the number of URL segments. It appears that my location regular expression is being ignored. My current config is: # This location is ignored: location /([a-zA-Z0-9\-\_]+)/ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index Director.php; set $args $query_string&rt=$1; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } location / { try_files $uri $uri/ /Director.php; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index Director.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; }
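    A sketch of the equivalent nginx rules, untested against this exact app. Note that when a rewrite replacement contains its own query arguments, nginx appends the original query string after them, which matches Apache's QSA behaviour, and the php block needs a root so SCRIPT_FILENAME resolves to a real file (the docroot below is an example):

        location / {
            try_files $uri $uri/ @director;
        }

        location @director {
            rewrite ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 last;
            rewrite ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 last;
            rewrite ^ /Director.php last;
        }

        location ~ \.php$ {
            root /var/www/example;   # example path - must point at the real docroot
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }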

    Read the article

  • Choosing parts for a high-spec custom PC - feedback required [closed]

    - by James
    I'm looking to build a high-spec PC costing under ~£800 (bearing in mind I can get the CPU half price). This is my first time doing this so I have plenty of questions! I have been doing lots of research and this is what I have come up with: http://pcpartpicker.com/uk/p/j4lE Usage: I will be using it for Adobe CS6, rendering in 3DS Max, particle simulations in Realflow and for playing games like GTA IV (and V when it comes out), Crysis 1/2, Saints Row The Third, Deus Ex HR, etc. Questions: Can you see any obvious problem areas with the current setup? Will it be sufficient for the above usage? I won't be doing any overclocking initially. Is it worth buying the H60 liquid cooler, or will the fan that comes with the CPU be sufficient? Is water cooling generally quieter? Is the chosen motherboard good for the current components? And is it future-proof? I read that the HDD is often the bottleneck when it comes to gaming. I presume this is true to other high-end applications? If so, is my selection good? I keep changing my mind about the GPU; first the 560, now the 660. Can anyone shed some light on how to choose? I read mixed opinions about matching the GPU to the CPU. Will the 560 or the 660 be sufficient for my required usage? Atm I'm basing my choice on the PassMark benchmarks and how much they cost. The specs on the GeForce website state that the 560 and the 660 both require 450W. Is this a good figure to base the wattage of my PSU on? If so, how do you decide? Do I really need 750W? The latest GTX 690 requires 650W. Is it a good idea to buy a 750W PSU now to future-proof myself?

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
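    For the flat-file route, the usual crash-resistant pattern is: write each packet to a temporary name, force it to disk, then rename it into place, since the rename is atomic on ext3/ext4. A sketch in shell with example paths (from Java 6 the same idea is FileOutputStream plus getFD().sync() followed by File.renameTo()); splitting the spool into dated subdirectories also keeps the per-directory entry counts manageable:

        spool=/data/spool/$(date +%Y%m%d)     # one directory per day keeps directories small
        mkdir -p "$spool"
        tmp="$spool/.incoming.$$"
        dd of="$tmp" conv=fsync 2>/dev/null   # packet arrives on stdin; fsync before dd exits
        mv "$tmp" "$spool/packet-$(date +%s%N)"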

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500.000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean-t -p/htcache -l15G"), IOwait is going through the roof for several hours. Without any visible action. Only after hours, htcacheclean starts to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache. Only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read the a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In this 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect).
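    One thing worth trying before switching filesystems, sketched here: run htcacheclean as a long-lived daemon so it trims continuously in small steps instead of one multi-hour batch (flags as documented for Apache 2.2's htcacheclean):

        htcacheclean -d10 -n -i -t -p/htcache -l15G
        # -d10 : wake up every 10 minutes and run as a daemon
        # -n   : be "nice" - slow down and yield I/O to other processes
        # -i   : only clean when the cache has actually been modified
        # -t   : delete empty directories left behind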

    Read the article

  • Apache certificates for some urls not working

    - by Vegaasen
    We are having a rather strange problem with a Apache-installation. Here is a short summary: Currently I'm setting up Apache with https, and server-certificates. This is fairly easy and works straight out of the box - as expected. This is the configuration for this setup: Listen 443 SSLEngine on SSLCertificateFile "/progs/apache/ssl/example-site.no.pem" SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key" SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem" SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem" SSLVerifyClient none SSLVerifyDepth 3 SSLOptions +StdEnvVars +ExportCertData RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s" RewriteEngine On ProxyPreserveHost On ProxyRequests On SSLProxyEngine On ... <LocationMatch /secureStuff/$> SSLVerifyClient require Order deny,allow Allow from All </LocationMatch> ... <Proxy balancer://exBalancer> Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness Order deny,allow Allow from all </Proxy> RewriteCond %{REQUEST_URI} !^/index.html [NC] RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC] ProxyPassReverse / balancer://exBalancer/ Header edit Set-Cookie "(.*)" "$1;HttpsOnly" ... So - everything works fine and as expected for all of the pages that are not a part of the LocationMatch-directive. When requesting something that matches the LocationMatch-directive, I'm asked for a certificate (hence the SSLVerifyClient required attribute) - and getting all the correct certificates in my browser that is based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the apache logs: [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown ( [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!] And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There is no configration differences between the servers, only minor application-wise-changes. I've tried the following: 1) Removing CA-certificate-checking (works) 2) Adding required CA-certificate for the whole site (works) 3) Adding "SSLVerifyClient optional" does not work 4) ++ Server/Application Information Local: -OpenSSL v.1.0.1x -Apache 2.4.3 -Ubuntu -mpm: event -every configuration should be turned on (failing) server: -OpenSSL 0.9.8e -Apache 2.4.2 -SunOS -mpm: worker -every configuration should be turned on Please let me know if more information is needed, I'll provide it instantly. Brief sum-up: -Running apache 2.4 -Server certificates works just fine -Client certificates for some /Locations does not work, fails with errors PS: Could it be related with the OpenSSL version and the "Renegotiation" stuff related to TLS/SSLv3?

    Read the article

  • Issue using a "used" SSD as a Windows 8.1 Boot Drive

    - by EpiGrad
    So, I'm something of a Mac person, but I decided to take a stab at this whole "build yourself a PC" thing - right now the thing is assembled, POSTs just fine, and can get to the BIOS. The problem is the drive I want to use: I intended to use an 80 GB Corsair SSD I've had sitting around as the boot drive, and a new Samsung SSD for games and the like. So I boot using a Windows 8.1 install USB stick, and if the Samsung drive is plugged in, it happily offers to install Windows on it. The Corsair drive, though, has flipped out - I reformatted it as a blank NTFS drive (it was HFS for Mac purposes) and the BIOS can't see it, nor can the Windows installer. What's wrong, and how do I fix it? The tools at my disposal are: the current ASUS BIOS that came with my motherboard (a Z87I-Deluxe), and a Mac running the latest OS X which can also boot into Windows 7 if needed via either Parallels or Boot Camp. Update 1: Based on a friend's suggestion to switch SATA ports, Windows 8.1's installer can now see the drive as Drive 0, Partition 1, an 83.8 GB "Primary" partition. But when I click it and hit "Next", I get the following error: "We couldn't create a new partition or locate an existing one. For more information, see the Setup log files" - not that it gives any clue how to access those. Update 2: Following a trail of Google suggestions, I ended up going into the advanced tools and just reformatting the drive as follows:
    - Start DISKPART.
    - Type LIST DISK and identify your SSD disk number (from 0 to n disks).
    - Type SELECT DISK <n> where <n> is your SSD disk number.
    - Type CLEAN
    - Type CREATE PARTITION PRIMARY
    - Type ACTIVE
    - Type FORMAT FS=NTFS QUICK
    - Type ASSIGN
    - Type EXIT twice (once to get out of DiskPart, the other to exit the command-line tool)
    per these instructions. This goes well enough; now I can select the disk for installation, but I get a new error: "Windows 8 cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks." So, Googling that, I do the following:
        select disk 0
        clean
        convert gpt
        exit
    ...and we might have fixed it. Windows is at least trying to install now.

    Read the article

  • nginx rewrite for wikkawiki

    - by Hans
    I just set up WikkaWiki on my server, and I have been trying to change the links from wiki.mysite.info/wikka.php?wakka=Start to wiki.mysite.info/DotMG. I tried following their guide at http://docs.wikkawiki.org/ModRewrite, however it seems incomplete and outdated. Furthermore, as of version 1.3.2, base_url isn't even manually configurable from the wikka.config.php file. I am using version 1.3.2 of WikkaWiki. My nginx virtual hosts file contains:
        server {
          listen 80;
          server_name wiki.mysite.info;
          root /usr/share/nginx/wikka/;
          access_log /usr/share/nginx/.access/wikka;
          error_log /usr/share/nginx/.error/wikka error;
          location / {
            index index.php;
            try_files $uri $uri/ @wikka;
          }
          location @wikka {
            rewrite ^(.*/[^\./]*[^/])$ $1/ last;
            rewrite ^(.*)$ /wikka.php?wakka=$1 last;
          }
          location ~* \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
          }
        }
    Thus far it partly works: I can go to wiki.mysite.info/APage and it'll display that page. However, it doesn't work on all pages; sometimes the browser simply downloads the page (for some reason it always downloads the Start page). Also, when I go to wiki.mysite.info/ it downloads the wikka.php file... Furthermore, the links on the wiki still contain wikka.php?wakka=, so whenever I navigate around the wiki it goes back to wiki.mysite.info/wikka.php?wakka=APage. I think something is wrong with my rewrite, but I can't say for sure. Contents of the fastcgi_params:
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $server_https;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • My USB bootable thumb drive, no longer boots on a single particular computer on which it previously worked

    - by LiamMeron
    I created a bootable drive, booting CrunchBang, about 2 months ago. About a month ago, I booted into it on another laptop. After shutting down, I have been unable to get it to boot on my own laptop, despite having worked previously. I can still boot into it on my desktop, and can also boot into it on all other computers that I have tried. If I plug it in when I am running Ubuntu, the Home and / folders mount, the only error being that for some reason my PC likes to try and mount the swap partition too, which naturally, gives an error. The BIOS settings are all still setup to boot from USB. When I boot, all I get is a black screen with the white cursor, it will stay there for as long as I leave it. If it is worth anything, I have GRUB loader installed on the drive. The partitions look a little odd, but I am rather unfamiliar with how they are dealt with. The first partition, /, is sdb1 and has the bootable flag. The second partition is an extended system, and is sdb2. The third partition, according to GParted, seems to be nested under the second. This is the swap partition, and it is sdb5 The fourth partition is my home partition and is sdb6 and is also nested under the extended system. The first and fourth partitions are ext4. I don't know if that helps, but the more info, the better accuracy, generally. Thanks. EDIT: I tried reinstalling GRUB on the drive, but that didn't work. However, when I reinstalled GRUB on my laptop, I did it with my USB thumbdrive in. This caused the GRUB updater to find the /boot folder and add the proper details into my laptop's GRUB loader. I could log into CrunchBang from my laptop's GRUB but I was still unable to boot directly to the drive. It looked like my BIOS is unable to find the bootloader. I am unable to install GRUB to a partition I just created, a /boot partition at the start of the drive, GRUB just doesn't allow it. I think I'm going to have to reinstall #! on my drive, which won't be a great loss as I haven't put much time into it.

    Read the article

  • Local dns for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start from so sorry in advance if this topic has already been discussed. I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that can connect via wifi. Having no real AP, I set up a ad-hoc network using my laptop's wireless card and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows: subnet: 192.168.1.0/24 gateway to the Internet (wired adsl router/modem): 192.168.1.1 laptop: 192.168.1.64 (eth0, wired if connected to the gateway) and 192.168.1.32 (eth1, wifi if somewhat bridged to eth0) mobile devices (same for all, I only use one of them at any time for simplicity): 192.168.1.11 with default gw 192.168.1.1 Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However I usually work with virtual hosts for many practical reasons, one of which being Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches into its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com, then Drupal would search for a config file in the following directories: sites/www.example.com/ sites/example.com/ sites/com/ sites/default/ So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible using www.example.com I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com so Drupal have no trouble finding it. I've been told this is suboptimal from a dns point of view since I'd have to create an authoritative entry for example.com and turn Bind on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with Bind's configuration, as I couldn't find any guide that tells me in a clear, noob-friendly way, how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell in some way (which I don't even know if this is possible) Apache to put www.example.com instead of www.example.com.local in the relevant environment variable. Anyway, I have a last problem, sort of: when I launch Bind in debug mode with high verbosity, and make 192.168.1.32 as the primary dns for the devices, the output doesn't say anything about requests being made from the devices to Bind, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointer will be appreciated.
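    One lightweight alternative to a full BIND setup, sketched with the addresses from the question: run dnsmasq on the laptop and point the mobile devices' DNS at 192.168.1.64. A single address= line answers for the real hostname, so Drupal's multi-site lookup sees www.example.com and no "local." prefix or environment-variable trick is needed.

        # on the Debian side of the laptop (run as root)
        apt-get install dnsmasq
        echo "address=/www.example.com/192.168.1.64" >> /etc/dnsmasq.conf
        service dnsmasq restart
        # then set 192.168.1.64 as the DNS server in the devices' wifi settings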

    Read the article
