Search Results

Search found 9816 results on 393 pages for 'blade servers'.

  • Advice on networking career

    - by fmysky
    Hello! I need some insight into a networking career. I have a valid CCNA and a few months of experience as a junior network analyst. My focus is the type of job where I can administer networking/storage hardware and possibly manage servers/workstations too. I am new to this field, and specifically new to a city like LA. I am currently unemployed, so my question is: should I continue with my Cisco certifications (routing/wireless), with other CompTIA certs, or with both, to land a reliable job? I am really interested in CCNA Wireless, but watching Craigslist (LA) and other job sites, it seems employers want more Cisco voice people, while some others ask for Network+ certs. My financial resources are scarce, so I'd better make good decisions. I have not been applying yet due to some personal problems, but I will soon. Thank you!

    P.S. I don't know if I can ask questions like this here. Sorry about that.

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular forward time drift of several minutes a day (approximately 7).

    Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux
    Distro: CentOS release 5.4 (Final)

    We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at /boot/grub/menu.lst. The line you need to add is: noapic nolapic divider=10 nolapic_timer. This should correct this issue. You will need to restart after this is added in."

    Because I am wary of manipulating GRUB (mostly, I'm terrified that our server may fail to restart), I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?

        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer

    Please specify where exactly, and whether the order of the options is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately: a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep, dark treasure trove of kernel hacking or something, then well-recommended in-depth resources are also very welcome.

    This is my current /boot/grub/menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600
        # terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line, so I can confidently restart my VPS after changing the GRUB config.
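    For reference: in GRUB legacy these are kernel boot parameters, and they are appended to the end of the "kernel" line of a boot entry rather than added as a line of their own; their order relative to the existing options does not matter, and the indentation under "title" is purely cosmetic. A sketch of the edited entry, assuming the menu.lst above:

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img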

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers; unfortunately there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously), not SSDs, so drives are mounted from the bottom through rubber grommets to reduce vibration. There are NO side holes in any of the 3.5" bays; all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads.

    Those screws will not work with the SSD: the thread is too coarse, as 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives, and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have, and cannot find, any screws with the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets, because without them the screw holes are much too wide; the entire head of the screw fits through. And again, there are no side holes, so I can't use the mounting bracket that came with the drive, nor any other 2.5"-to-3.5" bracket I've found.

    Here is a pic of what I am dealing with (note that the SSD is just resting on the case; it's not currently screwed in, if that's not clear).

    Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3-threaded screw, or has any other ideas, please help me out here.

  • Enabling WinRM by Group Policy

    - by SaintNick
    I'm having partial success enabling WinRM through Active Directory GPOs in our Server 2008 R2 environment.

    I've created a GPO that enables "Allow automatic configuration of listeners" and also enables all the necessary predefined WinRM firewall rules. This GPO works fine for our web servers; indeed, this is reflected by "Server Manager Remote Management" nicely flipping to "Enabled" in the Server Manager server summary.

    However, the same GPO applied to both our management servers, which are Domain Controllers, does not give the same result. I see the GPO settings being applied, including the listener, as confirmed by:

        C:\Windows\system32>winrm e winrm/config/listener
        Listener [Source="GPO"]
            Address = *
            Transport = HTTP
            Port = 5985
            Hostname
            Enabled = true
            URLPrefix = wsman
            CertificateThumbprint
            ListeningOn = 10.32.40.210, 10.32.40.211, 10.32.40.212

    But in Server Manager, under Server Summary, Remote Management remains "Disabled", and indeed when trying to connect to one of these machines Server Manager gives an "Access Denied". Manually enabling WinRM locally via Server Manager's "Configure Server Manager Remote Management" on either of these machines works fine.

    What can be the cause? Can it have something to do with these machines being DCs and needing extra settings in the GPO?

    Nick Reid
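    Worth noting: on Server 2008 R2 the "Server Manager Remote Management" flag is a separate switch from the WinRM listener itself, which is why the listener can be live while Server Manager still reports Disabled. That flag can be flipped from the command line with the script that ships in System32; a hedged sketch, run in an elevated PowerShell on each DC:

        Set-ExecutionPolicy -Scope Process RemoteSigned
        & "$env:windir\System32\Configure-SMRemoting.ps1" -force -enable

    Whether a GPO can set that same flag on a DC is exactly the open question here, but the script at least confirms which half of the configuration is missing.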

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N
    Firmware: FTOS 8.4.2.6

    Problem: we're trying to PXE boot some servers that are connected via port-channel interfaces with LACP.

    Current workaround: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built.

    Details: is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or on a larger chassis-based Force10? I'm wondering if a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port-channel is in. I will confirm this next (I think it's the only thing I haven't tried).

    Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/

    Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus.

    Update/edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it would only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only being supported on the newer switches.
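    For comparison, the documented Cisco behavior mentioned above centers on LACP individual-port fallback: on NX-OS, member ports of an LACP bundle can be allowed to come up as standalone ports when no LACP PDUs arrive, which is exactly the window a NIC needs to PXE boot before the OS forms the bond. A hedged sketch in NX-OS syntax (not FTOS; interface and VLAN numbers assumed):

        interface port-channel10
          switchport
          switchport access vlan 100
          ! let member ports run as individuals until LACP converges
          no lacp suspend-individual

    If FTOS 8.4 has no equivalent knob, that would explain why only the single-interface workaround boots.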

  • NRPE unable to read output, but why?

    - by ticktockhouse
    I have this problem with NRPE. All the stuff I've found so far on the net seems to point me at things I've already tried.

        # /usr/local/nagios/plugins/check_nrpe -H nrpeclient

    ...gives "NRPE v2.12" as expected. Running the command by hand (as defined in nrpe.cfg on "nrpeclient") gives the expected response. From nrpe.cfg:

        command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge

        "Expected response"

    But if I try to run the command from the Nagios server, I get the following:

        # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
        NRPE: Unable to read output

    Can anyone think of anywhere else I might have made a mistake with this? I've done the same thing on multiple other servers with no problem. The only difference I can think of is that this box is RHEL 5 based, whereas the others are RHEL 4 based. The two bits above that I've tested are what most people seem to suggest when others have had this problem.

    I should mention that I get a weird error in the logs when I restart nrpe:

        nrpe[14534]: Unable to open config file '/usr/local/nagios/etc/nrpe.cfg' for reading
        nrpe[14534]: Continuing with errors...
        nrpe[14535]: Starting up daemon
        nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
        nrpe[14535]: Listening for connections on port 5666
        nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck

    ...even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it's talking about further down.
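    "NRPE: Unable to read output" generally means the command ran but printed nothing on stdout, and on a RHEL 5 box (unlike RHEL 4) the first suspects are file permissions and SELinux, which ships enforcing by default. A hedged pair of checks on the client, with the daemon user name assumed to be "nagios":

        # does the plugin produce output when run as the daemon's own user?
        sudo -u nagios /usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge; echo "exit: $?"

        # is SELinux enforcing, and does it log a denial when the check fires?
        getenforce
        grep nrpe /var/log/audit/audit.log | tail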

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, which is a high-level tool, for creating the jails and copying the libraries and dependencies. Easy peasy. Each website will need nginx, PHP, and MySQL. For PHP I'm using php5-fpm, which actually supports chroot via configuration; however, I'm not using that (maybe I should?).

    My question is: which of the following approaches is better?

    1) Every website has its own separate instance of nginx, PHP, and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most complex.

    2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm pools, each with its own chrooted folder. This is quite manageable; however, only PHP is chrooted, not the actual web server. Is that secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.

    3) You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or input is much appreciated!
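    For what option 2 looks like in practice: php5-fpm's chroot support is a per-pool directive, so each site gets its own pool config. A minimal sketch, with the pool name, user, port, and paths all assumed:

        ; /etc/php5/fpm/pool.d/website1.conf
        [website1]
        user = website1
        group = website1
        listen = 127.0.0.1:9001
        chroot = /chroot/website1
        chdir = /

    Note that once the pool is chrooted, the MySQL unix socket is no longer visible inside the jail, which is why connecting over TCP (127.0.0.1:3306) becomes necessary, exactly as observed above.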

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers. I am running into the same issue with all remote machines.

    When creating a data collector set, I specify a domain account that is in the Administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users", to be safe). On the "Available Counters" screen, when I type in a remote computer name, perfmon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set. However, once it is saved, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel (screenshots omitted here).

    I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios. I also have the Remote Registry service started on each remote machine. What is going on?
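    Two hedged things worth verifying from the command line, since the GUI hides both: that the account really landed in the remote groups, and that the firewall rule group used by remote performance monitoring is open on the 2008 boxes (Server 2003 predates advfirewall and its built-in rule groups). Account and domain names below are assumptions:

        :: on each remote server, from an elevated prompt
        net localgroup "Performance Monitor Users" MYDOMAIN\perfaccount /add
        net localgroup "Performance Log Users" MYDOMAIN\perfaccount /add

        :: Server 2008 only
        netsh advfirewall firewall set rule group="Performance Logs and Alerts" new enable=yes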

  • Backup server (OSX) like Time Machine to back up remote Ubuntu 12.04 server [on hold]

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that's in a datacenter. Locally, we have an OS X server with some external drives attached to it; this handles Time Machine for the local workstations.

    What I'd like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup of it. The one problem is that if my OS X server crashes, I can't restore the system, because it would contain not only the OS X server but also the Ubuntu server from the data center. I've used Back In Time on Ubuntu to do the exact same thing, but that was Ubuntu (local) backing up Ubuntu (datacenter).

    So does anybody have a solution? Here are my requirements:

    - Set time intervals for backups; needs to be backed up nightly.
    - Set time intervals for keeping backups: hourly, weekly, monthly, etc.
    - Able to back up all computers and servers from an offsite location to the local OS X server (10.9).
    - Manageable from that one location, logging in with ssh to do rsync or rsnapshot.
    - Has a GUI (OS X).
    - Acts like Time Machine: backs up only the files that have changed.
    - Can restore to a point back in time.
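    Of the tools named in the requirements above, rsnapshot fits the interval/retention and changed-files-only points almost exactly (it pulls over ssh+rsync and hard-links unchanged files between snapshots), though it has no GUI. A minimal sketch of a pull config run on the OS X side; the host name, paths, and retention counts are assumptions, and rsnapshot.conf fields must be tab-separated:

        # /etc/rsnapshot.conf
        snapshot_root   /Volumes/BackupDrive/rsnapshot/
        retain          daily   7
        retain          weekly  4
        retain          monthly 6
        backup          root@server.example.com:/      ubuntu-server/

    ...driven nightly from cron or launchd, e.g.:

        0 2 * * * /usr/local/bin/rsnapshot daily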

  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out over this. My startup-config is good; I can view it with a show command. I'm trying to copy it to a TFTP server:

        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)

    On my TFTP server (SolarWinds), I get the following:

        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process

    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the ASA; same results. It'll create a 0-byte file and never do the dump. What's going on? Everything is working normally except for this.
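    One hedged avenue: the ASA can pre-stage the whole destination with the tftp-server command, which pins the egress interface and default path and removes the interactive prompts as a variable; the IP, interface name, and filename below are assumptions:

        asa5505(config)# tftp-server inside 192.0.2.50 /asa5505-startup.cfg
        asa5505(config)# exit
        asa5505# copy /noconfirm startup-config tftp:

    The server-side log also shows two overlapping PUTs fighting over the same file (create, then "file exists", then "in use by another process"), so capturing the exchange with Wireshark on the TFTP host would show whether the ASA is retrying or double-connecting before the first transfer completes.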

  • SSH garbling characters in vim/nano on remote server

    - by geerlingguy
    ...and it's driving me insane. Basically (this has been happening over the past couple of months), I log into a few different CentOS servers (one Linode, another VPS, and a shared host to which I have shell access), running 5.5, 5.7, and 6, from my Mac running OS X Lion, using Terminal. Basically:

        $ ssh [email protected]
        [remote-host]$ nano somefile.txt

    Once I start editing the file, if I use the arrow keys to move the cursor around, or start deleting and then typing again, the cursor jumps around a bit, and if I save the file and reopen it, it's obvious that the cursor was, in fact, jumping all over the place on a line for no apparent reason. I end up getting things like "This is a neof text." when I had typed (into the cursor-crazy editor) "This is a line of text."

    It's a big problem when it comes to editing configuration files, because I often have to edit one line, save and close, then reopen just to make sure that line is right... then edit another line... and it's getting quite annoying. I found "Linode Lish Shell Vim and Nano rendering troubles: lines not appearing / cursor positions wrong", but I don't know if that relates much, since that's specifically referring to lish.
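    A hedged lead, since this combination (OS X Lion's Terminal against older CentOS boxes) is a classic: Terminal.app exports LC_CTYPE/LANG, the Mac's ssh client forwards them (SendEnv LANG LC_* in /etc/ssh_config), and if the server has no matching locale generated, curses apps like nano and vim miscount character widths and misplace the cursor. Quick checks, with the locale name assumed:

        # on the server: did the ssh session inherit a locale the box doesn't have?
        locale

        # one-session test: force a locale the server does ship
        export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8

    If that cures it, the permanent fixes are generating the missing locale server-side, commenting out "SendEnv LANG LC_*" in the Mac's /etc/ssh_config, or unticking "Set locale environment variables on startup" under Terminal > Preferences > Settings > Advanced.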

  • Remote Sending of Emails via SMTP/EXIM Issue

    - by Christian Noel
    I have been encountering a problem when sending messages via Exim. Here is the scenario. I have two servers; let's just say:

        host1.com = where all my apps and programs are hosted
        host2.com = another server which handles some apps, but is also my SMTP mail server

    WHM and cPanel are installed on both hosts, as is Exim. Right now, messages are sent out to clients as [email protected], and host1.com uses the [email protected] account so that it can send messages outbound as well.

    Here's the problem: a few hours after a fresh reboot of host1.com, sending messages from host1.com is no longer possible, because I encounter an error that states:

        system/vendor/swift/Swift/Connection/SMTP.php [309]: The SMTP connection failed to start [tls://mail.host2.com]: fsockopen returned Error Number 110 and Error String 'Connection timed out'

    Note that this was working fine earlier (like 10 hours before), but then it suddenly fails. Every time I restart host1.com, sending messages works again. I have checked logs and traces, but to no avail; the only means of fixing this problem is restarting host1.com.
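    Since a reboot of the client side "fixes" it, a hedged way to narrow things down is to test the same TCP path by hand from host1.com while it is in the failed state, and to look for connection or queue buildup rather than at Exim's config; the port is an assumption (465 for an implicit-TLS socket like tls://, 587 or 25 otherwise):

        # can host1 still open the TLS socket that Swift uses?
        openssl s_client -connect mail.host2.com:465

        # anything piling up? open connections and the local exim queue count
        netstat -tn | grep ':465'
        exim -bpc

    If the handshake works from the shell while the PHP app still times out, suspicion shifts to per-process limits or a firewall/connection-tracking rule that starts penalizing host1 after some hours of traffic.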

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all,

    I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc.

    The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones; but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there, and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box.

    My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. Should I presume that is simply fake RAID, like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired?

    Thanks,
    Monte
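    For the software-RAID half of the plan, nothing depends on the enclosure's own RAID modes; the usual approach is to run the box in JBOD mode and let mdadm do the work. Whether Linux sees all four disks over the single eSATA link comes down to SATA port-multiplier support in the eSATA card and its driver, so that is the spec to check before buying. Assuming the four disks appear as /dev/sdb through /dev/sde, a sketch of the mdadm + LVM layering:

        # build the RAID5 array from the four enclosure disks
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

        # put LVM on top and carve out one big volume
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backuplv backupvg
        mkfs.ext4 /dev/backupvg/backuplv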

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace Cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

        Status: Connecting to 1.1.1.1:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command: USER username
        Response: 331 Password required for username.
        Command: PASS ***************
        Response: 230 User username logged in.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    And I get this from my /var/log/secure file:

        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers.
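    The trace is healthy right up to LIST after PASV, which points at the passive-mode data connection (a separate, high-numbered port) being dropped by a firewall, not at ProFTPD or PAM. The usual hedged fix is to pin the passive range in proftpd.conf and open it; the range below is arbitrary, and MasqueradeAddress only matters if the server sits behind NAT:

        # /etc/proftpd.conf
        PassivePorts 49152 50000
        # MasqueradeAddress 1.1.1.1

        # and allow the range through iptables
        iptables -A INPUT -p tcp --dport 49152:50000 -j ACCEPT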

  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use FF for most of my browsing. However, there is a problem once I have installed the Flash plug-in: FF locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to get cross-domain data. The Flash ActiveX plug-in in IE does not suffer this issue.

    I have tried it in a brand-new FF profile with Flash as the only plug-in, and it locks up. We have two different proxy servers, and both exhibit the same problem. I have also tried Chrome and Safari, and both lock up with the plug-in installed.

    So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon which appears when the plug-in is not installed?

    Many thanks!

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay; I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Administrators having Full Control and Everyone having Change/Read permissions.

    I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user with Read/Write access, and no other users in the list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly.

    I right-click the folder and go to Properties > Security > Advanced > Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so access should be denied.

    I log onto any network PC, or the server, as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in; no access-denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have one DFS share, and all the subfolders are a mixture of private and public folders, so I need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks.
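    A hedged way to take the GUI out of the equation is to dump the ACLs that are actually on disk and list every entry that explicitly names the test account (it is easy for a broad entry such as Users or Everyone, inherited from the share root, to grant what the per-user view hides). The local path and account name below are assumptions:

        icacls D:\DFSRoots\share\folder1
        icacls D:\DFSRoots\share\folder1 /findsid MYDOMAIN\testuser

    Also worth remembering when testing: the effective-permissions view only evaluates NTFS ACLs, and group-membership changes only take effect on a fresh logon, so a freshly re-logged-on testuser is the only trustworthy test account.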

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a domain controller and a file server. Everything seems to be working fairly well, and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error being reported is:

        DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error:

        The update is not applicable to your computer.

    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?
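    Since DPM's system protection jobs just drive Windows Server Backup on the protected machine, a hedged way to isolate the failure is to run a system state backup directly with wbadmin on that server and see whether WSB event 517 reproduces without DPM in the picture; the target volume letter is an assumption:

        wbadmin start systemstatebackup -backupTarget:E: -quiet

    If the bare wbadmin run throws the same 0x8078001D, the problem is squarely in WSB/VSS on that host (vssadmin list writers often shows a failed writer in such cases), and DPM can be ignored while troubleshooting.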

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single dedicated virtual server, it reads the primary server IP and domain and reports those when sending emails from the system. Our pages are written in PHP, so we are using the mail() function.

    While our email goes out to everybody, several enterprise email domains reject our email because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own does of course have its own IP underneath our primary server IP.

    I have seen several places online that provide a patch, specifically one which allows domain binding:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)"

    First off, I do not have netqmail installed, so I'll need to find another source. But I am also completely unfamiliar with applying patches to qmail. Will I lose email service if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be restored after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change, and it is one our company HAS to have. Any ideas?
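    To make the quoted description concrete: qmail patches are applied to the source tree (roughly, patch -p1 < domainbindings.patch inside the qmail/netqmail source, then a rebuild and reinstall of the binaries), and the control file the patch adds maps sender domains to source IPs. The entry below is purely illustrative; the real file name and syntax come from the patch's own documentation:

        # hypothetical control file, by analogy with control/smtproutes
        # senderdomain:outgoing-ip
        mydomain.com:192.0.2.12

    Mailbox data under /var/qmail (and Plesk's own stores) is not touched by rebuilding the binaries, but with Plesk in the mix the package-managed qmail will differ from a pristine source tree, and that mismatch is the real risk to plan around.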

  • Sendmail Sends but never Delivers

    - by Jeremy
    I have tried 10 different email addresses hosted at Google, Yahoo!, GoDaddy, and some that are privately hosted, and each time I get the following errors. I have blanked out sensitive information, but you will be able to see the errors.

        Feb 16 17:06:50 xxxxx sendmail[31824]: o1GM6ovJ031824: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30054, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GM6oJo031825 Message accepted for delivery)
        Feb 16 16:54:19 xxxxx sendmail[31625]: o1GLsJPP031625: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30097, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GLsJah031626 Message accepted for delivery)
        Feb 17 09:05:52 xxxxx sm-mta[10620]: o1H6Z3jM005734: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=07:30:49, xdelay=01:15:36, mailer=esmtp, pri=571331, relay=aspmx3.googlemail.com. [209.85.222.4], dsn=4.0.0, stat=Deferred: Connection timed out with aspmx3.googlemail.com.
        Feb 17 10:35:23 xxxxx sm-mta[12828]: o1HEZwn8011833: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:59:25, xdelay=00:12:36, mailer=esmtp, pri=300353, relay=aln-mailrelay.att.net. [12.102.252.75], dsn=4.0.0, stat=Deferred: Connection timed out with aln-mailrelay.att.net.

    If you look, they all send, but then (hours later) I get the error "stat=Deferred: Connection timed out with {server}". I'm at my wits' end, because I use this same setup on each of my servers, and they all work.
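    The pattern in the log (the local relay accepts instantly, then every external MX times out hours later in the queue runner) is the signature of outbound TCP port 25 being blocked upstream, which many hosting and residential networks do by default. A hedged one-liner from the affected box settles it, using a relay name taken from the log above:

        telnet aspmx3.googlemail.com 25
        # or, if telnet isn't installed:
        echo QUIT | nc -w 5 aspmx3.googlemail.com 25

    No banner within a few seconds suggests the provider filters port 25, in which case the fix is asking them to open it, or smart-hosting through their relay, not a sendmail config change.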

  • Trouble serving vhosts when trying to set up wildcard subdomains with dnsmasq in a local development environment

    - by Jeremy Kendall
    I'm trying to get wildcard DNS enabled on my laptop using dnsmasq. I realize that this has been asked and answered more than once on this forum, but I can't get the solution to work for me.

    Steps taken so far:

    - Installed dnsmasq
    - Set address=/example.dev/127.0.0.1 in dnsmasq.conf
    - Set listen-address=127.0.0.1 in dnsmasq.conf
    - Ensured nameserver 127.0.0.1 is in /etc/resolv.conf
    - Set prepend domain-name-servers 127.0.0.1; in /etc/dhcp3/dhclient.conf
    - Created a vhost for example.dev
    - Restarted apache and dnsmasq

    Note: example.dev is not set in /etc/hosts.

    My vhost for example.dev:

        <VirtualHost *:80>
            ServerName example.dev
            DocumentRoot /home/jkendall/public_html/example/public
            ServerAlias *.example.dev

            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development

            <Directory /home/jkendall/public_html/example/public>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The setup above will serve example.dev locally without any problem. It will also serve test.example.dev, but test.example.dev returns the default Apache "It works!" index.html from /var/www rather than my index.php in /home/jkendall/public_html/example/public. The solution in this Server Fault thread suggests that address=/.example.dev/127.0.0.1 would resolve my problem, but when I try that, restarting dnsmasq fails with the error message:

        dnsmasq: error at line 62 of /etc/dnsmasq.conf

    For grins, I moved my project over to /var/www/example and modified the vhost appropriately. I got the same result as described above. At this point I'm not sure what other steps I can take to resolve the issue. Thoughts?
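    One hedged observation: the fact that test.example.dev reaches Apache at all (and gets the default page) means dnsmasq is already resolving the wildcard; address=/example.dev/... covers all subdomains, and the leading-dot form is simply rejected by some dnsmasq versions, hence the line-62 error. That leaves Apache vhost matching, where on 2.2 a missing NameVirtualHost *:80 makes the first-loaded vhost (the default site in /var/www) win. A pair of checks that separate the two layers:

        # does dnsmasq answer for an arbitrary subdomain?
        dig +short test.example.dev @127.0.0.1     # expect 127.0.0.1

        # which vhost does Apache think owns that Host header?
        apache2ctl -S
        curl -s -H "Host: test.example.dev" http://127.0.0.1/ | head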

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all with identical hardware. What I'd like to do is install Windows 7 (64-bit), join them to the domain, install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so a machine can be restored back to that image at some future date.

    I understand this involves at least sysprep, but I am confused after reading tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and I don't remember it being this bad. Surely things have improved in a decade? There are also products that involve network servers running deployment software, network boot, etc. That is way more than I want to set up.

    My systems all have identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
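    For scale: the built-in minimum, with no WAIK and no deployment server, is one sysprep command on the reference PC followed by any disk-imaging tool of choice:

        :: elevated prompt on the reference machine
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

    One caveat worth knowing before choosing this route: /generalize removes the domain membership, so each clone joins the domain after imaging (manually, or via an unattend.xml answer file, which is where WAIK re-enters the picture for anyone who wants the name-prompt-and-join step automated).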

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service, but I am absolutely clueless about how much it will cost.

    The web service will handle:

    - login of users
    - cataloging of our user base
    - holding minimal profile information for users (the only binary data is a display picture, which will be < 20k each)
    - performing some very minor calculation/algorithm before returning results
    - all of the above communicated to the server from a smartphone (iPhone/BlackBerry/Android)

    Bandwidth requirements: we want to handle up to 10k users throughout the day. I predict 10k users * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month.

    Space requirements: data will be in a SQL database. I predict 1MB/user * 10k users = 10GB + overhead. In other words, space is not a big issue.

    Software requirements (unless someone knows an alternative): Windows Server 2008 + IIS, MSFT SQL Server.

    Note: this is 100% new to me, so please hit me with all you've got. Do I need Windows Server, or are there alternatives? Is it better to get multiple cheap servers to distribute load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!

  • Trying to run QEMU with a file as hda

    - by Felix
    I'm trying to run QEMU and use a simple file on the host system as the guest's hard drive. Here's what I attempted so far:

        $ dd if=/dev/zero of=/home/felix/vm/archlinux.img bs=1MB count=8192
        8192+0 records in
        8192+0 records out
        8192000000 bytes (8.2 GB) copied, 86.6054 s, 94.6 MB/s

        $ qemu -hda /home/felix/vm/archlinux.img -cdrom archlinux-2009.08-netinstall-i686.iso -boot d

    Then I try to install Arch Linux to that file. It goes pretty well (it's able to format the image, from what I can tell) until I start installing packages, when I get package errors (screenshot not reproduced here). And, of course, everything goes downhill from there: unable to mount the partition, corrupted files, and so on. What am I doing wrong?

    Note: I'm just doing this for entertainment purposes. I don't intend to actually use this on servers or anything. The only use I can think of for this kind of installation would be to actually get an 8GB USB stick and dd that file to it, and wham! You have a bootable stick with a fully fledged and customized OS, without torturing the stick through the installation.
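    Two hedged variables worth removing from the experiment: guest RAM and the image itself. QEMU of that era defaults to 128 MB of guest memory, which is a classic cause of installers failing mid-package-extraction, and qemu-img is the idiomatic way to create the disk (it also rules out any short-write mistake from dd):

        $ qemu-img create -f raw /home/felix/vm/archlinux.img 8G
        $ qemu -m 512 -hda /home/felix/vm/archlinux.img -cdrom archlinux-2009.08-netinstall-i686.iso -boot d

    If corruption persists with plenty of RAM and a fresh raw image, the next suspects would be the host filesystem and the ISO checksum.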

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail. Here it goes:

    I have an Ubuntu server set up using virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up using another one of my sub-domains, ex.example.com.

    The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain, because I can ping/dig it while SSH'd in. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox.

    I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the shared parent domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only treat the exact name (mail.example.com) as local, and otherwise send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)?

    I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
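    A hedged place to look: Postfix only consults DNS for domains it does not consider its own, so the question becomes which of its domain lists ex.example.com (or a wildcard/regexp covering example.com) appears in. The lookups below are standard Postfix; the MySQL map file name is an assumption from a typical Postfix+MySQL virtual setup:

        postconf mydestination virtual_mailbox_domains virtual_alias_domains relay_domains

        # does the MySQL virtual-domain table claim the Exchange subdomain?
        postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-domains.cf

    If the second query returns anything, Postfix will keep such messages local and bounce them instead of following the MX records, which matches the local-lookup behavior described.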

  • AWS lighttpd: Sending a copy of requests to a test server

    - by Martin
    I have a load-balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine).

    I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current servers, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do is mirror a fraction of the incoming traffic to the new version of the service, so I can do some comparisons between the original version and the new version based on real traffic.

    So I was thinking I could modify one box behind the ELB to duplicate its traffic to test1. Specifically, I was thinking I could modify the configuration of lighttpd so that each request is mirrored/duplicated: the original service keeps responding as before, but a mirror request is sent to test1 and the reply is just dropped. Unfortunately I have not been able to work this out.

    Any ideas on how I could mirror the requests from one box to itself and test1? Or any other ideas for testing?
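    A hedged low-tech alternative if lighttpd itself cannot duplicate requests: since lighttpd is already logging every request, the access log can be tailed and replayed against test1 with the responses thrown away. This only faithfully mirrors GETs (replaying POSTs would need request bodies), and the log path, field positions (common/combined log format), and test host name are all assumptions:

        tail -F /var/log/lighttpd/access.log | \
          awk '$6 == "\"GET" { print $7 }' | \
          while read path; do
            curl -s -o /dev/null "http://test1.internal${path}" &
          done

    It is not a byte-exact mirror (timing and headers differ), but for comparing the two service versions against real traffic shapes it is often enough.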
