Search Results

Search found 15439 results on 618 pages for 'wls configuration'.

Page 512/618

  • Swapping RAID sets in and out of the same controller

    - by hazymat
    This is a really simple question, and the answer is probably encoded in various Wikipedia articles, however my question is reasonably specific, and I need a bulletproof answer! I'm not sure if my question pertains to hardware RAID in general, or to the specific RAID controller I'm working on. Either way it is the Dell SAS 6/iR (this is an LSI SAS1068E chipset). I simply want to:
    1. Remove a set of striped (RAID 0) disks from this RAID controller in a server.
    2. Put in another set of disks, and create a RAID 1 array (or create a new 'virtual disk', as they call it in the SAS 6/iR manual).
    3. Do stuff with the new RAID 1 array.
    4. Have the option of putting back the old set of disks (the RAID 0 striped ones).
    I am quite sure this is possible, but I need some form of reliable, evidence-based answer as it's for a client of mine, and I need to migrate their data safely. The question: can I actually do the above? Does the RAID configuration get stored on the disks themselves, or in the hardware controller? Is any data stored in the hardware controller? If there is any chance I cannot completely restore operation of the first set of disks I removed, then I need to know about it! The manual alludes to the answer to this question (see page 45 of this document), and talks about activating an array of disks. I just need someone to confirm I can definitely do the above. See, simple question, right? :)

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:
    1. Create a user that is allowed to ssh
    2. Create a new MySQL database and user
    3. Create a virtual host for the application
    4. Log in with the new user, do a git checkout of the application (in the right location)
    5. Create tables in the new database, and add some init data
    6. Add some cron jobs
    7. Create a first user that can log in
    8. Add this new instance to capistrano
    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
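    A rough sketch of how the bash-script idea could start, covering the listed steps on a Debian/Ubuntu-style box; the repo URL, vhost template path and credential handling are placeholders, not taken from the question:

      #!/bin/bash
      # Hypothetical per-customer provisioning sketch; adjust paths, repo URL and templates.
      set -e
      CUSTOMER="$1"                       # e.g. acme
      DOMAIN="$CUSTOMER.example.com"      # placeholder domain scheme
      DBPASS="$(openssl rand -hex 12)"

      # 1. SSH user
      useradd -m -s /bin/bash "$CUSTOMER"

      # 2. MySQL database and user (names without special characters assumed)
      mysql -u root -p"$MYSQL_ROOT_PW" -e "CREATE DATABASE $CUSTOMER; GRANT ALL ON $CUSTOMER.* TO '$CUSTOMER'@'localhost' IDENTIFIED BY '$DBPASS';"

      # 3. Virtual host from a template, then enable it
      sed "s/__DOMAIN__/$DOMAIN/g" /etc/apache2/sites-available/template > "/etc/apache2/sites-available/$DOMAIN"
      a2ensite "$DOMAIN" && service apache2 reload

      # 4. Check out the application as the new user
      su - "$CUSTOMER" -c "git clone git://example.com/app.git ~/app"

      # 5. Schema and seed data
      mysql -u "$CUSTOMER" -p"$DBPASS" "$CUSTOMER" < "/home/$CUSTOMER/app/db/schema.sql"

      # 6. Cron jobs shipped with the app
      crontab -u "$CUSTOMER" "/home/$CUSTOMER/app/config/crontab"

    A thin web form could then just call this script with the domain and default-user details; a matching delete script would walk the same steps in reverse.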

    Read the article

  • php mail() function painfully slow on local development machine

    - by Michael B
    Background: If you have set up a local Apache server for development purposes, you may have run into the problem where sendmail takes a long time (at least one minute) to send emails. This is extremely frustrating if you are trying to debug a problem with an email you have generated. There are several forum posts on the internet that discuss this problem, but none of them described what to do in enough detail for my limited knowledge. Here are the steps that worked for me:
    1. Find your hostname (in case you've forgotten it) using this command:
       :~$ cat /etc/hostname
       myhostname
    2. Edit the file /etc/hosts and make sure the first line is the following:
       127.0.0.1 localhost.localdomain localhost myhostname
    3. Edit the sendmail configuration file (/etc/mail/sendmail.cf on Ubuntu) and uncomment the line:
       #O HostsFile=/etc/hosts
    4. Restart the computer. The computer should boot up much faster now and the mail() function should return almost immediately. HOWEVER, the emails won't actually be sent unless you follow step 5.
    5. You must now use the sendmail '-f' option whenever using the mail function. For example:
       mail('[email protected]', 'the subject', 'the message', null, '[email protected]');
    My question for my fellow serverfaulters is: what further changes can be made so that I don't have to use the sendmail -f option? Although it's not very hard to add the -f option, it is a problem when your CMS (such as Drupal) does not use the -f option when sending mail. You would need to hack a core module to add this option.
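    One hedged way around the -f requirement is to bake the envelope sender into PHP's sendmail_path, so every mail() call (including Drupal's) picks it up without code changes; the php.ini path and the sender address below are assumptions for a stock Ubuntu/Apache mod_php setup, not taken from the question:

      # Assumed php.ini location for mod_php on Ubuntu; adjust to your install.
      # Sets a fixed envelope sender (-f) for all PHP-generated mail.
      sudo sed -i "s|^;\?sendmail_path.*|sendmail_path = /usr/sbin/sendmail -t -i -f [email protected]|" /etc/php5/apache2/php.ini
      sudo service apache2 restart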

    Read the article

  • Connection reset to some websites

    - by user143271
    I'm using a 2Wire 3600HGV modem/router. Starting this afternoon, any time I try to access anything from i.imgur.com I get "The connection to i.imgur.com was interrupted" in Chrome, and the actual error is Error 101 (net::ERR_CONNECTION_RESET). It's network-wide (tested with multiple browsers on multiple computers and phones). I can access imgur.com just fine, but nothing from its content server i.imgur.com. If I disable wifi on my phone and use its 4G connection, I can access it just fine, so obviously imgur isn't down. I haven't changed any configuration on the router, and I have tried changing DNS servers (I tried Google and OpenDNS). It also seems that imgur is not the only site; howtogeek and a couple of others seem to have the same problem. It looks like they are all EdgeCast CDN content servers, but not all EdgeCast CDN servers fail. Tumblr, for instance, works just fine. Does anyone have any idea what would be causing this?
    Edit #1: Related to the EdgeCast remark, it would appear that this is a specific EdgeCast server: gs1.wpc.edgecastcdn.net. Tumblr's content is on gs1.wac.edgecastcdn.net, so it might be on a different server.
    Edit #2: These sites all respond to ping just fine as well.
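    A few hedged checks that can help narrow down whether this is DNS, routing, or an MTU/reset problem on the router's side; the hostnames are the ones from the question, everything else is a generic diagnostic:

      # Does the name resolve the same way through different resolvers?
      dig +short i.imgur.com
      dig +short i.imgur.com @8.8.8.8

      # Where does the connection die? -v shows whether TCP connects before the reset.
      curl -sv http://i.imgur.com/ -o /dev/null

      # MTU/fragmentation test (Linux ping): 1472 data bytes + 28 bytes of headers = 1500.
      ping -c 3 -M do -s 1472 i.imgur.com

      # Compare the path to the failing CDN node vs. the working one.
      traceroute gs1.wpc.edgecastcdn.net
      traceroute gs1.wac.edgecastcdn.net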

    Read the article

  • Could I have destroyed the partitioning scheme/filesystem of HDDs with an external hard drive case with a built-in RAID controller?

    - by th3m3s
    I had just recently bought a Fantec QB-35US3R to have a nice box on my desk to make backups to. Along with the HDD bay I had ordered some 4TB HDDs to run in RAID 5, which is handled by the hardware RAID controller of the Fantec HDD bay. The QB-35US3R arrived a few days before the hard drives, so I got impatient and had the idea to put three old 1TB disks in the Fantec device, just to test it... Long story short: I had made a backup of the most important data on these three disks before they broke. I had set the configuration scheme to RAID 3 on the Fantec device. It seems that the Fantec RAID controller has "somehow" destroyed the partitioning scheme or the file system, because when put into a HDD docking station, the disks get recognized by the OS (Ubuntu/Linux) but are not mountable anymore. I tried to recover the data from one HDD via gParted (parted), which ran for some hours without success. Here I stopped before trying other tools, because I read that the longer a hard drive runs after the partitioning got destroyed, the worse it gets. What could the HDD bay have done to my lovely hard drives? Is there some routine a RAID controller executes when it wants to create a RAID system, like erasing the partition table (seems implausible to me) or writing some information to every hard drive in the RAID (seems more likely to me)? Is there a chance to recover the data from these HDDs, or is the change a RAID controller makes so significant that no software can help?
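    If the drives still read at the block level, a hedged recovery path is to image them first and then let a partition-recovery tool search for the old table; testdisk and ddrescue are the standard tools for this, and the device names and target paths below are assumptions:

      sudo apt-get install testdisk gddrescue

      # 1. Work on an image, not the original disk (assumes /dev/sdb is one of the 1TB disks
      #    and /mnt/space has enough free room).
      sudo ddrescue -n /dev/sdb /mnt/space/disk1.img /mnt/space/disk1.log

      # 2. Let TestDisk scan the image for lost partitions / filesystems.
      sudo testdisk /mnt/space/disk1.img

      # 3. If only the partition table was overwritten, a read-only mount of a found
      #    partition (or photorec on the image) may get the files back.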

    Read the article

  • What characters can be safely used for naming files on unix/linux?

    - by Eric DANNIELOU
    Before yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)?
    PS: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).
    Edit #1: (By the way, I don't use upper-case letters for file names. I don't remember why. But for a few months I have had production problems with upper-case letters: some OSes do not support ASCII!) Here's what happened yesterday at work: as usual, I had to create a self-signed SSL certificate. As usual, I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then comes the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt, example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in Apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworkers/vendors/... could run some script using rm, find, xargs that would lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and already one answer talks about disaster.
    Edit #2: Just figured out that ':' does not need to be escaped. Does anyone know why it is not used in file names?
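    Whatever characters end up in the names, scripts stay safe if they never let the shell word-split or glob-expand them; a small sketch of the defensive pattern (the paths are examples only):

      # Quote expansions and use NUL-delimited pipelines; then spaces, quotes and even
      # newlines in file names cannot explode into extra arguments for rm/find/xargs.
      find /srv/certs -name '*.example.com.*' -print0 | xargs -0 ls -l

      # Deleting: pass names as single, quoted arguments (or let find do the deleting).
      rm -- "*.example.com.key"        # removes a file literally named with an asterisk
      find . -name '*.tmp' -type f -delete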

    Read the article

  • Install Windows 8 on SSD and Programs & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. On my old configuration, I had Windows 7 installed on my 60 GB SSD, and my programs and user data on my 1 TB HDD, thanks to relative links. Yet while installing Windows 8 on my SSD, it created a small "system related" partition on my HDD. Plus, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive. I read a lot about optimizing Windows 8 for SSD, putting Users on another drive, and very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried:
    - http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html
    - Booting in Audit mode and using an XML file to relocate Users: this didn't work, as the version specified in the file is a test one and I don't know what to enter for the final release.
    - Booting with the install DVD in repair mode to copy the Users folder and create a relative link: this resulted in an error on the logon screen while entering my password, saying that "The profile can't be loaded" (rough translation of my error from French to English).
    Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.

    Read the article

  • How to push changes from Test server to Live server?

    - by anonymous
    As a beginner, I finally noticed the issue with making changes to the live server I've been working on, now that I have a couple of users on it, since I bring it down so often. I created an EC2 image of my live server and set up a separate instance on EC2, so now I have 2 EC2 instances, Stage and Production. I set up GitHub and push changes to stage and test my code there, and when it's all done and working, I push it to the production branch, and everything is good. There is a slight issue here, since I name my files config_stage.js and config_production.js and set up .gitignore on each server, and in my code I have it read the ENV flags and set up the appropriate configs; is this the correct approach?
    And my main question is: how do you keep track of non-code changes to the server? For example, I installed HAProxy, Stunnel, Redis, MongoDB and several other things onto the Stage server for testing, and now that it's all working and good, how do I deploy them to production? Right now, I'm just keeping track of everything I installed and copying configuration files over, which is very tedious and I'm afraid I may have missed a step somewhere. Is there a better way to port these changes over from my test server to my live server?
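    A hedged, low-tech way to keep the non-code state reproducible is to put /etc under version control and export the package list, so "what is installed and how it is configured" can be replayed on the production instance; the package names are the standard Ubuntu ones:

      # On the Stage server: track config files and the installed-package set.
      sudo apt-get install etckeeper           # puts /etc in a git repo, auto-commits on apt runs
      dpkg --get-selections > ~/packages.list

      # On the Production server: replay the package set, then diff /etc against Stage.
      sudo dpkg --set-selections < ~/packages.list
      sudo apt-get update && sudo apt-get dselect-upgrade

    This is essentially what configuration-management tools such as Puppet, Chef or Ansible formalize; the sketch above is the minimal version of the same idea.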

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple; I just use the private domain names/IPs. But I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options:
    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, nor do I have to pay for the traffic; redundancy in the monitoring solution (I could monitor the Nagios servers). Cons: two installations to deal with, and I'd need to run another server instance.
    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security (the security group will have to have NRPE (5666) opened for one source IP), and paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.
    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
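    Option 2 boils down to opening NRPE from one known source address and checking over the public endpoint; a hedged sketch with placeholder group names, IPs and hostnames (the plugin path is the usual Debian/Ubuntu one, and the old EC2 API tools syntax shown may differ from your tooling):

      # On the EU hosts' security group: allow NRPE (5666/tcp) only from the US Nagios box.
      ec2-authorize eu-web-sg -P tcp -p 5666 -s 203.0.113.10/32

      # From the Nagios server, check a remote EU host over its public DNS name.
      /usr/lib/nagios/plugins/check_nrpe -H ec2-198-51-100-7.eu-west-1.compute.amazonaws.com -c check_load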

    Read the article

  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps install a web application. The script itself just downloads a zip archive, extracts it and changes the permissions of the extracted files to the ones needed to run the webapp. The problem now is that some users reported that after running my script, the permissions of every file in their home directory, or even on their whole computer, changed to 000 (except the actual unzipped files, which do have the correct permissions). The only lines in my script actually doing IO are these:
      URL="http://foo.com/"
      FILENAME="some.zip"
      curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null
      echo "Unzipping...\c"
      if unzip -oqq $FILENAME > /dev/null
      then
        chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
        chown -R www:www *
        rm $FILENAME
        echo "\t\t\tOK"
        exit 0
      else
        echo "\t\t\tERROR"
        exit 1
      fi
    I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000, not www/777.
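    The usual suspect is the unanchored "chown -R www:www *": if the script is started from $HOME or the unzip leaves the layout different from what is expected, the recursive chmod/chown runs against whatever the current directory happens to be. A hedged, more defensive variant of the same script (the fixed install directory is a placeholder):

      #!/bin/sh
      set -e                           # stop at the first failing command
      URL="http://foo.com/"
      FILENAME="some.zip"
      APPDIR="$HOME/webapp-install"    # hypothetical fixed install location

      mkdir -p "$APPDIR"
      cd "$APPDIR"                     # never operate on whatever directory the user was in

      curl --silent "$URL$FILENAME" -o "$FILENAME"
      unzip -oqq "$FILENAME"

      # Only touch paths inside the unzipped app, never a bare *
      chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
      chown -R www:www "$APPDIR/app"
      rm "$FILENAME"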

    Read the article

  • Windows 7 Ultimate 64-bit won't connect to my wired/wireless networks

    - by A302
    Windows 7 Ultimate 64-bit. Everything was working fine and then just stopped working. The NIC (Realtek PCIe GBE Family Controller) is enabled but does not connect to my router (cables and router ports are good). The wireless adapter (Atheros AR5007EG) is enabled, but the connection is limited (encryption type and key have been verified). A laptop running XP can connect both wired and wireless. The SSID is not being broadcast, and the option to connect to the network even if it is not broadcasting is checked. I have checked services.msc for Bonjour and did not see it listed. Network & Sharing Center does not list any active networks. Device Manager lists both devices as functioning properly. The router configuration has not been changed. A virus scan has not found anything. I would like to fix this rather than using Acronis to do a system restore. Thanks in advance for any advice offered in solving this.
    Update (26 Jan): the NIC and wireless both work when using a PCLinuxOS live CD. It appears that the problem is Windows 7 related.

    Read the article

  • Connect from Mac OS X to Windows 7 Desktop

    - by jrn
    I am trying to connect from my MacBook to my Windows 7 machine within my own network; if it works from outside my network that's a plus, but not a must-have. My Windows 7 machine is freshly installed with Windows 7 Home Premium. It runs the built-in firewall with no settings changed so far, as well as Microsoft Security Essentials. So far I have tried CoRD and Microsoft's Remote Desktop Connection to connect from my Mac to my Windows machine, without any success. I also tried disabling the firewall on my Windows machine but could not connect either; the reason I did this was to check whether a Windows firewall setting was preventing me from connecting. On top of that, I manually started the Remote Desktop Services and Remote Desktop Configuration services in services.msc. Is there anything else I have to enable for a remote desktop connection? Could there be a router setting I have to tweak? Since I do not want to connect from outside my own network, I thought I wouldn't have to do any port forwarding. The error messages I get are all connection timeouts. I can, however, ping the hostname and/or IP address. Any help would be greatly appreciated. Thanks a lot, jrn

    Read the article

  • building a debian base image

    - by Michael
    Is there a preferred way to create base images for Debian-based customized installations? We are currently going with multistrap, but although it's better than hand-crafted chroot stuff, it still has a lot of rough edges. Is there a more reliable and less error-prone way to produce a root filesystem of a Debian installation with some additional .debs installed? (I don't want to ship a Debian installer with a preseed file, though.)
    Addendum 1: To clarify things a bit: we are delivering a kind of software appliance to our customers. That is, a Debian operating system with some additional software packages, both our own and third-party ones, and some configuration changes. To ease the installation process, we have an installer that does nothing more than partitioning, copying files to the partitions and setting up grub. So it's basically an image-based installer. We are therefore running the Debian installation ourselves and just distributing the already-installed operating system. The question is about the installation part: I want that to be as easy and robust as possible, and of course it should be an automated process.
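    debootstrap (or relatives such as cdebootstrap and grml-debootstrap) is the usual lower-level alternative to multistrap for producing a clean root filesystem; a hedged sketch, with the suite, mirror, target path and package names as placeholders:

      # Build a minimal Debian root filesystem in /srv/baseimage.
      sudo debootstrap --arch=amd64 --include=openssh-server,rsync wheezy /srv/baseimage http://deb.debian.org/debian

      # Drop in your own packages and configuration inside the chroot.
      sudo cp our-app_1.0_amd64.deb /srv/baseimage/tmp/
      sudo chroot /srv/baseimage dpkg -i /tmp/our-app_1.0_amd64.deb

      # Pack the result up for the image-based installer to unpack onto the target partitions.
      sudo tar -C /srv/baseimage -czf baseimage.tar.gz .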

    Read the article

  • Formatting a former RAID 0 drive through USB

    - by EXC
    I'll try to be as specific as possible here: I was using two Hitachi 2.5" 500 GB HDDs in my Gateway P-7805u laptop in a RAID 0 configuration. The array was causing the laptop to run extremely hot, so I removed the drives and deleted the RAID array through Intel Matrix Storage Manager. I did a clean install of Windows 7 on the original 320 GB HDD that came with the laptop. I never did format the original RAID array HDDs before taking them out of the computer. Now I am attempting to format the Hitachi 500 GB RAID array HDDs externally through a USB enclosure. The external HDD drivers install on my clean-install OS, but when I go into 'My Computer' there is no external drive available. I cannot format from the command prompt because my computer will not assign a drive letter to the external HDD. The drivers install and the HDD is recognized as a Hitachi external drive, but nothing shows up in my Computer window. I need to know if there is a way to format these drives to NTFS externally.

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520
    After playing around with taking members down and up and also reloading haproxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balancing in case request-learn didn't take. Unfortunately that produced scenarios where the prefix would list svr2 but the request was balanced to a different server. I am assuming it's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie as an inserted option (not a prefix on the existing cookie), but I am thinking it would yield similar results.
    My question is: which one is checked first, appsession or cookie, and is it an immediate match after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads and has much better persistence reliability. Appsession, I assume, takes less CPU, since it's reading rather than writing.
    (Bonus question: is there a way to inspect the appsession/cookie table map? "show table" on the socket doesn't show anything except stick-tables.)
    Many thanks in advance, -Nick
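    For the bonus question: the appsession table itself is not exposed, but on recent HAProxy versions stick tables and the rest of the runtime state can be read through the admin socket; a hedged sketch, assuming a stats socket is declared in the global section and the backend is called my_backend:

      # In haproxy.cfg, global section (assumption; add it if not already present):
      #   stats socket /var/run/haproxy.sock level admin

      # Dump a backend's stick table through the socket:
      echo "show table my_backend" | socat unix-connect:/var/run/haproxy.sock stdio

      # General runtime info and per-server state:
      echo "show info" | socat unix-connect:/var/run/haproxy.sock stdio
      echo "show stat" | socat unix-connect:/var/run/haproxy.sock stdio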

    Read the article

  • VMware Server 64-bit on Ubuntu 9.10 64-bit with P2V Windows 2003 SBS: poor network speed

    - by RobertHC
    The configuration is Ubuntu (kernel 2.6.31-21) 64-bit running VMware Server 2.0.2 64-bit, the latest release. The hardware is a Core 2 Quad with 8 GB of RAM; the guest is Windows 2003 Server SBS, 32-bit.
    Dear friends, we have a Windows SBS 2003 machine converted from physical to virtual with the latest vCenter Converter available today (http://www.vmware.com/products/converter/). Running the P2V 2K3 SBS on VMware Server, it boots fine, but we see abnormal CPU activity and poor LAN speed. Here is what we tried: we removed all unneeded peripherals, we removed one NIC (the physical server had two NICs), we changed the .vmx so the NIC is recognized as Intel instead of AMD, we removed one CPU (the physical server had two CPUs), and we removed anything reported as a failed driver in the system event log. None of it helped, and the test results are odd.
    Here are some test results; all were made with the same file copied from different source folders. Copying from the client side (both directions, to and from the server) takes about 10 seconds. Copying the same file from the server side (again from and to the server) gives different results: from client to server the speed is again roughly 10 seconds, but the server-to-client direction is slower: double the time. Launching a simultaneous copy "from server to client" + "from client to server" from the server side results in stuck traffic: 45 seconds to finish the copy.
    VMware Tools are installed and the e1000 driver has been updated. With one processor, CPU activity still goes up and down, but much less than with two. As a test we installed Windows 2008 Standard 64-bit as a guest and repeated all the above tests with exactly the same file; the result is always 5 seconds (which matches the LAN speed). Any idea about this issue is welcome, and thank you for any help. Kind regards, R.
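    To separate a guest-networking problem from a disk or SMB problem, it may help to measure raw TCP throughput in both directions with iperf (available in the Ubuntu repositories and as a Windows build); a hedged sketch with a placeholder guest IP:

      # On the guest (or the client PC), start the listener:   iperf -s
      # From the other side, test both directions:
      iperf -c 192.168.1.50              # client -> server throughput
      iperf -c 192.168.1.50 -r           # then reverse the direction

      # On the Ubuntu host, check the offload settings of the NIC the bridge rides on:
      ethtool -k eth0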

    Read the article

  • Apache directory access with virtual host [SOLVED]

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of the directory contents. www.foobar.com/dir has 777 permissions and the .htpasswd file is chmoded 644, but I can't figure out why I am still not seeing the contents. Please give me a hint.
      ServerAdmin webmaster@localhost
      ServerName www.foobar.com
      ServerAlias www.foobar.com
      DocumentRoot /var/www/foobar
      <Directory />
          Options FollowSymLinks
          AllowOverride All
      </Directory>
      <Directory /var/www/foobar>
          Options -Indexes FollowSymLinks
          AllowOverride All
          Order allow,deny
          allow from all
      </Directory>
      ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
      <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
      </Directory>
      <Directory /var/www/foobar/dir>
          AllowOverride AuthConfig
          AuthName "Authorize yourself, please!"
          AuthType Basic
          AuthUserFile /etc/apache2/.htpasswd
          AuthGroupFile /dev/null
          Allow from All
          Order Allow,Deny
          Options +Indexes    <-- that one should be added
          Require valid-user
      </Directory>
    Solution: you have to add the line "Options +Indexes" to the /dir directory block to see the directory contents.

    Read the article

  • Windows 7 clean install becomes corrupt after reboot (repeated many fresh installs)

    - by pjotr_dolphin
    My laptop keeps crashing on boot after a clean Windows 7 install. Here is the story, and some facts.
    Computer: Samsung NP900X3C-A04HK (256GB SSD, 8GB RAM)
    OS to install: Windows 7 Ultimate SP1 (not the Samsung image, my own fresh copy of Windows)
    I purchased this laptop about a year ago and never booted it into the Windows Home edition that was installed on it; I installed Ubuntu directly on the machine. I chose full disk encryption for that install, so of course it wiped the complete disk (including the Samsung recovery partition). After some time I felt like going back to Windows, as Windows 7 is actually quite nice, so I went and bought a fresh Windows 7 Ultimate with SP1.
    Now to the tricky part. Windows installs perfectly, and after installing all Windows updates, the drivers from Samsung and the software I need, it is time to shut it down and go to bed. Starting it up again, it does not boot. These are the kinds of errors I have gotten so far (I have fresh-installed it more than a dozen times now, and tried various suggestions from threads on the net):
      Windows failed to start...
      Status: 0xc000000f
      Info: The boot selection failed because a required device is inaccessible.

      File: /boot/bcd
      Status: 0xc000000f
      Info: an error occurred while attempting to read the boot configuration data.
    And some other errors, not always the same; I don't have a record of them all. I have run different disk checks, and all say my SSD is in perfect shape.
    Note: soft reboots from the Windows menu always work and never get corrupted; it only happens when I shut down and then start it up again. Can someone help me avoid going back to Ubuntu? What can be the cause, and how can it be fixed so I do not get these problems again?

    Read the article

  • Incoming traffic while on public network

    - by zvikico
    I'm developing a web app and I need to be able to receive incoming traffic from third-party services I use. This is a classic webhooks situation: I send a request with a return address and receive the response (via HTTP) some time later at the given address. The simple solution would be to provide my external IP address and forward the incoming traffic from the router to my machine. However, I'm working in a large office and I cannot control the router configuration, so I'm looking for a different way to achieve this. I do have servers online. I could have a daemon running on one of those servers to handle the incoming traffic, and a parallel daemon on my machine that keeps an open connection with the remote daemon (preferably over ssh); when inbound traffic is received by the remote daemon, it sends it to the local one, which sends it to the correct port on my machine, as if it had been received the natural way. Is there any ready-made solution for that? PS: I'm on OS X and my server is Ubuntu. Thanks, zvikico
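    Plain SSH remote port forwarding already does most of this, so a separate daemon pair may not be needed; a hedged sketch, with the port numbers and hostname as placeholders:

      # On the laptop: expose local port 3000 (the dev app) as port 8080 on the server.
      ssh -N -R 8080:localhost:3000 [email protected]

      # On the Ubuntu server, to let the forwarded port listen on all interfaces (not just
      # 127.0.0.1), sshd_config needs:   GatewayPorts yes
      # The webhook return address can then be http://server.example.com:8080/...

      # To keep the tunnel alive unattended, autossh (in the Ubuntu repos) can supervise it:
      autossh -M 0 -N -R 8080:localhost:3000 [email protected]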

    Read the article

  • Server down at 23:26 every night

    - by miccet
    We have been having a big problem with our site's stability for the last couple of weeks, and after endless hours of troubleshooting I am not getting anywhere, so I turn to you, dear community.
    Setup: 2 x VPS servers
    - Front end: 8 cores, 8 GB RAM
    - Database: 5 cores, 3 GB RAM
    Both run Ubuntu. Ruby on Rails Enterprise Edition with Passenger 3 and Rails 2.3.11. MySQL 5.1.67.
    The problem is that each night, at exactly the same time (23:26), the SQL server suddenly shows a process list full of COMMIT statements with increasing Time values. After 30-40 seconds (it can go longer) a wave seems to get processed and the site responds for a few seconds before it repeats. During this hiccup the database server load spikes while the front end is idle. I have looked at slow queries, but am not finding any locks or other unusual queries running at this time. I have looked at iotop at the time of the stall and there is no activity from mysql. I also tried turning off the query cache and messed around with the MySQL configuration file without much change. Any ideas?
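    Since the stall starts at exactly the same minute every night, it smells like a scheduled job (backup, mysqldump, log rotation, or something on the VPS host); a few hedged checks, with the password shown as a placeholder:

      # Anything scheduled around 23:26 on either box? (crontab format is "minute hour ...")
      grep -rn "26 23" /etc/crontab /etc/cron.d/ /var/spool/cron/crontabs/ 2>/dev/null

      # Watch what MySQL is doing while the COMMIT wave builds up.
      watch -n 1 "mysqladmin -u root -pSECRET processlist"

      # InnoDB's own view of locks, pending fsyncs and log waits during the stall.
      mysql -u root -pSECRET -e "SHOW ENGINE INNODB STATUS\G" > /tmp/innodb-status.txt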

    Read the article

  • Set up layer 2 vlan between 2 data centres

    - by user41679
    Hello, our data centre provider operates two sites; we currently have equipment in one and would like to have equipment in the second. They've told me that they operate a layer 2 VLAN between the two sites over a 20 Gbit connection, and that they'd just hand me an Ethernet cable at each end to connect the locations. At the current site we have Cisco 2960-48TC-L switches, all the machines are on a 192.168.x.x subnet, and we have Cisco firewalls with which we connect to our internet provider. My question is: what would I need to do to connect the two sites? Could I just plug the Ethernet cables they provide into the Cisco switches, and have the same switches at the other end? Would I need to set up a separate internal network on the other side and connect both through the firewalls? Would the Cisco switches need special configuration? We expect to maintain a number of connections between the two sites, and each site would have its own internal DNS name like dc1.xx.com. Sorry if I'm being vague or haven't included enough information; I have a fairly good knowledge of hardware, but we're down a netops guy at the moment and I'd like to get both sites online ASAP! Thanks in advance!

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:
      Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
      start: Job failed to start
      invoke-rc.d: initscript mysql, action "start" failed.
      dpkg: error processing mysql-server-5.5 (--configure):
       subprocess installed post-installation script returned error exit status 1
      dpkg: dependency problems prevent configuration of mysql-server:
       mysql-server depends on mysql-server-5.5; however:
        Package mysql-server-5.5 is not configured yet.
      dpkg: error processing mysql-server (--configure):
       dependency problems - leaving unconfigured
    I tried a wide variety of approaches suggested by googling, involving various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with MySQL per se (though I will need to figure it out at some point); at this point my primary concern is that I am unable to apt-get install other packages! What to do?
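    The dpkg output only says the init job failed; the actual reason is usually in the MySQL or upstart logs. A hedged sequence that often gets apt unstuck on a fresh 12.04 box, safe here only because there is no data worth keeping yet:

      # 1. Find out why mysqld refuses to start.
      sudo tail -n 50 /var/log/mysql/error.log /var/log/upstart/mysql.log

      # 2. On a throwaway install, purge MySQL completely, including its data directory...
      sudo apt-get purge mysql-server mysql-server-5.5 mysql-common
      sudo rm -rf /var/lib/mysql /etc/mysql

      # 3. ...then let apt reconfigure everything from scratch and finish the held-back upgrade.
      sudo apt-get update
      sudo apt-get install mysql-server
      sudo apt-get -f install && sudo apt-get upgrade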

    Read the article

  • USB Drive that simultaneously connects to more than one computer

    - by user2499
    Background: I have a portable USB drive that I use to make sure I have access to common files whenever I am at home, at work, travelling, etc., for cases when I may not have internet/network access of any kind. In some cases I have to work simultaneously on a laptop and a desktop computer, and for those cases I usually have to unplug this USB hard drive and move it between the two.
    Question: is there a USB-based solution that would enable me to use this portable drive with two computers simultaneously? If there is not a USB-based solution, does anyone have alternative suggestions, consistent with the underlying rationale?
    Rationale: Sometimes I have to work on a desktop computer with locked-down networking capabilities (such as at the local photocopy shop), and it can be difficult to get a network configuration that allows dual-computer access without breaking things, or without accidentally making my USB drive visible to the entire network. Basically what I need is a very simple LAN that is guaranteed to work regardless of the rules or constraints set by the network administrator for wherever I happen to be at the time.
    See also: http://superuser.com/questions/99274/how-to-connect-two-computers-with-usb

    Read the article

  • Global Email Forwarding with EXIM?

    - by Dexirian
    I've been trying to find a solution to this for a while without success, so here I go. I was given the task of building a high-availability, load-balanced cluster for our two Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like server 2 to only do mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature. I created two different rsync jobs that sync, update and delete the files for mail and websites. I was able to successfully transfer one mail account from 1 to 2, and server 2 works flawlessly; all I had to do was change the MX entries to point to the new server and bingo.
    Now my problem is: some clients have their mail software configured to point to oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server, for obvious reasons. I thought of using .forward files and adding them to the home directories of the users concerned, but that would be very difficult. So my question is: is there a way to configure Exim so that it will simply forward mail to the new server? I need all the users to have their mail on server 2 without them doing anything. Thanks!
    EDIT, TO CLARIFY MY PROBLEM: Some clients have their mail pointing to oldserver.xyz instead of mail.oldserver.xyz. I want to know if I can do something to avoid modifying the clients' configuration. I would also like to know if there is a way to find out which clients aren't properly configured.

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented basic authentication for one page on the site using NginxHttpAuthBasicModule. The authentication succeeds, but Nginx doesn't resolve the link. The error log gives:
      2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"
    The actual file is found at: /home/me/live/mysite_home/live/mypage.erb
    The configuration file is:
      server {
        listen 80;
        server_name mysite.com;
        root /home/me/live/mysite_home/public;
        passenger_enabled on;

        location /mypage {
          auth_basic "Restricted";
          auth_basic_user_file htpasswd;
        }
      }

      server {
        listen 443;
        server_name mysite.com;
        root /home/me/live/mysite_home/public;
        passenger_enabled on;

        ssl on;
        ssl_certificate /etc/nginx/conf/certs/server.crt;
        ssl_certificate_key /etc/nginx/conf/certs/server.key;
        keepalive_timeout 70;

        location /mypage {
          auth_basic "Restricted";
          auth_basic_user_file htpasswd;
        }
      }
    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.

    Read the article
