Search Results

Search found 33182 results on 1328 pages for 'linux port'.

Page 178/1328 | < Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        crm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w', 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try, as "xm list" will show it for a few seconds, and if you run "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". When node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine, no problems, so I know it works in that respect. Any ideas? Thanks!
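    For reference, failover in a two-node Pacemaker/Heartbeat cluster usually also depends on a couple of cluster properties and on keeping each STONITH resource off the node it is meant to fence. A minimal crm shell sketch of those pieces follows; the property values and constraint names are assumptions for illustration, not taken from the configuration above:

        # sketch: two-node cluster properties and STONITH placement (constraint names are hypothetical)
        crm configure property stonith-enabled="true"
        crm configure property no-quorum-policy="ignore"   # a two-node cluster loses quorum when one node dies
        crm configure location loc-stonith-dhcp166 my-stonith -inf: dhcp-166
        crm configure location loc-stonith-stage my-stonith2 -inf: stage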

    Read the article

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to authoritatively be able to identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid result of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this structure member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid? The end goal of all of this is to robustly interpret the output from the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem                       blocks     quota     limit  grace   files       quota       limit  grace
             otherserver:/vol/home0/a/alice       12  52428800  52428800             4  4294967295  4294967295
             fileserver:/home/l/lars         9353032   9728000  10240000        124018           0           0

    ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check to see if it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
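    A sketch of two fallback checks that don't rely on statvfs: GNU stat in filesystem mode prints a filesystem ID, and /proc/mounts shows the server-side export each mountpoint comes from. The paths are the ones from the example above; whether %i comes back non-zero over NFS on this client is the same open question:

        stat -f -c '%i  %n' /home/lars /home/bob      # filesystem ID per mountpoint
        awk '$2 == "/home/lars" || $2 == "/home/bob" {print $1, $2}' /proc/mounts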

    Read the article

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6), and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible. It needs to write images to that server that are uploaded via the app. Background processes on the db server need to read and write to the same directory, as they perform resizing operations on the uploaded files. Right now none of this is working, because the user ids are different between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user id of this one user in one of the systems, or will that cause mass chaos? UPDATE: The fix worked, at least on local devices. Unfortunately the device I have mounted to the main db server still thinks my user id is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
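    A hedged sketch of the usual way to line the accounts up: change the UID on one box, then re-own any files left behind under the old numeric id. The user name "appuser" is hypothetical; the 502/506 values come from the UPDATE above. The account should have no running processes while this is done:

        usermod -u 506 appuser
        find / -xdev -user 502 -exec chown -h appuser {} \;   # repeat for other local filesystems if needed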

    Read the article

  • How can I check detailed memory information on linux?

    - by user35153
    I know that I can check CPU info via cat /proc/cpuinfo and memory info via cat /proc/meminfo, but cat /proc/meminfo yields a result like:

        MemTotal:       66098352 kB
        MemFree:          329152 kB
        Buffers:          632432 kB
        Cached:         62619692 kB
        SwapCached:            0 kB
        Active:          6425444 kB
        Inactive:       58717276 kB
        SwapTotal:       1951888 kB
        SwapFree:        1951796 kB
        Dirty:             38416 kB
        Writeback:             0 kB
        AnonPages:       1890268 kB
        Mapped:            12624 kB
        Slab:             464580 kB
        SReclaimable:     275812 kB
        SUnreclaim:       188768 kB
        PageTables:         7524 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        CommitLimit:    35001064 kB
        Committed_AS:    3860248 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:      125636 kB
        VmallocChunk: 34359612527 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB

    But I want to learn the clock speed of the memory modules (in MHz?). How can I get that info? Thanks
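    /proc/meminfo only reports sizes and usage; DIMM clock speeds live in the DMI/SMBIOS tables. A short sketch of reading them (requires root and the dmidecode package; whether the BIOS actually populates the speed fields varies by machine):

        sudo dmidecode --type memory | grep -i -E 'speed|size|locator'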

    Read the article

  • Copy File Contiguously to Disk from OSX/Unix/Linux to FAT32 FS?

    - by alharaka
    So the Sysinternals guys have that cool contig.exe utility that lets me ensure a file is contiguous. I need to copy over ISO files to a FAT32 USB flash key. Grub4DOS requires the files to be contiguous, but I do not have Windows access at the moment. Is there a way to copy a file so it is contiguous on the target drive, or a tool like the aforementioned that will make an existing file contiguous? Again, I need it on FAT32, and there lies the rub.
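    A sketch of one approach, assuming the key can be reformatted: files copied one at a time onto a freshly formatted FAT32 filesystem normally land contiguously, and filefrag can confirm it afterwards. The device name is a placeholder, and the -B (FIBMAP) fallback working on FAT is an assumption about the kernel and e2fsprogs in use:

        sudo mkfs.vfat -F 32 /dev/sdX1            # /dev/sdX1 is a placeholder for the USB key
        sudo mount /dev/sdX1 /mnt/usb
        cp my.iso /mnt/usb/
        sudo filefrag -B /mnt/usb/my.iso          # "1 extent found" means the file is contiguous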

    Read the article

  • How do I convert a Linux disk image into a sparse file?

    - by endolith
    I have a bunch of disk images, made with ddrescue, on an EXT partition, and I want to reduce their size without losing data, while still being mountable. How can I fill the empty space in the image's filesystem with zeros, and then convert the file into a sparse file so this empty space is not actually stored on disk? For example:

        > du -s --si --apparent-size Jimage.image
        120G    Jimage.image
        > du -s --si Jimage.image
        121G    Jimage.image

    This actually only has 50G of real data on it, though, so the second measurement should be much smaller. This supposedly will fill empty space with zeros:

        cat /dev/zero > zero.file
        rm zero.file

    But if sparse files are handled transparently, it might actually create a sparse file without writing anything to the virtual disk, ironically preventing me from turning the virtual disk image into a sparse file itself. :) Does it? Note: for some reason, sudo dd if=/dev/zero of=./zero.file works when cat does not on a mounted disk image.
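    A sketch of one common sequence, assuming the image can be loop-mounted read-write: zero the free space inside the mounted image, unmount, then either make a sparse copy or punch holes in place. The mount point is a placeholder, and fallocate --dig-holes needs a reasonably recent util-linux:

        sudo mount -o loop Jimage.image /mnt/img
        sudo dd if=/dev/zero of=/mnt/img/zero.fill bs=1M    # runs until the filesystem is full, then errors; that is expected
        sudo rm /mnt/img/zero.fill
        sudo umount /mnt/img
        cp --sparse=always Jimage.image Jimage.sparse       # sparse copy of the zeroed image
        # fallocate --dig-holes Jimage.image                # in-place alternative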

    Read the article

  • Making Apache 2.2 on SuSE Linux Case-Insensitive. Which is the better approach?

    - by pingu
    Problem: all of the following

        http://<server>/home/APPLE.html
        http://<server>/hoME/APPLE.html
        http://<server>/HOME/aPPLE.html
        http://<server>/hoME/aPPLE.html

    should pick up http://<server>/home/apple.html. I implemented two solutions and both are working fine; I'm not sure which one is better (performance-wise). Please suggest. Also, the directive "CheckCaseOnly on" never worked.

    Option 1:
    a) Enable mod_speling: in /etc/sysconfig/apache2, APACHE_MODULES="rewrite speling apparmor......"
    b) Add the directive "CheckSpelling on" (either in .htaccess or in httpd.conf).

    In httpd.conf:

        <Directory srv/www/htdcos/home>
            Order allow,deny
            CheckSpelling on
            Allow from all
        </Directory>

    or in .htaccess inside /srv/www/htdcos/home (your content folder):

        CheckSpelling on

    Option 2:
    a) Enable mod_rewrite.
    b) Write the rule in the vhost (you cannot use RewriteMap in a directory context; check the Apache docs):

        <VirtualHost _default_:80>
            <IfModule mod_rewrite.c>
                Options +FollowSymLinks
                RewriteEngine on
                RewriteMap lc int:tolower
                RewriteCond %{REQUEST_URI} [A-Z]
                RewriteRule (.*) ${lc:$1} [R=301,L]
            </IfModule>
        </VirtualHost>

    This changes the entire request URI to lowercase. I want this to happen only for a specific folder, but RewriteMap doesn't work in .htaccess. I am a novice with regex and mod_rewrite. I need a RewriteCond which checks only /css/. Can anybody help?
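    A sketch of restricting the rewrite to /css/ paths, keeping the RewriteMap in the vhost where it is allowed; only the condition changes, and whether /css/ is the intended prefix is an assumption:

        RewriteMap lc int:tolower
        RewriteCond %{REQUEST_URI} ^/css/.*[A-Z]
        RewriteRule (.*) ${lc:$1} [R=301,L]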

    Read the article

  • How to Change SharePoint Port

    - by Jack Levin
    I have a SharePoint web application and I want to change the port it is running on. I am not sure how to do that, as it seems that it is not enough to just go to the IIS console and change the web application port. I guess I need to make certain changes in the SharePoint Central Administration console as well.

    Read the article

  • FMS port problem

    - by Elamurugan
    Hi, I installed FMS (Flash Media Server) on my server, which is already running Apache, so I want to run FMS on a separate port, 8083. While installing I gave the port number as 8083, and it now shows up in fms.ini as:

        ADAPTOR.HOSTPORT = 1935,8083

    I think it should listen on either of these ports, but it is not working when I access domain.com:8083 or domain.com:1935. Can somebody tell me why it is not working? Thanks in advance
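    A first thing to check, sketched below: see what is actually listening on those ports, since Apache (or another service) already bound to 8083 would keep the FMS adaptor from starting on it. The log location depends on the FMS install prefix, so that part is left generic:

        sudo netstat -tlnp | grep -E ':(1935|8083)\b'
        # then look in the FMS logs directory for adaptor/bind errors after restarting FMS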

    Read the article

  • Is Linux Virtual Server old, or is it standing still?

    - by Vimvq1987
    Based on this question: http://serverfault.com/questions/124905/widely-used-load-balancing-solutions, LVS may be the right solution for my problem. But when I went to its homepage, http://www.linuxvirtualserver.org/, I found that LVS has not been updated since November 2008. The world moves fast, and I don't know whether LVS is obsolete or not. Is LVS standing still, or are there better solutions to replace it? Thank you so much.

    Read the article

  • Understanding what needs to be in place for a server to send outgoing email from a linux box

    - by Matt
    I am attempting to configure an openSUSE 11.1 box to send outgoing email for a domain that the same server is hosting. I don't understand enough about SMTP servers and the like to know what needs to be in place and working. The system already had Postfix installed, and I confirmed it was running via:

        > sudo /etc/init.d/postfix status

    I examined the Postfix config file in /etc/main.cf and configured a couple of items regarding the domain/host name and such, but left it largely default. I attempted to send an email from the command line with the following command:

        > echo "test 123" | mail -s "test subject" [email protected]

    where differentdomain.com is not the same domain as the one being hosted on the server. However, the email never reaches the target account. Any suggestions?

    EDIT: In the Postfix log (/var/log/mail.info; there's nothing in .err) I see that Postfix is trying to connect to what appears to be a different SMTP server on our network, with the connection refused:

        connect to ourdomain.com.inbound15.mxlogic.net[our ip address]:25: Connection refused

    However, I can't figure out why it is 1) trying to connect to that server and 2) not just sending the messages itself... I mean, isn't Postfix an SMTP server? I did a grep -ri on ourdomain from /etc and see no configuration files anywhere telling it to do this. Why is it?
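    A hedged sketch of the usual first checks: whether Postfix itself is configured to relay (which would explain the mxlogic connection), what MX record the destination domain actually resolves to, and what the mail queue says about the stuck message. The commands are standard Postfix and DNS utilities; the interpretation is left open:

        postconf relayhost transport_maps     # empty values mean no explicit relay is configured
        dig +short MX differentdomain.com     # shows where Postfix will try to deliver for that domain
        postqueue -p                          # lists deferred messages and the reason for each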

    Read the article

  • How to copy file preserving directory path in Linux?

    - by gasan
    I have Eclipse projects with a ".project" file in them; the directory structure looks like 'myProject/.project'. I want to copy these '.project' files to another directory, but I want the enclosing directory name to be preserved. Let's say I have 'a/myProject/.project' and I want to copy 'myProject/.project' to 'b', so that it becomes 'b/myProject/.project', but 'b/myProject' doesn't exist yet. When I try, in a:

        cp -r ./myProject/.project ../b

    it copies only the '.project' file itself, without the 'myProject' directory. Please advise.
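    Two common ways to keep the enclosing directory are sketched below, run from inside 'a'; the rsync variant is useful when there are many projects. These are standard GNU cp and rsync options:

        cp --parents myProject/.project ../b          # recreates myProject/ under ../b
        rsync -avR ./*/.project ../b/                 # -R (--relative) preserves the path after the leading ./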

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

        /           vgA/lvRoot                    [7.5G/50G]
        /tmp        vgB/lvTmp                     [195M/30G]
        /var        vgB/lvVar                     [780M/30G]
        swap        vgB/lvSwap                    [16.00 GB]
        /media1     vgC/lvMedia1                  [400G/975G]
        /media2     vgC/lvMedia2                  [75G/295G]
        /boot       partition (no volume group)   [95M/200M]
        /video      partition (no volume group)   [450G/950G]
        /backups    vgD/lvBackupTarget            [800G/925G]
        /home       vgE/lvHome                    [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:

    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
       2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work. (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: This article, "Creating Portable DiskSafes With LoopbackFS And LVM Snapshots", seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
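    As a reference point for question 3, a minimal sketch of the snapshot-then-copy step for one volume, using vgE/lvHome from the list above; the snapshot size, mount points and backup target are assumptions, and the same pattern would be repeated per logical volume:

        lvcreate --size 10G --snapshot --name lvHome_snap /dev/vgE/lvHome
        mount -o ro /dev/vgE/lvHome_snap /mnt/snap
        rsync -aHAX /mnt/snap/ /mnt/usb-backup/home/      # file-level copy; dd of the snapshot device is the image-level alternative
        umount /mnt/snap
        lvremove -f /dev/vgE/lvHome_snap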

    Read the article

  • What firewall Linux distro appliance could track internet usage per device in my home?

    - by GregH
    Hello. Does anyone know of a community-edition/open-source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., but which can track internet usage per device in my home? So:

    a) Mandatory - track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
    b) Mandatory - a report/graph that would give a breakdown of internet usage, per device (e.g. IP address), per day
    c) Desirable - as in b) above, but per hour
    d) Desirable - a realtime graph (e.g. 5-minute measurement intervals or something) that shows current internet usage per device
    e) Mandatory - handles all internal <=> internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
    f) Mandatory - no explicit settings required in clients, i.e. a transparent monitoring concept (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
    g) Mandatory - easy "appliance"-like installation onto a dedicated low-spec PC

    Thanks in advance

    Read the article

  • How to set up a linux user that can only access a repository via ssh?

    - by GJ
    I have a mercurial repository on a secure server, to which I want to grant secure access to an external user. I added for him a user account and publickey ssh authentication so that now he could push/pull changesets via ssh. My question is: how can I make this new user account completely disabled from doing anything or accessing any data on the server other than accessing the repository? E.g. he shouldn't even have the possibility to enter an interactive shell session. Thanks
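    A sketch of one common way to lock such an account down, using a forced command in his authorized_keys entry so the key can only drive Mercurial's ssh protocol (no interactive shell, no forwarding). The repository path and key are placeholders; note that the account still needs a real login shell for the forced command to execute:

        command="hg -R /srv/hg/myrepo serve --stdio",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... externaluser@example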

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this, and I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and onto port 7000 on x.x.x.218. Here is some ASCII art:

        |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
                 3128                      6999             7000                                            80

    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth). This is because it's a satellite link. Here's the ssh tunnel I've been using:

        ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N

    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create an ssh tunnel on x.x.x.218? I use iptables to stop Squid on x.x.x.224 from reading port 80, but to feed from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.

    Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:

        14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
        14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0

    Update 2: When I run the second command, I get the following error in my browser:

        ERROR
        The requested URL could not be retrieved

        The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php

        Zero Sized Reply
        Squid did not receive any data for this request.

        Your cache administrator is webmaster.
        Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)

    remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:

        root@remote-site:~# tcpdump -i any port 7000 -nn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        channel 2: open failed: connect failed: Connection refused
        (the same line repeats several more times)
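    A sketch of one way to push Squid's traffic into the tunnel without extra iptables rules: run the tunnel from x.x.x.224 so that port 6999 is a local listener forwarding to Squid's port 7000 on x.x.x.218, and point the local Squid at it as a parent. The directives are standard Squid 2.7 configuration; whether this matches the intended iptables setup is an assumption:

        # on x.x.x.224: local port 6999 forwards over the satellite link to Squid on x.x.x.218
        ssh -f -N -C -o CompressionLevel=9 -L 6999:127.0.0.1:7000 root@172.16.1.218
        # in squid.conf on x.x.x.224:
        #   cache_peer 127.0.0.1 parent 6999 0 no-query default
        #   never_direct allow all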

    Read the article

  • How to create an alias for linux server name?

    - by Radek
    The openSUSE server name is 'darkhelmet'. I want to create an alias 'dh' for it, so that 'ssh dh' and 'http://dh' would work too. What file or files, and where, do I have to edit to make this happen? Extract from /etc/hosts on darkhelmet:

        127.0.0.1       localhost

        # special IPv6 addresses
        ::1             localhost ipv6-localhost ipv6-loopback
        fe00::0         ipv6-localnet
        ff00::0         ipv6-mcastprefix
        ff02::1         ipv6-allnodes
        ff02::2         ipv6-allrouters
        ff02::3         ipv6-allhosts

        127.0.0.2       darkhelmet.edumate darkhelmet
        10.0.0.22       db2workgroup db2workgroup

        [root][skroob] nslookup darkhelmet
        Server:         10.0.0.10
        Address:        10.0.0.10#53

        Name:   darkhelmet.edumate
        Address: 10.0.0.22
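    A sketch of the usual options: add 'dh' as an extra name in /etc/hosts on each machine that should resolve it (the 10.0.0.22 address is taken from the nslookup output above), or add a CNAME for 'dh' in the edumate DNS zone so every client picks it up. For ssh alone, a per-user config alias also works:

        echo "10.0.0.22   darkhelmet.edumate darkhelmet dh" | sudo tee -a /etc/hosts
        printf 'Host dh\n    HostName darkhelmet.edumate\n' >> ~/.ssh/config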

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of using a recovery USB drive to boot from, unless all it does is blindly reimage the harddrive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains to me that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck. But I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.
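    For reference, a sketch of the non-interactive invocation this recovery tool would presumably issue once the filesystem is remounted read-only; -f forces a check even if the filesystem is marked clean and -y answers yes to every repair prompt. Whether CentOS 6's e2fsck still refuses because the device is mounted is exactly the open question above, and the device path is a placeholder:

        mount -o remount,ro /
        e2fsck -f -y /dev/mapper/vg_root-lv_root    # placeholder device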

    Read the article

  • How do you add a certificate for WLAN in Linux, at the command-line?

    - by Neil
    I'm using Maemo on a Nokia n810 Internet tablet, and when given a list of installed certificates to choose from when connecting to a PEAP wireless network, it's always blank. I've already installed a couple of certificates through the gui on the device, and only the certificate authorities show up. I've confirmed that Maemo's connection software that handles certificates is buggy, in such a way that certificates are never added, or properly added certificates cannot be found. Is there a way to add WLAN certificates at the command-line, and connect to a wireless network at the command-line as well? I used to use iwconfig to connect, but I never used it with PEAP. Note: I have nothing in /etc/ssl/certs

    Read the article

  • Ideas for SVN/SQL/PHP/Linux Dev Environment Supporting Multiple Isolated Environments?

    - by jpganz18
    I am trying to create a "dev" environment for my users. In that environment they would each have access to their own account for phpMyAdmin, SQL, Subversion and FTP, which is not a big problem, but I would also like to emulate it as if each one were on their own server. I mean that they could change the PHP configuration (for example) and the change would apply only in their own environment. Any idea how to do this? Do I have to do something "special" when installing my server, or something like that?
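    One approach, sketched under the assumption that Apache with mod_php is serving the users: give each user their own virtual host and set PHP overrides there with php_admin_value/php_admin_flag, so the settings apply only to that user's tree. The hostname and paths are hypothetical; full self-service over php.ini would instead point towards per-user PHP-FPM pools or lightweight containers/chroots:

        <VirtualHost *:80>
            ServerName alice.dev.example.com
            DocumentRoot /home/alice/www
            php_admin_value error_log /home/alice/logs/php_error.log
            php_admin_flag display_errors on
        </VirtualHost>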

    Read the article
