Search Results

Search found 6493 results on 260 pages for 'git bash'.


  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl; really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU Components (and maybe the Remote Connectivity components), will it run nicely from a shared location?

    To head off some questions: yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. And yes, I'm looking at the MinGW offering in parallel to this.


  • mysql command line not working

    - by Sandeepan Nath
    I have mysql running on my Fedora system. I have xampp set up on the system, and the PHP projects present in the webspace are working fine. PhpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows mysql enabled. But running the mysql connect command

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that mysql is installed. What exactly do I have to do?

    Additional details: this system was someone else's and he is not available here. Maybe PHP/MySQL was set up already on the system. I just freshly extracted xampp for linux into /opt/lampp/ and put all the above mentioned things (PHP projects and PhpMyAdmin) there. After doing that I had a socket problem (PhpMyAdmin was not working and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart but the problem remained. Then after turning on the system today, I started lampp and everything worked just fine. No project issues anymore; only command-line mysql is not working.
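
    A minimal sketch of the kind of fix I have in mind, assuming xampp's bundled client lives in /opt/lampp/bin (that path is a guess on my part; adjust if your install differs):

        # Assumption: xampp for linux ships its mysql client binary in /opt/lampp/bin.
        echo 'export PATH=$PATH:/opt/lampp/bin' >> ~/.bashrc
        source ~/.bashrc
        mysql -u[username] -p[password]   # should now be found on the PATH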


  • init.d script runs correctly but process doesn't live when booted fully up

    - by thetrompf
    I have a problem with an init.d script:

        #!/bin/bash
        ES_HOME="/var/es/current"
        PID=$(ps ax | grep elasticsearch | grep $ES_HOME | grep -v grep | awk '{print $1}')
        #echo $PID
        #exit 0
        case "$1" in
            start)
                if [ -z "$PID" ]; then
                    echo "Starting Elasticsearch"
                    echo "Starting Elasticsearch" >> /var/tmp/elasticsearch
                    su -m elasticsearch -c "${ES_HOME}/bin/elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch already running"
                    echo "Elasticsearch already running" >> /var/tmp/elasticsearch
                    exit 0;
                fi
                ;;
            stop)
                if [ -n "$PID" ]; then
                    echo "Stopping Elasticsearch"
                    kill ${PID}
                    echo "Stopped Elasticsearch"
                    exit 0;
                else
                    echo "Elasticsearch is not running"
                    exit 0;
                fi
                ;;
        esac

    The script runs just fine; as I can see in /var/tmp/elasticsearch, a new line is added after every boot. But if I run

        /etc/init.d/elasticsearch stop

    just after the server is booted, I get "Elasticsearch is not running"; ergo somehow the process does not stay alive. My question is why, and what am I doing wrong? Thanks in advance.


  • Get the desktop/viewport of a window in enlightenment?

    - by Zorf
    Okay, so, given an XID of a window I need to get its desktop or viewport, as well as the currently active one. Enlightenment does not seem to properly respond to wmctrl, which leads to:

        ***@note:~ > wmctrl -lG
        0x01e00002 -1 21   395  310  146 note Conky (note)   # it places conky windows on -1 for some reason?
        0x01c00002 -1 65   655  230  158 note Conky (note)
        0x01a00002 -1 25   215  230  182 note Conky (note)
        0x01800002 -1 25   550  310  110 note Conky (note)
        0x01600002 -1 685  145  230  120 note Conky (note)
        0x01400002 -1 1120 245  280  206 note Conky (note)
        0x01200002 -1 1095 35   230  186 note Conky (note)
        0x01000002 -1 1145 470  250  266 note Conky (note)
        0x00c00002 -1 40   34   230  182 note Conky (note)
        0x00e00029 0  0    0    1440 900 note ~ : bash – Konsole                         # desktop 2, fullscreen
        0x03a00060 0  505  231  899  642 note Downloads – 'Dolphin'                      # desktop 0
        0x0480001a 0  206  222  958  526 note Lifelover - Kärlek - becksvart melankoli   # desk 2
        0x034000e6 0  116  32   984  767 note clemctrl – Kate                            # desk 0
        0x02c01b78 0  309  314  549  520 note *************                              # desk 1
        0x04e00062 0  104  31   990  619 note XChat: *** @ Free / #*** (+Ccnt)           # desk 1
        0x05c00112 0  22   35   1396 834 note StarCraft on Reddit - Chromium             # desk 3
        0x02c0f292 0  453  356  549  520 note ***                                        # desk 1
        0x02c000c0 0  860  216  557  645 note Buddy List                                 # desk 1

    As can be seen, all windows are on desk 0 in wmctrl except the conky windows. Furthermore, the geometry-viewport trick that works in some wm's doesn't seem to work here either. Are there any other tricks to find out which viewport/desktop a window is on? There has to be some way to get it, right?
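
    One trick worth trying (hedged; I have not confirmed that Enlightenment actually sets these EWMH properties): xprop can query the desktop number directly, given the XID.

        # Query a window's desktop via EWMH properties, bypassing wmctrl.
        # 0x03a00060 is just an example XID from the listing above.
        xprop -id 0x03a00060 _NET_WM_DESKTOP    # desktop of that window
        xprop -root _NET_CURRENT_DESKTOP        # currently active desktop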


  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script in init.d on ubuntu 9.04 that I've set to run on startup with update-rc.d, using update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:

        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test

        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test

    The script runs through source and ./ without issue and behaves correctly. Here is the source of the script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }

        stop() {
            echo "Stopping"
        }

        echo "Script called" >> /tmp/test.log

        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            *)
                echo "Usage: {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $?

    When the machine starts, I don't see "Script called" or "start called" in the test.log at all. Is there anything obvious I'm messing up?
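
    A few checks that might narrow this down (hedged; just the first things to rule out, nothing here is verified against 9.04 specifically):

        runlevel                             # which runlevel did the boot actually land in?
        ls -l /etc/rc2.d/S99init_test        # should point at ../init.d/init_test
        sudo /etc/rc2.d/S99init_test start   # does invoking the symlink itself work?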


  • Attempting to set up xampp and zend server on the same machine

    - by umbregachoong
    I am attempting to set up zend server and xampp on the same machine, but I am running into problems. I came across documentation on the zend site that said you cannot do this; however, the folks over at apachefriends said you can. I have since discovered that I can run some of the zend framework examples within xampp by downloading the zendframework2 library and the skeleton app from git, and I am doing this right now. However, I would like to know how to set them both up without having any conflicts, both for the apache2 server and phpmyadmin. (One of the frustrating things is trying to load phpmyadmin in the deployment dialog by using the zpk tool in Zend.)

    What I did in trying to set up both servers on windows 7 is as follows. First, I tried to set up the httpd conf files separately for each server: xampp running on port 8082, and zend running on port 8088. At the time xampp would work, but zend server would not. This is after setting up the virtual host files separately for each server.

    Question 1: where are the zend server error logs?

    Earlier, I was able to get both of them running by configuring the xampp server httpd-conf file alone; however, I experienced problems with phpmyadmin even after configuring phpmyadmin on xampp to work on a port other than 3306. Second question here: how do I set up the two mysql/phpmyadmin instances so they do not conflict with each other?

    Here is the xampp virtual host section:

        ##ServerAdmin [email protected]
        DocumentRoot "C:/xampp/htdocs/"
        ServerName localhost 8082
        ##ServerAlias www.dummy-host.example.com
        ##ErrorLog "logs/dummy-host.example.com-error.log"
        ##CustomLog "logs/dummy-host.example.com-access.log" common

    Here is the zend virtual host section:

        DocumentRoot "C:\Program Files (x86)\Zend\Apache2/htdocs"
        ServerName localhost:8088
        </VirtualHost>

    I have looked at this httpd.apache.org/docs/2.2/vhosts/ and this http://survivethedeepend.com/zendframeworkbook/en/1.0/creating.a.local.domain.using.apache.virtual.hosts but I am obviously doing something wrong here. I also have the java sdk running on this machine with tomcat and apache and I have no conflicts; too bad this is not the case for zend server and xampp.

    Thanks
    umbre gachoong
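
    For the second question, a sketch of the kind of separation that should avoid the conflict (hedged: the port number and file locations are examples, not taken from either install):

        # In one server's my.cnf/my.ini, move mysqld off the default port:
        [mysqld]
        port=3307

        # and point that server's phpMyAdmin at it in config.inc.php:
        $cfg['Servers'][$i]['port'] = '3307';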


  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OSX ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split-routing, and the client's VPN host is not willing to enable split-routing. Is there a way for me to override this configuration, or do something on my workstation to get the non-client network traffic to bypass the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology.

    ** edit **

    The Juniper client changes my system's resolv.conf file from:

        nameserver 192.168.0.1

    to:

        search XXX.com [redacted]
        nameserver 10.30.16.140
        nameserver 10.30.8.140

    I've attempted to restore my preferred DNS entry to the file:

        $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf

    but this results in the following error:

        -bash: /etc/resolv.conf: Permission denied

    How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?
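
    As far as I understand it, the permission error is because the >> redirection is performed by the calling shell, which is not running as root; only echo runs under sudo. If that's right, either of these should work (untested on this particular setup):

        sudo sh -c 'echo "nameserver 192.168.0.1" >> /etc/resolv.conf'
        echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf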


  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or could I run it locally?

        [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
        Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
        Tue Oct 2 21:33:48 [initandlisten] **     We suggest launching mongod like this to avoid performance problems:
        Tue Oct 2 21:33:48 [initandlisten] **         numactl --interleave=all mongod [other options]
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
        Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
        Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
        Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
        Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal
        Tue Oct 2 21:33:48 [initandlisten] recover begin
        Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
        Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
        Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
        Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
        Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
        Tue Oct 2 21:33:48 [initandlisten] recover done
        Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
        Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017

    It basically waits forever and cannot start mongodb. These servers are not webservers, but they do have network access; it's a cloud computing LSF environment system. Any advice would be welcome, thanks in advance.
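
    For what it's worth, "waiting for connections on port 27017" suggests mongod did start; it is just running in the foreground and holding the terminal. A hedged sketch of running it detached instead (paths taken from the question; --fork and --logpath are standard mongod options):

        mongod --dbpath /users/dnaiel/ma/mongodb/ \
               --fork --logpath /users/dnaiel/ma/mongodb/mongod.log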


  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get mongrel cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install mongrel cluster:

        gem install mongrel mongrel_cluster

    Everything installed fine. When I change into my app's directory...

        # mongrel_rails
        -bash: mongrel_rails: command not found

    I can run it from its install location:

        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are:
        ...

    It lets me build the cluster configuration file fine, but when I run the cluster::start command:

        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log

    It seems it isn't calling it from the right directory after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used ssh to run the commands.
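
    A sketch of the obvious fix (hedged; the gem bin path is taken from the output above, and this only covers interactive shells, not Capistrano's non-interactive ssh sessions): put the RubyGems bin directory on PATH persistently, so mongrel_rails can find itself when it re-invokes:

        echo 'export PATH=$PATH:/var/lib/gems/1.8/bin' >> ~/.bashrc
        source ~/.bashrc
        which mongrel_rails   # should now print /var/lib/gems/1.8/bin/mongrel_rails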


  • Apache 2.2 + mod_fcgid + PHP 5.4: (104) Connection reset by peer

    - by Michele Piccirillo
    On a Debian 6 VPS, I'm running PHP 5.4 via mod_fcgid on a couple of different virtual hosts, managed by Virtualmin GPL. At random, I get 500 Internal Server Errors; restarting Apache brings everything back to normality. Examining the logs, I find messages of this kind:

        [Thu Oct 04 15:39:35 2012] [warn] [client 173.252.100.117] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Thu Oct 04 15:39:35 2012] [error] [client 173.252.100.117] Premature end of script headers: index.php

    Any ideas about what is happening?

    UPDATE: I found a similar question whose author reported having solved the problem by disabling APC. I tried following the advice, but I'm still getting the same errors.

    VirtualHost configuration:

        SuexecUserGroup "#1000" "#1000"
        ServerName example.com
        DocumentRoot /home/example/public_html
        ScriptAlias /cgi-bin/ /home/example/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/example/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/example/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/example/cgi-bin>
            allow from all
        </Directory>
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 61
        FcgidMaxRequestLen 1073741824

    php5.fcgi:

        #!/bin/bash
        PHPRC=$PWD/../etc/php5
        export PHPRC
        umask 022
        export PHP_FCGI_CHILDREN
        PHP_FCGI_MAX_REQUESTS=99999
        export PHP_FCGI_MAX_REQUESTS
        SCRIPT_FILENAME=$PATH_TRANSLATED
        export SCRIPT_FILENAME
        exec /usr/bin/php5-cgi

    Package versions:

        webmin-virtual-server/virtualmin-universal 3.94.gpl-2
        apache2/squeeze 2.2.16-6+squeeze8
        libapache2-mod-fcgid/squeeze 1:2.3.6-1+squeeze1
        php5 5.4.7-1~dotdeb.0
        php5-apc 5.4.7-1~dotdeb.0
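
    One commonly suggested mitigation for these exact log messages (hedged: the value below is an example, and I have not confirmed it cures this particular setup): keep mod_fcgid's per-process request limit in step with PHP's, so php5-cgi never exits while mod_fcgid still considers the process alive.

        # Apache side: FcgidMaxRequestsPerProcess is a real mod_fcgid directive.
        FcgidMaxRequestsPerProcess 500

        # php5.fcgi side: match the limit instead of 99999.
        PHP_FCGI_MAX_REQUESTS=500
        export PHP_FCGI_MAX_REQUESTS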


  • Scriptable BitTorrent clients?

    - by James McMahon
    In an effort to further automate all the little computer housekeeping tasks that can waste my time, I am looking into BitTorrent clients that have the ability to script common tasks. I've done some Googling and it looks like Transmission might have some such capabilities, but their site wasn't very clear on the details. Things I am looking to do:

    - Prioritize and label torrents based on trackers
    - Set seed length based on trackers and filesize
    - Set additional seed time when a torrent's seed time expires, based on a number of factors like time spent seeding, remaining disk space and ratio
    - Move torrents to appropriate places post seeding, based on labels and tracker

    Basically, while I could write Python or Bash scripts for things like moving torrents around and other simple actions, I need a way to talk to the client to figure out things like the torrent's seed time, tracker, labels, filesize, etc. Is there any client out there that would allow me all, or a subset, of these actions? I have access to Linux, Mac and Windows and am not tied to any particular torrent client. I am a programmer, so I have no problems writing scripts, but examples of torrent scripting would also be helpful.
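
    For instance, Transmission ships a command-line remote that can read exactly this kind of per-torrent state (hedged: flags from memory of transmission-remote, and output fields vary by version):

        transmission-remote -l                        # list torrents with id, ratio, status
        transmission-remote -t 1 -i                   # detailed info for torrent 1: trackers, seeding time, ...
        transmission-remote -t 1 --move /data/done    # relocate a finished torrent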


  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the task bar using a workaround (pinning another icon and then editing it), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the task bar. I have to find some other new unused icon to pin, pin it, then modify it manually.

    The other problem this causes is that windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use media player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin windows media player; and if I select unpin from windows media player, it unpins my shortcut to my .bat file. I can't believe how ridiculous this is.

    Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it, and re-pin it to the taskbar? The reason I want to copy it is because I start a .bat file (in particular git bash) and I set properties on the window like quick edit, increase the screen buffer and set its position and size manually. I don't want to have to do this for every single icon I want to pin, since they will be identical aside from the shortcut url.


  • How can I both pipe and display output in Windows' command line?

    - by Bob
    I have a process I need to run within a batch file. This process produces some output. I need to both display this output to the screen and send (pipe) it to another program. The bash method uses tee:

        echo 'ee' | tee /dev/tty | foo

    Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters.

    The specific use-case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find, to search the output (launch4j config.xml | find "Successfully created"); however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the output to a command, and this command should be able to set ERRORLEVEL (it cannot run asynchronously).


  • Mac OS X: Open up 3 terminals, run different commands in each of them, to set up a development environment

    - by taelor
    I'm a Ruby on Rails web developer and there is a lot of repetition I go through to start up my development environment. I was wondering if there is any way I can remove some of this repetition by writing a script, or using a program (like Quicksilver) or something, to get my work environment going. I know how to use Quicksilver to open up Terminal, and I even have a saved window group to get my 3 or 4 panes open. The next thing I would love to happen automatically is getting all three to go to a certain directory and each run different commands: one will start the local server; another tab will start a background process; the other would open TextMate and then start a console session, while the last one runs an svn (or git) status. Oh yeah, and I would love to go ahead and open Firefox with a few tabs going to a couple of locations. Does anyone have any suggestions on how I could make all this happen in one Quicksilver command, or a double click on some type of script on my Desktop?
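
    A rough sketch of the scripted route (hedged: the directory and commands are examples, and "do script" opens a new Terminal window per call rather than reusing a saved window group):

        #!/bin/bash
        # Drive Terminal via AppleScript; each call runs a command in its own window.
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && script/server"'
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && script/console"'
        osascript -e 'tell application "Terminal" to do script "cd ~/myapp && svn status"'
        mate ~/myapp                              # TextMate's command-line helper, if installed
        open -a Firefox "http://localhost:3000"   # add more open calls for more tabs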


  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool, and I wanted to name the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking that since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know.

    Edit: more info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:

                testpool   ONLINE
                  c5t8d0   ONLINE
                  c5t9d0   ONLINE
                  c5t10d0  ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name.

    S.
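
    If I have the syntax right (hedged: the new pool name below is just an example), zpool import takes the numeric id followed by the name to import the pool under:

        zpool import 14781458723915654709 oldtestpool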


  • Recommended open-source firmware for ASUS RT-N16

    - by MasterF
    I have recently acquired an ASUS RT-N16 router. My original plan for it was to install Tomato on it. However, after checking their website I found out that the firmware has not been updated in the last 2 years. There seem to be a few updated mods, but none of them really seemed mature/stable/well-documented.

    I would like to know what other people recommend as open-source firmware for this router. I know the answers will probably be subjective, so I will give a bit of background on my needs:

    - for now I will only use the Wi-Fi on an Android phone
    - the connection will not be shared with anyone (so QOS is optional)
    - I want a stable (wired) connection on my PC (for online gaming etc.)
    - I want the (wired) download/upload speeds to be as close as possible to those achieved by directly plugging the Ethernet cable into the PC's network card; I have a 100 Mbps connection
    - my ISP uses PPPOE
    - my technical level: I am a software developer and I have good knowledge of bash scripting, but no experience with networking

    Also, I know that I could probably just use the stock firmware (and maybe will use it for a while), but I'm interested in trying an open-source version (for more features, flexibility, as a learning exercise etc.)


  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure.

    I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method).

    I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:

    - CHMOD 0777 on both directory and files,
    - CHOWN root, apache (the only two users I can think of that PHP would use),
    - importing SQL files with total size under both upload_max_filesize and post_max_size,
    - PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory),
    - PHP safe mode is OFF (it was never ON).

    At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
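
    One more thing worth ruling out (a hedged guess; I have not confirmed it applies to this instance): on RedHat-family systems SELinux can deny reads even with 0777 permissions, and it would look exactly like this.

        getenforce                     # "Enforcing" means SELinux is active
        ls -Z /var/www/path/to/sql/    # inspect the files' security contexts
        sudo setenforce 0              # temporarily permissive; retry the import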


  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over their functionality. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM.

    My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with the following issues:

    1. I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as passthrough PCI?
    2. I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the server system disks?

    For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs.

    BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.


  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs; therefore I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also qsub behaves differently; apparently it is not possible to specify the command line parameters with bash-script comments etc.

    There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub". I tried:

        qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out if I am running Grid Engine or Torque (or whatever it is), and which version?
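
    A few fingerprint checks that might settle it (hedged; based on which companion tools each stack usually ships, not on this cluster specifically):

        command -v pbsnodes && echo "Torque/PBS family: pbsnodes is present"
        command -v qhost    && echo "Grid Engine: qhost is present"
        echo "$SGE_ROOT"    # set on Grid Engine installs; typically empty elsewhere
        man qsub | head     # the man page header usually names PBS or SGE outright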


  • Starting Oracle database automatically

    - by Searock
    I am using Fedora 8 and Oracle 10g Express Edition. Every time I start Fedora I have to click on "start database". How can I add startdb.sh to startup so that it automatically executes when Fedora starts?

    I have tried adding the path to /etc/rc.d/rc.local but it still doesn't work:

        ./usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh

    I have even tried to add this script as /etc/init.d/oracle:

        #!/bin/bash
        #
        # Run-level Startup script for the Oracle Instance and Listener
        #
        # chkconfig: 345 91 19
        # description: Startup/Shutdown Oracle listener and instance

        ORA_HOME="/u01/app/oracle/product/9.2.0.1.0"
        ORA_OWNR="oracle"

        # if the executables do not exist -- display error
        if [ ! -f $ORA_HOME/bin/dbstart -o ! -d $ORA_HOME ]
        then
            echo "Oracle startup: cannot start"
            exit 1
        fi

        # depending on parameter -- startup, shutdown, restart
        # of the instance and listener or usage display
        case "$1" in
            start)
                # Oracle listener and instance startup
                echo -n "Starting Oracle: "
                su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl start"
                su - $ORA_OWNR -c $ORA_HOME/bin/dbstart
                touch /var/lock/subsys/oracle
                echo "OK"
                ;;
            stop)
                # Oracle listener and instance shutdown
                echo -n "Shutdown Oracle: "
                su - $ORA_OWNR -c "$ORA_HOME/bin/lsnrctl stop"
                su - $ORA_OWNR -c $ORA_HOME/bin/dbshut
                rm -f /var/lock/subsys/oracle
                echo "OK"
                ;;
            reload|restart)
                $0 stop
                $0 start
                ;;
            *)
                echo "Usage: $0 start|stop|restart|reload"
                exit 1
        esac
        exit 0

    and even this doesn't work. startdb.sh is located at /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/config/scripts/startdb.sh

    Thanks.
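
    One step that might be missing (hedged; this is the standard SysV procedure on Fedora, not verified against XE specifically): a script dropped into /etc/init.d only runs at boot after it is registered, which is what the chkconfig header in the script is there for.

        sudo chkconfig --add oracle     # create the rc?.d symlinks per the header
        sudo chkconfig oracle on
        chkconfig --list oracle         # confirm runlevels 3/4/5 are "on"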


  • Samba joined to AD cannot see users in the security tab on client

    - by Jonathan
    I've got samba joined via kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up. Here is my smb.conf; I would much appreciate any help with this.

        # GLOBAL PARAMETERS
        [global]
            socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
            workgroup = [hidden]
            realm = [hidden]
            preferred master = no
            server string = xerxes web/file server
            security = ADS
            encrypt passwords = yes
            log level = 3
            log file = /var/log/samba/%m
            max log size = 50
            printcap name = cups
            printing = cups
            winbind enum users = Yes
            winbind enum groups = Yes
            winbind use default domain = Yes
            winbind nested groups = Yes
            winbind separator = +
            winbind refresh tickets = yes
            idmap uid = 1600-20000
            idmap gid = 1600-20000
            template primary group = "Domain Users"
            template shell = /bin/bash
            kerberos method = system keytab
            nt acl support = yes

        [homes]
            comment = Home Directories
            valid users = %S
            read only = No
            browseable = No
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [test]
            comment = Test
            path = /mnt/test
            writeable = yes
            valid users = %s
            create mask = 0770
            directory mask = 0770
            force create mode = 0660
            force directory mode = 2770
            inherit owner = no

        [printers]
            comment = All Printers
            path = /var/spool/cups
            browseable = no
            printable = yes
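
    In case it helps narrow things down, a couple of hedged checks (standard winbind tooling; the DOMAIN+user form follows the separator configured above, and the user name is hypothetical) for whether winbind itself can do the name/SID lookups the security tab depends on:

        wbinfo -u                   # enumerate domain users through winbind
        wbinfo -g                   # enumerate domain groups
        wbinfo -n 'DOMAIN+jsmith'   # name -> SID lookup (hypothetical user)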


  • Using OSX home directories from linux

    - by Steffen
    I'm running an OSX (Snow Leopard) server with OpenDirectory, which is nothing other than a modified OpenLDAP with some Apple-specific schemas. However, I want to reuse this directory on some of my Linux (Debian Squeeze) boxes. It's no problem to authenticate against OSX's LDAP server; this works fine already. What I struggle with is the way the home folders are specified in OSX. If I query the passwd config on one of my linux machines, the OSX-imported entries look like this:

        myaccount:x:1034:1026:Firstname Lastname:/Network/Servers/hostname.example.com/Volumes/MyShare/Users/myaccount:/bin/bash

    While those network home folders might be fine for OSX clients, I don't want those server-based paths on my linux machines. I saw that there is an NFSHomeDirectory attribute in the OSX user inspector, but if I change this, the whole user home path gets changed. Since my users should be able to log in on both systems, OSX and Linux, this is not what I want. Does anyone have an idea how I must configure OSX to make my linux machines use home folders like /net/myaccount and leave the configuration for OSX clients untouched?


  • Kernel Memory Leak in Ubuntu 9.10?

    - by kayahr
    After some days of work (using suspend-to-ram during the night) I notice I lose more and more available memory. Even when I close all applications the situation doesn't improve. I even went down to the command line and closed ALL running processes except the init process and the bash I'm working in. I unmounted all the ram disks which Ubuntu is using, and I even unloaded all the modules which could be unloaded. But still "free" tells me that 1 GB of RAM is used (without buffers/cache). In "top" there is no visible process which occupies all this memory. The only way to free the memory is restarting the machine.

    How can I find out where I lose all this memory? Is there a known "suspect" that can cause a problem like this? I'm using Ubuntu 9.10 64 bit on a Dell Latitude E6500 (4 GB RAM) with the latest closed-source nvidia driver and Gnome with Compiz. The applications I use most of the time are firefox and eclipse. Any hints how I can find the problem? I'm not a kernel hacker, so if the solution is patching the kernel or something like that then I might be out of the game...
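
    If no userspace process owns the memory, the kernel side is worth checking; a hedged sketch of where to look (standard tooling, though it will not necessarily pinpoint a driver leak):

        free -m                       # baseline: used minus buffers/cache
        grep -i slab /proc/meminfo    # total kernel slab allocations
        sudo slabtop -o               # one-shot dump of the biggest slab caches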


  • Cutting Ubuntu to the bone for Virtualbox VM

    - by user32853
    I've been looking around for a Linux variant which will install only the software I need rather than everything Ubuntu (for example) puts in by default. This is to create a virtual machine in Virtualbox which has bash, apache, python, perl, SQLite, openssh and a few other programs, but nothing else. I'd prefer to go with Ubuntu if possible, but another modern distro would do as well (I like using apt-get and yum rather than downloading/compiling etc). So far, I've tried:

    - SuseStudio.com, which is probably the best so far.
    - Pressing F4 to get the boot options on Ubuntu 9.10, but there is no minimal installation (I think there was once).
    - Arch Linux: slightly confusing install procedure, but I might go back and try again.
    - Gentoo: started well, but fairly soon the HD on the virtual machine went to 2Gb, even before the installation had started in earnest (I'd partitioned the disks is all).

    I realise there are various "small" Linuxes around like Puppy, Feather, DSL, etc, but they seem to be aimed at desktop users or as a techie's toolkit, and I want a small-as-possible server distro which can be managed with tools like apt or yum or similar. TIA for any advice you can offer!

    -- Monty
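
    One more avenue that may fit (hedged: the mirror URL and target path are examples, and the resulting tree still needs a kernel and bootloader added before it can boot): debootstrap builds a minimal Ubuntu root containing little beyond apt and the essential packages.

        sudo debootstrap karmic /mnt/target http://archive.ubuntu.com/ubuntu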


  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS, and my /sda HDD was replaced several days ago. I used these commands for the replacement:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now

        # physically replace old disk by new
        # start system again

        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to see status you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says I have no valid partition table on /dev/md0. So:

    1) "update-grub" finds only the /sda and /sdb Linux, not /md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0"

    I cannot load my system except from /sdb1 and /sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? I have a big headache with this.
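
    If I read the layout right (hedged: device names are taken from the question, and this is the standard advice for /boot on an md mirror rather than something verified on this box), GRUB belongs in each member disk's MBR, not on /dev/md0 itself; the missing partition table on md0 is expected, since the array was built from partitions rather than whole disks.

        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub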

