I am sourcing a file under tcsh. This file could be anywhere on the filesystem. How can I retrieve the path of my sourced file?
$0 won't work: I don't execute the file, I source it.
Many thanks!
I compile my own PHP, partly to learn more about how PHP is put together, and partly because I'm always finding I need modules that aren't available by default, and this way I have control over that.
My problem is that I can't get JPEG support in PHP. Using CentOS 5.6. Here are my configuration options when compiling PHP 5.3.8:
'./configure' '--enable-fpm' '--enable-mbstring' '--with-mysql' '--with-mysqli' '--with-gd' '--with-curl' '--with-mcrypt' '--with-zlib' '--with-pear' '--with-gmp' '--with-xsl' '--enable-zip' '--disable-fileinfo' '--with-jpeg-dir=/usr/lib/'
The ./configure output says:
checking for GD support... yes
checking for the location of libjpeg... no
checking for the location of libpng... no
checking for the location of libXpm... no
And then we can see that GD is installed, but that JPEG support isn't there:
# php -r 'print_r(gd_info());'
Array
(
[GD Version] => bundled (2.0.34 compatible)
[FreeType Support] =>
[T1Lib Support] =>
[GIF Read Support] => 1
[GIF Create Support] => 1
[JPEG Support] =>
[PNG Support] => 1
[WBMP Support] => 1
[XPM Support] =>
[XBM Support] => 1
[JIS-mapped Japanese Font Support] =>
)
I know that PHP needs to be able to find libjpeg, and it obviously can't find a version it's happy with. I would have thought /usr/lib/libjpeg.so or /usr/lib/libjpeg.so.62 would be what it needs, but I supplied the correct lib directory (--with-jpeg-dir=/usr/lib/) and it doesn't pick them up, so I guess they can't be the right versions.
rpm says libjpeg is installed. Should I yum remove and reinstall it, along with all its dependent packages? Might that fix the problem?
Here's a paste bin with a collection of hopefully useful system information:
http://pastebin.com/ied0kPR6
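For reference, here's a check I ran for the pieces I understand configure to need. My working assumption (not verified) is that configure looks for the jpeglib.h header, i.e. the libjpeg-devel package, and not just the runtime .so:

```shell
# check for both the runtime library and the development header;
# my understanding (an assumption) is that ./configure needs the header
for f in /usr/lib/libjpeg.so /usr/include/jpeglib.h; do
    if [ -e "$f" ]; then echo "found:   $f"; else echo "missing: $f"; fi
done
```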
Hi!
I want to run a program on a remote server over an SSH connection from my notebook.
The problem is that I go home and take my notebook with me :)
How can I keep the program running after I disconnect?
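In case it helps, what I've been experimenting with so far is nohup (here `sleep 300` just stands in for my real program):

```shell
# detach the job from the terminal so it survives the SSH session ending;
# "sleep 300" is a placeholder for the actual long-running program
nohup sleep 300 > run.log 2>&1 &
echo "started pid $!"
```

But I'm not sure this is the right approach compared to something like screen or tmux.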
I'm using mod_itk as the MPM for increased security in a shared environment. I also have a Firefox Sync Server within one of the vhosts I host. That vhost is restricted to a certain user via AssignUserId user group.
The problem is that the socket /var/run/wsgi...whatever.sock is chmodded srwx------ and owned by Apache's wwwrun. While I configured the vhost with
WSGIProcessGroup sync
WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5
I still get an error saying that Apache cannot access the socket.
Is it possible to configure mod_wsgi in order to create different sockets with different owners for different applications or to chmod its socket in a different way (less secure)?
Currently, I'm running Firefox Sync as the only WSGI application. Moving it to a vhost that doesn't use AssignUserId could solve this problem, but it would force me to change the URL (and buy an additional SSL certificate), so I wouldn't consider that.
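For what it's worth, this is the kind of configuration I was hoping might work. The socket-user option here is an assumption on my part; I'm not certain my mod_wsgi version supports it:

```apache
# hoped-for vhost excerpt -- socket-user is the option I'm unsure about
WSGISocketPrefix /var/run/wsgi
WSGIDaemonProcess sync user=djechelon group=djechelon processes=1 threads=5 socket-user=djechelon
WSGIProcessGroup sync
```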
I'm using a Linux server as a file share.
These files are accessed by Windows computers via a Samba server, by Macs via a netatalk server (afpd), and also through ssh and sftp from Windows, Mac and Linux systems.
It seems like some of these systems care about the characters used in filenames and some don't.
There is a tool called 'convmv' to convert filenames from one encoding to another, but which encoding should I use?
Should I setup the Samba server for a defined file encoding? Same for netatalk?
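What I was considering for Samba is standardizing everything on UTF-8, along these lines (the option values are my best guess, not a tested setup):

```ini
# smb.conf [global] excerpt -- standardize server-side filenames on UTF-8
[global]
    unix charset = UTF-8
    dos charset = CP850
```

and then something like `convmv -f iso-8859-1 -t utf-8 -r --notest /share` for the existing files, though I'm not sure ISO-8859-1 is the right source encoding in my case.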
I'm trying to FTP updates to it, but I can't seem to find where the .conf that deals with FTP is, so that I can enable/configure it.
When I attempt to connect to the NAC from my desktop via WinSCP (using FTP), I get an error saying the connection is being actively refused.
I occasionally lose my remote SSH connection to my VPS. I use screen for long-running processes, but am wondering what happens to the processes I had running aside from those run within a screen session if I lose the connection to the box.
When I re-establish a connection to the box, what happened to the bash and sshd processes that were running when I lost the connection? Today I lost connection repeatedly and noticed many more bash and sshd processes than usual.
If there are processes hanging around, do I need to kill them? How could I determine which processes were abandoned from my previous session?
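To look into this, I've been listing bash and sshd processes together with their controlling TTY, on the theory (my assumption) that abandoned ones would show up with a dead or missing terminal:

```shell
# list sshd and bash processes with their controlling TTY;
# a "?" in the TTY column would suggest the session's terminal is gone
ps -eo pid,tty,stat,comm | awk 'NR == 1 || $4 ~ /^(bash|sshd)/'
```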
Thanks for any replies!
When I delete folders or files through the OS X terminal using rm -rf, where do they go? I've heard some say they are deleted directly, but others say it only "removes the link to the file, making it unable to be found or accessed without special tools" (http://superuser.com/questions/370786/where-do-files-and-directories-go-when-i-run-rm-rf-folder-or-file-name-in-ubu).
Someone said something about ext3 being able to save rm-ed files on Ubuntu, but what about the Mac?
I have a directory listing as follows (given by ls -la):
total 8
drwxr-xr-x 6 <user> <group> 204 Oct 18 12:13 .
drwxr-xr-x 7 <user> <group> 238 Oct 18 11:29 ..
drwxr-xr-x 14 <user> <group> 476 Oct 18 12:31 .git
-rw-r--r-- 1 <user> <group> 601 Oct 18 12:03 index.html
drwxr-xr-x 2 <user> <group> 68 Oct 18 12:13 test
drwxr-xr-x 2 <user> <group> 68 Oct 18 12:13 test2
Running ack . -f prints out the files in the directory:
index.html
How can I get ack to print out the directories in the directory? I want to ignore the .git directory (which I understand is default behavior for ack). On that note, how can I ignore certain directories?
I am using ack 1.9.6 on Mac OSX 10.8.2.
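As a point of comparison, the closest I've gotten with plain find (not ack) is this, though I'd still prefer an ack-native answer:

```shell
# list directories under the current dir, skipping .git;
# -mindepth 1 drops "." itself from the output
find . -mindepth 1 -type d -not -path './.git*'
```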
I want to archive all .ctl files in a folder, recursively.
tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`
The error message:
tar: Removing leading `/' from member names
tar: /home/db/dunn/j: Cannot stat: No such file or directory
tar: 74.ctl: Cannot stat: No such file or directory
I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
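The closest I've come is pairing find -print0 with tar's --null and -T - options, which seems to handle the spaces (assuming GNU tar; I'm not sure about other tar implementations):

```shell
# NUL-separate the file list so spaces in names survive;
# --null -T - makes tar read that NUL-separated list from stdin
find /home/db -name '*.ctl' -print0 | tar -cf ctlfiles.tar --null -T -
```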
I bought a domain name on name.com & I want to use free webhosting on 110mb.com
By default name.com integrates services of Google apps. Name server entries are
ns1.name.com
ns2.name.com
ns3.name.com
ns4.name.com
When I registered on 110mb.com it gave me two addresses
ns1.110mb.com
ns2.110mb.com
This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these 4 entries by default? How exactly does it work?
Should I remove these 4 and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones?
I would like to use Google Apps. If I change these name server addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX, and so on are. I want to learn about this stuff and how exactly it works.
When I search for a webhosting tutorial, I'm unable to find any fruitful results.
I have sshed into a Linux box and I'm using dvtm and bash (although I have also tried this with GNU screen and bash). I have two terminals, currently /dev/pts/29 and /dev/pts/130. I want to redirect the input from one to the other.
From what I understand, in /dev/pts/130 I can type:
cat </dev/pts/29
And then when I type in /dev/pts/29 the characters I type should show up in /dev/pts/130. However what ends up happening is that every other character I type gets redirected. For example, if I type "hello" I get this:
/dev/pts/29 | /dev/pts/130
$ | $ cat </dev/pts/29
$ el | hlo
This is really frustrating as I need to do this in order to redirect the io of a process running in gdb (I've tried both run /dev/pts/# and set inferior-tty /dev/pts/# and both resulted in the aforementioned behavior). Am I doing something wrong, or is this a bug in bash/screen/dvtm?
I have a CSV file where data are in the following format
|001|,|abc,def|,123456,789,|aaa|,|bbb|,444,555,666
I want to replace only those "," that appear between numbers with some other character, say SOH or $ or *.
other "," appearing in the line should not get replaced i.e. to say I wish to have following output
|001|,|abc,def|,123456*789,|aaa|,|bbb|,444*555*666
Can someone please help me with a sed command/pattern to get the desired output above?
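For reference, the closest I've come up with myself is this; it seems to work on the sample line, but I'm not sure it's robust in general:

```shell
# replace a comma only when it sits between two digits;
# the two digits are captured and put back around the "*"
echo '|001|,|abc,def|,123456,789,|aaa|,|bbb|,444,555,666' |
sed 's/\([0-9]\),\([0-9]\)/\1*\2/g'
# → |001|,|abc,def|,123456*789,|aaa|,|bbb|,444*555*666
```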
I have to concatenate a number of files in a directory structure which contains spaces in the folder names looking like this: ./CH 0000100014/A10/11XT/11xt#001.csv
find . -name "*.csv" -type f -print0 | xargs -0 cat > allmycsv.txt
does the job; however, now I need to include the information contained in the path, i.e. CH 0000100014/A10/11XT, as a header for each input file to cat.
find . -name "*.csv" -type f -print0 | xargs -0 -I % sh -c 'echo %; cat %' >allmycsv.txt
would do the job, if I had no spaces in the path, but in my case, cat does not get along with the space in the path name. Is there a way out?
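One variant I tried that seems to survive the spaces is passing each path as a positional parameter, so the inner shell does the quoting:

```shell
# "$1" inside the inner shell is properly quoted, so spaces in the
# path no longer split the arguments to echo/cat
find . -name '*.csv' -type f -print0 |
xargs -0 -n1 sh -c 'echo "$1"; cat "$1"' sh > allmycsv.txt
```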
Cheers,
E
P.S. I am working on bash on OSX
I have just started learning about network topologies, but there is a lot of confusion about the different types I have learnt so far.
First of all, BUS topology.
If I have, say, 100 PCs connected to the same wire using a BUS topology, and the network speed is 100Mbps, then each PC gets a 1Mbps connection, right?
In the same scenario, if I connect those 100 PCs using a STAR topology, does each PC get a 100Mbps connection?
Then with a TREE topology, I divide the system into 10 sub-systems (10 tree branches), each branch with 10 PCs; then I have 10 small "BUS-topology" networks, each with a 10Mbps connection, so each PC also gets 10Mbps?
And the last one, RING topology: 100 PCs, does each PC get a 100Mbps connection?
Is rsync a good choice for my project?
I have to:
- copy files from a source to a destination folder via SSH,
- be sure all files are copied,
- delete the source files after the copy,
- rename files if there is a name conflict.
It looks like I can use the option --remove-source-files (to delete the source files).
But how does rsync manage conflicts? Can I add rules?
Use case for my project:
I run scientific calculations on server A and the results are placed in a folder "process"; for each calculation I have a directory like this: /process/calc1.
Now I would like to transfer the directory "calc1" to server B (so I get /process/calc1 there), and delete "calc1" from server A.
During another calculation I get "/process/calc2" on server A; the idea is to move "calc2" into the "/process/" directory on server B as well, so that server B then has:
- /process/calc1
- /process/calc2
(and /process/ on server A is empty).
How will rsync manage the conflict on server B if, after a new calculation, server A again has a folder "/process/calc1" while "/process/calc1" already exists on server B?
Is it possible to add rules to rsync and rename "/process/calc1" to "process/calc1R2" on server B? And so on (e.g. calc1R3)?
Thanks.
I have 357 .png files located in different sub dirs of the current dir:
settings# find . -name \*.png |wc -l
357
settings# find . -name \*.png | head
./assets/authenticationIcons/audio.png
./assets/authenticationIcons/bbid.png
./assets/authenticationIcons/camera.png
./bin/icons/ca_video_chat.png
./bin/icons/ca_voice_control.png
./bin/icons/ca_vpn.png
./bin/icons/ca_wifi.png
Is there a one-liner to calculate the total disk space occupied by them (before I pngcrush them)?
I've tried (unsuccessfully):
settings# find . -name \*.png | xargs du -s
4 ./assets/support/wifi_locked_icon_white.png
1 ./assets/support/wifi_vpn_icon_connected.png
1 ./assets/support/wi_fi.png
1 ./assets/support/wi_fi_conected.png
8 ./bin/blackberry-tablet-icon.png
2 ./bin/icons/ca_about.png
2 ./bin/icons/ca_accessibility.png
2 ./bin/icons/ca_accounts.png
2 ./bin/icons/ca_airplane_mode.png
2 ./bin/icons/ca_application_permissions.png
1 ./bin/icons/ca_balance.png
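After some more digging, adding -c to get a grand total seems closer to what I want (assuming GNU du; the total is the last line):

```shell
# -c appends a "total" line; -h is human-readable; tail keeps just the total
find . -name '*.png' -print0 | xargs -0 du -ch | tail -n 1
```

though I suspect this breaks if xargs splits the file list across several du invocations.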
Hi, as the title suggests, I want to swap CPUs: I have two computers, one with Ubuntu running on an AMD Athlon 64 dual core 5200+ and the other with FreeBSD running on an AMD Sempron single core LE-1250.
I would like to swap the CPUs between the two computers, that is, take the dual core from the Ubuntu PC and put it inside the FreeBSD PC, and vice versa. The motherboard is the same model in both.
Do you think I will encounter problems?
Guys, I'd like to deploy my app on two different servers, located in the US and Germany. As I understand it, I need to set up some kind of load balancer that would determine which country my user is in and resolve them to the US/Germany server. The general aim is to give users the ability to work with the closest server (a CDN is not a solution, because we don't serve shared static content).
Where should I place the load balancer that resolves users to the USA/GER servers? In the USA or Germany? What should it look like? A usual server with some specific app, or what?
Thank you.
Recently the script was working fine, but for some days now I've been receiving the following message when running the readlink -f "$0" command:
readlink: illegal option -- f
usage: readlink [-n] [file ...]
I was running the following code to debug:
#!/bin/sh
DIR=`pwd`
RLPATH=`which readlink`
RLOUT=`readlink -f -- "${0}"`
DIROUT=`dirname -- ${RLOUT}`
echo "dir: ${DIR}"
echo "path: ${PATH}"
echo "path to readlink: ${RLPATH}"
echo "readlink output: ${RLOUT}"
echo "dirname output: ${DIROUT}"
Output:
# ./debug.sh
readlink: illegal option -- f
usage: readlink [-n] [file ...]
usage: dirname string [...]
dir: /home/svr
path: /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin
path to readlink: /usr/bin/readlink
readlink output:
dirname output:
What is wrong?
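In case it's relevant, the fallback I'm considering while this is broken is resolving the directory with cd + pwd instead of readlink -f (as far as I understand, this doesn't resolve a symlink in the script name itself):

```shell
#!/bin/sh
# portable way to get the directory the script lives in,
# without relying on readlink -f
DIR=$(cd "$(dirname -- "$0")" && pwd)
echo "dir: ${DIR}"
```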
I've taken to putting various files in /tmp, and I wondered what the rules on deleting them are.
I'm imagining it's different for different distributions, and I'm particularly interested in Ubuntu and Fedora desktop versions.
But a nice general way of finding out would be a great thing.
Even better would be a nice general way of controlling it! (Something like 'every day at 3 in the morning, delete any /tmp files older than 60 days, but don't clear the directory on reboot')
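The manual version of what I have in mind would be a cron job along these lines (shown here against a scratch directory rather than /tmp itself; the 60-day threshold is just my example, and I'm assuming GNU touch/find):

```shell
# delete regular files not accessed for more than 60 days
# (I'd point this at /tmp from cron at 3am)
mkdir -p scratch
touch -a -d '61 days ago' scratch/old.dat   # simulate a stale file
touch scratch/fresh.dat
find scratch -type f -atime +60 -delete
ls scratch
```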
I foolishly removed some source code from my Mac OS X Snow Leopard machine with rm -rf when doing something with buildout. I want to try and recover these files again. I haven't touched the system since to try and seek an answer.
I found this article, and it seems like the grep method is the way to go, but when running it on my machine I get 'Resource busy' when trying to run it on the disk.
I'm using this command:
sudo grep -a -B1000 -A1000 'video_output' /dev/disk0s2 > file.txt
Where /dev/disk0s2 is what came up when I ran df.
I get this when running:
grep: /dev/disk0s2: Resource busy
I'm not an expert with this stuff, I'm trying my best. Please can anyone help me further? I'm on the verge of losing two days of source code work!
Thank you
I have one Mac which is always on and is my main computer. I also have a MacBook, and I'm trying to sync my iPhoto library between them. I can successfully use rsync to sync the files, and I'm using a cron job to have it run once a day.
In reality the MacBook isn't always on, so I'm looking for a way to run rsync whenever the two computers are connected to the same wifi network. I'm guessing the best approach is to somehow run rsync when the AirPort interface connects. What's the best way?
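The crude idea I have so far is to keep the cron job but guard it with a reachability check, something like this (using 127.0.0.1 just to demonstrate; DESKTOP would be the always-on Mac's hostname, and the rsync command is only echoed here, not run):

```shell
# only attempt the sync when the other machine answers a ping
DESKTOP=127.0.0.1
if ping -c 1 "$DESKTOP" > /dev/null 2>&1; then
    echo "host up - would run: rsync -a ~/Pictures/ $DESKTOP:Pictures/"
else
    echo "host down - skipping sync"
fi
```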
Well, we all know that it holds passwords. But cat-ing it gives out nothing, not even encrypted gibberish. So how exactly is a password stored in it? Is it like a device file or something?