How do I redirect a USB tty (Arduino) to a file in Mac OS X 10.6 from Terminal?
"screen /dev/tty.usbserial... 9600"
works, but
"cat /dev/tty.usbserial..."
does not; it just blocks.
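For what it's worth, a common workaround (a sketch; the device name below is a placeholder for your actual /dev/tty.usbserial-* entry) is to configure the port with stty first, then read from it:

```shell
# Set the baud rate on the port first; cat often blocks when the port
# has never been configured. On OS X, stty addresses a device with -f
# (on Linux the flag is -F).
stty -f /dev/tty.usbserial-XXXX 9600
cat /dev/tty.usbserial-XXXX > capture.log
```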
I'm trying to create a file in a folder with group write access; the user tomcat7 is in the group. Why isn't it working?
skr@konrad~/data/asu$ sudo -u tomcat7 sh
$ whoami
tomcat7
$ echo > /home/skr/data/asu/g.gz.index
sh: 2: cannot create /home/skr/data/asu/g.gz.index: Permission denied
$ ls -la /home/skr/data/asu/
total 18708
drwxrwxr-x 2 skr skr 4096 Sep 29 08:38 .
drwxrwxr-x 85 skr skr 4096 Jul 30 00:42 ..
$ grep ^skr /etc/group
skr:x:1002:tomcat7:mail
I tried logging out and back in, but it doesn't help. Any ideas?
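One diagnostic worth running (a sketch; paths are the ones from the question): group write on the final directory is not enough, because every ancestor directory also needs the execute (search) bit for the group, and namei shows the whole chain at once:

```shell
# Print owner and permissions for each component of the path; any
# ancestor without execute permission for the group (or others) will
# block tomcat7 despite the drwxrwxr-x on the final directory.
namei -l /home/skr/data/asu

# Inside the sudo shell, confirm the effective groups really include skr:
id
```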
We use Apple's Time Machine to back up our workstations at the office.
If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way.
Is there a way to browse the Time Machine archive using the Mac's command-line?
I'm used to NetApp filers and other storage appliances, and I use backintime on my Ubuntu workstation. With one of those systems, you can restore a file with a simple command like:
cp .snapshot/daily.0/filename.txt .
or
cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow
Is there an equivalent for Apple's Time Machine?
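For what it's worth, Time Machine snapshots are ordinary directories on the backup volume, so they can be browsed with normal commands (the volume and machine names below are illustrative; substitute your own):

```shell
# List available snapshots; each is a dated directory:
ls "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/"

# Copy a file straight out of a snapshot:
cp "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/2010-06-11-000002/Macintosh HD/Users/me/filename.txt" .
```

On OS X 10.7 and later, `tmutil listbackups` and `tmutil restore` do this officially, but 10.6 predates tmutil.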
I am getting the following three error lines in /var/mail/username after this job runs from crontab...
15 * * * * /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php
Errors:
/applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 1: ?php: No such file or directory
/applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: syntax error near unexpected token `'initialize.php''
/applications/mamp/htdocs/iconimageryidx/includes/insertpropertyRESI.php: line 3: `require_once('initialize.php');
The PHP script I am trying to execute DOES in fact exist, and I have made absolutely sure the spelling is correct several times. I ran another script from crontab before and it worked just fine... any ideas?
The 2nd and 3rd errors come from line 3 of the following script (the one I am trying to run from crontab):
<?php
require_once('initialize.php');
require_once('insertPropertyTypes.php');
$sDate;
if(isset($_GET['startDate'])) {
$sDate = $_GET['startDate'];
} else {
$sDate = '';
}
$insertResi = new InsertPropertyTypes('Listing', $sDate, 'RESI');
?>
When I run insertPropertyRESI.php in the browser, it runs just fine. Also, initialize.php and insertPropertyTypes.php are in the same directory as insertPropertyRESI.php.
I am using MAMP with PHP 5.3.5
Thanks for the help.
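For context, the error pattern above (`?php: No such file or directory`) is what appears when cron hands the file to /bin/sh instead of the PHP interpreter. A sketch of a crontab entry that invokes the PHP CLI explicitly (the interpreter path is a typical MAMP layout; verify the exact path on your install with `ls /Applications/MAMP/bin/php*`):

```
15 * * * * /Applications/MAMP/bin/php5.3/bin/php /Applications/MAMP/htdocs/iconimageryidx/includes/insertPropertyRESI.php
```

An alternative is to add a `#!/usr/bin/env php` shebang as the first line of the script and mark it executable with chmod +x.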
Using sed, how can I delete all blank lines?
Input file:
Steve Blenheim:238-923-7366:95 Latham Lane, Easton, PA 83755:11/12/56:20300
Betty Boop:245-836-8357:635 Cutesy Lane, Hollywood, CA 91464:6/23/23:14500
Igor Chevsky:385-375-8395:3567 Populus Place, Caldwell, NJ 23875:6/18/68:23400
Norma Corder:397-857-2735:74 Pine Street, Dearborn, MI 23874:3/28/45:245500
Jennifer Cowan:548-834-2348:583 Laurel Ave., Kingsville, TX 83745:10/1/35:58900
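A sketch of the usual answers (the second form also catches lines that contain only spaces or tabs):

```shell
# Delete strictly empty lines:
sed '/^$/d' datafile

# Delete lines that are empty or whitespace-only:
sed '/^[[:space:]]*$/d' datafile

# grep can do the strictly-empty case too: print only non-empty lines.
grep . datafile
```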
Using Solaris and Linux servers and OpenSSH, is it possible to prevent users from copying files using "scp" while still allowing shell access with "ssh"?
I realize that file access of the 'ssh $server "cat file"' sort is much harder to prevent, but I need to start by stopping scp.
Failing that, is there a way to reliably log all SCP access on the server side through syslog?
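One low-tech sketch (it assumes scp lives at /usr/bin/scp; restricted shells such as rssh and scponly are purpose-built alternatives worth a look): replace the server-side scp binary with a wrapper that logs the attempt to syslog and refuses. As noted above, users can still fetch files through other channels, so this is deterrence plus auditing rather than a hard block.

```shell
# Move the real binary aside and install a refusing, logging wrapper:
mv /usr/bin/scp /usr/bin/scp.real
cat > /usr/bin/scp <<'EOF'
#!/bin/sh
# Record who tried what in syslog, then refuse; the client's scp
# will surface the error message.
logger -p auth.notice -t scp-deny "user $(id -un) args: $*"
echo "scp is disabled on this host" >&2
exit 1
EOF
chmod 755 /usr/bin/scp
```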
I've messed up my path variable, and now some apps that I run fail with "Command Not Found" (error 127) for commands like 'date' and 'sleep'. These commands work fine when executed directly in the shell.
I'm guessing this has something to do with a malformed $PATH variable, and need to know how to reset it.
I've deleted the files ~/.bashrc, ~/.bash_profile, /etc/bash.bashrc, and ~/.profile.
What other files could set my $PATH? Is there a simpler way to reset the path than digging into the myriad files that could hold it?
Note, this path problem is only with my user. I made a test user on my system, and the path was fine, back to normal.
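A sketch for recovering in the meantime (the directory list is a typical Linux default; your distro's may differ, and "testuser" stands in for the working account you created):

```shell
# Restore a sane PATH for the current session:
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Or borrow the known-good value from the working test user:
su - testuser -c 'echo $PATH'
```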
Is there a tool that makes it possible to back up a MySQL database to Amazon S3 or Amazon Glacier without having to create a local file with the database contents?
Something like that:
mysqldump -u root -ppass -h host --all-databases | magical-s3-tool s3-bucket backup-yyyy-mm-dd.sql
This magical tool would take the piped data and transfer it directly to S3, without creating a local file.
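For what it's worth, the AWS CLI accepts `-` as the source of `aws s3 cp`, which gives exactly this pipeline with no temp file (the bucket name is a placeholder, and credentials must already be configured):

```shell
# Stream the dump straight to S3, compressing on the way:
mysqldump -u root -p'pass' -h host --all-databases \
  | gzip \
  | aws s3 cp - "s3://s3-bucket/backup-$(date +%F).sql.gz"
```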
I work with a lot of tab-delimited data files, with varying columns of uncertain length.
Typically, the way people view these files is to bring them down from the server to their Windows or Mac machine, and then open them up in Excel. This is certainly fully-featured, allowing filtering and other nice options. But sometimes, you just want to look at something quickly on the command line.
I wrote a bare-bones utility to display the first <n> lines of a file like so:
--- line 1 ---
1:{header-1} 2:{header-2} 3:...
--- line 2 ---
1:{data-1} 2:{data-2} 3:...
This is, obviously, very lame, but it's enough to pipe through grep, or figure out which header columns to use "cut -f" on.
Is there a *nix-based viewer for a terminal session which will display rows and columns of a tab-delimited file and let you move the viewing window over the file, or otherwise look at data?
I don't want to write this myself; instead, I'd just make a reformatter which would replace tabs with spaces for padding so I could open the file up in emacs and see aligned columns. But if there's already a tool out there to do something like this, that'd be great!
(Or, I could just live with Excel.)
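For the quick-look case, a sketch built from standard tools comes close to what you describe: `column` aligns the fields and `less -S` gives a horizontally scrollable viewing window:

```shell
# Align tab-separated columns and browse without line wrapping
# (-S disables wrapping; the arrow keys pan the viewing window):
column -t -s "$(printf '\t')" data.tsv | less -S
```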
I've followed this guide to setup PHP in FastCGI mode with Nginx. This guide describes 2 ways of doing it: TCP socket and UNIX socket.
I ran some Apache Benchmark tests on my local machine; each test was run multiple times to get better average statistics:
$ ab -c 200 -n 100000 http://....
APACHE: 1800 req/sec
NGINX (TCP socket): 2500 req/sec
NGINX (UNIX socket): 15000 req/sec
As far as I understand, there is overhead in using a TCP socket rather than a UNIX socket, hence the better performance of the latter. However, I was not expecting such a performance difference given that the TCP socket is on localhost, and I would therefore like to ask the following question:
Q: Given the huge performance gain with using a UNIX socket, what are the configuration scenarios where it would make sense to use a TCP socket instead?
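For reference, the two setups differ only in the `fastcgi_pass` line, and the TCP form is what you need as soon as the PHP workers live on a different host (or are load-balanced across several), which is the main scenario where it earns its overhead (socket path and port below are illustrative):

```nginx
# UNIX socket: fastest, but only reachable from the same machine.
fastcgi_pass unix:/var/run/php-fpm.sock;

# TCP socket: required when the FastCGI backend is remote or pooled.
fastcgi_pass 127.0.0.1:9000;
```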
I'm having a bit of a problem configuring my PuTTY client to work with the auto-completion feature in the ksh shell.
If I do a listing on the root, with the directories /home and /homeroot, it returns the directories in a list just fine. I can't select one, though, by hitting X = (where X is the number).
/home/nitrodist>ls /h #hits esc + =
1) home/
2) homeroot/
#hits 2 + = for the 'homeroot' dir
1) home/
2) homeroot/
#hits just the '=' key.
1) home/
2) homeroot/
Any ideas? I've su -'d to another user who can do it in their own PuTTY session, and I still can't do it there, which makes me think it's a PuTTY configuration issue. This is running in a ksh93 shell on HP-UX, if that makes any difference.
Here's my ksh config:
/home/campbelm>set -o
Current option settings
allexport off
bgnice on
emacs off
errexit off
gmacs off
ignoreeof off
interactive on
keyword off
markdirs off
monitor on
noexec off
noclobber off
noglob off
nolog off
notify off
nounset off
privileged off
restricted off
trackall off
verbose off
vi on
viraw on
xtrace off
/home/campbelm>
I have setup and configured a DHCP server on a sparc running Solaris 10. Now I want to test the DHCP server by creating a DHCP client on another computer running Solaris 11.
I would like to know how to specify a network address for a DHCP client so that the IP address it is assigned falls within a specific subnet.
For example:
The DHCP server host = 172.1.1.1
So I want the client machine to get an IP address in the 172.1.1.0/24 subnet (netmask 255.255.255.0).
Please help me.
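For reference, if the server were running ISC dhcpd, the scoping would be a subnet declaration like the sketch below; Solaris 10's native in.dhcpd expresses the same idea through its network tables (managed with dhtadm and pntadm), where you create a table for the 172.1.1.0 network. The address range here is a placeholder:

```
subnet 172.1.1.0 netmask 255.255.255.0 {
  range 172.1.1.100 172.1.1.200;
  option routers 172.1.1.1;
}
```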
I have a node server (0.6.6) running an Express application, along with Mongoose and s3, on an Ubuntu 11.04 machine.
Several times per hour, the server hangs. The application works fine, I see the Express logging, and then all of a sudden the server stops responding. No errors, no traces, no log output, and strangely enough the browser won't even show the request in the network debugging window. It's the same behaviour from any machine on the local network. I restart the server and it's okay again for several minutes, then it starts to hang again, every time while doing something different.
The same application on Amazon on the same Ubuntu version works fine and never hangs.
I know all this is kind of vague, but I don't know where to start. Has any of you seen something like this before? Any idea?
I do "crontab -e" and add the following line:
0 9 * * * /usr/bin/python /home/g1/g1/utils/statsEmail.py > /home/g1/log/statsemail.log
But it doesn't work. Why? The script itself works, and the log file stays empty.
My other command in crontab is this, and it works:
0 9 * * * /usr/bin/python /home/g1/g1/sphinx/updateall.py > /home/g1/log/updateall.log
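One sketch worth trying: `>` only captures stdout, so if statsEmail.py dies with a traceback, the evidence goes to stderr and ends up mailed (or dropped) rather than logged. Redirecting both streams usually reveals the failure:

```
0 9 * * * /usr/bin/python /home/g1/g1/utils/statsEmail.py > /home/g1/log/statsemail.log 2>&1
```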
I have a directory listing as follows (given by ls -la):
total 8
drwxr-xr-x 6 <user> <group> 204 Oct 18 12:13 .
drwxr-xr-x 7 <user> <group> 238 Oct 18 11:29 ..
drwxr-xr-x 14 <user> <group> 476 Oct 18 12:31 .git
-rw-r--r-- 1 <user> <group> 601 Oct 18 12:03 index.html
drwxr-xr-x 2 <user> <group> 68 Oct 18 12:13 test
drwxr-xr-x 2 <user> <group> 68 Oct 18 12:13 test2
Running ack . -f prints out the files in the directory:
index.html
How can I get ack to print out the directories in the directory? I want to ignore the .git directory (which I understand is default behavior for ack). On that note, how can I ignore certain directories?
I am using ack 1.9.6 on Mac OSX 10.8.2.
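ack's -f mode only ever lists files, so for directories plain find is the usual fallback (a sketch; --ignore-dir is ack's knob for skipping extra directories when searching):

```shell
# List directories, pruning .git entirely:
find . -name .git -prune -o -type d -print

# Make ack skip additional directories while searching:
ack --ignore-dir=test2 pattern .
```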
I'm new to installing applications via the Terminal, so excuse my absolute ignorance on the subject.
I want to install SoX ( http://sox.sourceforge.net/ ) so I can do some ninja audio editing. First I installed git, then I installed SoX. I didn't get any error messages, and the installation spawned a sox folder in my Users/myName folder.
However, when I try to use the program by typing "sox" in the Terminal, nothing happens; all I get is "command not found".
Does anybody know how to troubleshoot this?
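A sketch of the usual checklist (it assumes a source build sitting in ~/sox; `make install`'s default prefix is /usr/local):

```shell
# If you only ran ./configure && make, the binary is still in the
# build tree; install it, then make sure /usr/local/bin is on PATH:
cd ~/sox
./configure && make && sudo make install
export PATH="/usr/local/bin:$PATH"
hash -r          # drop the shell's stale command-lookup cache
sox --version
```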
The background for this question: I currently have to do a lot of my work in a terminal over ssh, and I use screen quite a bit. Because I found the ctrl-a key binding for screen commands annoying (I'm accustomed to using ctrl-a to go to the beginning of a line), I changed it to ctrl-z. The only problem is that when I'm in MATLAB and think I'm in screen but am not, pressing ctrl-z instantly kills my MATLAB session, because ctrl-z is the key binding for suspending processes in *nix.
So the question is: can I remove the ctrl-z key binding in my shell so that it no longer suspends a process?
My shell is Terminal.app on OS X.
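A sketch that sidesteps the shell entirely: the suspend character is a terminal (tty) setting, not a shell binding, so stty can unmap it for the current session. Note this disables ctrl-z suspension for every program on that terminal, not just screen:

```shell
# Disable the suspend character on this terminal:
stty susp undef

# Restore the default later:
stty susp '^Z'
```

Putting the first line in ~/.bash_profile makes it apply to every new Terminal.app session.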
I have very basic shell scripting knowledge.
I have photos in an original folder inside many different folders, like this:
folder
+ folder1
+ original
+ folder2
+ original
+ folder3
+ original
+ folder4
+ original
Using mogrify, I'm trying to create thumbs in a thumb folder, following a structure like this:
folder
+ folder1
+ original
+ thumb
+ folder2
+ original
+ thumb
+ folder3
+ original
+ thumb
+ folder4
+ original
+ thumb
I'm a little lost on how to write the shell script that iterates through it. I'm OK giving mogrify its settings, but I don't completely understand how to tell the script to iterate over each folder and run the mogrify command.
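A minimal sketch under these assumptions: the photos are JPEGs, the tree matches the layout above, and `-thumbnail 150x150` stands in for your own mogrify settings:

```shell
# For every */original directory, create a sibling thumb directory
# and let mogrify write the resized copies there (-path redirects
# mogrify's output instead of overwriting the originals):
for dir in folder/*/original; do
  out="$(dirname "$dir")/thumb"
  mkdir -p "$out"
  mogrify -path "$out" -thumbnail 150x150 "$dir"/*.jpg
done
```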
I am currently trying to block some websites by their domain names for all the clients of my OpenVPN server.
My first idea was to use the /etc/hosts file, but its effects seem limited to the host itself and are not taken into consideration by OpenVPN.
I then tried to configure bind9 and interface it with OpenVPN, but that solution was unsuccessful and cumbersome.
After that, I considered using iptables to drop all packets from/to those websites, but a forum thread made me think otherwise, since iptables' behaviour with FQDNs may generate complex issues.
Is there a way to block websites for all clients of an OpenVPN server on which I am root?
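One approach that has worked for others (a sketch; the VPN address 10.8.0.1 and the domain are placeholders): run a small resolver such as dnsmasq on the VPN host, blackhole the unwanted domains there, and push that resolver to every client so the blocking applies VPN-wide. Note that clients can still bypass it by hard-coding another DNS server or the site's IP address.

```
# /etc/dnsmasq.conf -- answer 0.0.0.0 for the blocked domain and all
# of its subdomains:
address=/blocked-example.com/0.0.0.0

# OpenVPN server config -- hand the VPN-side resolver to the clients:
push "dhcp-option DNS 10.8.0.1"
```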
I need advice about USB programming in Linux. I have to design a USB monitoring program that keeps checking the USB ports of a CentOS Linux machine. As soon as a USB stick or external hard disk is connected, the program should email a specific person with the details of the device (size, mount point, time). When the device is disconnected, it should send another email with the same kind of information. Meanwhile, the program should also write logs to syslog/messages under the program's name for easy tracking.
Now I want to ask: what is the best way to develop this program? I'm new to this field, so I know almost nothing about it. Should I use Perl, bash scripting, or some other language? I don't know the right approach, because this program has to keep running all the time to watch the USB ports. I know a few commands like lsusb and fdisk (to check attached USB devices) and df -h (to get device details), but I don't know how to achieve what I'm thinking of using these commands.
One more thing: in the future I will also need to adapt this program for Ubuntu and Citrix XenServer, and it should behave the same everywhere.
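For what it's worth, on modern Linux the event-driven route is usually udev rather than a polling loop: a rule fires a script on add/remove, and the script does the logging and mailing. A sketch (the rule file, script path, and recipient address are all illustrative, and `mail` assumes a configured MTA):

```shell
# /etc/udev/rules.d/99-usb-mail.rules -- fire on USB block devices:
#   ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh add %k"
#   ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh remove %k"

# /usr/local/bin/usb-notify.sh -- log to syslog, then mail the details:
#!/bin/sh
event="$1"; dev="$2"
logger -t usb-monitor "USB $event: /dev/$dev"
{ echo "Event: $event  Device: /dev/$dev  Time: $(date)"
  df -h "/dev/$dev" 2>/dev/null
} | mail -s "USB $event: $dev" admin@example.com
```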
I'm currently using this line to find all the log files in a given directory structure and copy them to another directory where I can compress them easily:
find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \;
The problem is that the directory/subdirectory information gets lost, because I'm copying only the file.
For instance I have:
./product/install/install.log
./product/execution/daily.log
./other/conf/blah.log
And I end up with:
/tmp/allLogs/install.log
/tmp/allLogs/daily.log
/tmp/allLogs/blah.log
And I would like to have:
/tmp/allLogs/product/install/install.log
/tmp/allLogs/product/execution/daily.log
/tmp/allLogs/other/conf/blah.log
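GNU cp's `--parents` flag (and, more portably, cpio's pass-through mode) recreates the source path under the destination, which gives exactly the layout you want:

```shell
# GNU coreutils: recreate the relative source path under /tmp/allLogs/
find . -name "*.log" -exec cp --parents {} /tmp/allLogs/ \;

# Portable alternative using cpio's pass-through (-p) mode:
find . -name "*.log" | cpio -pdm /tmp/allLogs/
```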