Search Results

Search found 574 results on 23 pages for 'mkdir'.


  • Why does my Net::Telnet program time out?

    - by user304852
    I've written a small Perl script to connect to a remote server, but I'm seeing a timeout error.

        #!/usr/bin/perl -w
        use Net::Telnet;

        $telnet = new Net::Telnet ( Timeout=>60, Errmode=>'die');
        $telnet->open('192.168.50.40');
        $telnet->waitfor('/login:/');
        $telnet->print('queen');
        $telnet->waitfor('/password:/');
        $telnet->print('kinG!');
        $telnet->waitfor('/:/');
        $telnet->print('vol >> C:\result.txt');
        $telnet->waitfor('/:/');
        $telnet->cmd("mkdir vol");
        $telnet->print('mkdir vol234');
        $telnet->cmd("mkdir vol1");
        $telnet->waitfor('/\$ $/i');
        $telnet->print('whoamI');
        print $output;

    While running it I get the following:

        C:\>perl -c E:\test\net.pl
        E:\test\net.pl syntax OK

        C:\>perl E:\test\net.pl
        command timed-out at E:\test\net.pl line 13

    Help me in this regard; I'm not very familiar with Perl.

  • Makefile: Build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed of multiple files, each of which is built into an .so dynamic library that the interpreter loads upon user request (with an import directive). Each source file is located in a subdirectory representing its "namespace". The build process has to create a "build" directory, and when each file is compiled it has to create the file's namespace directory inside "build"; for instance, when compiling std/io/network/tcp.cc it runs an mkdir command such as:

        mkdir -p build/std/io/network

    The Makefile snippet is:

        STDSRC=stdlib/std/hashing/md5.cc \
               stdlib/std/hashing/crc32.cc \
               stdlib/std/hashing/sha1.cc \
               stdlib/std/hashing/sha2.cc \
               stdlib/std/io/network/http.cc \
               stdlib/std/io/network/tcp.cc \
               stdlib/std/io/network/smtp.cc \
               stdlib/std/io/file.cc \
               stdlib/std/io/console.cc \
               stdlib/std/io/xml.cc \
               stdlib/std/type/reflection.cc \
               stdlib/std/type/string.cc \
               stdlib/std/type/matrix.cc \
               stdlib/std/type/array.cc \
               stdlib/std/type/map.cc \
               stdlib/std/type/type.cc \
               stdlib/std/type/binary.cc \
               stdlib/std/encoding.cc \
               stdlib/std/os/dll.cc \
               stdlib/std/os/time.cc \
               stdlib/std/os/threads.cc \
               stdlib/std/os/process.cc \
               stdlib/std/pcre.cc \
               stdlib/std/math.cc

        STDOBJ=$(STDSRC:.cc=.so)

        all: stdlib

        stdlib: $(STDOBJ)

        .cc.so:
            mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'`
            $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS)

    I have two questions:

    1. The make command, and I really don't know why, doesn't check whether a file was modified and launches the build process on ALL the files no matter what, so if I need to build only one file I have to build them all or use make path/to/single/file.so. Is there any way to solve this?

    2. Is there any way to do this in a "cleaner" way, without having to distribute all the build directories with the sources?

    Thanks
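
    For question 1, the likely cause is that make never sees the targets it is asked to build: the suffix rule's targets are the stdlib/….so paths, but the recipe writes the .so files under stdlib/build/…, so the expected targets never exist and everything is rebuilt every time. A minimal sketch of a fix, assuming GNU make (recipe lines must start with a tab): declare the targets where the files really go, using a pattern rule.

        STDOBJ=$(STDSRC:stdlib/%.cc=stdlib/build/%.so)

        all: $(STDOBJ)

        # target path and output path now agree, so timestamp checks work
        stdlib/build/%.so: stdlib/%.cc
            mkdir -p $(dir $@)
            $(CXX) $< -o $@ $(CFLAGS) $(LDFLAGS)

    With targets and outputs in agreement, make's dependency checking works again, and this also answers question 2 in part: every generated file lives under stdlib/build, which can be deleted or left out of the distributed sources entirely.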

  • Can Linux file permissions be fooled?

    - by puk
    I came across this example today, and I wondered how reliable Linux file permissions are for hiding information:

        $ mkdir fooledYa
        $ mkdir fooledYa/ohReally
        $ chmod 0300 fooledYa/
        $ cd fooledYa/
        $ ls
        ls: cannot open directory .: Permission denied
        $ cd ohReally
        $ ls -ld .
        drwxrwxr-x 2 user user 4096 2012-05-30 17:42 .

    Now I am not a Linux OS expert, so I have no doubt that someone out there will explain to me that this is perfectly logical from the OS's point of view. However, my question still stands: is it possible to fool, not hack, the OS into letting you view file/inode info which you are not supposed to see? What if I had issued the command chmod 0000 fooledYa? Could an experienced programmer find some roundabout way to read a file such as fooledYa/ohReally/foo.txt?
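
    For what it's worth, a quick illustration (hypothetical names, non-root user) of why this is execute/search semantics rather than a trick: the x bit alone allows traversal by exact name, the r bit alone allows listing, and root is never stopped by mode bits at all.

        $ mkdir -p hidden/secret
        $ echo hello > hidden/secret/foo.txt
        $ chmod 0100 hidden              # search only: no listing, traversal OK
        $ ls hidden
        ls: cannot open directory hidden: Permission denied
        $ cat hidden/secret/foo.txt      # exact path known, so this works
        hello
        $ chmod 0000 hidden
        $ cat hidden/secret/foo.txt      # now even traversal is refused...
        cat: hidden/secret/foo.txt: Permission denied
        $ sudo cat hidden/secret/foo.txt # ...but root bypasses the mode bits
        hello

    So with chmod 0000 an unprivileged user is genuinely locked out, but anyone with root (or raw access to the block device) can still read everything.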

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3. I want to display two files: an HTML file and an XSD for reference.

    1. Create a directory of your choice; this is what I will call the document root.

           mkdir /u01/oracle/doc_root

    2. Copy the static files to this directory.

    3. In the document root directory created in step 1, create the directory WEB-INF.

           mkdir WEB-INF

    4. In the WEB-INF directory, create a file called web.xml with the following content:

           <?xml version="1.0" encoding="UTF-8"?>
           <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
           <web-app>
           </web-app>

    5. Log in to the WebLogic console to deploy the application: click Deployments, then Lock & Edit, then Install, and set the path to the directory created in step 1. Leave the default "Install this deployment as an application" and click Next. Select a managed server to deploy to and click Next. Accept the defaults and click Finish.

    6. The deployment completes successfully; now click Activate Changes. You should see the application started in the deployments list.

    You can now access your static content via the following URL:

        http://localhost:7001/doc_root/helloworld.html
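
    As far as I recall, the context root defaults to the deployment (directory) name, doc_root above. If you want the content served under a different path, a WEB-INF/weblogic.xml should let you set it explicitly; a sketch, with /static as an example path:

        <?xml version="1.0" encoding="UTF-8"?>
        <weblogic-web-app>
            <context-root>/static</context-root>
        </weblogic-web-app>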

  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem, with plenty of "Permission denied" outputs, when using symlinks.

    Linux: Slackware 13.1

    Directory with symlink:

        root@Tower:/var/lib# ls -lah
        drwxr-xr-x  8 root root    0 2012-12-02 20:09 ./
        drwxr-xr-x 15 root root    0 2012-12-01 21:06 ../
        lrwxrwxrwx  1 ntop ntop   21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/

    Symlinked directory:

        root@Tower:/mnt/user/media# ls -lah
        drwxrwx---  1 nobody users 1.4K 2012-12-02 19:28 ./
        drwxrwx---  1 nobody users  128 2012-11-18 16:06 ../
        drwxrwxrwx  1 ntop   ntop   320 2012-12-02 20:22 ntop6/

    What I have done: I used chown -h ntop:ntop on the ntop symlink in /var/lib, and just to be sure, I chmod 777 both directories.

    Permission denied actions:

        root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test
        mkdir: cannot create directory `/var/lib/ntop/test': Permission denied

    Any ideas?
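
    Every directory on the way to a target must grant execute (search) permission to the accessing user, and /mnt/user and /mnt/user/media are drwxrwx--- nobody:users, so unless ntop is in the users group the kernel refuses traversal there, regardless of what the symlink or ntop6 itself allows. Two quick checks, sketched:

        # show the mode of every path component, following the link
        namei -l /var/lib/ntop

        # re-test traversal as the ntop user
        sudo -u ntop ls /mnt/user/media/ntop6/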

  • chown: changing ownership of `.': Invalid argument

    - by Pierre
    I'm trying to install some new files on our new server while our sysadmin is on holiday. Here is my df:

        # df -h
        Filesystem             Size  Used Avail Use% Mounted on
        /dev/sdb3              273G   11G  248G   5% /
        tmpfs                   48G  260K   48G   1% /dev/shm
        /dev/sdb1              485M  187M  273M  41% /boot
        xxx.xx.xxx.xxx:/commun  63T  2.2T   61T   4% /commun

    As root, I can create a new directory and run chown under /home/lindenb:

        # cd /home/lindenb/
        # mkdir X
        # chown lindenb X

    but I cannot run the same command under /commun:

        # cd /commun/data/users/lindenb/
        # mkdir X
        # chown lindenb X
        chown: changing ownership of `X': Invalid argument

    Why? How can I fix this?

    Updated, mount:

        /dev/sdb3 on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw)
        /dev/sdb1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
        xxx.xx.xxx.xxx:/commun on /commun type nfs (rw,noatime,noac,hard,intr,vers=4,addr=xxx.xx.xxx.xxx,clientaddr=xxx.xx.xxx.xxx)

    Version:

        $ cat /etc/redhat-release
        CentOS release 6.3 (Final)
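
    On an NFSv4 mount, chown failing with "Invalid argument" is commonly an id-mapping problem: the server either doesn't know the user or the idmapd domains on client and server disagree, so the request can't be translated. A few hedged checks (CentOS 6 commands; the server hostname is hypothetical):

        # the NFSv4 id-mapping domain must match on client and server
        grep -i '^Domain' /etc/idmapd.conf

        # restart the mapper after changing it
        service rpcidmapd restart

        # the target user must also resolve on the server side
        ssh server id lindenb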

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day whether it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

        dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use dir to get a listing from the source folder, pipe it through where to get only the objects that are folders, and then do a foreach (using the % alias) and call mkdir.

  • Figure out what the non-symlink path would be?

    - by David Mackintosh
    On Linux, if I've cd'd around and am now in a directory, is there a way to figure out what the real path to that directory is, i.e. the path I would have used had I not gone through a symbolic link? Consider:

        $ pwd
        /home/dave/tmp
        $ mkdir -p 1/2/3/4/5
        $ ln -s 1/2/3/4/5 5
        $ cd 5
        $ pwd
        /home/dave/tmp/5

    Or:

        $ pwd
        /home/dave/tmp
        $ mkdir -p 1/2/3/4/5
        $ ln -s 1/2/3/4 4
        $ cd 4/5
        $ pwd
        /home/dave/tmp/4/5

    Is there any way to figure out that /home/dave/tmp/5 is really /home/dave/tmp/1/2/3/4/5?
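
    For reference, the shell's pwd -P ("physical") resolves symlinks, and GNU readlink -f does the same for an arbitrary path:

        $ pwd -P
        /home/dave/tmp/1/2/3/4/5
        $ readlink -f .
        /home/dave/tmp/1/2/3/4/5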

  • How to install a custom C library?

    - by arijit
    I just wanted to add a C library to Ubuntu which was created by Harvard University for the CS50 course. They provided instructions for how to install the library, listed below.

    Debian, Ubuntu

    First become root, as with:

        sudo su -

    Then install the CS50 Library as follows:

        apt-get install gcc
        wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
        unzip cs50-library-c-3.1.zip
        rm -f cs50-library-c-3.1.zip
        cd cs50-library-c-3.1
        gcc -c -ggdb -std=c99 cs50.c -o cs50.o
        ar rcs libcs50.a cs50.o
        chmod 0644 cs50.h libcs50.a
        mkdir -p /usr/local/include
        chmod 0755 /usr/local/include
        mv -f cs50.h /usr/local/include
        mkdir -p /usr/local/lib
        chmod 0755 /usr/local/lib
        mv -f libcs50.a /usr/local/lib
        cd ..
        rm -rf cs50-library-c-3.1

    I did exactly as directed, but the compiler reported "undefined reference" to a function: the function was GetString. So I searched for a solution and found one: use the -l switch. Now when I compile I use something like (I don't remember the exact command):

        gcc -o hello.c hello -lcs50

    However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?
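
    make fails for the same reason the first gcc attempt did: nothing tells the link step about libcs50. GNU make's built-in C rule appends $(LDLIBS) to the link command, so a minimal sketch:

        # manual invocation: -o names the output binary; libraries go last
        gcc -o hello hello.c -lcs50

        # one-off with make's implicit rule
        make hello LDLIBS=-lcs50

        # or record it once in a Makefile next to hello.c
        printf 'LDLIBS = -lcs50\n' > Makefile
        make hello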

  • Linux file permissions seem right but I can't write to a directory

    - by CaseyB
    I believe that I have the permissions set correctly, but I can't write to a directory. Here's my problem:

        cborders@Kraken:/var/www$ ls -la
        total 12
        drwxrwxr-x  2 webz webz 4096 2011-12-30 14:58 ./
        drwxr-xr-x 13 root root 4096 2011-12-30 14:58 ../
        -rw-rw-r--  1 webz webz  177 2011-12-30 14:58 index.html
        cborders@Kraken:/var/www$ id cborders
        uid=1000(cborders) gid=1000(cborders) groups=1000(cborders),4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),113(lpadmin),114(admin),1002(webz)
        cborders@Kraken:/var/www$ mkdir test
        mkdir: cannot create directory `test': Permission denied

    The owner of the directory is a user called webz, and the permissions allow the user and group rwx access to it. I am in the webz group, but I still can't make any changes. What am I doing wrong here?
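
    A group added to your account after you logged in doesn't apply to sessions that are already running; id cborders reads the user database, while plain id shows the groups of the current process. A quick check and workaround, sketched:

        # does the *running* session actually have webz yet?
        id

        # start a shell whose primary group is webz (or log out and back in)
        newgrp webz
        mkdir /var/www/test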

  • receiving "command not found" error messages after fresh reinstall of Lubuntu 14.04

    - by user236378
    Lubuntu 14.04 was working really great... until I messed up and had to do a complete fresh reinstall. Now I receive error messages when I input commands into the terminal, even immediately after completing the fresh install. For example, I type:

        sudo leafpad /etc/default/

    or:

        sudo leafpad /etc/default/grub

    I get:

        sudo: leafpad: command not found

    I type:

        sudo update-initramfs -u

    or:

        sudo update-grub

    I get:

        sudo: update-initramfs: command not found
        sudo: update-grub: command not found

    If I use the command mkdir, I get:

        mkdir: command not found

    I also get this same exact "command not found" error with sudo apt-get and wget. In other words, I can't do anything that I was able to do when inputting commands into the terminal, so I cannot add any repositories or update anything at all. I am not really sure what is causing the problem(s). It appeared to me that Lubuntu installed and booted up OK; however, as soon as I enter anything into the terminal I immediately get the above error messages. I have tried the reinstall three times, with the same error messages. If anyone can suggest any fixes I would really appreciate it very much. Thank you!
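
    When even mkdir is "not found", the shell's search path is usually broken (for example by a mangled ~/.profile or an edit that replaced PATH instead of appending to it), since /bin/mkdir is part of coreutils and will still be installed. A quick check, sketched:

        # what is the shell actually searching?
        echo $PATH

        # restore a sane search path for the current session
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        # the binary itself should still be there
        ls -l /bin/mkdir

    If commands work again after the export, hunt for the bad assignment in ~/.profile, ~/.bashrc, or /etc/environment.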

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a two (multi) step process for me:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>

    or alternatively with wildcards:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*'

    Then I go ahead and recompress the created folder:

        tar -vczf small.tgz ttmp/*
        rm -rf ttmp

    How can I combine these two commands into one? Like this:

        tar -x large.tgz > tar -c small.tgz

    Just to show what I already tried: whenever I search for the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
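
    I don't know of a way to have one tar feed the other directly while keeping the file names, but the scratch directory can at least be hidden inside a single command line. A sketch, assuming GNU tar (<INT> and the pattern are the same placeholders as above):

        t=$(mktemp -d) \
          && tar -xzf large.tgz -C "$t" --strip-components=<INT> \
                 --wildcards --no-anchored '*pattern*' \
          && tar -czf small.tgz -C "$t" . \
          && rm -rf "$t"

    The -C "$t" . on the repacking side also removes the ttmp/ prefix that tar -vczf small.tgz ttmp/* would have embedded in the new archive.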

  • How to run a script from root as another user (with the user's PATH)

    - by Sandra
    I would like to have these commands run as the ss user from root:

        mkdir bin
        cp -r /opt/gitolite .
        gitolite/install -ln
        gitolite setup -pk ss.pub
        mkdir -p .gitolite/hooks/common
        ln -s /opt/pre-receive .gitolite/hooks/common/

    so everything is executed in /home/ss. The 4th line requires $HOME/bin, as you can see from the 3rd line. The only way I can get it to work is by adding su -c "command" ss to each line, which is not a nice hack. This is an extension to my previous question, where I wasn't precise enough.

    Question: How do I run all these commands as a script in a practical way?
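
    One approach, sketched: feed the whole block to a single login shell for ss via a here-document, so every command runs in /home/ss with ss's own environment. The explicit PATH export covers the fact that ~/bin may not have been in the login PATH when the shell started (it is created by the first command):

        su - ss <<'EOF'
        export PATH="$HOME/bin:$PATH"   # line 3 links gitolite into ~/bin; line 4 needs it
        mkdir bin
        cp -r /opt/gitolite .
        gitolite/install -ln
        gitolite setup -pk ss.pub
        mkdir -p .gitolite/hooks/common
        ln -s /opt/pre-receive .gitolite/hooks/common/
        EOF

    The quoted 'EOF' keeps root's shell from expanding $HOME before the block reaches ss's shell.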

  • Crontab: split one line into many lines?

    - by Heoa
    Hard-to-read line:

        @daily export sunshine="~/logs/Sunshine-`date '+\%F'`" && export sunshineUrl="http://www.sunshine.net/main/search_results.asp?currency_id=1&min_price=&max_price=50000&country_id=241&region_id=&Submit=Search" && mkdir -p $sunshine && cd $sunshine && wget --mirror -l 1 $sunshineUrl

    Which mark do I need to split it over many lines?

        @daily <SOME MARK HERE>
        export sunshine="~/logs/Sunshine-`date '+\%F'`" && <SOME MARK HERE>
        export sunshineUrl="http://www.sunshine.net/main/search_results.asp?currency_id=1&min_price=&max_price=50000&country_id=241&region_id=&Submit=Search" && <SOME MARK HERE>
        mkdir -p $sunshine && <SOME MARK HERE>
        cd $sunshine && wget --mirror -l 1 $sunshineUrl

    No success by appending \, //, \n or /n.
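
    No mark will work: crontab(5) has no line-continuation syntax, so each line is a complete entry on its own. The usual fix is to move the command into a shell script and keep the crontab entry short. A sketch (script name hypothetical; remember chmod +x):

        #!/bin/sh
        # ~/bin/sunshine-mirror
        # $HOME instead of ~, which would not expand inside quotes;
        # inside a script, % no longer needs the cron-style \% escaping
        sunshine="$HOME/logs/Sunshine-$(date '+%F')"
        sunshineUrl="http://www.sunshine.net/main/search_results.asp?currency_id=1&min_price=&max_price=50000&country_id=241&region_id=&Submit=Search"
        mkdir -p "$sunshine" && cd "$sunshine" && wget --mirror -l 1 "$sunshineUrl"

    and the crontab entry becomes:

        @daily $HOME/bin/sunshine-mirror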

  • PHP error: unexpected T_OBJECT_OPERATOR... trying to install Magento using ssh commands on DreamHost

    - by Jane
    I am trying to install Magento (an e-commerce platform). I am following a tutorial that tells me to run this command over ssh:

        ./pear mage-setup

    but I'm getting this error:

        Parse error: syntax error, unexpected T_OBJECT_OPERATOR in /home/domainname.com/downloader/pearlib/php/System.php on line 400

    Line 400 is commented in the code snippet from the System.php file:

        /* Magento fix for set tmp dir in config.ini */
        if (class_exists('Maged_Controller', false)) {
            /* line 400 */
            $magedConfig = Maged_Controller::model('Config', true)->load();

            if ($magedConfig->get('use_custom_permissions_mode') == '1' &&
                $mode = $magedConfig->get('mkdir_mode')) {
                $result = System::mkDir(array('-m' . $mode, $tmpdir));
            } else {
                $result = System::mkDir(array('-p', $tmpdir));
            }

            if (!$result) {
                return false;
            }
        }

    Can anyone help me demystify this error?
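
    Chaining a method call onto a static call, as in Maged_Controller::model('Config', true)->load(), is PHP 5 syntax; PHP 4 stops at the -> with exactly this parse error, and shared hosts of that era often still defaulted the command-line php to PHP 4. A hedged first check (the php5 path is hypothetical; look for what your host actually provides):

        # which PHP does the shell use?
        php -v

        # if it reports 4.x, put a PHP 5 binary first on the PATH and retry
        export PATH=/usr/local/php5/bin:$PATH
        php -v
        ./pear mage-setup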

  • Filtering and copying with PowerShell

    - by Bergius
    In my quest to improve my PowerShell skills, here's an example of an ugly solution to a simple problem. Any suggestions on how to improve the one-liner are welcome.

    Mission: trim a huge icon library down to something a bit more manageable. The original directory structure looks like this:

        /Apps and Utilities
            /Compile
                /32 Bit Alpha png
                    Compile 16 n p.png
                    + 10 or more files
                + 5 more formats with 10 or more files each
            + 20 or so icon names
        + 22 more categories

    I want to copy the 32 Bit Alpha pngs and flatten the directory structure a bit. Here's my quick and very dirty solution:

        $dest = mkdir c:\icons;
        gci -r | ? { $_.Name -eq '32 Bit Alpha png' } |
            % { mkdir ("$dest\" + $_.Parent.Parent.Name + "\" + $_.Parent.Name); $_ } |
            gci |
            % { cp $_.FullName -dest ("$dest\" + $_.Directory.Parent.Parent + "\" + $_.Directory.Parent) }

    Not nice, but it solved my problem. Resulting structure:

        /Apps and Utilities
            /Compile
                Compile 16 n p.png
                etc.
            etc.
        etc.

    How would you do it?

  • NetBeans IDE 6.8 not working nicely with cygwin 1.7.5.1

    - by Milktrader
    I'm trying to use NetBeans to compile C code and have the following versions from Cygwin:

        gcc 3.4.5
        g++ 3.4.5
        GNU Make 3.81
        GNU gdb 6.8.0

    Here are the messages from trying to compile the Welcome program:

        /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf
        make[1]: Entering directory `/cygdrive/c/Users/Milktrader/Documents/NetBeansProjects/Welcome_1'
        /usr/bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/welcome_1.exe
        make[2]: Entering directory `/cygdrive/c/Users/Milktrader/Documents/NetBeansProjects/Welcome_1'
        mkdir -p build/Debug/MinGW-Windows
        make[2]: mkdir: Command not found
        make[2]: *** [build/Debug/MinGW-Windows/welcome.o] Error 127
        make[2]: Leaving directory `/cygdrive/c/Users/Milktrader/Documents/NetBeansProjects/Welcome_1'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/cygdrive/c/Users/Milktrader/Documents/NetBeansProjects/Welcome_1'
        make: *** [.build-impl] Error 2

        BUILD FAILED (exit value 2, total time: 1s)

    Is it worth downloading a previous Cygwin version (1.5)? Blog tutorials (including the NetBeans site) use this older version in their examples.
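
    Error 127 means the shell that make spawned could not find mkdir at all, which points at the PATH that NetBeans hands to the build rather than at the Cygwin release. A couple of hedged checks (assuming the default C:\cygwin install location):

        # from a Cygwin shell: do the tools resolve?
        which mkdir make gcc

        # from cmd.exe: Cygwin's bin directory must be on the Windows PATH
        # that NetBeans inherits
        echo %PATH%
        set PATH=C:\cygwin\bin;%PATH%

    If that fixes a build started from the command line, add C:\cygwin\bin to the system PATH (or to the tool collection settings in NetBeans) so IDE builds see it too.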

  • Unrecognized option: -o Could not create the Java virtual machine.

    - by Kerubu
    I've got an unusual build error when using Buildroot to create an image for my Phidget SBC. It's unusual because it occurs ONLY on my development laptop and NOT on my general-use laptop, even though I am using EXACTLY the same Buildroot environment as published by Phidgets themselves. When I try to create my Buildroot image I get the following error when it attempts to compile GNU Classpath:

        Making all in tools
        make[2]: Entering directory `/home/xxxx/buildroot_phidgetsbc/buildroot-phidgetsbc_1.0.4.20111028/output/build/classpath-0.98/tools'
        /bin/mkdir -p classes asm
        /bin/mkdir -p ../tools/generated/gnu/classpath/tools/gjdoc/expr
        java -classpath antlr.Tool -o ../tools/generated/gnu/classpath/tools/gjdoc/expr/ \
            ./gnu/classpath/tools/gjdoc/expr/java-expression.g
        Unrecognized option: -o
        Could not create the Java virtual machine.
        make[2]: *** [tools.zip] Error 1

    The only difference I can possibly think of is the different Linux (Ubuntu) versions I am using on each laptop. Also, I cannot find a -o option documented for java, and I don't understand why it works on one laptop but not the other. Any suggestions would be helpful.
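
    Look closely at the failing line: java -classpath antlr.Tool -o ... has no classpath value before the class name, so java consumes antlr.Tool as the argument to -classpath and then trips over -o itself. That usually means a configure-detected variable (the path to an antlr jar) expanded to nothing on this machine. A hedged reading of what was intended, with a hypothetical jar path:

        # what the Makefile presumably meant to run:
        java -classpath /usr/share/java/antlr.jar antlr.Tool \
            -o ../tools/generated/gnu/classpath/tools/gjdoc/expr/ \
            ./gnu/classpath/tools/gjdoc/expr/java-expression.g

        # is an antlr jar installed on the failing laptop?
        ls /usr/share/java | grep -i antlr

    If it's missing, installing antlr (and re-running the configure step for classpath in Buildroot) would be the first thing to try.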

  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the llmount.sh test with any degree of success.

    What I've done: Each system (throwaway PCs with a 70GB HD and 2GB RAM) is formatted with CentOS 6.2. I then update everything, install the Lustre kernel from downloads.whamcloud.com, and add the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with llmount.sh (and then cleared with llmountcleanup.sh). All is well to this point.

    First I create an MDS/MDT system via:

        /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype ldiskfs --reformat /tmp/lustre-mdt1

    and then:

        mkdir -p /mnt/mds1
        mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1

    Next I take 3 systems and create a 2GB loop mount via:

        /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1
        mkdir -p /mnt/ost1
        mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1

    The logs on the MDT box show the OSS boxes connecting up. All appears OK.

    Last, I create a client and attach to the MDT box:

        mkdir -p /mnt/lustre
        mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre

    Again, the log on the MDT box shows the client connection. Appears to be successful.

    Here's where the issues (appear to) start. If I do a df -h on the client, it hangs after showing the system drives. If I attempt to create files (via dd) on the Lustre mount, the session hangs and the job can't be killed; rebooting the client is the only solution. If I do an lctl dl from the client, it shows that only 2 of the 3 OST boxes are found and 'UP':

        [root@lfsclient0 etc]# lctl dl
        0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5
        1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
        2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4
        3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
        4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5
        5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5

    Doing an lfs df from the client shows:

        [root@lfsclient0 etc]# lfs df
        UUID                   1K-blocks   Used  Available  Use%  Mounted on
        lustre-MDT0000_UUID        149944  16900     123044   12%  /mnt/lustre[MDT:0]
        OST0000 : inactive device
        OST0001 : Resource temporarily unavailable
        OST0002 : Resource temporarily unavailable
        lustre-OST0003_UUID        187464  24764     152636   14%  /mnt/lustre[OST:3]
        filesystem summary:        187464  24764     152636   14%  /mnt/lustre

    Given that each OSS box has a 2GB (loop) mount, I would expect to see this reflected in the available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost.

    EDIT: each system has all other systems defined in /etc/hosts and entries in iptables to provide access.

    SO: I'm clearly making several mistakes. Any pointers as to where to start correcting them?
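
    Not a full answer, but given that lfs df shows two of the three OSTs never became usable, the place to start is verifying LNET connectivity and that each OST is actually still mounted on its OSS. A sketch (hostnames hypothetical):

        # from the client: can LNET reach the MDS and each OSS?
        lctl ping lustre_MDS0@tcp
        lctl ping lustre_OSS1@tcp     # repeat for each OSS

        # on each OSS: is the OST mounted, and did anything hit the logs?
        mount -t lustre
        dmesg | tail

    Also worth double-checking: the client mount above uses luster_MDS0@tcp (note the spelling) while the OSTs registered against lustre_MDS0@tcp, and iptables must allow Lustre's default port 988 between all nodes.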

  • How do you preseed an ssh key?

    - by Wes Felter
    I tried this:

        d-i preseed/late_command string mkdir -p /target/root/.ssh
        d-i preseed/late_command string cp /cdrom/id_rsa.pub /target/root/.ssh/authorized_keys
        d-i preseed/late_command string chmod -R go-rwx /target/root/.ssh

    (I'm using a USB installer and I put id_rsa.pub in the root directory of the USB drive.) The /root/.ssh directory is not created, and the installer complains that the chmod command failed (not surprising if the directory doesn't exist).
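
    A preseed key can only hold one value, so when preseed/late_command appears three times only the last assignment survives; the mkdir and cp never run, which matches the symptoms exactly. Joining everything into a single command should work; a sketch (a trailing backslash continues the line in a preseed file):

        d-i preseed/late_command string \
            mkdir -p /target/root/.ssh; \
            cp /cdrom/id_rsa.pub /target/root/.ssh/authorized_keys; \
            chmod -R go-rwx /target/root/.ssh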

  • Delete directory by referencing symbolic link

    - by Adam
    To set up the question, imagine this scenario:

        mkdir ~/temp
        cd ~/
        ln -s temp temporary

    rm -rf temporary, rm -f temporary, and rm temporary each will remove the symbolic link but leave the directory ~/temp/. I have a script where the name of the symbolic link is easily derived, but the name of the linked directory is not. Is there a way to remove the directory by referencing the symbolic link, short of parsing the name of the directory from ls -od ~/temporary?
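
    Assuming GNU coreutils, readlink -f resolves a link to its target path, which avoids parsing ls output entirely. A sketch:

        # remove the target directory, then the now-dangling link
        rm -rf "$(readlink -f ~/temporary)"
        rm ~/temporary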

  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
    TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver, and so requires you to learn a new API. However, if you are building Java EE 6 applications then you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. Their wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) will show how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2.

    Before we dig into the code, here are the key concepts:

    - A POJO is mapped to a NoSQL data source using @NoSQL or the <no-sql> element in "persistence.xml".
    - A subset of JPQL and Criteria queries are supported, based upon the underlying data store.
    - Connection properties are defined in "persistence.xml".

    Now, let's take a look at the code...

    Download the latest EclipseLink 2.4 Nightly Bundle. There is an Installer, Source, and Bundle; make sure to download the Bundle link (20120410) and unzip. Download the GlassFish 3.1.2 zip and unzip.

    Install the EclipseLink 2.4 JARs in GlassFish. Remove the following JARs from "glassfish/modules":

        org.eclipse.persistence.antlr.jar
        org.eclipse.persistence.asm.jar
        org.eclipse.persistence.core.jar
        org.eclipse.persistence.jpa.jar
        org.eclipse.persistence.jpa.modelgen.jar
        org.eclipse.persistence.moxy.jar
        org.eclipse.persistence.oracle.jar

    Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules":

        org.eclipse.persistence.antlr_3.2.0.v201107111232.jar
        org.eclipse.persistence.asm_3.3.1.v201107111215.jar
        org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar
        org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar
        org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar

    Start MongoDB. Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as:

        sudo mkdir -p /data/db/
        sudo chown `id -u` /data/db

    Refer to the Quickstart for more details. Start MongoDB as:

        arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> ->./bin/mongod
        ./bin/mongod --help for help and startup options
        Mon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.local
        Mon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5
        Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebf
        Mon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
        Mon Apr  9 12:56:02 [initandlisten] options: {}
        Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journal
        Mon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery needed
        Mon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017
        Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017

    Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here.

    Create a Java EE 6 web app. Create a Java EE 6 Maven web app as:

        mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql -DarchetypeVersion=1.5 -DinteractiveMode=false

    Copy the model files from the checked-out workspace to the generated project as:

        cd javaee-nosql
        cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java

    Copy "persistence.xml":

        mkdir src/main/resources
        cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources

    Add the following dependencies:

        <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>org.eclipse.persistence.jpa</artifactId>
            <version>2.4.0-SNAPSHOT</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>org.eclipse.persistence.nosql</artifactId>
            <version>2.4.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.7.3</version>
        </dependency>

    The first one is for the latest EclipseLink APIs, the second one is for EclipseLink/NoSQL support, and the last one is the MongoDB Java driver. And add the following repository:

        <repositories>
            <repository>
                <id>EclipseLink Repo</id>
                <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
                <snapshots>
                    <enabled>true</enabled>
                </snapshots>
            </repository>
        </repositories>

    Copy "Test.java" to the generated project:

        mkdir src/main/java/example
        cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/

    This file contains the source code to CRUD the JPA entity to MongoDB. This sample is explained in detail on the EclipseLink wiki.

    Create a new servlet in the "example" directory as:

        package example;

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        /**
         * @author Arun Gupta
         */
        @WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})
        public class TestServlet extends HttpServlet {
            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                response.setContentType("text/html;charset=UTF-8");
                PrintWriter out = response.getWriter();
                try {
                    out.println("<html>");
                    out.println("<head>");
                    out.println("<title>Servlet TestServlet</title>");
                    out.println("</head>");
                    out.println("<body>");
                    out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>");
                    try {
                        Test.main(null);
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                    out.println("</body>");
                    out.println("</html>");
                } finally {
                    out.close();
                }
            }

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                processRequest(request, response);
            }

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                processRequest(request, response);
            }
        }

    Build the project and deploy it as:

        mvn clean package
        glassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war

    Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in the server.log:

        connecting(EISLogin(
            platform=> MongoPlatform
            user name=> ""
            MongoConnectionSpec()))
        ...
        Connected: User:
        Database: 2.7  Version: 2.7
        ...
        Executing MappedInteraction()
            spec => null
            properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT}
            input => [DatabaseRecord(
                CUSTOMER._id => 4F848E2BDA0670307E2A8FA4
                CUSTOMER.NAME => AMCE)]
        ...
        Data access result: [{TOTALCOST=757.0, ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0}, {DESCRIPTION=balls, LINENUMBER=2, COST=5.0}, {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0}, {DESCRIPTION=net, LINENUMBER=4, COST=2.0}, {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0}, {DESCRIPTION=handling, LINENUMBER=6, COST=55.0}, {DESCRIPTION=tax, LINENUMBER=7, COST=300.0}], SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=17 Jane St.}], VERSION=2, _id=4F848E2BDA0670307E2A8FA8, DESCRIPTION=Pingpong table, CUSTOMER__id=4F848E2BDA0670307E2A8FA7, BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}]

    You'll not see any output in the browser, just the output in the console, but the code can easily be modified to do so. Once again, the complete Maven project can be downloaded here.

    Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU?

  • Mount a LUKS partition at boot

    - by Adam Matan
    Hi, I have installed an Ubuntu machine with two encrypted LUKS partitions: one for / and one for /home. I've reinstalled the machine to upgrade to 10.04. Again, the / is installed using LUKS, and I'm able to mount the /home using:

        mkdir /media/home
        sudo cryptsetup luksOpen /dev/sda2 home
        sudo mount -t ext3 /dev/mapper/home /media/home

    The problem is that this device mapping doesn't exist at boot, so putting the appropriate line in fstab fails. How do I set up the encrypted partition to prompt for its passphrase and unlock the drive at boot?

    Thanks, Adam
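
    The standard mechanism is /etc/crypttab, which is processed at boot before fstab: it prompts for the passphrase and creates the mapping, and fstab then mounts it. A sketch for this layout:

        # /etc/crypttab:  <name>  <source>  <key file>  <options>
        # 'none' means: ask for the passphrase at boot
        home  /dev/sda2  none  luks

        # /etc/fstab: mount the opened mapping
        /dev/mapper/home  /home  ext3  defaults  0  2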
