Search Results

Search found 574 results on 23 pages for 'mkdir'.

  • Yet another Ant + JUnit classpath problem

    - by user337591
    Hi, I'm developing an Eclipse SWT application using Eclipse. There are also some JUnit 4 tests, which test some DAO's. But when I try to run the tests via an ant build, all of the tests fail, because the test classes aren't found. Google brought up about a million of people who all have the same problem, but none of their solutions seem to work for me -.- . These are the contents of my build.xml file: <property name="test.reports" value="./test/reports" /> <property name="classes" value="build" /> <path id="project.classpath"> <pathelement location="${classes}" /> </path> <target name="testreport"> <mkdir dir="${test.reports}" /> <junit fork="yes" printsummary="no" haltonfailure="no"> <batchtest fork="yes" todir="${test.reports}" > <fileset dir="${classes}"> <include name="**/Test*.class" /> </fileset> </batchtest> <formatter type="xml" /> <classpath refid="project.classpath" /> </junit> <junitreport todir="${test.reports}"> <fileset dir="${test.reports}"> <include name="TEST-*.xml" /> </fileset> <report todir="${test.reports}" /> </junitreport> </target> The test classes are in the build-directory together with the application classes, although they are in some subfolders according to their packages. Maybe this is important too: At first Ant complained that JUnit wasn't in its classpath, but since I put it there (with the eclipse configuration editor) it complains about JUnit being in its classpath twice. WARNING: multiple versions of ant detected in path for junit [junit] jar:file:C:/Users/as df/Documents/eclipse/plugins/org.apache.ant_1.7.1.v20090120-1145/lib/ant.jar!/org/apache/tools/ant/Project.class [junit] and jar:file:/C:/Users/as%20df/Documents/eclipse/plugins/org.apache.ant_1.7.1.v20090120-1145/lib/ant.jar!/org/apache/tools/ant/Project.class I've tried specifying each and every subdirectory, each and every class file, I've tried filesets and filelists, nothing seems to work. Thanks for your help, I've been sitting for hours on this thing now...
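
    A detail worth noticing in the quoted warning: the two "versions" of Ant are the same ant.jar, listed once with the space in "as df" raw and once URL-encoded as "as%20df", so the jar is effectively on the classpath twice because of the space in the user directory. A hedged way to sidestep the Eclipse classpath editor entirely is to hand JUnit to Ant on the command line with -lib (the jar path below is a placeholder, not from the original post):

        # run the target with JUnit supplied directly to Ant's own classpath
        ant -lib /path/to/junit-4.x.jar testreport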


  • Why does my Java application stop working after applying the "web look and feel" theme?

    - by Vasu
    I have developed an "Employee Management System" Java project. To improve the UI appearance I integrated the "web look and feel" into my application, and the theme is applied correctly. But here the problem arises: at first I ran the Java application without connecting to the Oracle database, and it started and worked perfectly. But when I connected the application to the Oracle database and ran it again, the application takes much longer to open and gets stuck. Code for applying the theme: try { WebLookAndFeel.install(); }catch(Exception ex){ ex.printStackTrace(); } Code for connecting to the database: if (con == null) { File sd = new File(""); File in = new File(sd.getAbsolutePath() + File.separator + "conf.properties"); File dir = new File(sd.getAbsolutePath() + File.separator + "conf.properties"); if (!dir.exists()) { // dir.mkdir(); dir.createNewFile(); Properties pro = new Properties(); pro.load(new FileInputStream(in)); pro.setProperty("driverclass", "oracle.jdbc.driver.OracleDriver"); pro.setProperty("url", "jdbc:oracle:thin:@192.168.1.1:1521:main"); pro.setProperty("username", "gb16"); pro.setProperty("passwd", "gb16"); try { FileOutputStream out = new FileOutputStream(in); pro.store(out, "Human Management System initialization properties"); out.flush(); out.close();} catch(Exception e) { e.printStackTrace(); } } else { // System.out.println("Already exists "); } Properties pro = new Properties(); pro.load(new FileInputStream(in)); Class.forName(pro.getProperty("driverclass")); con = DriverManager.getConnection(pro.getProperty("url"), pro.getProperty("username"), pro.getProperty("passwd")); st = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE); st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE); } else { return con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,ResultSet.CONCUR_UPDATABLE); } Without the theme, the application connected to the database works correctly. Please help me solve this issue. Thanks in advance.


  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a seperation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" &2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" &2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" &2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? 
# $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" &2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but it's not the case, so log4bash_filename steps into this if all the time which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have a some theories to explain why: in the original code, functions are declared and exported as readonly which makes them unable to modify global variable. I removed readonly declarations in the above sample, but probleme persists. Function calls are performed in $() which should make them run into seperated shell instances so variable modified are not exported to the main shell But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!
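
    The second theory is the decisive one: filename_caller invokes log4bash_filename inside $(...), and a command substitution always runs in a subshell, so the "builtin eval export $previousFilenameVar=$filename" only modifies the subshell's copy of the variable and is lost when the substitution exits. A minimal sketch of the effect:

        # variables assigned inside $() never reach the calling shell
        counter=0
        bump() { counter=$((counter + 1)); echo "$counter"; }
        out=$(bump)                       # bump runs in a subshell
        echo "out=$out counter=$counter"  # prints: out=1 counter=0

    A common workaround, assuming the logger's layout allows it, is to persist the state outside the process instead: have log4bash_filename write the previous filename to a small state file and read it back on the next call.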


  • Form validation with optional File Upload field callback

    - by MotiveKyle
    I have a form with some input fields and a file upload field in the same form. I am trying to include a callback into the form validation to check for file upload errors. Here is the controller for adding and the callback: public function add() { if ($this->ion_auth->logged_in()): //validate form input $this->form_validation->set_rules('title', 'title', 'trim|required|max_length[66]|min_length[2]'); // link url $this->form_validation->set_rules('link', 'link', 'trim|required|max_length[255]|min_length[2]'); // optional content $this->form_validation->set_rules('content', 'content', 'trim|min_length[2]'); $this->form_validation->set_rules('userfile', 'image', 'callback_validate_upload'); $this->form_validation->set_error_delimiters('<small class="error">', '</small>'); // if form was submitted, process form if ($this->form_validation->run()) { // add pin $pin_id = $this->pin_model->create(); $slug = strtolower(url_title($this->input->post('title'), TRUE)); // path to pin folder $file_path = './uploads/' . $pin_id . '/'; // if folder doesn't exist, create it if (!is_dir($file_path)) { mkdir($file_path); } // file upload config variables $config['upload_path'] = $file_path; $config['allowed_types'] = 'jpg|png'; $config['max_size'] = '2048'; $config['max_width'] = '1920'; $config['max_height'] = '1080'; $config['encrypt_name'] = TRUE; $this->load->library('upload', $config); // upload image file if ($this->upload->do_upload()) { $this->load->model('file_model'); $image_id = $this->file_model->insert_image_to_db($pin_id); $this->file_model->add_image_id_to_pin($pin_id, $image_id); } } // build page else: // User not logged in redirect("login", 'refresh'); endif; } The callback: function validate_upload() { if ($_FILES AND $_FILES['userfile']['name']): if ($this->upload->do_upload()): return true; else: $this->form_validation->set_message('validate_upload', $this->upload->display_errors()); return false; endif; else: return true; endif; } I am getting the error Fatal error: Call to a member function do_upload() on a non-object on line 92 when I try to run this. Line 92 is the if ($this->upload->do_upload()): line in the validate_upload callback. Am I going about this the right way? What's triggering this error?


  • I need to copy only selected files and folders in PHP

    - by OM The Eternity
    I am using the following code, in which I first take the difference of two folder structures and then copy the resulting difference over to the other folder. Here is the code: $source = '/var/www/html/copy1'; $mirror = '/var/www/html/copy2'; function scan_dir_recursive($dir, $rel = null) { $all_paths = array(); $new_paths = scandir($dir); foreach ($new_paths as $path) { if ($path == '.' || $path == '..') { continue; } if ($rel === null) { $path_with_rel = $path; } else { $path_with_rel = $rel . DIRECTORY_SEPARATOR . $path; } $full_path = $dir . DIRECTORY_SEPARATOR . $path; $all_paths[] = $path_with_rel; if (is_dir($full_path)) { $all_paths = array_merge( $all_paths, scan_dir_recursive($full_path, $path_with_rel) ); } } return $all_paths; } $diff_paths = array_diff( scan_dir_recursive($mirror), scan_dir_recursive($source) ); /*$diff_path = array_diff($mirror,$original);*/ echo "<pre>Difference ";print_r($diff_paths); foreach($diff_paths as $path) { echo $source1 = "var/www/html/copy2/".$path; echo "<br>"; $des = "var/www/html/copy1/".$path; copy_recursive_dirs($source1, $des); } function copy_recursive_dirs($dirsource, $dirdest) { $dir_handle=opendir($dirsource); mkdir($dirdest,0777); while(false!==($file=readdir($dir_handle))) {/*echo "<pre>"; print_r($file);*/ if($file!="." && $file!="..") { if(is_dir($dirsource.DIRECTORY_SEPARATOR.$file)) { //Recurse into the directory at the same level in the destination folder copy_recursive_dirs($dirsource.DIRECTORY_SEPARATOR.$file, $dirdest.DIRECTORY_SEPARATOR.$file); } else{ //Copy the file at the same level in the destination folder copy ($dirsource.DIRECTORY_SEPARATOR.$file, $dirdest.DIRECTORY_SEPARATOR.$file); } } } closedir($dir_handle); return true; } Whenever I execute the script I get the difference output, but nothing actually gets copied into the second folder as the code intends... Please help me rectify this.
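
    One thing that stands out in the quoted loop: $source and $mirror start with "/var/www/html/...", but the paths built inside the foreach start with "var/www/html/..." (no leading slash), so copy_recursive_dirs is handed relative paths that only resolve if the script happens to run from /. Independently of the PHP fix, if shelling out is an option, the same "copy only what copy1 is missing" operation is a single rsync call; a hedged equivalent under the same layout:

        # copy files and directories present in copy2 but absent from copy1
        rsync -av --ignore-existing /var/www/html/copy2/ /var/www/html/copy1/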


  • Extract tar files to a different location in Perl

    - by user314261
    Hi, I am a newbie in Perl. I am reading a directory holding some archive files and uncompressing the archives one by one. Everything seems to work, except that the files get uncompressed into the folder containing the main Perl module that runs the submodules; I want the archive to be extracted into the folder I specify. This is my code: sub ExtractFile { #Read the folder which was copied in the output path recursively and extract if any file is compressed my $dirpath = $_[0]; opendir(D, "$dirpath") || die "Can't open dir $dirpath: $!\n"; my @list = readdir(D); closedir(D); foreach my $f (@list) { print " \$f = $f"; if(-f $dirpath."/$f") { #print " File in directory $dirpath \n ";#is \$f = $f\n"; my($file_name, $file_dirname,$filetype)= fileparse($f,qr{\..*}); #print " \nThe file extension is $filetype"; #print " \nThe file name is is $file_name"; # If compressed file then extract the file if($filetype eq ".tar" or $filetype eq ".tzr.gz") { my $arch_file = $dirpath."/$f"; print "\n file to be extracted is $arch_file"; my $tar = Archive::Tar->new($arch_file); #$tar->extract() or die ("Cannot extract file $arch_file"); #mkdir($dirpath."/$file_name"); $tar->extract_file($arch_file,$dirpath."/$file_name" ) or die ("Cannot extract file $arch_file"); } } if(-d $dirpath."/$f") { if($f eq "." or $f eq "..") { next; } print " Directory\n";# is $f"; ExtractFile($dirpath."/$f"); } } } The method ExtractFile is called recursively to loop over all the archives. When using $tar->extract() it uncompresses into the folder from which this method is called. When I use $tar->extract_file($arch_file,$dirpath."/$file_name" ) I get an error: No such file in archive: '/home/fsang/dante/workspace/output/s.tar' at /home/fsang/dante/lib/Extraction.pm line 80 Please help; I have checked the path, and there is no issue with the input or output. It seems to be some usage of $tar->extract_file() that I am not aware of. Many thanks to anyone resolving this issue. Regards, Sakshi
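
    For what it's worth, Archive::Tar's extract_file() expects the name of a member inside the archive as its first argument, not the path of the .tar file itself, which is exactly what the "No such file in archive: '...s.tar'" message is complaining about. The shell equivalent of the intended operation ("unpack this archive into that directory") is:

        # extract an archive into an explicitly chosen directory
        mkdir -p /desired/output/dir
        tar -xf /path/to/archive.tar -C /desired/output/dir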


  • svnstat script

    - by Kyle Hodgson
    So I'm building out a shell script to check out all of our relevant svn repositories for analysis in svnstat. I've gotten all of this to work manually, now I'm writing up a bash script in cygwin on my Vista laptop, as I intend to move this to a Linux server at some point. Edit: I gave up on this and wrote a simple .bat script. I'll figure out the Linux deployment some other way. Edit: added the sleep 30 and svn log commands. I can tell now, with the svn log command, that it's not getting to the svn log ... this time, it did Applications, and ran the log, and then check out Database, and froze. I'll put the sleep 30 before and after the log this time. co2.sh #!/bin/bash function checkout { mkdir $1 svn checkout svn://dev-server/$1 $1 svn log --verbose --xml >> svn.log $1 sleep 30 } cd /cygdrive/c/Users/My\ User/Documents/Repos/wc checkout Applications checkout Database checkout WebServer/www.mysite.com checkout WebServer/anotherhost.mysite.com checkout WebServer/AnotherApp checkout WebServer/thirdhost.mysite.com checkout WebServer/fourthhost.mysite.com checkout WebServer/WebServices It works, for the most part - but for some reason it has a tendency to stop working after a few repositories, usually right after finishing a repository before going to the next one. When it fails, it will not recover on its own. I've tried commenting out the svn line, it goes in and creates all the directories just fine when I do that - so its not that. I'm looking for direction as well as direct advice. Cygwin has been very stable for me, but I did start using the native rxvt instead of "bash in a cmd.exe window" recently. I don't think that's the problem, as I've left top on remote systems running all night and rxvt didn't seem to mind. Also I haven't done any bash scripting in cygwin so I suppose this might not be recommended; though I can't see why not. I don't want all of WebServer, hence me only checking out certain folders like that. What I suspect is that something is hanging up the svn checkout. Any ideas here? Edit: this time when I hit ctrl+z to cancel out, I forgot I was on Windows and typed ps to see if the job was still running; and as you can see there are lots of svn processes hanging around... strange. 
Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [1]- Stopped bash co2.sh [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ kill %1 [1]- Stopped bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ [1]- Terminated bash co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ ps PID PPID PGID WINPID TTY UID STIME COMMAND 7872 1 7872 2340 0 1000 Jun 29 /usr/bin/svn 7752 1 6140 7828 1 1000 Jun 29 /usr/bin/svn 6192 1 5044 2192 1 1000 Jun 30 /usr/bin/svn 7292 1 7452 1796 1 1000 Jun 30 /usr/bin/svn 6236 1 7304 7468 2 1000 Jul 2 /usr/bin/svn 1564 1 5032 7144 2 1000 Jul 2 /usr/bin/svn 9072 1 3960 6276 3 1000 Jul 3 /usr/bin/svn 5876 1 5876 5876 con 1000 11:22:10 /usr/bin/rxvt 924 5876 924 10192 4 1000 11:22:10 /usr/bin/bash 7212 1 7332 5584 4 1000 13:17:54 /usr/bin/svn 9412 1 5480 8840 4 1000 15:38:16 /usr/bin/svn S 8128 924 8128 9452 4 1000 17:38:05 /usr/bin/bash 9132 8128 8128 8172 4 1000 17:43:25 /usr/bin/svn 3512 1 3512 3512 con 1000 17:43:50 /usr/bin/rxvt I 10200 3512 10200 6616 5 1000 17:43:51 /usr/bin/bash 9732 1 9732 9732 con 1000 17:45:55 /usr/bin/rxvt 3148 9732 3148 8976 6 1000 17:45:55 /usr/bin/bash 5856 3148 5856 876 6 1000 17:51:00 /usr/bin/vim 7736 924 7736 8036 4 1000 17:53:26 /usr/bin/ps Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ jobs [2]+ Stopped ./co2.sh Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ Here's an strace on the PID of the hung svn program, it's been like this for hours. Looks like its just doing nothing. I keep suspecting that some interruption on the server is causing this; does svn have a locking mechanism I'm not aware of? Kyle Hodgson@KyleHodgson-PC ~/winUser/Documents/Repos $ strace -p 7304 ********************************************** Program name: C:\cygwin\bin\svn.exe (pid 7304, ppid 6408) App version: 1005.25, api: 0.156 DLL version: 1005.25, api: 0.156 DLL build: 2008-06-12 19:34 OS version: Windows NT-6.0 Heap size: 402653184 Date/Time: 2009-07-06 18:20:11 **********************************************
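
    A hedged guess, given the pile of idle /usr/bin/svn processes: the client may be blocking on an interactive prompt (credentials, or a server-certificate question) that never becomes visible through the script. Running svn non-interactively turns that silent hang into an immediate, loggable failure; a sketch of the checkout function with that flag added:

        function checkout {
            mkdir -p "$1"
            svn checkout --non-interactive "svn://dev-server/$1" "$1" || return 1
            svn log --verbose --xml --non-interactive "$1" >> svn.log
            sleep 30
        }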


  • Does likewise-open > version 5.4 contain CIFS support?

    - by Ben Andken
    I'm trying to get the CIFS server working in likewise-open. I've found this set of instructions and everything seems to work until I try to connect ([url]http://www.likewise.com/resources/documentation_library/manuals/cifs/likewise-cifs-smb-file-server-guide.html#id2765992):[/url] 1.6. Build and Configure a Standalone Likewise-CIFS Server This section demonstrates how to build and configure a standalone instance of Likewise-CIFS from the command line. The following procedure assumes that you want to set up Likewise-CIFS on a Linux server to share files with Windows computers in a network without Active Directory. This procedure also assumes you know how to build Linux applications from their source code and then install them. Download Likewise-CIFS from its open source git location: $ git clone git://git.likewiseopen.org/ Download, build, and install the following tools. The tools listed are known to work, but earlier or later versions might work as well. Also, instead of downloading the tools, you might be able to install them on your platform with apt-get or some other means. http://ftp.gnu.org/gnu/autoconf/autoconf-2.65.tar.gz http://ftp.gnu.org/gnu/automake/automake-1.9.6.tar.gz http://ftp.gnu.org/gnu/libtool/libtool-2.2.6a.tar.gz http://pkgconfig.freedesktop.org/releases/pkg-config-0.23.tar.gz gcc --version 3.x or greater Build Likewise-CIFS: $ cd likewise-open $ build/mkcomp --debug all Install Likewise-CIFS: $ sudo su $ cd staging/install-root $ tar cf - . | (cd / && tar xvf -) Make sure Samba is not running: $ /etc/init.d/smb stop Make sure SELinux is either disabled or set to permissive. Make sure the ports required by Likewise are open. For a list of ports that Likewise uses, see the Likewise Open Installation and Administration Guide. Configure Likewise Open: $ /etc/init.d/lwsmd start $ for i in /etc/likewise/*.reg; do /opt/likewise/bin/lwregshell upgrade $i; done $ /etc/init.d/lwsmd stop $ /etc/init.d/lwsmd start $ /opt/likewise/bin/lwsm start srvsvc $ /opt/likewise/bin/domainjoin-cli configure --enable nsswitch Add a user account to the local Likewise provider database. In the following example, substitute the account name that you want for newuser. $ /opt/likewise/bin/lw-add-user --home /home/newuser --shell /bin/bash newuser Successfully added user newuser Enable the user and set the password: $ /opt/likewise/bin/lw-mod-user --enable-user --set-password newuser New Password: ********** Successfully modified user newuser Look up new user's identity as follows. Substitute the value from the command hostname -s for the hostname. Keep in mind that Likewise truncates a hostname longer than 15 characters to the first 15 characters of the string. % id hostname\\newuser uid=2000(HOSTNAME\newuser) gid=1800(HOSTNAME\Likewise Users) groups=1800(HOSTNAME\Likewise Users) context=system_u:system_r:unconfined_t:s0 Make a CIFS directory for the user: mkdir /lwcifs/newuser chown 2000:1800 /lwcifs/newuser From a Windows computer, map the Likewise-CIFS drive share: Computer->Map Network Drive... Folder: \\IP_hostname\c$ Click "Finish" Username: hostname\newuser Password: user_password The last step fails when I try to connect. I've tried with Windows XP Pro and Windows 7 Pro. The rest of the directions only appear to work for version 5.4 (the one that shipped with 10.04). For 12.04, version 6.1 is the only one available and it doesn't appear to have the srvsvc module mentioned in these instructions. Is CIFS support dropped in the 6.1 version of likewise-open?
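
    Before rebuilding from source, it may be worth asking the running service manager whether a CIFS/srvsvc service exists at all in the 6.1 packages; a hedged check, assuming 6.x keeps the lwsm binary in the same location:

        # list registered Likewise services and look for the file-server one
        /opt/likewise/bin/lwsm list | grep -i srv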


  • Huawei b153 limit of devices

    - by bdecaf
    I set up my home network to run entirely through this 3G wifi router. The problem is it only allows 5 devices to connect. That's not much, especially if a wifi printer and gaming consoles keep hogging these slots. On the other hand I don't see the point in blocking these devices: they are (or should be) not doing anything online, just talking internally within my network. The documentation I can find is surprisingly unhelpful and focuses on how to plug the device into a power socket. So what are my options? Notes: I have already been able to get a shell on the device using ssh. It's running some Busybox, but I cannot find how and where this limit is enforced/created. Notes 2: Specifically, my device is a 3WebCube - unfortunately not specifically marked with the Huawei model number. Successes so far. After enabling ssh in the options I can log in: ssh -T [email protected] [email protected]'s password: ------------------------------- -----Welcome to ATP Cli------ ------------------------------- Unfortunately, because of the -T, the tab key does not work for autocompletion, all entered commands are echoed, and there is no history with the arrow keys. ATP interface: this custom interface is not very useful: ATP>help help Welcome to ATP command line tool. If any question, please input "?" at the end of command. ATP>? ? cls debug help save ? exit ATP>save? save? Command failed. ATP>save ? save ? ATP>debug ? debug ? display set trace ? Shell: undocumented, but I somehow found on an auto-translated Chinese website that all you need to do is enter sh: ATP>sh sh BusyBox vv1.9.1 (2011-03-27 11:59:11 CST) built-in shell (ash) Enter 'help' for a list of built-in commands. # Builtin commands: # help Built-in commands: ------------------- . : alias bg break cd chdir command continue eval exec exit export false fg getopts hash help jobs kill let local pwd read readonly return set shift source times trap true type ulimit umask unalias unset wait It shows a standard Unix structure: # ls / var tmp proc linuxrc init etc bin usr sbin mnt lib html dev In /bin: # ls /bin zebra strace ppps ln echo cat wscd startbsp pppc klog ebtables busybox wlancmd sshd ping kill dns brctl web sntp netstat iwpriv dhcps auth usbdiagd sms mount iwcontrol dhcpc atserver upnp sleep mknod iptables date atcmd upg siproxd mkdir ipcheck cp at umount sh mini_upnpd ip console ash test_at rm mic igmpproxy cms telnetd ripd ls ethcmd cmgr swapdev ps log equipcmd cli In /sbin: # ls /sbin vconfig reboot insmod ifconfig arp route poweroff init halt Using tftp: after installing tftp on my desktop I was able to send files with tftp -s -l curcfg.xml 192.168.1.103 and to download onto the Huawei with tftp -g -r curcfg.xml 192.168.1.103 I think I'll need that, because I don't see any editor installed. Readout stuff (still playing around to see where I can get interesting info). For confirmation of the hardware: # cat /var/log/modem_hardware_name ^HWVER:"WL1B153M001"# # cat /var/log/modem_software_name 1096.11.03.02.107 # cat /var/log/product_name B153
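
    With shell access and tftp both working, one possible next step is to pull the live configuration and search it for whatever field stores the association limit; the key names grepped for below are guesses, not documented Huawei settings:

        # fetch the running config, then hunt for a station/association limit
        tftp -g -r curcfg.xml 192.168.1.103
        grep -i -E 'assoc|station|maxsta' curcfg.xml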


  • Sharing the same `ssh-agent` among multiple login sessions

    - by intuited
    Is there a convenient way to ensure that all logins from a given user (ie me) use the same ssh-agent? I hacked out a script to make this work most of the time, but I suspected all along that there was some way to do it that I had just missed. Additionally, since that time there have been amazing advances in computing technology, like for example this website. So the goal here is that whenever I log in to the box, regardless of whether it's via SSH, or in a graphical session started from gdm/kdm/etc, or at a console: if my username does not currently have an ssh-agent running, one is started, the environment variables exported, and ssh-add called. otherwise, the existing agent's coordinates are exported in the login session's environment variables. This facility is especially valuable when the box in question is used as a relay point when sshing into a third box. In this case it avoids having to type in the private key's passphrase every time you ssh in and then want to, for example, do git push or something. The script given below does this mostly reliably, although it botched recently when X crashed and I then started another graphical session. There might have been other screwiness going on in that instance. Here's my bad-is-good script. I source this from my .bashrc. # ssh-agent-procure.bash # v0.6.4 # ensures that all shells sourcing this file in profile/rc scripts use the same ssh-agent. # copyright me, now; licensed under the DWTFYWT license. mkdir -p "$HOME/etc/ssh"; function ssh-procure-launch-agent { eval `ssh-agent -s -a ~/etc/ssh/ssh-agent-socket`; ssh-add; } if [ ! $SSH_AGENT_PID ]; then if [ -e ~/etc/ssh/ssh-agent-socket ] ; then SSH_AGENT_PID=`ps -fC ssh-agent |grep 'etc/ssh/ssh-agent-socket' |sed -r 's/^\S+\s+(\S+).*$/\1/'`; if [[ $SSH_AGENT_PID =~ [0-9]+ ]]; then # in this case the agent has already been launched and we are just attaching to it. ##++ It should check that this pid is actually active & belongs to an ssh instance export SSH_AGENT_PID; SSH_AUTH_SOCK=~/etc/ssh/ssh-agent-socket; export SSH_AUTH_SOCK; else # in this case there is no agent running, so the socket file is left over from a graceless agent termination. rm ~/etc/ssh/ssh-agent-socket; ssh-procure-launch-agent; fi; else ssh-procure-launch-agent; fi; fi; Please tell me there's a better way to do this. Also please don't nitpick the inconsistencies/gaffes ( eg putting var stuff in etc ); I wrote this a while ago and have since learned many things.
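
    For comparison, a common compact variant of the same idea pins the agent to one fixed socket path and probes it with ssh-add -l instead of parsing ps output; exit status 2 specifically means "no agent answered on this socket". A sketch:

        export SSH_AUTH_SOCK="$HOME/etc/ssh/ssh-agent-socket"
        ssh-add -l >/dev/null 2>&1
        if [ $? -eq 2 ]; then            # 2 = could not contact an agent
            rm -f "$SSH_AUTH_SOCK"
            eval "$(ssh-agent -s -a "$SSH_AUTH_SOCK")" >/dev/null
            ssh-add
        fi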


  • Building nginx 1.0.4 on Amazon EC2 micro - perl and python problems

    - by digitaltoast
    I'd like to run nginx as a reverse proxy with apache2 on my EC2 micro instance. yum install nginx gives me nginx-0.8.53-1.2.amzn1.x86_64.rpm The current nginx is 1.0.4 I found and followed this guide: http://kdn2.info/2011/05/install-nginx-on-amazon-ec2/ It works fine up to and including "make". When I get to checkinstall --fstrans=no I get ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. test -d '/var/log/nginx' || mkdir -p '/var/log/nginx' ERROR: ld.so: object '/usr/lib/installwatch.so' from LD_PRELOAD cannot be preloaded: ignored. make[1]: Leaving directory `/root/src/nginx-1.0.4' ======================== Installation successful ========================== Copying documentation directory... ./ ./CHANGES ./LICENSE ./README cp: cannot stat `//var/tmp/gRWoVgIcdbmjfTjoVGBM/newfiles.tmp': No such file or directory Copying files to the temporary directory...OK Striping ELF binaries and libraries...OK Compressing man pages...OK Building file list...OK Building RPM package... FAILED! *** Failed to build the package ...and the logfile is full of: Building target platforms: x86_64 Building for target x86_64 Processing files: nginx-1.0.4-1.x86_64 error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr error: File not found: /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/usr/doc There IS /usr/src/rpm/BUILDROOT/nginx-1.0.4-1.x86_64/ but no /usr Following further down the page, it says: "If we want to use, for example, PHP 5.2 we can download PHP and Nginx compatible with Amazon Kernel(Xen Kernel) from the CentosALT Repository." So I install the two repositories, but when I yum install http://centos.alt.ru/pub/nginx/1.0/RPMS/x86_64/nginx-stable-1.0.4-1.el5.x86_64.rpm I get Error: Package: nginx-stable-1.0.4-1.el5.x86_64 (/nginx-stable-1.0.4-1.el5.x86_64) Requires: perl(:MODULE_COMPAT_5.8.8) You could try using --skip-broken to work around the problem but that doesn't fix it. When I do yum update, I get --> Finished Dependency Resolution Error: Package: python-distribute-0.6.19-10.1.x86_64 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 Error: Package: python-distribute-0.6.19-10.1.i586 (devel_languages_python) Requires: python < 2.5 Installed: 1:python-2.6-1.19.amzn1.noarch (@amzn-main) python = 1:2.6-1.19.amzn1 I've tried everything - yum clean all and various other suggestions found on other sites. If anyone has any suggestions or a known package of the current 1.04 nginx working on EC2 Micro (Linux ip-10-56-63-85 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux - which I think is RHEL 5?) then I'd be grateful. Incidentally, does this repolist look right? 
repo id repo name status CentALT CentALT Packages for Enterprise Linux 5 - x86_64 enabled: 112+157 amzn-main amzn-main-Base enabled: 2,706 amzn-main-debuginfo amzn-main-debuginfo disabled amzn-main-nosrc amzn-main-nosrc disabled amzn-updates amzn-updates-Base enabled: 328 amzn-updates-debuginfo amzn-updates-debuginfo disabled amzn-updates-nosrc amzn-updates-nosrc disabled devel_languages_python Python and Python Modules (SLE_10) enabled: 1,452+768 epel Extra Packages for Enterprise Linux 5 - x86_64 enabled: 5,892+604 epel-debuginfo Extra Packages for Enterprise Linux 5 - x86_64 - Debug disabled epel-source Extra Packages for Enterprise Linux 5 - x86_64 - Source disabled epel-testing Extra Packages for Enterprise Linux 5 - Testing - x86_64 disabled epel-testing-debuginfo Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Debug disabled epel-testing-source Extra Packages for Enterprise Linux 5 - Testing - x86_64 - Source disabled s3tools Tools for managing Amazon S3 - Simple Storage Service (RHEL_6) enabled: 2+1 repolist: 10,492
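
    Given that configure and make already succeed and only checkinstall's RPM packaging step fails, one hedged way out is to skip packaging altogether and install under a private prefix (the module flag below is illustrative):

        ./configure --prefix=/opt/nginx --with-http_ssl_module
        make
        sudo make install
        sudo /opt/nginx/sbin/nginx -t   # sanity-check the generated config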


  • Pecl install ssh2, make failed

    - by user28259
    Hi! I'm trying really hard since two hours to install ssh2 with pecl... But all I get is: /bin/sh /root/ssh2-0.11.0/libtool --mode=compile cc -I. -I/root/ssh2-0.11.0 -DPHP_ATOM_INC -I/root/ssh2-0.11.0/include -I/root/ssh2-0.11.0/main -I/root/ssh2-0.11.0 -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/ssh2-0.11.0/ssh2.c -o ssh2.lo mkdir .libs cc -I. -I/root/ssh2-0.11.0 -DPHP_ATOM_INC -I/root/ssh2-0.11.0/include -I/root/ssh2-0.11.0/main -I/root/ssh2-0.11.0 -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/ssh2-0.11.0/ssh2.c -fPIC -DPIC -o .libs/ssh2.o /root/ssh2-0.11.0/ssh2.c:52: error: duplicate 'static' /root/ssh2-0.11.0/ssh2.c: In function 'zif_ssh2_methods_negotiated': /root/ssh2-0.11.0/ssh2.c:503: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:504: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:508: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:509: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:510: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:511: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:516: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:517: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:518: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c:519: warning: passing argument 4 of 'add_assoc_string_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:360: note: expected 'char *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c: In function 'zif_ssh2_publickey_add': /root/ssh2-0.11.0/ssh2.c:1045: warning: passing argument 1 of '_efree' discards qualifiers from pointer target type /usr/include/php/Zend/zend_alloc.h:46: note: expected 'void *' but argument is of type 'const char *' /root/ssh2-0.11.0/ssh2.c: In function 
'zif_ssh2_publickey_list': /root/ssh2-0.11.0/ssh2.c:1104: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:361: note: expected 'char *' but argument is of type 'const unsigned char *' /root/ssh2-0.11.0/ssh2.c:1105: warning: passing argument 4 of 'add_assoc_stringl_ex' discards qualifiers from pointer target type /usr/include/php/Zend/zend_API.h:361: note: expected 'char *' but argument is of type 'const unsigned char *' make: *** [ssh2.lo] Error 1 I looked on google a lot, I found some patches which didn't worked at all. So if you think you could help me, go ahead! Thanks!
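
    The only hard error in that output is the "duplicate 'static'" at ssh2.c line 52; everything after it is a non-fatal warning. Assuming line 52 begins with a redundant static keyword (a guess about the 0.11.0 source, not a verified patch), a quick experiment is to drop it and rebuild:

        # assumption: ssh2.c line 52 starts with a stray 'static'
        sed -i '52s/^static //' /root/ssh2-0.11.0/ssh2.c
        cd /root/ssh2-0.11.0 && make clean && make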


  • Unable to connect to OpenVPN server

    - by Incognito
    I'm trying to get a working setup of OpenVPN on my VM and authenticate into it from a client. I'm not sure but it looks to me like it's socket related, as it's not set to LISTEN, and localhost seems wrong. I've never set up VPN before. # netstat -tulpn | grep vpn Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name udp 0 0 127.0.0.1:1194 0.0.0.0:* 24059/openvpn I don't think this is set up correctly. Here's some detail into what I've done. I have a VPS from MediaTemple: These are my interfaces before starting openvpn: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:39482 errors:0 dropped:0 overruns:0 frame:0 TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3237452 (3.2 MB) TX bytes:3237452 (3.2 MB) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:4885284 errors:0 dropped:0 overruns:0 frame:0 TX packets:4679884 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:835278537 (835.2 MB) TX bytes:1989289617 (1.9 GB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:205.[redacted] P-t-P:205.186.148.82 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 I've followed this guide on setting up a basic server and getting a .p12 file, however, I was receiving an error that stated /dev/net/tun was missing, so I created it mkdir -p /dev/net mknod /dev/net/tun c 10 200 chmod 600 /dev/net/tun This resolved the error preventing the service from launching, however, I am unable to connect. On the server I've set up the myserver.conf file (as per the tutorial) to indicate local 127.0.0.1 (I've also attempted with the public IP address, perhaps I don't understand what they mean by local IP?). The server launches without error, this is what the log looks like when it starts: Sun Apr 1 17:21:27 2012 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Mar 11 2011 Sun Apr 1 17:21:27 2012 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port. Sun Apr 1 17:21:27 2012 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Sun Apr 1 17:21:27 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted> Sun Apr 1 17:21:27 2012 TUN/TAP device tun0 opened Sun Apr 1 17:21:27 2012 /sbin/ifconfig tun0 10.8.0.1 pointopoint 10.8.0.2 mtu 1500 Sun Apr 1 17:21:27 2012 GID set to openvpn Sun Apr 1 17:21:27 2012 UID set to openvpn Sun Apr 1 17:21:27 2012 UDPv4 link local (bound): [AF_INET]127.0.0.1:1194 Sun Apr 1 17:21:27 2012 UDPv4 link remote: [undef] Sun Apr 1 17:21:27 2012 Initialization Sequence Completed This creates a tun0 interface that looks like this: tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) And the netstat command still indicates the state is not set to LISTEN. 
On the client-side I've installed the p12 certs onto two devices (one is an android tablet, the other is an Ubuntu desktop). I don't see port 1194 as open either. Both clients install the cert files and then ask me for the L2TP secret (which was set on the file), but then they oddly ask me for a username and a password, which I don't know where I could possibly get those from. I attempted all of my logins, and some whacky guesses that were frantically pulling at straws. If there's any more information I could provide let me know.
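
    The log line "UDPv4 link local (bound): [AF_INET]127.0.0.1:1194" confirms the suspicion: with "local 127.0.0.1" in myserver.conf the daemon binds only to loopback, so no external client can ever reach it (and because the socket is UDP, netstat will never show a LISTEN state for it in any case). A hedged fix, assuming the venet0:0 address from the ifconfig output is the reachable one:

        sudo sed -i 's/^local .*/local 205.186.148.82/' myserver.conf
        sudo /etc/init.d/openvpn restart
        netstat -ulpn | grep 1194   # should now show the public address

    Separately, a client that asks for an "L2TP secret" plus a username and password sounds like the operating system's built-in L2TP/IPsec VPN client, which cannot speak the OpenVPN protocol at all; connecting to this server needs actual OpenVPN client software with the certificates loaded into its own configuration.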


  • How to restore Linode to Vagrant VM?

    - by Iain Elder
    I'm trying to set up a Linux development environment so I can safely make changes to my website without breaking the live site. Linode hosts my live site. A simple solution would be to host my development server on Linode as well, but I want to avoid doubling my hosting costs. The cheapest way I see is to use Vagrant on my Windows workstation to host my development environment. After I attempt to restore the backup to Vagrant and reboot the VM, I can no longer ssh into the Vagrant host. It's probably because by restoring the backup I overwrite some special Vagrant configuration, but I'm not sure how to avoid that. How do I make this approach work? If my approach is fundamentally wrong, can you suggest an alternative? Creating the backup On the Linode I used these commands to create a compressed copy of the entire filesystem, while ignoring things that shouldn't be included in the backup: $ sudo rsync -ahvz --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/backup/*} /* /backup/2 $ sudo tar -czf /backup/2.gz /backup/2 The backup file is called 2.gz because this is thesecond backup. The first backup is called 1.gz. I use WinSCP to copy the backup file to my Windows workstation. Setting up the Vagrant host I need a Vagrant box that matches my Linode operating system (Ubuntu 12.04.3 LTS, kernel 3.9.3). I selected the closet match from vagrantbox.es: Ubuntu Server Precise 12.04.3 amd64 Kernel is ready for Docker (Docker not included) On my workstation I ran these commands to add the box and initialize and boot an instance: $ vagrant box add ubuntu-precise http://nitron-vagrant.s3-website-us-east-1.amazonaws.com/vagrant_ubuntu_12.04.3_amd64_virtualbox.box $ mkdir linode-test $ cd linode-test $ vagrant init ubuntu-precise $ vagrant up Now Vagrant is running a machine with SSH on port 2222. The operating system version is the same. The kernel version is 3.8.0. Sounds close enough. Restoring the backup With WinSCP I copied the backup file 2.gz to /home/vagrant/2.gz on the Vagrant box. With PuTTY I connected via ssh to my new Vagrant box: On the box move the backup to the filesystem root. $ sudo mv 2.gz / Extract the archive to the filesystem root: $ sudo tar -xvpz -f 2.gz -C / --strip-components=2 (I discovered I need to use strip components because all files in the archive have the prefix backup/2/. I'll fix this for the next backup.) After the tar command completes, I log out of the box. Testing the backup When I try to log in again, it doesn't let me log in as vagrant with a password any more. It does let me log in as iain, my user on the live Linode, with a password. That surprised me because I disabled password authentication on my live Linode. I figured that I have to restart the ssh service for the change to take effect. Instead of restarting just ssh, I chose to restart the whole system. Now I can't even get to the login screen. PuTTY says "connection refused" when I try to connect. What went wrong?
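
    A hedged reading of what went wrong: extracting the full Linode filesystem over / replaced /etc/passwd, /etc/shadow, the sshd configuration and the vagrant user's home (including its authorized_keys) with the Linode's versions, which is consistent with the box suddenly accepting the Linode's iain login, and restoring /etc/fstab plus the Linode's kernel modules would explain the failed boot. Excluding the guest's identity and boot plumbing during extraction (member names keep the backup/2/ prefix noted above) avoids that:

        sudo tar -xvpz -f 2.gz -C / --strip-components=2 \
            --exclude='backup/2/etc/fstab' \
            --exclude='backup/2/etc/passwd' --exclude='backup/2/etc/shadow' \
            --exclude='backup/2/etc/group' --exclude='backup/2/etc/ssh' \
            --exclude='backup/2/home/vagrant' \
            --exclude='backup/2/boot' --exclude='backup/2/lib/modules'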


  • bash: per-command history. How does it work?

    - by romainl
    OK. I have an old G5 running Leopard and a Dell running Ubuntu 10.04 at home and a MacPro also running Leopard at work. I use Terminal.app/bash a lot. On my home G5 it exhibits a nice feature: using ↑ to navigate history I get the last command starting with the few letters that I've typed. This is what I mean (| represents the caret): $ ssh user@server $ vim /some/file/just/to/populate/history $ ss| So, I've typed the two first letters of "ssh", hitting ↑ results in this: $ ssh user@server instead of this, which is the behaviour I get everywhere else: $ vim /some/file/just/to/populate/history If I keep on hitting ↑ or ↓, I can navigate through the history of ssh like this: $ ssh otheruser@otherserver $ ssh user@server $ ssh yetanotheruser@yetanotherserver It works the same for any command like cat, vim or whatever. That's really cool. Except that I have no idea how to mimic this behaviour on my other machines. Here is my .profile: export PATH=/Developer/SDKs/flex_sdk_3.4/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/sw/bin:/sw/sbin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:$HOME/Applications/bin:/usr/X11R6/bin export MANPATH=/usr/local/share/man:/usr/local/man:opt/local/man:sw/share/man export INFO=/usr/local/share/info export PERL5LIB=/opt/local/lib/perl5 export PYTHONPATH=/opt/local/bin/python2.7 export EDITOR=/opt/local/bin/vim export VISUAL=/opt/local/bin/vim export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home export TERM=xterm-color export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32' export CLICOLOR=1 export LS_COLORS='no=00:fi=00:di=01;34:ln=target:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.deb=00;31:*.rpm=00;31:*.TAR=00;31:*.TGZ=00;31:*.ARJ=00;31:*.TAZ=00;31:*.LZH=00;31:*.ZIP=00;31:*.Z=00;31:*.Z=00;31:*.GZ=00;31:*.BZ2=00;31:*.DEB=00;31:*.RPM=00;31:*.jpg=00;35:*.png=00;35:*.gif=00;35:*.bmp=00;35:*.ppm=00;35:*.tga=00;35:*.xbm=00;35:*.xpm=00;35:*.tif=00;35:*.png=00;35:*.fli=00;35:*.gl=00;35:*.dl=00;35:*.psd=00;35:*.JPG=00;35:*.PNG=00;35:*.GIF=00;35:*.BMP=00;35:*.PPM=00;35:*.TGA=00;35:*.XBM=00;35:*.XPM=00;35:*.TIF=00;35:*.PNG=00;35:*.FLI=00;35:*.GL=00;35:*.DL=00;35:*.PSD=00;35:*.mpg=00;36:*.avi=00;36:*.mov=00;36:*.flv=00;36:*.divx=00;36:*.qt=00;36:*.mp4=00;36:*.m4v=00;36:*.MPG=00;36:*.AVI=00;36:*.MOV=00;36:*.FLV=00;36:*.DIVX=00;36:*.QT=00;36:*.MP4=00;36:*.M4V=00;36:*.txt=00;32:*.rtf=00;32:*.doc=00;32:*.odf=00;32:*.rtfd=00;32:*.html=00;32:*.css=00;32:*.js=00;32:*.php=00;32:*.xhtml=00;32:*.TXT=00;32:*.RTF=00;32:*.DOC=00;32:*.ODF=00;32:*.RTFD=00;32:*.HTML=00;32:*.CSS=00;32:*.JS=00;32:*.PHP=00;32:*.XHTML=00;32:' export LC_ALL=C export LANG=C stty cs8 -istrip -parenb bind 'set convert-meta off' bind 'set meta-flag on' bind 'set output-meta on' alias ip='curl http://www.whatismyip.org | pbcopy' alias ls='ls -FhLlGp' alias la='ls -AFhLlGp' alias couleurs='$HOME/Applications/bin/colors2.sh' alias td='$HOME/Applications/bin/todo.sh' alias scale='$HOME/Applications/bin/scale.sh' alias stree='$HOME/Applications/bin/tree' alias envoi='$HOME/Applications/bin/envoi.sh' alias unfoo='$HOME/Applications/bin/unfoo' alias up='cd ..'
alias size='du -sh' alias lsvn='svn list -vR' alias jsc='/System/Library/Frameworks/JavaScriptCore.framework/Versions/A/Resources/jsc' alias asl='sudo rm -f /private/var/log/asl/*.asl' alias trace='tail -f $HOME/Library/Preferences/Macromedia/Flash\ Player/Logs/flashlog.txt' alias redis='redis-server /opt/local/etc/redis.conf' source /Users/johncoltrane/Applications/bin/git-completion.sh export GIT_PS1_SHOWUNTRACKEDFILES=1 export GIT_PS1_SHOWUPSTREAM="verbose git" export GIT_PS1_SHOWDIRTYSTATE=1 export PS1='\n\[\033[32m\]\w\[\033[0m\] $(__git_ps1 "[%s]")\n\[\033[1;31m\]\[\033[31m\]\u\[\033[0m\] $ \[\033[0m\]' mkcd () { mkdir -p "$*" cd "$*" } function cdl { cd $1 la } n() { $EDITOR ~/Dropbox/nv/"$*".txt } nls () { ls -c ~/Dropbox/nv/ | grep "$*" } copy(){ curl -s -F 'sprunge=<-' http://sprunge.us | pbcopy } if [ -f /opt/local/etc/profile.d/cdargs-bash.sh ]; then source /opt/local/etc/profile.d/cdargs-bash.sh fi if [ -f /opt/local/etc/bash_completion ]; then . /opt/local/etc/bash_completion fi Any idea?
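
    The G5's behavior is readline's prefix history search; it isn't produced by anything in this .profile, so it is presumably wired up in that machine's /etc/inputrc. Binding the arrow keys to it explicitly reproduces the feature on any box:

        # in ~/.bashrc (or put the quoted parts in ~/.inputrc)
        bind '"\e[A": history-search-backward'   # Up: previous match for typed prefix
        bind '"\e[B": history-search-forward'    # Down: next match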


  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to setup an open v-switch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM and I want to create a port mirror inside of the VM using these there interfaces and open v-switch. There are three Host-Only Adapters: eth0, eth1, eth2. The idea is that three other VM's will be connected to these adapters. One of these VM's will stream UDP video to eth0 and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VM's connected to eth1 and eth2 will get the same video stream. I performed the following steps to install open v-switch: $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential $ apt-get install build-essential autoconf automake pkg-config $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ tar xf http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ cd http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r` $ ./boot.sh $ ./configure - -with-linux=/lib/modules/`uname -r`/build $ make $ sudo make install After installation I configured as follows: $ insmod datapath/linux/openvswitch.ko $ sudo touch /usr/local/etc/ovs-vswitchd.conf $ mkdir -p /usr/local/etc/openvswitch $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db Then I started the server: $ ovsdb-server /usr/local/etc/openvswitch/conf.db \ --remote=punix:/usr/local/var/run/openvswitch/db.sock \ --remote=db:Open_vSwitch,manager_options \ --private-key=db:SSL,private_key \ --certificate=db:SSL,certificate \ --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file $ ovs-vsctl –no-wait init (run only once) $ ovs-vswitchd --pidfile --detach The above steps I got from this tutorial and it all worked fine. I then proceeded to add a port mirror based on the open v-switch documentation under Port Mirroring. 
I successfully completed the following commands: $ ovs-vsctl add-br br0 $ ovs-vsctl add-port br0 eth0 $ ovs-vsctl add-port br0 eth1 $ ovs-vsctl add-port br0 eth2 $ ifconfig eth0 promisc up $ ifconfig eth1 promisc up $ ifconfig eth2 promisc up At this point when I run ovs-vsctl show I get the following: 75bda8c2-b870-438b-9115-e36288ea1cd8 Bridge "br0" Port "br0" Interface "br0" type: internal Port "eth0" Interface "eth0" Port "eth2" Interface "eth2" Port "eth1" Interface "eth1" And when I run ifconfig I get the following: eth0 Link encap:Ethernet HWaddr 08:00:27:9f:51:ca inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth1 Link encap:Ethernet HWaddr 08:00:27:53:02:d4 inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth2 Link encap:Ethernet HWaddr 08:00:27:cb:a5:93 inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:17 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1494 (1.4 KB) TX bytes:468 (468.0 B) eth3 Link encap:Ethernet HWaddr 08:00:27:df:bb:d8 inet addr:192.168.1.139 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2211 errors:0 dropped:0 overruns:0 frame:0 TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:182987 (182.9 KB) TX bytes:125441 (125.4 KB) NOTE: I use eth3 as a bridge adapter for SSH'ing into the VM. So now, I think I've done everything correctly but when I try to create the bridge using the following command: $ ovs-vsctl -- set Bridge br0 mirrors=@m -- --id=@eth0 get Port eth0 -- --id=@eth1 get Port eth1 -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2 I get the following error: ovs-vsctl: "eth0" is not a valid UUID I don't understand why it's not able to find the interfaces?
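
    The database columns involved only accept UUIDs, or @names captured earlier in the same transaction with "--id=@x get Port ...", never plain interface names; and output-port holds a single port, so mirroring to both eth1 and eth2 takes two mirrors. A reworked version of the command under those rules:

        ovs-vsctl -- --id=@p0 get Port eth0 \
                  -- --id=@p1 get Port eth1 \
                  -- --id=@m create Mirror name=app1Mirror \
                       select-src-port=@p0 select-dst-port=@p0 output-port=@p1 \
                  -- set Bridge br0 mirrors=@m

    A second Mirror record built the same way against eth2 can then be appended to the bridge's mirrors set.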


  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set a samba server that seems to work, all shares are seemingly exported as readonly, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectiveley. I noted that domain accounts did NOT include a domain and they are in german (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Adminstrator user with the password taken from AD. 
Now I run samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 #IP des Domain Controllers os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user, I want to see somebody WRITING to that directory before I start fiddling with the authentication settings. (Who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED . Accessing it from windows shows ACLs that look correct (The user may write) - but it does not work, I can only read files not write. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my what am I overlooking? What am I to blind to see?
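
    Two hedged checks that would separate Samba from the filesystem here: ask testparm what the effective flags on the share are (a stray "read only = yes" surviving somewhere, or an unreloaded config, would show up immediately), and try writing to the export as the mapped user with Samba out of the loop entirely (user name taken from the getfacl output):

        testparm -s 2>/dev/null | grep -A8 '\[export\]'
        su - administrator -c 'touch /mnt/sdc1/export/write-test'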

    Read the article

  • How can I make subversion reset the stored passwords/users and remember my authentication credential

    - by NicDumZ
Hello folks! Background: I used to have everything working just fine on my fresh install:

$ svn co https://domain:443/ test1
Error validating server certificate for 'https://domain:443':
- The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
Certificate information:
- Hostname: **REMOVED**
- Valid: **REMOVED**
- Issuer: **REMOVED**
- Fingerprint: **checked with issuer and REMOVED**
(R)eject, accept (t)emporarily or accept (p)ermanently? p
Authentication realm: <https://domain:443> Subversion repository
Password for 'nicdumz-machine-hostname':
Authentication realm: <https://domain:443> Subversion repository
Username: nicdumz
Password for 'nicdumz':
# proceeds to checkout correctly
$ svn co https://domain:443/ test2
# checks out nicely, without asking for my password.

At some point I needed to commit stuff using a different account. So I did that:

$ svn ci --username other.user
Authentication realm: <https://domain:443> Subversion repository
Password for 'other.user':
# works fine

But since then, every time I want to commit as 'nicdumz' (the default user; all repos have been checked out with that user), it prompts me for my password:

$ svn ci
Authentication realm: <https://domain:443> Subversion repository
Password for 'nicdumz':

Hey, come on, why :) The same happens if I want a fresh checkout, since read access is also protected. So I tried fixing the issue by myself. I read around that ~/.subversion/auth was storing credentials, so I moved it out of the way:

$ cd ~/.subversion
$ mv auth oldauth
$ mkdir auth

It seemed to work at first, because svn had forgotten about certificate validation:

$ svn co https://domain:443/ test3
Error validating server certificate for 'https://domain:443':
- The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
Certificate information:
- Hostname: **REMOVED**
- Valid: **REMOVED**
- Issuer: **REMOVED**
- Fingerprint: **checked with issuer and REMOVED**
(R)eject, accept (t)emporarily or accept (p)ermanently? p
Authentication realm: <https://domain:443> Subversion repository
Password for 'nicdumz-machine-hostname':
Authentication realm: <https://domain:443> Subversion repository
Username: nicdumz
Password for 'nicdumz':
# proceeds to checkout correctly
$ svn up
Authentication realm: <https://domain:443> Subversion repository
Password for 'nicdumz':

What? How is this happening? If you have suggestions on how to investigate this behaviour further, I am very interested. If I'm correct, there is no way to do a verbose svn up or anything of the like, so I'm not sure where to start. Oh, and for what it's worth:

$ svn --version
svn, version 1.6.6 (r40053)
compiled Oct 26 2009, 06:19:08
Copyright (C) 2000-2009 CollabNet.
Subversion is open source software, see http://subversion.tigris.org/
This product includes software developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
* ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
- handles 'http' scheme
- handles 'https' scheme
* ra_svn : Module for accessing a repository using the svn network protocol.
- with Cyrus SASL authentication
- handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
- handles 'file' scheme
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
- handles 'http' scheme
- handles 'https' scheme
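For reference, Subversion keeps cached credentials one file per realm under ~/.subversion/auth/svn.simple, and each file records the realm string and the username it belongs to. A sketch for clearing just the stale entry instead of the whole auth directory (the hash filename is a placeholder; use whatever grep reports on your machine):

# find the cache file(s) for this realm
grep -l "domain:443" ~/.subversion/auth/svn.simple/*
# inspect which username a file stores before deleting it
cat ~/.subversion/auth/svn.simple/<hash-reported-above>
rm ~/.subversion/auth/svn.simple/<hash-reported-above>

The next svn command against that realm will prompt once and re-cache the credentials. It is also worth confirming that store-passwords and store-auth-creds have not been set to no in ~/.subversion/config or ~/.subversion/servers, since either setting would make svn prompt on every operation.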

    Read the article

  • Nvidia drivers don't work with mainline kernel

    - by dutchie
I want to try some of the new features in the btrfs filesystem, and to do that I need to use a newer kernel than is included in Ubuntu 12.04. To do that, I have installed linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb, linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb, and linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb from the mainline kernel download here. However, on rebooting into the 3.4 kernel, my desktop is stuck at a very low resolution and I cannot increase it to the full resolution. This did happen when I first installed, but a simple install of the nvidia-current package got everything working nicely with my GTX570 card. There appeared to be some DKMS errors when I installed the kernel, and they indicated I should look at /var/lib/dkms/nvidia-current/295.40/build/make.log:

josh@sirius:~/Downloads$ sudo dpkg -i linux-*.deb
Selecting previously unselected package linux-headers-3.4.0-030400.
(Reading database ... 309400 files and directories currently installed.)
Unpacking linux-headers-3.4.0-030400 (from linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb) ...
Selecting previously unselected package linux-headers-3.4.0-030400-generic.
Unpacking linux-headers-3.4.0-030400-generic (from linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ...
Selecting previously unselected package linux-image-3.4.0-030400-generic.
Unpacking linux-image-3.4.0-030400-generic (from linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ...
Done.
Setting up linux-headers-3.4.0-030400 (3.4.0-030400.201205210521) ...
Setting up linux-headers-3.4.0-030400-generic (3.4.0-030400.201205210521) ...
Examining /etc/kernel/header_postinst.d.
run-parts: executing /etc/kernel/header_postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported
Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64)
Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information.
Setting up linux-image-3.4.0-030400-generic (3.4.0-030400.201205210521) ...
Running depmod.
update-initramfs: deferring update (hook will be called later)
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported
Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64)
Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
update-initramfs: Generating /boot/initrd.img-3.4.0-030400-generic
run-parts: executing /etc/kernel/postinst.d/pm-utils 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
run-parts: executing /etc/kernel/postinst.d/update-notifier 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.4.0-030400-generic
Found initrd image: /boot/initrd.img-3.4.0-030400-generic
Found linux image: /boot/vmlinuz-3.2.0-24-generic
Found initrd image: /boot/initrd.img-3.2.0-24-generic
Found memtest86+ image: /memtest86+.bin
Found Ubuntu 12.04 LTS (12.04) on /dev/sda1
Found Windows 7 (loader) on /dev/sda2
Found Windows 7 (loader) on /dev/sda3
done

/var/lib/dkms/nvidia-current/295.40/build/make.log:
DKMS make.log for nvidia-current-295.40 for kernel 3.4.0-030400-generic (x86_64)
Thu Jun 7 00:58:39 BST 2012
NVIDIA: calling KBUILD...
test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \
echo; \
echo " ERROR: Kernel configuration is invalid."; \
echo " include/generated/autoconf.h or include/config/auto.conf are missing."; \
echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \
echo; \
/bin/false)
mkdir -p /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions ; rm -f /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions/*
make -f scripts/Makefile.build obj=/var/lib/dkms/nvidia-current/295.40/build
cc -Wp,-MD,/var/lib/dkms/nvidia-current/295.40/build/.nv.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include -Iarch/x86/include/generated -Iinclude -include /usr/src/linux-headers-3.4.0-030400-generic/include/linux/kconfig.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=1024 -Wno-unused-but-set-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -pg -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -I/var/lib/dkms/nvidia-current/295.40/build -Wall -MD -Wsign-compare -Wno-cast-qual -Wno-error -D__KERNEL__ -DMODULE -DNVRM -DNV_VERSION_STRING=\"295.40\" -Wno-unused-function -Wuninitialized -mno-red-zone -mcmodel=kernel -UDEBUG -U_DEBUG -DNDEBUG -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(nv)" -D"KBUILD_MODNAME=KBUILD_STR(nvidia)" -c -o /var/lib/dkms/nvidia-current/295.40/build/.tmp_nv.o /var/lib/dkms/nvidia-current/295.40/build/nv.c
In file included from include/linux/kernel.h:19:0,
 from include/linux/sched.h:55,
 from include/linux/utsname.h:35,
 from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:38,
 from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:
include/linux/bitops.h: In function ‘hweight_long’:
include/linux/bitops.h:66:41: warning: signed and unsigned type in conditional expression [-Wsign-compare]
In file included from /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess.h:577:0,
 from include/linux/poll.h:14,
 from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:97,
 from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:
/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h: In function ‘copy_from_user’:
/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h:53:6: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
In file included from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13:0:
/var/lib/dkms/nvidia-current/295.40/build/nv-linux.h: At top level:
/var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:114:75: fatal error: asm/system.h: No such file or directory
compilation terminated.
make[3]: *** [/var/lib/dkms/nvidia-current/295.40/build/nv.o] Error 1
make[2]: *** [_module_/var/lib/dkms/nvidia-current/295.40/build] Error 2
NVIDIA: left KBUILD.
nvidia.ko failed to build!
make[1]: *** [module] Error 1
make: *** [module] Error 2
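The root cause is the last error in make.log: nv-linux.h line 114 includes asm/system.h, a header the 3.4 kernel series removed, so the 295.40 driver cannot compile against this mainline kernel no matter how DKMS is invoked. Until a driver release that supports 3.4 is available, one stopgap people have used is to guard the include by kernel version; this is a sketch that edits the DKMS copy of the source, and other 3.4 breakage may still surface afterwards:

/* in /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h, near line 114 */
#include <linux/version.h>
#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 4, 0)
#include <asm/system.h>   /* removed from mainline in 3.4 */
#endif

Then rebuild with sudo dkms build -m nvidia-current -v 295.40 -k 3.4.0-030400-generic. A newer NVIDIA release with 3.4 support is the safer route.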

    Read the article

  • Video Recording Not Working in ICS

    - by Nirav Ranpara
I have implemented code to record video on an Android phone. This code works on 2.2 and 2.3, but when I checked on ICS the code does not work. Why not? Here I have posted the code and XML file.

videorecord.java

import java.io.File; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.Context; import android.content.DialogInterface; import android.content.Intent; import android.content.SharedPreferences; import android.hardware.Camera; import android.media.CamcorderProfile; import android.media.MediaRecorder; import android.os.Bundle; import android.os.CountDownTimer; import android.os.Environment; import android.util.Log; import android.view.Display; import android.view.KeyEvent; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.View; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.Toast; public class videorecord extends Activity{ SharedPreferences.Editor pre; String filename; CountDownTimer t; private Camera myCamera; private MyCameraSurfaceView myCameraSurfaceView; private MediaRecorder mediaRecorder; Integer cnt=0; LinearLayout myButton; TextView myButton1; SurfaceHolder surfaceHolder; boolean recording; private TextView txtcount; private ImageView btnplay; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); recording = false; setContentView(R.layout.videorecord); init(); myCamera = getCameraInstance(); if(myCamera == null){ } myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera); FrameLayout myCameraPreview = (FrameLayout)findViewById(R.id.videoview); Display display = getWindowManager().getDefaultDisplay(); int width = display.getWidth(); int height = display.getHeight(); myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height-60)); myCameraPreview.addView(myCameraSurfaceView); myButton = (LinearLayout)findViewById(R.id.mybutton); btnplay.setOnClickListener(myButtonOnClickListener); } private void init() { txtcount = (TextView) findViewById(R.id.txtcounter); //myButton1 = (TextView) findViewById(R.id.mybutton1); btnplay = (ImageView)findViewById(R.id.btnplay); t = new CountDownTimer( Long.MAX_VALUE , 1000) { @Override public void onTick(long millisUntilFinished) { cnt++; String time = new Integer(cnt).toString(); long millis = cnt; int seconds = (int) (millis / 60); int minutes = seconds / 60; seconds = seconds % 60; txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds,millis)); } @Override public void onFinish() { } }; } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { if ((keyCode == KeyEvent.KEYCODE_BACK)) { if(recording) { new AlertDialog.Builder(videorecord.this).setTitle("Do you want to save Video ?") .setPositiveButton("OK", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { filename(); //finish(); } }).setNegativeButton("Cancle", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { // TODO Auto-generated method stub } }).show(); } else { if ((keyCode == KeyEvent.KEYCODE_BACK)) { //Intent homeIntent= new Intent(Intent.ACTION_MAIN); //homeIntent.addCategory(Intent.CATEGORY_HOME); //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); //startActivity(homeIntent); //this.finishActivity(1); finish(); } //moveTaskToBack(true); // finish(); return
super.onKeyDown(keyCode, event); } } else { // Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show(); android.os.Process.killProcess(android.os.Process.myPid()) ; } return super.onKeyDown(keyCode, event); } ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener(){ public void onClick(View v) { if(recording){ Log.e("Record error", "error in recording ."); mediaRecorder.stop(); t.cancel(); filename(); releaseMediaRecorder(); }else{ releaseCamera(); Log.e("Record Stop error", "error in recording ."); // if(!prepareMediaRecorder()){ prepareMediaRecorder(); finish(); } mediaRecorder.start(); recording = true; // myButton1.setText("STOP Recording"); // btnplay.setImageResource(android.R.drawable.ic_media_pause); btnplay.setImageResource(R.drawable.stoprec); t.start(); } }}; private Camera getCameraInstance(){ Camera c = null; try { c = Camera.open(); } catch (Exception e){ } return c; } private void filename() { AlertDialog.Builder alert = new AlertDialog.Builder(this); alert.setTitle("Save Video"); alert.setMessage("Enter File Name"); final EditText input = new EditText(this); alert.setView(input); alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { if(input.getText().length()>=1) { filename = input.getText().toString(); File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); File from = new File(sdcard,"null.mp4"); File to = new File(sdcard,filename+".mp4"); from.renameTo(to); SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE); pre = sp.edit(); pre.clear(); pre.commit(); pre.putString("lastvideo", filename+".mp4"); pre.commit(); //btnplay.setImageResource(android.R.drawable.ic_media_play); btnplay.setImageResource(R.drawable.startrec); // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class).setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); startActivity(myIntent); } else { filename(); } } }); alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int whichButton) { // Intent intent = new Intent(videorecord.this,StopVidoWatch_Activity.class); // startActivity(intent); File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4"); //boolean deleted = file.delete(); file.delete(); finish(); } }); alert.show(); } private boolean prepareMediaRecorder(){ myCamera = getCameraInstance(); mediaRecorder = new MediaRecorder(); myCamera.unlock(); mediaRecorder.setCamera(myCamera); mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER); mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH)); File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord"); boolean success = false; if (!folder.exists()) { success = folder.mkdir(); } if (!success) { } else { } mediaRecorder.setOutputFile("/sdcard/VideoRecord/"+filename+".mp4"); mediaRecorder.setMaxDuration(60000); mediaRecorder.setMaxFileSize(5000000); Display display = getWindowManager().getDefaultDisplay(); int width = display.getHeight(); int height = display.getWidth(); String s = new String(); s= s.valueOf(width); String s1 = new String(); s1= s1.valueOf(height); // Toast.makeText(videorecord.this, "Width : " + s , 
Toast.LENGTH_LONG).show(); // Toast.makeText(videorecord.this, "Height : " + s1 , Toast.LENGTH_LONG).show(); mediaRecorder.setVideoSize(height, width); mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface()); try { mediaRecorder.prepare(); } catch (IllegalStateException e) { releaseMediaRecorder(); return false; } catch (IOException e) { releaseMediaRecorder(); return false; } return true; } @Override protected void onPause() { super.onPause(); releaseMediaRecorder(); releaseCamera(); } private void releaseMediaRecorder() { if (mediaRecorder != null) { mediaRecorder.reset(); mediaRecorder.release(); mediaRecorder = null; myCamera.lock(); } } private void releaseCamera(){ if (myCamera != null){ myCamera.release(); myCamera = null; } } public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback{ private SurfaceHolder mHolder; private Camera mCamera; public MyCameraSurfaceView(Context context, Camera camera) { super(context); mCamera = camera; mHolder = getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) { if (mHolder.getSurface() == null){ return; } try { mCamera.stopPreview(); } catch (Exception e){ } try { mCamera.setPreviewDisplay(mHolder); mCamera.startPreview(); } catch (Exception e){ } } public void surfaceCreated(SurfaceHolder holder) { try { mCamera.setPreviewDisplay(holder); mCamera.startPreview(); } catch (IOException e) { } } public void surfaceDestroyed(SurfaceHolder holder) { } } } videorecord.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:layout_width="fill_parent" android:layout_height="fill_parent" > <FrameLayout android:id="@+id/videoview" android:layout_width="fill_parent" android:layout_height="fill_parent"></FrameLayout> <LinearLayout android:id="@+id/mybutton" android:layout_width="fill_parent" android:layout_marginBottom="0dip" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_weight="0" > <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1" android:layout_height="wrap_content" android:layout_width="wrap_content" style="@style/savestyle" android:layout_weight="1" android:gravity="left" > </TextView> --> <ImageView android:layout_height="wrap_content" android:id="@+id/btnplay" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" android:layout_width="wrap_content" android:src="@drawable/startrec" /> </LinearLayout> <TextView android:text="00:00:00" android:id="@+id/txtcounter" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="right|bottom" android:padding="5dip" android:background="#A0000000" android:textColor="#ffffffff" /> </FrameLayout> <RelativeLayout android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@color/bgcolor" > <LinearLayout android:layout_above="@+id/mybutton" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent" > </LinearLayout> </RelativeLayout> </LinearLayout>
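Two things in prepareMediaRecorder() are worth checking on 4.0: it calls setVideoSize(height, width) after setProfile(), overriding the profile with a swapped raw display size the encoder may not accept, and it builds the output path from filename while that field is still null (hence the null.mp4 rename trick). Gingerbread tolerates both; ICS is much stricter about MediaRecorder's state and parameters. A minimal prepare sketch that follows the documented call order and lets the CamcorderProfile choose the size (the method name and output path are illustrative, not from the original code):

private boolean prepareRecorder(Camera camera, SurfaceHolder holder) {
    mediaRecorder = new MediaRecorder();
    camera.unlock();                       // hand the camera to MediaRecorder
    mediaRecorder.setCamera(camera);
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    // the profile already carries a resolution this device's encoder supports;
    // do not call setVideoSize() after this on 4.0+
    mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
    mediaRecorder.setOutputFile("/sdcard/VideoRecord/recording.mp4");
    mediaRecorder.setPreviewDisplay(holder.getSurface());
    try {
        mediaRecorder.prepare();
        return true;
    } catch (Exception e) {               // IllegalStateException or IOException
        mediaRecorder.release();
        return false;
    }
}

If this records on ICS, a custom size can be reintroduced by picking one of Camera.Parameters.getSupportedVideoSizes() instead of the raw display dimensions.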

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and provides mechanisms for how a PersistenceContext can be injected in a @Stateless bean and then be used for performing different operations on the database table and writing typesafe queries. There are several well-known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL database was designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is more known for what it is not rather than what it is.

The basic principles of NoSQL databases are: No need to have a pre-defined schema, which makes them schema-less databases. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance is more important than the entire set of functionality typically provided by an RDBMS. This set of databases provides eventual consistency and/or transactions restricted to single items, but more focus on CRUD. Not being restricted to SQL to access the information stored in the backing database. Designed to scale out (horizontal) instead of scale up (vertical). This is important knowing that databases, and everything else as well, is moving into the cloud; RDBMS can scale out using sharding, but it requires complex management and is not for the faint of heart. Unlike RDBMS, which require a separate caching tier, most of the NoSQL databases come with integrated caching. Designed for less management; simpler data models lead to lower administration as well.

There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak), Document databases (MongoDB or CouchDB), and Graph databases (Neo4J).

You may think NoSQL is a panacea, but as I mentioned above they are not meant to replace the mainstream databases, and here is why: RDBMS have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge, brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data, but that was not the original intention of NoSQL databases and it is lacking in that area; generating any meaningful information other than CRUDing requires extensive programming. They are not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. And SQL cannot be used with NoSQL databases, so writing simple queries can be involved.

Enough talking, let's take a look at some code. This blog has published multiple blogs on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
./mongod --help for help and startup options
Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

The default directory for the database is /data/db and needs to be created as:

sudo mkdir -p /data/db/
sudo chown `id -u` /data/db

You can specify a different directory using the "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like:

<h1>Add a new movie</h1>
<h:form>
 Name: <h:inputText value="#{movie.name}" size="20"/><br/>
 Year: <h:inputText value="#{movie.year}" size="6"/><br/>
 Language: <h:inputText value="#{movie.language}" size="20"/><br/>
 <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/>
</h:form>

This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

<head>
 <title><h1>List of movies</h1></title>
</head>
<body>
 <h:form>
 <h:dataTable value="#{movieSessionBean.movies}" var="m" >
 <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
 <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
 <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
 </h:dataTable>
 </h:form>

This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property. Now create the "Movie" class such that it looks like:

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;
import javax.enterprise.inject.Model;
import javax.validation.constraints.Size;

/**
 * @author arun
 */
@Model
public class Movie {
 @Size(min=1, max=20) private String name;
 @Size(min=1, max=20) private String language;
 private int year;
 // getters and setters for "name", "year", "language"
 public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; }
 public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; }
 @Override public String toString() { return name + ", " + year + ", " + language; }
}

Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file, which we added to our project earlier. The complete javadoc for 2.6.3 can be seen here. Notice that this class also uses Bean Validation constraints, which will be honored by the JSF layer. Finally, create the "MovieSessionBean" stateless EJB with all the business logic such that it looks like:

package org.glassfish.samples;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.inject.Named;

/**
 * @author arun
 */
@Stateless
@Named
public class MovieSessionBean {
 @Inject Movie movie;
 DBCollection movieColl;
 @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } }
 public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); }
 public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }
}

The database is initialized in @PostConstruct. Instead of working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and is stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes the "find" method on the collection, which is equivalent to the SQL query "select * from movies", and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project; enter some values in the text boxes and click submit to see the result. If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations!

Some food for thought and further play ... SQL to MongoDB mapping shows the mapping between the traditional SQL and Mongo query languages. Tutorial shows fun things you can do with MongoDB.
Try the interactive online shell. The cookbook provides common ways of using MongoDB. In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it, given that the capabilities are going to be very different? MongoDB uses the "BSONObject" class for JSON representation; add @XmlRootElement on a POJO and see how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.
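As a starting point for the SQL-to-MongoDB mapping exercise above, here is a small sketch of a filtered query using the same 2.6.3 driver classes as the bean; the year value 2010 and the printing loop are illustrative, not from the original article:

// the Mongo equivalent of: select * from movies where year = 2010
BasicDBObject query = new BasicDBObject("year", 2010);
DBCursor cur = movieColl.find(query);
for (DBObject dbo : cur.toArray()) {
    System.out.println(Movie.fromDBObject(dbo)); // prints via Movie.toString()
}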

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS!(this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses built in feature to call “lucreate -n S10_12_07REC” behind scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, the will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two-steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
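For context, the manual Live Upgrade sequence that the two-step plan above drives through the agent looks roughly like this (a sketch; the BE name matches the example, and the patch directory and patch IDs are illustrative placeholders):

lucreate -n S10_12_07REC                                       # create the alternate BE
luupgrade -t -n S10_12_07REC -s /var/tmp/patches <patch_ids>   # patch the inactive BE in multi-user mode
luactivate S10_12_07REC                                        # mark the patched BE active for the next boot
init 6                                                         # activation takes effect at reboot

Ops Center wraps these steps in the job shown above, so the per-node job log is the place to look when a step fails.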
Using Ops Center to patch Solaris 11 with Boot Environments (as needed)

- Create a Solaris 11 OS Update Profile containing the packages that make up your standard build.
- ACTUALLY deploy the Solaris 11 OS Update Profile.
  - A BE will be created if needed (or you can stipulate no BE).
  - The BE name will be auto-generated (if needed), or you may specify a BE name.
- Check the job status for each node, resolving any issues found.
- Check whether a BE was created; if so, activate the new BE.
  - Activate = reboot to the BE, making the new BE the active BE.
  - Ops Center does not separate BE activation from the reboot.
  - NOTE: Not every Solaris 11 OS update will require a new BE, so a reboot may not be necessary.

Solaris 11: Auto BE Create (as needed -- let Ops Center decide)
- BE to be created as needed.
- BE to be named automatically.
- Reboot (if necessary) deferred to a separate step.

Solaris 11: OS Profile
- Solaris 11 "entire" chosen for a particular SRU.

Solaris 11: OS Update Plan (w/BE)
- "Create a Boot Environment" + "Update OS" are chosen.
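Again for reference, the hand-driven equivalent on Solaris 11 is a short sketch like the following (the BE name is illustrative); IPS creates the BE automatically when the update requires one:

    # Update packages; --be-name just makes the new BE's name predictable
    pkg update --be-name s11-sru-update

    # List boot environments: the Active column marks "N" (running now)
    # and "R" (active on reboot)
    beadm list

    # Activate the new BE explicitly if needed, then reboot into it
    beadm activate s11-sru-update
    init 6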
Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers. For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward. For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straightforward way to patch, and far fewer prerequisites to satisfy in getting there. Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment.

Please let us know what you think! Until next time...

-- Leon Shaner | Senior IT/Product Architect
Systems Management | Ops Center Engineering @ Oracle

The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Making Thunar the default file browser without hiding the desktop icons

    - by Manu
    I really dislike Ubuntu's default file browser, nautilus, and decided to opt for a lighter alternative (Thunar or Xfe). I've used the following script to change the default to Thunar, but now all my icons are gone from the desktop! The files are still there, in /home/myid/Desktop, but they do not appear. Is there a way to show them, or is this a consequence of removing nautilus as the default file browser? Can I modify the following script* in order to keep the icons? (*copied from https://help.ubuntu.com/...):

    #!/bin/bash
    ## Originally written by aysiu from the Ubuntu Forums
    ## This is GPL'ed code
    ## So improve it and re-release it

    ## Define portion to make Thunar the default if that appears to be the appropriate action
    makethunardefault() {
        ## I went with --no-install-recommends because
        ## I didn't want to bring in a whole lot of junk,
        ## and Jaunty installs recommended packages by default.
        echo -e "\nMaking sure Thunar is installed\n"
        sudo apt-get update && sudo apt-get install thunar --no-install-recommends

        ## Does it make sense to change to the directory?
        ## Or should all the individual commands just reference the full path?
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nMaking backup directory\n"
        ## Does it make sense to create an entire backup directory?
        ## Should each file just be backed up in place?
        sudo mkdir nonautilusplease

        echo -e "\nModifying folder handler launcher\n"
        sudo cp nautilus-folder-handler.desktop nonautilusplease/
        ## Here I'm using two separate sed commands
        ## Is there a way to string them together to have one
        ## sed command make two replacements in a single file?
        ## (Fix: the original used "sed -i -n", but -n suppresses output,
        ## so combined with -i it would empty the files; plain -i is intended.)
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-folder-handler.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-folder-handler.desktop

        echo -e "\nModifying browser launcher\n"
        sudo cp nautilus-browser.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop --browser/thunar/g' nautilus-browser.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-browser.desktop

        echo -e "\nModifying computer icon launcher\n"
        sudo cp nautilus-computer.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-computer.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-computer.desktop

        echo -e "\nModifying home icon launcher\n"
        sudo cp nautilus-home.desktop nonautilusplease/
        sudo sed -i 's/nautilus --no-desktop/thunar/g' nautilus-home.desktop
        sudo sed -i 's/TryExec=nautilus/TryExec=thunar/g' nautilus-home.desktop

        echo -e "\nModifying general Nautilus launcher\n"
        sudo cp nautilus.desktop nonautilusplease/
        sudo sed -i 's/Exec=nautilus/Exec=thunar/g' nautilus.desktop

        ## This last bit I'm not sure should be included
        ## See, the only thing that doesn't change to the
        ## new Thunar default is clicking the files on the desktop,
        ## because Nautilus is managing the desktop (so technically
        ## it's not launching a new process when you double-click
        ## an icon there).
        ## So this kills the desktop management of icons completely
        ## Making the desktop pretty useless... would it be better
        ## to keep Nautilus there instead of nothing? Or go so far
        ## as to have Xfce manage the desktop in Gnome?
        echo -e "\nChanging base Nautilus launcher\n"
        sudo dpkg-divert --divert /usr/bin/nautilus.old --rename /usr/bin/nautilus && sudo ln -s /usr/bin/thunar /usr/bin/nautilus

        echo -e "\nRemoving Nautilus as desktop manager\n"
        killall nautilus

        echo -e "\nThunar is now the default file manager. To return Nautilus to the default, run this script again.\n"
    }

    restorenautilusdefault() {
        echo -e "\nChanging to application launcher directory\n"
        cd /usr/share/applications

        echo -e "\nRestoring backup files\n"
        sudo cp nonautilusplease/nautilus-folder-handler.desktop .
        sudo cp nonautilusplease/nautilus-browser.desktop .
        sudo cp nonautilusplease/nautilus-computer.desktop .
        sudo cp nonautilusplease/nautilus-home.desktop .
        sudo cp nonautilusplease/nautilus.desktop .

        echo -e "\nRemoving backup folder\n"
        sudo rm -r nonautilusplease

        echo -e "\nRestoring Nautilus launcher\n"
        sudo rm /usr/bin/nautilus && sudo dpkg-divert --rename --remove /usr/bin/nautilus

        echo -e "\nMaking Nautilus manage the desktop again\n"
        nautilus --no-default-window &

        ## The only change that isn't undone is the installation of Thunar
        ## Should Thunar be removed? Or just kept in?
        ## Don't want to load the script with too many questions?
    }

    ## Make sure that we exit if any commands do not complete successfully.
    ## Thanks to nanotube for this little snippet of code from the early
    ## versions of UbuntuZilla
    set -o errexit
    trap 'echo "Previous command did not complete successfully. Exiting."' ERR

    ## This is the main code
    ## Is it necessary to put an elseif in here? Or is
    ## redundant, since the directory pretty much
    ## either exists or it doesn't?
    ## Is there a better way to keep track of whether
    ## the script has been run before?
    if [[ -e /usr/share/applications/nonautilusplease ]]; then
        restorenautilusdefault
    else
        makethunardefault
    fi
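
    Based on the script's own closing comments, the icons disappear because the final dpkg-divert/killall step stops Nautilus from managing the desktop, which is what draws the icons. A minimal, untested tweak to keep them: omit that step in makethunardefault, or bring the desktop process back afterwards, as the restore function already does:

    ## Sketch: keep desktop icons by letting Nautilus keep managing the
    ## desktop -- skip the dpkg-divert and killall lines above, or relaunch
    ## the desktop process after switching the launchers to Thunar:
    nautilus --no-default-window &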

    Read the article
