Search Results

Search found 6355 results on 255 pages for 'unix socket'.


  • Too many open files in one of my Java routines

    - by Irfan Zulfiqar
    I have multithreaded code that has to generate a set of objects and write them to a file. When I run it I sometimes get a "Too many open files" exception. I have checked the code to make sure that all the file streams are being closed properly. Here is the stack trace. When I do ulimit -a, the open files limit is set to 1024. We think increasing this number is not a viable option / solution.

        [java] java.io.FileNotFoundException: /export/event_1_0.dtd (Too many open files)
        [java]     at java.io.FileInputStream.open(Native Method)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:106)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:66)
        [java]     at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        [java]     at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        [java]     at java.net.URL.openStream(URL.java:1010)

    What we have identified so far, by looking closely at the list of open files, is that the VM is opening the same class file multiple times:

        /export/BaseEvent.class             236
        /export/EventType1BaseEvent.class    60
        /export/EventType2BaseEvent.class    48
        /export/EventType2.class             30
        /export/EventType1.class             14

    BaseEvent is the parent of all the classes, and EventType1 and EventType2 inherit from EventType1BaseEvent and EventType2BaseEvent respectively. Why would a class loader load the same class file 200+ times? It seems to open the base class as many times as it creates any child instance. Is this normal? Can it be handled any other way apart from increasing the number of open files?
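
    One way to see which paths the running JVM actually has open, and how often, is an lsof one-liner like the sketch below (assumes lsof and pgrep are installed; MyMainClass is a hypothetical placeholder for whatever identifies the process).

        # Count open handles per path for the running JVM (diagnostic sketch).
        pid=$(pgrep -f 'java.*MyMainClass' | head -n 1)   # MyMainClass is hypothetical
        lsof -p "$pid" | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20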

    Read the article

  • Using Regex groups in bash

    - by AlexeyMK
    Greetings, I've got a directory with a list of PDFs in it: file1.pdf, file2.pdf, morestuff.pdf, etc. I want to convert these PDFs to PNGs, i.e. file1.png, file2.png, morestuff.png, etc. The basic command is convert from to, but I'm having trouble getting convert to reuse the same file name. The obvious "I wish it worked this way" is convert *.pdf *.png, but clearly that doesn't work. My thought process is that I should use regular expression grouping here, to say something like convert (*).pdf %1.png, but that clearly isn't the right syntax. I'm wondering what the correct syntax is, and whether there's a better approach (that doesn't require jumping into Perl or Python) that I'm ignoring. Thanks!
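
    One possible approach (an untested sketch, assuming ImageMagick's convert is on the PATH) is to skip regex groups entirely and let bash parameter expansion strip the extension:

        # Convert every PDF in the current directory to a PNG with the same basename.
        for f in *.pdf; do
            convert "$f" "${f%.pdf}.png"
        done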

    Read the article

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an Ant task in the background and checking at 60-second intervals whether that task is complete or not. If it is not, every 60 seconds a message should be displayed on screen: "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 and so on. There's a limit of 1200 seconds, after which the script will show the log via the 'ant log' command and ask the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code that I am using for this task is:

        ant deploy &
        limit=1200
        deploy_check()
        {
            while [ ${slept:-0} -le $limit ]; do
                sleep 60 && slept=`expr ${slept:-0} + 60`
                if [ $$ = "`ps -o ppid= -p $!`" ]; then
                    echo "Deploy process is still running. $slept seconds since deploy started."
                else
                    wait $! && echo "Application ${New_App_Name} deployed successfully" || echo "Deployment of ${New_App_Name} failed"
                    break
                fi
            done
        }
        deploy_check
        if [ $$ = "`ps -o ppid= -p $!`" ]; then
            echo "Deploy process did not finish in $slept seconds. Here's the log."
            ant log
            echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue."
            read log
            limit=`expr ${limit} + 300`
            deploy_check
        fi

    Now, the problem is that this code is not working. It looks like perfectly good code and yet it does not work. Can anyone point out what is wrong with it, please?
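
    For comparison, a minimal polling skeleton (a sketch, not the poster's script) that tracks the background job by PID and tests it with kill -0 rather than parsing ps output:

        ant deploy &
        pid=$!
        slept=0
        while kill -0 "$pid" 2>/dev/null; do    # process still running?
            sleep 60
            slept=$((slept + 60))
            echo "Deploy process is still running. $slept seconds since deploy started."
        done
        wait "$pid" && echo "Deployed successfully" || echo "Deployment failed"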

    Read the article

  • Exit SSH from the script

    - by Kimi
    I want to exit ssh. Does the line below work:

        ssh -f -T ${USAGE_2_USER}@${USAGE_2_HOST}

    or do I need to write it some other way? Please tell me whether I should use exit with ssh, and how.
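
    A small sketch of the usual patterns (reusing USAGE_2_USER/USAGE_2_HOST from the question): when ssh is given a remote command it exits on its own once that command finishes, so an explicit exit is only needed in an interactive session:

        # Run one remote command; ssh returns as soon as the command finishes.
        ssh -T "${USAGE_2_USER}@${USAGE_2_HOST}" 'uptime'
        # With -f, ssh backgrounds itself locally and still exits when the remote
        # command ends, so no explicit 'exit' is needed.
        ssh -f "${USAGE_2_USER}@${USAGE_2_HOST}" 'sleep 10'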

    Read the article

  • HPUX setacl leaves uid behind

    - by Woot4Moo
    I have a shell script that I execute after uninstalling a web application. The script is meant to clean up permissions that were needed during the execution of the application:

        find /opt/path -exec setacl -d user:myUser {} ';'

    After this executes and the ACL entry is removed, I am left with an ACL that looks as follows:

        user:101:---    /opt/path

    How can I properly call setacl to remove the user without leaving behind a uid?

    Read the article

  • In the terminal, merging multiple folders into one

    - by Josh Pinter
    I have a backup directory created by WDBackup (the Western Digital external HD backup utility) that contains a directory for each day that it backed up, holding the incremental contents of just what was backed up. So the hierarchy looks like this:

        20100101
            My Documents
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My song.mp3       # modified 20100101
        20100102
            My Documents
                Important Docs
                    Taxes.doc
            My Music
                My Song.mp3       # modified 20100102
        ...etc...

    Only what has changed is backed up, and the first backup that was ever made contains all the files selected for backup. What I'm trying to do now is incrementally copy, while keeping the folder structure, from oldest to newest, each of these dated folders into a 'merged' folder so that it overrides the older content and keeps the new stuff. As an example, using just these two folders, the final merged folder would look like this:

        Merged
            My Documents
                Important Docs
                    Taxes.doc
                Letter1.doc
            My Music
                Best Songs Every
                    First Songs.mp3
                My Song.mp3       # modified 20100102

    Hope that makes sense. Thanks, Josh
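
    One way this could be merged (a sketch, assuming rsync is available and the dated directory names sort chronologically) is to copy the snapshots oldest-first so that newer files overwrite older ones:

        mkdir -p Merged
        for d in 2010*/; do           # lexical order == chronological for YYYYMMDD names
            rsync -a "$d" Merged/     # trailing slash on $d copies its contents into Merged/
        done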

    Read the article

  • Setting variables in shell script by running commands

    - by rajya vardhan
        > cat /tmp/list1
        john
        jack
        > cat /tmp/list2
        smith
        taylor

    It is guaranteed that list1 and list2 will have an equal number of lines.

        f(){
            i=1
            while read line
            do
                var1=`sed -n '$ip' /tmp/list1`
                var2=`sed -n '$ip' /tmp/list2`
                echo $i,$var1,$var2
                i=`expr $i+1`
                echo $i,$var1,$var2
            done < $INFILE
        }

    So the output of f() should be:

        1,john,smith
        2,jack,taylor

    But I am getting:

        1,p,p
        1+1,p,p

    If I replace the following:

        var1=`sed -n '$ip' /tmp/list1`
        var2=`sed -n '$ip' /tmp/list2`

    with this:

        var1=`head -$i /tmp/vip_list|tail -1`
        var2=`head -$i /tmp/lb_list|tail -1`

    then the output is:

        1,john,smith
        1,john,smith

    I am not an expert in shell, so please excuse me if this sounds childish :)
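
    A sketch of one way to get the intended output without sed at all: read both files in lockstep on separate file descriptors, which sidesteps the single-quote problem where '$ip' is never expanded:

        i=1
        while read -r var1 <&3 && read -r var2 <&4; do
            echo "$i,$var1,$var2"
            i=$((i + 1))
        done 3< /tmp/list1 4< /tmp/list2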

    Read the article

  • Why Does Piping Binary Text to the Screen Often Hork a Terminal?

    - by Alan Storm
    Imaginary situation: you've used mysqldump to create a backup of a MySQL database. This database has columns that are BLOBs. That means your "text" dump file contains both strings and binary data (binary data stored as strings?). If you cat this file to the screen

        $ cat dump.mysql

    you'll often get unexpected results. The terminal will start beeping, and after the output finishes scrolling by you'll often have garbage characters entered on your terminal as though you'd typed them, and sometimes your prompt and anything you type will be garbage characters. Why does this happen? Put another way, I think I'm looking for an overview of what's actually happening when you store binary strings in a file, when you cat those files, when the results of the cat are reported to the terminal, and any other steps I'm missing.
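
    In short, some of those binary bytes are control characters and escape sequences that the terminal interprets rather than prints: the BEL character causes the beeping, and charset-switching sequences leave the prompt printing garbage. Two standard recovery commands, as a sketch:

        stty sane      # restore sane terminal line settings
        reset          # reinitialize the terminal (or: tput reset)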

    Read the article

  • aumix problem - how can I make the changes permanent?

    - by thillai-selvan
    Hi everyone. How can I make the changes in aumix permanent? I am running the aumix application using the following command: thinapplaunch aumix. I increase the volume manually, then save and quit. When I launch the application again with the same command, thinapplaunch aumix, all the changes I made are still there; all the volumes are full. But when I log out and log in again the changes are gone and the volumes are no longer full. How can I make the changes permanent? Any help much appreciated. Thanks in advance!
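
    A hedged sketch (assuming a standard aumix build): aumix can save its mixer levels to ~/.aumixrc and load them again, so the saved state can be restored from a login or session startup script:

        aumix -S      # save current mixer settings to ~/.aumixrc
        aumix -L      # load them back, e.g. from a session startup script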

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central, bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out on Windows XP/7, Mac and Linux. In all repositories I put the following line in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. When I modify stuff on my first Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't used to have autocrlf set, but set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
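
    One commonly suggested renormalization recipe, as an untested sketch (try it on a scratch clone first): force every file to be re-checked-in under the current autocrlf setting so that stale line endings stored in the repository stop fighting the working copies.

        git rm --cached -r .      # empty the index without touching the working tree
        git reset --hard          # re-check out everything with the current settings
        git add .                 # stage files whose stored line endings change
        git commit -m "Normalize line endings"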

    Read the article

  • Search and replace

    - by zx
    Hi, I have a really large SQL dump, around 400MB. It's in the following format:

        INSERT INTO user VALUES('USERID', 'USERNAME', 'PASSWORD', '0', '0', 'EMAIL', 'GENDER', 'BIRTHDAY', '182', '13', '640', 'Married', 'Straight', '', 'Yes', 'Yes', '1146411153', '1216452123', '1149440844', '0', picture', '1', '0', '0', 'zip', '0', '', '0', '', '', '0')

    Is there any way I can get just the email and password out of that? I want to import the users into another table. Does anyone know how I can do this and strip out just the email and password from that content? Thank you in advance.
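
    A rough sketch (assuming every INSERT has the same column order and no value contains the literal separator ', '): splitting the quoted values on "', '" puts the password in field 3 and the email in field 6, so awk can pull them out. The dump file name is a placeholder.

        grep "^INSERT INTO user VALUES" dump.sql \
            | awk -F"', '" '{ print $6 "," $3 }'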

    Read the article

  • Bash: using commands as parameters (specifically cd, dirname and find)

    - by sixtyfootersdude
    This command and output:

        % find . -name file.xml 2> /dev/null
        ./a/d/file.xml
        %

    So this command and output:

        % dirname `find . -name file.xml 2> /dev/null`
        ./a/d
        %

    So you would expect that this command:

        % cd `dirname `find . -name file.xml 2> /dev/null``

    would change the current directory to ./a/d. Strangely, this does not work. When I type cd ./a/d the directory change works. However, I cannot find out why the above does not work...
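
    The inner backtick pair terminates the outer one, so the nested command never runs as intended. Two variants that do work, as a sketch: $(...) nests cleanly, or the inner backticks can be escaped:

        cd "$(dirname "$(find . -name file.xml 2> /dev/null)")"
        cd `dirname \`find . -name file.xml 2> /dev/null\``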

    Read the article

  • Hudson: triggering builds remotely gives a 403 Forbidden error

    - by Ritesh M Nayak
    I have a shell script on the same machine that Hudson is deployed on, and upon executing it, it calls wget on a Hudson build trigger URL. Since it's the same machine, I access it as:

        http://localhost:8080/hudson/job/jobname/build?token=sometoken

    Typically, this is supposed to trigger a build of the project, but I get a 403 Forbidden when I do this. Does anybody have any idea why? I have tried this using a browser and it triggers the build, but via the command line it doesn't seem to work. Any ideas?
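
    A hedged guess at a fix: the browser session is authenticated while the wget call is anonymous, so passing credentials along with the trigger URL often clears the 403 (myuser and myapitoken are placeholders):

        wget --auth-no-challenge --http-user=myuser --http-password=myapitoken \
             -O /dev/null "http://localhost:8080/hudson/job/jobname/build?token=sometoken"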

    Read the article

  • What does the @ symbol mean in an ls -l directory listing?

    - by Andrew Arrow
    When I run ls -l on my Mac I see two .yml files:

        -rw-r--r--  1 aa staff   6 Apr 15 05:50 s1.yml
        -rw-r--r--@ 1 aa staff 362 Apr 15 05:49 s3.yml

    Same owner, same permissions, but one has a @ at the end of the permissions. The one with the @ shows up in my editor, the one without does not, so there must be some significance. How can I turn on the @ for the file without it? I selected the files in the Finder and did Get Info, and everything looks identical between the two files.
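
    The trailing @ means the file has extended attributes (metadata that many editors and Finder add alongside the file data). A short sketch using the stock tools; the attribute name and value in the last line are only for illustration:

        ls -l@ s3.yml                                               # list attribute names and sizes
        xattr s3.yml                                                # list attribute names only
        xattr -w com.apple.TextEncoding "utf-8;134217984" s1.yml    # write an attribute (illustrative value)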

    Read the article

  • How to feed data over STDIN to multiple external commands in Ruby

    - by Erik
    This question is a bit like my previous (answered) question: How to run multiple external commands in the background in Ruby. But in this case I am looking for a way to feed Ruby strings over STDIN to external processes, something like this (the code below is not valid but illustrates my goal):

        #!/usr/bin/ruby
        str1 = 'In reality a relatively large string.....'
        str2 = 'Another large string'
        str3 = 'etc..'
        spawn 'some_command.sh', :stdin => str1
        spawn 'some_command.sh', :stdin => str2
        spawn 'some_command.sh', :stdin => str3
        Process.waitall

    Read the article

  • Tool to compare tables in two different databases

    - by user191124
    I am using Toad. I frequently need to compare tables in two different test environments. The tables present in them are the same, but the data differs. I just need to know what the differences are between the same tables in the two different databases. Are there any tools which can be installed on Windows and used to compare them? Much appreciate your help :)

    Read the article

  • What do programs see when ZFS can't deliver uncorrupted data?

    - by Jay Kominek
    Say my program attempts a read of a byte in a file on a ZFS filesystem. ZFS can locate a copy of the necessary block, but cannot locate any copy with a valid checksum (they're all corrupted, or the only disks present have corrupted copies). What does my program see, in terms of the return value from the read, and the byte it tried to read? And is there a way to influence the behavior (under Solaris, or any other ZFS-implementing OS), that is, force failure, or force success, with potentially corrupt data?

    Read the article

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch and about 4 private feature branches (which I keep rebased on top of master). I've corrected the files in my master branch by running

        find -type f -exec chmod 644 {} \;

    and committing the changes. I then rebased my feature branches onto master. The problem is that there are newly created files in the feature branches that exist only in those branches, so they weren't corrected by my massive chmod commit. I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master, so I decided it would be best to go back through each commit where a file was created and set the permissions. This is what I tried:

        git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master..

    It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper permissions of 644 would actually revert the change with something like:

        diff --git a b
        old mode 100644
        new mode 100755

    I can't for the life of me figure out why this is happening. I think I must be misunderstanding how git filter-branch works.

    My solution: I've managed to fix my problem using this command:

        git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development..

    I keep adding onto the FILES variable to ensure that in each commit any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I fixed the mode of the file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case. The reason I thought this would work comes from my understanding of rebase: if I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.
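
    An alternative worth considering, as an untested sketch: rewrite the modes directly in the index with --index-filter instead of checking out a working tree for every commit. Like the chmod approach above, this flips every 755 regular file to 644 in the rewritten history:

        git filter-branch -f --index-filter '
            git ls-files -s | sed "s/^100755/100644/" |
                GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info &&
            mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"
        ' development..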

    Read the article

  • I am currently serving my static files in Django. How do I use Apache2 to do this?

    - by alex
        (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT}),

    As you can see, I have a directory called "media" under my Django project. I would like to delete this line from my urls.py and instead use Apache to serve my static files. What do I do to my Apache configs (which files do I change) in order to do this? By the way, I installed Apache2 the normal way: sudo aptitude install apache2
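
    A hedged sketch of the Apache side (the paths are placeholders for your project; a snippet like this would go inside the relevant VirtualHost, typically under /etc/apache2/sites-available/, followed by an Apache reload):

        Alias /media/ /path/to/myproject/media/
        <Directory /path/to/myproject/media>
            Order allow,deny
            Allow from all
        </Directory>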

    Read the article
