Search Results

Search found 20351 results on 815 pages for 'chrome os'.

Page 269/815

  • To stop returning through SSH using Pexpect

    - by chrissygormley
    Hello, I am trying to use pexpect to SSH into another computer, but I do not want to drop back to the original machine. The code I have is:

        #!/usr/bin/python2.6
        import pexpect, os

        def ssh():
            # Logs into the computer through SSH
            ssh_newkey = 'Are you sure you want to continue connecting'
            # my ssh command line
            p = pexpect.spawn('ssh [email protected]')
            i = p.expect([ssh_newkey, 'password:', pexpect.EOF])
            p.sendline("password")
            i = p.expect('-bash-3.2')
            print os.getcwd()

        ssh()

    This lets me SSH into the computer, but when I run os.getcwd() I get the working directory on the original machine. I want to SSH into the other computer and use its environment, not drag my own environment along through pexpect. Can anyone suggest how to get this working, or an alternative approach? Thanks
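
    os.getcwd() here runs in the local Python process; pexpect only drives the ssh child, so anything that should be evaluated on the remote machine has to be sent through the session and read back. A minimal sketch of that idea, reusing the session and prompt string assumed in the question (untested):

        p.sendline('pwd')          # this command runs on the remote host
        p.expect('-bash-3.2')      # wait for the remote prompt to come back
        print p.before             # everything printed before the prompt, i.e. the remote cwd

    If many commands need to run remotely, a library that keeps the whole session on the remote side (for example paramiko) may be a better fit than scraping prompts.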

    Read the article

  • parallel-python error: RuntimeError("Socket connection is broken")

    - by user288558
    I am using a simple program to send a function:

        import pp

        nodes = ('mosura02','mosura03','mosura04','mosura05','mosura06',
                 'mosura09','mosura10','mosura11','mosura12')
        nodes = ('miner:60001',)

        def pptester():
            js = pp.Server(ppservers=nodes)
            js.set_ncpus(0)
            tmp = []
            for i in range(200):
                tmp.append(js.submit(ppworktest, (), (), ('os',)))
            return tmp

        def ppworktest():
            import os
            return os.system("uname -a")

    The result is:

        wkerzend@mosura:/home/wkerzend/tmp/ppython_test>ssh miner "source ~/coala_python_setup.sh;ppserver.py -d -p 60001"
        2010-04-12 00:50:48,162 - pp - INFO - Creating server instance (pp-1.6.0)
        2010-04-12 00:50:52,732 - pp - INFO - pp local server started with 32 workers
        2010-04-12 00:50:52,732 - pp - DEBUG - Strarting network server interface=0.0.0.0 port=60001
        Exception in thread client_socket:
        Traceback (most recent call last):
          File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner
            self.run()
          File "/usr/lib64/python2.6/threading.py", line 477, in run
            self.__target(*self.__args, **self.__kwargs)
          File "/home/wkerzend/python_coala/bin/ppserver.py", line 161, in crun
            ctype = mysocket.receive()
          File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pptransport.py", line 178, in receive
            raise RuntimeError("Socket connection is broken")
        RuntimeError: Socket connection is broken

    Read the article

  • What is the difference between #ifdef __IPHONE_3.2 and #if __IPHONE_3.2?

    - by Jonathan
    Hi, I have an iPhone app that needs to work on 3.1.3 for the iPhone and 3.2 for the iPad. It is an iPhone app that I want to run on the iPad. The main difference is MPMoviePlayerController, which introduces and deprecates a lot of things in 3.2. Since the iPhone OS only goes up to 3.1.3 and the iPad is on 3.2, I need to separate my code so that only the code required for the respective OS gets compiled. I can't use [[UIDevice currentDevice] model] because I end up with deprecation warnings in the 3.1.3 code. Also, UIUserInterfaceIdiomPad is new in 3.2, so it doesn't work well with 3.1.3... So I decided to use this, which only compiles what is necessary for the particular OS:

        #if __IPHONE_3_2
            // do 3.2 iPad stuff
        #else
            // do 3.1.3 iPhone/iPod Touch stuff
        #endif

    My question is: what is the difference between these two?

        #ifdef __IPHONE_3_2
        #if __IPHONE_3_2

    Thank you

    Read the article

  • iPhone SDK: am I still allowed to develop with SDK 3.1.3 on Leopard?

    - by BoltClock
    Mac OS X Snow Leopard's been in stores for a while now. Sadly, the Mac that I'll be developing on is still running Leopard, and I don't have admin access to the Mac either so I can't do anything about the OS version. Therefore I can only use iPhone SDK 3.1.3, which we've obtained thanks to this answer, to start building apps on that Mac. Am I still allowed to develop apps with this setup and deploy for iPhone OS 3.x to the App Store? Keyword is allowed because all I've seen so far are technical questions/answers, but I'm not certain if Apple's going to be a jerk and shun me because I can't use a Snow Leopard Mac despite having only started this year (and I don't even want to get started about iPhone SDK 4's revised agreement, not with its NDA still in place...).

    Read the article

  • PyML 0.7.2 - How to prevent accuracy from dropping after storing/loading a classifier?

    - by Michael Aaron Safyan
    This is a follow-up to "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". The solution to that question was close, but not quite right: the SparseDataSet is broken, so attempting to save/load with that dataset container type will fail no matter what. Also, PyML is inconsistent about whether labels should be numbers or strings; it turns out that the oneAgainstRest function is not good enough, because the labels need to be strings and simultaneously convertible to floats (in some places they are assumed to be strings and elsewhere converted to float). After a great deal of hacking I was finally able to figure out a way to save and load my multi-class classifier without it blowing up with an error... however, although it no longer gives an error message, it is still not quite right, because the accuracy of the classifier drops significantly when it is saved and then reloaded (so I'm still missing a piece of the puzzle). I am currently using the following custom multi-class classifier for training, saving, and loading:

        class SVM(object):
            def __init__(self, features_or_filename, labels=None, kernel=None):
                if isinstance(features_or_filename, str):
                    filename = features_or_filename
                    if labels != None:
                        raise ValueError, "Labels must be None if loading from a file."
                    with open(os.path.join(filename, "uniquelabels.list"), "rb") as uniquelabelsfile:
                        self.uniquelabels = sorted(list(set(pickle.load(uniquelabelsfile))))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    self.classifiers = []
                    for classidx, classname in enumerate(self.uniquelabels):
                        self.classifiers.append(PyML.classifiers.svm.loadSVM(
                            os.path.join(filename, str(classname) + ".pyml.svm"),
                            datasetClass=PyML.VectorDataSet))
                else:
                    features = features_or_filename
                    if labels == None:
                        raise ValueError, "Labels must not be None when training."
                    self.uniquelabels = sorted(list(set(labels)))
                    self.labeltoindex = {}
                    for idx, label in enumerate(self.uniquelabels):
                        self.labeltoindex[label] = idx
                    points = [[float(xij) for xij in xi] for xi in features]
                    self.classifiers = [PyML.SVM(kernel) for label in self.uniquelabels]
                    for i in xrange(len(self.uniquelabels)):
                        currentlabel = self.uniquelabels[i]
                        currentlabels = ['+1' if k == currentlabel else '-1' for k in labels]
                        currentdataset = PyML.VectorDataSet(points, L=currentlabels, positiveClass='+1')
                        self.classifiers[i].train(currentdataset, saveSpace=False)

            def accuracy(self, pts, labels):
                logger = logging.getLogger("ml")
                correct = 0
                total = 0
                classindexes = [self.labeltoindex[label] for label in labels]
                h = self.hypotheses(pts)
                for idx in xrange(len(pts)):
                    if h[idx] == classindexes[idx]:
                        logger.info("RIGHT: Actual \"%s\" == Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                        correct += 1
                    else:
                        logger.info("WRONG: Actual \"%s\" != Predicted \"%s\"" % (
                            self.uniquelabels[classindexes[idx]], self.uniquelabels[h[idx]]))
                    total += 1
                return float(correct) / float(total)

            def prediction(self, pt):
                h = self.hypothesis(pt)
                if h != None:
                    return self.uniquelabels[h]
                return h

            def predictions(self, pts):
                h = self.hypotheses(self, pts)
                return [self.uniquelabels[x] if x != None else None for x in h]

            def hypothesis(self, pt):
                bestvalue = None
                bestclass = None
                dataset = PyML.VectorDataSet([pt])
                for classidx, classifier in enumerate(self.classifiers):
                    val = classifier.decisionFunc(dataset, 0)
                    if (bestvalue == None) or (val > bestvalue):
                        bestvalue = val
                        bestclass = classidx
                return bestclass

            def hypotheses(self, pts):
                bestvalues = [None for pt in pts]
                bestclasses = [None for pt in pts]
                dataset = PyML.VectorDataSet(pts)
                for classidx, classifier in enumerate(self.classifiers):
                    for ptidx in xrange(len(pts)):
                        val = classifier.decisionFunc(dataset, ptidx)
                        if (bestvalues[ptidx] == None) or (val > bestvalues[ptidx]):
                            bestvalues[ptidx] = val
                            bestclasses[ptidx] = classidx
                return bestclasses

            def save(self, filename):
                if not os.path.exists(filename):
                    os.makedirs(filename)
                with open(os.path.join(filename, "uniquelabels.list"), "wb") as uniquelabelsfile:
                    pickle.dump(self.uniquelabels, uniquelabelsfile, pickle.HIGHEST_PROTOCOL)
                for classidx, classname in enumerate(self.uniquelabels):
                    self.classifiers[classidx].save(os.path.join(filename, str(classname) + ".pyml.svm"))

    I am using the latest version of PyML (0.7.2, although PyML.__version__ is 0.7.0). When I construct the classifier with a training dataset, the reported accuracy is ~0.87. When I then save it and reload it, the accuracy is less than 0.001. So there is something here that I am clearly not persisting correctly, although what that may be is completely non-obvious to me. Would you happen to know what that is?

    Read the article

  • Python 2.5.2: trying to open files recursively

    - by user248959
    Hi, the script below should open all the files inside the folder 'pruebaba' recursively, but I get this error:

        Traceback (most recent call last):
          File "/home/tirengarfio/Desktop/prueba.py", line 8, in <module>
            f = open(file,'r')
        IOError: [Errno 21] Is a directory

    This is the hierarchy:

        pruebaba
            folder1
                folder11
                    test1.php
                folder12
                    test1.php
                    test2.php
            folder2
                test1.php

    The script:

        import re, fileinput, os

        path = "/home/tirengarfio/Desktop/pruebaba"
        os.chdir(path)

        for file in os.listdir("."):
            f = open(file, 'r')
            data = f.read()
            data = re.sub(r'(\s*function\s+.*\s*{\s*)',
                          r'\1echo "The function starts here."',
                          data)
            f.close()
            f = open(file, 'w')
            f.write(data)
            f.close()

    Any idea? Regards, Javi
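
    os.listdir returns directory names as well as file names, and calling open() on a directory raises exactly this IOError. One possible fix (a sketch, untested, keeping the path and regex from the question) is to let os.walk do the recursion, since it hands files and directories to you separately:

        import os
        import re

        path = "/home/tirengarfio/Desktop/pruebaba"

        for root, dirs, files in os.walk(path):
            for name in files:
                filepath = os.path.join(root, name)
                f = open(filepath, 'r')
                data = f.read()
                f.close()
                data = re.sub(r'(\s*function\s+.*\s*{\s*)',
                              r'\1echo "The function starts here."',
                              data)
                f = open(filepath, 'w')
                f.write(data)
                f.close()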

    Read the article

  • running an outside program (executable) in python?

    - by Mesut
    Hello all, I just started working with Python and I have been trying to run an outside executable from Python. I have an executable for a program written in Fortran. Let's say the name of the executable is flow.exe, and it is located in C:\Documents and Settings\flow_model. I tried both os.system and popen commands but so far couldn't make it work. The following code seems to open the command window but won't execute the model:

        # Import system modules
        import sys, string, os, arcgisscripting

        os.system("C:/Documents and Settings/flow_model/flow.exe")

    Any suggestions out there? Any help would be greatly appreciated.
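
    The path contains spaces, so the shell that os.system spawns most likely treats "C:/Documents" as the program and the rest as arguments. Passing the executable as a single list element to subprocess sidesteps the quoting problem entirely; a sketch under that assumption (paths taken from the question, otherwise untested):

        import subprocess

        exe = r"C:\Documents and Settings\flow_model\flow.exe"
        # Run flow.exe with its own folder as the working directory, in case
        # it expects its input files to sit next to it.
        ret = subprocess.call([exe], cwd=r"C:\Documents and Settings\flow_model")
        print "flow.exe exited with code", ret

    With os.system the equivalent trick is to put quotes around the whole path: os.system('"C:/Documents and Settings/flow_model/flow.exe"').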

    Read the article

  • Windows 7 Task Scheduler

    - by Btibert3
    Hi all, I'm very new to this and have no idea where to start. I want to schedule a Python script using Task Scheduler in Windows 7. When I add a "New Action", I put the following as the script/program:

        c:\python25\python.exe

    As the argument, I add the full path to the location of my Python script, path\script.py. Here is my script:

        import datetime
        import csv
        import os

        now = datetime.datetime.now()
        print str(now)

        os.chdir('C:/Users/Brock/Desktop/')
        print os.getcwd()

        writer = csv.writer(open("test task.csv", "wb"))
        row = ('This is a test', str(now))
        writer.writerow(row)

    I got an error saying the script could not run. Any help you can provide to get me up and running will be very much appreciated! Thanks, Brock
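
    Two things commonly trip up scripts run this way: the task starts in a different working directory than an interactive session, and any error output is easy to miss because the console window closes immediately. A sketch of a slightly more scheduler-friendly version (same logic as the script above; the desktop path is the one from the question):

        import csv
        import datetime
        import os

        out_dir = r"C:\Users\Brock\Desktop"
        now = datetime.datetime.now()

        # Use an absolute path so the result does not depend on whichever
        # working directory Task Scheduler happens to start the script in.
        out = open(os.path.join(out_dir, "test task.csv"), "wb")
        writer = csv.writer(out)
        writer.writerow(('This is a test', str(now)))
        out.close()

    Running the same command line by hand in cmd.exe (c:\python25\python.exe followed by the full script path) usually reveals the actual error message that the scheduled run hides.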

    Read the article

  • :-( A valid signing identity matching this profile could not be found in your keychain

    - by user262325
    Hello everyone. I want to test my app on an iPod Touch, so I created a development provisioning profile and dragged the downloaded .mobileprovision file into Organizer. There is a yellow triangle warning that "A valid signing identity matching this profile could not be found in your keychain". The other distribution provisioning profiles have no problem. I checked my connected iPod Touch, and Organizer also says:

        OS Installed on "interdev"'s iPod: 3.1.3 (7E18)
        Xcode Supported iPhone OS Versions: 3.1.1 (7C146), 3.1.1 (7C145), 3.1 (7C144),
        3.0.1 (7A400), 3.0, 2.2.1, 2.2, 2.1.1, 2.1, 2.0.2 (5C1), 2.0.1 (5B108),
        2.0 (5A347), 2.0 (5A345)

    So the iPod is on OS 3.1.3 and I have Xcode 3.1. Do I need to upgrade Xcode? Thanks, interdev

    Read the article

  • What happens when you run out of ram with mlockall set?

    - by James Dean
    I am working on a C++ application that requires a large amount of memory for a batch run (20 GB). Some of my customers are running into memory limits where the OS sometimes starts swapping and the total run time doubles or worse. I have read that I can use mlockall to keep the process from being swapped out. What would happen when the process's memory requirements approach or exceed the available physical memory in this way? I guess the answer might be OS-specific, so please list the OS in your answer.

    Read the article

  • problem with dmg file

    - by madalina
    Hello, I have created a dmg file named try.dmg which contains an executable application, myapplication.app, together with a compiled library, mylibrary.lib; I used Disk Utility to create the try.dmg file. I mounted try.dmg on my computer and opened myapplication.app with no problems. However, when I tried to do the same thing on another Macintosh with the same operating system, Mac OS X, I ran into problems: I mounted try.dmg, and when I tried to open myapplication.app I got an error message saying that the application had quit unexpectedly. Can someone explain to me what is happening? Why can try.dmg be mounted and the application opened with no problems on my computer, but on another Mac OS X computer I get errors when I try to open the application after mounting try.dmg? Thank you very much for your help. Best wishes, Madalina

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This causes problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
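
    One middle ground worth sketching: Popen accepts an open file object for stdin, so sox can read straight from the file without the Python process ever pulling it into memory. This is untested, and 'out.mp3' is just a placeholder output name so the input is not overwritten while it is still being read:

        from subprocess import Popen, PIPE

        infile = open('test.mp3', 'rb')
        pipe = Popen(['sox', '-t', 'mp3', '-', 'out.mp3', 'trim', '0', '15'],
                     stdin=infile, stdout=PIPE, stderr=PIPE)
        output, errors = pipe.communicate()
        infile.close()
        if errors:
            raise RuntimeError(errors)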

    Read the article

  • accessing files after setup.py install

    - by Matthew
    I'm developing a Python application and have a question about coding it so that it still works after a user has installed it on his or her machine via setup.py install or similar. In one of my files, I use the following:

        file = "TestParser/View/MainWindow.ui"
        cwd = os.getcwd()
        argv_path = os.path.dirname(sys.argv[0])
        file_path = os.path.join(cwd, argv_path, file)

    in order to get the path to MainWindow.ui when I only know the path relative to the main script's location. This works regardless of where I call the main script from. The issue is that after a user installs the application on his or her machine, the relative path is different, so this doesn't work. I could use __file__, but according to this, py2exe doesn't have __file__. Is there a standard way of achieving this, or a better way?
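
    If the application is packaged with setuptools and the .ui file is declared as package data, pkg_resources can resolve it no matter where the package ends up installed; a sketch under those assumptions (the package name "TestParser" is taken from the path in the question):

        from pkg_resources import resource_filename

        # Resolves to the installed copy of the file, whether the package is
        # in site-packages, inside an egg, or a development checkout.
        ui_file = resource_filename("TestParser", "View/MainWindow.ui")

    For the py2exe case this still requires the data file to be bundled alongside the executable, so it is a sketch of the setuptools route rather than a universal answer.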

    Read the article

  • Using Spotlight as the "database" of an application

    - by vicvicvic
    I'm developing an OS X application to organize "things" (as iTunes is to music and iPhoto to photos). Instead of having my own database and index, I'm considering using Spotlight to essentially serve this purpose. Has anyone tried this? Is it wise? The main benefit, as I see it, would be simplicity and avoiding redundancy. It seems a bit wasteful to implement my own index machinery when OS X comes with one built in. I have little experience working with Spotlight, however. From a user's perspective, I do know that it has been slow and imprecise in older versions of OS X. I also have a gut-feeling that since it's aimed at searching the whole filesystem, using it for "local" purposes becomes hackish. Obviously, my applications's index needs to constantly be up-to-date. Can mdimport be used for this?

    Read the article

  • Clear command line output from Python [Eclipse]

    - by Tomas Lycken
    I'm using Eclipse for writing Python, and I want to be able to easily clear the screen. I've seen this question and tried (among other things suggested there) the following solution:

        import os

        def clear():
            os.system('cls' if os.name == 'nt' else 'clear')

    but it doesn't entirely solve my problem. Instead of clearing the screen, the routine prints a small square (as if trying to print an unknown character) to the command output window in Eclipse. Typing cls on the command line works perfectly fine, as does running a Python script with the above code from the command line. But how can I make it look nice in Eclipse as well?
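
    The Eclipse console is not a real terminal, so 'cls'/'clear' have nothing to operate on there. One crude workaround to sketch (untested): check whether stdout is attached to a terminal and, if not, fall back to pushing the old output out of view with blank lines.

        import os
        import sys

        def clear():
            if sys.stdout.isatty():
                # Real terminal: let the OS command clear it.
                os.system('cls' if os.name == 'nt' else 'clear')
            else:
                # Eclipse console (or any non-terminal stream): scroll the
                # old output out of view instead.
                print '\n' * 80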

    Read the article

  • Linking to libcrypto for Leopard?

    - by adib
    Hi, I am using the Mac OS X 10.6 SDK and my deployment target is set to Mac OS X 10.5. I'm linking to libcrypto (AquaticPrime requires this) and found out that my app doesn't launch on Leopard. The error is:

        dyld: Library not loaded: /usr/lib/libcrypto.0.9.8.dylib

    I've tried the following workarounds, but none of them work:

    1. Linking directly to libcrypto.0.9.7.dylib (the 10.6 SDK refuses to link directly with libcrypto.0.9.7.dylib).
    2. Copying the 10.5 SDK's version of libcrypto.0.9.7.dylib to the 10.6 lib directory and trying to link with it (this time the link succeeded, but on Leopard the app still tries to look up the non-existent libcrypto.0.9.8.dylib file and thus won't launch).
    3. Copying libcrypto.0.9.7.dylib from a Mac OS X 10.5.8 installation and linking with it (the link was successful, but the app still looks for libcrypto.0.9.8.dylib).

    Is there a way to link to this library and still use the 10.6 SDK? Thanks.

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development on 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so:

        int *pInt = reinterpret_cast<int *>(0xDEADBEEF);

    (To save typing and stay DRY, the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value then it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behavior depends on the underlying hardware (fetching a 4-byte integer from the odd address 0xDEADBEEF in a user process may be perfectly valid), but the crash has been 100% reliable on all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Red Hat Pentium, Windows GUI Pentium, Windows console Pentium). For instance, on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?

    Read the article

  • serving files using django - is this a security vulnerability

    - by Tom Tom
    I'm using the following code to serve uploaded files from a login-secured view in a Django app. Do you think there is a security vulnerability in this code? I'm a bit concerned that the user could place arbitrary strings in the URL after upload/ and that this is mapped directly to the local filesystem. Actually I don't think it is a vulnerability, since filesystem access is restricted to the files in the folder defined by the UPLOAD_LOCATION setting (UPLOAD_LOCATION is set to a folder on the webserver that is not publicly available).

        url(r'^upload/(?P<file_url>[/,.,\s,_,\-,\w]+)', 'aeon_infrastructure.views.serve_upload_files', name='project_detail'),

        @login_required
        def serve_upload_files(request, file_url):
            import os.path
            import mimetypes
            mimetypes.init()
            try:
                file_path = settings.UPLOAD_LOCATION + '/' + file_url
                fsock = open(file_path, "r")
                file_name = os.path.basename(file_path)
                file_size = os.path.getsize(file_path)
                print "file size is: " + str(file_size)
                mime_type_guess = mimetypes.guess_type(file_name)
                if mime_type_guess is not None:
                    response = HttpResponse(fsock, mimetype=mime_type_guess[0])
                    response['Content-Disposition'] = 'attachment; filename=' + file_name
                    #response.write(file)
            except IOError:
                response = HttpResponseNotFound()
            return response
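
    For what it's worth, the usual worry with URLs mapped straight onto the filesystem is a traversal payload such as ../, so one common check (a sketch, not part of the app above; safe_upload_path is a hypothetical helper name) is to resolve the joined path and confirm it still lives under UPLOAD_LOCATION before opening it:

        import os

        def safe_upload_path(upload_location, file_url):
            base = os.path.abspath(upload_location)
            full = os.path.abspath(os.path.join(base, file_url))
            # Reject anything that resolved to a location outside the upload folder.
            if full != base and not full.startswith(base + os.sep):
                raise ValueError("requested path escapes UPLOAD_LOCATION")
            return full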

    Read the article

  • Get python tarfile to skip files without read permission

    - by chris
    I'm trying to write a function that backs up a directory containing files with different permissions to an archive on Windows XP. I'm using the tarfile module to tar the directory. Currently, as soon as the program encounters a file that it does not have read permission for, it stops with the error IOError: [Errno 13] Permission denied: 'path to file'. I would like it to just skip over the files it cannot read rather than end the tar operation. This is the code I am using now:

        def compressTar():
            """Build and gzip the tar archive."""
            folder = 'C:\\Documents and Settings'
            tar = tarfile.open("C:\\WINDOWS\\Program\\archive.tar.gz", "w:gz")
            try:
                print "Attempting to build a backup archive"
                tar.add(folder)
            except:
                print "Permission denied attempting to create a backup archive"
                print "Building a limited archive containing files with read permissions."
                for root, dirs, files in os.walk(folder):
                    for f in files:
                        tar.add(os.path.join(root, f))
                    for d in dirs:
                        tar.add(os.path.join(root, d))
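
    The whole-directory tar.add(folder) recurses internally, so a single unreadable file aborts it, and the fallback loop in the except branch has no error handling of its own, so it stops at the first unreadable file too. A sketch of a per-file variant with the error handled where it occurs (untested, paths kept from the question):

        import os
        import tarfile

        def compressTar():
            folder = 'C:\\Documents and Settings'
            tar = tarfile.open("C:\\WINDOWS\\Program\\archive.tar.gz", "w:gz")
            for root, dirs, files in os.walk(folder):
                for f in files:
                    path = os.path.join(root, f)
                    try:
                        tar.add(path)
                    except IOError:
                        # Skip just this file and keep going.
                        print "Skipping (no read permission):", path
            tar.close()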

    Read the article

  • How many layers are between my program and the hardware?

    - by sub
    I somehow have the feeling that modern systems, including runtime libraries, this exception handler and that built-in debugger, build up more and more layers between my (C++) programs and the CPU/rest of the hardware. I'm thinking of something like this:

        1 + 2
        OS top layer
        Runtime library/helper/error handler
        a hell of a lot of DLL modules
        OS kernel layer
        "Do you really want to run 1 + 2?" Windows popup (don't take this seriously)
        OS kernel layer
        Hardware abstraction
        Hardware
        Go through at least 100 miles of circuits
        Eventually arrive at the CPU
        ADD 1, 2
        Go all the way back to my program

    Nearly all the technical details are simply wrong and in some random order, but you get my point, right? How much longer/shorter is this chain when I run a C++ program that calculates 1 + 2 at runtime on Windows? How about when I do this in an interpreter (Python|Ruby|PHP)? Is this chain really as dramatic in reality? Does Windows really try "not to stand in the way", e.g. a direct connection from my binary to the hardware?

    Read the article

  • memory available to 64bit Fedora guest under 32bit XP host running virtualbox

    - by Chris Card
    I have successfully installed a 64-bit Fedora 11 guest OS using VirtualBox on a host machine (AMD64) running 32-bit Windows XP. At the moment the host machine has 2 GB of RAM installed and I've allocated 1 GB to the guest, which all works well. The host machine can hold a maximum of 4 GB of RAM, so I was wondering if it's worth buying an extra 2 GB for it. I know that 32-bit Windows XP can't use all of the 4 GB, but can the guest OS use any of the RAM that the host OS can't use?

    Read the article

  • Amazon java.lang.VerifyError Android

    - by easycheese
    I have been trying to submit 2 separate apps to the Amazon App Store, but they keep being rejected. Here is the stack trace for the first:

        11-05 11:14:36.488 E/AndroidRuntime(28128): FATAL EXCEPTION: AsyncTask #1
        11-05 11:14:36.488 E/AndroidRuntime(28128): java.lang.RuntimeException: An error occured while executing doInBackground()
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at android.os.AsyncTask$3.done(AsyncTask.java:200)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.FutureTask.run(FutureTask.java:137)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.lang.Thread.run(Thread.java:1096)
        11-05 11:14:36.488 E/AndroidRuntime(28128): Caused by: java.lang.VerifyError: com.companionfree.WLThemeViewer.AmazonClientManager
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at android.os.AsyncTask$2.call(AsyncTask.java:185)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
        11-05 11:14:36.488 E/AndroidRuntime(28128):     ... 4 more

    And the relevant logcat for the second:

        10-12 15:41:48.929 D/dalvikvm( 2451): GC_FOR_MALLOC freed 8099 objects / 524416 bytes in 34ms
        10-12 15:41:49.327 I/RPC ( 1563): rx thread timeout (1 clients):
        10-12 15:41:49.828 I/RPC ( 1563): rx thread timeout (1 clients):
        10-12 15:41:50.089 I/ActivityManager( 1563): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.companionfree.pushup/.MainScreen }
        10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1
        10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1
        10-12 15:41:50.139 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeafa50), index=0, pid=1563, w=480, h=800 success
        10-12 15:41:50.189 I/ActivityManager( 1563): Start proc com.companionfree.pushup for activity com.companionfree.pushup/.MainScreen: pid=2644 uid=10129 gids={1015, 3003}
        10-12 15:41:50.319 I/RPC ( 1563): rx thread timeout (1 clients):
        10-12 15:41:50.359 W/dalvikvm( 2644): VFY: Lcom/companionfree/pushup/WorkoutDbAdapter; is not instance of Landroid/app/Activity;
        10-12 15:41:50.369 W/dalvikvm( 2644): VFY: bad arg 0 (into Landroid/app/Activity;)
        10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting call to Lcom/amazon/android/Kiwi;.onActivityResult (Landroid/app/Activity;IILandroid/content/Intent;)Z
        10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting opcode 0x71 at 0x0000
        10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejected Lcom/companionfree/pushup/WorkoutDbAdapter;.onActivityResult (IILandroid/content/Intent;)V
        10-12 15:41:50.369 W/dalvikvm( 2644): Verifier rejected class Lcom/companionfree/pushup/WorkoutDbAdapter;
        10-12 15:41:50.369 D/AndroidRuntime( 2644): Shutting down VM
        10-12 15:41:50.369 W/dalvikvm( 2644): threadid=1: thread exiting with uncaught exception (group=0x40025a70)
        10-12 15:41:50.369 E/AndroidRuntime( 2644): FATAL EXCEPTION: main
        10-12 15:41:50.369 E/AndroidRuntime( 2644): java.lang.VerifyError: com.companionfree.pushup.WorkoutDbAdapter
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at com.companionfree.pushup.MainScreen.onCreateMainScreen(MainScreen.java)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at com.companionfree.pushup.MainScreen.onCreate(MainScreen.java)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1088)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2802)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2859)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.ActivityThread.access$2300(ActivityThread.java:136)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2179)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.os.Handler.dispatchMessage(Handler.java:99)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.os.Looper.loop(Looper.java:143)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at android.app.ActivityThread.main(ActivityThread.java:5073)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at java.lang.reflect.Method.invokeNative(Native Method)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at java.lang.reflect.Method.invoke(Method.java:521)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:858)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
        10-12 15:41:50.369 E/AndroidRuntime( 2644):     at dalvik.system.NativeStart.main(Native Method)
        10-12 15:41:50.379 W/ActivityManager( 1563): Force finishing activity com.companionfree.pushup/.MainScreen
        10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1
        10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1
        10-12 15:41:50.419 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeff6b8), index=0, pid=1563, w=480, h=337 success
        10-12 15:41:50.469 D/dalvikvm( 2451): GC_FOR_MALLOC freed 7889 objects / 521072 bytes in 105ms
        10-12 15:41:50.819 I/RPC ( 1563): rx thread timeout (1 clients):

    I see the same VerifyError on both, but I can't figure it out. The only common library used between the 2 apps is FlurryAgent.jar for analytics. For the top app I have … and for the bottom app I have … in the manifests. The only information I have been able to find is about libraries (GSON) and needing to use dx, but I am using Eclipse so that doesn't help. To make this more difficult, the error does NOT occur on the Android Market, yet the testers at Amazon say it force-closes (FC) 5/5 times on each of their devices (I tried using an emulator for their test devices and the apps worked fine). I know they use "wrapper" code around my app and I think it must be interfering in some way. Does anyone have any experience with this?

    Read the article

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c. It detects the sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB gives the following backtrace:

        (gdb) backtrace
        #0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        #1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        #2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        #3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with kmem_cache files_cachep?

    Read the article

  • How can I get read-ahead bytes?

    - by Bruno Martinez
    Operating systems read from disk more than what a program actually requests, because a program is likely to need nearby information in the future. In my application, when I fetch an item from disk, I would like to show an interval of information around the element. There's a trade off between how much information I request and show, and speed. However, since the OS already reads more than what I requested, accessing these bytes already in memory is free. What API can I use to find out what's in the OS caches? Alternatively, I could use memory mapped files. In that case, the problem reduces to finding out whether a page is swapped to disk or not. Can this be done in any common OS?

    Read the article
