Search Results

Search found 121 results on 5 pages for 'pickle'.

Page 4/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • How to parse strings representing xml.dom.minidom nodes in python?

    - by Francis Davey
    I have a collection of nodes (xml.dom.Node objects) created using xml.dom.minidom. I store them (individually) in a database by converting them to a string using the toxml() method of the Node object. The problem is that I'd sometimes like to be able to convert them back to the appropriate Node object using a parser of some kind. As far as I can see, the various libraries shipped with Python use Expat, which won't parse a string like '' or indeed anything which is not a correct XML string. So, does anyone have any ideas? I realise I could pickle the nodes in some way and then unpickle them, but that feels unpleasant and I'd much rather be storing them in a form I can read for maintenance purposes. Surely there is something that will do this?
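
    A minimal round-trip sketch, assuming the stored string is a well-formed XML fragment (the failure described above suggests the stored text may not be):

        import xml.dom.minidom as minidom

        node = minidom.parseString('<item lang="en">pickle</item>').documentElement
        text = node.toxml()                                   # what goes into the database
        restored = minidom.parseString(text).documentElement  # back to an Element node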

    Read the article

  • Why is this the output of this python program?

    - by Andrew Moffat
    Someone from #python suggested that it's searching for module "herpaderp" and finding all the ones listed as it's searching. If this is the case, why doesn't it list every module on my system before raising ImportError? Can someone shed some light on what's happening here?

        import sys

        class TempLoader(object):
            def __init__(self, path_entry):
                if path_entry == 'test':
                    return
                raise ImportError

            def find_module(self, fullname, path=None):
                print fullname, path
                return None

        sys.path.insert(0, 'test')
        sys.path_hooks.append(TempLoader)

        import herpaderp

    Output:

        16:00:55 $> python wtf.py
        herpaderp None apport None subprocess None traceback None pickle None struct None
        re None sre_compile None sre_parse None sre_constants None org None tempfile None
        random None __future__ None urllib None string None socket None _ssl None
        urlparse None collections None keyword None ssl None textwrap None base64 None
        fnmatch None glob None atexit None xml None _xmlplus None copy None org None
        pyexpat None problem_report None gzip None email None quopri None uu None
        unittest None ConfigParser None shutil None apt None apt_pkg None gettext None
        locale None functools None httplib None mimetools None rfc822 None urllib2 None
        hashlib None _hashlib None bisect None
        Traceback (most recent call last):
          File "wtf.py", line 14, in <module>
            import herpaderp
        ImportError: No module named herpaderp

    Read the article

  • I need to know what link is clicked, how do I get these variables with CherryPy?

    - by user291071
    Let's say I display 3 links. I want to accomplish 2 things: know which link is clicked, and record this choice in a list/pickle or txt file, but also capture this variable in CherryPy so I can perform another action. How do I do this? It's been suggested that I use a query string, which makes sense, but I can't get the query-string variable into CherryPy to use for further actions. So would anyone have some simple CherryPy code with, let's say, 2 pages, where one page displays 2 links with a query string in each and the second page is able to get that value?
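
    A minimal sketch of what that might look like (the handler and file names are my own, hypothetical choices); in CherryPy, query-string parameters arrive as keyword arguments on the exposed handler:

        import cherrypy

        class Root(object):
            @cherrypy.expose
            def index(self):
                # page one: two links, each carrying its choice in the query string
                return ('<a href="/choose?link=pickles">pickles</a> '
                        '<a href="/choose?link=onions">onions</a>')

            @cherrypy.expose
            def choose(self, link=None):
                # page two: record the choice, then do whatever else is needed with it
                with open('choices.txt', 'a') as f:
                    f.write('%s\n' % link)
                return 'You clicked %s' % link

        cherrypy.quickstart(Root())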

    Read the article

  • Subversion: Ignore a Directory in the Repo on Commit

    - by Charles
    I have all the Boost header files in this repository, and when I do a check-in it takes a really long time to scan all those files that will never change. Because I want users that check out the project to be able to compile without installing Boost, I am in a pickle. I want to check out everything, and then ignore updates (there will never be any) on a directory. TortoiseSVN has an ignore-on-commit change list, but I cannot find any way to add an entire directory to this list, and I do not fancy the idea of 'modifying' all the Boost files so I can add them to this change list. Is there a simple solution?

    Read the article

  • Pre-Pre-build Steps in Hudson....

    - by Spedge
    Hey, I'm in a bit of a pickle. I'm trying to run some environmental scripts before I run the build in an m2 project, but it seems no matter how hard I try, the 'pre' build scripts are never run early enough. Before the 'pre-build' scripts are run, the project checks to see if the correct files are in the workspace - files that won't be there until the scripts I've written are executed. To make them 'pre-build', I'm using the M2 Extra Steps plugin - but it's not 'pre' enough. Has anyone got any suggestions as to how I can carry out what I want to do? Cheers.

    Read the article

  • jquery :not selector not working in next() method

    - by Richard
    What is the next best thing to use when you want to select the next li item, but not the one that has someClassName? The :not selector returns an empty array! Or is this a case for using filter()?

        <li class="first">pickle</li>
        <li class="someClassName">tomato</li>
        <li>chicken</li>
        <li>cocosnut</li>

        var current = $('ul.items li.first');
        var next = current.next(':not(li.someClassName)');

    thanks, Richard

    Read the article

  • jQuery: how to pick unique IDs ?

    - by Seerumi
    Hello. New to this whole jQuery thing (and JavaScript altogether, heh) and so far it's been excellent, but now I'm in a small pickle. Let's say I have a list of forms generated from an SQL database and every single one of them has to have a unique id, so how can I select the specific item that is to be manipulated (changing values via PHP)? The $("#submit").click(function()) will trigger every submit button on the page, so how can I make #submit be whichever id I actually clicked? There might be a smarter way, but I'm new to this, so try to bear with me. I thought of passing the unique value with onClick="myfunction(unique_id)", but I don't know how it goes with jQuery. Hope this makes sense.

    Read the article

  • Exception message (Python 2.6)

    - by TurboJupi
    If I want to open a binary file (in Python 2.6) that doesn't exist, the program exits with an error and prints this:

        Traceback (most recent call last):
          File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 4, in <module>
            pkl_file = open('monitor.dat', 'rb')
        IOError: [Errno 2] No such file or directory: 'monitor.dat'

    I can handle this with 'try-except', like:

        try:
            pkl_file = open('monitor.dat', 'rb')
            monitoring_pickle = pickle.load(pkl_file)
            pkl_file.close()
        except Exception:
            print 'No such file or directory'

    Does anybody know how I could, in the caught exception, print the following line?

        File "C:\Python_tests\Exception_Handling\src\exception_handling.py", line 11, in <module>
          pkl_file = open('monitor.dat', 'rb')

    So the program would not exit, and I would have useful information.
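
    A minimal sketch using the standard traceback module, which prints that File/line information for the caught exception without letting the program exit:

        import pickle
        import traceback

        try:
            pkl_file = open('monitor.dat', 'rb')
            monitoring_pickle = pickle.load(pkl_file)
            pkl_file.close()
        except IOError:
            # writes the full "File ..., line ..." block to stderr and carries on
            traceback.print_exc()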

    Read the article

  • Regular Expression to recognise truncated forms of search string?

    - by Moonshield
    I'm trying to formulate a regular expression which will recognise the search term truncated by any number of characters from the right. For example, if the search term is "pickle", the regex should recognise "pi", "pick" but not "pickaxe". Initially I came up with the following:

        p(i(c(k(l(e)?)?)?)?)?

    That works perfectly, but seems a crude way of doing it. Is there a better way of doing this? I had a look around for something similar to what I want, but I'm not entirely sure what to search for.
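
    A sketch of one way to build such a pattern programmatically instead of nesting the groups by hand (the anchors and non-capturing groups are my own choices, not part of the question):

        import re

        def prefix_pattern(term):
            # builds ^p(?:i(?:c(?:k(?:l(?:e)?)?)?)?)?$ for "pickle"
            pattern = re.escape(term[-1])
            for ch in reversed(term[:-1]):
                pattern = '%s(?:%s)?' % (re.escape(ch), pattern)
            return '^%s$' % pattern

        rx = re.compile(prefix_pattern('pickle'))
        print bool(rx.match('pick'))     # True
        print bool(rx.match('pickaxe'))  # False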

    Read the article

  • index error:list out of range

    - by kaushik
        from string import Template
        from string import Formatter
        import pickle

        f=open("C:/begpython/text2.txt",'r')
        p='C:/begpython/text2.txt'
        f1=open("C:/begpython/text3.txt",'w')
        m=[]
        i=0
        k='a'
        while k is not '':
            k=f.readline()
            mi=k.split(' ')
            m=m+[mi]
            i=i+1
        print m[1]
        f1.write(str(m[3]))
        f1.write(str(m[4]))
        x=[]
        j=0
        while j<i:
            k=j-1
            l=j+1
            if j==0 or j==i:
                j=j+1
            else:
                xj=[]
                xj=xj+[j]
                xj=xj+[m[j][2]]
                xj=xj+[m[k][2]]
                xj=xj+[m[l][2]]
                xj=xj+[p]
                x=x+[xj]
                j=j+1
        f1.write(','.join(x))
        f.close()
        f1.close()

    It says line 33, xj=xj+[m[l][2]], has an index error: list index out of range. Please help, thanks in advance.
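
    A sketch of the likely boundary problem, assuming the reconstruction above: m ends up with i entries (indices 0 to i-1, the last of which is the split of the empty final readline), so on the last pass l = j+1 points past the usable data. One hedged way to restate the loop, assuming every real data line has at least three space-separated fields:

        # process only the lines that have a real neighbour on both sides,
        # keeping clear of the first line and the trailing empty readline
        for j in range(1, len(m) - 2):
            xj = [str(j), m[j][2], m[j - 1][2], m[j + 1][2], p]
            x = x + [xj]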

    Read the article

  • StringListProperty limited to 500 char strings (Google App Engine / Python)

    - by MarcoB
    It seems that StringListProperty can only contain strings up to 500 chars each, just like StringProperty... Is there a way to store longer strings than that? I don't need them to be indexed or anything. What I would need would be something like a "TextListProperty", where each string in the list can be any length and not limited to 500 chars. Can I create a property like that? Or can you experts suggest a different approach? Perhaps I should use a plain list and pickle/unpickle it in a Blob field, or something like that? I'm a bit new to Python and GAE and I would greatly appreciate some pointers instead of spending days on trial and error...thanks!
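
    A minimal sketch of the pickle-into-a-Blob idea mentioned above (the model and method names here are made up for illustration); the pickled list is stored unindexed, so the 500-character limit on indexed strings does not apply:

        import pickle
        from google.appengine.ext import db

        class Article(db.Model):
            texts_blob = db.BlobProperty()   # holds the pickled list of long strings

            def set_texts(self, texts):
                self.texts_blob = db.Blob(pickle.dumps(texts, pickle.HIGHEST_PROTOCOL))

            def get_texts(self):
                return pickle.loads(self.texts_blob) if self.texts_blob else []

    Whether db.ListProperty(db.Text) is accepted as a simpler alternative is worth checking in the datastore documentation.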

    Read the article

  • python challenge, but for C++

    - by davidthepsycho
    Does anyone know any site or book that presents problems like Python Challenge, but for C++? When I say Python Challenge, I do not mean only a set of problems to be solved with C++ (for that I could probably use the same problems as Python Challenge), but rather problems that will probably be best solved using the C++ STL, special features of the language, etc. For example, there is one Python Challenge problem that is specifically designed to teach you how to use pickle, a serialization library for Python. Until now, I only know of programming contest problems, but those could also be solved with C, Java or other languages.

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the -directed- edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation: I currently use a moderately complex system to serialize the data used in my projects: "marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface). Those objects can then be serialized automatically (eg: obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' has a reference to, directly s.t. a.b = b, or indirectly s.t. a.c.b = b for some object 'c'. This is very simple and basically defines specific storage entries for specific objects. I then have "container" type objects that can be serialized similarly (in fact they are, or can be, "marked"), but they don't serialize in their storage entries the "marked" objects they reference directly: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving can only work through direct connections in "container" objects, and there are situations with undefined behavior, such as if two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading. I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in Python, and use pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with the "attach to last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about one.
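
    To make the "attach to the marked node you were reached from" idea concrete, here is a minimal traversal sketch (the function and callback names are my own, not from the question); it also shows where the shared-unmarked-object ambiguity appears: whichever marked root reaches 'c' first claims it:

        def cluster(marked_roots, is_marked, references):
            """Map id(obj) -> the marked object whose storage entry will hold it."""
            owner = {}
            for root in marked_roots:
                stack = [root]
                while stack:
                    obj = stack.pop()
                    if id(obj) in owner:
                        continue                 # already claimed by an earlier root
                    owner[id(obj)] = root
                    for child in references(obj):
                        if not is_marked(child): # marked children get their own entry
                            stack.append(child)
            return owner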

    Read the article

  • Can working exclusively with niche apps or tech hurt your career in software development? How to get out of the cycle? [closed]

    - by Keoma
    I'm finding myself in a bit of a pickle. I've been at a pretty comfortable IT group for almost a decade. I got my start here working on web development, mostly CRUD, but have demonstrated the ability to figure out more complex problems. I'm not a rock star, but I have received many compliments on my programming aptitude, and technologists and architects have commented on my ability to pick things up (for example, I recently learned a very popular web framework that shall remain nameless since I don’t want to be identified). My problem is that, over time, my responsibilities have been shifting towards work such as support or ‘development’ with some rather niche products (afraid to mention here due to potential for being identified). Some of this work, if it includes anything resembling coding, is very menial scripting in languages such as Powershell or VBScript. The vast majority of the time, however, a typical day consists of going back and forth with the product’s vendor support to send them logs and apply configuration changes or patches they recommend. I’m basically starved for some actual software development. However, even though I’m more than capable of doing that development work (and actually do a much better job at it than anything else), our boss is more interested in the kind of work I mentioned above, her reasoning being that since no one else in the organization wants to do it, it must mean job security. This has been going on for close to 3 years, and the only reason I have held on is the promise that we would eventually get more development projects assigned to us. Well, that turned out not to be true at all. A recent talk with the boss has just made it more explicitly clear, as she told me in no uncertain terms that it’s very likely that development work (web or otherwise) would go to another group. The reason given to me is that we don’t have enough resources in our group to handle that. So now I find myself in the position that I either have to stay in what has essentially become a dead end IT job that is tied to the fortunes of a niche stack of apps, or try to find a position that will be better for my long term career. My problem (is it a problem?), however, is that compared to others, my development projects in the last three years are very sparse in number. To compound things, projects using the latest and most popular frameworks amount to the big fat number of just one—with no work of that kind in the foreseeable future. I am very concerned that this sparseness in my resume is a deficit, and that it will hurt my chances of landing a different job. I’m also wondering how much it will hurt me, and whether that can be ameliorated with hobby projects of my own. I guess I’m looking for opinions. Thank you very much for reading.

    Read the article

  • Should I go back to college and graduate with a poor GPA or try to jump into an entry-level development position? [closed]

    - by jshin47
    I once attended a top-10 American university but I am currently not in school for several different reasons. Chief among them is that I did very poorly in two semesters and even failed one of them (got two F's), which put me in automatic suspension. My major is not CS but math. I am in a pickle at the moment. After I was suspended I got a job at a niche IT company in the area. I am employed as something of an IT generalist; my primary responsibilities are Windows systems administration/networking but I also do some Android, iOS, and .NET development. I have released a few apps to the app store under my name and my company's name, and we have done work for a few big clients. I started working at my job about 1.5 years ago and I am somewhat happily employed but I do not see it as a long-term fit because it is a small company with little opportunity to advance. I would like to move out to California and particularly to the Bay Area to get a job at a more reputable or exciting company, even at a lower rate of pay, but I am not sure if I should do that or try to go back to school. If I went back to school, it would take 1-1.5 years to graduate and some $. Best case scenario I would graduate with a 2.9 or 3.0 GPA. It is a top-10 school, but that's a crappy GPA. If I do not go back to school, I will be in a field where most people have degrees, without a degree. If anything goes wrong I could be really screwed as I feel I will get no respect without a degree. On the other hand I really would like to get started in the field and get more serious about developing good development practices, learning new languages/frameworks, and working with people who know a lot more than I so I can learn and grow as a developer and eventually do my own thing. Basically, I am wondering: Should I just go back to school? How much does the bad GPA / good school reputation weigh in? What about the fact that I am a Math major and not a CS major (have never taken a CS course)? Does my skill set as something of a generalist bode well for me finding work at a start up in the Bay Area? If not (2), should I hunker down and focus on producing a really good (or a few mediocre) iOS apps? Android apps? etc... How would you look at someone who did great in HS, kind of goofed off in college and eventually quit, and got into development? Thanks for any thoughts or input.

    Read the article

  • password incorrect 3 times + suspected failed update

    - by Cheese
    I have been lurking your site for the past few hours, and have found myself in a bit of a pickle. Visiting my parents, I discover that neither the computer nor the laptop works. Long story short, I've got the laptop working, but have completely fudged up the computer. I am a n00b, but I was at least willing to give it a go. The comp originally had Ubuntu 11.10 installed, later updated to 12.04. We have CDs for both. I do not understand what the initial problem was for my parents, but somehow when I turned on the computer, it worked for me. Soon after, I was nagged to install the latest updates. So, I spent the next half an hour wondering why the updates kept on asking for 11.04 cdroms, until I realised that you could turn off the cdrom necessity. After doing this via console, I installed some of the smaller updates, before being told to do a partial update. This failed a few times, and ended up freezing whilst reinstalling drivers. After a hard restart I continued to type whatever I could find on the forum into the console. At some point, the console started saying that I had 3 incorrect password inputs, and sudo commands stopped altogether. I found another thread discussing this, but people kept on suggesting changing passwords (which I did to no avail) or other things that made use of sudo (which I am locked out of, although I am technically the admin). I found myself somehow on the Ctrl+Alt+F1 console, and after being utterly confused (and Ctrl+Alt+F5 failing for me), another hard reset occurred. Somewhere along the way I created a USB startup disk for 14.04 (but this does not seem to work). Now I am left with admin (and guest) accounts that log in but have blank screens (with only the desktop background showing) and I can't do anything in the console because I'm locked out. Interestingly, the console now says that I am running 14.04 although all updates said they had failed. Aside from the obvious lessons I have learnt (don't fiddle about in the console when you have no idea what you're doing - a "dog wearing safety glasses 'I have no idea what I am doing'" GIF would be inserted here), is there any way I can redeem this almighty muck-up? A million thanks for any help!

    Read the article

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed. What I need to do: There are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: Traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (and return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets coming across destined for those IPs to end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 - 10.1.20.125). I need to insert whatever rules I have into the IPCop ruleset through /etc/rc.local, I know; I'm just not sure about how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently just consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting to a new target (maybe called TO-ORANGE) and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA! edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets; I guess I could alternatively put the rules in there....

    Read the article

  • network policy + WPA enterprise (tkip) Windows 2008 R2

    - by Aceth
    Hi, I've attempted the following guide and am in a bit of a pickle: http://techblog.mirabito.net.au/?p=87 My main goal is to have username/password-based wireless authentication with Active Directory integration. I keep getting the error:

        Network Policy Server denied access to a user.
        Contact the Network Policy Server administrator for more information.
        User:
            Security ID: domain\rhysbeta
            Account Name: rhysbeta
            Account Domain: domain
            Fully Qualified Account Name: domain\rhysbeta
        Client Machine:
            Security ID: NULL SID
            Account Name: -
            Fully Qualified Account Name: -
            OS-Version: -
            Called Station Identifier: 00-12-BF-00-71-3C:wirelessname
            Calling Station Identifier: 00-23-76-5D-1E-31
        NAS:
            NAS IPv4 Address: 0.0.0.0
            NAS IPv6 Address: -
            NAS Identifier: -
            NAS Port-Type: Wireless - IEEE 802.11
            NAS Port: 2
        RADIUS Client:
            Client Friendly Name: Belkin54g
            Client IP Address: x.x.x.10
        Authentication Details:
            Connection Request Policy Name: Secure Wireless Connections
            Network Policy Name: Secure Wireless Connections
            Authentication Provider: Windows
            Authentication Server: srvr.example.com
            Authentication Type: EAP
            EAP Type: -
            Account Session Identifier: -
        Logging Results: Accounting information was written to the local log file.
        Reason Code: 22
        Reason: The client could not be authenticated because the Extensible Authentication Protocol (EAP) Type cannot be processed by the server.

    I would love to have it so that non domain devices

    Read the article

  • How to get desired FireFox last tab behavior?

    - by JustJeff
    All tabs should be the same; so if any of them have a 'close' button, they all should, including the last tab. I see no reason that a tab's close button should suddenly vanish simply b/c that tab has become the last one open. If I have N tabs open, and park the mouse over the left-most tab's close button, this vanishing close button trick means now I have to make a large mouse move to get to the app's close button. Unsat. Mouse moves = too many milliseconds wasted. Closing the last tab should NOT take me to my home page, or any other page whatsoever. I want the browser to close with the last tab. I do not expect or want "new tab" behavior when I click a Close button. Now, I've gone into about:config and played with browser.tabs.closeWindowWithLastTab, but this setting oversteps its purpose; while it does make the browser close, for some inexplicable reason, it also suppresses the last tab's close button! I have tried the "last tab close button" add-on, and while this does restore the close button, the add-on oversteps by taking the liberty of turning closeWindowWithLastTab off. Is there some way out of this pickle? Is it too hard to just code things to provide simple, orthogonal actions, so that everybody can config the UI to their liking, and not just to a few pre-fab configurations that the developers think everyone should like? Btw, FF 13.0.1 on MS Windows.

    Read the article

  • Why Haven’t NFC Payments Taken Off?

    - by David Dorf
    With the EMV 2015 milestone approaching rapidly, there’s been renewed interest in smartcards, those credit cards with an embedded computer chip.  Back in 1996 I was working for a vendor helping Visa introduce a stored-value smartcard to the US.  Visa Cash was debuted at the 1996 Olympics in Atlanta, and I firmly believed it was the beginning of a cashless society.  (I later worked on MasterCard’s system called Mondex, from the UK, which debuted the following year in Manhattan). But since you don’t have a Visa Cash card in your wallet, it’s obvious the project never took off.  It was convenient for consumers, faster for merchants, and more cost-effective for banks, so why did it fail?  All emerging payment systems suffer from the chicken-and-egg dilemma.  Consumers won’t carry the cards if few merchants accept them, and merchants won’t install the terminals if few consumers have cards. Today’s emerging payment providers are in a similar pickle.  There has to be enough value for all three constituents – consumers, merchants, banks – to change the status quo.  And it’s not enough to exceed the value, it’s got to be a leap in value, because people generally resist change.  ATMs and transit cards are great examples of this, and airline kiosks and self-checkout systems are to a lesser extent. Although Google Wallet and ISIS, the two leading NFC payment platforms in the US, have shown strong commitment, there’s been very little traction.  Yes, I can load my credit card number into my phone then tap to pay, but what was the incremental value over swiping my old card?  For it to be a leap in value, it has to offer more than just payment, which I can do very easily today.  The other two ingredients are thought to be loyalty programs and digital coupons, but neither Google nor ISIS really did them well. Of course a large portion of the mobile phone market doesn’t even support NFC thanks to Apple, and since it’s not in their best interest that situation is unlikely to change.  Another issue is getting access to the “secure element,” the chip inside the phone where accounts numbers can be held securely.  Telco providers and handset manufacturers own that area, and they’re not willing to share with banks.  (Host Card Emulation, which has been endorsed by MasterCard and Visa, might be a solution.) Square recently gave up on its wallet, and MCX (the group of retailers trying to create a mobile payment platform) is very slow out of the gate.  That leaves PayPal and a slew of smaller companies trying to introduce easier ways to pay. But is it really so cumbersome to carry and swipe (soon to insert) a credit card?  Aren’t there more important problems to solve in the retail customer experience?  Maybe Apple will come up with some novel way to use iBeacons and fingerprint identification to make payments, but for now I think we need to focus on upgrading to Chip-and-PIN and tightening security.  In the meantime, NFC payments will continue to struggle.

    Read the article

  • Rails: Should partials be aware of instance variables?

    - by Alexandre
    Ryan Bates' nifty_scaffolding, for example, does this:

        edit.html.erb
          <%= render :partial => 'form' %>

        new.html.erb
          <%= render :partial => 'form' %>

        _form.html.erb
          <%= form_for @some_object_defined_in_action %>

    That hidden state makes me feel uncomfortable, so I usually like to do this:

        edit.html.erb
          <%= render :partial => 'form', :locals => { :object => @my_object } %>

        _form.html.erb
          <%= form_for object %>

    So which is better: a) having partials access instance variables or b) passing a partial all the variables it needs? I've been opting for b) as of late, but I did run into a little pickle:

        some_action.html.erb
          <% @dad.sons.each do |a_son| %>
            <%= render :partial => 'partial', :locals => { :son => a_son } %>
          <% end %>

        _partial.html.erb
          The son's name is <%= son.name %>
          The dad's name is <%= son.dad.name %>

    son.dad makes a database call to fetch the dad! So I would either have to access @dad, which would be going back to a) having partials access instance variables, or I would have to pass @dad in locals, changing the render :partial call to <%= render :partial => 'partial', :locals => { :dad => @dad, :son => a_son } %>, and for some reason passing a bunch of vars to my partial makes me feel uncomfortable. Maybe others feel this way as well. Hopefully that made some sense. Looking for some insight into this whole thing... Thanks!

    Read the article

  • Running Webrat with Selenium

    - by yuval
    I set up Cucumber+Webrat+Selenium according to this article. Whenever I run my server, though, I keep getting:

        ERROR Server Exception: sessionId should not be null; has this session been started yet? (Selenium::CommandError)

    Two hours on Google haven't done much for me. Could you please help out? Thanks! I am working on Ruby 1.8.7 and Rails 2.3.5 on Mac OS X 10.6. My installed gems in test.rb are:

        config.gem "database_cleaner", :lib => false, :version => ">=0.5.0"
        config.gem "rspec", :lib => false, :version => ">=1.2.2"
        config.gem "rspec-rails", :lib => false, :version => ">=1.2.2"
        config.gem "webrat", :lib => false, :version => ">=0.4.4"
        config.gem "cucumber", :lib => false, :version => ">=0.3.0"
        config.gem "thoughtbot-factory_girl", :lib => "factory_girl", :source => "http://gems.github.com"
        config.gem "pickle", :lib => false, :version => ">= 0.1.21"

    Thank you very much!

    Read the article

  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers: I have a cron job which executes for a particular user daily and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:

        May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
        May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {stats,popular,bcf,articles,expenses})

    I have removed the actual usernames and formatted it so that it is less ugly on StackOverflow. Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory, the files are still there. I initially suspected permission hijinx were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped despite days of that cron job executing successfully. Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which was the point where they appear to have stopped working.

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is, what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
          \-testmoda
          \-testmodb
          \-testmodc

    In some cases the test module (i.e. testmoda) is nothing but test_something(), test_something2() functions, while in some cases we have a TestModB class in testmodb with the test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob P.S. The connection objects are not pickle-able. :(
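
    One common pattern (a sketch, not necessarily the best fit here; make_connection() is a hypothetical factory) is to build the shared connection in a package-level nose fixture and have the test modules look it up, instead of passing it as an argument:

        # testlib/__init__.py
        connection = None

        def setup_package():
            global connection
            connection = make_connection()   # one connection shared by every test in the package

        def teardown_package():
            connection.close()

        # testlib/testmoda.py
        import testlib

        def test_something1():
            # look the connection up at call time, after setup_package() has run
            assert testlib.connection is not None
            # ... exercise a request using testlib.connection ...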

    Read the article

  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with python 2.6.6 on ubuntu Maverick, with Oracle 11gR2 Enterprise edition. I am able to connect to my oracle db just fine, but once I do that, the subprocess module does not work anymore. Here is an iPython session that reproduces the problem... In [1]: import subprocess as sp, cx_Oracle as dbh In [2]: sp.call(['whoami']) sharat Out[2]: 0 In [3]: con = dbh.connect('system', 'password') In [4]: con.close() In [5]: sp.call(['whomai']) --------------------------------------------------------------------------- OSError Traceback (most recent call last) /home/sharat/desk/calypso-launcher/<ipython console> in <module>() /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs) 468 retcode = call(["ls", "-l"]) 469 """ --> 470 return Popen(*popenargs, **kwargs).wait() 471 472 /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags) 621 p2cread, p2cwrite, 622 c2pread, c2pwrite, --> 623 errread, errwrite) 624 625 if mswindows: /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) 1134 1135 if data != "": -> 1136 _eintr_retry_call(os.waitpid, self.pid, 0) 1137 child_exception = pickle.loads(data) 1138 for fd in (p2cwrite, c2pread, errread): /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args) 453 while True: 454 try: --> 455 return func(*args) 456 except OSError, e: 457 if e.errno == errno.EINTR: OSError: [Errno 10] No child processes So, the call to sp.call works fine before connecting to oracle, but breaks after that. Even if I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but it didn't help me. Any help appreciated. Thanks.

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >