Search Results

Search found 16680 results on 668 pages for 'python datetime'.

Page 407 of 668

  • grabbing a substring while scraping with Python2.6

    - by Diego
    Hey, can someone help with the following? I'm trying to scrape a site that has the following information. I need to pull just the number after the </strong> tag: [<li><strong>ISBN-13:</strong> 9780375853401</li>, <li><strong>Pub. Date: </strong> 05/11/2010</li>] [<li><strong>UPC:</strong> 490355000372</li>, <li><strong>Catalog No:</strong> 15024/25</li>, <li><strong>Label:</strong> CAMERATA</li>] Here's a piece of the code I've been using to grab the above data using mechanize and BeautifulSoup. I'm stuck here, as it won't let me use the find() function on a list: br_results = mechanize.urlopen(br_results) html = br_results.read() soup = BeautifulSoup(html) local_links = soup.findAll("a", {"class" : "down-arrow csa"}) upc_code = soup.findAll("ul", {"class" : "bc-meta3"}) for upc in upc_code: upc_text = upc.contents.contents print upc_text
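
    One way to get at just the values (an untested sketch, against the BeautifulSoup 3 API used above): findAll() returns a plain list, so call find()/findAll() on each element of it rather than on the list itself, and take the text node that follows each <strong> tag.

        for ul in soup.findAll("ul", {"class": "bc-meta3"}):
            for li in ul.findAll("li"):              # iterate the <li> tags inside each matched <ul>
                label = li.strong.string             # e.g. u'ISBN-13:'
                value = li.strong.nextSibling        # the text node right after </strong>
                print label, value.strip()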

    Read the article

  • Programmatically Determining Bin Path

    - by Andy
    I'm working on a web app called pj and there is a bin file and a src folder. The relative paths before I deploy the app will look something like: pj/bin and pj/src/pj/script.py. However, after deployment, the relative paths will look like: pj_dep/deployed/bin and pj_dep/deployed/lib/python2.6/site-packages/pj/script.py Question: Within script.py, I am trying to find the path of a file in the bin directory. This leads to 2 different behaviors in the dev and deployment environment. If I do os.path.join(os.path.dirname(__file__), 'bin') to try to get the path for the dev environment, I will have a different path for the deployment environment. Is there a more generalized way I can find the bin directory so that I do not need to rely on an if statement to determine how many directories to go up based on the current env? This doesn't seem flexible and might cause other issues later on when the code is moved.
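
    One hedged way to avoid the environment-specific if statement (a sketch, not tied to any particular deployment tool): walk upward from __file__ until a sibling bin directory appears, which works for both the pj/src/pj and the site-packages layouts described above.

        import os

        def find_bin_dir():
            """Walk up from this file until a directory containing 'bin' is found."""
            path = os.path.abspath(os.path.dirname(__file__))
            while True:
                candidate = os.path.join(path, 'bin')
                if os.path.isdir(candidate):
                    return candidate
                parent = os.path.dirname(path)
                if parent == path:     # hit the filesystem root without finding bin
                    raise IOError("no bin directory found above %s" % __file__)
                path = parent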

    Read the article

  • Conditional row coloring in a PocketPyGUI table (PythonCE)

    - by PabloG
    I'm working on a PythonCE application, using the PocketPyGUI toolkit. I'm using the gui.Table control to display a large list of choices (addresses, codes and associated data), and I want to assign a different color to the rows that have been completed. Is there any way to colorize the rows given certain conditions? TIA, Pablo

    Read the article

  • Django and mod_python intermittent error?

    - by Peter
    I have a Django site at http://sm.rutgers.edu/relive/af_api/index/. It is supposed to display "Home of the relive APIs". If you refresh this page many times, you can see different renderings: 1) The expected page. 2) The Django "It worked!" page. 3) An "ImportError at /index/" page. If you scroll down to the ROOT_URLCONF part, you will see it says 'relive.urls'. But apparently, it should be 'af_api.urls', which is in my settings.py file. Since these results happen randomly, is it possible that either Django or mod_python is behaving erratically?

    Read the article

  • Moving a turtle to the center of a circle.

    - by Maggie
    I've just started using the turtle graphics module, but I can't figure out how to move the turtle automatically to the center of a circle (no matter where the circle is located) without it drawing any lines. I thought I could use the goto() function, but it's too specific and I need something general.
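
    A minimal sketch, assuming the center coordinates (cx and cy here are placeholders) are known or can be computed: lift the pen, jump, and put the pen back down.

        import turtle

        def jump_to_center(cx, cy):
            """Move the turtle to (cx, cy) without drawing a line."""
            turtle.penup()        # stop drawing
            turtle.goto(cx, cy)   # move to the circle's center
            turtle.pendown()      # resume drawing

    As a rule of thumb (worth double-checking against your drawing code): turtle.circle(r) draws the circle with its center r units to the turtle's left, so with the default eastward heading a circle started at (x, y) is centered at (x, y + r).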

    Read the article

  • Do Django Models inherit managers? (Mine seem not to)

    - by Zach
    I have 2 models: class A(Model): #Some Fields objects = ClassAManager() class B(A): #Some B-specific fields I would expect B.objects to give me access to an instance of ClassAManager, but this is not the case.... >>> A.objects <app.managers.ClassAManager object at 0x103f8f290> >>> B.objects <django.db.models.manager.Manager object at 0x103f94790> Why doesn't B inherit the objects attribute from A?
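
    This is consistent with how Django documented manager inheritance at the time: managers defined on a non-abstract base are not inherited by child models, while managers defined on an abstract base are. A sketch of the usual workaround (the base-class name is made up, and the behavior is worth verifying against your Django version):

        from django.db import models

        class ManagedBase(models.Model):
            objects = ClassAManager()   # assumes ClassAManager is importable here

            class Meta:
                abstract = True         # managers on abstract bases are inherited

        class A(ManagedBase):
            # Some fields
            pass

        class B(A):
            # Some B-specific fields
            pass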

    Read the article

  • socket.accept error 24: Too many open files

    - by Creotiv
    I have a problem with open files on Ubuntu 9.10 when running a server in Python 2.6, and the main problem is that I don't know why it happens. I have set ulimit -n = 999999, net.core.somaxconn = 999999, fs.file-max = 999999, and lsof reports about 12000 open files while the server is running. I'm also using epoll. But after some time it starts raising an exception: File "/usr/lib/python2.6/socket.py", line 195, in accept error: [Errno 24] Too many open files And I don't know how it can hit the file limit when the limit hasn't actually been reached. Thanks for the help.
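
    One thing worth checking (a hedged sketch, not a definitive diagnosis): the ulimit set in a shell does not necessarily apply to the process that actually runs the server (for example when it is started by init, cron, or another user), so it can help to print and raise the limit from inside the process itself.

        import resource

        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print "RLIMIT_NOFILE: soft=%d hard=%d" % (soft, hard)   # what this process really got

        # raise the soft limit up to the hard limit (going beyond the hard limit needs privileges)
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

    If the limit really is in effect, the other usual culprit is sockets that are never close()d after errors or client disconnects, which an lsof snapshot taken at the moment of failure would show.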

    Read the article

  • Jython Java call throws exception asking for 2 args when only one arg is coded

    - by clutch
    I have a Java method I want to call within my Jython servlet running on Tomcat 5. It looks like this: @SuppressWarnings("unchecked") public School loadByName(String name) { List<School> school; school = getHibernateTemplate().find("from " + getPersistentClass().getName() + " where name = ?", name); return uniqueResult(school); } I call it in Jython using: foobar = SchoolDAOHibernate.loadByName('Univeristy') It throws an error that says loadByName() expects 2 args; got 1. What other argument could it be looking for?
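
    The second argument is almost certainly self: loadByName() is an instance method, and calling it on the class (SchoolDAOHibernate.loadByName(...)) leaves the instance slot unfilled. A sketch, assuming the DAO can be obtained as a configured instance (for example from your Spring application context) rather than constructed bare:

        dao = SchoolDAOHibernate()                # or fetch the wired-up bean from your context
        school = dao.loadByName('University')     # self is now supplied implicitly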

    Read the article

  • SQLAlchemy declarative syntax with autoload in Pylons

    - by Juliusz Gonera
    I would like to use autoload to use an existing database. I know how to do it without declarative syntax (model/__init__.py): def init_model(engine): """Call me before using any of the tables or classes in the model""" t_events = Table('events', Base.metadata, schema='events', autoload=True, autoload_with=engine) orm.mapper(Event, t_events) Session.configure(bind=engine) class Event(object): pass This works fine, but I would like to use declarative syntax: class Event(Base): __tablename__ = 'events' __table_args__ = {'schema': 'events', 'autoload': True} Unfortunately, this way I get: sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData. Pass an engine to the Table via autoload_with=<someengine>, or associate the MetaData with an engine via metadata.bind=<someengine> The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's available in init_model()). I tried adding meta.Base.metadata.bind(engine) to environment.py but it doesn't work. Has anyone found an elegant solution?
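
    One hedged workaround (a sketch mirroring the init_model() above): since the declarative class body runs at import time, before init_model() has an engine, defer the reflected class definition into init_model() itself and pass the engine through autoload_with. Note that metadata.bind is an attribute assignment, not a call.

        def init_model(engine):
            """Call me before using any of the tables or classes in the model"""
            Base.metadata.bind = engine

            global Event
            class Event(Base):
                __tablename__ = 'events'
                __table_args__ = {'schema': 'events',
                                  'autoload': True,
                                  'autoload_with': engine}

            Session.configure(bind=engine)

    Later SQLAlchemy releases also offer DeferredReflection for exactly this pattern, which may read more cleanly than the global, if your version has it.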

    Read the article

  • Parsing html for domain links

    - by Hallik
    I have a script that parses an HTML page for all the links within it. I am getting all of them fine, but I have a list of domains I want to compare them against. So a sample list contains list=['www.domain.com', 'sub.domain.com'] But I may have a list of links that look like http://domain.com http://sub.domain.com/some/other/page I can strip off the http:// just fine, but in the two example links I just posted, they both should match. The first I would like to match against www.domain.com, and the second I would like to match against the subdomain in the list. Right now I am using urllib2 for parsing the HTML. What are my options for completing this task?
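
    A sketch of one way to do the comparison, assuming the intent is "the link's hostname equals a listed domain, or is a subdomain of it, with www. treated as the bare domain":

        from urlparse import urlparse   # Python 2; urllib.parse.urlparse on Python 3

        domains = ['www.domain.com', 'sub.domain.com']

        def strip_www(host):
            return host[4:] if host.startswith('www.') else host

        def best_match(link, domains):
            host = strip_www(urlparse(link).netloc.lower())
            # prefer an exact hostname match, then fall back to a parent-domain match
            for d in domains:
                if host == strip_www(d.lower()):
                    return d
            for d in domains:
                if host.endswith('.' + strip_www(d.lower())):
                    return d
            return None

        print best_match('http://domain.com', domains)                      # www.domain.com
        print best_match('http://sub.domain.com/some/other/page', domains)  # sub.domain.com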

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination: links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300] This is my "hot" page, which lists my 300 latest submissions (10 pages of 30 links each). I now want to sort this QuerySet by the algorithm that Hacker News uses: (p - 1) / (t + 2)^1.5 p = votes minus submitter's initial vote t = age of submission in hours Now, because applying this algorithm over the entire database would be pretty costly, I am content with just the last 300 submissions. My site is unlikely to be the next digg/reddit, so while scalability is a plus, it is not required. My question now is: how do I iterate over my QuerySet and sort it by the above algorithm? For more information, here are my applicable models: class Link(models.Model): category = models.ForeignKey(Category, blank=False, default=1) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) url = models.URLField(max_length=1024, unique=True, verify_exists=True) name = models.CharField(max_length=512) def __unicode__(self): return u'%s (%s)' % (self.name, self.url) class Vote(models.Model): link = models.ForeignKey(Link) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) def __unicode__(self): return u'%s vote for %s' % (self.user, self.link) Notes: I don't have "downvotes", so just the presence of a Vote row is an indicator of a vote for a particular link by a particular user.
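
    Since the slice is only 300 rows, one hedged approach is to rank in Python after the query with sorted(), building the key from the annotated vote count and the created timestamp (a sketch; the - 1 assumes the submitter's own Vote row is stored like any other, and the datetimes are naive as in the models above):

        from datetime import datetime

        def hotness(link):
            # (p - 1) / (t + 2) ** 1.5, with t = age of the submission in hours
            delta = datetime.now() - link.created
            age_hours = (delta.days * 86400 + delta.seconds) / 3600.0
            return (link.votes - 1) / ((age_hours + 2) ** 1.5)

        hot_links = sorted(links, key=hotness, reverse=True)   # hottest first; hand this list to the generic view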

    Read the article

  • How to inject a key string to an Android device through ADB?

    - by Nandi
    Hi, can somebody help me with the following? I want to select a particular string in a list displayed on an Android phone. Taking the phone book as an example: I want to pass a person's name to the device using the adb interface, and that name should get highlighted in the list. I tried all the adb commands for this; I could pass string and key events to the screen, but was not able to select the respective string. Please help. Thanks in advance.

    Read the article

  • Regular Expression Question

    - by zyq524
    I'm trying to use a regular expression to extract the comments at the top of a file. For example, the source code may look like: //This is an example file. //Please help me. #include "test.h" int main() //main function { ... } What I want to extract from the code are the first two lines, i.e. //This is an example file. //Please help me. Any ideas?
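
    A sketch of one way in Python, assuming the file contents are already read into a string (the filename here is hypothetical): anchor the pattern at the start of the text and collect consecutive // lines, stopping at the first line that is not a comment.

        import re

        source = open('example.c').read()   # hypothetical file

        match = re.match(r'\A(?:[ \t]*//[^\n]*\n)+', source)
        header = match.group(0).rstrip() if match else ''
        print header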

    Read the article

  • SUDS rendering a duplicate node and wrapping everything in it

    - by PylonsN00b
    Here is my code: #Make the SOAP connection url = "https://api.channeladvisor.com/ChannelAdvisorAPI/v1/InventoryService.asmx?WSDL" headers = {'Content-Type': 'text/xml; charset=utf-8'} ca_client_inventory = Client(url, location="https://api.channeladvisor.com/ChannelAdvisorAPI/v1/InventoryService.asmx", headers=headers) #Make the SOAP headers login = ca_client_inventory.factory.create('APICredentials') login.DeveloperKey = 'REMOVED' login.Password = 'REMOVED' #Attach the headers ca_client_inventory.set_options(soapheaders=login) synch_inventory_item_list = ca_client_inventory.factory.create('SynchInventoryItemList') synch_inventory_item_list.accountID = "REMOVED" array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit') for product in products: inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit') inventory_item_list = get_item_list(product) inventory_item_submit = [inventory_item_list] array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit) synch_inventory_item_list.itemList = array_of_inventory_item_submit #Call that service baby! ca_client_inventory.service.SynchInventoryItemList(synch_inventory_item_list) Here is what it outputs: <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:ns0="http://api.channeladvisor.com/webservices/" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tns="http://api.channeladvisor.com/webservices/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header> <tns:APICredentials> <tns:DeveloperKey>REMOVED</tns:DeveloperKey> <tns:Password>REMOVED</tns:Password> </tns:APICredentials> </SOAP-ENV:Header> <ns1:Body> <ns0:SynchInventoryItemList> <ns0:accountID> <ns0:accountID>REMOVED</ns0:accountID> <ns0:itemList> <ns0:InventoryItemSubmit> <ns0:Sku>1872</ns0:Sku> <ns0:Title>The Big Book Of Crazy Quilt Stitches</ns0:Title> <ns0:Subtitle></ns0:Subtitle> <ns0:Description>Embellish the seams and patches of crazy quilt projects with over 75 embroidery stitches and floral motifs. 
You&apos;ll use this handy reference book again and again to dress up wall hangings, pillows, sachets, clothing, and other nostalgic creations.</ns0:Description> <ns0:Weight>4</ns0:Weight> <ns0:FlagStyle/> <ns0:IsBlocked xsi:nil="true"/> <ns0:ISBN></ns0:ISBN> <ns0:UPC>028906018721</ns0:UPC> <ns0:EAN></ns0:EAN> <ns0:QuantityInfo> <ns0:UpdateType>UnShipped</ns0:UpdateType> <ns0:Total>0</ns0:Total> </ns0:QuantityInfo> <ns0:PriceInfo> <ns0:Cost>0.575</ns0:Cost> <ns0:RetailPrice xsi:nil="true"/> <ns0:StartingPrice xsi:nil="true"/> <ns0:ReservePrice xsi:nil="true"/> <ns0:TakeItPrice>6.95</ns0:TakeItPrice> <ns0:SecondChanceOfferPrice xsi:nil="true"/> <ns0:StorePrice>6.95</ns0:StorePrice> </ns0:PriceInfo> <ns0:ClassificationInfo> <ns0:Name>Books</ns0:Name> <ns0:AttributeList> <ns0:ClassificationAttributeInfo> <ns0:Name>Designer/Author</ns0:Name> <ns0:Value>Patricia Eaton</ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Trim Size</ns0:Name> <ns0:Value></ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Binding</ns0:Name> <ns0:Value>Leaflet</ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Release Date</ns0:Name> <ns0:Value>11/1/1999 0:00:00</ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Skill Level</ns0:Name> <ns0:Value></ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Pages</ns0:Name> <ns0:Value>20</ns0:Value> </ns0:ClassificationAttributeInfo> <ns0:ClassificationAttributeInfo> <ns0:Name>Projects</ns0:Name> <ns0:Value></ns0:Value> </ns0:ClassificationAttributeInfo> </ns0:AttributeList> </ns0:ClassificationInfo> <ns0:ImageList> <ns0:ImageInfoSubmit> <ns0:PlacementName>ITEMIMAGEURL1</ns0:PlacementName> <ns0:FilenameOrUrl>1872.jpg</ns0:FilenameOrUrl> </ns0:ImageInfoSubmit> </ns0:ImageList> </ns0:InventoryItemSubmit> </ns0:itemList> </ns0:accountID> </ns0:SynchInventoryItemList> </ns1:Body> </SOAP-ENV:Envelope> See how it creates the accountID node twice and wraps the whole thing in it? WHY? How do I make it stop that?!
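
    The usual cause of this shape of request with suds is passing the whole factory-built wrapper as the first positional argument, so it gets serialized inside the accountID element. A hedged sketch of the alternative: skip the SynchInventoryItemList factory object and pass the operation's parameters individually, letting suds build the wrapper itself.

        # accountID and itemList as separate arguments, in the order the WSDL declares them
        ca_client_inventory.service.SynchInventoryItemList("REMOVED", array_of_inventory_item_submit)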

    Read the article

  • How to reload Django models without losing my locals in an interactive session?

    - by Gj
    I'm doing some research with an interactive shell and using a Django app (shell_plus) for storing data and browsing it using the convenient admin. Occasionally I add or change some of the app models, and run a syncdb (or South migration when changing a model). The changes to the models don't take effect in my interactive session even if I re-import the app models. Thus I'm forced to restart the shell_plus and lose my precious locals() in the process. Is there any way to reload the models during a session? Thanks!!
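
    One partial workaround (hedged: it is fragile, and real schema changes usually still require a restart) is to reload the models module in place and rebind the names in use, instead of leaving the shell. Module and model names below are placeholders.

        import myapp.models                  # 'myapp' stands in for the real app
        reload(myapp.models)                 # Python 2 builtin; importlib.reload() on Python 3
        from myapp.models import MyModel     # rebind any names that were imported directly

    Objects already held in locals() keep pointing at the old classes, so they may need to be re-fetched after the reload.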

    Read the article

  • Exception Handling in Google App Engine

    - by Rahul99
    I am raising an exception using: if UserId == '' and Password == '': raise Exception.MyException , "wrong userId or password" but I want to print the error message on the same page. My exception class is: class MyException(Exception): def __init__(self,msg): Exception.__init__(self,msg)
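
    A hedged sketch using the old webapp framework (handler and field names here are made up): raise MyException directly rather than Exception.MyException, and catch it in the request handler so the message can be written back to the same page.

        from google.appengine.ext import webapp

        class MyException(Exception):
            pass

        class LoginHandler(webapp.RequestHandler):       # hypothetical handler
            def post(self):
                UserId = self.request.get('userid')
                Password = self.request.get('password')
                try:
                    if UserId == '' and Password == '':
                        raise MyException("wrong userId or password")
                    # ... normal login flow ...
                except MyException, e:                   # Python 2 syntax, as App Engine used at the time
                    self.response.out.write('<p>%s</p>' % e)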

    Read the article

  • asyncore callbacks launching threads... ok to do?

    - by sbartell
    I'm unfamiliar with asyncore, and have very limited knowledge of asynchronous programming except for a few intro-to-Twisted tutorials. I am most familiar with threads and use them in all my apps. One particular app uses a CouchDB database as its interface. This involves long-polling the db looking for changes and updates. The module I use for CouchDB is couchdbkit. It uses an asyncore loop to watch for these changes and send them to a callback. So I figure this callback is where I launch my worker threads. It seems a bit crude to mix asynchronous and threaded programming. I really like couchdbkit, but would rather not introduce issues into my program. So, my question is: is it safe to fire threads from an async callback? Here's some code... def dispatch(change): global jobs, db_url # jobs is my queue db = Database(db_url) work_order = db.get(change['id']) # change is an id to the document that changed. # I need to get the actual document (workorder) worker = Worker(work_order, db) # fire the thread jobs.append(worker) worker.start() return main() . . . consumer.wait(cb=dispatch, since=update_seq, timeout=10000) # wait contains the async loop.
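
    Spawning a thread per change can work, but a common compromise (a sketch, not couchdbkit-specific; handle_change is a hypothetical stand-in for the Worker logic) is to keep the asyncore callback tiny: push each change onto a thread-safe Queue and let a small fixed pool of worker threads drain it, so the async loop never blocks and thread creation stays bounded.

        import threading
        from Queue import Queue          # 'queue' on Python 3

        jobs = Queue()                   # unbounded, so put() does not block

        def dispatch(change):
            jobs.put(change)             # keep the asyncore callback as short as possible

        def worker_loop():
            while True:
                change = jobs.get()      # blocks until a change arrives
                try:
                    handle_change(change)    # hypothetical: fetch the document and process it
                finally:
                    jobs.task_done()

        for _ in range(4):               # small fixed pool instead of a thread per change
            t = threading.Thread(target=worker_loop)
            t.daemon = True
            t.start()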

    Read the article

  • Removing specific ticks from matplotlib plot

    - by Jsg91
    I'm trying to remove the origin ticks from my plot to stop them overlapping; alternatively, just moving them away from each other would also be great. I tried this: xticks = ax.xaxis.get_major_ticks() xticks[0].label1.set_visible(False) yticks = ax.yaxis.get_major_ticks() yticks[0].label1.set_visible(False) However, this removed the first and last ticks from the y axis instead. Does anyone have an idea about how to do this? Any help would be greatly appreciated.
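
    Indexing the tick list by position hides whichever tick happens to come first, not the one at the origin. A hedged sketch that filters by tick location instead (assumes ax is the Axes in question; note that setting ticks explicitly switches off automatic tick selection afterwards):

        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        ax.plot(range(-5, 6), range(-5, 6))

        # keep every tick except the one sitting at 0 on each axis
        ax.set_xticks([t for t in ax.get_xticks() if t != 0])
        ax.set_yticks([t for t in ax.get_yticks() if t != 0])

        plt.show()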

    Read the article

  • Error while exiting CherryPy server

    - by Vijayendra Bapte
    Guys, I am getting following error while exiting cherrypy server. What is this error about? 2009-11-04 09:32:35,015 WARNING Error in atexit._run_exitfuncs: 2009-11-04 09:32:35,015 WARNING 2009-11-04 09:32:35,015 WARNING Traceback (most recent call last): 2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs 2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown 2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush 2009-11-04 09:32:35,015 WARNING IOError: [Errno 9] Bad file descriptor 2009-11-04 09:32:35,015 WARNING Error in sys.exitfunc: 2009-11-04 09:32:35,015 WARNING Traceback (most recent call last): 2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs 2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown 2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush 2009-11-04 09:32:35,015 WARNING IOError 2009-11-04 09:32:35,015 WARNING : 2009-11-04 09:32:35,015 WARNING [Errno 9] Bad file descriptor 2009-11-04 09:32:35,015 WARNING

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake? class Dataset: individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce entropy the most def __init__(self, individuals=[]): and class Node: ds = Dataset() #The data that is associated with this Node links = [] #List of Nodes, the offspring Nodes of this node level = 0 #Tree depth of this Node split_value = '' #Field used to split out this Node from the parent node node_value = '' #Value used to split out this Node from the parent Node def split_dataset(self, split_value): fields = [] #List of options for split_value amongst the individuals datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key for field in self.ds.field_set()[split_value]: #Populates the keys of fields[] fields.append(field) datasets[field] = Dataset() for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit for field in fields: #Creates subnodes from each of the datasets.Dataset options self.add_subnode(datasets[field],split_value,field) def add_subnode(self, dataset, split_value='', node_value=''): def __init__(self, level, dataset=Dataset()): My initialisation code is currently: if __name__ == '__main__': filename = (sys.argv[1]) #Takes in a CSV file predicted_value = "# class" #Identifies the field from the CSV file that should be predicted base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes n = root.links[0] n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
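
    The infinite loop is very likely the shared mutable class attribute: individuals = [] at class level (plus the individuals=[] default argument) means every Dataset instance appends to the same list object, so datasets[...].individuals and self.ds.individuals are one and the same, and the for loop keeps extending the sequence it is iterating over. A sketch of the fix, keeping only the relevant parts:

        class Dataset:
            def __init__(self, individuals=None):
                # a fresh list per instance; a class-level or default-argument list is shared
                self.individuals = list(individuals) if individuals is not None else []

        class Node:
            def __init__(self, level, dataset=None):
                self.ds = dataset if dataset is not None else Dataset()
                self.links = []          # per-instance, for the same reason
                self.level = level
                self.split_value = ''
                self.node_value = ''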

    Read the article
