Search Results

Search found 17845 results on 714 pages for 'python social auth'.


  • How to replace only part of the match with python re.sub

    - by Arty
I need to match two cases with one regular expression and do a replacement:

        'long.file.name.jpg'   -> 'long.file.name_suff.jpg'
        'long.file.name_a.jpg' -> 'long.file.name_suff.jpg'

    I'm trying to do the following:

        re.sub('(\_a)?\.[^\.]*$', '_suff.', "long.file.name.jpg")

    But this cuts off the extension '.jpg', and I'm getting long.file.name_suff. instead of long.file.name_suff.jpg. I understand that this is because of the [^.]*$ part, but I can't exclude it, because I have to find the last occurrence of '_a' to replace, or the last '.'. Is there a way to replace only part of the match?
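
    A hedged sketch of one fix: capture the extension in a second group and write it back in the replacement, so only the optional '_a' part is actually rewritten (the add_suffix helper name is mine, not the asker's):

        import re

        def add_suffix(name, suff='_suff'):
            # Group 2 captures the extension, so the replacement can
            # re-emit it while '_a' (group 1, optional) is dropped.
            return re.sub(r'(_a)?(\.[^.]*)$', suff + r'\2', name)

        print(add_suffix('long.file.name.jpg'))    # long.file.name_suff.jpg
        print(add_suffix('long.file.name_a.jpg'))  # long.file.name_suff.jpg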


  • Python + Expat: Error on &#0; entities

    - by clacke
I have written a small function, which uses ElementTree and XPath to extract the text contents of certain elements in an XML file:

        #!/usr/bin/env python2.5
        import doctest
        from xml.etree import ElementTree
        from StringIO import StringIO

        def parse_xml_etree(sin, xpath):
            """
            Takes as input a stream containing XML and an XPath expression.
            Applies the XPath expression to the XML and returns a generator
            yielding the text contents of each element returned.

            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem1').next()
            'one'
            >>> parse_xml_etree(
            ...     StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'),
            ...     '//elem2').next()
            'two'
            >>> parse_xml_etree(
            ...     StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'),
            ...     '//elem2').next()
            'three'
            """
            tree = ElementTree.parse(sin)
            for element in tree.findall(xpath):
                yield element.text

        if __name__ == '__main__':
            doctest.testmod(verbose=True)

    The third test fails with the following exception:

        ExpatError: reference to invalid character number: line 1, column 13

    Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for another parser than Expat, or settings for Expat, that would allow me to do that?
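
    For what it's worth: yes, &#0; is illegal in XML 1.0. The spec's Char production excludes U+0000 even as a numeric character reference, so Expat is behaving correctly and no parser setting will admit it. A hedged workaround sketch, assuming the offending references can simply be dropped before parsing:

        import re
        from StringIO import StringIO

        def strip_null_refs(sin):
            # &#0; / &#x0; can never be valid XML 1.0, so scrubbing
            # them out before the parser sees the stream is safe.
            return StringIO(re.sub(r'&#(?:0+|[xX]0+);', '', sin.read()))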


  • How to optimize this Python code?

    - by RandomVector
import random

        def maxVote(nLabels):
            count = {}
            maxList = []
            maxCount = 0
            for nLabel in nLabels:
                if nLabel in count:
                    count[nLabel] += 1
                else:
                    count[nLabel] = 1
                # Check if the count is max
                if count[nLabel] > maxCount:
                    maxCount = count[nLabel]
                    maxList = [nLabel,]
                elif count[nLabel] == maxCount:
                    maxList.append(nLabel)
            return random.choice(maxList)

    nLabels contains a list of integers. The above function returns the integer with the highest frequency; if more than one have the same frequency, a randomly selected integer from among them is returned. E.g. maxVote([1,3,4,5,5,5,3,12,11]) is 5.
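
    A sketch of one way to tighten it, assuming Python 2.7+ for collections.Counter (the max_vote name is mine):

        import random
        from collections import Counter

        def max_vote(n_labels):
            # Counter tallies in C; then keep every label tied for the
            # top count and pick among them at random.
            counts = Counter(n_labels)
            top = max(counts.values())
            return random.choice([l for l, c in counts.items() if c == top])

        print(max_vote([1, 3, 4, 5, 5, 5, 3, 12, 11]))  # 5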


  • Python function argument scope (Dictionaries v. Strings)

    - by Shaun Meyer
Hello, given:

        foo = "foo"

        def bar(foo):
            foo = "bar"

        bar(foo)
        print foo  # foo is still "foo"...

        foo = {'foo': "foo"}

        def bar(foo):
            foo['foo'] = "bar"

        bar(foo)
        print foo['foo']  # foo['foo'] is now "bar"?

    I have a function that has been inadvertently overwriting my function parameters when I pass a dictionary. Is there a clean way to declare my parameters as constant, or am I stuck making a copy of the dictionary within the function? Thanks!
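
    The difference: assignment inside the function only rebinds the local name, while foo['foo'] = "bar" mutates the one dict object both names refer to. Python has no const parameters, so copying inside the function is the usual answer. A minimal sketch:

        def bar(foo):
            # dict(foo) is a shallow copy, so the caller's dict is left
            # alone; use copy.deepcopy for nested mutable values.
            foo = dict(foo)
            foo['foo'] = "bar"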


  • Dealing with regular expressions, Python

    - by Gusto
I want to remove some symbols from a string using a regular expression, for example: == (occurring both at the beginning and at the end of a line), and * (at the beginning of a line ONLY).

        def some_func():
            clean = re.sub(r'= {2,}', '', clean)    # Removes 2 or more occurrences of = at the beginning and end of a line.
            clean = re.sub(r'^\* {1,}', '', clean)  # Removes 1 or more occurrences of * at the beginning of a line.

    What's wrong with my code? It seems like the expressions are wrong. How do I remove a character/symbol if it's at the beginning or at the end of a line (with one or more occurrences)?
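
    Two likely culprits, sketched below: in '= {2,}' the quantifier binds to the character just before it, so it repeats the space rather than the '='; and without re.MULTILINE, ^ and $ only match at the ends of the whole string. A hedged rewrite, assuming the flags= argument (Python 2.7+):

        import re

        def some_func(clean):
            # ^=+|=+$ strips runs of '=' at each line's start or end;
            # ^\*+ strips '*' at line starts only. re.M makes ^ and $
            # apply per line, not per string.
            clean = re.sub(r'^=+|=+$', '', clean, flags=re.M)
            clean = re.sub(r'^\*+', '', clean, flags=re.M)
            return clean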


  • Python implementation of avro slow?

    - by lazy1
I'm reading some data from an avro file using the avro library. It takes about a minute to load 33K objects from the file. This seems very slow to me, especially with the Java version reading the same file in about 1 sec. Here is the code; am I doing something wrong?

        import avro.datafile
        import avro.io
        from time import time

        def load(filename):
            fo = open(filename, "rb")
            reader = avro.datafile.DataFileReader(fo, avro.io.DatumReader())
            for i, record in enumerate(reader):
                pass
            return i + 1

        def main(argv=None):
            import sys
            from argparse import ArgumentParser

            argv = argv or sys.argv
            parser = ArgumentParser(description="Read avro file")

            start = time()
            num_records = load("events.avro")
            end = time()
            print("{0} records in {1} seconds".format(num_records, end - start))

        if __name__ == "__main__":
            main()
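
    Probably not; the pure-Python avro library decodes every datum in interpreted code, which is why the Java reader is so much faster. A hedged sketch of the usual escape hatch, the third-party fastavro package (C-accelerated, reads the same data files):

        import fastavro

        def load(filename):
            # fastavro.reader iterates records like DataFileReader,
            # but the decode loop runs in C.
            with open(filename, "rb") as fo:
                return sum(1 for _ in fastavro.reader(fo))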


  • Using Memcached in Python/Django - questions.

    - by Thomas
I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            """Not Found in Cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # (2592000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them plus related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache queries from templates? Or do I have to build some big structure in my view (with all related tables), cache it, and send that to the template?
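
    On the last point, the slowdown is usually one extra query per related row rendered in the template; select_related folds those into a single SQL join before anything is even cached. A hedged sketch with hypothetical model and field names:

        # Without select_related, accessing record.author in the template
        # fires one query per record; with it, one joined query total.
        records = MyModel.objects.select_related('author', 'category')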


  • Extract anything that looks like links from large amount of data in python

    - by Riz
Hi, I have around 5 GB of HTML data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of "a" tags and be not well formed in many ways (like "\n" in the middle of a link), so I try to grab as many "links" as I can and check them later in other scripts (so no BeautifulSoup/lxml/etc). The problem is that my script is pretty slow, so I am thinking about ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :) Right now I am thinking about getting all links without filtering first (maybe using a C module or a standalone app, which doesn't use regexp but a simple search to get the start and end of every link) and then using regexp to match the ones I need.
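
    One cheap win to benchmark, sketched here: compile a single alternation of all the site patterns and make one pass over each chunk, instead of one pass per site. The site list below is hypothetical.

        import re

        sites = ['example.com', 'example.org']  # hypothetical targets
        pattern = re.compile('|'.join(re.escape(s) for s in sites))

        def find_candidates(text):
            # One scan of the text finds hits for every site at once;
            # finer filtering can happen later, as planned.
            return [(m.start(), m.group()) for m in pattern.finditer(text)]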


  • Python - from file to data structure?

    - by Seafoid
Hi, I have a large file comprising ~100,000 lines. Each line corresponds to a cluster, and each entry within each line is a reference i.d. for another file (a protein structure in this case), e.g.

        1hgn 1dju 3nmj 8kfn
        9opu 7gfb
        4bui

    I need to read in the file as a list of lists where each line is a sublist, thus preserving the integrity of the cluster, e.g.

        nested_list = [['1hgn', '1dju', '3nmj', '8kfn'], ['9opu', '7gfb'], ['4bui']]

    My current code creates a nested list, but the entries within each list are a single string and not comma separated. Therefore, I cannot slice the list with indices so easily. Any help greatly appreciated. Thanks, S :-)
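
    A minimal sketch: str.split() with no argument splits on any run of whitespace, so each line becomes one sublist of ids (the filename below is hypothetical).

        def read_clusters(path):
            with open(path) as f:
                # skip blank lines; split() handles spaces, tabs, and
                # the trailing newline in one go
                return [line.split() for line in f if line.strip()]

        clusters = read_clusters('clusters.txt')
        print(clusters[0][2])  # '3nmj'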


  • Plotting a cumulative graph of python datetimes

    - by ventolin
Say I have a list of datetimes, and we know each datetime to be the recorded time of an event happening. Is it possible in matplotlib to graph the frequency of this event occurring over time, showing this data in a cumulative graph (so that each point is greater than or equal to all of the points that went before it), without preprocessing this list? (E.g. passing datetime objects directly to some wonderful matplotlib function.) Or do I need to turn this list of datetimes into a list of dictionary items, such as:

        {"year": 1998, "month": 12, "date": 15, "events": 92}

    and then generate a graph from this list? Sorry if this seems like a silly question - I'm not all too familiar with matplotlib, and would like to save myself the effort of doing this the latter way if matplotlib can already deal with datetime objects itself.
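
    matplotlib does accept datetime objects directly on an axis; the only preparation the cumulative curve needs is a sort plus a running index. A minimal sketch:

        import matplotlib.pyplot as plt

        def plot_cumulative(event_times):
            # Sorted event times on x; the i-th event brings the total
            # to i+1, which is exactly the cumulative count.
            xs = sorted(event_times)
            ys = range(1, len(xs) + 1)
            plt.plot(xs, ys)
            plt.ylabel('events so far')
            plt.show()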


  • Python and sqlite3 - importing and exporting databases

    - by JPC
I'm trying to write a script to import a database file. I wrote the script to export the file like so:

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        with open('../dump.sql', 'w') as f:
            for line in con.iterdump():
                f.write('%s\n' % line)

    Now I want to be able to import that database. I tried:

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        f = open('../dump.sql', 'r')
        str = f.read()
        con.execute(str)

    but I'm not allowed to execute more than one statement. Is there a way to get it to run a .sql script directly?
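
    Yes: the sqlite3 module has executescript() for exactly this; unlike execute(), it runs a whole multi-statement script in one call.

        import sqlite3

        con = sqlite3.connect('../sqlite.db')
        with open('../dump.sql') as f:
            # executescript commits any pending transaction, then runs
            # every statement in the string.
            con.executescript(f.read())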


  • python destructuring-bind dictionary contents

    - by Stephen
Hi, I am trying to 'destructure' a dictionary and associate values with variable names after its keys. Something like:

        params = {'a': 1, 'b': 2}
        a, b = params.values()

    but since dictionaries are not ordered, there is no guarantee that params.values() will return values in the order of (a, b). Is there a nice way to do this? Thanks
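
    One order-independent sketch: pull the values out by key, either directly or with operator.itemgetter, which takes several keys and returns the matching tuple.

        from operator import itemgetter

        params = {'a': 1, 'b': 2}
        # itemgetter('a', 'b') looks up both keys and returns (1, 2),
        # so the unpacking order is fixed by the keys, not the dict.
        a, b = itemgetter('a', 'b')(params)
        assert (a, b) == (1, 2)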


  • Python method to remove iterability

    - by Debilski
Suppose I have a function which can take either an iterable/iterator or a non-iterable as an argument. Iterability is checked with try: iter(arg). Depending on whether the input is an iterable or not, the outcome of the method will be different. Now when I want to pass a non-iterable as iterable input, it is easy to do: I'll just wrap it with a tuple. What do I do when I want to pass an iterable (a string, for example) but want the function to take it as if it's non-iterable? E.g. make iter(str) fail.
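
    You can't stop iter() from succeeding on a real string, but you can hand over a wrapper that isn't iterable and carries the payload; the function would then need to unwrap it. A minimal sketch (the Opaque name is mine):

        class Opaque(object):
            # No __iter__ and no __getitem__, so iter() on an instance
            # raises TypeError while .value still holds the string.
            def __init__(self, value):
                self.value = value

        # iter(Opaque("abc"))  -> TypeError: 'Opaque' object is not iterable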


  • javascript-aware html parser for Python ~

    - by znetor
<html>
        <head>
        <script type="text/javascript">
        document.write('<a href="http://www.google.com">f*** js</a>');
        document.write("f*** js!");
        </script>
        </head>
        <body>
        <script type="text/javascript">
        document.write('<a href="http://www.google.com">f*** js</a>');
        document.write("f*** js!");
        </script>
        <div><a href="http://www.google.com">f*** js</a></div>
        </body>
        </html>

    I want to use XPath to catch all the <a> elements in the HTML page above...

        In [1]: import lxml.html as H
        In [2]: f = open("test.html","r")
        In [3]: c = f.read()
        In [4]: doc = H.document_fromstring(c)
        In [5]: doc.xpath('//a')
        Out[5]: [<Element a at a01d17c>]
        In [6]: a = doc.xpath('//a')[0]
        In [7]: a.getparent()
        Out[7]: <Element div at a01d41c>

    I only get the one that isn't generated by JS, but Firefox's XPath checker can find all of them (http://i.imgur.com/0hSug.png). How can I do that? Thanks! A second example:

        <html>
        <head>
        </head>
        <body>
        <script language="javascript">
        function over(){
            a.innerHTML="mouse me"
        }
        function out(){
            a.innerHTML="<a href='http://www.google.com'>google</a>"
        }
        </script>
        <body><li id="a" onmouseover="over()" onmouseout="out()">mouse me</li>
        </body>
        </html>
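
    lxml only parses the static markup; the document.write links exist only after a JavaScript engine runs the page, which is why Firefox sees three anchors and lxml sees one. A hedged sketch of the usual route, driving a real browser with Selenium (assumes selenium plus a Firefox driver are installed; the file path is hypothetical, and find_elements_by_xpath is the older Selenium API):

        from selenium import webdriver

        driver = webdriver.Firefox()
        driver.get('file:///path/to/test.html')
        # By now the browser has executed the scripts, so XPath also
        # sees the JS-generated anchors.
        links = driver.find_elements_by_xpath('//a')
        print(len(links))
        driver.quit()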


  • speed up calling lot of entities, and getting unique values, google app engine python

    - by user291071
OK, this is a 2-part question. I've seen and searched for several methods to get a list of unique values for a class and haven't been practically happy with any method so far. Does anyone have a simple example of getting unique values, for instance, for this code? Here is my super slow example:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end - start
            return dic

    My second question: is calling an iterator instead of fetch slower? Is there a faster method for the code below, especially if the number of elements fetched can be larger than 1000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2
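
    One App Engine pattern worth sketching, offered as an assumption rather than a drop-in: keep one extra entity per distinct link, keyed by the link string itself, so "all unique links" becomes a single keys-only query instead of a scan over every rating entity.

        class UniqueLink(db.Model):
            pass  # the key name carries all the information

        def record_link(link):
            # get_or_insert is idempotent: writing the same link twice
            # costs one lookup and creates no duplicate entities.
            UniqueLink.get_or_insert(key_name=link)

        def unique_links():
            # keys-only queries skip fetching entity bodies entirely
            return [k.name() for k in UniqueLink.all(keys_only=True)]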


  • Unicode filename to python subprocess.call()

    - by otrov
I'm trying to run subprocess.call() with a unicode filename. Here is the simplified problem:

        n = u'c:\\windows\\notepad.exe '
        f = u'c:\\temp\\nèw.txt'
        subprocess.call(n + f)

    which raises the famous error:

        UnicodeEncodeError: 'ascii' codec can't encode character u'\xe8'

    Encoding to utf-8 produces the wrong filename, and mbcs passes the filename as new.txt, without the accent. I just can't read any more on this confusing subject and am going in circles. I have found lots of answers here for many different problems in the past, so I thought I'd join and ask for help myself. Thanks
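
    A hedged suggestion, not a guaranteed fix: pass the command as a list, so subprocess assembles the command line itself instead of encoding a hand-built string. On Python 3 this handles non-ASCII arguments natively on Windows; on Python 2 it at least isolates which argument trips the codec.

        import subprocess

        # List form: no string concatenation, no implicit ASCII encode
        # of the whole command line.
        subprocess.call([u'c:\\windows\\notepad.exe', u'c:\\temp\\nèw.txt'])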


  • building a pairwise matrix in scipy/numpy in Python from dictionaries

    - by user248237
I have a dictionary whose keys are strings and values are numpy arrays, e.g.:

        data = {'a': array([1,2,3]), 'b': array([4,5,6]), 'c': array([7,8,9])}

    I want to compute a statistic between all pairs of values in data and build an n by n matrix that stores the result. Assume that I know the order of the keys, i.e. I have a list of "labels":

        labels = ['a', 'b', 'c']

    What's the most efficient way to compute this matrix? I can compute the statistic for all pairs like this:

        result = []
        for elt1, elt2 in itertools.product(labels, labels):
            result.append(compute_statistic(data[elt1], data[elt2]))

    But I want result to be an n by n matrix, corresponding to "labels" by "labels". How can I record the results as this matrix? Thanks.
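
    A minimal sketch: allocate the n by n array up front and index it with the positions from enumerate, so out[i, j] lines up with labels[i], labels[j] (stat stands in for compute_statistic).

        import numpy as np
        from itertools import product

        def pairwise_matrix(data, labels, stat):
            n = len(labels)
            out = np.empty((n, n))
            # product over the enumerated labels yields the index pair
            # (i, j) and the key pair (a, b) in one loop.
            for (i, a), (j, b) in product(enumerate(labels), repeat=2):
                out[i, j] = stat(data[a], data[b])
            return out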


  • Join a list of lists together into 1 list in Python

    - by dotty
Hey all. I have a list which consists of many lists. Here is an example:

        [ [Obj, Obj, Obj, Obj], [Obj], [Obj], [ [Obj, Obj], [Obj, Obj, Obj] ] ]

    Is there a way to join all these items together into 1 list, so the output will be something like:

        [Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj, Obj]

    Thanks
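
    Because one of the elements is itself a list of lists, a single itertools.chain pass (one level deep) isn't enough; a short recursive flatten handles any depth. A minimal sketch, using integers to stand in for the Obj values:

        def flatten(items):
            out = []
            for item in items:
                if isinstance(item, list):
                    # descend into nested lists at any depth
                    out.extend(flatten(item))
                else:
                    out.append(item)
            return out

        print(flatten([[1, 2, 3, 4], [5], [6], [[7, 8], [9, 10, 11]]]))
        # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]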


  • Python for statement giving an Invalid Syntax error with list

    - by Cold Diamondz
I have some code which is throwing an error (I'm using repl.it):

        import random
        students = ['s1:0','s2:0','s3:0']
        while True:
            print'\n'*50
            print'Ticket Machine'.center(80)
            print'-'*80
            print'1. Clear Student Ticket Values'.center(80)
            print'2. Draw Tickets'.center(80)
            menu = raw_input('-'*80+'\nChoose an Option: ')
            if menu == '1':
                print'\n'*50
                print'CLEARED!'
                students = ['s1:0','s2:0','s3:0']
                raw_input('Press enter to return to the main menu!')
            elif menu == '2':
                tickets = []
                print'\n'*50
                times = int(raw_input('How many tickets to draw? ')
                for a in students:
                    for i in range(a.split(':')[1]):
                        tickets.append(a.split(':')[0])
                for b in range(1,times+1):
                    print str(b) + '. ' + random.choice(tickets)
            else:
                print'\n'*50
                print'That was not an option!'
                raw_input('Press enter to return to the main menu!')

    But it is throwing this error:

        File "<stdin>", line 19
            for a in students:
            ^
        SyntaxError: invalid syntax

    I am planning on using this in a class, but I can't use it until the bug is fixed. Also, student names have been removed for privacy reasons.
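
    The reported line is one past the real mistake: the int(raw_input(...)) line above the loop is missing its closing parenthesis, so Python is still inside the call when it reaches for. A sketch of the fixed section (note the count spliced out of 's1:0' also needs int() before range() will accept it):

        times = int(raw_input('How many tickets to draw? '))  # balanced parens
        for a in students:
            name, count = a.split(':')
            # range() needs an int, not the string after the colon
            tickets.extend([name] * int(count))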

