Search Results

Search found 14486 results on 580 pages for 'python idle'.

Page 149/580

  • Memory problems while code is running (Python, Networkx)

    - by MIN SU PARK
    I made a code for generate a graph with 379613734 edges. But the code couldn't be finished because of memory. It takes about 97% of server memory when it go through 62 million lines. So I killed it. Do you have any idea to solve this problem? My code is like this: import os, sys import time import networkx as nx G = nx.Graph() ptime = time.time() j = 1 for line in open("./US_Health_Links.txt", 'r'): #for line in open("./test_network.txt", 'r'): follower = line.strip().split()[0] followee = line.strip().split()[1] G.add_edge(follower, followee) if j%1000000 == 0: print j*1.0/1000000, "million lines done", time.time() - ptime ptime = time.time() j += 1 DG = G.to_directed() # P = nx.path_graph(DG) Nn_G = G.number_of_nodes() N_CC = nx.number_connected_components(G) LCC = nx.connected_component_subgraphs(G)[0] n_LCC = LCC.nodes() Nn_LCC = LCC.number_of_nodes() inDegree = DG.in_degree() outDegree = DG.out_degree() Density = nx.density(G) # Diameter = nx.diameter(G) # Centrality = nx.betweenness_centrality(PDG, normalized=True, weighted_edges=False) # Clustering = nx.average_clustering(G) print "number of nodes in G\t" + str(Nn_G) + '\n' + "number of CC in G\t" + str(N_CC) + '\n' + "number of nodes in LCC\t" + str(Nn_LCC) + '\n' + "Density of G\t" + str(Density) + '\n' # sys.exit() # j += 1 The edge data is like this: 1000 1001 1000245 1020191 1000 10267352 1000653 10957902 1000 11039092 1000 1118691 10346 11882 1000 1228281 1000 1247041 1000 12965332 121340 13027572 1000 13075072 1000 13183162 1000 13250162 1214 13326292 1000 13452672 1000 13844892 1000 14061830 12340 1406481 1000 14134703 1000 14216951 1000 14254402 12134 14258044 1000 14270791 1000 14278978 12134 14313332 1000 14392970 1000 14441172 1000 14497568 1000 14502775 1000 14595635 1000 14620544 1000 14632615 10234 14680596 1000 14956164 10230 14998341 112000 15132211 1000 15145450 100 15285998 1000 15288974 1000 15300187 1000 1532061 1000 15326300 Lastly, is there anybody who has an experience to analyze Twitter link data? It's quite hard to me to take a directed graph and calculate average/median indegree and outdegree of nodes. Any help or idea?
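
    One way to cut the memory use, sketched under the assumption that the file layout is exactly as shown: build a DiGraph directly from the edge list (keeping G and then G.to_directed() holds two full copies of the graph in memory) and read the degree statistics from it. Since every edge adds one in-degree and one out-degree, the average of both is simply edges divided by nodes.

        import networkx as nx

        DG = nx.DiGraph()
        with open("./US_Health_Links.txt") as f:
            for line in f:
                follower, followee = line.split()[:2]
                DG.add_edge(follower, followee)

        n = DG.number_of_nodes()
        avg_degree = DG.number_of_edges() / float(n)      # average in-degree == average out-degree
        in_degrees = sorted(dict(DG.in_degree()).values())
        median_in = in_degrees[len(in_degrees) // 2]      # median in-degree
        print(avg_degree)
        print(median_in)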

    Read the article

  • Types in Python - Google Appengine

    - by Chris M
    Getting a bit peeved now; I have a model and a class thats just storing a get request in the database; basic tracking. class SearchRec(db.Model): WebSite = db.StringProperty()#required=True WebPage = db.StringProperty() CountryNM = db.StringProperty() PrefMailing = db.BooleanProperty() DateStamp = db.DateTimeProperty(auto_now_add=True) IP = db.StringProperty() class AddSearch(webapp.RequestHandler): def get(self): searchRec = SearchRec() searchRec.WebSite = self.request.get('WEBSITE') searchRec.WebPage = self.request.get('WEBPAGE') searchRec.CountryNM = self.request.get('COUNTRY') searchRec.PrefMailing = bool(self.request.get('MAIL')) searchRec.IP = self.request.get('IP') Bool has my biscuit; I thought that setting bool(self.reque....) would set the type of the string but no matter what I pass it it still stores it as TRUE in the database. I had the same issue with using required=True on strings for the model; the damn thing kept saying that nothing was being passed... but it had. Ta
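
    The bool() call is the likely culprit: self.request.get() returns a string, and bool() of any non-empty string, including "false" or "0", is True. A minimal sketch of an explicit conversion (the set of accepted values is an assumption):

        raw = self.request.get('MAIL')
        # bool("false") is True, so test the text itself
        searchRec.PrefMailing = raw.strip().lower() in ('true', '1', 'yes', 'on')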

    Read the article

  • Obfuscate strings in Python

    - by Caedis
    I have a password string that must be passed to a method. Everything works fine, but I don't feel comfortable storing the password in clear text. Is there a way to obfuscate the string, or to truly encrypt it? I'm aware that obfuscation can be reverse engineered, but I think I should at least try to cover up the password a bit. At the very least it won't be visible to an indexing program, or a stray eye giving a quick look at my code. I am aware of pyobfuscate, but I don't want the whole program obfuscated, just one string and possibly the whole line where the variable is defined. Target platform is GNU Linux Generic (if that makes a difference).
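
    A minimal sketch of the "cover it up a bit" approach using base64 from the standard library; this is encoding rather than encryption, so it only defeats a casual glance or a plain-text index, and the literal below is an example value, not a real credential:

        import base64

        _STORED = 'c2VjcmV0IHBhc3N3b3Jk'        # base64.b64encode('secret password')
        password = base64.b64decode(_STORED)    # recovered at runtime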

    Read the article

  • regular expression search in python

    - by Richard
    Hello all, I am trying to parse some data and just started reading up on regular expressions, so I am pretty new to them. This is the code I have so far:

        String = "MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 Temperature: 24.41 DPhase: 33.07 BPhase: 29.56 RPhase: 0.00 BAmp: 368.57 BPot: 18.00 RAmp: 0.00 RawTem.: 68.21"
        String = String.strip('\t\x11\x13')
        String = String.split("Oxygen:")
        print String[1]
        String[1].lstrip
        print String[1]

    What I am trying to do is pull the oxygen value (235.78) out into its own variable using a regular expression search. I realize that there should be an easy solution, but I am trying to figure out how regular expressions work and they are making my head hurt. Thanks for any help, Richard
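
    A minimal sketch of the regular-expression route: search for the literal label "Oxygen:" and capture the number that follows it. The character class is an assumption based on the sample line above.

        import re

        data = "MEASUREMENT 3835 303 Oxygen: 235.78 Saturation: 90.51 Temperature: 24.41"
        match = re.search(r"Oxygen:\s*([\d.]+)", data)
        if match:
            oxygen = float(match.group(1))    # 235.78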

    Read the article

  • Which Python XML library should I use?

    - by PulpFiction
    Hello. I am going to handle XML files for a project. I had earlier decided to use lxml, but after reading the requirements, I think ElementTree would be better for my purpose. The XML files that have to be processed are small (typically < 10 KB), have no namespaces, and have a simple XML structure. Given the small XML size, memory is not an issue. My only concern is fast parsing. What should I go with? Mostly I have seen people recommend lxml, but given my parsing requirements, do I really stand to benefit from it, or would ElementTree serve my purpose better?
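
    For files this small, parsing time is unlikely to be the bottleneck either way; a minimal ElementTree sketch (the file name is a placeholder) shows how little parsing code is involved, and lxml.etree exposes the same interface if a switch is ever needed:

        import xml.etree.ElementTree as ET

        tree = ET.parse("settings.xml")      # hypothetical small file
        root = tree.getroot()
        for child in root:
            print(child.tag)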

    Read the article

  • Python/YACC: Resolving a shift/reduce conflict

    - by Rosarch
    I'm using PLY. Here is one of my states from parser.out: state 3 (5) course_data -> course . (6) course_data -> course . course_list_tail (3) or_phrase -> course . OR_CONJ COURSE_NUMBER (7) course_list_tail -> . , COURSE_NUMBER (8) course_list_tail -> . , COURSE_NUMBER course_list_tail ! shift/reduce conflict for OR_CONJ resolved as shift $end reduce using rule 5 (course_data -> course .) OR_CONJ shift and go to state 7 , shift and go to state 8 ! OR_CONJ [ reduce using rule 5 (course_data -> course .) ] course_list_tail shift and go to state 9 I want to resolve this as: if OR_CONJ is followed by COURSE_NUMBER: shift and go to state 7 else: reduce using rule 5 (course_data -> course .) How can I fix my parser file to reflect this? Do I need to handle a syntax error by backtracking and trying a different rule?

    Read the article

  • Faster float to int conversion in Python

    - by culebrón
    Here's a piece of code that takes the most time in my program, according to timeit statistics. It's a dirty function to convert floats in the [-1.0, 1.0] interval into unsigned integers in [0, 2**32]. How can I accelerate floatToInt?

        piece = []
        rng = range(32)
        for i in rng:
            piece.append(1.0/2**i)

        def floatToInt(x):
            n = x + 1.0
            res = 0
            for i in rng:
                if n >= piece[i]:
                    res += 2**(31-i)
                    n -= piece[i]
            return res
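
    The loop is rebuilding the binary expansion of n one bit at a time; assuming the intended mapping is the linear one that the table of powers of two implies, a single multiplication gives the same result:

        def floatToInt(x):
            # [-1.0, 1.0] -> [0, 2**32], same value the bit-by-bit loop produces
            return int((x + 1.0) * 2**31)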

    Read the article

  • Python Script to backup a directory

    - by rgolwalkar
    Filename: backup_ver1

        import os
        import time

        # 1. Using a list to specify the files and directory to be backed up
        source = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\Dr Py\Final_Py'
        # 2. Define the backup directory
        destination = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse'
        # 3. Set the backup name
        targetBackup = destination + time.strftime('%Y%m%d%H%M%S') + '.rar'

        rar_command = "rar.exe a -ag '%s' %s" % (targetBackup, ''.join(source))

        if os.system(rar_command) == 0:
            print 'Successful backup to', targetBackup
        else:
            print 'Backup FAILED'

    I am sure I am doing something wrong with the rar command; please let me know. Output: Backup FAILED. WinRAR is added to PATH and CLASSPATH under Environment Variables as well. Anyone else with a suggestion for backing up the directory is most welcome.
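
    A hedged sketch of the usual fixes on Windows, replacing the corresponding lines above: double-quote both paths in the command (they contain spaces, and the single quotes in the original format string are passed to rar literally), and put a path separator between the destination directory and the timestamp:

        targetBackup = os.path.join(destination, time.strftime('%Y%m%d%H%M%S') + '.rar')
        rar_command = 'rar.exe a -ag "%s" "%s"' % (targetBackup, source)

        if os.system(rar_command) == 0:
            print 'Successful backup to', targetBackup
        else:
            print 'Backup FAILED'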

    Read the article

  • Python beginner having trouble running code

    - by Protean
    For some reason this code will not seem to run in the interpreter. When I hit F5 nothing happens, not even the debugger seems to recognize it. I assume it has something to do with the class, as when removed the interpreter seems to recognize the rest of the code. Please tell me what I am doing wrong. Edit: I have restarted the interpreter multiple times, any other piece of code I try to load runs fine, just this one is having trouble. print ('Why won't this work?') class sorting_class: def __init__(self): self.order = ['a', 'b', 'c', 'd'] self.globali = 0 self.orderi = 0 self.sortedlist = [] def sort(self, array): carry, leave = [] for arrayi in array: print ('run', arrayi) if self.order[self.orderi] == arrayi[self.globali]: carry.append(arrayi) else: if self.globali != 0: leave.append(arrayi) return carry, leave def srt(self, array): globalii = 0 carry, leave = my.sort(array) while len(self.sortedlist) != len(array): if len(self.carry) == 1: self.sortedlist.append(carry) arrayt = leave self.globali = 1 self.orderi = 0 carry, leave = my.sort(arrayt) elif len(self.carry) == 0: if len(self.leave) != 0: arrayt = leave self.globali = 1 self.orderi += 1 my.sort(arrayt) else: self.arrayt globalii += 1 self.orderi = globalii self.globali = 0 my.sort(arrayt) self.orderi = 0 else: arrayt = carry carry = [] self.globali += 1 carry, leave += my.sort(arrayt) my = sorting_class() x = ['ac', 'bc' ,'ab', 'da'] my.srt(x)
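
    Two of the problems are errors that stop IDLE from running the file at all; a sketch of just those fixes, leaving the sorting logic itself as posted:

        print ("Why won't this work?")   # the apostrophe ends a '...' string early, so use "..."

        carry, leave = [], []            # unpacking needs one list per name, not a single []
        # "carry, leave += my.sort(arrayt)" near the end is also invalid; use a plain = assignment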

    Read the article

  • python regex of a date in some text, enclosed by two keywords

    - by Horace Ho
    This is Part 2 of this question, and thanks very much for David's answer. What if I need to extract dates which are bounded by two keywords? Example:

        text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"

    Case 1, bounding keywords "One" and "Three", expected result: ['09 Jun 2011', '10 Dec 2012']. Case 2, bounding keywords "Two" and "End", expected result: ['10 Dec 2012', '15 Jan 2015']. Thanks!
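
    A sketch that builds on the Part 1 answer: slice the text between the two keywords first, then run the date pattern on that slice. The date pattern is an assumption about the format shown.

        import re

        text = "One 09 Jun 2011 Two 10 Dec 2012 Three 15 Jan 2015 End"
        date_re = r"\d{2} \w{3} \d{4}"

        def dates_between(text, start, end):
            # re.escape would be needed if the keywords contained regex metacharacters
            segment = re.search(r"%s(.*?)%s" % (start, end), text).group(1)
            return re.findall(date_re, segment)

        print(dates_between(text, "One", "Three"))   # ['09 Jun 2011', '10 Dec 2012']
        print(dates_between(text, "Two", "End"))     # ['10 Dec 2012', '15 Jan 2015']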

    Read the article

  • python lxml problem

    - by David ???
    I'm trying to print/save a certain element's HTML from a web page. I've retrieved the requested element's XPath from Firebug. All I wish is to save this element to a file, but I don't seem to succeed in doing so (I tried the XPath with and without a /text() at the end). I would appreciate any help, or past experience. 10x, David

        import urllib2, StringIO
        from lxml import etree

        url = 'http://www.tutiempo.net/en/Climate/Londres_Heathrow_Airport/12-2009/37720.htm'
        seite = urllib2.urlopen(url)
        html = seite.read()
        seite.close()

        parser = etree.HTMLParser()
        tree = etree.parse(StringIO.StringIO(html), parser)

        xpath = "/html/body/table/tbody/tr/td[2]/div/table/tbody/tr[6]/td/table/tbody/tr/td[3]/table/tbody/tr[3]/td/table/tbody/tr/td/table/tbody/tr/td/table/tbody/text()"
        elem = tree.xpath(xpath)
        print elem[0].strip().encode("utf-8")
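
    One likely cause, offered as a guess: Firebug shows the <tbody> elements that the browser inserts, but they are not in the HTML that lxml parses, so the copied XPath matches nothing and elem[0] fails. A sketch of the workaround:

        xpath = xpath.replace("/tbody", "")   # drop the browser-inserted tbody steps
        elem = tree.xpath(xpath)
        if elem:
            print elem[0]
        else:
            print "XPath matched nothing"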

    Read the article

  • Python __subclasses__() not listing subclasses

    - by Mridang Agarwalla
    I cant seem to list all derived classes using the __subclasses__() method. Here's my directory layout: import.py backends __init__.py --digger __init__.py base.py test.py --plugins plugina_plugin.py From import.py i'm calling test.py. test.py in turn iterates over all the files in the plugins directory and loads all of them. test.py looks like this: import os import sys import re sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ ))))) sys.path.append(os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ ))), 'plugins')) from base import BasePlugin class TestImport: def __init__(self): print 'heeeeello' PLUGIN_DIRECTORY = os.path.join(os.path.abspath(os.path.dirname(os.path.abspath( __file__ ))), 'plugins') for filename in os.listdir (PLUGIN_DIRECTORY): # Ignore subfolders if os.path.isdir (os.path.join(PLUGIN_DIRECTORY, filename)): continue else: if re.match(r".*?_plugin\.py$", filename): print ('Initialising plugin : ' + filename) __import__(re.sub(r".py", r"", filename)) print ('Plugin system initialized') print BasePlugin.__subclasses__() The problem us that the __subclasses__() method doesn't show any derived classes. All plugins in the plugins directory derive from a base class in the base.py file. base.py looks like this: class BasePlugin(object): """ Base """ def __init__(self): pass plugina_plugin.py looks like this: from base import BasePlugin class PluginA(BasePlugin): """ Plugin A """ def __init__(self): pass Could anyone help me out with this? Whatm am i doing wrong? I've racked my brains over this but I cant seem to figure it out Thanks.
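
    One thing worth checking, offered as a guess: because the plugins are imported through an extra sys.path entry while test.py itself may be loaded as part of the backends.digger package, the base module can end up imported twice under two different names. That means two distinct BasePlugin classes, and __subclasses__() only knows about subclasses of its own class object. A small diagnostic sketch:

        import sys
        # more than one entry here means two separate BasePlugin classes exist
        print([name for name in sys.modules if name.split('.')[-1] == 'base'])
        print(BasePlugin.__module__)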

    Read the article

  • how to display my list with n items on each line in Python

    - by user1786698
    im trying to display my list with 7 states on each line here is what i have so far, but it displays as one long string of all the states with quotes around each state. I forgot to mention that this is for my CS class and we havent learned iter yet so we not allowed to use it. the only hint i was given was to to turn STATE_LIST into a string then use '\n' to break it up state = str(STATE_LIST) displaystates = Text(Point(WINDOW_WIDTH/2, WINDOW_HEIGHT/2), state.split('\n')) displaystates.draw(win) and STATE_LIST looks like this STATE_VOTES = { "AL" : 9, # Alabama "AK" : 3, # Alaska "AZ" : 11, # Arizona "AR" : 6, # Arkansas "CA" : 55, # California "CO" : 9, # Colorado "CT" : 7, # Connecticut "DE" : 3, # Delaware "DC" : 3, # Washington DC "FL" : 29, # Florida "GA" : 16, # Georgia "HI" : 4, # Hawaii "ID" : 4, # Idaho "IL" : 20, # Illinois "IN" : 11, # Indiana "IA" : 6, # Iowa "KS" : 6, # Kansas "KY" : 8, # Kentucky "LA" : 8, # Louisiana "ME" : 4, # Maine "MD" : 10, # Maryland "MA" : 11, # Massachusetts "MI" : 16, # Michigan "MN" : 10, # Minnesota "MS" : 6, # Mississippi "MO" : 10, # Missouri "MT" : 3, # Montana "NE" : 5, # Nebraska "NV" : 6, # Nevada "NH" : 4, # New Hampshire "NJ" : 14, # New Jersey "NM" : 5, # New Mexico "NY" : 29, # New York "NC" : 15, # North Carolina "ND" : 3, # North Dakota "OH" : 18, # Ohio "OK" : 7, # Oklahoma "OR" : 7, # Oregon "PA" : 20, # Pennsylvania "RI" : 4, # Rhode Island "SC" : 9, # South Carolina "SD" : 3, # South Dakota "TN" : 11, # Tennessee "TX" : 38, # Texas "UT" : 6, # Utah "VT" : 3, # Vermont "VA" : 13, # Virginia "WA" : 12, # Washington "WV" : 5, # West Virginia "WI" : 10, # Wisconsin "WY" : 3 # Wyoming } STATE_LIST = sorted(list(STATE_VOTES.keys())) I am trying to get it to look somewhat like this
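
    A sketch of the hinted approach: build one string with seven abbreviations per line and a '\n' between lines, then hand that single string to Text():

        lines = []
        for i in range(0, len(STATE_LIST), 7):
            lines.append(' '.join(STATE_LIST[i:i+7]))   # seven states per line
        state = '\n'.join(lines)

        displaystates = Text(Point(WINDOW_WIDTH/2, WINDOW_HEIGHT/2), state)
        displaystates.draw(win)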

    Read the article

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight Message Queue proxy. Its job is to receive messages from a web application (PHP) and send them to the Message Queue server asynchronously. The reason for this proxy is that the MQ isn't always available and is sometimes lagging, or even down, but I want to make sure the messages are delivered and the web application returns immediately. So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite. The way I understand it, there are these components in this service: a message listener (listens for messages from PHP and writes them to an incoming queue); a DB flusher (reads messages from the incoming queue and saves them to the database, needed because of SQLite's single-threadedness); an MQ connection handler (keeps the connection to the MQ server alive by reconnecting); and a message sender (collects messages from the SQLite db, sends them to the MQ server, then removes them from the db). I was thinking of using Twisted for #1 (TCPServer), but I'm having trouble integrating it with the other parts, which aren't event-driven. Intuition tells me that each of these parts should run in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement such a worker thread alongside Twisted's main loop. The example I've started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?
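
    A minimal sketch of one Twisted-native way to keep the blocking work (SQLite writes, batched sends) off the reactor thread without managing threads by hand: deferToThread runs a function in the reactor's thread pool, and LoopingCall schedules it periodically. All names below are placeholders, not the poster's code.

        from twisted.internet import reactor, task
        from twisted.internet.threads import deferToThread

        def flush_batch():
            # blocking SQLite reads/writes and the batched MQ send go here
            pass

        def poll():
            d = deferToThread(flush_batch)                # runs in a pool thread
            d.addErrback(lambda failure: failure.printTraceback())

        task.LoopingCall(poll).start(5.0)                 # flush every five seconds
        reactor.run()                                     # the TCP listener would be set up before this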

    Read the article

  • Python PyQT4 - Adding an unknown number of QComboBox widgets to QGridLayout

    - by ZZ
    Hi all, I want to retrieve a list of people's names from a queue and, for each person, place a checkbox with their name to a QGridLayout using the addWidget() function. I can successfully place the items in a QListView, but they just write over the top of each other rather than creating a new row. Does anyone have any thoughts on how I could fix this? self.chk_People = QtGui.QListView() items = self.jobQueue.getPeopleOffQueue() for item in items: QtGui.QCheckBox('%s' % item, self.chk_People) self.jobQueue.getPeopleOffQueue() would return something like ['Bob', 'Sally', 'Jimmy'] if that helps.
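
    A sketch of the usual fix: make the checkboxes children of a QGridLayout instead of the QListView, giving each one its own row through addWidget(widget, row, column). Everything except getPeopleOffQueue() is a placeholder.

        grid = QtGui.QGridLayout()
        for row, name in enumerate(self.jobQueue.getPeopleOffQueue()):
            grid.addWidget(QtGui.QCheckBox(name), row, 0)   # row, column
        self.setLayout(grid)                                # assuming self is the containing QWidget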

    Read the article

  • python: send a list/dict over network

    - by facha
    Hi everyone, I'm looking for an easy way of packing/unpacking data structures for sending over the network. On the client, just before sending:

        a = ((1,2),(11,22,),(111,222))
        message = pack(a)

    and then on the server:

        a = unpack(message)

    Is there a library that could do the pack/unpack magic? Thanks in advance
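
    A minimal sketch with two standard-library options: json is language-neutral but turns tuples into lists, while pickle keeps Python types but must never be fed untrusted network data.

        import json

        message = json.dumps(a)     # client side, before sending
        a = json.loads(message)     # server side; tuples come back as lists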

    Read the article

  • Python large variable RAM usage

    - by PPTim
    Hi, say there is a dict variable that grows very large during runtime, up into millions of key:value pairs. Does this variable get stored in RAM, effectively using up all the available memory and slowing down the rest of the system? Asking the interpreter to display the entire dict is a bad idea, but would it be okay as long as one key is accessed at a time? Tim
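
    Yes, a plain dict lives entirely in RAM no matter how it is accessed; reading one key at a time changes nothing about where it is stored. If it outgrows memory, a disk-backed mapping is the usual escape hatch, sketched here with shelve (the file name is a placeholder):

        import shelve

        db = shelve.open('bigdata.db')     # dict-like interface, stored on disk
        db['some key'] = 'some value'
        print(db['some key'])
        db.close()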

    Read the article

  • FIFO dequeueing in python?

    - by Aaron Ramsey
    hello again everybody— I'm looking to make a functional (not necessarily optimally efficient, as I'm very new to programming) FIFO queue, and am having trouble with my dequeueing. My code looks like this: class QueueNode: def __init__(self, data): self.data = data self.next = None def __str__(self): return str(self.data) class Queue: def__init__(self): self.front = None self.rear = None self.size = 0 def enqueue(self, item) newnode = QueueNode(item) newnode.next = None if self.size == 0: self.front = self.rear = newnode else: self.rear = newnode self.rear.next = newnode.next self.size = self.size+1 def dequeue(self) dequeued = self.front.data del self.front self.size = self.size-1 if self.size == 0: self.rear = None print self.front #for testing if I do this, and dequeue an item, I get the error "AttributeError: Queue instance has no attribute 'front'." I guess my function doesn't properly assign the new front of the queue? I'm not sure how to fix it though. I don't really want to start from scratch, so if there's a tweak to my code that would work, I'd prefer that—I'm not trying to minimize runtime so much as just get a feel for classes and things of that nature. Thanks in advance for any help, you guys are the best.
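
    The AttributeError comes from del self.front, which removes the attribute instead of moving it to the next node; enqueue also never links the old rear to the new node, so front.next stays None. A sketch of both methods, meant to sit inside the Queue class:

        def enqueue(self, item):
            newnode = QueueNode(item)
            if self.size == 0:
                self.front = self.rear = newnode
            else:
                self.rear.next = newnode      # link the old rear to the new node
                self.rear = newnode
            self.size = self.size + 1

        def dequeue(self):
            dequeued = self.front.data
            self.front = self.front.next      # advance the attribute instead of deleting it
            self.size = self.size - 1
            if self.size == 0:
                self.rear = None
            return dequeued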

    Read the article

  • Python: saving and loading objects and using pickle.

    - by Peterstone
    Hello, I´m trying to save and load objects using pickle module. First I declare my objects: >>> class Fruits:pass ... >>> banana = Fruits() >>> banana.color = 'yellow' >>> banana.value = 30 After that I open a file called 'Fruits.obj'(previously I created a new .txt file and I renamed 'Fruits.obj'): >>> import pickle >>> filehandler = open(b"Fruits.obj","wb") >>> pickle.dump(banana,filehandler) After do this I close my session and I began a new one and I put the next (trying to access to the object that it supposed to be saved): file = open("Fruits.obj",'r') object_file = pickle.load(file) But I have this message: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python31\lib\pickle.py", line 1365, in load encoding=encoding, errors=errors).load() ValueError: read() from the underlying stream did notreturn bytes I don´t know what to do because I don´t understand this message. Does anyone know How I can load my object 'banana'? Thank you! EDIT: As some of you have sugested I put: >>> import pickle >>> file = open("Fruits.obj",'rb') There were no problem, but the next I put was: >>> object_file = pickle.load(file) And I have error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python31\lib\pickle.py", line 1365, in load encoding=encoding, errors=errors).load() EOFError
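
    A sketch of the two changes that usually clear both errors: open the file in binary mode for reading as well as writing, and make sure the dump is actually flushed to disk (closing the file, here via with) before another session tries to load it:

        import pickle

        with open("Fruits.obj", "wb") as f:
            pickle.dump(banana, f)        # closing the file flushes the bytes to disk

        with open("Fruits.obj", "rb") as f:
            banana = pickle.load(f)
        print(banana.color)               # 'yellow'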

    Read the article

  • Python Script to check website for a tag

    - by LinuxGnut
    Hello all. I'm trying to figure out how to go about writing a website monitoring script (a cron job in the end) to open up a given URL, check to see if a tag exists, and if the tag does not exist, or doesn't contain the expected data, write something to a log file or send an e-mail. The tag would be something like or something relatively similar. Anyone have any ideas?
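
    A minimal sketch using the standard library of that era (urllib2); the URL, the tag being checked, and the log path are all placeholders, since the example tag was stripped from the question:

        import urllib2

        html = urllib2.urlopen("http://example.com/").read()
        if '<meta name="generator"' not in html:            # hypothetical tag to check for
            with open("/var/log/sitecheck.log", "a") as log:
                log.write("expected tag missing\n")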

    Read the article

  • How to do relative imports in Python?

    - by Joril
    Imagine this directory structure: app/ __init__.py sub1/ __init__.py mod1.py sub2/ __init__.py mod2.py I'm coding mod1, and I need to import something from mod2. How should I do it? I tried from ..sub2 import mod2 but I'm getting an "Attempted relative import in non-package". I googled around but found only "sys.path manipulation" hacks. Isn't there a clean way? Edit: all my __init__.py's are currently empty Edit2: I'm trying to do this because sub2 contains classes that are shared across sub packages (sub1, subX, etc.). Edit3: The behaviour I'm looking for is the same as described in PEP 366 (thanks John B)
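
    For what it's worth, a sketch of the usual way to make this work without sys.path hacks: keep the relative import as written and launch mod1 as a module of the package, from the directory that contains app/:

        # in mod1.py
        from ..sub2 import mod2

        # run from the parent directory of app/:
        #   python -m app.sub1.mod1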

    Read the article

  • Python (Django). Store telnet connection

    - by Shamanu4
    Hello. I am programming a web interface which communicates with Cisco switches via telnet. I want to build a system which stores one telnet connection per switch, so that every script (web interface, cron jobs, etc.) has access to it. This is needed to create a single query queue for each device and prevent the heavy Cisco processor load caused by several concurrent telnet connections. How can I do this?

    Read the article

  • problems with unpickling an 80 megabyte file in python

    - by tipu
    I am using the pickle module to read and write large amounts of data to a file. After writing to the file a 80 megabyte pickled file, I load it in a SocketServer using class MyTCPHandler(SocketServer.BaseRequestHandler): def handle(self): print("in handle") words_file_handler = open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb') words = pickle.load(words_file_handler) tweets = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r'); results_per_page = 25 query_details = self.request.recv(1024).strip() query_details = eval(query_details) query = query_details["query"] page = int(query_details["page"]) - 1 return_ = [] booleanquery = BooleanQuery(MyTCPHandler.words) if query.find("(") > -1: result = booleanquery.processAdvancedQuery(query) else: result = booleanquery.processQuery(query) result = list(result) i = 0 for tweet_id in result and i < 25: #return_.append(MyTCPHandler.tweets[str(tweet_id)]) return_.append(tweet_id) i += 1 self.request.send(str(return_)) However the file never seems to load after the pickle.load line and it eventually halts the connection attempt. Is there anything I can do to speed this up?
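
    handle() runs once per incoming connection, so the 80 MB pickle is re-read on every request; a sketch of loading it once at module import time and reusing the result (the paths are the ones from the question):

        import pickle
        import shelve
        import SocketServer

        # loaded once when the module is imported, not per connection
        with open('/home/tipu/Dropbox/dev/workspace/search/words.db', 'rb') as f:
            WORDS = pickle.load(f)
        TWEETS = shelve.open('/home/tipu/Dropbox/dev/workspace/search/tweets.db', 'r')

        class MyTCPHandler(SocketServer.BaseRequestHandler):
            def handle(self):
                # build BooleanQuery(WORDS) and answer the request as in the code above
                pass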

    Read the article
