Search Results

Search found 4955 results on 199 pages for 'range'.


  • Pymedia video encoding failed

    - by user1474837
    I am using Python 2.5 with Windows XP. I am trying to turn a list of pygame images into a video file using this function. I found the function on the internet and edited it. It worked at first, then it stopped working. This is what it printed out:

        Making video...
        Formating 114 Frames...
        starting loop
        making encoder
        Frame 1 process 1
        Frame 1 process 2
        Frame 1 process 2.5

    This is the error:

        Traceback (most recent call last):
          File "ScreenCapture.py", line 202, in <module>
            makeVideoUpdated(record_files, video_file)
          File "ScreenCapture.py", line 151, in makeVideoUpdated
            d = enc.encode(da)
        pymedia.video.vcodec.VCodecError: Failed to encode frame( error code is 0 )

    This is my code:

        def makeVideoUpdated(files, outFile, outCodec='mpeg1video', info1=0.1):
            fw = open(outFile, 'wb')
            if fw == None:
                print "Cannot open file " + outFile
                return
            if outCodec == 'mpeg1video':
                bitrate = 2700000
            else:
                bitrate = 9800000
            start = time.time()
            enc = None
            frame = 1
            print "Formating " + str(len(files)) + " Frames..."
            print "starting loop"
            for img in files:
                if enc == None:
                    print "making encoder"
                    params = {'type': 0, 'gop_size': 12, 'frame_rate_base': 125,
                              'max_b_frames': 90, 'height': img.get_height(),
                              'width': img.get_width(), 'frame_rate': 90,
                              'deinterlace': 0, 'bitrate': bitrate,
                              'id': vcodec.getCodecID(outCodec)}
                    enc = vcodec.Encoder(params)
                # Create VFrame, converting the image to 24-bit RGB
                print "Frame " + str(frame) + " process 1"
                bmpFrame = vcodec.VFrame(vcodec.formats.PIX_FMT_RGB24, img.get_size(),
                                         (pygame.image.tostring(img, "RGB"), None, None))
                print "Frame " + str(frame) + " process 2"
                # Convert to YUV, then encode
                da = bmpFrame.convert(vcodec.formats.PIX_FMT_YUV420P)
                print "Frame " + str(frame) + " process 2.5"
                d = enc.encode(da)  # THIS IS WHERE IT STOPS
                print "Frame " + str(frame) + " process 3"
                fw.write(d.data)
                print "Frame " + str(frame) + " process 4"
                frame += 1
            print "saving file"
            fw.close()

    Could somebody tell me why I get this error and possibly how to fix it? The files argument is a list of pygame images, outFile is a path, outCodec is left at its default, and info1 is not used anymore.

    UPDATE 1: This is the code I used to make that list of pygame images.

        from PIL import ImageGrab
        import time, pygame

        pygame.init()
        f = []  # This is the list that contains the images
        fps = 1
        for n in range(1, 100):
            info = ImageGrab.grab()
            size = info.size
            mode = info.mode
            data = info.tostring()
            info = pygame.image.fromstring(data, size, mode)
            f.append(info)
            time.sleep(fps)

    Read the article

  • 500 internal server error at form connection

    - by klox
    Hi all. I have a problem: I can't connect to the database. What's wrong with my code? This is my code:

        $("#mod").change(function() {
            var barcode;
            barCode = $("#mod").val();
            var data = barCode.split(" ");
            $("#mod").val(data[0]);
            $("#seri").val(data[1]);
            var str = data[0];
            var matches = str.match(/(EE|[EJU]).*(D)/i);
            $.ajax({
                type: "post",
                url: "process1.php",
                data: "value=" + matches + "action=tunermatches",
                cache: false,
                async: false,
                success: function(res) {
                    $('#rslt').replaceWith(
                        "<div id='value'><h6>Tuner range is" + res + " .</h6></div>"
                    );
                }
            });
        });

    and this is my process file:

        switch(postVar('action')) {
            case 'tunermatches' :
                tunermatches(postVar('tuner'));
                break;

        function tunermatches($tuner)){
            $Tuner = mysql_real_escape_string($tuner);
            $sql = "SELECT remark FROM settingdata WHERE itemname="Tuner_range" AND itemdata="$Tunermatches";
            $res = mysql_query($sql);
            $dat = mysql_fetch_array($res, MYSQL_NUM);
            if ($dat[0] > 0) {
                echo $dat[0];
            }
            mysql_close($dbc);
        }

    Read the article

  • What is the difference between Inversion of Control and Dependency injection in C++?

    - by rlbond
    I've been reading recently about DI and IoC in C++. I am a little confused (even after reading related questions here on SO) and was hoping for some clarification. It seems to me that being familiar with the STL and Boost leads to use of dependency injection quite a bit. For example, let's say I made a function that found the mean of a range of numbers:

        template <typename Iter>
        double mean(Iter first, Iter last)
        {
            double sum = 0;
            size_t number = 0;
            while (first != last)
            {
                sum += *(first++);
                ++number;
            }
            return sum/number;
        };

    Is this dependency injection? Inversion of control? Neither?

    Let's look at another example. We have a class:

        class Dice
        {
        public:
            typedef boost::mt19937 Engine;

            Dice(int num_dice, Engine& rng) : n_(num_dice), eng_(rng) {}

            int roll()
            {
                int sum = 0;
                for (int i = 0; i < num_dice; ++i)
                    sum += boost::uniform_int<>(1,6)(eng_);
                return sum;
            }

        private:
            Engine& eng_;
            int n_;
        };

    This seems like dependency injection. But is it inversion of control? Also, if I'm missing something, can someone help me out?
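
    To keep the two terms apart: dependency injection is the narrow pattern of handing a collaborator in from outside (as Dice does with its Engine), while inversion of control is the broader property that the caller, not the class, decides which concrete collaborator gets used. A minimal constructor-injection sketch, shown in Python purely for brevity; the names are illustrative, not taken from the question:

        import random

        class Dice:
            def __init__(self, num_dice, rng):
                # the random source is injected; Dice never decides which one to use
                self.num_dice = num_dice
                self.rng = rng

            def roll(self):
                return sum(self.rng.randint(1, 6) for _ in range(self.num_dice))

        # The caller controls the dependency: a seeded generator for repeatable
        # tests, the default generator everywhere else.
        print(Dice(2, random.Random(42)).roll())
        print(Dice(2, random.Random()).roll())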

    Read the article

  • Need to find number of new unique ID numbers in a MySQL table

    - by Nicholas
    I have an iPhone app out there that "calls home" to my server every time a user uses it. On my server, I create a row in a MySQL table each time, with the unique ID (similar to a serial number, aka UDID) for the device, the IP address, and other data. Table ClientLog columns: Time, UDID, etc.

    What I'd like to know is the number of new devices (new unique UDIDs) on a given date, i.e. how many UDIDs were added to the table on a given date that don't appear before that date? Put plainly, this is the number of new users I gained that day. This is close, I think, but I'm not 100% there and not sure it's what I want...

        SELECT distinct UDID FROM ClientLog a
        WHERE NOT EXISTS (
            SELECT * FROM ClientLog b
            WHERE a.UDID = b.UDID AND b.Time <= '2010-04-05 00:00:00'
        )

    I think the number of rows returned is the new unique users after the given date, but I'm not sure. And I want to add to the statement to limit it to a date range (specify an upper bound as well).
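
    A hedged sketch of one way to count first appearances inside a date range directly: take each UDID's earliest Time and keep the ones that fall in the range. Table and column names follow the question; the connector is assumed to be MySQLdb/PyMySQL-compatible and the date values are placeholders.

        # Count devices whose *first* log row falls inside [day_start, next_day_start).
        NEW_DEVICES_SQL = """
            SELECT COUNT(*) AS new_devices
            FROM (
                SELECT UDID, MIN(Time) AS first_seen
                FROM ClientLog
                GROUP BY UDID
            ) AS firsts
            WHERE firsts.first_seen >= %s   -- start of the day (inclusive)
              AND firsts.first_seen <  %s   -- start of the next day (exclusive)
        """

        def count_new_devices(conn, day_start, next_day_start):
            """Return how many UDIDs first appeared within the half-open range."""
            cur = conn.cursor()
            cur.execute(NEW_DEVICES_SQL, (day_start, next_day_start))
            (count,) = cur.fetchone()
            return count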

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all the data on the map stored in 1 table. Whenever the map display is loaded, it simply queries the database for contents at x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then has an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table, and all actions of a certain type are stored in their own table.

    I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Should it just keep growing and growing? There are a lot of queries to the table... so I'm just trying to see what is best.

    Read the article

  • Trying to output a list using a class

    - by captain morgan
    I'm trying to get the moving average of a price, but I keep getting an attribute error in my Moving_Average class ('Moving_Average' object has no attribute 'days'). Here is what I have:

        class Moving_Average:
            def calculation(self, alist:list, days:int):
                m = self.days
                prices = alist[1::2]
                average = [0]*len(prices)
                signal = ['']*len(prices)
                for m in range(0, len(prices)-days+1):
                    average[m+2] = sum(prices[m:m+days])/days
                    if prices[m+2] < average[m+2]:
                        signal[m+2] = 'SELL'
                    elif prices[m+2] > average[m+2] and prices[m+1] < average[m+1]:
                        signal[m+2] = 'BUY'
                    else:
                        signal[m+2] = ''
                return average, signal

        def print_report(symbol:str, strategy:str):
            print('SYMBOL: ', symbol)
            print('STRATEGY: ', strategy)
            print('Date Closing Strategy Signal')

        def user():
            strategy = '''
            Which of the following strategy would you like to use?
            * Simple Moving Average [S]
            * Directional Indicator [D]
            Please enter your choice: '''
            if signal_strategy in 'Ss':
                days = input('Please enter the number of days for the average')
                days = int(days)
                strategy = 'Simple Moving Average {}-days'.format(str(days))
                m = Moving_Average()
                ma = m.calculation(gg, days)
                print(ma)

    gg is a list that contains dates and prices, e.g. [2013-10-01, 60, 2013-10-02, 60]. The output is supposed to look like:

        Date          Price    Average    Signal
        2013-10-01    60.0
        2013-10-02    60.0     60.00      BUY
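
    For comparison, a minimal stand-alone sketch of a simple moving average that avoids the missing self.days attribute by taking the window size as a parameter; the names here are illustrative, not from the original code:

        def simple_moving_average(prices, days):
            """Return a list the same length as prices; entries before the window fills are None."""
            averages = [None] * len(prices)
            for i in range(days - 1, len(prices)):
                window = prices[i - days + 1:i + 1]
                averages[i] = sum(window) / days
            return averages

        prices = [60.0, 60.0, 61.0, 59.5]
        print(simple_moving_average(prices, days=2))  # [None, 60.0, 60.5, 60.25]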

    Read the article

  • How to move a point on an arc?

    - by bbZ
    I am writing an app that simulates robot arm movement. What I want to achieve, and where I have a couple of problems, is moving a Point on an arc (180 degrees) that is the arm's range of movement. I move an arm by grabbing the end of the arm (the elbow, the Point I was talking about) with the mouse; a robot can have multiple arms with different arm lengths. If you can help me with this part, I'd be grateful. This is what I have so far, the drawing function:

        public void draw(Graphics graph) {
            Graphics2D g2d = (Graphics2D) graph.create();
            graph.setColor(color);
            graph.fillOval(location.x - 4, location.y - 4, point.width, point.height); // Draws elbow
            if (parentLocation != null) {
                graph.setColor(Color.black);
                graph.drawLine(location.x, location.y, parentLocation.x, parentLocation.y); // Connects to parent
                if (arc == null) {
                    angle = new Double(90 - getAngle(parentInitLocation));
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                                           parentLocation.y - (parentDistance * 2 / 2),
                                           parentDistance * 2, parentDistance * 2,
                                           90 - getAngle(parentInitLocation), 180,
                                           Arc2D.OPEN); // Draws an arc if not drawn yet.
                } else if (angle != null) { // if parent is moved, angle is moved also
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                                           parentLocation.y - (parentDistance * 2 / 2),
                                           parentDistance * 2, parentDistance * 2,
                                           angle, 180, Arc2D.OPEN);
                }
                g2d.draw(arc);
            }
            if (spacePanel.getElbows().size() > level + 1) { // updates all childElbows' positions
                updateChild(graph);
            }
        }

    I just do not know how to prevent the Point from moving outside of the arc line. It can not be inside or outside the arc, just on it. I wanted to put a screenshot here, but sadly I don't have enough rep. Am I allowed to put a link to it? Maybe you have other ways to achieve this kind of thing. Here is the image: the red circle is the actual state, and the green one is what I want to do.

    EDIT2: As requested, repo link: https://github.com/dspoko/RobotArm
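
    The usual way to keep a dragged elbow on the arc is to keep the direction of the mouse point relative to the parent joint but force the distance back to the arm length. A minimal sketch of that projection, in Python purely to show the math (the clamping of the angle to the arc's sweep is a simplification and the names are illustrative):

        import math

        def clamp_to_arc(mouse_x, mouse_y, center_x, center_y, radius,
                         start_deg=None, end_deg=None):
            """Project a dragged point onto the circle of the given radius around
            the parent joint; optionally clamp the angle so the point stays on a
            limited arc instead of the full circle."""
            dx, dy = mouse_x - center_x, mouse_y - center_y
            angle = math.atan2(dy, dx)               # direction of the mouse from the joint
            if start_deg is not None and end_deg is not None:
                deg = math.degrees(angle)
                deg = max(start_deg, min(end_deg, deg))   # keep inside the arc's sweep
                angle = math.radians(deg)
            # same direction, but exactly `radius` away from the joint
            return (center_x + radius * math.cos(angle),
                    center_y + radius * math.sin(angle))

        # e.g. the elbow stays arm_length away from the parent no matter where the mouse goes:
        # elbow_x, elbow_y = clamp_to_arc(mx, my, parent_x, parent_y, arm_length, -90, 90)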

    Read the article

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed, e.g.:

    Incorrect (pseudo-code):

        DWORD endTime = GetTickCount + 10000; //10 s from now
        ...
        if (GetTickCount > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000 = 0x00002510; //9,488 decimal

    Then you perform your check:

        if (GetTickCount > endTime)

    which is satisfied immediately, since GetTickCount is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead you should always subtract the two time values:

        DWORD startTime = GetTickCount;
        ...
        if (GetTickCount - startTime) > 10000 //if it's been 10 seconds
            break;

    Looking at the same math:

        if (GetTickCount - startTime) > 10000
        if (0xfffffe01 - 0xfffffe00) > 10000
        if (1 > 10000)

    Which is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the TickCount rolls over:

        if (0x00000100 - 0xffffff00) > 10000
        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem?

    Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta = GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
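
    The underlying arithmetic is just subtraction modulo 2^32. A quick sketch of that wraparound math in Python, purely as an illustration of the numbers involved (it is not Delphi code):

        MASK32 = 0xFFFFFFFF  # tick counts are unsigned 32-bit values

        def ticks_elapsed(now, start):
            """Elapsed milliseconds between two 32-bit tick counts, tolerant of rollover."""
            return (now - start) & MASK32

        # The example from the question: the counter rolled over between the samples.
        print(hex(ticks_elapsed(0x00000100, 0xFFFFFF00)))  # 0x200 (512 ms), not an overflow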

    Read the article

  • Putting together CSV Cells in Excel with a macro

    - by Eric Kinch
    So, I have a macro to export data into CSV format and it's working great (code at the bottom). The problem is the data I am putting into it. When I put the data in question in, it comes out as:

        Firstname,Lastname,username,password,description

    I'd like to change it so I get:

        Firstname Lastname,Firstname,Lastname,username,password,description

    What I'd like to do is manipulate my existing macro to accomplish this. I'm not so good at VBS, so any input or a shove in the right direction would be fantastic. Thanks!

        Sub Make_CSV()
            Dim sFile As String
            Dim sPath As String
            Dim sLine As String
            Dim r As Integer
            Dim c As Integer

            r = 1 'Starting row of data
            sPath = "C:\CSVout\"
            sFile = "MyText_" & Format(Now, "YYYYMMDD_HHMMSS") & ".CSV"

            Close #1
            Open sPath & sFile For Output As #1

            Do Until IsEmpty(Range("A" & r)) 'You can also Do Until r = 17 (get the first 16 cells)
                sLine = ""
                c = 1
                Do Until IsEmpty(Cells(1, c)) 'Number of Columns - You could use a FOR / NEXT loop instead
                    sLine = sLine & """" & Replace(Cells(r, c), ";", ":") & """" & ","
                    c = c + 1
                Loop
                Print #1, Left(sLine, Len(sLine) - 1) 'Remove the trailing comma
                r = r + 1
            Loop

            Close #1
        End Sub

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow regarding creating tables on the fly, where this conversation ensued:

        This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for? – deceze

        @deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary amount of columns with an arbitrary amount of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas. – Hailwood

        There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables! – deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?

    Read the article

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says: When merging your branch back to the trunk, however, the underlying mathematics is quite different. Your feature branch is now a mishmosh of both duplicated trunk changes and private branch changes, so there's no simple contiguous range of revisions to copy over. By specifying the --reintegrate option, you're asking Subversion to carefully replicate only those changes unique to your branch. (And in fact, it does this by comparing the latest trunk tree with the latest branch tree: the resulting difference is exactly your branch changes!) So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merge (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right? In this blog post, Mark Phippard writes: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html If we include those synched revisions, then we merge back changes that already exist in trunk. This yields unnecessary and confusing conflicts. Can somebody give me an example of when dropping reintegrate gives me unnecessary conflicts?

    Read the article

  • Is there a definitive reference document for Ruby syntax?

    - by JSW
    I'm searching for a definitive document on Ruby syntax. I know about the definitive documents for the core API and standard library, but what about the syntax itself? For instance, such a document should cover: reserved words, string literals syntax, naming rules for variables/classes/modules, all the conditional statements and their permutations, and so forth. I know there are many books and tutorials, yes, but every one of them is essentially a tutorial, each one having a range of different depth and focus. They will all, by necessity of brevity and narrative flow, omit certain details of the language that the author deems insignificant. For instance, did you know that you can use a case statement without an initial case value, and it will then execute the first true when clause? Any given Ruby book or tutorial may or may not cover that particular lesser-known functionality of the case syntax. It's not discussed in the section in "Programming Ruby" about case statements. But that is just one small example. So far the best documentation I've found is the rubyspec project, which appears to be an attempt to write a complete test suite for the language. That's not bad, but it's a bit hard to use from a practical standpoint as a developer working on my own projects. Am I just missing something or is there really no definitive readable document defining the whole of Ruby syntax?

    Read the article

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date')

    send that queryset to object_list, and be able to see only the comments, ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 1/2, 1/5, and 12/3, "Person B" has no comments, and "Person C" has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comments.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now, if I execute the command above, I get a queryset returning (PersonA, PersonC, PersonA), but if I try rendering that in a template, each person's comment_set will contain all their comments, even the ones outside the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered based on the comment and put into an object_list generic view. This would normally be fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.
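
    For what it's worth, a common pattern for this output shape is to paginate the Comment queryset and follow the foreign key back to Person with select_related, which gives one row per comment without a raw JOIN. That does invert the question's stated preference, so treat it only as a sketch of the alternative; the view and template names are made up, and the old function-based object_list view is assumed:

        from datetime import date
        from django.views.generic import list_detail      # Django 1.x function-based generic views
        from myapp.models import Comment                  # hypothetical app path

        def comment_list(request):
            qs = (Comment.objects
                  .filter(date__gte=date(2011, 1, 1))
                  .select_related('person')                # pulls the Person row in the same query
                  .order_by('date'))
            return list_detail.object_list(
                request,
                queryset=qs,
                paginate_by=25,
                template_name='comments/comment_list.html',   # hypothetical template
            )

        # In the template each item is a Comment, so the person's fields are reachable
        # as {{ object.person.name }} and the date as {{ object.date }}.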

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):

        starttime    the time from which you can contact the provider
        endtime      the final hour at which you can contact the provider
        region_id    the region where the provider resides; in the USA: California, Texas, etc., in the UK: England, Scotland, etc.

    starttime and endtime are time without time zone columns, but, "indirectly", their values carry the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range includes the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", i.e. they may change during summer time. However, this change is specific to the region and is not always carried out at the same moment: EGT <-> EGST, ART <-> ARST, etc.

    The questions are:

    1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve this purpose?
    2. Is there a better approach to solve this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I found these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00

    I should get the following results: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has changed)
        Server time = 02:00:00

    I should get the following results: idproviders = 1 and 2.
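
    A hedged sketch of the usual approach: store an IANA zone name per region (a column the schema above does not have yet, e.g. 'America/Chicago') and convert the current UTC time into each region's local time with the tz database, which already knows the DST rules, so no web service is needed:

        from datetime import datetime, time
        from zoneinfo import ZoneInfo   # Python 3.9+ standard library

        def provider_is_available(starttime, endtime, region_zone, now_utc):
            """True if the provider's local wall-clock time falls inside [starttime, endtime).

            region_zone is an IANA name like 'America/Chicago'; the tz database applies
            the correct standard/summer offset for the given date.
            """
            local_now = now_utc.astimezone(ZoneInfo(region_zone)).time()
            return starttime <= local_now < endtime

        # e.g. a provider contactable 03:00-17:00 local time, checked at 02:00 UTC in June:
        now = datetime(2024, 6, 1, 2, 0, tzinfo=ZoneInfo("UTC"))
        print(provider_is_available(time(3, 0), time(17, 0), "America/Chicago", now))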

    Read the article

  • Getting an unhandled error and the connection is lost when a client tries to communicate with a chat server in Twisted

    - by user2433888
    This is my server code:

        from twisted.internet.protocol import Protocol, Factory
        from twisted.internet import reactor

        class ChatServer(Protocol):

            def connectionMade(self):
                print "A Client Has Connected"
                self.factory.clients.append(self)
                print "clients are ", self.factory.clients
                self.transport.write('Hello,Welcome to the telnet chat to sign in type aim:YOUR NAME HERE to send a messsage type msg:YOURMESSAGE ' + '\n')

            def connectionLost(self, reason):
                self.factory.clients.remove(self)
                self.transport.write('Somebody was disconnected from the server')

            def dataReceived(self, data):
                #print "data is", data
                a = data.split(':')
                if len(a) > 1:
                    command = a[0]
                    content = a[1]
                    msg = ""
                    if command == "iam":
                        self.name + "has joined"
                    elif command == "msg":
                        ma=sg = self.name + ":" + content
                        print msg
                    for c in self.factory.clients:
                        c.message(msg)

            def message(self, message):
                self.transport.write(message + '\n')

        factory = Factory()
        factory.protocol = ChatServer
        factory.clients = []
        reactor.listenTCP(80, factory)
        print "Iphone Chat server started"
        reactor.run()

    The above code runs successfully, but when I connect a client to this chat server (by typing telnet localhost 80) and try to write a message, the connection is lost and the following errors occur:

        Iphone Chat server started
        A Client Has Connected
        clients are  [<__main__.ChatServer instance at 0x024AC0A8>]
        Unhandled Error
        Traceback (most recent call last):
          File "C:\Python27\lib\site-packages\twisted\python\log.py", line 84, in callWithLogger
            return callWithContext({"system": lp}, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\log.py", line 69, in callWithContext
            return context.call({ILogContext: newCtx}, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\context.py", line 81, in callWithContext
            return func(*args,**kw)
        --- ---
          File "C:\Python27\lib\site-packages\twisted\internet\selectreactor.py", line 150, in _doReadOrWrite
            why = getattr(selectable, method)()
          File "C:\Python27\lib\site-packages\twisted\internet\tcp.py", line 199, in doRead
            rval = self.protocol.dataReceived(data)
          File "D:\chatserverultimate.py", line 21, in dataReceived
            content = a[1]
        exceptions.IndexError: list index out of range

    Where am I going wrong?
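
    A hedged sketch of a dataReceived that avoids the two visible problems (the name is never stored for iam:, and msg can reach the broadcast loop unset); it strips the telnet line ending, splits only on the first colon, and ignores lines without one. This is an illustration of the fix, not the original author's code:

        # inside ChatServer
        def dataReceived(self, data):
            line = data.strip()             # telnet sends a trailing CRLF
            if ':' not in line:
                return                      # ignore lines that are not command:content
            command, content = line.split(':', 1)
            if command == "iam":
                self.name = content
                msg = self.name + " has joined"
            elif command == "msg":
                msg = getattr(self, "name", "anonymous") + ": " + content
            else:
                return
            for c in self.factory.clients:
                c.message(msg)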

    Read the article

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days before suddenly starting to work again. The errors range from not being able to view the tree root in a browser, down to being able to enter parameters on a report but crashing before the report can generate. Looking at the logs, there is one error and one warning which seem to correspond somewhat.

    ERROR:

        Event Type: Error
        Event Source: Report Server (SQL2K5)
        Event Category: Management
        Event ID: 107
        Date: 2/13/2009
        Time: 11:17:19 AM
        User: N/A
        Computer: ********
        Description: Report Server (SQL2K5) cannot connect to the report server database.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    WARNING (always comes before the previous error):

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 2/13/2009 11:06:48 AM
        Event time (UTC): 2/13/2009 5:06:48 PM
        Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772
        Event sequence: 550
        Event occurrence: 5
        Event detail code: 0
        Process information:
            Process ID: 5368
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: ReportServerException
            Exception message: For more information about this error navigate to the report server on the local server machine, or enable remote errors.

    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Does anyone out there have any ideas on what could be causing these issues?

    Edit: Tried both suggestions below several months ago, to no avail. This issue hasn't arisen since; maybe something out of my control has changed.

    Read the article

  • Which class should store the lookup table?

    - by max
    The world contains agents at different locations, with only a single agent at any location. Each agent knows where it's at, but I also need to quickly check if there's an agent at a given location. Hence, I also maintain a map from locations to agents. I have a problem deciding where this map belongs: class World, class Agent (as a class attribute), or elsewhere.

    In the following I put the lookup table, agent_locations, in class World. But now agents have to call world.update_agent_location every time they move. This is very annoying; what if I decide later to track other things about the agents, apart from their locations? Would I need to add calls back to the world object all across the Agent code?

        class World:
            def __init__(self, n_agents):
                # ...
                self.agents = {}
                self.agent_locations = {}
                for id in range(n_agents):
                    x, y = self.find_location()
                    agent = Agent(self, x, y)
                    self.agents.append(agent)
                    self.agent_locations[x, y] = agent

            def update_agent_location(self, agent, x, y):
                del self.agent_locations[agent.x, agent.y]
                self.agent_locations[x, y] = agent

            def update(self):  # next step in the simulation
                for agent in self.agents:
                    agent.update()  # next step for this agent
                # ...

        class Agent:
            def __init__(self, world, x, y):
                self.world = world
                self.x, self.y = x, y

            def move(self, x1, y1):
                self.world.update_agent_location(self, x1, y1)
                self.x, self.y = x1, y1

            def update():
                # find a good location that is not occupied and move there
                for x, y in self.valid_locations():
                    if not self.location_is_good(x, y):
                        continue
                    if self.world.agent_locations[x, y]:  # location occupied
                        continue
                    self.move(x, y)

    I can instead put agent_locations in class Agent as a class attribute. But that only works when I have a single World object. If I later decide to instantiate multiple World objects, the lookup tables would need to be world-specific. I am sure there's a better solution...

    EDIT: I added a few lines to the code to show how agent_locations is used. Note that it's only used from inside Agent objects, but I don't know if that would remain the case forever.
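
    One common way to keep the bookkeeping in a single place is to let World own all movement and have agents request moves rather than perform them. A hedged sketch of that shape (the method names are illustrative, and valid_locations/location_is_good are assumed to exist as in the question):

        class World:
            def __init__(self):
                self.agent_locations = {}          # (x, y) -> agent; World is the single owner

            def is_free(self, x, y):
                return (x, y) not in self.agent_locations

            def move_agent(self, agent, x, y):
                """All location bookkeeping happens here, so Agent never touches the table."""
                if not self.is_free(x, y):
                    raise ValueError("location occupied")
                self.agent_locations.pop((agent.x, agent.y), None)
                self.agent_locations[x, y] = agent
                agent.x, agent.y = x, y

        class Agent:
            def __init__(self, world, x, y):
                self.world, self.x, self.y = world, x, y

            def update(self):
                for x, y in self.valid_locations():
                    if self.location_is_good(x, y) and self.world.is_free(x, y):
                        self.world.move_agent(self, x, y)
                        break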

    Read the article

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6, and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code:

        import Queue, sys, threading

        processed = []

        def consumer():
            while True:
                file = dirlist.get(block=True)
                if file in processed:
                    print "Ignoring %s" % file
                else:
                    # do stuff here
                    dirlist.task_done()

        dirlist = Queue.Queue()
        for f in os.listdir("/some/dir"):
            dirlist.put(f)

        max_threads = 8
        for i in range(max_threads):
            thr = Thread(target=consumer)
            thr.start()

        dirlist.join()

    The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?
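
    One thing the pseudo-code makes easy to miss is that every get() needs a matching task_done() or join() never completes. A hedged sketch of a consumer that always pairs them, with a poison-pill value so the workers can exit cleanly (the pill and the set for processed are additions of this sketch, not part of the original program):

        import Queue, threading, os

        dirlist = Queue.Queue()
        processed = set()            # membership tests are O(1), unlike a list
        STOP = object()              # poison pill so workers can exit cleanly

        def consumer():
            while True:
                item = dirlist.get(block=True)
                try:
                    if item is STOP:
                        return
                    if item in processed:
                        print "Ignoring %s" % item
                    else:
                        pass         # do stuff here
                finally:
                    dirlist.task_done()   # always called, even for skipped files

        for f in os.listdir("/some/dir"):
            dirlist.put(f)

        max_threads = 8
        for _ in range(max_threads):
            threading.Thread(target=consumer).start()

        dirlist.join()               # returns once every queued file is accounted for
        for _ in range(max_threads):
            dirlist.put(STOP)        # let the workers fall out of their loops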

    Read the article

  • Convert an int to a list of individual digits faster?

    - by user478514
    All, I want to define a converter that turns int(987654321) into [9, 8, 7, 6, 5, 4, 3, 2, 1]. If the int has fewer than 9 digits, for example 10, the list should be padded to [0, 0, 0, 0, 0, 0, 0, 1, 0]; if it has more than 9 digits, for example 9987654321, the list should be [9, 9, 8, 7, 6, 5, 4, 3, 2, 1].

        >>> i
        987654321
        >>> l
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> z = [0]*(len(unit) - len(str(l)))
        >>> z.extend(l)
        >>> l = z
        >>> unit
        [100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1]
        >>> sum([x*y for x,y in zip(l, unit)])
        987654321
        >>> int("".join([str(x) for x in l]))
        987654321
        >>> l1 = [int(x) for x in str(i)]
        >>> z = [0]*(len(unit) - len(str(l1)))
        >>> z.extend(l1)
        >>> l1 = z
        >>> l1
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> a = [i//x for x in unit]
        >>> b = [a[x] - a[x-1]*10 for x in range(9)]
        >>> if len(b) = len(a): b[0] = a[0]  # fix the a[-1] issue
        >>> b
        [9, 8, 7, 6, 5, 4, 3, 2, 1]

    I tested the solutions above but found they may not be as fast or as simple as I want, and they may have a length-related bug. Can anyone share a better solution for this kind of conversion? Thanks!
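
    For what it's worth, a minimal sketch of the string-based version with the padding built in: zfill keeps at least 9 digits, and longer numbers simply produce longer lists.

        def digits(n, min_width=9):
            """987654321 -> [9, 8, ..., 1]; 10 -> [0, 0, 0, 0, 0, 0, 0, 1, 0]."""
            return [int(ch) for ch in str(n).zfill(min_width)]

        assert digits(987654321) == [9, 8, 7, 6, 5, 4, 3, 2, 1]
        assert digits(10) == [0, 0, 0, 0, 0, 0, 0, 1, 0]
        assert digits(9987654321) == [9, 9, 8, 7, 6, 5, 4, 3, 2, 1]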

    Read the article

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string
    # you need to find the number of times the pattern matches in the string
    # any one liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """find the number of times yourPattern occurs in yourString"""
            count = 0
            matchTimes = 0

            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0

                # special case to return to 2, there could be more this type of conditions
                # so this type of if-conditionals are screaming for a havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1

                # the work horse
                # it could be simpler by breaking the intial string of lenght 'l'
                # to blocks of pattern-length, the number of them is 'l - len(pattern)-1'
                if coin == pattern[count]:
                    count = count + 1

            average = len(yourString)/matchTimes
            return [average, matchTimes]

        # Generates the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random()*2)]

        pattern = [1, 0, 0]
        result = matchIt(myString, pattern)

        print("The sample had "+str(result[1])+" matches and its size was "+str(len(myString))+".\n"
              + "So it took "+str(result[0])+" steps in average.\n"
              + "RESULT: "+str([a for a in "FAILURE" if result[0] != 8]))

        # Sample Output
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
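
    A hedged sketch of the simpler structure the question is asking about: slide a window over the sequence and compare slices, which removes the per-character state machine entirely. Note it counts overlapping matches, which may or may not match the original intent.

        import random

        def count_matches(sequence, pattern):
            """Number of (possibly overlapping) positions where pattern occurs in sequence."""
            n, m = len(sequence), len(pattern)
            return sum(1 for i in range(n - m + 1) if sequence[i:i + m] == pattern)

        bits = [random.randint(0, 1) for _ in range(10000)]
        pattern = [1, 0, 0]
        matches = count_matches(bits, pattern)
        print("The sample had %d matches in %d bits (about 1 in %.1f)."
              % (matches, len(bits), len(bits) / float(matches)))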

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and its performance was unacceptable. The application does a lot of inserting and manipulating data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically, "n" random Ints were stored in maps and then retrieved from them, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to Java, smaller is worse:

                     Update    Read
        Mutable      16.06%    76.51%
        Immutable    31.30%    20.68%

    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, and more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad relative to both the Scala immutable map and plain Java. What is the explanation?

    Read the article

  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (Item) and the other holds status changes and other information for these items (Details). The tables are set up as follows:

        Item                 Details
            itemID               detID
            itemName             itemID
            itemDate             modlvl
                                 status
                                 detDate

    All items have at least one record in the Details table, but over time the status or the modification level has changed (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes. I want to display a table on my web page, using PHP, that identifies the different mod levels of the items and shows a count of each of the current statuses of the items.

    EDIT: This is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. Status representations are:

        1  In Use
        2  In Store
        3  Being repaired
        4  In Transit
        5  For Disposal
        6  Disposed
        7  Lost

    Item

        itemID  OrigMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

    Details

        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

        MoLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1      3    3      1     0        0      1      1     9
        2      0    1      0     0        0      0      0     1
        3      0    1      0     0        0      0      0     1
        Total  3    5      1     0        0      1      1     11
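
    A hedged sketch of one way to build that summary: take each item's most recent Details row, then pivot on status with conditional sums and a rollup row for the totals. The table and column names follow the question; treat the SQL as illustrative rather than tested against the real schema (ties on detDate, for instance, are not handled).

        # MySQL treats a boolean expression as 0/1, so SUM(status = n) counts rows.
        SUMMARY_SQL = """
            SELECT d.modlvl            AS MoLvl,
                   SUM(d.status = 1)   AS `Use`,
                   SUM(d.status = 2)   AS Store,
                   SUM(d.status = 3)   AS Repd,
                   SUM(d.status = 4)   AS Transit,
                   SUM(d.status = 5)   AS Displ,
                   SUM(d.status = 6)   AS Dispd,
                   SUM(d.status = 7)   AS Lost,
                   COUNT(*)            AS Total
            FROM details d
            JOIN (SELECT itemID, MAX(detDate) AS lastDate
                  FROM details
                  GROUP BY itemID) latest
              ON latest.itemID = d.itemID AND latest.lastDate = d.detDate
            GROUP BY d.modlvl WITH ROLLUP
        """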

    Read the article

  • Setting up SVN (Subversion) to manage our company's files; how to exclude large files from being versioned?

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc. — anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contract, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as that would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go at all. I like this system since it keeps versions of all my documents, and at the same time I will essentially have 3 backups in 3 different locations (3 local copies), so there is no need for separate backups. I am unsure of how SVN would perform as purely a huge file repository.

    Read the article

  • Identifying the GeoPoint that trigger an onTap call

    - by Akroy
    I'm developing a Google Maps app on Android. I have a number of GeoPoints that I'm displaying by adding them as OverlayItems to an ItemizedOverlay. This works well for displaying them and bringing up a nice box when I click them, however I'm trying to put info in the box it brings up. Thus, I've extended ItemizedOverlay with my own class, and I'm overriding onTap (final GeoPoint p, final MapView mapView). At first I thought that this would be very simple, as one of the parameters is the GeoPoint, so I would know exactly which GeoPoint was clicked. However, from what I can tell, the GeoPoint argument there is the GeoPoint for where the user actually touched. Given the range the user can touch and still trigger the onTap, that GeoPoint isn't very helpful for knowing precisely which GeoPoint was actually touched. I'm currently checking the parameter GeoPoint against all my existing GeoPoints and seeing which it's closest to. This seems like a super hacky abstraction inversion. Is there a better way to know what was actually tapped?

    Read the article

  • Can isdigit legitimately be locale dependent in C

    - by cdev
    In the section covering setlocale, the ANSI C standard states in a footnote that the only ctype.h functions whose behaviour is not affected by the current locale are isdigit and isxdigit. The Microsoft implementation of isdigit is locale dependent because, for example, in locales using code page 1250 isdigit only returns non-zero for characters in the range 0x30 ('0') - 0x39 ('9'), whereas in locales using code page 1252 isdigit also returns non-zero for the superscript digits 0xB2 ('²'), 0xB3 ('³') and 0xB9 ('¹'). Is Microsoft in violation of the C standard by making isdigit locale dependent? In this question I am primarily interested in C90, which Microsoft claims to conform to, rather than C99. Additional background: Microsoft's own documentation of setlocale incorrectly states that isdigit is unaffected by the LC_CTYPE part of the locale. The section of the C standard that covers the ctype.h functions contains some wording that I consider ambiguous: "The behavior of these functions is affected by the current locale. Those functions that have locale-specific aspects only when not in the "C" locale are noted below." I consider this ambiguous because it is unclear what it is trying to say about functions such as isdigit for which there are no notes about locale-specific aspects. It might be trying to say that such functions must be assumed to be locale dependent, in which case Microsoft's implementation of isdigit would be OK. (Except that the footnote I mentioned earlier seems to contradict this interpretation.)

    Read the article
