Search Results

Search found 16680 results on 668 pages for 'python datetime'.

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance: 1000 cubes brings performance below 50 fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows:

    Done once:
    - set up a vertex buffer and array buffer containing my cube vertices in model space
    - set up an array buffer indexing the cube for drawing as 12 triangles

    Done for each frame:
    - update uniform values used by the vertex shader to move all cubes at once

    Done for each cube, for each frame:
    - update uniform values used by the vertex shader to move each cube to its position
    - call glDrawElements to draw the positioned cube

    Is this a sane method? If not, how does one go about something like this? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test is below (it depends on gletools and pyglet). I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, and I'll move to something a little less insane for the creation of the vertex buffers and such later on.

        import pyglet
        from pyglet.gl import *
        from pyglet.window import key
        from numpy import deg2rad, tan
        from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader

        vertexData = [-0.5, -0.5, -0.5, 1.0,  -0.5,  0.5, -0.5, 1.0,
                       0.5, -0.5, -0.5, 1.0,   0.5,  0.5, -0.5, 1.0,
                      -0.5, -0.5,  0.5, 1.0,  -0.5,  0.5,  0.5, 1.0,
                       0.5, -0.5,  0.5, 1.0,   0.5,  0.5,  0.5, 1.0]

        elementArray = [2, 1, 0, 1, 2, 3,  ## back face
                        4, 7, 6, 4, 5, 7,  ## front face
                        1, 3, 5, 3, 7, 5,  ## top face
                        2, 0, 4, 2, 4, 6,  ## bottom face
                        1, 5, 4, 0, 1, 4,  ## left face
                        6, 7, 3, 6, 3, 2]  ## right face

        def toGLArray(input):
            return (GLfloat*len(input))(*input)

        def toGLushortArray(input):
            return (GLushort*len(input))(*input)

        def initPerspectiveMatrix(aspectRatio=1.0, fov=45):
            frustumScale = 1.0 / tan(deg2rad(fov) / 2.0)
            fzNear = 0.5
            fzFar = 300.0
            perspectiveMatrix = [frustumScale*aspectRatio, 0.0, 0.0, 0.0,
                                 0.0, frustumScale, 0.0, 0.0,
                                 0.0, 0.0, (fzFar+fzNear)/(fzNear-fzFar), -1.0,
                                 0.0, 0.0, (2*fzFar*fzNear)/(fzNear-fzFar), 0.0]
            return perspectiveMatrix

        class ModelObject(object):
            vbo = GLuint()
            vao = GLuint()
            eao = GLuint()
            initDone = False
            verticesPool = []
            indexPool = []

            def __init__(self, vertices, indexing):
                super(ModelObject, self).__init__()
                if not ModelObject.initDone:
                    glGenVertexArrays(1, ModelObject.vao)
                    glGenBuffers(1, ModelObject.vbo)
                    glGenBuffers(1, ModelObject.eao)
                    glBindVertexArray(ModelObject.vao)
                    initDone = True
                self.numIndices = len(indexing)
                self.offsetIntoVerticesPool = len(ModelObject.verticesPool)
                ModelObject.verticesPool.extend(vertices)
                self.offsetIntoElementArray = len(ModelObject.indexPool)
                ModelObject.indexPool.extend(indexing)
                glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo)
                glEnableVertexAttribArray(0)  # position
                glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0)
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao)
                glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4,
                             toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW)
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2,
                             toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW)

            def draw(self):
                glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT,
                               self.offsetIntoElementArray)

        class PositionedObject(object):
            def __init__(self, mesh, pos, objOffsetUf):
                super(PositionedObject, self).__init__()
                self.mesh = mesh
                self.pos = pos
                self.objOffsetUf = objOffsetUf

            def draw(self):
                glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2])
                self.mesh.draw()

        w = 800
        h = 600
        AR = float(h)/float(w)
        window = pyglet.window.Window(width=w, height=h, vsync=False)
        window.set_exclusive_mouse(True)
        pyglet.clock.set_fps_limit(None)

        ## input
        forward = [False]
        left = [False]
        back = [False]
        right = [False]
        up = [False]
        down = [False]
        inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right,
                  key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right,
                  key.PAGEUP: up, key.PAGEDOWN: down}

        ## camera
        camX = 0.0
        camY = 0.0
        camZ = -1.0

        def simulate(delta):
            global camZ, camX, camY
            scale = 10.0
            move = scale*delta
            if forward[0]: camZ += move
            if back[0]: camZ += -move
            if left[0]: camX += move
            if right[0]: camX += -move
            if up[0]: camY += move
            if down[0]: camY += -move
        pyglet.clock.schedule(simulate)

        @window.event
        def on_key_press(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys(): inputs[symbol][0] = True

        @window.event
        def on_key_release(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys(): inputs[symbol][0] = False

        ## uniforms for shaders
        camOffsetUf = GLuint()
        objOffsetUf = GLuint()
        perspectiveMatrixUf = GLuint()
        camRotationUf = GLuint()

        program = ShaderProgram(
            VertexShader('''
            #version 330
            layout(location = 0) in vec4 objCoord;
            uniform vec3 objOffset;
            uniform vec3 cameraOffset;
            uniform mat4 perspMx;
            void main()
            {
                mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f);
                mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            objOffset.x, objOffset.y, objOffset.z, 1.0f);
                vec4 modelCoord = objCoord;
                vec4 positionedModel = translateObject*modelCoord;
                vec4 cameraPos = translateCamera*positionedModel;
                gl_Position = perspMx * cameraPos;
            }'''),
            FragmentShader('''
            #version 330
            out vec4 outputColor;
            const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
            void main()
            {
                outputColor = fillColor;
            }''')
        )

        shapes = []

        def init():
            global camOffsetUf, objOffsetUf
            with program:
                camOffsetUf = glGetUniformLocation(program.id, "cameraOffset")
                objOffsetUf = glGetUniformLocation(program.id, "objOffset")
                perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx")
                glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE,
                                   toGLArray(initPerspectiveMatrix(AR)))
                obj = ModelObject(vertexData, elementArray)
                nb = 20
                for i in range(nb):
                    for j in range(nb):
                        for k in range(nb):
                            shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf))
            glEnable(GL_CULL_FACE)
            glCullFace(GL_BACK)
            glFrontFace(GL_CW)
            glEnable(GL_DEPTH_TEST)
            glDepthMask(GL_TRUE)
            glDepthFunc(GL_LEQUAL)
            glDepthRange(0.0, 1.0)
            glClearDepth(1.0)

        def update(dt):
            print pyglet.clock.get_fps()
        pyglet.clock.schedule_interval(update, 1.0)

        @window.event
        def on_draw():
            with program:
                pyglet.clock.tick()
                glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
                glUniform3f(camOffsetUf, camX, camY, camZ)
                for shape in shapes:
                    shape.draw()

        init()
        pyglet.app.run()
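
    One direction for cutting the per-cube glUniform/glDrawElements pair down to a single draw call is instanced rendering: upload a batch of per-cube offsets once, issue one instanced draw, and index the offsets with gl_InstanceID in the vertex shader. A rough sketch under the assumption that the GL binding exposes glDrawElementsInstanced (core in OpenGL 3.1+) and glUniform3fv; objOffsetsUf and the batch size of 500 are hypothetical:

        # Vertex shader change (GLSL 3.30), replacing the per-draw objOffset uniform:
        #     uniform vec3 objOffsets[500];
        #     ...
        #     vec3 offset = objOffsets[gl_InstanceID];
        #     gl_Position = perspMx * (translateCamera * (objCoord + vec4(offset, 0.0)));

        offsets = []
        for shape in shapes[:500]:                 # one batch of up to 500 cubes
            offsets.extend(shape.pos)
        glUniform3fv(objOffsetsUf, len(offsets) // 3, toGLArray(offsets))

        # One call draws the whole batch; the repetition index is exposed as gl_InstanceID.
        glDrawElementsInstanced(GL_TRIANGLES, len(elementArray), GL_UNSIGNED_SHORT,
                                0, len(offsets) // 3)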

  • Django-registration and ReCaptcha integration - how to pass the user's IP

    - by knuckfubuck
    New to django and trying to set up django-registration 0.8 with recaptcha-client. I followed the advice posted in the answer to this question. I used the custom form and custom backend from that post and the widget and field from this tutorial. My form is displaying properly with the recaptcha widget, but when I submit it, it throws the error about the missing IP. What's the best way to pass the IP using django-registration?
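
    A sketch of one way to pass the client's address, assuming the widget and field setup from the tutorials mentioned above: let the form accept the IP, and have the view (or the custom registration backend) hand it request.META['REMOTE_ADDR']. recaptcha-client's captcha.submit takes the remote IP as its last argument; the field names and the settings name below are assumptions.

        from django import forms
        from django.conf import settings
        from recaptcha.client import captcha
        from registration.forms import RegistrationForm

        class RecaptchaRegistrationForm(RegistrationForm):
            recaptcha_challenge_field = forms.CharField(widget=forms.HiddenInput)
            recaptcha_response_field = forms.CharField(label='Captcha')

            def __init__(self, *args, **kwargs):
                # The caller passes the client's IP in; pop it before the parent sees kwargs.
                self.remote_ip = kwargs.pop('remote_ip', None)
                super(RecaptchaRegistrationForm, self).__init__(*args, **kwargs)

            def clean(self):
                cleaned = super(RecaptchaRegistrationForm, self).clean()
                check = captcha.submit(cleaned.get('recaptcha_challenge_field'),
                                       cleaned.get('recaptcha_response_field'),
                                       settings.RECAPTCHA_PRIVATE_KEY,   # assumed setting
                                       self.remote_ip)
                if not check.is_valid:
                    raise forms.ValidationError('Captcha was incorrect, please try again.')
                return cleaned

        # In the view or custom backend that builds the form:
        # form = RecaptchaRegistrationForm(data=request.POST,
        #                                  remote_ip=request.META['REMOTE_ADDR'])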

  • Regex to remove conditional comments

    - by cnu
    I want a regex which can match conditional comments in an HTML source page so I can remove only those. I want to preserve the regular comments. I would also like to avoid using the .*? notation if possible. The text is

        foo
        <!--[if IE]>
        <style type="text/css">
        ul.menu ul li{
            font-size: 10px;
            font-weight:normal;
            padding-top:0px;
        }
        </style>
        <![endif]-->
        bar

    and I want to remove everything between <!--[if IE]> and <![endif]-->.

    EDIT: It is because of BeautifulSoup that I want to remove these tags. BeautifulSoup fails to parse and gives an incomplete source.

    EDIT2: [if IE] isn't the only condition. There are lots more, and I don't have any list of all possible combinations.

    EDIT3: Vinko Vrsalovic's solution works, but the actual reason BeautifulSoup failed was a rogue comment within the conditional comment, like

        <!--[if lt IE 7.]>
        <script defer type="text/javascript" src="pngfix_253168.js"></script><!--png fix for IE-->
        <![endif]-->

    Notice the <!--png fix for IE--> comment? Though my problem was solved, I would love to get a regex solution for this.
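
    For reference, a minimal sketch of doing the removal with re.sub; it does fall back on a lazy .*?, but because the lazy match stops at the first <![endif]-->, the rogue inner comment from EDIT3 is removed together with the surrounding conditional comment, while plain comments are left alone:

        import re

        # Matches <!--[if ...]> ... <![endif]--> across newlines; ordinary comments have no "[if".
        conditional_comment = re.compile(r'<!--\[if [^\]]*\]>.*?<!\[endif\]-->', re.DOTALL)

        cleaned_html = conditional_comment.sub('', html_source)   # html_source: the page text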

  • Scrapy domain_name for spider

    - by Zeynel
    From the Scrapy tutorial: domain_name identifies the Spider. It must be unique, that is, you can't set the same domain name for different Spiders. Does this mean that domain_name must be a valid domain name, like

        domain_name = 'example.com'

    or can I name it

        domain_name = 'ex1'

    The problem is I had a spider that worked with

        domain_name = 'whitecase.com'

    Now I created a new spider as an instance of CrawlSpider and named it

        domain_name = 'wc2'

    but I am getting the error "could not find spider for domain 'wc2'".

  • Urllib's urlopen broken on some sites (e.g. StackApps api)

    - by Edan Maor
    I'm using urllib2's urlopen function to try and get a JSON result from the StackOverflow api. The code I'm using:

        >>> import urllib2
        >>> conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
        >>> conn.readline()

    The result I'm getting:

        '\x1f\x8b\x08\x00\x00\x00\x00\x00\x04\x00\xed\xbd\x07`\x1cI\x96%&/m\xca{\x7fJ\...

    I'm fairly new to urllib, but this doesn't seem like the result I should be getting. I've tried it in other places and I get what I expect (the same as visiting the address with a browser gives me: a JSON object). Using urlopen on other sites (e.g. "http://google.com") works fine, and gives me actual html. I've also tried using urllib and it gives the same result. I'm pretty stuck, not even knowing where to look to solve this problem. Any ideas?
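
    For what it's worth, the bytes above start with \x1f\x8b, the gzip magic number, so the response body appears to be gzip-compressed. A small sketch of decompressing it with the standard library (Python 2):

        import gzip
        import urllib2
        from StringIO import StringIO

        conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
        data = conn.read()
        if data[:2] == '\x1f\x8b':                      # gzip magic number
            data = gzip.GzipFile(fileobj=StringIO(data)).read()
        print data[:200]                                # should now look like JSON text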

  • Timezones and the DateTimeField - Django

    - by RadiantHex
    Hi folks, I'm trying to implement a "time ago" feature for displaying items on a site. As I'm caching the pages, I wish to use JavaScript to render the "time ago". JavaScript knows the local time and probably the timezone of the local machine, so I could play with that, but that would require hard-coding the server's timezone. Therefore I'm trying to figure out a simple way to pass an ISO 8601 timestamp in GMT time. Is there any simple and straightforward way of doing this? Help would be much appreciated! =)
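
    A minimal sketch of producing such a timestamp on the server, assuming the model stores naive datetimes in the server's local time (the field name created is hypothetical): convert to UTC, then format as ISO 8601 with a trailing Z so the JavaScript side can parse it and compute the "time ago" locally.

        import time
        from datetime import datetime

        def to_iso8601_utc(local_dt):
            """Convert a naive local datetime to an ISO 8601 UTC string, e.g. 2010-05-27T14:03:00Z."""
            timestamp = time.mktime(local_dt.timetuple())        # interpret as server-local time
            return datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%dT%H:%M:%SZ')

        # e.g. embed to_iso8601_utc(item.created) in a data attribute of the cached page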

  • Django Foreign key queries

    - by Hulk
    In the following model:

        class header(models.Model):
            title = models.CharField(max_length = 255)
            created_by = models.CharField(max_length = 255)
            def __unicode__(self):
                return self.id()

        class criteria(models.Model):
            details = models.CharField(max_length = 255)
            headerid = models.ForeignKey(header)
            def __unicode__(self):
                return self.id()

        class options(models.Model):
            opt_details = models.CharField(max_length = 255)
            headerid = models.ForeignKey(header)
            def __unicode__(self):
                return self.id()

    If there is a row in the database for table header such as id=1, title=value-mart, createdby=CEO, how do I query the criteria and options tables to get all the values related to header table id=1? Also, can someone please suggest a good link with query examples? Thanks.
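
    A small sketch of the reverse foreign-key lookups for these models; criteria_set and options_set are the default related names Django generates for the model names above:

        h = header.objects.get(id=1)               # the row with id=1

        related_criteria = h.criteria_set.all()    # all criteria rows pointing at this header
        related_options = h.options_set.all()      # all options rows pointing at this header

        # Equivalent explicit filters:
        related_criteria = criteria.objects.filter(headerid=h)
        related_options = options.objects.filter(headerid__id=1)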

  • How can I show figures separately in matplotlib?

    - by Federico Ramponi
    Say that I have two figures in matplotlib, with one plot per figure:

        import matplotlib.pyplot as plt
        f1 = plt.figure()
        plt.plot(range(0,10))
        f2 = plt.figure()
        plt.plot(range(10,20))

    Then I show both in one shot

        plt.show()

    Is there a way to show them separately, i.e. to show just f1? Or better: how can I manage the figures separately like in the following 'wishful' code (that doesn't work):

        f1 = plt.figure()
        f1.plot(range(0,10))
        f1.show()
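
    A rough sketch of managing the figures explicitly: Figure objects have no plot method, but plotting on an Axes created from each figure gets close to the 'wishful' code, and with a GUI (interactive) backend each figure can be raised through its own window:

        import matplotlib.pyplot as plt

        f1 = plt.figure()
        ax1 = f1.add_subplot(111)        # an Axes belonging only to f1
        ax1.plot(range(0, 10))

        f2 = plt.figure()
        ax2 = f2.add_subplot(111)
        ax2.plot(range(10, 20))

        f1.show()    # with an interactive backend this raises f1's window only;
                     # without one it may warn, and plt.show() still shows every open figure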

  • [Django] Automatically Update Field when a Different Field is Changed

    - by Gordon
    I have a model with a bunch of different fields like first_name, last_name, etc. I also have fields first_name_ud, last_name_ud, etc. that hold the last updated date for the related fields (i.e. when first_name is modified, first_name_ud is set to the current date). Is there a way to make this happen automatically, or do I need to check which fields have changed each time I save an object and then update the related "_ud" fields? Thanks a lot!
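
    A minimal sketch of one common approach, using hypothetical field and model names based on the description: remember the loaded values in __init__ and compare them in save(), stamping the matching *_ud field whenever a tracked field changed.

        from datetime import date
        from django.db import models

        class Person(models.Model):                       # hypothetical model
            first_name = models.CharField(max_length=100)
            first_name_ud = models.DateField(null=True, blank=True)
            last_name = models.CharField(max_length=100)
            last_name_ud = models.DateField(null=True, blank=True)

            TRACKED_FIELDS = ('first_name', 'last_name')

            def __init__(self, *args, **kwargs):
                super(Person, self).__init__(*args, **kwargs)
                # Snapshot of the tracked fields as they were loaded from the database.
                self._loaded = dict((f, getattr(self, f)) for f in self.TRACKED_FIELDS)

            def save(self, *args, **kwargs):
                for f in self.TRACKED_FIELDS:
                    if getattr(self, f) != self._loaded.get(f):
                        setattr(self, f + '_ud', date.today())
                super(Person, self).save(*args, **kwargs)
                self._loaded = dict((f, getattr(self, f)) for f in self.TRACKED_FIELDS)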

  • Problems with Snow Leopard, Django & PIL

    - by Cato Johnston
    Hi, I am having some trouble getting Django & PIL to work properly since upgrading to Snow Leopard. I have installed freetype, libjpeg and then PIL, which tells me:

        --- TKINTER support ok
        --- JPEG support ok
        --- ZLIB (PNG/ZIP) support ok
        --- FREETYPE2 support ok

    but when I try to upload a JPEG through the Django admin interface I get:

        Upload a valid image. The file you uploaded was either not an image or a corrupted image.

    It works fine with PNG files. Any ideas?

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Has anyone some idea on how to do it? In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything from it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13]  # 13 would be index of PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", a.title
        ...     print "Content:", a.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Has anyone some idea? Thank you in advance!
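
    For what it's worth, a rough sketch of pulling content out through the site's portal_catalog once the Zope application root is open; the object id 'Plone' and the portal_type value are assumptions that depend on the actual site:

        plone = app['Plone']                      # id of the PloneSite object in the Zope root
        catalog = plone['portal_catalog']         # every Plone site carries this tool

        for brain in catalog(portal_type='Document'):
            obj = brain.getObject()
            print "Title:", obj.Title()
            print "Path:", '/'.join(obj.getPhysicalPath())
            print "Body:", obj.getText()          # Archetypes accessor for the body HTML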

  • Haystack / Whoosh Index Generation Error

    - by Keith Fitzgerald
    I'm trying to set up Haystack with the Whoosh backend. When I try to generate the index [or run any index command, for that matter] I receive:

        TypeError: Item in ``from list'' not a string

    If I completely remove my search_indexes.py I get the same error [so I'm guessing it can't find that file at all]. What might cause this error? It's set to autodiscover and I'm sure my app is installed because I'm currently using it. Full traceback:

        Traceback (most recent call last):
          File "./manage.py", line 17, in <module>
            execute_manager(settings)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 257, in fetch_command
            klass = load_command_class(app_name, subcommand)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 67, in load_command_class
            module = import_module('%s.management.commands.%s' % (app_name, name))
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 124, in <module>
            handle_registrations()
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 121, in handle_registrations
            search_sites_conf = __import__(settings.HAYSTACK_SITECONF)
          File "/Users/ghostrocket/Development/Redux/website/../website/search_sites.py", line 2, in <module>
            haystack.autodiscover()
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 83, in autodiscover
            app_path = __import__(app, {}, {}, [app.split('.')[-1]]).__path__
        TypeError: Item in ``from list'' not a string

    and here is my search_indexes.py:

        from haystack import indexes
        from haystack import site
        from myproject.models import *

        site.register(myobject)
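
    For context, the failing line passes app.split('.')[-1] as the fromlist of __import__, and on Python 2 that call raises exactly this TypeError when the item is a unicode string rather than a byte string. So one thing worth checking (an assumption about the cause, not a confirmed fix) is whether INSTALLED_APPS contains unicode entries:

        # settings.py -- on Python 2, keep app labels as plain byte strings
        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'haystack',
            'myapp',      # not u'myapp': __import__('myapp', {}, {}, [u'myapp'])
                          # raises "TypeError: Item in ``from list'' not a string"
        )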

  • How to sort a boxplot by the median values in pandas

    - by Chris
    I've got a dataframe outcome2 that I generate a grouped boxplot with in the following manner:

        In [11]: outcome2.boxplot(column='Hospital 30-Day Death (Mortality) Rates from Heart Attack', by='State')
                 plt.ylabel('30 Day Death Rate')
                 plt.title('30 Day Death Rate by State')

    What I'd like to do is sort the plot by the median for each state, instead of alphabetically. Not sure how to go about doing so.
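
    A sketch of one way to do the ordering by hand, assuming the column names from the snippet above: compute the per-state medians, sort them, and hand plt.boxplot one array of values per state in that order.

        import matplotlib.pyplot as plt

        col = 'Hospital 30-Day Death (Mortality) Rates from Heart Attack'

        # States ordered by their median rate (use .order() on very old pandas).
        order = outcome2.groupby('State')[col].median().sort_values().index

        data = [outcome2.loc[outcome2['State'] == state, col].dropna() for state in order]
        plt.boxplot(data)
        plt.xticks(range(1, len(order) + 1), list(order), rotation=90)
        plt.ylabel('30 Day Death Rate')
        plt.title('30 Day Death Rate by State (sorted by median)')
        plt.show()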

  • Remove linebreak at specific position in textfile

    - by williamx
    I have a large textfile, which has linebreaks at column 80 due to console width. Many of the lines in the textfile are not 80 characters long, and are not affected by the linebreak. In pseudocode, this is what I want:

        Iterate through lines in file
        If line matches this regex pattern: ^(.{80})\n(.+)
            Replace this line with a new string consisting of match.group(1) and match.group(2).
            Just remove the linebreak from this line.
        If line doesn't match the regex, skip!

    Maybe I don't need regex to do this?
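
    A small sketch of doing this without a per-line regex, using hypothetical input/output file names: any physical line that is exactly 80 characters long (before its newline) is treated as wrapped and joined to the line that follows it.

        def unwrap(lines, width=80):
            """Yield logical lines, merging physical lines that were broken at `width`."""
            buffered = ''
            for line in lines:
                stripped = line.rstrip('\n')
                if len(stripped) == width:
                    buffered += stripped          # console wrapped here; keep accumulating
                else:
                    yield buffered + stripped + '\n'
                    buffered = ''
            if buffered:
                yield buffered + '\n'

        with open('input.txt') as src, open('output.txt', 'w') as dst:
            dst.writelines(unwrap(src))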

  • Beautiful Soup: how to print a tag while iterating over it

    - by Bunny Rabbit
        <?xml version="1.0" encoding="UTF-8"?>
        <playlist version="1" xmlns="http://xspf.org/ns/0/">
          <trackList>
            <track>
              <location>file:///home/ashu/Music/Collections/randomPicks/ipod%20on%20sep%2009/Coldplay-Sparks.mp3</location>
              <title>Coldplay-Sparks</title>
            </track>
            <track>
              <location>file:///home/ashu/Music/Collections/randomPicks/gud%201s/Coldplay%20Warning%20sign.mp3</location>
              <title>Coldplay Warning sign</title>
            </track>
            ....

    My XML looks like this. I want to get the locations, so I am trying

        from BeautifulSoup import BeautifulSoup as bs
        soup = bs(the_above_xml_text)
        for track in soup.tracklist:
            print track.location.string

    but that is not working, because I am getting

        AttributeError: 'NavigableString' object has no attribute 'location'

    How can I achieve the result? Thanks in advance.
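
    A minimal sketch of why the loop fails and one way around it: iterating over soup.tracklist yields the whitespace NavigableStrings between the tags as well as the <track> tags themselves, so asking for the tags explicitly avoids the AttributeError.

        from BeautifulSoup import BeautifulSoup as bs

        soup = bs(the_above_xml_text)

        # findAll returns only Tag objects, skipping the NavigableString whitespace nodes.
        for track in soup.findAll('track'):
            print track.location.string

        # Or collect the locations directly:
        locations = [loc.string for loc in soup.findAll('location')]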

  • Disable Plone Archetypes index/convert doc/pdf files

    - by hosting-schuppen
    Hi Stackoverflowers, if I rebuild my catalog in Plone I get many messages like this:

        2010-02-18T11:26:09 INFO Archetypes Error while trying to convert file contents to 'text/plain' in .getIndexable() of :
        Unable to find binary "wvHtml" in /sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/jvm/jre/bin

    This happens with doc and pdf files. I don't want to convert docs or pdfs. How can I disable it completely? Thanks for the help!

  • Break nested loop in Django views.py with a function

    - by knuckfubuck
    I have a nested loop that I would like to break out of. After searching this site it seems the best practice is to put the nested loop into a function and use return to break out of it. Is it acceptable to have functions inside the views.py file that are not a view? What is the best practice for the location of this function? Here's the example code from inside my views.py:

        @login_required
        def save_bookmark(request):
            if request.method == 'POST':
                form = BookmarkSaveForm(request.POST)
                if form.is_valid():
                    bookmark_list = Bookmark.objects.all()
                    for bookmark in bookmark_list:
                        for link in bookmark.link_set.all():
                            if link.url == form.cleaned_data['url']:
                                # Do something.
                                break
                            else:
                                # Do something else.
            else:
                form = BookmarkSaveForm()
            return render_to_response('save_bookmark_form.html', {'form': form})
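
    As a sketch of the refactoring being discussed (a plain helper in views.py is fine, or it can live in a small utils module if other views need it), the nested loop moves into a function whose return plays the role of the break:

        def find_link_by_url(url):
            """Return the first Link whose url matches, or None; returning exits both loops."""
            for bookmark in Bookmark.objects.all():
                for link in bookmark.link_set.all():
                    if link.url == url:
                        return link
            return None

        @login_required
        def save_bookmark(request):
            if request.method == 'POST':
                form = BookmarkSaveForm(request.POST)
                if form.is_valid():
                    link = find_link_by_url(form.cleaned_data['url'])
                    if link is not None:
                        pass  # Do something with the existing link.
                    else:
                        pass  # Do something else.
            else:
                form = BookmarkSaveForm()
            return render_to_response('save_bookmark_form.html', {'form': form})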

  • How to Pythonically yield all values from a list?

    - by bodacydo
    Suppose I have a list that I wish not to return but to yield values from. What is the most Pythonic way to do that? Here is what I mean. Thanks to some non-lazy computation I have computed the list ['a', 'b', 'c', 'd'], but my code throughout the project uses lazy computation, so I'd like to yield values from my function instead of returning the whole list. I currently wrote it as follows:

        List = ['a', 'b', 'c', 'd']
        for item in List:
            yield item

    But this doesn't feel Pythonic to me. Looking forward to some suggestions, thanks. Boda Cydo.
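
    Two common alternatives, sketched here: return an iterator over the computed list, or, on Python 3.3 and later, delegate to it with yield from.

        def values():
            computed = ['a', 'b', 'c', 'd']
            return iter(computed)       # callers still consume it lazily, no explicit loop

        def values_py33():
            computed = ['a', 'b', 'c', 'd']
            yield from computed         # generator delegation, Python 3.3+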

  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design model for a script to parse an Excel workbook into two separate workbooks, after pulling data from other locations like the command line and a database. The high-level details are as follows.

    I need to parse an Excel workbook containing a sheet that lists unique question names. The only reliable information that can be parsed from the question name is the book code that identifies the title and edition of the textbook the question is associated with; the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression:

        ^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$

    The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find.

    There will be a minimum of three other columns in this spreadsheet: one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator.

        1 | 1 | qname1 | url | description | url | description ...
        1 | 1 | qname2 | url | description
        1 | 1 | qname3 | url | description | url | description | url |

    The asset can be indicated by a full or partial uniform resource locator; a partial URL will need to be completed before it can be fed into the application. There could theoretically be no limit to the number of asset columns, and the assets will be grouped in columns by type. Sometimes additional data will have to be retrieved from a database or combined with the book code before the asset URL is complete and can be understood by the application that will be using the asset. The type is an abstraction: there are eight types right now, each with its own logic for how the uniform resource locator is handled and/or completed, and I have to add a new type and its logic every three or four months. For each asset URL there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text and squashing MS's obscure code page down to something 7-bit ASCII can handle.)

    Now that all the details are filled in, I can get to the actual problem of parsing the file. I need to split the information in this Excel workbook into two separate workbooks. The first workbook will group all the questions by section in rows, with the first cell being the section doublet and the rest of the cells in the row being the question names.

        1.1 | qname1 | qname2 | qname3 | qname4 |
        1.2 | qname1 | qname2 | qname3 |
        1.3 | qname1 | qname2 | qname3 | qname4 | qname5

    There is no set number of questions for each section, as you can see from the above example. The second workbook is more complicated: there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is a string representing the asset type, the fourth is the full and complete uniform resource locator for the asset, and the fifth column is the optional text description for the asset.

        q1 | mtype1 | atype1 | url | description
        q1 | mtype2 | atype2 | url | description
        q1 | mtype2 | atype3 | url | description
        q2 | mtype1 | atype1 | url | description
        q2 | mtype2 | atype3 | url | description

    For the original six types I did have a script that parsed the source Excel workbook into the other two Excel workbooks, and I was able to add two more types until I ran aground on the implementation of the ninth and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate that without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the tests first this time around.

    I'm stuck with the format for the resulting two workbooks; this script is glue code, and development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers but in the editorial department, editorial is co-sponsor of the project, and I am expected to fix pesky details like this (I'm foaming at the mouth as I type this). I've tried factories, I've tried different object models, but each resulting workbook is so different that when I find a design that works for generating one workbook, the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and/or sympathy.
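
    One design that tends to stay maintainable for this kind of glue code is a registry of per-type handlers (a strategy pattern): the row parser stays generic, each asset type contributes one small class that knows how to complete its URL, and a sub-type with different logic is just another subclass. A rough sketch with hypothetical type codes and helper names:

        class AssetHandler(object):
            """Base handler: subclasses say which type code they own and how to finish its URL."""
            handles = None          # asset type code as it appears in the spreadsheet
            media_type = None       # icon selector written to the per-asset workbook

            def complete_url(self, raw_url, book_code, db):
                raise NotImplementedError

        HANDLERS = {}

        def register(cls):
            """Class decorator that makes a handler discoverable by its type code."""
            HANDLERS[cls.handles] = cls()
            return cls

        @register
        class PdfHandler(AssetHandler):
            handles = 'pdf'                     # hypothetical type code
            media_type = 'document'

            def complete_url(self, raw_url, book_code, db):
                if raw_url.startswith('http'):
                    return raw_url              # already complete
                return 'http://assets.example.com/%s/%s' % (book_code, raw_url)

        @register
        class PdfSupplementHandler(PdfHandler):
            handles = 'pdf-supplement'          # the sub-type with entirely different logic
            def complete_url(self, raw_url, book_code, db):
                return db.lookup_supplement_url(book_code, raw_url)

        def asset_rows(question_name, book_code, assets, db):
            """Yield one row per asset: (qname, media type, asset type, url, description)."""
            for type_code, raw_url, description in assets:
                handler = HANDLERS[type_code]
                yield (question_name, handler.media_type, type_code,
                       handler.complete_url(raw_url, book_code, db), description)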

  • how to handle an incoming email and then send some words back using google-app-engine

    - by zjm1126
        from google.appengine.api import mail

    I read the doc:

        mail.send_mail(sender="[email protected]",
                       to="Albert Johnson <[email protected]>",
                       subject="Your account has been approved",
                       body="""
        Dear Albert:

        Your example.com account has been approved.  You can now visit
        http://www.example.com/ and sign in using your Google Account to
        access new features.

        Please let us know if you have any questions.

        The example.com Team
        """)

    and I know how to send an email using GAE, but how do I detect an incoming email and then do something with it? Thanks.
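
    A minimal sketch of receiving mail on App Engine's Python runtime and replying: turn on the inbound mail service in app.yaml, then register an InboundMailHandler for the /_ah/mail/ addresses; its receive() gets the parsed message and can call mail.send_mail back. The sender placeholder below follows the redacted addresses already shown above.

        # app.yaml (excerpt)
        #   inbound_services:
        #   - mail
        #   handlers:
        #   - url: /_ah/mail/.+
        #     script: handle_incoming_email.py

        from google.appengine.api import mail
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
        from google.appengine.ext.webapp.util import run_wsgi_app

        class IncomingMailHandler(InboundMailHandler):
            def receive(self, message):
                # message.sender, message.subject and message.bodies() describe the incoming mail
                mail.send_mail(sender="[email protected]",   # must be an address the app may send from
                               to=message.sender,
                               subject="Re: " + message.subject,
                               body="Thanks, your message arrived.")

        application = webapp.WSGIApplication([IncomingMailHandler.mapping()])

        def main():
            run_wsgi_app(application)

        if __name__ == '__main__':
            main()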

  • Simple way to decrease values without making a new attribute?

    - by Jam
    I'm making a program where you're firing a 'blaster', and I have 5 ammo. I'm blasting an alien who has 5 health. At the end I instantiate the player and make him blast 6 times to check that the program works correctly. But the way I've done it makes it so that the amount won't decrease. Is there an easy fix to this, or do I just have to make a new attribute for ammo and health? Here's what I have:

        class Player(object):
            """ A player in a shooter game. """
            def blast(self, enemy, ammo=5):
                if ammo>=1:
                    ammo-=1
                    print "You have blasted the alien."
                    print "You have", ammo, "ammunition left."
                    enemy.die(5)
                else:
                    print "You are out of ammunition!"

        class Alien(object):
            """ An alien in a shooter game. """
            def die(self, health=5):
                if health>=1:
                    health-=1
                    print "The alien is wounded. He now has", health, "health left."
                elif health==0:
                    health-=1
                    print "The alien gasps and says, 'Oh, this is it. This is the big one. \n" \
                          "Yes, it's getting dark now. Tell my 1.6 million larvae that I loved them... \n" \
                          "Good-bye, cruel universe.'"
                else:
                    print "The alien's corpse sits up momentarily and says, 'No need to blast me, I'm dead already!"

        # main
        print "\t\tDeath of an Alien\n"
        hero = Player()
        invader = Alien()
        hero.blast(invader)
        hero.blast(invader)
        hero.blast(invader)
        hero.blast(invader)
        hero.blast(invader)
        hero.blast(invader)
        raw_input("\n\nPress the enter key to exit.")
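
    The reason nothing ever goes down is that ammo and health are parameters local to blast() and die(), recreated at 5 on every call; they need to live on the objects themselves. A small sketch of the usual fix, giving each instance its own state in __init__:

        class Player(object):
            """A player in a shooter game."""
            def __init__(self, ammo=5):
                self.ammo = ammo                      # persists between calls to blast()

            def blast(self, enemy):
                if self.ammo >= 1:
                    self.ammo -= 1
                    print "You have blasted the alien."
                    print "You have", self.ammo, "ammunition left."
                    enemy.die()
                else:
                    print "You are out of ammunition!"

        class Alien(object):
            """An alien in a shooter game."""
            def __init__(self, health=5):
                self.health = health                  # persists between calls to die()

            def die(self):
                if self.health >= 1:
                    self.health -= 1
                    print "The alien is wounded. He now has", self.health, "health left."
                else:
                    print "The alien is dead already."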
