Search Results

Search found 31319 results on 1253 pages for 'source engine'.

  • Creating an ISO file from a set of .img files

    - by cafebabe1991
    So I have started building Android from scratch and, after a lot of effort, finally reached the point where the OUT folder contains all the required .img files for an emulator to run, like system.img, ramdisk.img and userdata-qemu.img. After that I ran the emulator command, so the OUT directory is valid and could easily be used to create any other bootable medium. What I want to accomplish: creating an ISO file from all those .img files found at the end of the build process, or a way to just create an ISO out of that complete OUT folder. Any help in this regard would be greatly appreciated.
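
    A minimal sketch (not from the question) of one possible approach: packing the OUT directory into a plain data ISO with genisoimage, driven from Python. The paths and volume label are made-up placeholders, this assumes genisoimage is installed, and a plain data ISO is not the same thing as a bootable Android installer image.

        import subprocess

        out_dir = "/path/to/android/OUT"   # placeholder: the build's OUT directory
        iso_path = "/tmp/android-out.iso"  # placeholder: where to write the ISO

        # -J adds Joliet names, -R adds Rock Ridge attributes, -V sets the volume label.
        subprocess.check_call([
            "genisoimage", "-o", iso_path,
            "-J", "-R", "-V", "ANDROID_OUT",
            out_dir,
        ])
        print("wrote %s" % iso_path)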

    Read the article

  • Most efficient way to fetch and output Content with 2-Level Comments?

    - by awegawef
    I have some content with up to 2 levels of replies. I am wondering what the most efficient way is to fetch and output the replies. I should note that I am planning on storing the comments with fields content_id and reply_to, where reply_to refers to which comment it is in reply to (if any). Any criticism of this design is welcome. In pseudo-code (ish), my first attempt would be:

        # in outputting content CONTENT_ID
        all_comments = fetch all comments where content_id == CONTENT_ID
        root_comments = filter all_comments with reply_to == None
        children_comments = filter all_comments with reply_to != None
        output_comments = list()
        for each root_comment:
            children = filter children_comments, reply_to == root_comment.id
            output_comments.append((root_comment, children))
        send output_comments to template

    Is this the best way to do this? Thanks in advance. Edit: On second thought, I'll want to preserve date order on the comments, so I'll have to do this a bit differently, or at least just sort the comments afterward.
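
    A minimal sketch (not part of the question) of one way to group a flat, date-ordered comment list into (root, children) pairs in a single pass; the Comment attribute names (id, reply_to) are assumptions for illustration.

        from collections import defaultdict

        def group_comments(all_comments):
            """Group a flat comment list into (root, [children]) pairs.

            Assumes each comment has .id and .reply_to attributes and that
            all_comments is already sorted by creation date.
            """
            children_by_parent = defaultdict(list)
            roots = []
            for comment in all_comments:
                if comment.reply_to is None:
                    roots.append(comment)
                else:
                    children_by_parent[comment.reply_to].append(comment)
            # Date order is preserved because both lists are built in input order.
            return [(root, children_by_parent[root.id]) for root in roots]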

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: This involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: This involves defining two routes in app.yaml and then two separate scripts, one for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: What are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article

  • How can I call a GWT RPC method on a server from a non-GWT (but Java) application?

    - by hansi
    I have a regular Java application and want to access a GWT RPC endpoint. Any idea how to make this happen? My GWT application runs on GAE/J and I could use REST, for example, but I already have the GWT RPC endpoints and don't want to build another façade. Yes, I have seen http://stackoverflow.com/questions/1330318/invoke-a-gwt-rpc-service-from-java-directly, but that discussion goes in a different direction.

    Read the article

  • Loading datasets from the datastore and merging them into a single dictionary: resource problem

    - by fredrik
    Hi, I have a product database that contains products, parts and labels for each part based on language codes. The problem I'm having, and haven't gotten around, is the huge amount of resources used to get the different datasets and merge them into a dict that suits my needs. The products in the database are based on a number of parts of certain types (i.e. color, size), and each part has a label for each language. I created 4 different models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels. I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 products, 3 types, 3 parts per type, and 2 languages, and the request takes a whopping 5500 ms to generate.

        for product in productData:
            productDict = {}
            typeDict = {}
            productDict['productName'] = product.name

            cache_key = 'productparts_%s' % (slugify(product.key()))
            partData = memcache.get(cache_key)

            if not partData:
                for type in typeData:
                    typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

                ## Start of problem lines ##
                for defaultPart in product.defaultPartsData:
                    for label in labelsForLangCode:
                        if label.key() in defaultPart.partLabelList:
                            typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel

                for optionalPart in product.optionalPartsData:
                    for label in labelsForLangCode:
                        if label.key() in optionalPart.partLabelList:
                            typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
                ## End of problem lines ##

                memcache.add(cache_key, typeDict, 500)
                partData = memcache.get(cache_key)

            productDict['parts'] = partData
            productList.append(productDict)

    I guess the problem is that the number of for loops is too high and I have to iterate over the same data over and over again. labelsForLangCode gets all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.Key), and the same goes for all labels for a part. The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts and show a selector for the optional ones. The defaultPartsData and optionalPartsData are properties in the Product model that look like this:

        @property
        def defaultPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

        @property
        def optionalPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)

    When the completed dict is in memcache it works smoothly, but isn't memcache reset if the application goes into hibernation? I would also like to show the page to a first-time user (memcache empty) without the enormous delay. And as I said above, this is only a small number of parts/products; what will the result be with 30 products with 100 parts each? Is one solution to create a scheduled task that caches it in memcache every hour? Is this efficient? I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution. ..fredrik
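
    A minimal sketch (not from the question) of one way to replace the nested label loops: build a lookup of labels keyed by datastore key once, then resolve each part directly. It reuses the names from the question (labelsForLangCode, typeDict, product) and assumes each part has at most one label per language code.

        # Build the lookup once per request instead of scanning labels per part.
        labels_by_key = dict((label.key(), label) for label in labelsForLangCode)

        def part_label(part):
            # Return the label text for `part` in the current language, or None.
            for key in part.partLabelList:
                label = labels_by_key.get(key)
                if label is not None:
                    return label.partLangLabel
            return None

        for defaultPart in product.defaultPartsData:
            text = part_label(defaultPart)
            if text is not None:
                typeDict[defaultPart.type.typeId]['default'] = text

        for optionalPart in product.optionalPartsData:
            text = part_label(optionalPart)
            if text is not None:
                typeDict[optionalPart.type.typeId]['optional'].append(text)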

    Read the article

  • Drupal 7: toward the final release on January 5, the third and last Release Candidate of the open source CMS is available

    Drupal 7: toward a final release on January 5. The third and last RC of the open source content management system is available. Update of 27/12/2010 by Idelways. The Drupal team has just released the third and (presumably) last release candidate of version 7 of the open source content management system. Dries Buytaert, the creator of the CMS, has also announced on his blog that January 5 will be the release date of the final version, and he promises a "gigantic party" to celebrate the event. With this third RC, the CMS is back to zero critical issues...

    Read the article

  • Extract files from zip folder and store these files in blobstore

    - by Eng_Engineer
    I want to upload a zip archive via a file input in a form, then extract the contents of the uploaded zip and store those files in the blobstore so they can be downloaded later, after putting them in one folder. The problem is that I can't deal with (read) the zip archive directly. I tried this:

        form = cgi.FieldStorage()
        file_upload = form['file']
        zip1 = file_upload.filename
        zipstream = StringIO.StringIO(zip1.read())

    But the problem is still that I can't read the zip this way. I also tried to read the zip archive directly, like this:

        z1 = zipfile.ZipFile(zip1, "r")

    But there was an error with this approach too. Please, can anyone help me? Thanks in advance.
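
    A minimal sketch (not from the question) of reading the uploaded archive: the likely issue is that file_upload.filename is only the name string, so the bytes have to come from file_upload.value (or the file-like object file_upload.file) instead. Writing to the blobstore is left as a placeholder.

        import cgi
        import StringIO
        import zipfile

        form = cgi.FieldStorage()
        file_upload = form['file']

        # file_upload.filename is just the file's name; the uploaded bytes live in
        # file_upload.value (a string) or file_upload.file (a file-like object).
        zipstream = StringIO.StringIO(file_upload.value)
        archive = zipfile.ZipFile(zipstream, 'r')

        for name in archive.namelist():
            data = archive.read(name)  # raw bytes of one member of the archive
            # ... write `data` to the blobstore here (left as a placeholder) ...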

    Read the article

  • What license do I need to use gSOAP in a commercial product?

    - by Lawrence Johnston
    I'd like to use gSOAP in a product which will be distributed commercially. The use I have in mind is what I suspect is a pretty typical workflow—generating a header using wsdl2h, consuming the header with soapcpp2, and then calling the functions generated in the stub in my code. I'm not 100 percent sure which license(s) I need to use to be able to do this. Has anybody here already gone through this and figured out the solution?

    Read the article

  • Is it possible to bulk load an NDB child Entity in GAE?

    - by hmacread
    At some point in the future I may need to bulk load migration data (i.e. from a CSV). Has anyone had exceptions raised doing the following? Also, is there any change in behaviour if the ndb.put_multi() function is used?

        from google.appengine.ext import ndb

        while True:
            id, name = read_csv_row(readline())
            if not id:
                break
            x = X(parent=ndb.Key('Y', 'static_id'))
            x.id, x.name = id, name
            x.put()

        class X(ndb.Model):
            id = ndb.StringProperty()
            name = ndb.StringProperty()

        class Y(ndb.Model):
            pass

        def read_csv_row(line):
            """returns tuple"""
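
    A minimal sketch (not from the question) of batching the same writes with ndb.put_multi(); `rows` and BATCH_SIZE are placeholder assumptions, and X and Y are the models defined in the question.

        from google.appengine.ext import ndb

        BATCH_SIZE = 500  # assumed batch size; tune to taste

        batch = []
        for id, name in rows:  # `rows` stands in for the parsed CSV rows
            x = X(parent=ndb.Key('Y', 'static_id'))
            x.id, x.name = id, name
            batch.append(x)
            if len(batch) >= BATCH_SIZE:
                ndb.put_multi(batch)  # one batched datastore call instead of many put()s
                batch = []
        if batch:
            ndb.put_multi(batch)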

    Read the article

  • Frameworks for coding a PHP library?

    - by ajsie
    There are plenty of frameworks for coding MVC web applications. This time I'm going to code a library (think of Doctrine or Solr) with a bunch of class files: you just include a bootstrap or a class file and you are ready to use my classes. I have never tried to code a library before and intend to code one for learning purposes, so that I can use the various design patterns I have learned. Are there any great frameworks for this? How should I organize the different class files, where can I put configuration files, and so on? Tutorials or other information would be great too. Thanks.

    Read the article

  • Can't understand the url function used in the Google Task Queue API documentation

    - by Bunny Rabbit
        import com.google.appengine.api.labs.taskqueue.Queue;
        import com.google.appengine.api.labs.taskqueue.QueueFactory;
        import static com.google.appengine.api.labs.taskqueue.TaskOptions.Builder.*;

        // ...
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(url("/worker").param("key", key));

    In the code example given on the Google Task Queue documentation page, I can't understand the url("/worker") function they are calling in the queue.add() invocation.

    Read the article

  • Using #DEBUG macros in spark

    - by Quintin Par
    I have some scripts which need to be included only in the release version, stuff like Google Analytics, Quantserve, etc. The typical way in the ASP.NET MVC world is to wrap them in a #if DEBUG ... #endif block. How do I do it the Spark-ish way? Something like:

        <script if='x==5' type="text/javascript">

    Read the article

  • How can I implement "real time" messaging on Google AppEngine?

    - by Freed
    I'm creating a web application on Google AppEngine where I want the user to be notified as quickly as possible after certain events occur. The problem is similar to, say, a chat server in that I need something happening on one connection (someone writing a message in a chat room) to propagate to a number of other connections (other people in that chat room get the message). To get speedy updates from the server to the client I'm planning on using long polling with XmlHttpRequest, hoping that AppEngine won't interfere other than possibly restricting the timeout. The real problem, however, is efficient notification between connections on AppEngine. Is there any support for this type of cross-connection notification on AppEngine that does not involve busy-waiting? The only tools I can think of to do this at all are either the data storage (slow) or memcache (unreliable), and neither of them would let me avoid busy-waiting. Note: I know about XMPP support on AppEngine. It's related, but I want a browser-based solution; sending messages to the users over XMPP is not an option.

    Read the article

  • Can't iterate over a nested dict in Django

    - by fredrik
    Hi, I'm trying to iterate over a nested dict list. The first level works fine, but the second level is treated like a string, not a dict. In my template I have this:

        {% for product in Products %}
            <li>
                <p>{{ product }}</p>
                {% for partType in product.parts %}
                    <p>{{ partType }}</p>
                    {% for part in partType %}
                        <p>{{ part }}</p>
                    {% endfor %}
                {% endfor %}
            </li>
        {% endfor %}

    It's the {{ part }} that just lists one character at a time based on partType, and it seems that it's treated like a string. I can, however, reach all the dicts via dot notation, but not with a for loop. The current output looks like this:

        Color
        C
        o
        l
        o
        r
        Style
        S
        .....

    The Products object looks like this in the log:

        [{'product': <models.Products.Product object at 0x1076ac9d0>,
          'parts': {u'Color': {'default': u'Red', 'optional': [u'Red', u'Blue']},
                    u'Style': {'default': u'Nice', 'optional': [u'Nice']},
                    u'Size': {'default': u'8', 'optional': [u'8', u'8.5']}}}]

    What I'm trying to do is pair together a dict/list for a product from a number of different SQL queries. The web handler looks like this:

        typeData = Products.ProductPartTypes.all()
        productData = Products.Product.all()
        langCode = 'en'

        productList = []
        for product in productData:
            typeDict = {}
            productDict = {}
            for type in typeData:
                typeDict[type.typeId] = { 'default' : '', 'optional' : [] }

            productDict['product'] = product
            productDict['parts'] = typeDict

            defaultPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.defaultParts)
            optionalPartsData = Products.ProductParts.gql('WHERE __key__ IN :key', key = product.optionalParts)

            for defaultPart in defaultPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = defaultPart.partLabelList, langCode = langCode).get()
                productDict['parts'][defaultPart.type.typeId]['default'] = label.partLangLabel

            for optionalPart in optionalPartsData:
                label = Products.ProductPartLabels.gql('WHERE __key__ IN :key AND partLangCode = :langCode', key = optionalPart.partLabelList, langCode = langCode).get()
                productDict['parts'][optionalPart.type.typeId]['optional'].append(label.partLangLabel)

            productList.append(productDict)

        logging.info(productList)

        templateData = {
            'Languages' : Settings.Languges.all().order('langCode'),
            'ProductPartTypes' : typeData,
            'Products' : productList
        }

    I've tried making the dict in a number of different ways: first making a list, then a dict, using tuples, anything I could think of. Any help is welcome! Bonus: if someone has another approach to the SQL queries, that is more than welcome. I feel that it's kind of stupid to run that many queries. What is happening is that each product part has a different label based on langCode. ..fredrik
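
    A minimal sketch (not part of the question) of one common workaround: a Django template {% for %} over a dict yields its keys, which are strings and therefore iterate character by character, so the nested dict can be converted to lists of (key, value) pairs in the handler before it reaches the template. The names below are the ones from the question.

        # Before building templateData, flatten the inner dicts into ordered
        # (typeId, info) pairs so the template's nested loops see pairs, not strings.
        for productDict in productList:
            productDict['parts'] = sorted(productDict['parts'].items())

        # The template's inner loop can then unpack each pair, for example
        # (assuming a Django version whose {% for %} tag supports unpacking):
        #   {% for typeId, info in product.parts %} ... {{ info.default }} ...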

    Read the article

  • How can I obfuscate my Perl script to make it difficult to reverse engineer?

    - by codaddict
    I've developed a Perl script that contains confidential business logic. I have to give this script to another Perl coder to test it in his environment, and he will definitely try to extract the logic in my program. So I want to make my script impossible, or at least very, very hard, to understand. I've tried a few sites like liraz, but they did not work for me: the encoded Perl script does not work the same as the original one.

    Read the article

  • Codename One: the open source Java toolkit for cross-platform mobile development from a single code base is out

    Windows 8 support for Codename One, the open source toolkit for cross-platform mobile development from a single Java code base. Codename One, the open source platform for mobile development in Java, now supports Windows Phone and Windows 8 tablets. Developed by two former Sun Microsystems engineers, Codename One is a lightweight, Java-based ecosystem designed to let developers create native applications for multiple mobile and tablet platforms using a single code base. The beta of the toolkit was presented last July and allowed development for iOS, Android...

    Read the article

  • Problem formatting in Eclipse

    - by fastcodejava
    I have a method in Eclipse as below:

        public String toString() {
            return "HouseVo [ " + "Name : " + this.name == null ? "" : this.name
                    + "Address : " + this.address == null ? "" : this.address;
        }

    When I format it, it becomes:

        return "HouseVo [ " + "Name : " + this.name == null ? "" : this.name + "Address : " + this.address == null ? "" : this.address;

    Any way to fix it so it formats correctly?

    Read the article

  • J2ObjC: Google's tool for porting Java to Objective-C has just been published, and it is open source

    Google releases J2ObjC, an open source tool for converting Java code to Objective-C. Good news for Java developers who want to target iOS without having to take up Objective-C: Google has just published, on its blog dedicated to open source tools, an application for converting Java code into Objective-C code. The J2ObjC project aims to let developers easily share code that is not used for the user interface (business logic, data access, etc.) between Android applications, web applications (which use the GWT server) and iOS. J2ObjC converts Java classes into Objective-C classes that u...

    Read the article
