Search Results

Search found 12900 results on 516 pages for 'rules engine'.

Page 136/516

  • Google AppEngine + Local JUnit Tests + Jersey framework + Embedded Jetty

    - by xamde
    I use Google App Engine for Java (GAE/J). On top of it I use the Jersey REST framework. Now I want to run local JUnit tests. Each test sets up the local GAE development environment ( http://code.google.com/appengine/docs/java/tools/localunittesting.html ), launches an embedded Jetty server, and then fires requests to the server via HTTP and checks the responses. Unfortunately, the Jersey/Jetty combo spawns new threads, while GAE expects only one thread to run. In the end I end up with either no datastore inside the Jersey resources, or multiple threads each holding a different datastore. As a workaround I initialise the GAE local environment only once, keep it in a static variable, and inside the GAE resource I add checks of the form "this thread has no dev environment? Re-use the static one". These checks should of course only run inside JUnit tests (which is something I asked about before: "How can I find out if code is running inside a JUnit test or not?" - I'm not allowed to post the link directly here :-|)

    Read the article

  • jQuery + sexy alerts + javascript menus

    - by BibiBuBu
    Good day! I am working on a page which has JavaScript menus (I don't know how they were made; I think they are in compressed form) and jQuery sexy alerts. My problem is that whenever I try to add any kind of jQuery calendar, it won't work :( I have tried different possibilities found here, but so far in vain. Can somebody guide me, or suggest a calendar that is compatible with the other frameworks on the page? Help is appreciated. Thanks

    Read the article

  • Appengine filter inequality and ordering fails

    - by davezor
    I think I'm overlooking something simple here; I can't imagine this is impossible to do. I want to filter by a datetime attribute and then order the result by a ranking integer attribute. When I try this:

        query.filter("submitted >=", thisweek).order("ranking")

    I get the following:

        BadArgumentError: First ordering property must be the same as inequality filter property, if specified for this query; received ranking, expected submitted

    Huh? What am I missing? Thanks.
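
    A minimal sketch of two ways around this restriction, using a hypothetical Submission model with submitted and ranking properties (the names are illustrative, not from the question): the datastore requires the first sort order to match the inequality-filter property, so either order on submitted first and ranking second, or sort the fetched results in memory.

        import datetime
        from google.appengine.ext import db

        class Submission(db.Model):            # hypothetical model for illustration
            submitted = db.DateTimeProperty()
            ranking = db.IntegerProperty()

        thisweek = datetime.datetime.now() - datetime.timedelta(days=7)

        # Option 1: satisfy the rule by ordering on the inequality property first,
        # then on ranking as a secondary sort.
        query = (Submission.all()
                 .filter("submitted >=", thisweek)
                 .order("submitted")
                 .order("ranking"))

        # Option 2: filter in the datastore, then sort purely by ranking in memory.
        results = Submission.all().filter("submitted >=", thisweek).fetch(1000)
        results.sort(key=lambda s: s.ranking)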

    Read the article

  • What does this `^` mean here in Solr?

    - by Rahul Mehta
    I am confused here but I want to clear my doubt. I think it is a stupid question but I want to know. The linked Solr wiki page says: "Use a TokenFilter that outputs two tokens (one original and one lowercased) for each input token. For queries, the client would need to expand any search terms containing upper case characters to two terms, one lowercased and one original. The original search term may be given a boost, although it may not be necessary given that a match on both terms will produce a higher score."

        text:NeXT ==> (text:NeXT^10 OR text:next)

    What does this ^ mean here? http://wiki.apache.org/solr/SolrRelevancyCookbook#Relevancy_and_Case_Matching
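
    For context, the ^ in Lucene/Solr query syntax is a boost factor: text:NeXT^10 multiplies that clause's contribution to the relevancy score by 10. A small illustrative sketch (in Python, names made up) of the client-side expansion the wiki describes:

        def expand_term(term, field="text", boost=10):
            # If the term contains upper-case characters, search both the boosted
            # original and the lowercased form; otherwise leave it alone.
            if term != term.lower():
                return "(%s:%s^%d OR %s:%s)" % (field, term, boost, field, term.lower())
            return "%s:%s" % (field, term)

        print(expand_term("NeXT"))   # -> (text:NeXT^10 OR text:next)
        print(expand_term("next"))   # -> text:next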

    Read the article

  • What is the difference between .get() and .fetch(1)

    - by AutomatedTester
    I have written an app and part of it uses a URL parser to get certain data in a REST-ish manner. So if you put /foo/bar as the path it will find all the bar items, and if you put /foo it will return all items below foo. So my app has a query like

        data = Paths.all().filter('path =', self.request.path).get()

    which works brilliantly. Now I want to send this to the UI using templates:

        {% for datum in data %}
        <div class="content">
          <h2>{{ datum.title }}</h2>
          {{ datum.content }}
        </div>
        {% endfor %}

    When I do this I get a "data is not iterable" error. So I updated the Django template to {% for datum in data.all %}, which now appears to pull more data than I was giving it somehow - it shows all data in the datastore, which is not ideal. So I removed the .all from the Django template and changed the datastore query to

        data = Paths.all().filter('path =', self.request.path).fetch(1)

    which now works as I intended. The documentation says "The db.get() function fetches an entity from the datastore for a Key (or list of Keys)." So my question is: why can I iterate over a query result when it comes from fetch() but not from get()? Where has my understanding gone wrong?
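
    A minimal sketch of the difference, reusing the questioner's Paths model: get() returns a single model instance (or None), while fetch(n) always returns a list, which is what the template's {% for %} loop needs.

        query = Paths.all().filter('path =', '/foo/bar')

        datum = query.get()    # a single Paths entity, or None if nothing matched
        data = query.fetch(1)  # a list containing at most one Paths entity

        for d in data:         # iterating over the list from fetch() works
            print(d.title)

        if datum is not None:  # the single entity from get() is used directly
            print(datum.title)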

    Read the article

  • Most efficient way to fetch and output Content with 2-Level Comments?

    - by awegawef
    I have some content with up to two levels of replies. I am wondering what the most efficient way is to fetch and output the replies. I should note that I am planning on storing the comments with fields content_id and reply_to, where reply_to refers to which comment it is in reply to (if any). Any criticism of this design is welcome. In pseudo-code (ish), my first attempt would be (see the runnable sketch below):

        # in outputting content CONTENT_ID
        all_comments = fetch all comments where content_id == CONTENT_ID
        root_comments = filter all_comments with reply_to == None
        children_comments = filter all_comments with reply_to != None
        output_comments = list()
        for each root_comment:
            children = filter children_comments, reply_to == root_comment.id
            output_comments.append( (root_comment, children) )
        send output_comments to template

    Is this the best way to do this? Thanks in advance. Edit: On second thought, I'll want to preserve date order on the comments, so I'll have to do this a bit differently, or at least sort the comments afterward.
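
    A runnable Python sketch of the same grouping, assuming each comment object has id, reply_to, and created attributes (created is a hypothetical date field used to preserve order); it buckets replies by parent in a single pass:

        from collections import defaultdict

        def group_comments(all_comments):
            # Sort once so roots and replies both come out in date order.
            ordered = sorted(all_comments, key=lambda c: c.created)
            replies_by_parent = defaultdict(list)
            roots = []
            for comment in ordered:
                if comment.reply_to is None:
                    roots.append(comment)
                else:
                    replies_by_parent[comment.reply_to].append(comment)
            return [(root, replies_by_parent[root.id]) for root in roots]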

    Read the article

  • Loading datasets from the datastore and merging them into a single dictionary - resource problem

    - by fredrik
    Hi, I have a product database that contains products, parts, and labels for each part based on language codes. The problem I'm having and haven't got around is the huge amount of resources used to get the different datasets and merge them into a dict to suit my needs. The products in the database are based on a number of parts of a certain type (i.e. color, size), and each part has a label for each language. I created 4 different models for this: Products, ProductParts, ProductPartTypes and ProductPartLabels. I've narrowed it down to about 10 lines of code that seem to generate the problem. Currently I have 3 Products, 3 Types, 3 parts for each type, and 2 languages, and the request takes a whopping 5500ms to generate.

        for product in productData:
            productDict = {}
            typeDict = {}
            productDict['productName'] = product.name
            cache_key = 'productparts_%s' % (slugify(product.key()))
            partData = memcache.get(cache_key)
            if not partData:
                for type in typeData:
                    typeDict[type.typeId] = { 'default' : '', 'optional' : [] }
                ## Start of problem lines ##
                for defaultPart in product.defaultPartsData:
                    for label in labelsForLangCode:
                        if label.key() in defaultPart.partLabelList:
                            typeDict[defaultPart.type.typeId]['default'] = label.partLangLabel
                for optionalPart in product.optionalPartsData:
                    for label in labelsForLangCode:
                        if label.key() in optionalPart.partLabelList:
                            typeDict[optionalPart.type.typeId]['optional'].append(label.partLangLabel)
                ## end problem lines ##
                memcache.add(cache_key, typeDict, 500)
                partData = memcache.get(cache_key)
            productDict['parts'] = partData
            productList.append(productDict)

    I guess the problem is that the number of for loops is too high and that I have to iterate over the same data over and over again. labelsForLangCode gets all labels from ProductPartLabels that match the current langCode. All parts for a product are stored in a db.ListProperty(db.Key), and the same goes for all labels for a part. The reason I need the somewhat complex dict is that I want to display all data for a product with its default parts and show a selector for the optional ones. The defaultPartsData and optionalPartsData are properties in the Product model that look like this:

        @property
        def defaultPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.defaultParts)

        @property
        def optionalPartsData(self):
            return ProductParts.gql('WHERE __key__ IN :key', key = self.optionalParts)

    When the completed dict is in the memcache it works smoothly, but isn't the memcache reset if the application goes into hibernation? Also I would like to show the page for a first-time user (memcache empty) without the enormous delay. And as I said above, this is only a small number of parts/products - what will the result be when there are 30 products with 100 parts? Is one solution to create a scheduled task to cache it in the memcache every hour? Is that efficient? I know this is a lot to take in, but I'm stuck. I've been at this for about 12 hours straight and can't figure out a solution. ..fredrik
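
    One hedged sketch of how the problem loops could be flattened, keeping the question's names: build a dictionary from label key to label once per request, so each part does a constant-time lookup instead of re-scanning labelsForLangCode (the label_for_part helper is made up for illustration and is meant to slot into the existing loop over products).

        # Build the lookup once per request (or cache it per langCode).
        label_by_key = dict((label.key(), label) for label in labelsForLangCode)

        def label_for_part(part):
            # Return the label text for the first key on the part that matches
            # the current langCode, or '' if there is none.
            for key in part.partLabelList:
                label = label_by_key.get(key)
                if label is not None:
                    return label.partLangLabel
            return ''

        for defaultPart in product.defaultPartsData:
            typeDict[defaultPart.type.typeId]['default'] = label_for_part(defaultPart)

        for optionalPart in product.optionalPartsData:
            part_label = label_for_part(optionalPart)
            if part_label:
                typeDict[optionalPart.type.typeId]['optional'].append(part_label)

    Note that defaultPart.type.typeId may also trigger an extra datastore fetch per part if type is a ReferenceProperty; that is worth checking separately.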

    Read the article

  • Saving a 'Date' using DataMapper on AppEngine+JRuby

    - by Ryan Montgomery
    I have a model as follows:

        class Total
          include DataMapper::Resource
          property :id, Serial
          property :amount, Float, :default => 0.00
          property :day, Date
          belongs_to :calendar
        end

    I am trying to select a specific Total from the data-store:

        class Calendar
          include DataMapper::Resource
          property :id, Serial
          property :name, String
          has n, :totals

          def get_total_for(date)
            return Total.first(:day => date, :calendar => self)
          end
        end

    When I call get_total_for(DateTime.now) I receive the following error on the call to the data-store:

        java.lang.IllegalArgumentException: day: org.jruby.RubyObject is not a supported property type.

    Is Date not allowed for use on AppEngine? Is this a DataMapper issue? I have tried changing the name of the :day property to something else (hoping it was just a name conflict) but it doesn't seem to matter. Thanks for any help you can provide.

    Read the article

  • Any strategies for assessing the trade-off between CPU loss and memory gain from compression of data

    - by indiehacker
    Are very large TextProperties a burden? Should they be compressed? Say I have information stored in two attributes of type TextProperty in my datastore entities. The strings are always the same length of 65,000 characters and have lots of repeating integers, a sample appearing as follows:

        entity.pixel_idx = 0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,5,5,5,5,5,5,5,5,5,5,5,5....etc.
        entity.pixel_color = 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,...etc.

    The above could be represented using much less storage by compressing, say, using only each integer and the length of its run ('0,8' for '0,0,0,0,0,0,0,0'), but then it takes time and CPU to compress and decompress. Any general ideas? Are there tricks for testing different approaches to the problem?
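
    A minimal sketch of one way to test the trade-off locally, assuming the data looks like the sample above: time zlib compression and decompression of a synthetic string and compare the sizes. The payload below is made up purely for illustration.

        import time
        import zlib

        def measure(payload):
            # Return (raw bytes, compressed bytes, compress ms, decompress ms).
            start = time.time()
            packed = zlib.compress(payload)
            compress_ms = (time.time() - start) * 1000.0
            start = time.time()
            zlib.decompress(packed)
            decompress_ms = (time.time() - start) * 1000.0
            return len(payload), len(packed), compress_ms, decompress_ms

        # Synthetic stand-in for the pixel_idx string: long runs of repeated integers.
        sample = ",".join(["0"] * 20000 + ["1"] * 20000 + ["5"] * 25000)
        print(measure(sample.encode("utf-8")))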

    Read the article

  • GAE error: AttributeError: 'NoneType' object has no attribute 'user_is_member'

    - by zjm1126
    My code is:

        class Thread(db.Model):
            members = db.StringListProperty()

            def user_is_member(self, user):
                return str(user) in self.members

    and

        thread = Thread.get(db.Key.from_path('Thread', int(id)))
        is_member = thread.user_is_member(user)

    but the error is:

        Traceback (most recent call last):
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 511, in __call__
            handler.get(*groups)
          File "D:\Program Files\Google\google_appengine\google\appengine\ext\webapp\util.py", line 62, in check_login
            handler_method(self, *args)
          File "D:\zjm_code\forum_blog_gae\main.py", line 222, in get
            is_member = thread.user_is_member(user)
        AttributeError: 'NoneType' object has no attribute 'user_is_member'

    Why? Thanks
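
    The traceback means thread is None at the point of the call, i.e. Thread.get() found no entity for that key. A hedged sketch of guarding against that case inside the handler (the 404 handling is just one possible choice):

        thread = Thread.get(db.Key.from_path('Thread', int(id)))
        if thread is None:
            # No Thread entity exists with this numeric id; handle it explicitly
            # instead of calling methods on None.
            self.error(404)
            self.response.out.write('thread %s not found' % id)
        else:
            is_member = thread.user_is_member(user)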

    Read the article

  • How can I call a GWT RPC method on a server from a non-GWT (but Java) application?

    - by hansi
    I have a regular Java application and want to access a GWT RPC endpoint. Any idea how to make this happen? My GWT application runs on GAE/J, and I could use REST for example, but I already have the GWT RPC endpoints and don't want to build another façade. Yes, I have seen http://stackoverflow.com/questions/1330318/invoke-a-gwt-rpc-service-from-java-directly, but that discussion goes in a different direction.

    Read the article

  • How to set the Content-Type automatically when I download the data that I uploaded

    - by zjm1126
    This is my code:

        import os
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template
        from google.appengine.ext.webapp.util import run_wsgi_app
        from google.appengine.ext import db
        #from login import htmlPrefix,get_current_user

        class MyModel(db.Model):
            blob = db.BlobProperty()

        class BaseRequestHandler(webapp.RequestHandler):
            def render_template(self, filename, template_args=None):
                if not template_args:
                    template_args = {}
                path = os.path.join(os.path.dirname(__file__), 'templates', filename)
                self.response.out.write(template.render(path, template_args))

        class upload(BaseRequestHandler):
            def get(self):
                self.render_template('index.html',)
            def post(self):
                file = self.request.get('file')
                obj = MyModel()
                obj.blob = db.Blob(file.encode('utf8'))
                obj.put()
                self.response.out.write('upload ok')

        class download(BaseRequestHandler):
            def get(self):
                #id = self.request.get('id')
                o = MyModel.all().get()
                #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
                self.response.out.write(o)

        application = webapp.WSGIApplication(
            [
                ('/?', upload),
                ('/download', download),
            ],
            debug=True
        )

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    My index.html is:

        <form action="/" method="post">
          <input type="file" name="file" />
          <input type="submit" />
        </form>

    and it shows:

        <__main__.MyModel object at 0x02506830>

    but I don't want to see this; I want to download the file. How do I change my code to do that? Thanks.

    Updated - it is OK now:

        class upload(BaseRequestHandler):
            def get(self):
                self.render_template('index.html',)
            def post(self):
                file = self.request.get('file')
                obj = MyModel()
                obj.blob = db.Blob(file)
                obj.put()
                self.response.out.write('upload ok')

        class download(BaseRequestHandler):
            def get(self):
                #id = self.request.get('id')
                o = MyModel.all().order('-').get()
                #self.response.out.write(''.join('%s: %s <br/>' % (a, getattr(o, a)) for a in dir(o)))
                self.response.headers['Content-Type'] = "image/png"
                self.response.out.write(o.blob)

    The new question is: uploading a 'png' file works, but when I upload a rar file I get an error. So how do I set the Content-Type automatically, and what is the Content-Type of a 'rar' file? Thanks
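
    A hedged sketch of one way to set the Content-Type automatically, assuming you also store the original filename on the entity (the filename property below is an addition, not in the original code): guess the MIME type from the filename with Python's mimetypes module and fall back to a generic binary type. RAR files are not always in the mimetypes map; application/x-rar-compressed is commonly used, and application/octet-stream is a safe fallback.

        import mimetypes

        class MyModel(db.Model):
            blob = db.BlobProperty()
            filename = db.StringProperty()   # hypothetical extra property, set at upload time

        class download(BaseRequestHandler):
            def get(self):
                o = MyModel.all().get()
                # Guess the MIME type from the stored filename; default to a
                # generic binary type when the extension is unknown.
                guessed = mimetypes.guess_type(o.filename or '')[0]
                self.response.headers['Content-Type'] = guessed or 'application/octet-stream'
                self.response.out.write(o.blob)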

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: This involves using one "gateway" route in app.yaml and then choosing the RequestHandler in the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: This involves defining two routes in app.yaml and then two separate scripts for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: What are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article

  • Eclipse + AppEngine =? autocomplete

    - by Brandon Watson
    I was doing some beginner App Engine development on a Windows box and installed Eclipse for that. I liked the autocompletion I got for the objects and functions. I moved my dev environment over to my MacBook and installed Eclipse Ganymede, along with the App Engine SDK and Eclipse plug-in. However, when I am typing out code now, the autocomplete isn't functioning. Did I miss a step?

    UPDATE: Just to add to this, the line

        import cgi

    appears to give me what I need - when I type "cgi." I get all of the autocomplete. However, the lines

        from google.appengine.api import users
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app
        from google.appengine.ext import db

    don't give me any autocomplete. If I type "users." there is no autocomplete.

    Read the article

  • ClassNotFoundException: org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider

    - by Kiva
    I am trying to start a Google App Engine application in Eclipse. I have the Google plugin and I set the SDK for my application. But when I start it, I get the following error:

        java.lang.ClassNotFoundException: org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider

    However, this class is present in the SDK, which is on my classpath. Why doesn't App Engine find this class? Thanks.

    Read the article

  • task strategies for handling HardDeadlineExceededError

    - by Stevko
    I've got a number of tasks/servlets that are hitting the HardDeadlineExceededError, which leaves everything hanging in a 'still executing' state. The work being done can easily exceed the 29-second threshold. I try to catch the DeadlineExceededException and the base Exception in order to save the exit state, but neither of these exceptions is being caught... Is there a way to determine which tasks are in the queue or currently executing? Are there any other strategies for dealing with this situation?

    Read the article

  • Best Tools for Software Maintenance Engineering

    - by Pev
    Yes, the dreaded 'M' word. You've got a workstation, source control, and half a million lines of source code that you didn't write. The documentation was out of date the moment it was approved and published. The original developers are LTAO, at the next project/startup/loony bin, and not answering email. What are you going to do? {favourite editor} and grep will get you started on your spelunking through the gnarled guts of the code base, but what other tools should be in the maintenance engineer's toolbox? To start the ball rolling: I don't think I could live without Source Insight for C/C++ spelunking. (DISCLAIMER: I don't work for 'em.)

    Read the article

  • Can I db.put models without db.getting them first?

    - by Liron
    I tried to do something like

        ss = Screenshot(key=db.Key.from_path('myapp_screenshot', 123), name='flowers')
        db.put([ss, ...])

    It seems to work on my dev_appserver, but on live I get this traceback:

        E 05-07 09:50PM 19.964 File "/base/data/home/apps/quixeydev3/12.341796548761906563/common/appenginepatch/appenginepatcher/patch.py", line 600, in put
        E 05-07 09:50PM 19.964   result = old_db_put(models, *args, **kwargs)
        E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1278, in put
        E 05-07 09:50PM 19.964   keys = datastore.Put(entities, rpc=rpc)
        E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 284, in Put
        E 05-07 09:50PM 19.965   raise _ToDatastoreError(err)
        E 05-07 09:50PM 19.965 InternalError: the new entity or index you tried to insert already exists

    I happen to know just the ID of an existing Screenshot entity I want to update; that's why I was manually constructing its key. Am I doing it wrong?

    Read the article
