Search Results

Search found 21343 results on 854 pages for 'pass by reference'.


  • Zend_Validate_Abstract custom validator not displaying correct error messages.

    - by Jeremy Dowell
    I have two text fields in a form, and I need to make sure that neither is empty and that they don't contain the same string. The custom validator I wrote extends Zend_Validate_Abstract and works correctly in that it passes back the correct error code, in this case either isEmpty or isMatch. However, the documentation says to use addErrorMessages to define the error messages to be displayed, so I attached this to the form field:

        ->addErrorMessages(array("isEmpty" => "foo", "isMatch" => "bar"));

    According to everything I've read, if I return isEmpty from isValid() my error message should read "foo", and if I return isMatch it should read "bar". That is not the case I'm running into, though. If isValid() returns false, then no matter what I pass to $this->_error(), the message displayed is "foo", or whatever I have at index 0 of the error messages array. If I don't define error messages at all, the display falls back to the error code I passed back, and that one is correct, depending on what I returned.

    How do I catch the error code and display the correct error message in my form? The fix I have implemented, until I figure it out properly, is to pass back the full message as the error code from the custom validator. That works in this instance, but the error message is specific to this page and doesn't really allow for reuse of the code.

    Things I have already tried: validator chaining, so that my custom validator only checks for matches:

        ->setRequired(true)
        ->addValidator("NotEmpty")
        ->addErrorMessage("URL May Not Be Empty")
        ->addValidator($customValidator)
        ->addErrorMessage("X and Y urls may not be the same")

    But again, if either throws an error, the last error message to be set is displayed, regardless of what the error truly is. I'm not entirely sure where to go from here. Any suggestions?
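
    A sketch of one way out, assuming the fields are validated in form context (the 'otherUrl' context key below is hypothetical): keep the message templates on the validator itself, keyed by the error codes it raises, and pass breakChainOnFailure = true so that only the failing validator's own message is displayed.

        class My_Validate_NotSame extends Zend_Validate_Abstract
        {
            const IS_MATCH = 'isMatch';

            // Messages keyed by error code live on the validator,
            // so they travel with it wherever it is reused
            protected $_messageTemplates = array(
                self::IS_MATCH => 'X and Y urls may not be the same',
            );

            public function isValid($value, $context = null)
            {
                if (is_array($context) && isset($context['otherUrl'])
                        && $value === $context['otherUrl']) {
                    $this->_error(self::IS_MATCH);
                    return false;
                }
                return true;
            }
        }

        // breakChainOnFailure = true stops at the first failing validator,
        // so each failure reports its own message instead of the last one set
        $element->setRequired(true)
                ->addValidator('NotEmpty', true)
                ->addValidator(new My_Validate_NotSame(), true);

    With the templates owned by the validator, the page-specific wording lives in one reusable place instead of in addErrorMessages on each element.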


  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (the ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF, and similarly to allow the client to bind to the resulting data (DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand whether it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA Services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeals. The options I'm considering include:

    1. Using WCF RIA Services and passing the data over the network directly in EAV form (i.e. a Dictionary), handling the binding issue purely on the client side (like this).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and to transform to/from EAV form on the server (a spam preventer stopped me from posting a URL here; I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't yet know what the limitations of that approach might be.

    I'm interested in comments on these approaches as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, which makes me slightly uncomfortable with option 1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
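
    For what it's worth on option 1: DataContractSerializer can carry dictionary payloads, so a minimal EAV-shaped DTO is only a few lines. A sketch, with hypothetical names:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class EntityDto
        {
            [DataMember]
            public Guid Id { get; set; }

            // Per-user custom properties travel as name/value pairs
            // instead of statically typed members
            [DataMember]
            public Dictionary<string, string> Attributes { get; set; }
        }

    The trade-off is exactly the one noted above: every consumer of the service then has to understand the name/value convention rather than a typed contract.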


  • Action Filter Dependency Injection in ASP.NET MVC 3 RC2 with StructureMap

    - by Ben
    Hi, I've been playing with the DI support in ASP.NET MVC 3 RC2. I have implemented session-per-request for NHibernate and need to inject ISession into my "unit of work" action filter. If I reference the StructureMap container directly (ObjectFactory.GetInstance) or use DependencyResolver to get my session instance, everything works fine:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

    However, if I attempt to use my StructureMap filter provider (which inherits FilterAttributeFilterProvider), I have problems committing the NHibernate transaction at the end of the request. It is as if ISession objects are being shared between requests. I see this frequently, because all my images are loaded via an MVC controller, so 20 or so NHibernate sessions are created on a normal page load. I added the following to my action filter:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

        public ISession SessionTest { get; set; }

        public override void OnResultExecuted(System.Web.Mvc.ResultExecutedContext filterContext)
        {
            bool sessionsMatch = (this.Session == this.SessionTest);
        }

    SessionTest is injected using the StructureMap filter provider. I found that on a page with 20 images, sessionsMatch was false for 2-3 of the requests. My StructureMap configuration for session management is as follows:

        For<ISessionFactory>().Singleton().Use(new NHibernateSessionFactory().GetSessionFactory());
        For<ISession>().HttpContextScoped().Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());

    In global.asax I call the following at the end of each request:

        public Global()
        {
            EndRequest += (sender, e) =>
            {
                ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            };
        }

    Is this configuration thread safe? Previously I was injecting dependencies into the same filter using a custom IActionInvoker. This worked fine until MVC 3 RC2, when I started experiencing the problem above, which is why I thought I would try a filter provider instead. Any help would be appreciated.

    Ben

    P.S. I'm using NHibernate 3 RC and the latest version of StructureMap.
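
    A hedged guess at the cause: MVC 3 caches filter attribute instances and can reuse one instance across concurrent requests, so a property injected when the filter is built (like SessionTest) stays pinned to whichever request built it, while the DependencyResolver property re-resolves on every access. If that's what is happening, keeping the resolve-per-call shape for everything the filter touches avoids the sharing; a sketch:

        public class UnitOfWorkAttribute : ActionFilterAttribute
        {
            // Resolved on every access, so each request gets its own
            // HTTP-context-scoped session even if the filter instance
            // itself is shared between requests
            private ISession Session
            {
                get { return DependencyResolver.Current.GetService<ISession>(); }
            }

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                Session.BeginTransaction();
            }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                var tx = Session.Transaction;
                if (tx != null && tx.IsActive)
                    tx.Commit();
            }
        }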


  • getting expat to use .dtd for entity replacement in python

    - by nicolas78
    I'm trying to read in an XML file which looks like this:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!DOCTYPE dblp SYSTEM "dblp.dtd">
        <dblp>
          <incollection>
            <author>Jos&eacute; A. Blakeley</author>
          </incollection>
        </dblp>

    The part that creates the problem is Jos&eacute; A. Blakeley: the parser calls its character handler twice, once with "Jos" and once with " A. Blakeley". Now I understand this may be the correct behaviour if it doesn't know the eacute entity. However, that entity is defined in dblp.dtd, which I have. I don't seem to be able to convince expat to use this file, though. All I have is:

        p = xml.parsers.expat.ParserCreate()
        # tried with and without the following line
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        p.UseForeignDTD(True)
        f = open(dblp_file, "r")
        p.ParseFile(f)

    but expat still doesn't recognize my entity. Why is there no way to tell expat which DTD to use? I've tried putting the file into the same directory as the XML, putting the file into the program's working directory, and replacing the reference in the XML file with an absolute path. What am I missing? Thx.
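
    expat is a non-validating parser and never fetches external DTDs on its own; you have to feed it the DTD yourself from an ExternalEntityRefHandler. A sketch (file names assumed, and the handler ignores the system id for brevity):

        import xml.parsers.expat

        p = xml.parsers.expat.ParserCreate()
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)

        def external_entity_ref(context, base, system_id, public_id):
            # Parse the DTD ourselves so expat learns &eacute; and friends
            ext = p.ExternalEntityParserCreate(context)
            dtd = open("dblp.dtd", "rb")
            try:
                ext.ParseFile(dtd)
            finally:
                dtd.close()
            return 1

        def char_data(data):
            print repr(data)

        p.ExternalEntityRefHandler = external_entity_ref
        p.CharacterDataHandler = char_data

        f = open("dblp.xml", "rb")
        p.ParseFile(f)
        f.close()

    With the handler installed, the DOCTYPE's SYSTEM reference triggers the callback, and the entity definitions become available to the main parse.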


  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object with a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference to an output array could be passed in:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating and managing array space. The algorithms will frequently need large amounts of temporary storage. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5. Allocate extremely large arrays (e.g. double[10000000]) and also provide the algorithm with offsets into the array, so that different threads use different areas of the array independently. Will obviously require some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?
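
    If it helps to see option 3 concretely, the per-thread scratch buffer is only a few lines; a sketch with an assumed buffer size:

        final class TempBuffers {
            // One scratch array per thread: no garbage after warm-up, and no
            // sharing between threads running the algorithm concurrently
            private static final ThreadLocal<double[]> SCRATCH =
                new ThreadLocal<double[]>() {
                    @Override
                    protected double[] initialValue() {
                        return new double[100000];
                    }
                };

            static double[] scratch() {
                return SCRATCH.get();
            }

            private TempBuffers() {}
        }

    The algorithm object stays stateless with respect to temporaries, so a single instance remains safe to call from many threads.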


  • How do I properly implement a property in F#?

    - by Greg D
    Consider my first attempt, a simple type in F# like the following:

        type Test() =
            inherit BaseImplementingNotifyPropertyChangedViaOnPropertyChanged()
            let mutable prop : string = null
            member this.Prop
                with public get () = prop
                and public set value =
                    match value with
                    | _ when value = prop -> ()
                    | _ ->
                        let prop = value
                        this.OnPropertyChanged("Prop")

    Now I test this via C# (this object is being exposed to a C# project, so apparent C# semantics are desirable):

        [TestMethod]
        public void TaskMaster_Test()
        {
            var target = new FTest();
            string propName = null;
            target.PropertyChanged += (s, a) => propName = a.PropertyName;
            target.Prop = "newString";
            Assert.AreEqual("Prop", propName);
            Assert.AreEqual("newString", target.Prop);
            return;
        }

    propName is properly assigned and my F# setter is running, but the second assert fails because the underlying value of prop isn't changed. This sort of makes sense to me, because if I remove mutable from the prop field, no error is generated (and one should be, because I'm trying to mutate the value). I think I must be missing a fundamental concept. What's the correct way to rebind/mutate prop in the Test class so that I can pass my unit test?
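
    For reference, the usual culprit here is the setter body: let prop = value declares a new local binding that shadows the field for the rest of the expression, so the field itself is never touched, whereas prop <- value assigns to it. A sketch of the corrected type:

        type Test() =
            inherit BaseImplementingNotifyPropertyChangedViaOnPropertyChanged()
            let mutable prop : string = null
            member this.Prop
                with get () = prop
                and set value =
                    if value <> prop then
                        prop <- value   // assignment, not a shadowing 'let'
                        this.OnPropertyChanged("Prop")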


  • Error with Property Validation in Form Submission in ASP.NET MVC

    - by Maxim Z.
    I have a simple form on an ASP.NET MVC site that I'm building. The form is submitted, and then I validate that the form fields aren't null, empty, or improperly formatted. However, when I use ModelState.AddModelError() to indicate validation errors from my controller code, I get an error when my view is re-rendered. In Visual Studio, the following line is highlighted as the location of the error:

        <%= Html.TextBox("Email") %>

    The error is the following: NullReferenceException was unhandled by user code - Object reference not set to an instance of an object.

    My complete code for that textbox is the following:

        <p>
            <label for="Email">Your Email:</label>
            <%= Html.TextBox("Email") %>
            <%= Html.ValidationMessage("Email", "*") %>
        </p>

    Here's how I'm doing the validation in my controller:

        try
        {
            System.Net.Mail.MailAddress address = new System.Net.Mail.MailAddress(email);
        }
        catch
        {
            ModelState.AddModelError("Email", "Should not be empty or invalid");
        }
        return View();

    Note: this applies to all of my fields, not just the Email field, whenever they are invalid.
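
    A hedged guess at the cause: when the view is re-rendered, Html.TextBox("Email") reads ModelState["Email"].Value to redisplay what the user typed, and that value is null if the error was added for a key that was never bound. Recording an attempted value before adding the error sidesteps it; a sketch:

        try
        {
            var address = new System.Net.Mail.MailAddress(email);
        }
        catch
        {
            // Give the key an attempted value so the helper has
            // something to re-render instead of a null
            ModelState.SetModelValue("Email",
                new ValueProviderResult(email, email, CultureInfo.CurrentCulture));
            ModelState.AddModelError("Email", "Should not be empty or invalid");
        }
        return View();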


  • win32 console - form example!

    - by Bach
    I'm trying to build a simple form in a C++ Win32 console application. Instead of using cin and repeatedly prompting the user to enter details, I would like to display the form labels and then let the user tab through the fields with the Tab key. What is the simplest way of doing this, without having to use ncurses? All I need is to cout the following all at once:

        Name:
        Username:
        Email:

    then set the cursor position next to the Name field, and each time the user hits Tab, gotoxy to the next position and read into the next variable, e.g.:

        // at startup
        gotoxy(nameX, nameY);
        cin >> name;
        // hit Tab/Enter
        gotoxy(usernameX, usernameY);
        cin >> username;
        // hit Tab/Enter
        gotoxy(emailX, emailY);
        cin >> email;

    Is this even doable? I tried while loops with GetAsyncKeyState and keyboard events, but cin doesn't work properly inside that loop. Is there a good example of a super simple form, or a reference for doing this? I know how to use SetConsoleCursorPosition, but how do I implement the tabbing while still being able to capture input? Thanks.
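
    One way that avoids fighting cin, sketched rather than tested: read keys directly with _getch() so Tab and Enter become ordinary characters you can act on, and keep using SetConsoleCursorPosition between fields:

        #include <conio.h>
        #include <iostream>
        #include <string>

        // Reads one field, echoing as it goes; returns on Tab or Enter.
        // Assumes the caller has already positioned the cursor.
        std::string readField()
        {
            std::string value;
            for (;;) {
                int ch = _getch();
                if (ch == '\t' || ch == '\r')       // Tab or Enter ends the field
                    break;
                if (ch == '\b') {                   // Backspace: erase on screen too
                    if (!value.empty()) {
                        value.erase(value.size() - 1);
                        std::cout << "\b \b" << std::flush;
                    }
                } else if (ch >= 32 && ch < 127) {  // printable characters only
                    value += static_cast<char>(ch);
                    std::cout << static_cast<char>(ch) << std::flush;
                }
            }
            return value;
        }

    The caller loops over the fields, positioning the cursor before each readField() call, which gives the tab-through behaviour without a keyboard hook.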


  • Rails has_many conditions

    - by user305270
    c = "(f.profile_id = #{self.id} OR f.friend_id = #{self.id})" c += AND + "(CASE WHEN f.profile_id=#{self.id} THEN f.friend_id ELSE f.profile_id END = p.id)" c += AND + "(CASE WHEN f.profile_id=#{self.id} THEN f.profile_rejected ELSE f.friend_rejected END = 1)" c += AND + "(p.banned = 0)" I need this to be used in a has_many relationship like this: has_many :removed_friends, :conditions => ??? how do i set there the self.id?, or how do i pass there the id? Then i want to use the will_paginate plugin: @profile.removed_friends.paginate(:page => 1, :per_page => 20) Thanks for your help EDIT: class Profile < ActiveRecord::Base has_many :friendships has_many :removed_friends, :class_name => 'Profile', :through => :friendships, :conditions => "(friendships.profile_id = #{self.id} OR friendships.friend_id = #{self.id})" "AND (CASE WHEN friendships.profile_id=#{self.id} THEN friendships.profile_rejected ELSE friendships.friend_rejected END = 1)" + "AND (p.banned = 0)" end class Friendship < ActiveRecord::Base belongs_to :profile belongs_to :removed_friend, :class_name => 'Profile', :foreign_key => "(CASE WHEN friendships.profile_id = #{self.id} THEN friend_id ELSE profile_id END)" end


  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    Method signature in Java:

        public List<String> getFilesIn(List<File> directories)

    A similar one in Ruby:

        def get_files_in(directories)

    In the Java case, the type system gives me information about what the method expects and delivers. In the Ruby case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called inside the method defined here. This seems highly problematic:

    Even with 100% accurate, up-to-date documentation, the Ruby code essentially has to expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare. The Ruby code gives me no clue what's being returned; I would essentially have to experiment, or read the code, to find out what methods the returned object responds to.

    I'm not looking to debate static typing vs duck typing, but to understand how you maintain a production system where you have almost no ability to design by contract.

    Update: No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?
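
    One idiom that softens this in practice is making the implicit contract explicit and failing fast, so the expected messages are at least documented in code. A sketch:

        require 'pathname'

        # Duck-typed: accepts anything whose elements respond to #children
        # (Pathname does; so can a test double)
        def get_files_in(directories)
          directories.flat_map do |dir|
            unless dir.respond_to?(:children)
              raise ArgumentError, "#{dir.inspect} does not quack like a directory"
            end
            dir.children.reject { |child| child.directory? }
          end
        end

        files = get_files_in([Pathname.new("/tmp")])

    The respond_to? guard is no substitute for an interface, but it turns a confusing NoMethodError deep inside the method into an immediate, named failure at the boundary.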


  • ClassCircularityError when running tomcat6 from eclipse

    - by zenmonkey
    I'm using Eclipse 3.5, the Tomcat runtime is set to Tomcat 6.0.26, and the VM is JDK 1.6.17 (Mac OS X). When I try to run a web application from the Eclipse Java EE perspective, I keep seeing this error in the console:

        Caused by: java.lang.ClassCircularityError: java/util/logging/LogRecord
            at com.adsafe.util.SimpleFormatter.format(SimpleFormatter.java:11)
            at java.util.logging.StreamHandler.publish(StreamHandler.java:179)
            at java.util.logging.ConsoleHandler.publish(ConsoleHandler.java:88)
            at java.util.logging.Logger.log(Logger.java:458)
            at java.util.logging.Logger.doLog(Logger.java:480)
            at java.util.logging.Logger.logp(Logger.java:596)
            at org.apache.juli.logging.DirectJDKLog.log(DirectJDKLog.java:165)
            at org.apache.juli.logging.DirectJDKLog.info(DirectJDKLog.java:115)
            at org.apache.catalina.core.ApplicationContext.log(ApplicationContext.java:644)
            at org.apache.catalina.core.ApplicationContextFacade.log(ApplicationContextFacade.java:251)
            at org.apache.catalina.core.StandardWrapper.unavailable(StandardWrapper.java:1327)
            at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1130)
            at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:993)
            at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4187)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4496)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
            at org.apache.catalina.core.StandardService.start(StandardService.java:519)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
            ... 6 more

    java/util/logging/LogRecord implements Serializable, so I am not sure where the circular reference could have crept in. Has anyone seen this before, and does anyone know how to fix it?


  • Why does my Doctrine DBAL query return no results when quoted?

    - by braveterry
    I'm using the Doctrine DataBase Abstraction Layer (DBAL) to perform some queries. For some reason, when I quote a parameter before passing it to the query, I get back no rows. When I pass it unquoted, it works fine. Here's the relevant snippet of code I'm using:

        public function get($game)
        {
            load::helper('doctrinehelper');
            $conn = doctrinehelper::getconnection();
            $statement = $conn->prepare('SELECT games.id as id,
                                                games.name as name,
                                                games.link_url,
                                                games.link_text,
                                                services.name as service_name,
                                                image_url
                                         FROM games, services
                                         WHERE games.name = ?
                                           AND services.key = games.service_key');
            $quotedGame = $conn->quote($game);
            load::helper('loghelper');
            $logger = loghelper::getLogger();
            $logger->debug("Quoted Game: $quotedGame");
            $logger->debug("Unquoted Game: $game");
            $statement->execute(array($quotedGame));
            $resultsArray = $statement->fetchAll();
            $logger->debug("Number of rows returned: " . count($resultsArray));
            return $resultsArray;
        }

    Here's what the log shows:

        01/01/11 17:00:13,269 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
        01/01/11 17:00:13,269 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
        01/01/11 17:00:13,270 [2112] DEBUG root - Number of rows returned: 0

    If I change this line:

        $statement->execute(array($quotedGame));

    to this:

        $statement->execute(array($game));

    I get this in the log:

        01/01/11 16:51:42,934 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
        01/01/11 16:51:42,935 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
        01/01/11 16:51:42,936 [2112] DEBUG root - Number of rows returned: 1

    Have I fat-fingered something?
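
    For the record, the rule at work: placeholders in a prepared statement bind raw values (the driver does the escaping), while quote() is only for values spliced directly into the SQL text. Quoting first makes the quote characters part of the compared value, so the WHERE clause looks for a name that literally includes the quotes and matches nothing. The two consistent forms, sketched:

        // Placeholder: pass the raw value and let the driver escape it
        $statement = $conn->prepare('SELECT * FROM games WHERE games.name = ?');
        $statement->execute(array($game));

        // Splicing into the SQL string: only here is quote() appropriate
        $rows = $conn->fetchAll(
            'SELECT * FROM games WHERE games.name = ' . $conn->quote($game)
        );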


  • jsTree: How to create a new ID for the new added node?

    - by marknt15
    Hi, I can normally get the ID of the default tree nodes, but my problem is that on create, jsTree adds a new node that doesn't have an ID. My question is: how can I add an ID to the newly created tree node? What I'm thinking of doing is adding the ID HTML attribute to the newly created node, but how? I need the ID of every node, because it serves as a reference to the node's respective div storage.

    HTML code:

        <div class="demo" id="demo_1">
          <ul>
            <li id="phtml_1" class="file"><a href="#"><ins>&nbsp;</ins>Root node 1</a></li>
            <li id="phtml_2" class="file"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
          </ul>
        </div>

    JS code:

        $("#demo_1").tree({
            ui : { theme_name : "apple" },
            callback : {
                onrename : function (NODE, TREE_OBJ) {
                    alert(TREE_OBJ.get_text(NODE));
                    alert($(NODE).attr('id'));
                }
            }
        });

    Cheers, Mark
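
    A sketch of one approach, assuming the pre-1.0 jsTree callback API that onrename above comes from (check the oncreate signature against your version): assign a temporary client-side id in oncreate, then swap it for the real id once the server has persisted the node.

        $("#demo_1").tree({
            ui : { theme_name : "apple" },
            callback : {
                oncreate : function (NODE, REF_NODE, TYPE, TREE_OBJ, RB) {
                    // Temporary id so the node can be tied to its div
                    // storage immediately; replace it after the save
                    $(NODE).attr("id", "phtml_new_" + new Date().getTime());
                },
                onrename : function (NODE, TREE_OBJ) {
                    alert(TREE_OBJ.get_text(NODE));
                    alert($(NODE).attr('id'));
                }
            }
        });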


  • Trouble getting $.ajax() to work in PhoneGap against a locally hosted server

    - by David Gutierrez
    I'm currently trying to make an AJAX POST request to an IIS Express-hosted MVC 4 Web API endpoint from an Android VM (Bluestacks) on my machine. Here is the snippet of code that I am trying, and cannot get to work:

        $.ajax({
            type: "POST",
            url: "http://10.0.2.2:28434/api/devices",
            data: { 'EncryptedPassword': '1234', 'UserName': 'test', 'DeviceToken': 'd234' }
        }).always(function (data, textStatus, jqXHR) {
            alert(textStatus);
        });

    Whenever I run this request, I always get back a textStatus of 'error'. After hours of trying different things, I pushed my endpoint to an actual server and was able to get responses back in PhoneGap if I built up an XMLHttpRequest by hand, like so:

        var request = new XMLHttpRequest();
        request.open("POST", "http://172.16.100.42/MobileRewards/api/devices", true);
        request.onreadystatechange = function () { // Call a function when the state changes.
            console.log("state = " + request.readyState);
            console.log("status = " + request.status);
            if (request.readyState == 4) {
                if (request.status == 200 || request.status == 0) {
                    console.log("*" + request.responseText + "*");
                }
            }
        };
        request.send("{EncryptedPassword:1234,UserName:test,DeviceToken:d234}");

    Unfortunately, if I use $.ajax() against the same endpoint, I still get a status text of 'error'. Here is that snippet for reference:

        $.ajax({
            type: "POST",
            url: "http://172.16.100.42/MobileRewards/api/devices",
            data: { 'EncryptedPassword': '1234', 'UserName': 'test', 'DeviceToken': 'd234' }
        }).always(function (data, textStatus, jqXHR) {
            alert(textStatus);
        });

    So really, there are a couple of questions here:

    1. Why can't I get any AJAX calls (POST or GET) to successfully hit my endpoint when it's hosted via IIS Express on the same machine that the Android VM is running on?
    2. When my endpoint is hosted on an actual server through IIS and served on port 80, why aren't POST requests successful when I use jQuery's AJAX calls, even though manually creating an XMLHttpRequest works?

    Thanks
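
    Two things worth ruling out, offered as hedged guesses rather than a diagnosis: jQuery refuses cross-domain XHR unless it believes the environment supports it (from a file:// WebView it often guesses wrong, so force $.support.cors), and PhoneGap only lets whitelisted hosts through (an <access origin="..." /> entry in the project config). Logging the underlying status also helps, since 'error' alone hides the cause:

        // Force-enable cross-domain XHR before any $.ajax call
        $.support.cors = true;

        $.ajax({
            type: "POST",
            url: "http://172.16.100.42/MobileRewards/api/devices",
            crossDomain: true,
            data: { EncryptedPassword: "1234", UserName: "test", DeviceToken: "d234" },
            error: function (jqXHR, textStatus, errorThrown) {
                // status 0 usually means the request never left the WebView
                // (whitelist/CORS), not that the server failed
                console.log(jqXHR.status + " " + textStatus + " " + errorThrown);
            }
        });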


  • Modify loggingConfiguration Programmatic (enterprise library)

    - by alhambraeidos
    Hi all, I have an app.config in my Windows application, with a loggingConfiguration section (Enterprise Library 4.1). I need to do the following programmatically:

    1. Get a list of all listeners in the loggingConfiguration section.
    2. Modify the fileName=".\Trazas\Excepciones.log" property of several RollingFlatFileTraceListeners.
    3. Modify several properties of the AuthenticatingEmailTraceListener listener.

    Any help, please? I haven't found any reference or samples. Thanks in advance. Greetings.

        <listeners>
          <add name="Excepciones RollingFile Listener"
               fileName=".\Trazas\Excepciones.log"
               formatter="Text Single Formatter"
               footer="&lt;/Excepcion&gt;"
               header="&lt;Excepcion&gt;"
               rollFileExistsBehavior="Overwrite"
               rollInterval="None"
               rollSizeKB="1500"
               timeStampPattern="yyyy-MM-dd"
               listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
               traceOutputOptions="None"
               filter="All"
               type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          <add name="AuthEmailTraceListener"
               type="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListener, zzzz.Frk.Logging.AuthEmailTraceListener"
               listenerDataType="zzzz.Frk.Logging.AuthEmailTraceListener.AuthenticatingEmailTraceListenerData, zzzz.Frk.Logging.AuthEmailTraceListener"
               formatter="Exception Formatter"
               traceOutputOptions="None"
               toAddress="[email protected]"
               fromAddress="[email protected]"
               subjectLineStarter=" Excepción detectada - "
               subjectLineEnder="incidencias"
               smtpServer="smtp.gmail.com"
               smtpPort="587"
               authenticate="true"
               username="[email protected]"
               password="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
               enableSsl="true" />
        </listeners>
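
    A sketch of the enumerate-and-modify part, assuming Enterprise Library 4.1's configuration object model (verify the type names against your version). The listeners live in the loggingConfiguration section as TraceListenerData entries:

        using System.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;

        static void RedirectRollingFiles()
        {
            var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
            var settings = (LoggingSettings)config.GetSection(LoggingSettings.SectionName);

            foreach (TraceListenerData listener in settings.TraceListeners)
            {
                var rolling = listener as RollingFlatFileTraceListenerData;
                if (rolling != null)
                    rolling.FileName = @".\Trazas\Excepciones.log";
            }

            // Persist the changes back to app.config
            config.Save(ConfigurationSaveMode.Modified);
        }

    The custom AuthenticatingEmailTraceListenerData should be reachable the same way, cast to its own data type.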


  • Final classes in Python 3.x - something Guido isn't telling me?

    - by GlenCrawford
    This question is built on top of many assumptions; if one assumption is wrong, the whole thing falls over. I'm still relatively new to Python and have just entered the curious/exploratory phase.

    It is my understanding that Python does not support the creation of classes that cannot be subclassed (final classes). However, it seems that the bool class in Python cannot be subclassed. This makes sense when the intent of the bool class is considered (because bool is only supposed to have two values: true and false), and I'm happy with that. What I want to know is how this class was marked as final. So my question is: how exactly did Guido manage to prevent subclassing of bool?

        >>> class TestClass(bool):
                pass

        Traceback (most recent call last):
          File "<pyshell#2>", line 1, in <module>
            class TestClass(bool):
        TypeError: type 'bool' is not an acceptable base type

    Related question: http://stackoverflow.com/questions/2172189/why-i-cant-extend-bool-in-python
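
    For context: bool is implemented in C, and its type object simply doesn't set the Py_TPFLAGS_BASETYPE flag, which is what the interpreter consults before allowing a class to be used as a base. Nothing more magic than that. Pure-Python code can imitate the effect with a metaclass; a sketch:

        class Final(type):
            def __new__(mcls, name, bases, namespace):
                for base in bases:
                    if isinstance(base, Final):
                        raise TypeError("type '%s' is not an acceptable base type"
                                        % base.__name__)
                return super().__new__(mcls, name, bases, namespace)

        class Sealed(metaclass=Final):
            pass

        class Broken(Sealed):   # raises TypeError, just like subclassing bool
            pass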


  • LINQ to XML contents of child records.

    - by Fossaw
    I have this LINQ to XML query...

        var Records = from Item in XDoc.Root.Elements("Item")
                      where (string)Item.Element("ItemNumber") == item.ID.ToString()
                      select Item;

    ...where ItemNumber is a reference number used in the XML (originally written by this program but manually edited by "others"), and item.ID is the database version of the same thing. The query executes, and I can test for the number of entries in the result fine...

        if (Records.Count() < 1)

    ...you get the idea. I have established that there is only one record. Each Item has several child fields. I want to check that the values of the child fields are reasonable before passing them on to the database update subsystem. The XML is produced by the program but edited by users, so I need to really check what is coming back. So I tried...

        if (DB_English.ToString() != Records.Elements("English").ToString())

    ...DB_English is from the database, but the right-hand side does not contain the contents of that field; it contains...

        System.Xml.Linq.Extensions+<GetElements>d__29`1[System.Xml.Linq.XElement]

    ...so, how do I get the value of this element in the XML file? I need to check that the field in the XML has not been altered (the manual editors of this data file are not 100% reliable).
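
    For what it's worth: Records.Elements("English") is a lazy query object, and calling ToString() on it prints the query's type name, never the content. Materializing the single record and reading the child element's value looks something like this sketch:

        // SingleOrDefault() materializes the one expected record
        var record = Records.SingleOrDefault();
        if (record != null)
        {
            // Casting XElement to string yields its text, or null if the
            // element is missing entirely (hand-edited files permitting)
            string english = (string)record.Element("English");
            if (DB_English != english)
            {
                // the field was altered
            }
        }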


  • Django 'ImproperlyConfigured' error after deployment on google app engine

    - by oreon
    Hello, I'm currently trying to get my first Django project running on Google App Engine. I followed the instructions given at http://www.allbuttonspressed.com/projects/djangoappengine as best I could. Unfortunately, I have run into some issues. Locally everything runs fine, no problems. I then tried to deploy my project to the cloud. This is where I'm totally stuck. I always receive 500 Server Errors coupled with google.appengine.runtime.DeadlineExceededError's. Every now and then I get the following error message in my logs, which I think is the root of the problem:

        <class 'django.core.exceptions.ImproperlyConfigured'>: ImportError projectyalanda.pricecompare: No module named projectyalanda.pricecompare

    Obviously something is wrong in the way I reference my Django app. Why this is only an issue in the cloud is a mystery to me. The interesting part of the settings.py file is set up as follows:

        INSTALLED_APPS = (
            'djangotoolbox',
            # 'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'projectyalanda.pricecompare',
        )

    I absolutely can't figure out why Django/App Engine wouldn't be able to find the module, especially since everything works perfectly locally. So where else can I look? The local folder structure is of course also correct, as automatically done by Django, so maybe something is messed up during deployment? How would I be able to find out? Please help me ;-) Thanks
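
    One hedged thing to try: on App Engine the application directory itself is typically the import root, so the app may only be importable by its bare module name. Dropping the project prefix would then fix the ImportError:

        INSTALLED_APPS = (
            'djangotoolbox',
            # 'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'pricecompare',   # bare name, assuming the app dir sits on sys.path
        )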


  • How can I search an XML file without a dynamic language?

    - by jeph perro
    Let me try to explain my situation: we are using a CMS which 'bakes' a website, and you publish it to a webserver. The published site contains only static HTML (or XML) pages, generated from the content in the CMS database. I imported an XML file with the names and phone numbers from the company phone directory. Using only XSLT, can I create a way to search that directory? For example, my XML file, directory.xml, looks like this:

        <directory>
          <person>
            <fname>Ryan</fname>
            <lname>Purple</lname>
            <phone>887 778 5544</phone>
          </person>
          <person>
            <fname>Tanya</fname>
            <lname>Orange</lname>
            <phone>887 998 5541</phone>
          </person>
        </directory>

    Can I create a way to search for a person whose last name starts with "Pur"? Can I pass a parameter to the XSLT? Can I search the XML tree to match the string in the parameter?
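
    To the specific questions: yes on all three counts, in plain XSLT 1.0. A top-level xsl:param receives the search string from whatever invokes the stylesheet (browser scripting, a processor command line, etc.), and starts-with() does the prefix match, case-sensitively. A sketch:

        <?xml version="1.0"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <!-- Supplied by the caller, e.g. query=Pur -->
          <xsl:param name="query" select="''"/>

          <xsl:template match="/directory">
            <ul>
              <!-- Keep only people whose last name starts with $query -->
              <xsl:for-each select="person[starts-with(lname, $query)]">
                <li>
                  <xsl:value-of select="concat(fname, ' ', lname, ': ', phone)"/>
                </li>
              </xsl:for-each>
            </ul>
          </xsl:template>

        </xsl:stylesheet>

    For case-insensitive matching, the usual XSLT 1.0 trick is translate() on both sides before comparing.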


  • C++ classes with members referencing each other

    - by Saad Imran.
    I'm trying to write two classes with members that reference each other. I'm not sure if I'm doing something wrong or if it's just not possible. Can anyone help me out here?

    Source.cpp:

        #include "Headers.h"
        using namespace std;

        void main()
        {
            Network* network = new Network();
            system("pause");
            return;
        }

    Headers.h:

        #ifndef Headers_h
        #define Headers_h

        #include <iostream>
        #include <vector>
        #include "Network.h"
        #include "Router.h"

        #endif

    Network.h:

        #include "Headers.h"

        class Network {
        protected:
            vector<Router> Routers;
        };

    Router.h:

        #include "Headers.h"

        class Router {
        protected:
            Network* network;
        public:
        };

    The errors I'm getting are:

        error C2143: syntax error : missing ';' before '<'
        error C2238: unexpected token(s) preceding ';'
        error C4430: missing type specifier - int assumed.

    I'm pretty sure I'm not missing any semicolons or anything like that. The program works fine if I take out one of the members. I tried finding similar questions, and the solution was to use pointers, but that's what I'm doing and it doesn't seem to be working!
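
    For reference, the standard cure is a forward declaration. Router only stores a Network*, so it never needs Network's full definition, while vector<Router> stores Router objects by value and therefore does need Router's. Giving each header its own include guard (the single guard in Headers.h means whichever header is pulled in second sees nothing) and breaking the cycle looks like this sketch:

        // Router.h - a pointer member only needs the name to exist
        #ifndef ROUTER_H
        #define ROUTER_H

        class Network;   // forward declaration breaks the include cycle

        class Router {
        protected:
            Network* network;
        };

        #endif

        // Network.h - vector<Router> needs Router's full definition
        #ifndef NETWORK_H
        #define NETWORK_H

        #include <vector>
        #include "Router.h"

        class Network {
        protected:
            std::vector<Router> Routers;
        };

        #endif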


  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, with something like:

        #!/bin/sh
        # ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        # ...end of script...
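
    An eval-free variant, sketched: printf -v assigns to a named variable without ever re-parsing the value as shell code, and a [[ ]] regex keeps the key check readable without extglob:

        import () {
            local key val
            while IFS='=' read -r key val; do
                if [[ $key =~ ^CMK_[A-Za-z_]+$ ]]; then
                    # printf -v assigns without eval, so $val is never
                    # interpreted, only stored
                    printf -v "$key" '%s' "$val"
                    export "$key"
                fi
            done
        }

    Since the key is checked against a strict pattern before assignment, neither IFS nor any other sensitive variable can be set, and hostile values pass through as inert strings.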


  • BackgroundWorker might be causing my application to hang

    - by alexD
    I have a Form that uses a BackgroundWorker to execute a series of tests. I use the ProgressChanged event to send messages to the main thread, which then does all of the UI updates. I've combed through my code to make sure I'm not doing anything to the UI in the background worker. There are no while loops in my code, and the BackgroundWorker has a finite execution time (measured in seconds or minutes).

    However, for some reason, when I lock my computer, the application will often be hung when I log back in. The thing is, the BackgroundWorker isn't even running when this happens. The reason I believe it is related to the BackgroundWorker is that the form only hangs when the BackgroundWorker has been executed since the application was loaded (it only runs given a certain user input).

    I pass the worker a List of TreeNodes from a TreeView in my UI through the RunWorkerAsync method, but I only read those nodes in the worker thread; any modifications I make to them are done in the UI thread through the ProgressChanged event. I do use Thread.Sleep in my worker thread to execute tests at timed intervals (which involves sending messages over a TCP socket that was not created in the worker thread).

    I am completely perplexed as to why my application might be hanging. I'm sure I'm doing something 'illegal' somewhere, I just don't know what.


  • NSArray/NSMutableArray : Passed by ref or by value???

    - by wgpubs
    Totally confused here. I have a PARENT UIViewController that needs to pass an NSMutableArray to a CHILD UIViewController. I'm expecting it to be passed by reference, so that changes made in the CHILD will be reflected in the PARENT and vice versa. But that is not the case. Both have a property declared as:

        @property (nonatomic, retain) NSMutableArray *photos;

    Example in PARENT:

        self.photos = [[NSMutableArray alloc] init];
        ChildViewController *c = [[ChildViewController alloc] init ...];
        c.photos = self.photos;

    In CHILD:

        [self.photos addObject:obj1];
        [self.photos addObject:obj2];
        NSLog(@"Count:%d", [self.photos count]); // Equals 2 as expected

    Back in PARENT:

        NSLog(@"Count:%d", [self.photos count]); // Equals 0 ... NOT EXPECTED

    I thought they'd both be accessing the same memory. Is this not the case? If it isn't, how do I keep the two NSMutableArrays in sync?
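
    For context, Objective-C objects always travel as pointers, so the plain assignment above really does leave both controllers looking at one array; a count of 0 back in the parent usually means one side's pointer was replaced afterwards (a second alloc/init in viewDidLoad is a classic culprit, as is a copy property attribute). Logging the addresses makes that visible; a sketch:

        // Right after handing the array over, both lines should print the
        // same address; if they diverge later, something re-assigned one
        // of the properties
        NSLog(@"parent photos: %p", self.photos);
        NSLog(@"child  photos: %p", c.photos);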


  • How to set the RelativeSource in a DataTemplate that is nested in a HierarchicalDataTemplate?

    - by Dabblernl
    I have the following XAML, which does all that it is supposed to, except that the MultiBinding on FontSize fails to retrieve Users. As you can see, Users is an IEnumerable<UserData> that is part of the HierarchicalDataTemplate's DataContext. How do I reference it?

        <TreeView Name="AllGroups" ItemsSource="{Binding}">
          <TreeView.Resources>
            <HierarchicalDataTemplate DataType="{x:Type PrivateMessengerUI:GroupContainer}"
                                      ItemsSource="{Binding Users}">
              <Label Content="{Binding GroupName}"/>
            </HierarchicalDataTemplate>
            <DataTemplate DataType="{x:Type PrivateMessenger:UserData}">
              <TextBlock Text="{Binding Username}"
                         ToolTip="{StaticResource UserDataGroupBox}"
                         Name="GroupedUser"
                         MouseDown="GroupedUser_MouseDown">
                <TextBlock.FontSize>
                  <MultiBinding Converter="{StaticResource LargeWhenIAmSelected}">
                    <Binding ElementName="Root" Path="SelectedUser"/>
                    <Binding RelativeSource="???" Path="DataContext.Users"/>
                  </MultiBinding>
                </TextBlock.FontSize>
              </TextBlock>
            </DataTemplate>
          </TreeView.Resources>
        </TreeView>
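
    One candidate for the ???, offered as a sketch: walk up the visual tree with FindAncestor. The first TreeViewItem above the TextBlock is the UserData's own item, so AncestorLevel=2 reaches the group's item, whose DataContext is the GroupContainer holding Users:

        <Binding RelativeSource="{RelativeSource FindAncestor,
                                  AncestorType={x:Type TreeViewItem},
                                  AncestorLevel=2}"
                 Path="DataContext.Users"/>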


  • Figure out if element is present in multi-dimensional array in python

    - by Terje
    I am parsing a log containing nicknames and hostnames. I want to end up with an array that contains each hostname and the latest nickname used with it. I have the following code, which only creates a list of the hostnames:

        hostnames = []
        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        if host in hostnames:
            # Hostname is already present.
            pass
        else:
            # Hostname is not present
            hostnames.append(host)

        print hostnames
        # ['[email protected]', '[email protected]', '[email protected]']

    I thought it would be nice to end up with something along the lines of the following:

        # [['[email protected]', 'John'], ['[email protected]', 'Mary'], ['[email protected]', 'Joe']]

    My problem is finding out whether the hostname is present in such a list:

        hostnames = []
        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        if host in hostnames[0]:  # This doesn't work.
            # Hostname is already present.
            # Somehow check if the nick stored together
            # with the hostname is the latest one
        else:
            # Hostname is not present
            hostnames.append([host, nick])

    Is there an easy fix for this, or should I try a different approach? I could always have an array of objects or structs (if there is such a thing in Python), but I would prefer a solution to my array problem.
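
    A dict keyed on hostname sidesteps the membership test entirely, since assignment both inserts and updates; a sketch:

        latest_nick = {}   # hostname -> most recently seen nick

        # while(parsing):
        #     nick = nick_on_current_line
        #     host = host_on_current_line
        latest_nick[host] = nick   # insert or overwrite: the last nick wins

        print latest_nick

        # and if a list of [host, nick] pairs is needed afterwards:
        hostnames = [[host, nick] for host, nick in latest_nick.items()]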

