Search Results

Search found 9254 results on 371 pages for 'approach'.

  • Issues with Rails 3.1 API with Query String to Create action on Mac OS X Mountain Lion

    - by hjaved
    Hi, I've been stuck on this problem for a while and would appreciate your help. I'm writing an API that lets an external source, such as a browser query string or a smartphone, submit User model data in a form and hit the User create action to write it to the db. Please tell me what I'm doing wrong with the code below.

    I've also observed that if I have code like @user = User.new(params[:user]), this approach only works when a user enters their data within the form. And if I have code such as @user = User.new(name: params[:name], location: params[:location], password: params[:password], email: params[:email]), that code ONLY works for a query-string entry, but NOT for both query string AND regular form submission. Why is that, and how can I write the Users controller create action so that it takes care of both situations?

    URL used:

        localhost:3000/users/create?name=John&&[email protected]&&password=secret&&location=SanFrancisco&date=06122012

    The date is of type string, but it doesn't show up in the database. Why? Everything else does.

    UsersController.rb:

        def create
          @user = User.new(params[:user])
          if @user.save
            session[:uid] = @user.id
            redirect_to thanks_path, notice: "Welcome #{@user.name}!"
          else
            redirect_to root_path
          end
        end

    New User form:

        <%= u.text_field :name, placeholder: "Name" %><br>
        <%= u.text_field :email, placeholder: "Email" %><br>
        <%= u.password_field :password, placeholder: "Password" %><br>
        <%= u.text_field :location, placeholder: "City" %><br>
        <%= u.text_field :date, placeholder: "Date" %><br>
        <% if params[:partner_id] %>
          <%= u.hidden_field :partner_id, value: params[:partner_id] %>
        <% end %>
        <button class="btn btn-large btn-primary">Enter</button>

    I also tried to create a separate method called remotecreate for User creation from something other than a regular web form. I entered remotecreate in the query string, but it didn't work.

        def remotecreate
          @user = User.create(name: params[:name], email: params[:email], password: params[:password], location: params[:location], date: params[:date])
          if @user.save
            session[:uid] = @user.id
            redirect_to thanks_path, notice: "Welcome #{@user.name}"
          else
            redirect_to root_path
          end
        end

    Thanks!

  • Applying Domain Model on top of Linq2Sql entities

    - by Thomas
    I am trying to practice the model-first approach and am putting together a domain model. My requirement is pretty simple: a UserSession can have multiple ShoppingCartItems. I should start off by saying that I am going to apply the domain model interfaces to LINQ to SQL generated entities (using partial classes). My requirement translates into three database tables (UserSession, Product, ShoppingCartItem, where ProductId and UserSessionId are foreign keys in the ShoppingCartItem table), and LINQ to SQL generates these entities for me. I know I shouldn't even be dealing with the database at this point, but I think it is important to mention.

    The aggregate root is UserSession, since a ShoppingCartItem cannot exist without a UserSession, but I am unclear on the rest. What about Product? It is definitely an entity, but should it be associated with ShoppingCartItem? Here are a few suggestions (they might all be incorrect implementations):

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public int ProductId { get; set; }
        }

    Another one would be:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public IProduct Product { get; set; }
        }

    A third one is:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItemCollection> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItemCollection
        {
            public IUserSession UserSession { get; set; }
            public IProduct Product { get; set; }
        }

        public interface IProduct
        {
            public int ProductId { get; set; }
        }

    I have a feeling my mind is too tightly coupled to database models and tables, which is making this hard to grasp. Anyone care to decouple?

  • LINQ to SQL "Could not find key member". Only fails on server.

    - by Adam Carr
    I have a scenario where I am inheriting from an abstract class in my partial LINQ to SQL auto-generated class implementation. My base abstract class has an abstract property called ID, which I have flagged inside my LINQ to SQL model with the override instance modifier. This works fine locally without any issues. I have also done some development on another machine and it works fine there too (both in VS2008 and using Subversion). I am running CI with TeamCity, and the build succeeds and deploys as desired.

    The problem is that when the server tries to hit the database for the first time via the LINQ to SQL data context, it generates the following error:

        Could not find key member 'Id' of key 'Id' on type 'CustomType'. The key may be wrong
        or the field or property on 'CustomType' has changed names.

    I have tried changing my configuration by not implementing the Id field in my base class, but this still fails. Why does it work on both of my dev machines but not on the server? I am using LINQ to SQL in another project that runs on this server just fine. FYI: LINQ to SQL, SQL 2008, .NET 3.5, Server 2008, IIS 7.0.

    UPDATE: I have gone back and added the same table a second time to the same data model, but without a base class, and then displayed the results from that table and got no errors. This tells me it has something to do with my base abstract class and the need to flag a property on one of my LINQ to SQL model classes (one that belongs to a key relationship) with the override instance modifier. No answer to this yet, but I am getting closer.

    UPDATE: I have fixed my issue by simply changing my approach to the problem, but I am still interested in why this doesn't work. I created a new Windows Server 2008 VPC, patched it, deployed a pre-built version of my site to it and still got the same error. I now assume the issue is, as the person said here, a dependency issue with VS2008. I will install VS2008 on the VPC to see if it works after that.

  • How would you organize this JavaScript?

    - by Anurag
    How do you usually organize complex web applications that are extremely rich on the client side? I have created a contrived example to show the kind of mess it's easy to get into if things are not managed well in big apps. Feel free to modify/extend this example as you wish - http://jsfiddle.net/NHyLC/1/

    The example basically mirrors part of the comment posting on SO, and follows these rules:

    - A comment must be 15 characters minimum, after runs of multiple spaces are trimmed down to one.
    - If Add Comment is clicked, but the size is less than 15 after removing multiple spaces, show a popup with the error.
    - Indicate the number of characters remaining and summarize with color coding: gray indicates a small comment, brown a medium comment, orange a large comment, and red a comment overflow.
    - Only one comment can be submitted every 15 seconds. If a comment is submitted too soon, show a popup with an appropriate error message.

    A couple of issues I noticed with this example:

    - This should ideally be a widget, or some sort of packaged functionality.
    - Things like one comment per 15 seconds and a 15-character minimum comment belong to application-wide policies rather than being embedded inside each widget.
    - Too many hard-coded values.
    - No code organization: model, views, and controllers are all bundled together. Not that MVC is the only approach for organizing rich client-side web applications, but there is none in this example.

    How would you go about cleaning this up? Applying a little MVC/MVP along the way? Here are some of the relevant functions, but it will make more sense if you see the entire code on jsFiddle:

        /**
         * Handle comment change.
         * Update character count.
         * Indicate progress.
         */
        function handleCommentUpdate(comment) {
            var status = $('.comment-status');
            status.text(getStatusText(comment));
            status.removeClass('mild spicy hot sizzling');
            status.addClass(getStatusClass(comment));
        }

        /**
         * Is the comment valid for submission?
         */
        function commentSubmittable(comment) {
            var notTooSoon = !isTooSoon();
            var notEmpty = !isEmpty(comment);
            var hasEnoughCharacters = !isTooShort(comment);
            return notTooSoon && notEmpty && hasEnoughCharacters;
        }

        // submit comment
        $('.add-comment').click(function() {
            var comment = $('.comment-box').val();

            // submit comment, fake ajax call
            if (commentSubmittable(comment)) {
                // ..
            }

            // show a popup if comment is mostly spaces
            if (isTooShort(comment)) {
                if (comment.length < 15) {
                    // blink status message
                } else {
                    popup("Comment must be at least 15 characters in length.");
                }
            }
            // show a popup if comment submitted too soon
            else if (isTooSoon()) {
                popup("Only 1 comment allowed per 15 seconds.");
            }
        });

  • Perl newbie: get a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language. I am trying to have a debug_mode switch on the CLI which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        #           depending on a CLI switch flag
        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_STEPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if (!$debug_mode) {
                print "debug_mode is OFF\n";
            }
            elsif ($debug_mode) {
                print "debug_mode is ON\n";
            }
            else {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv # ()
        {
            print("Number of ARGV : " . (1 + $#ARGV) . "\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if (defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) {
                print "You can only see me if -debug_mode is NOT set" .
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if (defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, it at least works: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work". I see both DEBUG_VERBOSE and DEBUG_SUPPRESS_ERROR_MSGS apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the precedence of debug modes.

    Also, I am not certain whether my approach is good enough to Perlists, and I hope I am getting my feet in the right direction. One big problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module beyond the defaults, and I cannot have any control over the environment where this script will be executed. Many thanks for your suggestions in advance.
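
    One language-neutral way to make "conflicting" modes deterministic is to resolve the requested flag letters against a fixed precedence list instead of the order the user typed them. A minimal sketch in Python (the ranking below is an assumption for illustration; the same scan translates directly into a Perl loop over a precedence array):

        # Resolve debug-mode letters by a fixed precedence, not input order.
        PRECEDENCE = ["s", "v", "i", "c"]  # hypothetical: suppress outranks verbose, etc.

        def active_modes(flag_string):
            """Return requested modes in precedence order, dropping unknown letters."""
            requested = set(flag_string or "")
            return [mode for mode in PRECEDENCE if mode in requested]

        print(active_modes("vs"))  # ['s', 'v'] -- same result as for "sv"

    With this, -debug_mode=sv and -debug_mode=vs behave identically, and a rule like "suppress wins over verbose" lives in one array rather than in scattered if statements.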

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have legacy code to maintain, and while trying to understand the logic behind it I have run into lots of annoying issues. The application is written mainly in JavaScript, with extensive use of jQuery plus different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting the result of a remote AJAX request. It also uses callbacks a lot and a pretty complicated "by convention" programming style (lots of event handlers are created on the fly based on certain object names, e.g. the current page name or current step name). On top of that, the code is very messy and there is no obvious inner structure: functions are scattered through the code, file names do not reflect the business role of the code, and lots of functions and code snippets are most likely not used at all.

    PROBLEM: How do I approach this code base so that the inner flow of the code can be sort of "reverse engineered" using a suite of smart debugging tools? Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. It would also be nice to be able to create a "diagram of calls" for the application (i.e. to run a particular page's logic, this particular flow of function calls was executed in this particular order), not to mention run a coverage analysis to identify potentially orphaned code fragments.

    I would like to stress once more that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...)

    An IDE of some sort that would aid in extending the code would also be great. I am currently looking into the possibility of using Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say 70% JavaScript with jQuery, 30% ASP). I have obviously tried Firebug, but I was unable to find a way to define a breakpoint in, or step into, code that is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking a URL and injects it into the local client code). The Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.

  • Linux: How to find all serial devices (ttyS, ttyUSB, ..) without opening them?

    - by Thomas Tempelmann
    What is the proper way to get a list of all available serial ports/devices on a Linux system? In other words, when I iterate over all devices in /dev/, how do I tell which ones are serial ports in the classic sense, i.e. those usually supporting baud rates and RTS/CTS flow control? The solution would be coded in C.

    I ask because I am using a 3rd-party library that does this clearly wrong: it appears to only iterate over /dev/ttyS*. The problem is that there are, for instance, serial ports over USB (provided by USB-RS232 adapters), and those are listed under /dev/ttyUSB*. And reading the Serial-HOWTO at Linux.org, I get the idea that there'll be other namespaces as well, as time goes on. So I need to find the official way to detect serial devices. The problem is that none appears to be documented, or I can't find it.

    I imagine one way would be to open all files from /dev/tty* and call a specific ioctl() on them that is only available on serial devices. Would that be a good solution, though?

    UPDATE: hrickards suggested looking at the source for setserial. Its code does exactly what I had in mind. First, it opens a device with:

        fd = open(path, O_RDWR | O_NONBLOCK)

    Then it invokes:

        ioctl(fd, TIOCGSERIAL, &serinfo)

    If that call returns no error, then apparently it's a serial device. I found similar code here, which suggested also adding the O_NOCTTY option.

    There is one problem with this approach, though: when I tested this code on BSD Unix (i.e. OS X), it worked as well; however, serial devices provided through Bluetooth cause the system (driver) to try to connect to the Bluetooth device, which takes a while before it returns with a timeout error. This is caused by just opening the device, and I can imagine similar things happening on Linux as well. Ideally, I should not need to open the device to figure out its type. I wonder if there's a way to invoke ioctl functions without an open, or to open a device in a way that does not cause connections to be made. Any ideas?
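
    For reference, the setserial-style probe is short enough to sketch in a few lines. This version is in Python rather than C purely to keep it self-contained; the two calls are the same ones quoted above. The TIOCGSERIAL request number is an assumption (the usual Linux asm-generic value) and should be verified per architecture. Note that this still opens the device, so it does not solve the Bluetooth-timeout problem raised at the end of the question.

        import fcntl
        import os

        TIOCGSERIAL = 0x541E  # assumed Linux ioctl number; check <asm/ioctls.h>

        def looks_like_serial(path):
            """Open non-blocking with no controlling tty, then ask the driver
            for its serial_struct; only real serial drivers answer."""
            try:
                fd = os.open(path, os.O_RDWR | os.O_NONBLOCK | os.O_NOCTTY)
            except OSError:
                return False
            try:
                buf = bytearray(128)  # roomy buffer for struct serial_struct
                fcntl.ioctl(fd, TIOCGSERIAL, buf)
                return True
            except OSError:
                return False
            finally:
                os.close(fd)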

  • May volatile be used in user-defined types to help write thread-safe code?

    - by David Rodríguez - dribeas
    I know it has been made quite clear in a couple of questions/answers before that volatile is related to the visible state of the C++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile-time check, to force the compiler into rejecting code that could be non-thread-safe. In the article the keyword is used more like a required_thread_safety tag than for the actual intended use of volatile.

    Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but for lack of a better tool I could accept it.

    A basic simplification of the article: if you declare a variable volatile, only volatile member functions can be called on it, so the compiler will block calls to other methods. Declaring an std::vector instance volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to remove the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article:

        template <typename T>
        class LockingPtr {
        public:
            // Constructors/destructors
            LockingPtr(volatile T& obj, Mutex& mtx)
                : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx)
            { mtx.Lock(); }

            ~LockingPtr()
            { pMtx_->Unlock(); }

            // Pointer behavior
            T& operator*()
            { return *pObj_; }

            T* operator->()
            { return pObj_; }

        private:
            T* pObj_;
            Mutex* pMtx_;
            LockingPtr(const LockingPtr&);
            LockingPtr& operator=(const LockingPtr&);
        };

        class SyncBuf {
        public:
            void Thread1() {
                LockingPtr<BufT> lpBuf(buffer_, mtx_);
                BufT::iterator i = lpBuf->begin();
                for (; i != lpBuf->end(); ++i) {
                    // ... use *i ...
                }
            }
            void Thread2();
        private:
            typedef vector<char> BufT;
            volatile BufT buffer_;
            Mutex mtx_; // controls access to buffer_
        };

  • Performance of tokenizing CSS in PHP

    - by Boldewyn
    This is a noob question from someone who hasn't written a parser/lexer before. I'm writing a tokenizer/parser for CSS in PHP (please don't reply with "OMG, why in PHP?"). The syntax is written down neatly by the W3C here (CSS2.1) and here (CSS3, draft). It's a list of 21 possible tokens, all but two of which cannot be represented as static strings.

    My current approach is to loop through an array containing the 21 patterns over and over again, do an if (preg_match()), and reduce the source string match by match. In principle this works really well. However, for a 1000-line CSS string this takes somewhere between 2 and 8 seconds, which is too much for my project. Now I'm banging my head over how other parsers tokenize and parse CSS in fractions of a second. OK, C is always faster than PHP, but nonetheless, are there any obvious D'oh!s that I fell into?

    I made some optimizations, like checking for '@', '#' or '"' as the first char of the remaining string and applying only the relevant regexp then, but this hasn't brought any great performance boost. My code (snippet) so far:

        $TOKENS = array(
            'IDENT'     => '...regexp...',
            'ATKEYWORD' => '@...regexp...',
            'String'    => '"...regexp..."|\'...regexp...\'',
            //...
        );

        $string = '...CSS source string...';
        $stream = array();

        // we reduce $string token by token
        while ($string != '') {
            $string = ltrim($string, " \t\r\n\f"); // unconsumed whitespace at the
                // start is insignificant, but doing a trim reduces exec time by 25%
            $matches = array();
            // loop through all possible tokens
            foreach ($TOKENS as $t => $p) {
                // The '&' is used as delimiter, because it isn't used anywhere in
                // the token regexps
                if (preg_match('&^'.$p.'&Su', $string, $matches)) {
                    $stream[] = array($t, $matches[0]);
                    $string = substr($string, strlen($matches[0]));
                    // Yay! We found one that matches!
                    continue 2;
                }
            }
            // if we come here, we have a syntax error and handle it somehow
        }

        // result: an array $stream consisting of arrays with
        // 0 => type of token
        // 1 => token content
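
    One cost worth noting in the snippet: every substr() call copies the shrinking remainder of the stylesheet, which makes the loop quadratic in input size, and each failed preg_match() rescans the same prefix. A common fix is to combine the patterns into one alternation with named groups and advance an offset instead of cutting the string. A sketch of the idea in Python (the token patterns here are simplified stand-ins, not the real W3C grammar); in PHP the equivalent should be preg_match() with its $offset argument and an anchored pattern in place of substr():

        import re

        # Simplified stand-ins for a few of the 21 CSS token patterns.
        MASTER = re.compile(r"""
              (?P<ATKEYWORD>@[-\w]+)
            | (?P<HASH>\#[-\w]+)
            | (?P<STRING>"[^"]*"|'[^']*')
            | (?P<IDENT>-?[a-zA-Z_][-\w]*)
            | (?P<S>[ \t\r\n\f]+)
            | (?P<DELIM>.)            # catch-all so the loop always advances
        """, re.VERBOSE | re.DOTALL)

        def tokenize(source):
            pos, stream = 0, []
            while pos < len(source):
                m = MASTER.match(source, pos)  # anchored at pos: no string copies
                stream.append((m.lastgroup, m.group()))
                pos = m.end()
            return stream

    The single compiled alternation also means the regex engine does the "which token is it?" dispatch in one pass, instead of up to 21 preg_match() calls per token.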

  • JqTouch - Detect trigger for animation

    - by majman
    Yet another problem I'm having with jqTouch... I'm trying to detect which element was clicked to trigger an animation, so that I can pass parameters from the clicked item to the subsequent page. My HTML is:

        <div id="places">
            <div class="toolbar">
                <h1>Places</h1>
                <a class="back" href="#">Back</a>
            </div>
            <ul>
                <li id="1"><a href="#singleplace">Place 1</a></li>
                <li id="2"><a href="#singleplace">Place 2</a></li>
                <li id="3"><a href="#singleplace">Place 3</a></li>
            </ul>
        </div>

        <div id="singleplace">
            <div class="toolbar">
                <h1></h1>
                <a class="back" href="#">Back</a>
            </div>
        </div>

    When I click on any of the list items in #places, I'm able to slide over to #singleplace just fine, but I'm trying to detect which element was clicked so that I can pass parameters into the #singleplace div. My JavaScript is:

        var placeID;
        $('#places a').live('mouseup', function() {
            $('#singleplace h1').html($(this).text());
            placeID = $(this).parent().attr('id');
        });

    I've tried several alternatives to the $(el).live('event', fn) approach, including:

        $('#places a').live('click', fn) ...
        $('#places a').live('mouseup', fn) ...
        $('#places a').live('tap', fn) ...
        $('#places a').tap(fn) ...

    None of these seem to work. Is there a better way I could be handling this? I noticed on jqTouch's issues page there is this: http://code.google.com/p/jqtouch/issues/detail?id=91 which may be part of the problem...

  • Python architecture question

    - by tom smith
    Hi. I am creating a distributed crawling Python app. It consists of a master server and associated client apps that run on client servers. The purpose of a client app is to run across a targeted site to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like:

        main:
            parse initial url
            call function level1(data1)

        function level1 (data)
            parse the url for data1
            use the required xpath to get the dom elements
            call the next function: level2(data)

        function level2 (data2)
            parse the url for data2
            use the required xpath to get the dom elements
            call the next function: level3(data)

        function level3 (data3)
            parse the url for data3
            use the required xpath to get the dom elements
            call the next function: level4(data)

        function level4 (data4)
            parse the url for data4
            use the required xpath to get the dom elements
            # at the final function, all the data is output
            # and eventually returned to the server;
            # at this point the data has elements from each function

    My question: given that the number of calls the current function makes to the child function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page.

    If I run a client on a single box as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of implementing the child functions as threads spawned from the current function, but that could be a nightmare, as well as quickly bring the box to its knees!

    I've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way that allows each client/function to be run directly from the master. This process requires a bit of a rewrite, but it has a number of advantages: a bunch of redundancy, and speed. It would detect if a section of the process crashed and restart from that point. But I'm not sure if it would be any faster...

    I'm writing the parsing scripts in Python. So... any thoughts/comments would be appreciated. I can go into a great deal more detail, but didn't want to bore anyone! Thanks! tom
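
    The "master passes packets" idea falls out naturally if each levelN function stops calling its child directly and instead returns the follow-up work as (level, url, context) packets that go onto a queue. A minimal single-box sketch (the URLs, handler bodies, and worker count of 4 are placeholders); the handler code would be unchanged if the queue later lived on the master and the workers on separate client boxes:

        from queue import Queue
        from threading import Thread

        def level1(url, ctx):
            # fetch the page, run the level-1 XPath/regex, yield follow-up packets
            yield ("http://example.com/level2", {**ctx, "data1": "..."})

        def level2(url, ctx):
            yield ("http://example.com/level3", {**ctx, "data2": "..."})

        def level3(url, ctx):
            return []  # leaf: ship ctx (data1..data3) back to the master here

        HANDLERS = {1: level1, 2: level2, 3: level3}
        tasks = Queue()
        tasks.put((1, "http://example.com/start", {}))

        def worker():
            while True:
                level, url, ctx = tasks.get()
                for next_url, next_ctx in HANDLERS[level](url, ctx):
                    tasks.put((level + 1, next_url, next_ctx))
                tasks.task_done()

        for _ in range(4):  # fixed worker pool instead of thread-per-call
            Thread(target=worker, daemon=True).start()
        tasks.join()

    Because a packet carries everything its step needs, a crashed step costs one packet rather than the whole crawl, which is the restartability described above.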

  • Showing errors from actions in table-based views

    - by enashnash
    I have a view where I want to perform different actions on the items in each row of a table, similar to this (in, say, ~/Views/Thing/Manage.aspx):

        <table>
        <% foreach (var thing in Model) { %>
            <tr>
                <td><%: thing.x %></td>
                <td>
                    <% using (Html.BeginForm("SetEnabled", "Thing")) { %>
                        <%: Html.Hidden("x", thing.x) %>
                        <%: Html.Hidden("enable", !thing.Enabled) %>
                        <input type="submit" value="<%: thing.Enabled ? "Disable" : "Enable" %>" />
                    <% } %>
                </td>
                <!-- more tds with similar action forms here, a few per table row -->
            </tr>
        <% } %>
        </table>

    In my ThingController, I have functions similar to the following:

        public ActionResult Manage()
        {
            return View(ThingService.GetThings());
        }

        [HttpPost]
        public ActionResult SetEnabled(string x, bool enable)
        {
            try
            {
                ThingService.SetEnabled(x, enable);
            }
            catch (Exception ex)
            {
                ModelState.AddModelError("", ex.Message); // I know this is wrong...
            }
            return RedirectToAction("Manage");
        }

    For the most part, this is working fine. The problem is that if ThingService.SetEnabled throws an error, I want to be able to display the error at the top of the table. I've tried a few things with Html.ValidationSummary() in the page, but I can't get it to work. Note that I don't want to send the user to a separate page to do this, and I'm trying to do it without using any JavaScript.

    Am I going about displaying my table in the best way? How do I get the errors displayed the way I want them? I will end up with perhaps 40 small forms on the page. This approach comes largely from this article, but that doesn't handle errors the way I need. Any takers?

  • Rendering JavaScript at the server-side level. A good or bad idea?

    - by davidhong
    I want to make one thing clear first: this isn't a question about server-side JavaScript or running JavaScript server side. This is a question about rendering JavaScript code (which will be executed on the client side) from server-side code.

    Having said that, take a look at the ASP.NET code below, for example:

        hlRemoveCategory.Attributes.Add("onclick",
            "return confirm('Are you sure you want to delete this?');")

    This prescribes the client-side onclick event on the server side. As opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from server-side code, or vice versa? I personally prefer the second way of hooking client-side UI/behaviour to HTML elements, for the following reasons:

    - The server side already does whatever it needs to, including data validation, event delegation, etc.
    - What the server side sees as an event is not necessarily the same process on the client side; i.e., there are plenty more events on the client side (just look at custom events).
    - What happens on the client side and on the server side during an event could be completely irrelevant to each other and decoupled.
    - Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not really up to it to decide in the event of client-side events.

    And so on and so forth. These are my thoughts, obviously. I want to know what others think, and whether there have been any discussions on this topic. Topics branching from this argument could reach:

    - Code management: is it easier to render everything from the server side?
    - Separation of concerns: is it easier if client-side logic is separated from server-side logic?
    - Efficiency: which is more efficient, both in terms of coding and running?

    At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

  • How to merge jQuery autocomplete and fcbkListSelection functionality?

    - by Ben
    Initial caveat: I am a newbie to jQuery and JavaScript. I started using the autocomplete jQuery plugin from bassistance.de recently. Yesterday, I found the fcbkListSelection plugin (http://www.emposha.com/javascript/fcbklistselection-like-facebook-friends-selector.html) and am using it to include a selector in my application.

    What I now want to do is merge the two pieces of functionality, since there are lots of items in my selector list. To be more specific, I want to put a "Filter by Name" text box above the Facebook-style list selection object in my HTML. When the user types a few characters into the "Filter by Name" text box, I want the Facebook list selector to only show the items that contain the typed characters, kind of like what autocomplete already does, but rather than the results showing below the text input box, I want them to dynamically update the Facebook list object.

    Is this possible and relatively straightforward? If so, can somebody please describe how to do it? If not, is there an easier way to approach this? Thanks!

    OK, to Spencer Ruport's comment: I guess I may already have a more specific question. Below is the current JavaScript in my HTML file (angle brackets represent Django tags). #suggest1 and #fcbklist each do their separate piece, but how do I get them to talk to each other? Do I need to write further JavaScript in my HTML file, or do I need to customize the guts of the plugins' .js? Does this help elaborate?

        $(document).ready(function() {
            var names = [];
            var count = 0;
            {% for a, b, c in friends_for_picker %}
                names[count] = "{{ b }}";
                count = count + 1;
            {% endfor %}

            function findValueCallback(event, data, formatted) {
                $("<li>").html(!data ? "No match!" : "Selected: " + formatted)
                         .appendTo("#result");
            }

            function formatItem(row) {
                return row[0] + " (<strong>id: " + row[1] + "</strong>)";
            }

            function formatResult(row) {
                return row[0].replace(/(<.+?>)/gi, '');
            }

            $("#suggest1").autocomplete(names, {
                multiple: true,
                mustMatch: false,
                autoFill: true,
                matchContains: true
            });

            // id (ul id), width, height (element height), row (elements in row)
            $.fcbkListSelection("#fcbklist", "575", "50", "4");
        });

  • What's the best way to do base36 arithmetic in Perl?

    - by DVK
    What's the best way to do base36 arithmetic in Perl? To be more specific, I need to be able to:

    - Operate on positive N-digit numbers in base 36 (i.e. the digits are 0-9 A-Z), where N is finite, say 9.
    - Provide basic arithmetic, at the very least these three operations: addition (A+B), subtraction (A-B), and whole division, i.e. floor(A/B).

    Strictly speaking, I don't really need a base10 conversion ability: the numbers will 100% of the time be in base 36. So I'm quite OK if the solution does NOT implement conversion from base36 back to base10 and vice versa.

    I don't much care whether the solution is brute-force "convert to base 10 and back", converts to binary, or is some more elegant approach "natively" performing baseN operations (as stated above, to/from base10 conversion is not a requirement). My only three considerations are:

    - It fits the minimum specifications above.
    - It's "standard". Currently we're using an old homegrown module based on hand-rolled base10 conversion that is buggy and sucks. I'd much rather replace that with some commonly used CPAN solution than reinvent the wheel from scratch, but I'm perfectly capable of building it if no better standard possibility exists.
    - It must be fast-ish (though not lightning fast). Something that takes 1 second to sum two 9-digit base36 numbers is worse than anything I can roll on my own :)

    P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree dimensions are big both depth- and breadth-wise, and the tree is VERY actively updated (inserts, deletes and branch moves). This is currently done by having a second table with 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (e.g. a base36 number). Additional considerations:

    - The local order is required to be unique per child (and obviously unique per parent).
    - Any complete re-ordering of a parent is somewhat expensive, and thus the implementation tries to assign, for a parent with X children, orders that are somewhat evenly distributed between 0 and 36**10-1, so that almost no tree insert results in a full re-ordering.
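
    For scale, the brute-force "convert and convert back" route the question is happy with is only a few lines. Here is a sketch in Python to show the shape of it (a Perl version would mirror this loop; the 9-digit zero-padding matches the local_order format described above):

        DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def to_base36(n, width=9):
            """Inverse of int(s, 36): repeated divmod, least significant digit first."""
            out = []
            while n:
                n, r = divmod(n, 36)
                out.append(DIGITS[r])
            return "".join(reversed(out)).rjust(width, "0")

        def b36_add(a, b): return to_base36(int(a, 36) + int(b, 36))
        def b36_sub(a, b): return to_base36(int(a, 36) - int(b, 36))
        def b36_div(a, b): return to_base36(int(a, 36) // int(b, 36))  # floor(A/B)

        print(b36_add("0000000ZZ", "000000001"))  # 000000100

    With only 9 digits, the values fit comfortably in a native integer, so each operation is two conversions plus one machine op, nowhere near the 1-second budget.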

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code, we wish to check that our code makes correct use of the second-level cache. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter { _appender = (CountToContextItemsAppender) existingAppender };
                        }
                    }

                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter { _appender = newAppender };
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset()
                {
                    _count = 0;
                }
            }
        }

    The intended usage:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for the query count; no SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL. Does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?

  • How do I bind a List<object> to a DataGrid in Silverlight?

    - by Ben McCormack
    I'm trying to create a simple Silverlight application that involves parsing a CSV file and displaying the results in a DataGrid. I've configured my application to parse the CSV file and return a List<CsvTransaction> containing properties with the names Date, Payee, Category, Memo, Inflow, and Outflow. The user clicks a button to select a file to parse, at which point I want the DataGrid object to be populated. I'm thinking I want to use data binding, but I can't seem to figure out how to get the data to show up in the grid.

    My XAML for the DataGrid looks like this:

        <data:DataGrid IsEnabled="False" x:Name="TransactionsPreview">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Header="Date" Binding="{Binding Date}" />
                <data:DataGridTextColumn Header="Payee" Binding="{Binding Payee}"/>
                <data:DataGridTextColumn Header="Category" Binding="{Binding Category}"/>
                <data:DataGridTextColumn Header="Memo" Binding="{Binding Memo}"/>
                <data:DataGridTextColumn Header="Inflow" Binding="{Binding Inflow}"/>
                <data:DataGridTextColumn Header="Outflow" Binding="{Binding Outflow}"/>
            </data:DataGrid.Columns>
        </data:DataGrid>

    The code-behind for the xaml.cs file looks like this:

        private void OpenCsvFile_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                CsvTransObject csvTO = new CsvTransObject.ParseCSV();
                // This returns a List<CsvTransaction> and passes it
                // to a method which is supposed to set the DataContext
                // for the DataGrid to be equal to the list.
                BindCsvTransactions(csvTO.CsvTransactions);
                TransactionsPreview.IsEnabled = true;
                MessageBox.Show("The CSV file has a valid header and has been loaded successfully.");
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void BindCsvTransactions(List<CsvTransaction> listYct)
        {
            TransactionsPreview.DataContext = listYct;
        }

    My thinking is to bind the CsvTransaction properties to each DataGridTextColumn in the XAML and then set the DataContext for the DataGrid equal to the List<CsvTransaction> at run time, but this isn't working. Any ideas about how I might approach this (or do it better)?

  • ELMAH - Using custom error pages to collect user feedback

    - by vdh_ant
    Hey guys, I'm looking at using ELMAH for the first time, but have a requirement that needs to be met that I'm not sure how to go about achieving...

    Basically, I am going to configure ELMAH to work under ASP.NET MVC and get it to log errors to the database when they occur. On top of this I will be using customErrors to direct the user to a friendly message page when an error occurs. Fairly standard stuff...

    The requirement is that on this custom error page I have a form which enables the user to provide extra information if they wish. Now the problem arises because at this point the error has already been logged, and I need to associate the logged error with the user's feedback. Normally, if I were using my own custom implementation, after logging the error I would pass the ID of the error through to the custom error page so that the association can be made. But because of the way ELMAH works, I don't think the same is possible. Hence I was wondering how people think one might go about doing this... Cheers.

    UPDATE: My solution to the problem is as follows:

        public class UserCurrentContextUsingWebContext : IUserCurrentContext
        {
            private const string _StoredExceptionName = "System.StoredException.";
            private const string _StoredExceptionIdName = "System.StoredExceptionId.";

            public virtual string UniqueAddress
            {
                get { return HttpContext.Current.Request.UserHostAddress; }
            }

            public Exception StoredException
            {
                get { return HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] as Exception; }
                set { HttpContext.Current.Application[_StoredExceptionName + this.UniqueAddress] = value; }
            }

            public string StoredExceptionId
            {
                get { return HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] as string; }
                set { HttpContext.Current.Application[_StoredExceptionIdName + this.UniqueAddress] = value; }
            }
        }

    Then, when the error occurs, I have something like this in my Global.asax:

        public void ErrorLog_Logged(object sender, ErrorLoggedEventArgs args)
        {
            var item = new UserCurrentContextUsingWebContext();
            item.StoredException = args.Entry.Error.Exception;
            item.StoredExceptionId = args.Entry.Id;
        }

    Then, wherever you are later, you can pull out the details with:

        var item = new UserCurrentContextUsingWebContext();
        var error = item.StoredException;
        var errorId = item.StoredExceptionId;
        item.StoredException = null;
        item.StoredExceptionId = null;

    Note this isn't 100% perfect, as it's possible for the same IP to have multiple requests erroring at the same time, but the likelihood of that happening is remote. And this solution is independent of the session, which in our case is important; also, some errors can cause sessions to be terminated, etc. Hence this approach has worked nicely for us.

  • Algorithm to Render a Horizontal Binary-ish Tree in Text/ASCII form

    - by Justin L.
    It's a pretty normal binary tree, except for the fact that either child of a node may be empty. I'd like to find a way to output it horizontally (that is, the root node is on the left and the tree expands to the right). I've had some experience rendering trees vertically (root node at the top, expanding downwards), but I'm not sure where to start in this case. Preferably, it would follow these rules:

    - If a node has only one child, it can be skipped as redundant (an "end node", with no children, is always displayed).
    - All nodes of the same depth must be aligned vertically; all nodes must be to the right of all less-deep nodes and to the left of all deeper nodes.
    - Nodes have a string representation which includes their depth.
    - Each "end node" has its own unique line; that is, the number of lines is the number of end nodes in the tree, and when an end node is on a line, there may be nothing else on that line after that end node.
    - As a consequence of the last rule, the root node should be in either the top-left or the bottom-left corner; top-left is preferred.

    For example, this is a valid tree, with six end nodes (a node is represented by a name and its depth):

        [a0]------------[b3]------[c5]------[d8]
          \               \          \----------[e9]
           \               \----[f5]
            \--[g1]--------[h4]------[i6]
                 \           \--------------------[j10]
                  \-[k3]

    which represents this horizontal, explicit binary tree:

        0                a
                        / \
        1              g   *
                      / \   \
        2            *   *   *
                    /     \   \
        3          k       *   b
                          /   / \
        4                h   *   *
                        / \   \   \
        5              *   *   f   c
                      /     \     / \
        6            *       i   *   *
                    /           /     \
        7          *           *       *
                  /           /         \
        8        *           *           d
                /           /
        9      *           e
              /
        10   j

    (branches folded for compactness; * representing redundant, one-child nodes; note that the *'s are actual nodes, each storing one child, just with names omitted here for presentation's sake)

    (also, to clarify, I'd like to generate the first, horizontal tree; not this vertical tree)

    I say language-agnostic because I'm just looking for an algorithm; I say ruby because I'm eventually going to have to implement it in Ruby anyway. Assume that each Node data structure stores only its id, a left node, and a right node. A master Tree class keeps track of all nodes and has adequate algorithms to find:

    - a node's nth ancestor
    - a node's nth descendant
    - the generation of a node
    - the lowest common ancestor of two given nodes

    Anyone have any ideas of where I could start? Should I go for the recursive approach? Iterative?
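
    Rule 1 suggests a useful preprocessing pass regardless of which rendering approach follows: collapse every chain of one-child nodes first, so the renderer only ever sees end nodes and two-child branch points, each still remembering its original depth for the column alignment rule 2 demands. A sketch in Python (the Node shape mirrors the id/left/right structure assumed in the question; the depth field is carried along since the string representation needs it):

        class Node:
            def __init__(self, id, depth, left=None, right=None):
                self.id, self.depth = id, depth
                self.left, self.right = left, right

        def fold(node):
            """Skip redundant single-child nodes (rule 1)."""
            while node and (node.left is None) != (node.right is None):
                node = node.left or node.right   # exactly one child: skip it
            if node is None or (node.left is None and node.right is None):
                return node                      # empty slot, or an end node
            return Node(node.id, node.depth, fold(node.left), fold(node.right))

    After folding, rule 4 falls out of a depth-first walk: extend the current line through a node's first child and open a fresh line for its second child, which also puts the root on the top-left line, as rule 5 prefers.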

  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the positions of the characters that make the string unique.

    So for a list of three strings:

        abcd
        abcc
        bbcb

    for the first string I want to identify the character in the 4th position, d, since d does not appear in the 4th position in any other string. For the second string I want to identify the character in the 4th position, c. For the third string I want to identify the character in the 1st position, b, AND the character in the 4th position, also b. This could be concisely represented as:

        abcd -> ...d
        abcc -> ...c
        bbcb -> b..b

    If you consider the same problem but with a list of binary numbers:

        0101
        0011
        1111

    then the result I want would be:

        0101 -> ..0.
        0011 -> .0..
        1111 -> 1...

    Staying with the binary theme, I can use XOR to identify which bits are unique between two binary numbers, since 0101 ^ 0011 = 0110, which I can interpret as meaning that in this case the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless it can somehow be extended to the larger list.

    A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list above I would start with abcd and iterate through vertical slices of

        abcc
        bbcb

    where these vertical slices would be

        a | b | c | c
        b | b | c | b

    or, in list form, "ab", "bb", "cc", "cb". This would result in four comparisons:

        a : ab -> . (a is not unique)
        b : bb -> . (b is not unique)
        c : cc -> . (c is not unique)
        d : cb -> d (d is unique)

    or, concisely:

        abcd -> ...d

    Maybe it's wishful thinking, but I have a feeling there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency weren't a concern, I would use a simple hash of each image.

    Can you improve on brute force?
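
    One standard improvement, sketched here under the question's own assumptions (equal-length strings, unique within the list): tally, for each position, how many strings carry each character there; a character is then unique at a position exactly when its tally is 1. That replaces the pairwise vertical-slice comparisons with one counting pass and one marking pass, O(N*L) over N strings of length L:

        from collections import Counter

        def unique_positions(strings):
            length = len(strings[0])
            # counts[i][c] = how many strings have character c at position i
            counts = [Counter(s[i] for s in strings) for i in range(length)]
            return ["".join(c if counts[i][c] == 1 else "."
                            for i, c in enumerate(s))
                    for s in strings]

        for line in unique_positions(["abcd", "abcc", "bbcb"]):
            print(line)   # ...d  then  ...c  then  b..b

    The same two passes work position-wise on bitmaps: the per-position Counter just becomes a per-pixel histogram.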

  • Android: dynamically setting links to text in strings.xml

    - by Martyn
    I'm trying to make an app with localisation built in, but I want a way to create a web link within the text, with the URL being defined elsewhere (for ease of maintenance). So, I have my links in res/values/strings.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            ...
            <string name="link1">http://some.link.com</string>
            <string name="link2">http://some.link2.com</string>
        </resources>

    and my localised text in res/values-en-rGB/strings.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <resources>
            ...
            <string name="sampleText">Sample text\nMore text and link1\nMore text and link2.</string>
        </resources>

    I've not tested this bit, but the localization section of developer.android.com says that this approach to reducing content duplication should work, although I'm not sure which folder I should put Italian in, for example. Would it be res/values-it-rIT/strings.xml? Let's assume that I have various other languages too.

    I'm looking for a way of taking the base localised sampleText and inserting my links into it, and getting them to work when clicked on. I've tried two approaches so far.

    1. Putting some formatting in sampleText (%s):

        <string name="sampleText">Sample text\nMore text and <a href="%s">link1</a>\nMore text and <a href="%s">link2</a>.</string>

    and then processing the text like this:

        TextView tv = (TextView) findViewById(R.id.textHolder);
        tv.setText(getResources().getString(R.string.sampleText,
                getResources().getString(R.string.link1),
                getResources().getString(R.string.link2)));

    But this didn't work when I clicked on the link, even though the link text was put into the correct places.

    2. I tried to use Linkify, but the regular-expression route may be difficult, as I'm looking at supporting non-Latin-based languages. I tried to put a custom XML tag around the link text and then do something like this:

        Pattern wordMatcher = Pattern.compile("<span1>.*</span1>");
        String viewURL = "content://" + getResources().getString(R.string.someLink);
        Linkify.addLinks(tv, wordMatcher, viewURL);

    But this didn't work either.

    So, I'd like to know if there's a way of dynamically adding multiple URLs to different sections of the same text which will link to web content? Thank you, Martyn

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in customers' environments. There are two types of environments, Windows and Linux/Unix. The application is user based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning: the customer installs my app but uses his own users to grant or deny access to it. Typical, right?

    I have three options to consider, and I need to pick the one that would be (a) flexible enough to cover the most common modern environments, and (b) take the least effort while staying robust and standard.

    Option 1 - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would add his users to my application, and it would then check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option 2 - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows whether that will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there, and the last thing I want is to drown in that voodoo.

    Option 3 - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, and customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user's credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer option 3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?

  • Simple form validation. Object-oriented.

    - by kalininew
    Problem statement: I need to write code that checks, before a form is submitted, whether all required fields are filled in. If not all fields are filled, the empty ones should be highlighted in red and the form should not be sent. The code currently looks like this:

        function formsubmit(formName, reqFieldArr) {
            var curForm = new formObj(formName, reqFieldArr);
            if (curForm.valid)
                curForm.send();
            else
                curForm.paint();
        }

        function formObj(formName, reqFieldArr) {
            var filledCount = 0;
            var fieldArr = new Array();
            for (var i = reqFieldArr.length - 1; i >= 0; i--) {
                fieldArr[i] = new fieldObj(formName, reqFieldArr[i]);
                if (fieldArr[i].filled == true)
                    filledCount++;
            }

            if (filledCount == fieldArr.length)
                this.valid = true;
            else
                this.valid = false;

            this.paint = function() {
                for (var i = fieldArr.length - 1; i >= 0; i--) {
                    if (fieldArr[i].filled == false)
                        fieldArr[i].paintInRed();
                    else
                        fieldArr[i].unPaintInRed();
                }
            };

            this.send = function() {
                document.forms[formName].submit();
            };
        }

        function fieldObj(formName, fName) {
            var curField = document.forms[formName].elements[fName];
            if (curField.value != '')
                this.filled = true;
            else
                this.filled = false;

            this.paintInRed = function() {
                curField.addClassName('red');
            };

            this.unPaintInRed = function() {
                curField.removeClassName('red');
            };
        }

    The function is called like this:

        <input type="button"
               onClick="formsubmit('orderform', ['name', 'post', 'payer', 'recipient', 'good'])"
               value="send" />

    The code works as it is, but I would like to add some "dynamism". I need to keep the initial code essentially intact while adding listeners to the form fields (only the ones required for filling). For example, when a field is highlighted in red and the user starts to fill it in, it should become white again. In essence, I need to add listening for the onchange and blur events on the blank fields of the form. How can I do that within the bounds of this initial code? And if my whole code is complete nonsense, let me know; it seems to me that changing it calls for a more object-oriented approach.

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, versus each repository having its own unique interface and set of methods.

    For example, a generic IRepository interface might look like this (taken from this answer):

        public interface IRepository : IDisposable
        {
            T[] GetAll<T>();
            T[] GetAll<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
            void Delete<T>(T entity);
            void Add<T>(T entity);
            int SaveChanges();
            DbTransaction BeginTransaction();
        }

    Each repository would implement this interface (e.g. CustomerRepository : IRepository, ProductRepository : IRepository, etc.). The alternative, which we followed in prior projects, would be:

        public interface IInvoiceRepository : IDisposable
        {
            EntityCollection<InvoiceEntity> GetAllInvoices(int accountId);
            EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate);
            InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated);
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); // unique
            InvoiceEntity CreateInvoice();
            InvoiceLineEntity CreateInvoiceLine();
            void SaveChanges(InvoiceEntity invoice);  // handles inserts or updates
            void DeleteInvoice(InvoiceEntity invoice);
            void DeleteInvoiceLine(InvoiceLineEntity invoiceLine);
        }

    In the second case, the expressions (LINQ or otherwise) are entirely contained in the repository implementation; whoever is implementing the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Doesn't this mean easy-to-mess-up LINQ code is being duplicated in many cases?

    For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc.). That seems much cleaner than writing the following in multiple places:

        rep.GetSingle(x => x.AccountId == someId && x.InvoiceDate == someDate.Date);

    The only disadvantage I see to using the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the service classes. What am I missing?

  • Delaying emails in PHP to avoid exceeding server limit

    - by Andrew P.
    Okay, so here's my problem: I have a list of members on a website, and periodically one of the admins of my site (who are not very web or tech savvy) will send a newsletter to the member list. My current member list is well over 800 individuals long. So, I wrote an email script that sends the email to the full member list, with the members listed in the Bcc header. However, I've discovered that my host server has a limit of 300 emails per hour, which I apparently exceed even though the members are listed in the Bcc field. (I wasn't previously aware that Bcc behaves this way, sending a separate email for each name on the list...)

    After some thought, I've come to the conclusion that my only solution is to have my script send the email to only the first 300 addresses, wait an hour, send a second email to the next three hundred, wait another hour, and so on until I've sent the email to the whole member list. Looking around on the internet, I've seen some other solutions people have come up with for delaying emails in PHP. sleep() is obviously not an option, because I can't just leave the script open and running for three or four hours. I've seen some people suggest cron jobs, but I'm not sure how feasible it would be to create three new cron jobs every time I send an email, use them once, and then delete them afterwards.

    The final (and what I think is the smartest) solution I've seen is to have a table in my database to temporarily store the emails to be delayed and sent later, and then create a cron job that checks this SQL table every hour or so, compares the timestamp of each row to the current timestamp, and sends the email if an hour has passed.

    So I'm asking you all which method you would recommend. Is there an easier solution that I've completely overlooked (aside from getting a different hosting plan, ha!), or is there a cleaner way to do it than the database/cron job approach?

    tl;dr: I have 800 emails to send in an hour on a server that limits me to 300/hr. Using PHP, find a way to get around this problem in a way that the person sending the email needs only to click "send".
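
    For reference, the queue-table approach amounts to very little code: the "send" click just inserts the 800 rows, and an hourly cron job drains at most 300 of them. Here is a sketch of the drain step, shown in Python with SQLite to keep it self-contained (a PHP cron script would run the same two queries; the table and column names are made up for illustration):

        import sqlite3
        import smtplib
        from email.message import EmailMessage

        HOURLY_LIMIT = 300

        def drain_queue(db_path="mailqueue.db"):
            db = sqlite3.connect(db_path)
            rows = db.execute(
                "SELECT id, recipient, subject, body FROM outbox "
                "WHERE sent_at IS NULL ORDER BY id LIMIT ?", (HOURLY_LIMIT,)
            ).fetchall()
            with smtplib.SMTP("localhost") as smtp:
                for msg_id, recipient, subject, body in rows:
                    msg = EmailMessage()
                    msg["From"] = "[email protected]"  # placeholder address
                    msg["To"], msg["Subject"] = recipient, subject
                    msg.set_content(body)
                    smtp.send_message(msg)
                    # mark each message as it goes out, so a crash mid-run
                    # never re-sends the ones already delivered
                    db.execute("UPDATE outbox SET sent_at = datetime('now') "
                               "WHERE id = ?", (msg_id,))
                    db.commit()
            db.close()

    One permanent cron entry running hourly replaces the create-use-delete cron gymnastics, and the per-row sent_at timestamp doubles as an audit trail.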
