Search Results

Search found 10556 results on 423 pages for 'practical approach'.

  • WPF : Access Application Resources when not referencing Shell from App.xaml

    - by CF_Maintainer
    I am a beginner in WPF. My App.xaml looks like this:

        <Application x:Class="ContactManager.App"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Application.Resources>
            <Color x:Key="lightBlueColor">#FF145E9D</Color>
            <SolidColorBrush x:Key="lightBlueBrush" Color="{StaticResource lightBlueColor}" />
          </Application.Resources>
        </Application>

    I do not set the StartupUri, since I want a presenter-first approach. Instead, I do the following in App.xaml.cs:

        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);
            var appPresenter = new ApplicationPresenter(new Shell(), new ContactRepository());
            appPresenter.LaunchView();
        }

    I have a user control called SearchBar.xaml which references lightBlueBrush as a StaticResource. When I try to open Shell.xaml in the designer, it tells me that Shell.xaml cannot be loaded at design time because it could not create an instance of SearchBar. When I debugged devenv.exe from another Visual Studio instance, it showed that the designer does not have access to the brush I created in Application.Resources.

    If one is doing a presenter-first approach, how does one access application resources at design time? This worked when the StartupUri was Shell.xaml and the startup event handler was not present. Everything works as expected when I run the application; it fails only at design time. Any clues/ideas/suggestions? I am just trying to understand.

    Read the article

  • Aspect-Oriented Programming in the OOP world - breaking the rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation, among them DataAnnotation validation and the Validation Application Block. They use attributes to set up validation rules, like this:

        [Required]
        public string Name { get; set; }

    I was confused about how this approach squares with the SRP (single responsibility principle) from the OOP world. I also don't like business logic in business objects; I prefer a "poor business objects" model. But when I decorate my business objects with validation attributes for real-world requirements, they become ugly (lots of attributes, localization logic mixed in, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether separating the validation rules into XML files or into other objects is the right approach, but maybe it is a solution.

    Another bad side of AOP is the difficulty of unit testing such code. When I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions.

    Do you think that attributes don't break SRP? Or do you just disregard this and consider it the simplest, not the worst, way?

    P.S. I have read some similar articles and discussions; I just want to put things in proper order. P.P.S. Sorry for my "fluent" English. :=)

    Read the article

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on a project where I need to read a lot of data from .dat files and use it to perform simulations. The data in my .dat files looks as follows:

        DeviceID  InteractingDeviceID  InteractionStartTime  InteractionEndTime
        1         2                    1101                  1105

    The fields 1, 2, 1101, and 1105 are tab-delimited: this row means Device 1 started interacting with Device 2 at 1101 ms and ended the interaction at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze them. The first step is to parse the file; the language of choice is C++.

    The approach I was thinking of taking was to read the file and, for every line read, create a Device object. This Device object would contain the property DeviceID and an array/vector of structs listing all the devices the given DeviceID interacted with over the course of the simulation. The struct would contain the interacting device ID, the interaction start time, and the interaction end time.

    I have a two-fold question here: Is my approach correct? And if I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead in C++? A push in the right direction would be much appreciated. Thanks.
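
    A minimal sketch of the approach described above, assuming the four tab-separated integer fields from the sample; the Interaction struct and the map keyed by DeviceID are illustrative names, not from the original post:

        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Interaction {
            int otherId;  // InteractingDeviceID
            int start;    // InteractionStartTime (ms)
            int end;      // InteractionEndTime (ms)
        };

        int main() {
            std::ifstream in("trace.dat");
            std::unordered_map<int, std::vector<Interaction>> devices;

            std::string line;
            std::getline(in, line);  // skip the header row

            while (std::getline(in, line)) {
                std::istringstream fields(line);
                int id, other, start, end;
                if (fields >> id >> other >> start >> end)  // tabs count as whitespace for >>
                    devices[id].push_back({other, start, end});
            }
            std::cout << "devices seen: " << devices.size() << '\n';
        }

    Streaming line by line keeps only the current line plus the accumulated per-device structures in memory, which is about as lean as this layout allows.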

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in long join queries, which forces me to learn about tables outside of my module; I think I should not concern myself with tables not directly related to my module, and should instead use data access functions (written by those responsible for the other modules) when I need data from them.

    Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability, etc.). Now I need to show the product title of the product related to some conversation between the vendor and a customer. I can either write a long query that retrieves the product info along with the conversation data in one shot (which forces me to learn about the Products tables), OR pass the relevant product_id to the get_product_info(int) function.

    The first approach is obviously demanding and introduces many practices I normally consider faults in programming. The problem with the second approach seems to be the countless mini-queries these access functions cause: performance suffers when a loop fetches product titles for 100 products using functions that each run a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way it does things, or for some reason modifies its schema? Other modules would break or malfunction until the change was integrated into them: the usual ripple-effect problem.
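
    On the performance half of the question, a common middle ground keeps the module boundary but widens the accessor so one call fetches many titles at once, turning 100 mini-queries into one. A sketch of the idea, in C++ for concreteness; get_product_titles and run_query are hypothetical names standing in for whatever the Products module actually exposes:

        #include <iostream>
        #include <map>
        #include <sstream>
        #include <string>
        #include <vector>

        // Hypothetical row type and query helper standing in for the
        // Products module's real data access layer.
        struct Row { int id; std::string title; };

        std::vector<Row> run_query(const std::string& sql) {
            (void)sql;                       // stub: pretend we hit the DB once
            return {{1, "Sample product"}};
        }

        // Batched accessor: one query for any number of product ids.
        std::map<int, std::string> get_product_titles(const std::vector<int>& ids) {
            std::map<int, std::string> titles;
            if (ids.empty()) return titles;

            std::ostringstream sql;
            sql << "SELECT product_id, title FROM products WHERE product_id IN (";
            for (size_t i = 0; i < ids.size(); ++i)
                sql << (i ? "," : "") << ids[i];
            sql << ")";

            for (const Row& r : run_query(sql.str()))
                titles[r.id] = r.title;
            return titles;
        }

        int main() {
            auto titles = get_product_titles({1, 2, 3});
            std::cout << titles[1] << '\n';  // "Sample product"
        }

    The Products module still owns its tables and schema; callers only learn that a batched accessor exists, so schema changes stay confined behind the function.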

    Read the article

  • Backbone.js or Knockout.js as a web framework with jQuery Mobile

    - by Dan
    Without trying to cause a mass discussion, I would like some advice from fellow users of Stack Overflow. I am about to start building a mobile website that gets its data as JSON from a PHP REST API. I have looked into different mobile frameworks and feel that jQuery Mobile will work best for us, since we already know jQuery, even though it is a little large.

    At work we currently use jQuery for all our sites, and now that we are building a mobile website I need to think about JavaScript frameworks that move us to a more MV* approach, which I understand the benefits of; it will bring much-needed structure to this mobile site and to future web applications. I have made a comparison table and narrowed the selection down to two: Backbone and Knockout. Looking around the web, there seems to be more support for Backbone in general, and maybe even more support for Backbone with jQuery Mobile: http://jquerymobile.com/test/docs/pages/backbone-require.html

    One thing I have noticed, however, is that Backbone doesn't support declarative view bindings, whereas Knockout does. Is this a massive bonus? One of the main reasons for adopting an MV* library is to get more structure, so I would like to use the library that integrates best with jQuery and especially jQuery Mobile; neither of them seems to have particularly similar syntax. Thanks.

    Read the article

  • OO Objective-C design with XML parsing

    - by brainfsck
    Hi, I need to parse an XML record that represents a QuizQuestion. The "type" attribute gives the type of question; I then need to create the appropriate subclass of QuizQuestion based on that type. The following code works ([auto]release statements omitted for clarity):

        QuizQuestion *question = [[QuizQuestion alloc] initWithXMLString:xml];
        if( [[question type] isEqualToString:@"multipleChoiceQuestion"] ) {
            [myQuestions addObject:[[MultipleChoiceQuizQuestion alloc] initWithXMLString:xml]];
        }

        //QuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            self.type = ...  // parse "type" attribute from xml
            // parse the rest of the xml
        }

        //MultipleChoiceQuizQuestion.m
        -(id)initWithXMLString:(NSString*)xml {
            if( self = [super initWithXMLString:xml] ) {
                // multiple-choice stuff
            }
        }

    Of course, this means the XML is parsed twice: once to find out the type of QuizQuestion, and once when the appropriate QuizQuestion is initialized. To avoid parsing twice, I tried the following approach:

        // MultipleChoiceQuizQuestion.m
        -(id)initWithQuizRecord:(QuizQuestion*)record {
            self = record;  // record has already parsed the "type" and other parameters
            // multiple-choice stuff
        }

    However, this fails because of the "self = record" assignment; whenever the MultipleChoiceQuizQuestion tries to call an instance method, the method is dispatched to the QuizQuestion class instead. Can someone tell me the correct approach for parsing XML into the appropriate subclass, when the parent class has to be initialized before we know which subclass is appropriate?
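
    The usual escape from both the double parse and the "self = record" trap is a class factory: parse the XML once into a neutral record, read the type, and hand the record to the matching subclass initializer. The shape of that pattern, sketched here in C++ rather than Objective-C; ParsedRecord and parseXml are hypothetical stand-ins for the real parsing code:

        #include <iostream>
        #include <memory>
        #include <string>

        // Stand-in for a parsed record; a real one would hold all parsed fields.
        struct ParsedRecord { std::string type; std::string body; };

        ParsedRecord parseXml(const std::string& xml) {
            // Hypothetical: a real implementation parses the "type" attribute etc.
            if (xml.find("multipleChoiceQuestion") != std::string::npos)
                return {"multipleChoiceQuestion", xml};
            return {"genericQuestion", xml};
        }

        class QuizQuestion {
        public:
            explicit QuizQuestion(const ParsedRecord& r) : type_(r.type) {}
            virtual ~QuizQuestion() = default;
            const std::string& type() const { return type_; }

            // Factory: parse once, then dispatch on the type attribute.
            static std::unique_ptr<QuizQuestion> fromXml(const std::string& xml);
        private:
            std::string type_;
        };

        class MultipleChoiceQuizQuestion : public QuizQuestion {
        public:
            explicit MultipleChoiceQuizQuestion(const ParsedRecord& r)
                : QuizQuestion(r) { /* multiple-choice extras parsed from r */ }
        };

        std::unique_ptr<QuizQuestion> QuizQuestion::fromXml(const std::string& xml) {
            ParsedRecord rec = parseXml(xml);           // single parse
            if (rec.type == "multipleChoiceQuestion")
                return std::make_unique<MultipleChoiceQuizQuestion>(rec);
            return std::make_unique<QuizQuestion>(rec);
        }

        int main() {
            auto q = QuizQuestion::fromXml("<question type='multipleChoiceQuestion'/>");
            std::cout << q->type() << '\n';
        }

    Each subclass initializer receives the already-parsed record, so nothing is parsed twice and no object reassigns its own identity.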

    Read the article

  • Nested parsers in happy / infinite loop?

    - by McManiaC
    I'm trying to write a parser for a simple markup language with Happy. Currently I'm having issues with infinite loops and nested elements. My markup language basically consists of two elements, one for "normal" text and one for bold/emphasized text:

        data Markup = MarkupText String
                    | MarkupEmph [Markup]

    For example, the text Foo *bar* should parse as [MarkupText "Foo ", MarkupEmph [MarkupText "bar"]]. Lexing of that example works fine, but parsing it results in an infinite loop, and I can't see why. This is my current approach:

        -- The main parser: parsing a list of "Markup"
        Markups :: { [Markup] }
            : Markups Markup  { $1 ++ [$2] }
            | Markup          { [$1] }

        -- One single markup element
        Markup :: { Markup }
            : '*' Markups1 '*'  { MarkupEmph $2 }
            | Markup1           { $1 }

        -- The nested list inside *..*
        Markups1 :: { [Markup] }
            : Markups1 Markup1  { $1 ++ [$2] }
            | Markup1           { [$1] }

        -- Markup which is always available:
        Markup1 :: { Markup }
            : String  { MarkupText $1 }

    What's wrong with that approach? How can it be resolved?

    Update: Sorry, lexing wasn't working as expected after all; the infinite loop was inside the lexer. :)

    Update 2: On request, this is my lexer:

        lexer :: String -> [Token]
        lexer [] = []
        lexer str@(c:cs)
            | c == '*'  = TokenSymbol "*" : lexer cs
            -- ...more rules...
            | otherwise = TokenString val : lexer rest
            where (val, rest) = span isValidChar str
                  isValidChar = (/= '*')

    The infinite recursion occurred because I had lexer str instead of lexer cs in the first rule for '*'. I didn't see it because my actual code was a bit more complex. :)

    Read the article

  • Formatting Parameters for Ajax POST request to Rails Controller - for jQuery-UI sortable list

    - by Hung Luu
    I'm using jQuery UI's sortable with connected lists and saving the order of the lists to a Rails server. My approach is to grab the list ID, column ID, and index position of each list item, then wrap these into an object that can be passed as a parameter back to the Rails controller and saved into the database. Ideally, I'm looking to send parameters formatted like this:

        Parameters: {"Activity"=>[{id:1,column:2,position:1},{id:2,column:2,position:2},...]}

    How do I properly format my parameters for this Ajax POST request? Right now, with the approach below, the server receives:

        Parameters: {"undefined"=>""}

    This is my current jQuery code (CoffeeScript), which doesn't work:

        jQuery ->
          $('[id*="day"]').sortable(
            connectWith: ".day"
            placeholder: "ui-state-highlight"
            update: (event, ui) ->
              neworder = new Array()
              $('[id*="day"] > li').each ->
                column = $(this).attr("id")
                index = ui.item.index() + 1
                id = $("#" + column + " li:nth-child(" + index + ") ").attr('id')
                passObject = {}
                passObject.id = id
                passObject.column = column
                passObject.index = index
                neworder.push(passObject)
              alert neworder
              $.ajax
                url: "sort"
                type: "POST"
                data: neworder
          ).disableSelection()

    My apologies, because this seems like a really amateur question, but I'm just getting started with jQuery and JavaScript.

    Read the article

  • Bespoke Development or Leverage SharePoint With Web Parts etc?

    - by Asim
    Hi all, we are currently drawing up a solution for an existing client, creating a number of eServices. The client currently has MOSS 2007, and the proposed solution uses MOSS as the launching pad for the eServices. The requirement involves several online forms which provide registration facilities as well as a workflow of some sort. I have been told the solution requires complex web forms; most are complex forms with parent-child details and multiple windows.

    The proposal is to do bespoke development of ASP.NET forms, deployed under the _layouts folder of the current MOSS portal and inheriting the master page design of the current site. I have been told this approach makes development and deployment simpler, as well as having "complete integration" with MOSS. My questions are:

    Is this the best way to leverage SharePoint? It seems like the proposed solution is not leveraging MOSS at all! I thought utilizing Web Parts would be better, but I have been told that this is more complex and that developing a smart, intuitive UI is more difficult that way. Is this really the case? If not, what would be the recommended approach?

    We will be utilizing Ultimus as the workflow engine; however, K2 workflows have been recommended to me. Has anyone used both, or have opinions on either?

    Many thanks in advance! Kind regards,

    Read the article

  • Retrieving object information with Doctrine

    - by ajsie
    I want to fetch information from the database using objects. I really like this approach because it is more OOP:

        $user = Doctrine_Core::getTable('User')->find(1);
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]->phonenumber;

    rather than:

        $q = Doctrine_Query::create()
            ->from('User u')
            ->leftJoin('u.Email e')
            ->leftJoin('u.Phonenumbers p')
            ->where('u.id = ?', 1);
        $user = $q->fetchOne();
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]['phonenumber'];

    The problem is that the first one issues 3 queries (3 different tables), while the second one issues only 1 (and is therefore the recommended technique). But I feel that it destroys the object-oriented design: ORM is meant to give us an OOP approach so that we can focus on objects and not on the relational database, and now we are asked to go back to an SQL-like pattern. Isn't there a way to get information from multiple tables without using DQL? The above examples are taken from the Doctrine documentation.

    Read the article

  • How to connect to Oracle using JRuby & JDBC

    - by Rob
    First approach: bare metal.

        require 'java'
        require 'rubygems'
        require "c:/ruby/jruby-1.2.0/lib/ojdbc14.jar"  # should be redundant, but tried it anyway

        odriver = Java::JavaClass.for_name("oracle.jdbc.driver.OracleDriver")
        puts odriver.java_class

        url = "jdbc:oracle:thin:@myhost:1521:mydb"
        puts "About to connect..."
        con = java.sql.DriverManager.getConnection(url, "myuser", "mypassword");
        if con
          puts " connection good"
        else
          puts " connection failed"
        end

    The result of the above is:

        sqltest.rb:4: cannot load Java class oracle.jdbc.driver.OracleDriver (NameError)

    Second approach: ActiveRecord.

        require 'rubygems'
        gem 'ActiveRecord-JDBC'
        require 'jdbc_adapter'
        require 'active_record'
        require 'active_record/version'
        require "c:/ruby/jruby-1.2.0/lib/ojdbc14.jar"  # should be redundant...

        ActiveRecord::Base.establish_connection(
          :adapter  => 'jdbc',
          :driver   => 'oracle.jdbc.driver.OracleDriver',
          :url      => 'jdbc:oracle:thin:@myhost:1521:mydb',
          :username => 'myuser',
          :password => 'mypassword'
        )
        ActiveRecord::Base.connection.execute("SELECT * FROM mytable")

    The result of this is:

        C:/ruby/jruby-1.2.0/lib/ruby/gems/1.8/gems/activerecord-jdbc-adapter-0.9.1/lib/active_record/connection_adapters/jdbc_adapter.rb:330:in `initialize': The driver encountered an error: cannot load Java class oracle.jdbc.driver.OracleDriver (RuntimeError)

    Essentially the same error no matter how I go about it. I'm using JRuby 1.2.0 and I have ojdbc14.jar in my JRuby lib directory. Gems: ActiveRecord-JDBC (0.5), activerecord-jdbc-adapter (0.9.1), activerecord (2.2.2). What am I missing? Thanks.

    Read the article

  • With Google Website Optimizer's multivariate testing, can I vary multiple CSS classes on a single div?

    - by brahn
    I would like to use Google Website Optimizer (GWO)'s multivariate tests to test some different versions of a web page. I can change from version to version just by varying some classes on a div, i.e. the different versions are of this form:

        <div id="testing" class="foo1 bar1">content</div>
        <div id="testing" class="foo1 bar2">content</div>
        <div id="testing" class="foo2 bar1">content</div>
        <div id="testing" class="foo2 bar2">content</div>

    Ideally, I would be able to use a GWO section code in place of each class, and Google would just swap in the appropriate tags (foo1 or foo2, bar1 or bar2). However, doing this naively results in horribly malformed code, because it puts <script> tags inside the div's class attribute:

        <div id="testing" class="
            <script>utmx_section("foo-class")</script>foo1</noscript>
            <script>utmx_section("bar-class")</script>bar1</noscript>
        ">
        content
        </div>

    And indeed, the browser chokes all over it. My current best approach is to use a different div for each variable in the test, as follows:

        <script>utmx_section("foo-class-div")</script>
        <div class="foo1"> </noscript>
        <script>utmx_section("bar-class-div")</script>
        <div class="bar1"> </noscript>
        content
        </div>
        </div>

    So testing multiple variables requires a layer of div nesting per variable, and it all seems rather awkward. Is there a better approach in which I just vary the classes on a single div?

    Read the article

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static-HTML road to solve our performance issues, but since the site will be partially dynamic, this won't work for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching the most-used data. Here are the two approaches we are considering right now:

    1. Use memcached for all major queries and leave it alone to do what it does best.
    2. Use memcached for the most commonly retrieved data, combined with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, despite the theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to add nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcached alone and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
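
    The combined approach (option 2) boils down to a read-through chain: try the fast tier, fall through to the slow tier, promote on a hit, and only touch the database on a full miss. A minimal sketch of that lookup order, with plain in-memory maps standing in for memcached and the disk cache (no real client APIs are shown here):

        #include <functional>
        #include <iostream>
        #include <string>
        #include <unordered_map>

        // Stand-ins for the real tiers: tier1 ~ memcached, tier2 ~ disk cache.
        std::unordered_map<std::string, std::string> tier1, tier2;

        std::string readThrough(const std::string& key,
                                const std::function<std::string()>& loadFromDb) {
            if (auto it = tier1.find(key); it != tier1.end())
                return it->second;               // fast hit: memory tier
            if (auto it = tier2.find(key); it != tier2.end()) {
                tier1[key] = it->second;         // promote to the fast tier
                return it->second;
            }
            std::string value = loadFromDb();    // full miss: hit the database
            tier2[key] = value;                  // populate both tiers
            tier1[key] = value;
            return value;
        }

        int main() {
            auto v = readThrough("user:42", [] { return std::string("from db"); });
            std::cout << v << '\n';
        }

    The nice property of this shape is that the memory tier can stay small: evicting from it costs only a disk-cache hit, not a database query.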

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest I found was: Syncing objects between two disparate systems, best approach. Anyway, to begin: because no RSS feed is available, I'm screen-scraping a webpage. I fetch the page, scrape out all of the information I'm interested in, and dump it into a SQLite database so that I can query the information at my leisure without repeatedly fetching from the website.

    However, I'm also storing various metadata on the data itself in the SQLite db, such as: whether I have looked at the data; whether the data is new or old; and a bookmark into a chunk of data (think of the data as a collection of unrelated pieces, where the bookmark is just a pointer to where I am in processing/reading it).

    My current problem is figuring out how to update the local SQLite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself.
    2. Create a temporary table for the parsed data to go into.
    3. Compare the official and temporary tables, and copy updates and/or new information to the official table.

    This process seems complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering whether there is a better approach, or whether anyone has any suggestions on how to architect/structure such a system?

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in PHP designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to the table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash, and the fields (columns) are separated by commas. For the sake of simplicity, assume we have a table representing people, with the fields First Name, Last Name, and Occupation. One line of the file might then be as follows:

        [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..."

    where the ellipsis (...) could be additional people. One straightforward approach would be to use fgets() to extract a line from the file and preg_match() to extract the table name, records, and fields from that line. However, suppose we have an awful lot of Star Wars characters to track: so many, in fact, that this line ends up being 200,000+ characters/bytes long. In that case, taking the above approach seems inefficient: you first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches.

    Is there a way, similar to the Java Scanner class's next(String pattern) method constructed over a file, to match patterns in-line while scanning through the file? The idea is that you don't have to scan the same text twice (once to read it from the file into a string, and once to match patterns) or store the text redundantly in memory (in both the line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
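
    Independent of PHP, the underlying streaming technique is: read fixed-size chunks, scan each chunk for the record delimiter, and carry the unfinished tail over to the next chunk, so the full 200,000-character line never has to be held whole and then re-scanned. A sketch of that carry-over loop, in C++ for concreteness, with the actual record handling left as a stub:

        #include <fstream>
        #include <iostream>
        #include <string>

        void handleRecord(const std::string& rec) {
            // Stub: split rec on commas into fields, insert into MySQL, etc.
            std::cout << "record of " << rec.size() << " bytes\n";
        }

        int main() {
            std::ifstream in("people.dat", std::ios::binary);
            std::string carry;                 // unfinished record from the last chunk
            char chunk[64 * 1024];

            while (in.read(chunk, sizeof chunk) || in.gcount() > 0) {
                carry.append(chunk, static_cast<size_t>(in.gcount()));
                size_t pos, start = 0;
                while ((pos = carry.find('\\', start)) != std::string::npos) {
                    handleRecord(carry.substr(start, pos - start));  // complete record
                    start = pos + 1;
                }
                carry.erase(0, start);         // keep only the unfinished tail
            }
            if (!carry.empty()) handleRecord(carry);  // trailing record, if any
        }

    Peak memory here is one chunk plus the longest single record, not the longest line, which is the property the question is after.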

    Read the article

  • Loading enumerations from database

    - by Mosh
    Hello, I have a problem with mapping .NET enumerations to database tables. Imagine I have a table called Statuses with the following values:

        StatusID | Name
        1        | Draft
        2        | Ready
        ...      | ...

    In the application layer, I can use a repository to get all statuses as an IList object. The problem with this approach is that I cannot reference a particular status in my business logic. For example, how can I implement something like this?

        if (myObject.Status is Ready)
            // do this
        else if (myObject.Status is Draft)
            // do that...

    Since the statuses are loaded dynamically, I cannot tell for sure which Status object in the list represents the Draft or Ready status. Alternatively, I could just use an enumeration like:

        public enum Statuses { Draft, Ready };

    Then I could easily use an enumeration in my business logic:

        if (myObject.Status == Statuses.Draft)
            // do something...

    The problem with this approach is that every time a user wants to modify the list of statuses (adding a new status, or renaming an existing one), the application has to be recompiled; we cannot load the statuses dynamically from the database. Has anyone else come across a similar situation? Any solutions/patterns? Cheers, Mosh
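
    One compromise that keeps both sides: the statuses the business logic actually branches on live in an enum (they are code, not data), while the table supplies display names and any extra statuses no logic references. A sketch of the mapping layer that idea needs, in C++ for illustration; statusFromName and the Unknown member are assumptions, not part of the original question:

        #include <iostream>
        #include <string>
        #include <unordered_map>

        // Statuses the business logic branches on; anything else is data-only.
        enum class Status { Draft, Ready, Unknown };

        // Maps the stable key column to the enum; unrecognized rows fall
        // through to Unknown instead of breaking the application.
        Status statusFromName(const std::string& name) {
            static const std::unordered_map<std::string, Status> known = {
                {"Draft", Status::Draft},
                {"Ready", Status::Ready},
            };
            auto it = known.find(name);
            return it == known.end() ? Status::Unknown : it->second;
        }

        int main() {
            // Row loaded from the Statuses table at runtime:
            std::string nameFromDb = "Draft";

            if (statusFromName(nameFromDb) == Status::Draft)
                std::cout << "do the draft thing\n";
        }

    To survive user renames, key the map on a dedicated stable code column rather than the editable display name; then renames only touch display text.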

    Read the article

  • Calculating rotation in > 360 deg. situations

    - by danglebrush
    I'm trying to work out a problem I'm having with degrees. I have data that is a list of angles in standard degree notation, e.g. 26 deg. Usually, when dealing with angles, an angle that exceeds 360 deg effectively "resets" and starts again from zero, e.g. 357 deg, 358 deg, 359 deg, 0 deg, 1 deg, etc.

    What I want instead is for the degrees to keep increasing, i.e. 357 deg, 358 deg, 359 deg, 360 deg, 361 deg, etc., and I want to modify my data so that it holds this converted form. When numbers approach the 0 deg limit from above, I want them to become negative, i.e. 3 deg, 2 deg, 1 deg, 0 deg, -1 deg, -2 deg, etc. Across multiples of 360 deg (both positive and negative), the degrees should just continue, e.g. 720 deg and beyond.

    Any suggestions on what approach to take? There is, no doubt, a frustratingly simple way of doing this, but my current solution is kludgy to say the least! My best attempt to date looks at the difference between angle n and angle n-1. If this is a large difference, e.g. 60%, then it needs to be modified by adding or subtracting 360 deg to the current value, depending on the previous angle: subtract 360 if the previous angle is negative, add 360 if it is positive. Any suggestions for improving this?
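
    The standard trick here is phase unwrapping: keep a running offset and, whenever the raw reading jumps by more than 180 deg between consecutive samples, add or subtract 360 deg so the sequence stays continuous. A minimal sketch, assuming consecutive readings are less than half a turn apart:

        #include <iostream>
        #include <vector>

        // Unwrap raw 0-359 readings into a continuous angle that can pass 360
        // or go negative, matching the sequences described above.
        std::vector<double> unwrapDegrees(const std::vector<double>& raw) {
            std::vector<double> out;
            double offset = 0.0;
            for (size_t i = 0; i < raw.size(); ++i) {
                if (i > 0) {
                    double jump = raw[i] - raw[i - 1];
                    if (jump < -180.0) offset += 360.0;  // wrapped forwards past 359
                    if (jump >  180.0) offset -= 360.0;  // wrapped backwards past 0
                }
                out.push_back(raw[i] + offset);
            }
            return out;
        }

        int main() {
            for (double d : unwrapDegrees({357, 358, 359, 0, 1, 2}))
                std::cout << d << ' ';          // 357 358 359 360 361 362
            std::cout << '\n';
            for (double d : unwrapDegrees({3, 2, 1, 0, 359, 358}))
                std::cout << d << ' ';          // 3 2 1 0 -1 -2
            std::cout << '\n';
        }

    Compared with the percentage test in the question, the fixed 180 deg threshold is scale-free and handles values near zero cleanly.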

    Read the article

  • What is the best way to detect white color?

    - by dnul
    I'm trying to detect white objects in a video. The first step is to filter the image so that only white pixels remain. My first approach was to use the HSV color space and check for a high level in the VAL channel. Here is the code:

        //convert image to hsv
        cvCvtColor( src, hsv, CV_BGR2HSV );
        cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
        for(int x=0;x<srcSize.width;x++){
            for(int y=0;y<srcSize.height;y++){
                uchar * hue=&((uchar*) (h_plane->imageData+h_plane->widthStep*y))[x];
                uchar * sat=&((uchar*) (s_plane->imageData+s_plane->widthStep*y))[x];
                uchar * val=&((uchar*) (v_plane->imageData+v_plane->widthStep*y))[x];
                if((*val>170))
                    *hue=255;
                else
                    *hue=0;
            }
        }

    leaving the result in the hue channel. Unfortunately, this approach is very sensitive to lighting. I'm sure there is a better way. Any suggestions?
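
    Staying with HSV, thresholding on both low saturation and high value is usually more robust than value alone, since white is "bright and colorless", and cv::inRange does the per-pixel test in one call. A sketch using the OpenCV C++ API; the exact thresholds are guesses to be tuned per scene:

        #include <opencv2/opencv.hpp>

        int main() {
            cv::Mat src = cv::imread("frame.png");
            cv::Mat hsv, mask;

            cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);

            // White = any hue, low saturation, high value.
            // Thresholds (sat < 40, val > 170) are starting points, not constants.
            cv::inRange(hsv,
                        cv::Scalar(0, 0, 170),     // lower bound: H, S, V
                        cv::Scalar(180, 40, 255),  // upper bound
                        mask);

            cv::imwrite("white_mask.png", mask);
        }

    For varying illumination, normalizing or histogram-equalizing the V channel first, or thresholding V relative to the frame's mean brightness, reduces the lighting sensitivity further.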

    Read the article

  • In .NET, how do I prevent, or handle, tampering with form data of disabled fields before submission?

    - by David
    Hi, if a disabled drop-down list is dynamically rendered to the page, it is still possible to use Firebug, or another tool, to remove the "disabled" HTML attribute and tamper with the submitted value. This code:

        protected override void OnLoad(EventArgs e)
        {
            var ddlTest = new DropDownList() { ID = "ddlTest", Enabled = false };
            ddlTest.Items.AddRange(new [] {
                new ListItem("Please select", ""),
                new ListItem("test 1", "1"),
                new ListItem("test 2", "2")
            });
            Controls.Add(ddlTest);
        }

    results in this HTML being rendered:

        <select disabled="disabled" id="Properties_ddlTest" name="Properties$ddlTest">
            <option value="" selected="selected">Please select</option>
            <option value="1">test 1</option>
            <option value="2">test 2</option>
        </select>

    The problem occurs when I use Firebug to remove the disabled attribute and change the selected option. On submission of the form, and re-creation of the field, the newly generated control has the correct value by the end of OnLoad, but by OnPreRender it has assumed the identity of the submitted control and been given the submitted form value. .NET seems to have no way of detecting that the field was originally created in a disabled state and that the submitted value was faked. This is understandable, as there could be legitimate client-side functionality that removes the disabled attribute.

    Is there some way, other than a brute-force approach, of detecting that this field's value should not have been changed? The brute-force approach would be something crude, like saving the correct value somewhere during OnLoad and restoring it in OnPreRender. As some fields have dependencies on others, that would be unacceptable to me.

    Read the article

  • How to keep confirmation messages after POST while doing a post-submit redirect?

    - by MicE
    Hello, I'm looking for advice on how to share certain bits of data (i.e. post-submit confirmation messages) between individual requests in a web application. Let me explain our current approach:

    1. The user submits an add/edit form for a resource.
    2. If there were no errors, the user is shown a confirmation page with links to: submit a new resource (for the "add" form); view the submitted/edited resource; view all resources (one step above in the hierarchy).
    3. The user then has to click one of the three links to proceed.

    Programmatically, the form and its confirmation page are one set of classes; the page above them is another. They can technically share code, but at the moment they are independent during the processing of individual requests. We would like to amend the flow as follows:

    1. The user submits an add/edit form for a resource.
    2. If there were no errors, the user is redirected to the page with all resources (one step above in the hierarchy), with one or more confirmation messages displayed at the top of the page (i.e. a success message, to whom the request was assigned, etc.).

    This will save users one click (they have to go through a lot of these add/edit forms), and the post-submit redirect will address the common problems with the browser's refresh and back buttons.

    What approach would you recommend for sharing the data needed for the confirmation messages between the two requests? I'm not sure if it helps: it's a PHP application backed by a RESTful API, but I think this is a language-agnostic question. A few simple solutions that come to mind are sharing the data via cookies or the session; this however breaks statelessness and would pose a significant problem for users who work in several tabs (the data could clash). Passing the data as GET parameters is not suitable, as we are talking about several messages which are dynamic (e.g. changing actors, dates). Thanks, M.

    Read the article

  • Using Ant how can I dex a directory of jars?

    - by cbeaudin
    I have a directory full of jars (Felix bundles). I want to iterate through all of these jars and create dex'd versions. My intent is to deploy each of these dex'd jars as a standalone apk, since they are bundles. Feel free to straighten me out if I am approaching this from the wrong direction.

    This first part is just meant to create a corresponding .dex file for each jar. However, when I run it I get a "no resources specified" error from Ant. Is this the right approach, or is there a simpler way to take a jar as input and output a dex'd version of it? The file parameter is valid, as the echo command prints the name of the file correctly.

        <target name="dexBundles" description="Run dex on all the bundles">
            <taskdef resource="net/sf/antcontrib/antlib.xml"
                     classpath="${basedir}/libs/ant-contrib.jar" />
            <echo>Starting</echo>
            <for param="file">
                <path>
                    <fileset dir="${pre.dex.dir}">
                        <include name="**/*.jar" />
                    </fileset>
                </path>
                <sequential>
                    <echo message="@{file}" />
                    <echo>Converting jar file @{file} into ${post.dex.dir}/@{file}.class...</echo>
                    <apply executable="${dx}" failonerror="true" parallel="true" verbose="true">
                        <arg value="--dex" />
                        <arg value="--output=${post.dex.dir}/${file}.dex" />
                        <arg path="@{file}" />
                    </apply>
                </sequential>
            </for>
            <echo>Finished</echo>
        </target>

    Read the article

  • Google Maps API and "rightclick" events on Macs

    - by samc
    Using the Google Maps API (v3), I can create a map and handle normal click events just fine, but when I want to handle rightclick events, it doesn't work on Macs. I assume this is because a right-click on a Mac is actually delivered as a ctrl-click, and the Google Maps API MouseEvent doesn't provide information about modifier keys, so I can't check for the ctrl key. I tried adding a "capture" event listener to the document that converts the click event to a rightclick event:

        function convertClick(e) {
            if (e.ctrlKey) {
                e.button = 2;
            }
        }
        document.addEventListener("click", convertClick, true)

    I added an alert to verify that the condition is correct, but modifying the event this way didn't work. So I decided to have my event handler set a global flag that my click handler could check; if the flag is set, ctrl was pressed, so the click handler just invokes the rightclick handler:

        var ctrl;
        function captureCtrl(e) {
            ctrl = e.ctrlKey;
        }

    This approach worked great, except for one thing: the ctrl flag gets set for the click after the one that occurred while ctrl was pressed. That means the event handler is being called during the bubble phase rather than the capture phase, which could also explain why the event-modification approach didn't work.

    So, my question is: how can you detect "rightclick" events from Macs with the Google Maps API? I can't be the first person to want to do this. That said, when I right-click on the map at http://maps.google.com from a Windows or Linux machine, I get a popup box with options like "Directions from here...", etc.; on a Mac, nothing happens. So not even the main Google Maps page has solved this problem... maybe I am the first person to want to do this.

    Read the article

  • How to deal with Unicode strings in C/C++ in a cross-platform friendly way?

    - by Sorin Sbarnea
    On platforms other than Windows, you can easily use char* strings and treat them as UTF-8. The problem is that on Windows you are required to accept and send messages using wchar_t* strings (the W functions); if you use the ANSI functions (A), you will not support Unicode. So if you want to write a truly portable application, you need to compile it as Unicode on Windows.

    Now, in order to keep the code clean, I would like to see the recommended way of dealing with strings, one that minimizes ugliness in the code. Types of strings you may need: std::string, std::wstring, std::tstring, char*, wchar_t*, TCHAR*, CString (the ATL one). Issues you may encounter:

    - cout/cerr/cin and their wide variants wcout, wcerr, wcin
    - all the renamed wide-string functions and their TCHAR macros, like strcmp, wcscmp, and _tcscmp
    - constant strings inside code: with TCHAR you have to fill your code with _T() macros

    What approach do you see as best? (Examples are welcome.) Personally, I would go for an std::tstring approach, but I would like to see how you would handle the conversions where they are necessary.
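
    One widely used answer is the "UTF-8 everywhere" style: keep every internal string a UTF-8 std::string and convert only at the Windows API boundary, which confines wchar_t and the W functions to one thin wrapper. A sketch of the two boundary conversions (Win32-only, hence the #ifdef; error handling trimmed for brevity):

        #include <string>

        #ifdef _WIN32
        #include <windows.h>

        // UTF-8 -> UTF-16, for passing strings into W-suffixed Win32 calls.
        std::wstring widen(const std::string& utf8) {
            int n = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, nullptr, 0);
            std::wstring out(n ? n - 1 : 0, L'\0');   // n counts the terminator
            MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, out.data(), n);
            return out;
        }

        // UTF-16 -> UTF-8, for results coming back from Win32.
        std::string narrow(const std::wstring& utf16) {
            int n = WideCharToMultiByte(CP_UTF8, 0, utf16.c_str(), -1,
                                        nullptr, 0, nullptr, nullptr);
            std::string out(n ? n - 1 : 0, '\0');
            WideCharToMultiByte(CP_UTF8, 0, utf16.c_str(), -1,
                                out.data(), n, nullptr, nullptr);
            return out;
        }

        // Usage at the boundary, e.g.:
        //   MessageBoxW(nullptr, widen(title).c_str(), L"demo", MB_OK);
        #endif

    Compared with the TCHAR/std::tstring route, this keeps all portable code on one string type and avoids spreading _T() macros through it; the cost is two small conversion helpers on Windows.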

    Read the article

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the test-driven development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach let us relax when coding, relax when designing, and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process; it allowed us, eventually, to get the same result with fewer brain cells working.

    Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually found TDD there to be a step back in terms of rapid application development. There were moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them, and swearing at my monitor.

    I obviously can't ask "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe. But if there are people who are used to TDD in C++ (and probably C# too), could you please advise me on how to do this properly? Framework recommendations, architecture approaches, plain coding advice: if you are experienced in TDD and C++, please respond.
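
    On the framework side, Google Test itself stays lightweight when the tests target plain functions and small interfaces; mocks are usually only worth building where a real collaborator is slow or nondeterministic. A minimal Google Test example, with a trivial clamp function standing in for real code under test:

        #include <gtest/gtest.h>

        // Function under test (would normally live in its own header).
        int clamp(int value, int lo, int hi) {
            if (value < lo) return lo;
            if (value > hi) return hi;
            return value;
        }

        TEST(ClampTest, PassesThroughValuesInRange) {
            EXPECT_EQ(5, clamp(5, 0, 10));
        }

        TEST(ClampTest, ClampsAtBothEnds) {
            EXPECT_EQ(0, clamp(-3, 0, 10));
            EXPECT_EQ(10, clamp(42, 0, 10));
        }

        int main(int argc, char** argv) {
            ::testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    Linked against gtest, this runs with no fixture or mock machinery at all; Google Mock can then be added only at the seams that genuinely need it.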

    Read the article
