Search Results

Search found 1666 results on 67 pages for 'practical joke'.

Page 55/67 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • HTML 4, HTML 5, XHTML, MIME types - the definitive resource

    - by deceze
    The topics of HTML vs. XHTML, and XHTML served as text/html vs. XHTML served as XHTML, are quite complex. Unfortunately it's hard to get a complete picture, since information is spread mostly in bits and pieces around the web or is buried deep in W3C tech jargon. In addition, there's some misinformation being circulated. I propose to make this the definitive SO resource on the topic, describing the most important aspects of: HTML 4; HTML 5; XHTML 1.0/1.1 as text/html; XHTML 1.0/1.1 as XHTML. What are the practical implications of each? What are common pitfalls? What is the importance of proper MIME types for each? How do different browsers handle them? I'd like to see one answer per technology. I'm making this a community wiki, so rather than contributing redundant answers, please edit the existing answers to complete the picture. Feel free to start with stubs, and feel free to edit this question.
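
    One practical point worth pinning down early: browsers only treat XHTML as XML when it arrives as application/xhtml+xml, and older IE (through IE 8) refuses that type outright, which is why servers typically content-negotiate on the Accept header. A hedged sketch of that negotiation, shown as an ASP.NET IHttpHandler purely for illustration (any server-side stack can do the same check):

        using System.Web;

        // Illustrative only: serve application/xhtml+xml to clients that
        // advertise support for it, and fall back to text/html otherwise.
        public class XhtmlHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string accept = context.Request.Headers["Accept"] ?? "";
                bool supportsXhtml = accept.Contains("application/xhtml+xml");

                context.Response.ContentType =
                    supportsXhtml ? "application/xhtml+xml" : "text/html";
                context.Response.Write("<!DOCTYPE html ...>"); // the page markup
            }

            public bool IsReusable { get { return true; } }
        }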

    Read the article

  • [OffTopic] PhD: is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully), and lately I've started thinking about getting a PhD in computer science (to be more precise, I was thinking about something related to "distributed systems"; I don't have any specific topics in mind yet). But is it really worth it? A PhD course here lasts 4 years and is free IF you're doing the "normal" one (which means you have class from Monday to Friday), OR you have to pay if you want to study on the weekends. So I've been thinking about either getting a normal full-time job (as a JEE programmer) and doing my PhD on the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple: I need to eat). That would give me practical working experience and theoretical knowledge plus a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job-wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kinda important: I'm not from the US or Western Europe. How are PhDs from other countries seen (I am graduating from one of the best schools in my country, if not the best, but it's still a Central European country...)? Are they useless? I won't hide it: after graduation I am thinking about moving abroad.

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project or not (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain modifying the actual query that aren't in the repository. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see, is my Repository will blow up from a ton of methods for various things I need to filter my query off of. Having a bunch of: IEnumerable GetProductsSinceDate(DateTime date); IEnumberable GetProductsByName(string name); IEnumberable GetProductsByID(int ID); If I was allowing IQueryable to be passed around I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable then methods like GetAll() aren't really practical since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer which received IQueryable results from the respository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like Amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, and so on. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to find ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. Thanks! Kyle

    Read the article

  • Implementing Naïve Bayes algorithm in Java - Need some guidance

    - by techventure
    Hello Stack Overflow people. As a school assignment I'm required to implement the Naïve Bayes algorithm, which I am intending to do in Java. In trying to understand how it's done, I've read the book "Data Mining - Practical Machine Learning Tools and Techniques", which has a section on this topic, but I am still unsure about some primary points that are blocking my progress. Since I'm seeking guidance, not a solution, I'll tell you what I'm thinking and what I believe is the correct approach, and in return ask for correction/guidance, which will be very much appreciated. Please note that I am an absolute beginner with the Naïve Bayes algorithm, data mining, and programming in general, so you might see stupid comments/calculations below. The training data set I'm given has 4 attributes/features that are numeric and normalized (in the range [0, 1]) using Weka (no missing values), and one nominal class (yes/no).
    1) The data coming from the CSV file is numeric, HENCE:
    * Given the attributes are numeric, I use the PDF (probability density function) formula.
    + To calculate the PDF in Java, I first separate the attribute values based on whether they're in class yes or class no, and hold them in different arrays (array class yes and array class no).
    + Then I calculate the mean (sum of the values in a column / number of values in that column) and standard deviation for each of the 4 attributes (columns) of each class.
    + Now, to find the PDF of a given value (n), I do (n-mean)^2/(2*SD^2).
    + Then, to find P(yes | E) and P(no | E), I multiply the PDF values of all 4 given attributes and compare which is larger, which indicates the class the instance belongs to.
    In terms of Java, I'm using an ArrayList of ArrayLists of Double to store the attribute values. Lastly, I'm unsure how to get new data: should I ask for an input file (like CSV), or prompt for 4 values on the command line? I'll stop here for now (I do have more questions), but I'm worried this won't get any responses given how long it's become. I will really appreciate those who give their time to read my problems and comment.
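
    For reference, the full Gaussian density is f(x) = exp(-(x-mean)^2 / (2*SD^2)) / (SD*sqrt(2*pi)); the expression above is only the (un-negated) argument of the exponential, so on its own it is not a density, and the class priors P(yes) and P(no) should multiply into the comparison as well. A minimal sketch of the per-class scoring, shown in C# only for brevity (the Java translation is mechanical, and the array layout is hypothetical):

        using System;

        static class NaiveBayes
        {
            // Gaussian probability density for one attribute value.
            static double Pdf(double x, double mean, double sd)
            {
                double exponent = -((x - mean) * (x - mean)) / (2 * sd * sd);
                return Math.Exp(exponent) / (sd * Math.Sqrt(2 * Math.PI));
            }

            // Score one instance against one class: the class prior times the
            // product of per-attribute densities (the "naive" independence step).
            static double Score(double[] instance, double[] means, double[] sds,
                                double prior)
            {
                double score = prior;
                for (int i = 0; i < instance.Length; i++)
                    score *= Pdf(instance[i], means[i], sds[i]);
                return score;
            }

            // Predict "yes" when the yes-class score wins.
            public static bool PredictYes(
                double[] instance,
                double[] yesMeans, double[] yesSds, double yesPrior,
                double[] noMeans, double[] noSds, double noPrior)
            {
                return Score(instance, yesMeans, yesSds, yesPrior)
                     >= Score(instance, noMeans, noSds, noPrior);
            }
        }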

    Read the article

  • Axis2 webservice (aar archive) properties file

    - by XpiritO
    Hi there, guys. I'm currently developing a set of SOAP web services over Axis2, deployed on a clustered WebLogic 10.3.2 environment. My web services use some user settings that I want to be editable without the need to recompile and regenerate the AAR archive. With this in mind, I chose to put them into a properties file that is loaded and consumed at runtime. Unfortunately, I have some questions about this. As far as I know, to achieve what I want, the only option is to put the properties file into the ../axis2/WEB-INF/classes directory of each of the deployments (on each WebLogic instance) in my clustered configuration, and then load the file as follows (or something equivalent; this has not been verified for optimization):

        InputStreamReader fMainProp = new InputStreamReader(
            this.getClass().getResourceAsStream("myfile.properties"));
        Properties mainProp = new Properties();
        mainProp.load(fMainProp);

    This is not as practical as I want it to be, because each time I alter a setting in the properties file, I have to edit each one of the files (deployed on different WebLogic instances), and there is a high probability of modifying one of these files without modifying the others. What I would like to know is whether there is any (better) alternative to accomplish what I want, minimizing the potential configuration drift created by distributing and replicating the properties file across multiple WebLogic instances.

    Read the article

  • Any way for a class to prevent outside code from declaring variables of its type?

    - by supercat
    Is it possible for a class to expose a type for function returns without allowing users of that class to create variables of that type? A couple of usage scenarios: (1) A fluent interface on a large class: a statement like "foo = bar.WithX(5).WithY(9).WithZ(19);" would be inefficient if it had to create three new instances of the class, but could be much more efficient if WithX could create one instance and the following calls could simply reuse it. (2) A class may wish to support a statement like "foo[19].x = 9;" even when foo itself isn't an array and does not hold the data in class instances that can be exposed to the public; one way to do that is to have foo[19] return a struct which holds a reference to 'foo' and the value '19', and has a member property 'x' which could call "foo.SetXValue(19, 9);". Such a struct could have a conversion operator to convert itself to the "apparent" type of foo[19]. In both of these scenarios, storing the value returned by a method or property into a variable and then using it more than once would cause strange behavior. It would be desirable if the designer of the class exposing such methods or properties could ensure that callers wouldn't be able to use them more than once. Is there any practical way to accomplish that?
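
    For the second scenario, a compilable sketch of the proxy idea (names hypothetical). One wrinkle worth noting: C# refuses property assignment through a value-type rvalue (error CS1612), so "foo[19].x = 9;" only compiles if the proxy is a class rather than the struct the question proposes; and in either case nothing stops a caller from stashing the proxy in a local, which is the unsolved part of the question:

        public class Foo
        {
            private readonly int[] _xs = new int[100];

            public void SetXValue(int index, int value) { _xs[index] = value; }
            public int GetXValue(int index) { return _xs[index]; }

            // foo[19] returns a proxy, not the underlying data.
            public FooElement this[int index]
            {
                get { return new FooElement(this, index); }
            }
        }

        // Holds a reference to the owner plus the index, as described above.
        public class FooElement
        {
            private readonly Foo _owner;
            private readonly int _index;

            public FooElement(Foo owner, int index)
            {
                _owner = owner;
                _index = index;
            }

            // "foo[19].x = 9;" forwards the write to the owner.
            public int x
            {
                get { return _owner.GetXValue(_index); }
                set { _owner.SetXValue(_index, value); }
            }

            // Conversion to the "apparent" type of foo[19].
            public static implicit operator int(FooElement e) { return e.x; }
        }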

    Read the article

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler, and I'm looking for guidelines on opcode alignment. 1) I've read comments that briefly "recommend" alignment by adding NOPs after calls. 2) I've also read about using NOPs to optimize sequences for parallelism. 3) I've read that alignment of ops is good for "cache" performance. Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and to realize that most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc., to see what real-world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd-byte instruction immediately after whatever previous instruction sequence there was, so the compiler is not taking any special care in most cases. I see a NOP here or there, but usually it seems NOPs are used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.
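
    The references that do exist are the vendor optimization manuals: both Intel's and AMD's guides recommend aligning hot branch targets (loop heads, function entries) to 16-byte boundaries, which is where most compilers spend their padding. A minimal sketch of emitting that padding into a JIT code buffer (the buffer representation is hypothetical):

        using System.Collections.Generic;

        static class Emitter
        {
            const byte Nop = 0x90; // single-byte x86 NOP

            // Pads so the next emitted instruction starts on an
            // alignment-byte boundary (e.g. 16 for a hot branch target).
            public static void AlignTo(List<byte> code, int alignment)
            {
                int remainder = code.Count % alignment;
                if (remainder == 0) return;
                // Production JITs prefer multi-byte NOP forms (0F 1F /0)
                // so the pad decodes in fewer instructions.
                for (int i = remainder; i < alignment; i++)
                    code.Add(Nop);
            }
        }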

    Read the article

  • Programmatically creating vector arrows in KML

    - by mettadore
    Does anyone have any practical examples of programmatically drawing icons as vectors in KML? Specifically, I have data with a magnitude and an azimuth at given coordinates, and I would like to have icons (or another graphical element) generated based on these values. Some thoughts on how I might approach it:
    1) Image directory (the brute-force way): make an image directory of 360 different image files (probably by batch-rotating a single image), each pointing in a corresponding azimuth. I've seen things like "Excel to KML", but I am looking for code that I can use within a program rather than a web utility. Issue: the arrow does not convey magnitude, so that would have to be a label; I'd rather dynamically lengthen the arrow.
    2) Line creation in KML: perhaps derive a formula that creates a line with its origin at the coordinate point, with the length of the line proportional to the magnitude and angled according to the azimuth. There would then be two more lines, perhaps at 30 degrees or so, extending from the end of the previous line to make the arrowhead. Issues: not a separate image icon, so I'm not sure how it would work in KML; also not sure how easy the algorithm would be to write.
    3) Separate image generation: perhaps create a PHP file that uses ImageMagick or something similar to dynamically generate a .png file in a similar manner to the above, and then link to the icon using the URI "domain.tld/imagegen.php?magnitude=magvalue&azimuth=azmvalue". Issue: still have the problem of actually writing the image-generation algorithm.
    So, the question: has anyone else come up with solutions for programmatic vector (rather than merely arrow) generation?
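
    A sketch of the second approach above, generating the arrow shaft directly as a KML <LineString> (the meters-per-degree figure is the usual spherical approximation, and metersPerUnit is a made-up scaling knob):

        using System;
        using System.Globalization;

        static class KmlArrows
        {
            const double MetersPerDegreeLat = 111320.0; // rough spherical figure

            // Emits the arrow shaft from (lat, lon) along azimuthDeg (degrees
            // clockwise from north), with length proportional to magnitude.
            public static string ArrowShaft(double lat, double lon,
                                            double azimuthDeg, double magnitude,
                                            double metersPerUnit)
            {
                double az = azimuthDeg * Math.PI / 180.0;
                double lengthM = magnitude * metersPerUnit;
                double dLat = lengthM * Math.Cos(az) / MetersPerDegreeLat;
                double dLon = lengthM * Math.Sin(az)
                              / (MetersPerDegreeLat * Math.Cos(lat * Math.PI / 180.0));

                // KML wants lon,lat ordering; invariant culture keeps '.' decimals.
                return string.Format(CultureInfo.InvariantCulture,
                    "<LineString><coordinates>{0},{1},0 {2},{3},0</coordinates></LineString>",
                    lon, lat, lon + dLon, lat + dLat);
            }
            // The arrowhead would be two short segments from the endpoint at
            // azimuth +/- 150 degrees, using the same math.
        }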

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely-used algorithm with time complexity worse than that of another known algorithm, but which is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in the form: there are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers:
    - the Simplex algorithm (worst-case exponential time) vs. known polynomial-time algorithms for convex optimization problems;
    - a naive median-of-medians algorithm (worst-case O(N**2)) vs. the known O(N) algorithm;
    - backtracking regex engines (worst-case exponential) vs. O(N) Thompson-NFA-based engines.
    All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst case and the average case? Related: "The Rise of 'Worse is Better'" (for the purpose of this question, the "worse is better" phrase is used in a narrower, namely algorithmic time-complexity, sense than in the article). Python's design philosophy: the ABC group strived for perfection; for example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, "large" is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."
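
    The regex example is the easiest to reproduce. A hedged demo (timings vary by machine; .NET's default engine is a backtracker, and on .NET 7+ RegexOptions.NonBacktracking switches to a linear, Thompson-style engine for comparison):

        using System;
        using System.Diagnostics;
        using System.Text.RegularExpressions;

        class BacktrackingDemo
        {
            static void Main()
            {
                // Nested quantifier plus a non-matching tail: the classic
                // worst case for a backtracking engine. Each extra 'a'
                // roughly doubles the running time.
                var pattern = new Regex("^(a+)+$");
                string input = new string('a', 25) + "!";

                var sw = Stopwatch.StartNew();
                bool matched = pattern.IsMatch(input);
                sw.Stop();
                Console.WriteLine("matched={0} in {1} ms",
                                  matched, sw.ElapsedMilliseconds);
            }
        }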

    Read the article

  • method for specialized pathfinding?

    - by rlbond
    I am working on a roguelike in my (very little) free time. Each level will basically be a few rectangular rooms connected together by paths. I want the paths between rooms to be natural-looking and windy, however. For example, I would not consider the following natural-looking:

             B
             X
             X
             X
            XX
           XX
          XX
        AXX

    I really want something more like this:

             B
             X
             XXXX
                X
                X
                X
                X
        AXXXXXXXX

    These paths must satisfy a few properties:
    - I must be able to specify an area inside which they are bounded;
    - I must be able to parameterize how windy and lengthy they are;
    - the lines should not look like they started at one point and ended at the other. For example, the first example above looks as if it started at A and ended at B, because it basically changed directions repeatedly until it lined up with B and then just went straight there.
    I was hoping to use A*, but honestly I have no idea what my heuristic would be. I have also considered using a genetic algorithm, but I don't know how practical that method might end up being. My question is: what is a good way to get the results I want? Please do not just name a method like "A*" or "Dijkstra's algorithm", because I also need help with a good heuristic.
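
    One trick that sidesteps the heuristic question, sketched below: keep plain A* or Dijkstra, but perturb the terrain. Give every cell a random movement cost, and the cheapest path winds naturally, with the noise amplitude as the windiness knob; because the costs are direction-independent, the result has no built-in "started at A" bias. AStar.FindPath is a stand-in for whatever grid search you already have:

        using System;

        static class WindyPaths
        {
            // Builds a cost field whose noise amplitude controls windiness:
            // 0.0 yields straight corridors, larger values yield wandering ones.
            public static double[,] BuildCosts(int width, int height,
                                               double windiness, int seed)
            {
                var rng = new Random(seed);
                var costs = new double[width, height];
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                        costs[x, y] = 1.0 + windiness * rng.NextDouble();
                return costs;
            }

            // Usage sketch (FindPath is assumed: any A*/Dijkstra that accepts
            // a per-cell cost field will do, bounded by the field's extent):
            //
            //   var costs = WindyPaths.BuildCosts(40, 40, windiness: 4.0, seed: 7);
            //   var path  = AStar.FindPath(costs, start, goal);
        }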

    Read the article

  • Using UpdatePanels inside of a ListView

    - by Jim B
    Hey everyone, I'm wondering if anybody has run across something similar to this before. Some quick pseudo-code to get started:

        <UpdatePanel>
          <ContentTemplate>
            <ListView>
              <LayoutTemplate>
                <UpdatePanel>
                  <ContentTemplate>
                    <ItemPlaceholder />
                  </ContentTemplate>
                </UpdatePanel>
              </LayoutTemplate>
              <ItemTemplate>
                Some stuff goes here
              </ItemTemplate>
            </ListView>
          </ContentTemplate>
        </UpdatePanel>

    The main thing to take away from the above is that I have an update panel which contains a ListView, and then each of the ListView items is contained in its own update panel. What I'm trying to do is: when one of the ListView update panels triggers a postback, I'd want to also update one of the other ListView item update panels. A practical implementation would be a quick survey that has 3 questions. We'd only ask Question #3 if the user answered "Yes" to Question #1. When the page loads, it hides Q3 because it doesn't see "Yes" for Q1. When the user clicks "Yes" to Q1, I want to refresh the Q3 update panel so it now displays. I've got it working now by refreshing the outer UpdatePanel on postback, but this seems inefficient because I don't need to re-evaluate every item, just the ones that would be affected by the prerequisite I detailed out above. I've been grappling with setting up triggers, but I keep coming up empty, mainly because I can't figure out a way to set up a trigger for the update panel for Q3 based off of the postback triggered by Q1. Any suggestions out there? Am I barking up the wrong tree?
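
    For what it's worth, a hedged sketch of one way out that avoids triggers altogether, assuming each item's panel is declared with UpdateMode="Conditional" (all control IDs below are hypothetical): handle Q1's postback, find the item that renders Q3, and call Update() on just that panel.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class SurveyPage : Page
        {
            protected ListView SurveyListView; // declared in the page markup

            // Fired by Q1's input control inside the ListView.
            protected void Question1_SelectedIndexChanged(object sender, EventArgs e)
            {
                string answer = ((RadioButtonList)sender).SelectedValue;

                // Locate the item that renders Question 3 (index hypothetical).
                ListViewDataItem q3Item = SurveyListView.Items[2];
                var q3Panel = (UpdatePanel)q3Item.FindControl("QuestionPanel");
                q3Item.FindControl("QuestionRow").Visible = (answer == "Yes");

                // With UpdateMode="Conditional", this refreshes only Q3's
                // panel instead of re-rendering the entire outer panel.
                q3Panel.Update();
            }
        }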

    Read the article

  • Efficiently Determine if EF 4 POCO Already in ObjectSet

    - by Eric J.
    I'm trying EF 4 with POCOs on a small project for the first time. In my repository implementation, I want to provide a method AddOrUpdate that will add a passed-in POCO to the repository if it's new, and otherwise do nothing (as the updated POCO will be saved when SaveChanges is called). My first thought was to do this:

        public void AddOrUpdate(Poco p)
        {
            if (!Ctx.Pocos.Contains<Poco>(p))
            {
                Ctx.Pocos.AddObject(p);
            }
        }

    However, that results in a NotSupportedException, as documented under "Referencing Non-Scalar Variables Not Supported" (bonus question: why would that not be supported?). Just removing the Contains part and always calling AddObject results in an InvalidStateException: "An object with the same key already exists in the ObjectStateManager. The existing object is in the Unchanged state. An object can only be added to the ObjectStateManager again if it is in the added state." So clearly EF 4 knows somewhere that this is a duplicate, based on the key. What's a clean, efficient way for the repository to handle AddOrUpdate for either a new or pre-existing object, so that the subsequent call to SaveChanges() will do the right thing? I did consider carrying an isNew flag on the object itself, but I'm trying to take persistence ignorance as far as practical.
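
    The bonus question has a short answer: Contains has to be translated into a store query, and SQL can reference scalars but not an arbitrary in-memory entity. One way around it, sketched against the EF4 ObjectContext API (type names follow the question; the Poco stub is illustrative), is to ask the state manager, which tracks attached entities locally and needs no SQL at all:

        using System.Data.Objects; // EF4: ObjectContext, ObjectSet, ObjectStateEntry

        public class Poco { public int Id { get; set; } } // illustrative stub

        public class PocoRepository
        {
            private readonly ObjectContext Ctx;
            private readonly ObjectSet<Poco> Pocos;

            public PocoRepository(ObjectContext ctx)
            {
                Ctx = ctx;
                Pocos = ctx.CreateObjectSet<Poco>();
            }

            public void AddOrUpdate(Poco p)
            {
                ObjectStateEntry entry;
                // Checks the local state manager instead of querying the store.
                bool tracked = Ctx.ObjectStateManager.TryGetObjectStateEntry(p, out entry);
                if (!tracked)
                {
                    Pocos.AddObject(p); // new to this context: schedule an INSERT
                }
                // else: already attached, so SaveChanges() picks up modifications
            }
        }

    One caveat: this distinguishes tracked from untracked in the current context only; an entity that is detached but already exists in the database would still need a key lookup.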

    Read the article

  • Textbox autofill not in correct position in Safari

    - by jerjer
    Hello all, has anyone experienced this weird issue in Safari? The textbox autofill is not at its correct position; please see the screenshot below. I have been searching for answers in Google for almost a day, still no luck. This is the built-in autofill feature of Safari. Here is the markup:

    index.php:

        <html>
        <body>
          <div id="nav"></div>
          <div id="content">
            <iframe style="width:100%;height:100%" src="add_user.php" frameborder="0"></iframe>
          </div>
          ....
        </body>
        </html>

    add_user.php:

        <html>
        ....
        <body>
          <form method="post">
            <h3>Add User (Admin only)
            <div id="msg">Please enter First and Last Name.</div>
            <ul>
              <li><label>* Email</label> <input type="text" id="email" /></li>
              <li><label>* First Name</label> <input type="text" id="fname" /></li>
              <li><label>* Last Name</label> <input type="text" id="lname" /></li>
              ....
            </ul>
          </form>
        </body>
        </html>

    I suspect that this is caused by the iframe, but it works just fine in other browsers. Also, I cannot change the page design (using an iframe) right away, for practical reasons. Thanks

    Read the article

  • .NET WF4: Should it be in the middle of everything?

    - by stimpy77
    I am aware that WF4 (Windows Workflow Foundation 4.0, part of .NET 4.0) is a significant rework and redesign of WF3, where much of what made WF3 a poor technology choice has been cleaned up. For example, as far as I can tell, WF4 activities are more or less testable with [TestMethod] and mocking. This, among other things like improved performance, has grabbed my attention again, whereas I had previously pooh-poohed WF3. I'm working on a new architecture for what is essentially an n-tier collaborative application (not enterprise-class, just a smallish project with the potential to grow significantly), where I'm already trying to discipline myself to use IoC and, to some extent, TDD, and I'm wondering, in general terms, whether it is wiser to just hand-code the workflow logic, or whether I should delve into learning and integrating WF4 so that WF becomes literally the controller of the entire application, i.e. the practical C in "MVC" (not ASP.NET MVC, but the pattern). So: should workflow activities in WF4 be the primary controller for a highly expandable/growable web-based collaborative application? Or am I asking entirely the wrong question? This is a vague question, I'm sure, so abstract answers are as welcome as specific ones.
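
    On the testability point, a minimal sketch of exercising a WF4 activity from a plain unit test via WorkflowInvoker (the activity and its arguments are hypothetical):

        using System.Activities;
        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // A trivial custom activity standing in for real workflow logic.
        public class ApproveOrderActivity : CodeActivity
        {
            public InArgument<decimal> Amount { get; set; }
            public OutArgument<bool> AutoApproved { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                AutoApproved.Set(context, Amount.Get(context) < 100m);
            }
        }

        [TestClass]
        public class ApproveOrderActivityTests
        {
            [TestMethod]
            public void SmallOrders_AreAutoApproved()
            {
                // WorkflowInvoker runs the activity synchronously on this
                // thread, which keeps the test deterministic.
                var inputs = new Dictionary<string, object> { { "Amount", 50m } };
                IDictionary<string, object> outputs =
                    WorkflowInvoker.Invoke(new ApproveOrderActivity(), inputs);

                Assert.IsTrue((bool)outputs["AutoApproved"]);
            }
        }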

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL 2005: one that holds a large amount of static data (SQL Database 1; never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the cost of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie. Edit: also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • Defining JUnit Test Cases Correctly

    - by Epitaph
    I am new to unit testing and therefore wanted to do some practical exercise to get familiar with the JUnit framework. I created a program that implements a String multiplier:

        public String multiply(String number1, String number2)

    In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc.):

        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();

            @Test
            public void testMultiply() {
                // Test for 2 positive integers
                assertEquals("Result", 5, multiplier.multiply("5", "1"));
                // Test for 1 positive integer and 0
                assertEquals("Result", 0, multiplier.multiply("5", "0"));
                // Test for 1 positive and 1 negative integer
                assertEquals("Result", -1, multiplier.multiply("-1", "1"));
                // Test for 2 negative integers
                assertEquals("Result", 10, multiplier.multiply("-5", "-2"));
                // Test for 1 positive integer and 1 non-number
                assertEquals("Result", , multiplier.multiply("x", "1"));
                // Test for 1 positive integer and 1 empty field
                assertEquals("Result", , multiplier.multiply("5", ""));
                // Test for 2 empty fields
                assertEquals("Result", , multiplier.multiply("", ""));
            }
        }

    In a similar fashion, I can create test cases involving boundary cases (considering numbers are int values) or even imaginary values.
    1) But what should be the expected value for the last 3 test cases above? (A special number indicating an error?)
    2) What additional test cases did I miss?
    3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame(), etc.?
    4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise?
    5) What should be the ideal way to test the multiplier method?
    I am pretty clueless here. If anyone can help answer these queries I'd greatly appreciate it. Thank you.

    Read the article

  • Append a dynamically changing watermark to a PDF in SharePoint

    - by ccomet
    This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which dictates the current status of the document. Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark will change. One method, the traditional method for PDF watermarking, would be to keep one "clean" copy of the document hidden somewhere on the site and create a new watermarked PDF from it at each stage of the approval process. Since the filename would never change, this new PDF could be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, at various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complex nature of juggling two libraries and the fact that we double the number of files stored. My client and I agree that this is not a practical path to choose. What we would like to do is be able to "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, in most cases when you make something like a watermark, which by its nature is supposed to be "unmodifiable", you won't be able to edit it later. So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object-model process? Thanks in advance!
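
    If re-stamping from a stored clean copy ever becomes acceptable, the mechanics are simple with a PDF library. A sketch assuming iTextSharp (the status text and opacity are illustrative; this writes a new file rather than editing a watermark in place, which standard PDF tooling is designed to prevent):

        using System.IO;
        using iTextSharp.text;
        using iTextSharp.text.pdf;

        public static class Watermarker
        {
            // Stamps statusText diagonally across each page of the clean PDF.
            public static byte[] Stamp(byte[] cleanPdf, string statusText)
            {
                using (var output = new MemoryStream())
                {
                    var reader = new PdfReader(cleanPdf);
                    var stamper = new PdfStamper(reader, output);
                    var font = BaseFont.CreateFont(BaseFont.HELVETICA,
                                                   BaseFont.CP1252,
                                                   BaseFont.NOT_EMBEDDED);
                    for (int page = 1; page <= reader.NumberOfPages; page++)
                    {
                        PdfContentByte over = stamper.GetOverContent(page);
                        over.SetGState(new PdfGState { FillOpacity = 0.25f });
                        over.BeginText();
                        over.SetFontAndSize(font, 48);
                        Rectangle size = reader.GetPageSizeWithRotation(page);
                        over.ShowTextAligned(Element.ALIGN_CENTER, statusText,
                                             size.Width / 2, size.Height / 2, 45);
                        over.EndText();
                    }
                    stamper.Close();
                    reader.Close();
                    return output.ToArray();
                }
            }
        }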

    Read the article

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider these two possible implementations of a function that finds if a number is prime using a cached array of primes.

        //very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = in_array( $number, $prime_array );
        }

        //still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = array_key_exists( $number, $prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... is there a list of the theoretical (or practical) big-O times for all* the PHP built-in functions? *Or at least the interesting ones. For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.

    Read the article

  • Circular dependency in a WinForms app using Castle Windsor

    - by Sven
    Hello, I was experimenting a little bit with Castle Windsor in a WinForms project. I wanted to register all my form dependencies with Castle Windsor; this way I would have a single instance of each of my forms. Now I have a problem, though: I'm in a situation where form x has a dependency on form y, and form y has a dependency on form x. A practical example, maybe: form x is used to create an order; form y is the screen that has a list of customers. From form x there is a button to select a customer for the order. This opens form y, where you can search for the customer. There is a button that lets you add the found customer to the order; it will call a method on form x and pass the selected customer object. I could do this with events: raise an event in form y and listen for it in form x. But isn't there a way around the circular dependency in Castle Windsor, lazy registration or something? Can anyone help me out? Thanks in advance
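
    A minimal sketch of one workaround: move one edge of the cycle from the constructor to a settable property, so construction itself is cycle-free, and close the loop explicitly at the composition root (how aggressively Windsor auto-wires optional properties across a cycle varies by version, so this does it by hand; the form roles follow the question):

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        // "Form x": creates the order; needs "form y" to pick a customer.
        public class OrderForm
        {
            private readonly CustomerListForm _customerList;
            public OrderForm(CustomerListForm customerList) { _customerList = customerList; }
            public void SetCustomer(object customer) { /* update the order */ }
        }

        // "Form y": the customer list. The back-reference is a property,
        // so constructing it does not require an OrderForm.
        public class CustomerListForm
        {
            public OrderForm OrderForm { get; set; }
        }

        public static class CompositionRoot
        {
            public static IWindsorContainer Build()
            {
                var container = new WindsorContainer();
                container.Register(
                    Component.For<OrderForm>().LifeStyle.Singleton,
                    Component.For<CustomerListForm>().LifeStyle.Singleton);

                // Close the x <-> y cycle once, at startup.
                container.Resolve<CustomerListForm>().OrderForm =
                    container.Resolve<OrderForm>();
                return container;
            }
        }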

    Read the article

  • SAP-related testing

    - by mgj
    Dear all, one notion that has been prevalent, mostly as a rumour, among many aspiring programmers is that the testing phase of the SDLC (Software Development Life Cycle) is not as challenging and interesting as development, because a tester's job becomes monotonous after a period of time: a person does the same thing repeatedly, over and over again. That's why many hold this preconception and look for a developer's job rather than a tester's. Don't testers have room to grow in software companies? Please feel free to express your views for or against this. How true is it? Could you please give examples of instances (they need not be practical; even theoretical ones would suffice) which contradict this statement with respect to a tester's career, specifically in the SAP domain? Examples from other domains are also welcome. This question is not meant to hurt the feelings of anyone in the testing domain. It's just that, for example, in my case I want to know what challenges a tester could actually face in real-life situations, something that would make their job also interesting and fun-filled. I myself am pro-testing and also interested in pursuing testing as a profession in a software company; I'm just curious to know more about it, so... :) Thanks.. :)

    Read the article

  • Validation in n-tier ASP.NET MVC applications

    - by sTodorov
    Dear Stack Overflow gurus, I am looking for some practical/theoretical information regarding best practices for validation in ASP.NET MVC n-tier applications. I am working on a .NET application divided into the following layers: UI (MVC 3); BLL (all business rules, decoupled from the data access and UI layers through interfaces); DAL (data access with the repository pattern, EF4 and POCOs). Now, I am looking for a nice, clean and transparent way to specify my validation rules. Here are some thoughts on the matter so far: UI validation should only be responsible for user input and its validity; BLL validation should handle the validity of the data with regard to the application's business rules. My main concern is how to bind the BLL and UI validation in the most efficient way. One thing I would like to avoid is having the UI walk a collection of validation results and manually add errors to the ModelState. Furthermore, I do not want to pass the ModelState to the BLL to be populated there. I will appreciate any thoughts on the matter. P.S. Should this question be marked as a discussion?
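
    For contrast, the most common baseline (the very pattern the question hopes to improve on) is a thin mapping loop at the controller boundary, which at least keeps MVC types out of the BLL. A sketch with hypothetical names, if only to make the trade-off concrete:

        using System.Collections.Generic;
        using System.Web.Mvc;

        // BLL: a violation is plain data; no MVC types leak into the layer.
        public class RuleViolation
        {
            public string PropertyName { get; set; }
            public string Message { get; set; }
        }

        public class OrderDto { public string CustomerName { get; set; } }

        public interface IOrderService
        {
            IEnumerable<RuleViolation> Validate(OrderDto order); // empty when valid
        }

        // UI: the controller owns ModelState and does the mapping.
        public class OrdersController : Controller
        {
            private readonly IOrderService _service;
            public OrdersController(IOrderService service) { _service = service; }

            [HttpPost]
            public ActionResult Create(OrderDto order)
            {
                foreach (var v in _service.Validate(order))
                    ModelState.AddModelError(v.PropertyName, v.Message);

                if (!ModelState.IsValid) return View(order);
                // ... persist and redirect ...
                return RedirectToAction("Index");
            }
        }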

    Read the article

  • What is best practice (and what are the implications) for packaging projects into JARs?

    - by user245510
    What is considered best practice when deciding how to define the set of JARs for a project (for example a Swing GUI)? There are many possible groupings:
    - JAR per layer (presentation, business, data);
    - JAR per (significant?) GUI panel. For a significant system this results in a large number of JARs, but the JARs are (or should be) more re-usable: fine-grained granularity;
    - JAR per "project" (in the sense of an IDE project): "common.jar", "resources.jar", "gui.jar", etc.
    I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad keen on encapsulation and the holy grail of re-use across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle with class loading over dozens, maybe hundreds, of small JARs. Each JAR would contain the GUI panel code and the necessary resources (i.e. not centralised), so each panel can stand alone. Does anyone have wisdom to share?

    Read the article

  • Map large integer to a phrase

    - by Alexander Gladysh
    I have a large and "unique" integer (actually a SHA1 hash). I want (for no other reason than to have fun) to find an algorithm to convert that SHA1 hash to a (pseudo-)English phrase. The conversion should be reversible (i.e., knowing the algorithm, one must be able to convert the phrase back to SHA1 hash.) The possible usage of the generated phrase: the human readable version of Git commit ID, like a motto for a given program version (which is built from that commit). (As I said, this is "for fun". I don't claim that this is very practical — or be much more readable than the SHA1 itself.) A better algorithm would produce shorter, more natural-looking, more unique phrases. The phrase need not make sense. I would even settle for a whole paragraph of nonsense. (Though quality — englishness — of a paragraph should probably be better than for a mere phrase.) A variation: it is OK if I will be able to work only with a part of hash. Say, first six digits is OK. Possible approach: In the past I've attempted to build a probability table (of words), and generate phrases as Markov chains, seeding the generator (picking branches from probability tree), according to the bits I read from the SHA. This was not very successful, the resulting phrases were too long and ugly. I'm not sure if this was a bug, or the general flaw in the algorithm, since I had to abandon it early enough. Now I'm thinking about attempting to solve the problem once again. Any advice on how to approach this? Do you think Markov chain approach can work here? Something else?

    Read the article
