Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • What happens at control invoke function?

    - by user65909
    A question about the Invoke method on form controls. Control1 is created on thread1. If you want to update something in Control1 from thread2, you must do something like:

        delegate void SetTextCallback(string txt);

        void setText(string txt)
        {
            if (this.textBox1.InvokeRequired)
            {
                SetTextCallback d = new SetTextCallback(setText);
                this.Invoke(d, new object[] { txt });
            }
            else
            {
                // this will run on thread1 even when called from thread2
                this.textBox1.AppendText(txt);
            }
        }

    What happens behind the scenes here? This Invoke behaves differently from a normal method invocation. If you want to call a method on an object from a specific thread, that thread must be waiting on some queue of delegates and executing the incoming delegates. Is it correct that the Windows Forms Control.Invoke is completely different from an ordinary method call?
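    Roughly, yes: behind the scenes, Control.Invoke marshals the delegate to the UI thread through that thread's message loop, which plays the role of the "queue of delegates" described above. A minimal sketch of that model (in Python rather than C#, purely to illustrate the idea; it is not how WinForms is actually implemented, and the real Invoke also blocks until the delegate has run):

        import queue
        import threading

        class Dispatcher:
            """Owns one worker thread and executes callables posted from any other thread."""

            def __init__(self):
                self._work = queue.Queue()
                self._thread = threading.Thread(target=self._loop, daemon=True)
                self._thread.start()

            def _loop(self):
                while True:
                    func, args = self._work.get()
                    if func is None:        # sentinel: stop the loop
                        break
                    func(*args)             # always runs on the dispatcher's thread

            def invoke(self, func, *args):
                # post work to the dispatcher thread (unlike the real Invoke,
                # this does not wait for the callable to finish)
                self._work.put((func, args))

            def shutdown(self):
                self._work.put((None, ()))
                self._thread.join()

        d = Dispatcher()
        d.invoke(print, "runs on the dispatcher thread, not the caller's")
        d.shutdown()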

    Read the article

  • OOP for unit testing : The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it is stated that you should limit the class instantiations in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are sent as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

        class Some_class {

            private $_data;
            private $_options;
            private $_locale;

            public function __construct($data, $options = null)
            {
                $this->_data = $data;
                if ($options != null) {
                    $this->_options = $options;
                }
                $this->_init();
            }

            private function _init()
            {
                if (isset($this->_options['locale'])) {
                    $locale = $this->_options['locale'];
                    if ($locale instanceof Zend_Locale) {
                        $this->_locale = $locale;
                    } elseif (Zend_Locale::isLocale($locale)) {
                        $this->_locale = new Zend_Locale($locale);
                    } else {
                        $this->_locale = new Zend_Locale();
                    }
                }
            }
        }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and as well if I want to move away from Zend Framework. Thanks in advance
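    As a minimal sketch of what pushing the collaborator through the constructor buys you (written in Python for brevity rather than Zend Framework PHP; all names are illustrative): the class receives its locale instead of constructing it, so a test can hand in a fake without any framework bootstrapping.

        class DefaultLocale:
            """Fallback collaborator; illustrative stand-in for Zend_Locale."""

        class FakeLocale:
            """What a unit test would pass in instead of a real locale."""

        class SomeClass:
            def __init__(self, data, locale=None):
                self._data = data
                # the collaborator is injected; the class no longer builds it itself,
                # a default is only created as a fallback
                self._locale = locale if locale is not None else DefaultLocale()

        # a test passes a fake; production code passes (or defaults to) a real locale
        obj = SomeClass(data=[1, 2, 3], locale=FakeLocale())
        print(type(obj._locale).__name__)   # FakeLocale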

    Read the article

  • Learning to implement dynamic language compiler

    - by TriArc
    I'm interested in learning how to create a compiler for a dynamic language. Most compiler books, college courses and articles/tutorials I've come across are specifically for statically typed languages. I've thought of a few ways to do it, but I'd like to know how it's usually done. I know type inferencing is a pretty common strategy, but what about others? Where can I find out more about how to create a dynamically typed language?

    Read the article

  • find second smallest element in Fibonacci Heap

    - by Longeyes
    I need to describe an algorithm that finds the second smallest element in a Fibonacci-Heap using the Operations: Insert, ExtractMin, DecreaseKey and GetMin. The last one is an algorithm previously implemented to find and return the smallest element of the heap. I thought I'd start by extracting the minimum, which results in its children becoming roots. I could then use GetMin to find the second smallest element. But it seems to me that I'm overlooking other cases because I don't know when to use Insert and DecreaseKey, and the way the question is phrased seems to suggest I should need them.
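    For concreteness, here is a sketch of that operation sequence (Python; FibHeap below is just a heapq-backed stand-in exposing the needed operations, not a real Fibonacci heap). In this reading, Insert is only needed to put the extracted minimum back so the heap ends up unchanged, and DecreaseKey is not needed at all, which may or may not be what the question intends:

        import heapq

        class FibHeap:
            """Stand-in exposing Insert / ExtractMin / GetMin; a real Fibonacci
            heap would differ internally but offer the same interface."""
            def __init__(self, items=()):
                self._h = list(items)
                heapq.heapify(self._h)
            def insert(self, key):
                heapq.heappush(self._h, key)
            def get_min(self):
                return self._h[0]
            def extract_min(self):
                return heapq.heappop(self._h)

        def second_smallest(heap):
            smallest = heap.extract_min()   # children of the minimum become roots
            second = heap.get_min()         # now the overall second smallest
            heap.insert(smallest)           # restore the heap to its original contents
            return second

        print(second_smallest(FibHeap([7, 3, 9, 5])))   # prints 5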

    Read the article

  • How do you put a database online?

    - by Dezrik
    I have a very beginner question regarding web development. I've had some experience with JSP, Hibernate, and MAMP to create a simple system for tracking inventory and sales. But this was all done locally on one computer. This time, I want to create a system that could be accessible online. It's to help my mother track her business wherever she goes. So there would be similar aspects, like tracking inventory and sales. I understand that you have to have a server on which to host all the files. But I don't understand how you can access your database online, or what sorts of applications or products should be used. Currently the host of my database is localhost. How do I put it online such that you can still do CRUD operations? Are there any guides for doing this?

    Read the article

  • Do we need a non-disclosure agreement (NDA)?

    - by MrEdmundo
    We're going to meet a potential new client today, and between ourselves we started discussing the need for a non-disclosure agreement (NDA) and whether we need one at this juncture. In this case we don't think we'll be talking about technical specifics, as it's an initial meeting about who we are. Is there any precedent for when it is the right time for small ISVs to insist on NDAs, and when the insistence might appear over the top and precious? All ideas are welcome, though in our case we're interested in UK law.

    Read the article

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and averages of those metrics) within a CPU task schedule. Here's the basic table of arrival times and bursts, and here's my actual work version after ticking off the finished processes. There are two queues, with time quanta TQ1=4 and TQ2=3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here; I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
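    Since the arrival/burst table isn't reproduced above, here is only a rough sketch (Python) of the mechanics of strict-priority multilevel queue scheduling with round robin inside each queue, using made-up burst times and assuming every process arrives at time 0 and that queue 2 only runs once queue 1 is empty; substitute the real table and arrival handling as needed:

        from collections import deque

        def round_robin(processes, quantum, start=0):
            """Run one queue to completion with round robin.
            processes: list of (name, burst). Returns (finish_time, {name: completion})."""
            ready = deque((name, burst) for name, burst in processes)
            clock, completion = start, {}
            while ready:
                name, remaining = ready.popleft()
                run = min(quantum, remaining)
                clock += run
                if remaining - run > 0:
                    ready.append((name, remaining - run))   # back of the same queue
                else:
                    completion[name] = clock
            return clock, completion

        # made-up example data: substitute the real arrival/burst table
        queue1 = [("P1", 6), ("P2", 9), ("P3", 4)]   # TQ1 = 4
        queue2 = [("P4", 10), ("P5", 3)]             # TQ2 = 3

        end1, done1 = round_robin(queue1, 4)
        _, done2 = round_robin(queue2, 3, start=end1)   # queue 2 runs after queue 1 drains

        bursts = dict(queue1 + queue2)
        for p, c in {**done1, **done2}.items():
            turnaround = c              # arrival assumed to be 0
            waiting = c - bursts[p]
            print(p, "completion:", c, "turnaround:", turnaround, "waiting:", waiting)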

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            results.Add(engine.Process(record));
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            Result Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error(
                            "Validation of subrecord {0} failed for record {1}", i, record);
                        throw inner;
                    }
                }
                // building and returning the Result for the record is omitted here
            }
        }

    There are a few important things to notice:

    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failing to do so would cause loss of useful information at a lower level

    Thoughts and concerns:

    - I don't like having so many log entries for each error.
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message.
    - I can log at different levels (e.g., warning, informational).
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced).
    - The information available at higher levels should not be passed to the lower levels.

    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?
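    One common alternative worth comparing against, sketched in Python rather than C#: wrap the exception with context at each level but log only once, at the outermost boundary, and let the exception chain carry the lower-level details (the names and the failing validation below are made up):

        import logging

        logger = logging.getLogger(__name__)

        class EngineProcessingError(Exception):
            pass

        def validate(record):
            raise ValueError("bad subrecord")   # stand-in failure at the deepest level

        def engine_process(record):
            try:
                validate(record)
            except Exception as exc:
                # attach context, but do NOT log here
                raise EngineProcessingError(f"engine failed on record {record!r}") from exc

        def transform(records):
            # outermost boundary: log once, with the full chained traceback
            for record in records:
                try:
                    engine_process(record)
                except EngineProcessingError:
                    logger.exception("unable to process record %r", record)

        logging.basicConfig(level=logging.INFO)
        transform(["r1"])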

    Read the article

  • Cloning existing software for commercial purposes - legal implications

    - by user2036256
    I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late 80s. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work. It crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist; if it did, the company asking me to do this would almost certainly know about it. It's undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because again, it's a niche area, and we should know about everyone in it). What is the status of this with regard to copyright etc.? The main concern for the company involved is that they want an identical interface to what they already have, so I would have to clone this entirely. Having no source code or indication of the underlying mechanisms, it would be written from scratch. Is an interface covered by copyright? Does that still hold 30 years later? What is the assumed license when none at all is provided? Under UK law, would I be at any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies? Thanks

    Read the article

  • Does semi-normalization exist as a concept? Is it "normalized"?

    - by Gracchus
    If you don't mind, a tl;dr on my experience first.

    My experience, tl;dr: I have an application that's heavily dependent upon uncertainty, a bane to database design. I tried to normalize it as best as I could according to the capabilities of my database of choice, but a "simple" query took 50ms to read. NoSQL appeals to me, but I can't trust myself with it, and besides, normalization has cut down my debugging time immensely over and over. Instead of 100% normalization, I made semi-redundant 1:1 tables with very wide primary keys and equivalent foreign keys. Read times dropped to a few ms, and write times barely degraded.

    The semi-normalized point: Given this reality, which anyone who's tried to rely upon views of fully normalized data is aware of, is this concept codified? Is it as simple as having wide unique and foreign keys, or are there any hidden secrets to this technique? Or is uncertainty merely a special case that has extremely limited application and can be left on the ash heap?

    Read the article

  • Example of persisting an inheritance relationship using ORM

    - by Schemer
    I have some experience with OOP and RDBs, but very little exposure to web programming. I am trying to understand what non-trivial types of problems are solved by ORM. Of course, I am familiar with the need for data persistence, but I have never encountered a need for persisting relationships between objects, a situation which is indicated in many online articles about ORM. I am not asking about the process of persisting a POJO to a database and restoring it later. Nor am I asking about why ORM frameworks are useful -- or a pain in the butt -- for doing so. I am particularly interested in how the need arises to persist and restore relationships between objects. In various documentation, I have seen many examples of persisting POJOs to a database, but the examples have all been for only very simple objects that are essentially nothing more than records anyway: a constructor, some private fields, and getter/setter methods. The motivation for persisting such an "object-record" seems obvious and trivial. This Hibernate ORM Tutorial offers such an example, but it goes on to discuss mismatch issues of granularity, inheritance, identity, associations, and navigation that are not motivated by the example. If someone could offer a toy example of an instance where, say, the need arises to persist an inheritance relationship, I would be grateful. This might be blindingly obvious for anyone who has already encountered this situation, but I have not, and a great deal of searching and reading has not turned up any examples.
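    For what it's worth, here is one toy illustration of the inheritance mismatch, using plain Python and sqlite3 rather than any ORM (everything here is made up): the object model has a subclass relationship, the table does not, so some mapping strategy has to be chosen; here, a single table with a type discriminator column, which is exactly the kind of decision an ORM's inheritance mapping makes for you.

        import sqlite3

        class Person:
            def __init__(self, name):
                self.name = name

        class Employee(Person):          # inheritance exists in the object model...
            def __init__(self, name, salary):
                super().__init__(name)
                self.salary = salary

        conn = sqlite3.connect(":memory:")
        # ...but tables have no notion of inheritance, so a mapping strategy is needed:
        # one table for the whole hierarchy plus a discriminator column.
        conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, type TEXT, name TEXT, salary REAL)")

        def save(p):
            if isinstance(p, Employee):
                conn.execute("INSERT INTO person (type, name, salary) VALUES ('employee', ?, ?)",
                             (p.name, p.salary))
            else:
                conn.execute("INSERT INTO person (type, name, salary) VALUES ('person', ?, NULL)",
                             (p.name,))

        def load_all():
            out = []
            for type_, name, salary in conn.execute("SELECT type, name, salary FROM person"):
                out.append(Employee(name, salary) if type_ == "employee" else Person(name))
            return out

        save(Person("Ann")); save(Employee("Bob", 50000))
        print([type(p).__name__ for p in load_all()])   # e.g. ['Person', 'Employee']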

    Read the article

  • What icon would you use to denote an XML (not rss) feed available [closed]

    - by mplungjan
    Given two sites - one aimed at regular users and one for automated access. The first site is the best known, so many are (still) screen scraping that site for data. It is preferable to have them move to the other site, where the same data is available in XML format. What icon (plus text/title) on a page you are about to screen scrape would make you pay attention and decide to see what it was about? Examples from a Google Image search for "xml icon".

    Read the article

  • Simplifying C++11 optimal parameter passing when a copy is needed

    - by Mr.C64
    It seems to me that in C++11 lots of attention was paid to simplifying the returning of values from functions and methods, i.e. with move semantics it's possible to simply return heavy-to-copy but cheap-to-move values (while in C++98/03 the general guideline was to use output parameters via non-const references or pointers), e.g.:

        // C++11 style
        vector<string> MakeAVeryBigStringList();

        // C++98/03 style
        void MakeAVeryBigStringList(vector<string>& result);

    On the other hand, it seems to me that more work should be done on input parameter passing, in particular when a copy of an input parameter is needed, e.g. in constructors and setters. My understanding is that the best technique in this case is to use templates and std::forward<>, e.g. (following the pattern of this answer on C++11 optimal parameter passing):

        class Person {
            std::string m_name;
        public:
            template <class T,
                      class = typename std::enable_if<
                          std::is_constructible<std::string, T>::value
                      >::type>
            explicit Person(T&& name)
                : m_name(std::forward<T>(name)) {
            }
            ...
        };

    Similar code could be written for setters. Frankly, this code seems like boilerplate and is complex, and it doesn't scale up well when there are more parameters (e.g. if a surname attribute is added to the above class). Would it be possible to add a new feature to C++11 to simplify code like this (just like lambdas simplify C++98/03 code with functors in several cases)? I was thinking of a syntax with some special character, like @ (since introducing a &&& in addition to && would be too much typing :), e.g.:

        class Person {
            std::string m_name;
        public:
            /* Simplified syntax to produce boilerplate code like this:
               template <class T,
                         class = typename std::enable_if<
                             std::is_constructible<std::string, T>::value
                         >::type> */
            explicit Person(std::string@ name)
                : m_name(name) // implicit std::forward as well
            {
            }
            ...
        };

    This would also be very convenient for more complex cases involving more parameters, e.g.:

        Person(std::string@ name, std::string@ surname)
            : m_name(name), m_surname(surname) {
        }

    Would it be possible to add a simplified, convenient syntax like this to C++? What would be the downsides of such a syntax?

    Read the article

  • SQL query. An unusual join. DB implemented in sqlite-3

    - by user02814
    This is essentially a question about constructing an SQL query. The db is implemented with sqlite3. I am a relatively new user of SQL. I have two tables and want to join them in an unusual way. The following is an example to explain the problem.

        Table 1 (t1):
        id    year   name
        -------------------------
        297   2010   Charles
        298   2011   David
        300   2010   Peter
        301   2011   Richard

        Table 2 (t2):
        id    year   food
        ---------------------------
        296   2009   Bananas
        296   2011   Bananas
        297   2009   Melon
        297   2010   Coffee
        297   2012   Cheese
        298   2007   Sugar
        298   2008   Cereal
        298   2012   Chocolate
        299   2000   Peas
        300   2007   Barley
        300   2011   Beans
        300   2012   Chickpeas
        301   2010   Watermelon

    I want to join the tables on id and year. The catch is that id must match exactly, but if there is no exact match in Table 2 for the year in Table 1, then I want to choose the next (lower) available year. A selection of the kind that I want to produce would give the following result:

        id    year   matchyr   name      food
        -------------------------------------------------
        297   2010   2010      Charles   Coffee
        298   2011   2008      David     Cereal
        300   2010   2007      Peter     Barley
        301   2011   2010      Richard   Watermelon

    To summarise, id=297 had an exact match for year=2010 given in Table 1, so the corresponding line for id=297, year=2010 is chosen from Table 2. id=298, year=2011 did not have a matching year in Table 2, so the next available year (less than 2011) is chosen. As you can see, I would also like to know what that matched year (whether matched exactly or inexactly) actually was. I would very much appreciate (1) an indication (yes/no) of whether this is possible in SQL alone, or whether I need to look outside SQL, and (2) a solution, if that is not too onerous.
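    To the yes/no part: this can be done in SQL alone. One way is a correlated subquery that picks, per t1 row, the greatest t2.year not exceeding t1.year; the sketch below runs it through Python's built-in sqlite3 module using the tables from the question (other formulations, e.g. a join against a MAX-per-group derived table, work too):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE t1 (id INTEGER, year INTEGER, name TEXT);
            CREATE TABLE t2 (id INTEGER, year INTEGER, food TEXT);
        """)
        conn.executemany("INSERT INTO t1 VALUES (?,?,?)",
            [(297, 2010, 'Charles'), (298, 2011, 'David'),
             (300, 2010, 'Peter'), (301, 2011, 'Richard')])
        conn.executemany("INSERT INTO t2 VALUES (?,?,?)",
            [(296, 2009, 'Bananas'), (296, 2011, 'Bananas'), (297, 2009, 'Melon'),
             (297, 2010, 'Coffee'), (297, 2012, 'Cheese'), (298, 2007, 'Sugar'),
             (298, 2008, 'Cereal'), (298, 2012, 'Chocolate'), (299, 2000, 'Peas'),
             (300, 2007, 'Barley'), (300, 2011, 'Beans'), (300, 2012, 'Chickpeas'),
             (301, 2010, 'Watermelon')])

        query = """
            SELECT t1.id, t1.year, t2.year AS matchyr, t1.name, t2.food
            FROM t1
            JOIN t2 ON t2.id = t1.id
                   AND t2.year = (SELECT MAX(x.year) FROM t2 AS x
                                  WHERE x.id = t1.id AND x.year <= t1.year)
            ORDER BY t1.id;
        """
        for row in conn.execute(query):
            print(row)   # (297, 2010, 2010, 'Charles', 'Coffee'), (298, 2011, 2008, 'David', 'Cereal'), ...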

    Read the article

  • How does throwing an ArgumentNullException help?

    - by Scott Whitlock
    Let's say I have a method:

        public void DoSomething(ISomeInterface someObject)
        {
            if (someObject == null)
                throw new ArgumentNullException("someObject");

            someObject.DoThisOrThat();
        }

    I've been trained to believe that throwing the ArgumentNullException is "correct" but an "Object reference not set to an instance of an object" error means I have a bug. Why? I know that if I was caching the reference to someObject and using it later, then it's better to check for nullity when passed in, and fail early. However, if I'm dereferencing it on the next line, why are we supposed to do the check? It's going to throw an exception one way or the other. Edit: It just occurred to me... does the fear of the dereferenced null come from a language like C++ that doesn't check for you (i.e. it just tries to execute some method at memory location zero + method offset)?
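    The fail-early argument is easiest to see when the argument is stored for later use rather than dereferenced immediately; a small sketch (Python, where None plays the role of null; all names are made up):

        class Worker:
            def __init__(self, some_object):
                # guard clause: fail at construction time, where the offending call site is obvious
                if some_object is None:
                    raise ValueError("some_object must not be None")
                self._some_object = some_object

            def run_later(self):
                # without the guard, the failure would happen here instead,
                # possibly long after construction and far from the caller
                # that actually passed None
                return self._some_object.do_this_or_that()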

    Read the article

  • Why don't we just fix Javascript?

    - by Jan Meyer
    Javascript sucks because of a few fatal flaws well pointed out by Douglas Crockford. We talk a lot about it. But the point here is: why don't we fix it? CoffeeScript of course does that and a lot more. But the question here is a different one: if we provide a webservice that can convert one version of Javascript to the next, and so on, we can keep the language up to date. Such a conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, the site only needs to take the output of the code-transform and paste it in! The effort has immediate benefits for those businesses interested in the results. The rest can sleep tight: their code will continue to run. If we provide backward code-transformation as well, then older browsers can also run ANY new code!

    Migration scripts should be created by those who make changes to a language. Today they don't, which is in itself a fundamental omission! It should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it work should be on them. With this system, any site will be able to run in any browser, but new code will run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. This is a misconception. We are all trapped in committee-thinking, and we drag along things that only worsen our performance over time! We cause an ever-increasing complexity that is hard to overestimate. Javascript is easily fixed. The fact is we don't fix it.

    As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward migration scripts. Whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, ALL migrations have worked flawlessly. In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequences of bad leadership. But the competent don't need to play by the same rules. They can (and must) change them. They are the path forward. It is about time to leave the past behind and pursue the leanest, meanest, no, eternal functionality. That would in and of itself revolutionize programming.

    So, why don't we stop whining and fix programming? Begin with Javascript and change the world. Even if the browser doesn't hook into this system, coders could. So language updaters should take it upon themselves to provide migration scripts. Once they exist, browsers may take advantage of them.

    Read the article

  • Web development tools/approaches?

    - by Clinton
    My day job involves a bit of programming, but I've recently been attempting some web development for personal reasons. I've got Drupal up and running and done basic things like adding new content (i.e. headings and text) and adding modules and themes, but I'm not sure how to approach actually designing pages. When I mucked around with webpages 15 years ago, it was just a mixture of HTML, CSS and Javascript, generally written with a text editor. Have things changed, or is this the way I'd make a Drupal page today? If it makes a difference, in my case the pages I want to design simply have static content, but I'd like them to be easily updatable.

    Read the article

  • Are too many assertions code smell?

    - by Florents
    I've really fallen in love with unit testing and TDD - I am test infected. However, unit testing is used for public methods. Sometimes, though, I do have to test some assumptions/assertions in private methods too, because some of them are "dangerous" and refactoring can't help further. (I know, testing frameworks allow testing private methods.) So, it became a habit of mine that (almost always) the first and the last line of a private method are both assertions. I guess this couldn't be bad (right?). However, I've noticed that I also tend to use assertions in public methods too (as in the private ones), just "to be sure". Could this be "testing duplication", since the public-method assumptions are already tested by the unit testing framework? Could someone think of too many assertions as a code smell?
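    For concreteness, the habit described above looks roughly like this (a Python sketch with illustrative names): a precondition assertion on entry and a postcondition assertion just before returning.

        def _normalize_scores(scores):
            # precondition: first line asserts the assumption the method relies on
            assert scores, "scores must be non-empty"
            total = sum(scores)
            result = [s / total for s in scores]
            # postcondition: last line asserts what the method promises its callers
            assert abs(sum(result) - 1.0) < 1e-9
            return result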

    Read the article

  • Need help with this question [closed]

    - by Jaime
    Occasionally, multiplying the sizes of nested loops can give an overestimate for the Big-Oh running time. This result happens when an innermost loop is infrequently executed. Give the Big-O analysis of the running time. Implement the following code and run for several values of N, and compare your analysis with the actual running times.

        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i * i; j++)
                if (j % i == 0)
                    for (int k = 0; k < j; k++)
                        sum++;
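    One way to compare the analysis with reality is to translate the loops directly and time them for a few values of N; a sketch in Python (the values of N are kept small because pure Python is slow, but doubling N still exposes the growth rate):

        import time

        def run(n):
            total = 0
            for i in range(1, n + 1):
                for j in range(1, i * i + 1):
                    if j % i == 0:              # true only when j is a multiple of i
                        for k in range(j):
                            total += 1
            return total

        # doubling n should multiply the time by roughly 2**k for an O(n**k) algorithm
        for n in (20, 40, 80):
            start = time.perf_counter()
            run(n)
            print(n, round(time.perf_counter() - start, 3), "seconds")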

    Read the article

  • Apply WCF For Large Projects

    - by svlytns
    We have a large project that has nearly 20 modules in it. We want to use WCF for the business layer. We see three ways to implement WCF in our project:

    1. Use only one data contract and one operation contract. Send the class name and method name to the operation, create the class by reflection, and then invoke the method on the WCF side.
    2. Put all modules in one WCF application and create their data contracts and operation contracts.
    3. Create a separate WCF application for each module and host them separately.

    Which one is the best way? I need your ideas. TIA!

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP MVC framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container. I am not asking which one to use but rather how to use one. Without going into too much detail, the MVC is divided into modules, which have controllers, which render views for actions. This is how a request is processed:

    - a Main object instantiates a Request object and a Router, and injects the Request into the Router to figure out which module was called
    - it then instantiates the Module object and sends the Request to that
    - the Module creates a ModuleRouter and sends it the Request to figure out the controller and action
    - it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view)
    - the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure this out and inject it into the ViewRenderer
    - the Module then calls the action method on the Controller and the render method on the ViewRenderer

    For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has an initViewRenderer() method. These init methods get called right before the component is needed, not earlier, and if the component was already set they will not initialize it. This allows the components to be switched, but does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so that if the directory structure and naming convention are followed, everything should work without even touching the config.

    If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI at run time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?). Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance on using them in real life, not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.
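    For reference, the register-then-resolve pattern usually boils down to something like this tiny sketch (Python rather than PHP; everything here is illustrative): the container is configured once at the composition root, components receive their dependencies through their constructors, and nothing except the composition root ever sees the container.

        class Container:
            """Minimal DI container: stores factories, resolves them lazily."""
            def __init__(self):
                self._factories = {}
                self._instances = {}

            def register(self, name, factory):
                self._factories[name] = factory

            def resolve(self, name):
                if name not in self._instances:              # lazy, shared instances
                    self._instances[name] = self._factories[name](self)
                return self._instances[name]

        class Request: ...
        class Router:
            def __init__(self, request): self.request = request

        # composition root: the only place that knows about the container
        container = Container()
        container.register("request", lambda c: Request())
        container.register("router", lambda c: Router(c.resolve("request")))

        router = container.resolve("router")   # Router gets its Request injected
        print(type(router.request).__name__)   # Request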

    Read the article

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a student of computer science trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online course, book, web tutorial), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome, and the scientific applications of all of the above. Heck, I even love Newton's laws and conic sections. But when it comes to algorithms, I'm just not astounded. They don't seem to offer any new insight into human cognition at all. I was expecting algorithms to be preconception-shattering and mind-altering, but time and time again they fail miserably. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) programming working life was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.

    Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality?

    Read the article

  • GPL'ing code of a third party?

    - by Mark
    I am facing the following dilemma at the moment. I am using code from a scientific paper in a commercial project. Basically, I copied and pasted the code from the paper's PDF into my code editor and use it in my own code. The code in the paper does not have any copy restrictions or license (like the GPL), so I thought I would be OK using it in a commercial project. However, I have seen several GPL-licensed open source projects that use the exact same code from the paper, to the point of having the same variable names as in the paper. So what happened here is that a GPL license was put on a third party's non-GPL'ed code. Are these open source projects in violation of the GPL, or would I be in violation of the GPL because I use code which has been GPL'ed? My common sense tells me it is not allowed to GPL somebody else's non-GPL'ed code (like in this case, from the paper), but I thought I would ask anyway.

    Read the article

  • Getting started and learning programming?

    - by Blagersdeath
    Hello, I am looking to get started in programming. I am young and know some HTML, as I am taking a Web Design class at my school now. I am planning to apply to Full Sail University when I graduate from high school, but I would like to get started now so that I am ahead of the game if I get accepted. I want to learn any and all programming languages. I would appreciate it if anyone can help me out by telling me where I can learn, be it a book, website, articles, a blog, or whatever. Thanks.

    Read the article
