Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 162/563 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • Choosing the project pattern: advice for this case

    - by Lucas Rodrigues Sena
    I have an MVC4 / Entity Framework project, and I have to adapt it so that the DLLs can be updated without the system stopping work. I'm using Portable Areas, but I'm having difficulty creating 5 modules, each with about 5 pieces of functionality, that work fully independently as a DLL and database. The options I see are: 1. Reflection over DLLs, so the system works "on the fly" (I can't get this working in MVC4 at the moment). 2. Portable Areas (I would need one area per module and functionality, i.e. 5*5 areas) - this seems confusing and I'm afraid it's ridiculous. 3. Implement WCF on MVC4 - is that compatible? 4. Is there a better way?

    Read the article

  • Computer vision algorithms (how is this possible?)

    - by Maxim Gershkovich
    I recently stumbled across a company that has created what appears to be a computer vision technology capable of detecting shoplifting automatically and alerting its users. LINK Watching some of the videos and examples provided by the company has left me completely baffled and amazed as to how on earth they may have achieved this functionality. I understand that no-one here will be able to tell me exactly how this was achieved, but is anyone aware of - and could point me to - research in this field, or alternatively could someone provide details as to how something like this could be implemented, or guidance on where one might start? My understanding was that computer vision algorithms were many years away from being this sophisticated. Is this sort of application really possible? Anyone willing to hazard a guess at how they achieved this?

    Read the article

  • When is an object oriented program truly object oriented?

    - by Syed Aslam
    Let me try to explain what I mean: say I present a list of objects and I need to get back the object a user selects. The following are the classes I can think of right now: ListViewer, Item, App [the calling class]. In a GUI application, clicking on a particular item usually selects it; in a command-line application, some input - say an integer representing that item - does. Let us go with the command-line application here. A function lists all the items and waits for the user's choice, an integer. So here I get the choice: is the choice to be conceived as an object? And based on the choice, I return the corresponding object from the list. Does writing the program the way explained above make it truly object oriented? If yes, how? If not, why? Or is the question itself wrong and I shouldn't be thinking along those lines?
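    To make the shape concrete, here is a minimal Java sketch of the design being described. ListViewer and Item come from the question; everything else (field names, the select method) is hypothetical. The choice stays a plain integer, and the object-oriented part is that ListViewer owns the mapping from that integer back to an Item.

      import java.util.List;

      class Item {
          private final String name;
          Item(String name) { this.name = name; }
          String getName() { return name; }
      }

      class ListViewer {
          private final List<Item> items;
          ListViewer(List<Item> items) { this.items = items; }

          // List the items, then turn the user's 1-based numeric choice into an object.
          Item select(int choice) {
              return items.get(choice - 1);
          }
      }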

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question has come up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience, and what I like, that's stopping me from running after the latest and greatest. I'm a C++ developer; by this I mean I love close-to-the-metal programming. I have no problems tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half-baked. But I also love the C++0x stuff, and use it a lot - and all C++ code, as long as it's not cumbersome compared to its C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly. And then there are all the other developers who seem to be way more forward-looking: .NET 4.0, MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL and ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, that WPF is the way to go. I hear that C++ is godawful and that you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production-proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes; ATL - very sparingly; everything else... well, I'm not very keen on it. Most of the huge problems I've run into were related to frameworks, not the language itself: some overridden operator here, a bad hierarchy there, poor class design here, mystical castings there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have the configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the information relative to the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig). The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the solutions available are: 1. I pass the whole MainConfig to AlgoA and AlgoB. This way AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained. 2. I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the printLevel information repeated three times. 3. I create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass to AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig. Have you ever found yourself in this situation? How did you solve it? Which option is stylistically clearer to a new reader of the code?
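    For concreteness, a minimal Java sketch of the third option, reusing the names from the question (MainConfig, AlgoA, PrintingConfig, printLevel); the remaining details are assumptions. The shared PrintingConfig travels alongside each algorithm-specific config, so AlgoA never sees AlgoB's settings and the print level is stored only once.

      class PrintingConfig {
          final int printLevel;
          PrintingConfig(int printLevel) { this.printLevel = printLevel; }
      }

      class AlgoAConfig { /* AlgoA-specific parameters */ }
      class AlgoBConfig { /* AlgoB-specific parameters */ }

      class AlgoA {
          private final AlgoAConfig config;
          private final PrintingConfig printing;
          AlgoA(AlgoAConfig config, PrintingConfig printing) {
              this.config = config;
              this.printing = printing;
          }
      }

      class MainConfig {
          final AlgoAConfig algoAConfig = new AlgoAConfig();
          final AlgoBConfig algoBConfig = new AlgoBConfig();
          final PrintingConfig printingConfig = new PrintingConfig(2);
      }

      // Main wires it together: new AlgoA(config.algoAConfig, config.printingConfig)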

    Read the article

  • When and why you should use void (instead of e.g. bool/int)

    - by Jonas
    I occasionally run into methods where a developer chose to return something which isn't critical to the function. I mean, when looking at the code, it apparently works just as nicely as a void, and after a moment of thought I ask "why?" Does this sound familiar? Sometimes I would agree that it is often better to return something like a bool or an int rather than just a void. I'm not sure, though, about the pros and cons in the big picture. Depending on the situation, returning an int can make the caller aware of the number of rows or objects affected by the method (e.g., 5 records saved to MSSQL). If a method like "InsertSomething" returns a boolean, I can design the method to return true on success, else false. The caller can choose to act or not on that information. On the other hand, might it lead to a less clear purpose for a method call? Bad coding often forces me to double-check the method content. If a method returns something, that tells you it is the kind of method whose result you have to do something with. Another issue: if the method implementation is unknown to you, what did the developer decide to return that isn't function-critical? Of course you can comment it. The return value has to be processed, when the processing could have ended at the closing bracket of the method. And what happens under the hood? Did the called method return false because of a thrown error, or because of the evaluated result? What are your experiences with this? How would you act on this?
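    A small illustration of the trade-off, with hypothetical names, not taken from any particular codebase: the boolean variant lets the caller branch on the outcome, but it also hides why the call failed and invites callers to ignore the result.

      class RecordStore {
          // Void variant: failure has to surface some other way (an exception, a log entry).
          void insertRecord(String record) {
              // ... write to storage, throw on failure ...
          }

          // Boolean variant: the caller decides whether to act on the outcome.
          boolean tryInsertRecord(String record) {
              try {
                  insertRecord(record);
                  return true;
              } catch (RuntimeException e) {
                  return false; // swallows the reason - the question's "what happened under the hood?"
              }
          }
      }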

    Read the article

  • Lightweight, dynamic, fully JavaScript web UI library recommendations

    - by Matt Greer
    I am looking for recommendations for a lightweight, dynamic, fully JavaScript UI library for websites. It doesn't have to be amazing visually; the end result is for simple demos I create. What I want can be summed up as "Ext-like, but not GPL'ed, and a much smaller footprint". I want to be able to construct UIs dynamically and fully through code. My need for this is currently driven by this particle designer. Depending on what query parameters you give it, the UI components change (example 1, example 2). Currently this is written in Ext, but Ext's license and footprint are turn-offs for me. I like UKI a lot, but it's not very good for dynamically building UIs since everything is absolutely positioned. Extending Uki to support that is something I am considering. Ideally the library would let me make UIs with a pattern along the lines of:

      var container = new SomeUI.Container();
      container.add(new SomeUI.Label('Color Components'));
      container.add(new SomeUI.NumberField('R'));
      container.add(new SomeUI.NumberField('G'));
      container.add(new SomeUI.NumberField('B'));
      container.add(new SomeUI.CheckBox('Enable Alpha'));
      container.renderTo(someDiv);

    Read the article

  • Immutable Method in Java

    - by Chris Okyen
    In Java there is the final keyword, in lieu of the const keyword in C and C++. In the latter languages there are mutable and immutable methods, as stated in the answer by Johannes Schaub - litb to the question "How many and which are the uses of 'const' in C++?": use const to tell others that the method won't change the logical state of this object.

      struct SmartPtr {
          int getCopies() const { return mCopiesMade; }
      } ptr1;
      ...
      // returns mCopiesMade and is specified not to modify the object's state
      int var = ptr1.getCopies();

    How is this done in Java?
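    Java has no const member functions, so here is a sketch of the closest common idiom, with hypothetical names: keep the fields final, expose only accessors that do not mutate state, and let the guarantee come from the class being immutable rather than from a per-method qualifier.

      final class SmartPtr {
          private final int copiesMade;   // final field: cannot be reassigned after construction

          SmartPtr(int copiesMade) { this.copiesMade = copiesMade; }

          // Nothing at the signature level enforces "does not modify state", as const would in C++;
          // the guarantee comes from the class having no mutators and no non-final state.
          int getCopies() { return copiesMade; }
      }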

    Read the article

  • Is the development of CLI apps considered "backward"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether they are web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

    Read the article

  • How should modules access data outside their scope?

    - by Joe
    I run into this same problem quite often. First, I create a namespace and then add modules to this namespace. The issue I always run into is how best to initialize the application. Naturally, each module has its own startup procedure, so should this data (not code in some cases, just a list of items to run) stay with the module? Or should there be a startup procedure in the global namespace that holds the startup data for ALL the modules? Which is the more robust way of organizing this situation? Should some things be centralized, or should there be strict adherence to modules encapsulating everything about themselves? Though this is a general architecture question, JavaScript-centric answers would be really appreciated!

    Read the article

  • What are the canonical problem sets or problem domains for the different types of languages?

    - by SnOrfus
    I'm just wondering what some of the canonical problem sets are for certain types of languages, i.e. fill in the blanks: __ is the perfect language to use for solving __. The question arose from reading or hearing people say things like "such and such a module in our codebase would be much easier to implement using a functional language". Feel free to include examples that seem obvious to you.

    Read the article

  • Agile and different facets of software development

    - by arjun
    It is said that the Kanban methodology is suited for software maintenance and support areas, whereas Scrum is more suited for new product development. No process or method is complete. Using the right one will help you succeed, but none will guarantee success. Which agile approach is best suited for a project which is basically a re-platforming from one technology to another (say from Java to .NET)?

    Read the article

  • What is the way to understand someone else's giant uncommented spaghetti code? [closed]

    - by Anisha Kaul
    Possible Duplicate: I’ve inherited 200K lines of spaghetti code — what now? I have recently been handed a giant multithreaded program with no comments and have been asked to understand what it does, and then to improve it (if possible). Are there techniques that should be followed when we need to understand someone else's code? Or do we start straight away from the first function call and keep tracking the subsequent function calls? It's C++ (with multithreading) on Linux.

    Read the article

  • What deployment framework to use?

    - by jeruki
    We are trying to figure out what deployment method/framework to use with a Python application; it has a basic WSGI server to make some REST resources available, and a set of static web pages with the interface that are served through Apache. The situation is as follows: my team works on isolated parts of the program and sometimes together on specific modules; we have different testing servers and one master server; we all work locally, sync the code using git, and then run a bash script that copies the files from the Windows machines to the indicated Linux server (using SSH) and then restarts the app. After thinking about it, this doesn't seem to be the right way to do it: the script overwrites all the files on the server with the local files every time. We want to be able to work on the same server without the worry of overwriting other people's code, and we need to deploy to different servers to avoid restarting the service while others work with it; in the near future we need to deploy to the master, or to several clones of the master server, when the application reaches a more mature state. We found several options - Capistrano, Kwate, Chef or Fortress, even Fleet - but we wanted opinions from people who have used them, to be sure they are what we need. So these are the main questions: Are these the kind of programs we should be looking at to achieve a safe concurrent deployment process? Which one have you used/would you recommend, and why? Do you think it would help in our actual situation? Thank you so much for your feedback and advice on this.

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user stories to capture stakeholder 'intent' in a lightweight fashion, rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into SRS-like language with all the attributes, priorities, inputs, outputs, source, destination, etc. User stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'workflow' argument: capture initial requirements as user stories and later elaborate them into use cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment). Thus going from user stories to use cases, rather than from user stories to SRS to use cases. How are you all currently capturing user stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user stories?

    Read the article

  • Generic log analyzer that produces reports

    - by Eugene
    About 600 customers use our application. We have very detailed logs for everything that happens in the application, from changes in the data model, memory and CPU/GPU usage to clicks on the UI elements. We want to be able to parse the logs coming from these customers and analyze them to understand how users use our application and what happens internally in the application. Is there a log analyzer that can produce such reports automatically?

    Read the article

  • Staying OO and Testable while working with a database

    - by Adam Backstrom
    What are some OOP strategies for working with a database while keeping things testable? Say I have a User class and my production environment works against MySQL. I see a couple of possible approaches, shown here using PHP. 1. Pass in a $data_source with interfaces for load() and save(), to abstract the backend source of data. When testing, pass in a different data store.

      $user = new User( $mysql_data_source );
      $user->load( 'bob' );
      $user->setNickname( 'Robby' );
      $user->save();

    2. Use a factory that accesses the database and passes the result row to User's constructor. When testing, manually generate the $row parameter, or mock the object in UserFactory::$data_source. (How might I save changes to the record?)

      class UserFactory {
          static $data_source;
          public static function fetch( $username ) {
              $row = self::$data_source->get( [params] );
              $user = new User( $row );
              return $user;
          }
      }

    I have Design Patterns and Clean Code here next to me, but I'm struggling to find applicable concepts.
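    A minimal Java rendering of the first approach (the question's code is PHP; all names here are illustrative): the data source is an interface injected through the constructor, so tests can substitute an in-memory fake for the MySQL-backed implementation.

      interface UserDataSource {
          String load(String username);                    // returns the stored nickname, say
          void save(String username, String nickname);
      }

      class User {
          private final UserDataSource dataSource;
          private String username;
          private String nickname;

          User(UserDataSource dataSource) { this.dataSource = dataSource; }

          void load(String username) {
              this.username = username;
              this.nickname = dataSource.load(username);
          }

          void setNickname(String nickname) { this.nickname = nickname; }

          void save() { dataSource.save(username, nickname); }
      }

      // In tests: new User(new InMemoryUserDataSource());  in production: new User(new MySqlUserDataSource());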

    Read the article

  • How can I compare between web development technologies?

    - by Steve
    I would like experts to explain to me how I can compare web development tools or technologies, in order to be able to choose the right one. I'm tired of always searching the regular way: X technology vs Y technology. I'm tired of people's biased opinions, and usually I don't find a fair comparison. I have decided to put my question here about how I can compare them, so you can identify the main criteria for comparison; then I can compare them by myself and become able to choose the technology that is appropriate for the project I will develop. Note: by web development technologies I mean server-side languages (e.g. PHP). One important requirement for me, which can be considered a major one, is cost efficiency - I don't care about the current cost or the cost in the near future; what is more important for me is the cost further on, if, for example, the site becomes one of the 100 most visited sites. So, how can I compare the cost of different technologies for a future state of a site (such as being a very famous site), so I can scale easily without missing out on a good technology - like what happened to some sites that did not choose the most effective tool?

    Read the article

  • Is it a bad idea to use a flag variable to search for the MAX element in an array?

    - by Boris Treukhov
    Over my programming career I have formed a habit of introducing a flag variable that indicates that the first comparison has occurred, just like Microsoft does in its LINQ Max() extension method implementation:

      public static int Max(this IEnumerable<int> source) {
          if (source == null) { throw Error.ArgumentNull("source"); }
          int num = 0;
          bool flag = false;
          foreach (int num2 in source) {
              if (flag) {
                  if (num2 > num) { num = num2; }
              } else {
                  num = num2;
                  flag = true;
              }
          }
          if (!flag) { throw Error.NoElements(); }
          return num;
      }

    However, I have met some heretics lately who implement this by just starting with the first element and assigning it to the result - and, oh no, it turned out that the STL and Java authors have preferred the latter method. Java:

      public static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll) {
          Iterator<? extends T> i = coll.iterator();
          T candidate = i.next();
          while (i.hasNext()) {
              T next = i.next();
              if (next.compareTo(candidate) > 0)
                  candidate = next;
          }
          return candidate;
      }

    STL:

      template<class _FwdIt> inline
      _FwdIt _Max_element(_FwdIt _First, _FwdIt _Last)
      {   // find largest element, using operator<
          _FwdIt _Found = _First;
          if (_First != _Last)
              for (; ++_First != _Last; )
                  if (_DEBUG_LT(*_Found, *_First))
                      _Found = _First;
          return (_Found);
      }

    Are there any reasons to prefer one method over the other? Are there any historical reasons for this? Is one method more dangerous than the other?

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
    I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd-party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source). An important distinction between PL/SQL and Java stored procedures with regard to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests). I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for use in 10g? I'm wondering about issues with the embedded JVM, the JIT, garbage collection, or other things that might impact a heavily used database.
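    For concreteness, a hedged sketch of what the Java side of such a shim might look like; the class and method names are hypothetical, since the question does not show any code. The static socket is exactly the point of the PL/SQL-vs-Java distinction above: it can stay open across calls. The class would then be exposed to PL/SQL through a call specification.

      import java.net.DatagramPacket;
      import java.net.DatagramSocket;
      import java.net.InetAddress;

      public class SyslogUdpShim {
          // Kept open across calls: unlike PL/SQL sockets, a static Java socket is not
          // closed by dbms_session.reset_package within the same session.
          private static DatagramSocket socket;

          public static void send(String host, int port, String message) throws Exception {
              if (socket == null) {
                  socket = new DatagramSocket();
              }
              byte[] payload = message.getBytes("UTF-8");
              DatagramPacket packet =
                  new DatagramPacket(payload, payload.length, InetAddress.getByName(host), port);
              socket.send(packet);
          }
      }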

    Read the article

  • How do functional languages handle random numbers?

    - by Electric Coffee
    What I mean is that in nearly every tutorial I've read about functional languages, one of the great things about functions is said to be that if you call a function with the same parameters twice, you'll always end up with the same result. How on earth do you then make a function that takes a seed as a parameter and returns a random number based on that seed? I mean, this would seem to go against one of the things that are so good about functions, right? Or am I completely missing something here?
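    One common answer is that the generator stays a pure function by returning the next seed together with the value, and the caller threads that seed through subsequent calls. A minimal sketch of the idea in Java (the LCG constants are only illustrative):

      final class Rand {
          final int value;      // the "random" number produced
          final long nextSeed;  // pass this to the next call
          Rand(int value, long nextSeed) { this.value = value; this.nextSeed = nextSeed; }
      }

      final class PureRandom {
          // Same seed in, same (value, nextSeed) out: the function stays referentially transparent.
          static Rand next(long seed) {
              long newSeed = seed * 6364136223846793005L + 1442695040888963407L;
              int value = (int) (newSeed >>> 32);
              return new Rand(value, newSeed);
          }
      }

    Calling PureRandom.next(42L) twice yields the same pair both times; the sequence only advances because each call is handed the seed produced by the previous one.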

    Read the article

  • How to practice object oriented programming?

    - by user1620696
    I've always programmed in procedural languages and currently I'm moving towards object orientation. The main problem I've faced is that I can't see a way to practice object orientation effectively. I'll explain my point. When I learned PHP and C it was pretty easy to practice: it was just a matter of choosing something and thinking about an algorithm for that thing. In PHP, for example, it was a matter of sitting down and thinking: "well, just to practice, let me build one application with an administration area where people can add products". This was pretty easy; it was a matter of thinking of an algorithm to register a user, to log the user in, and to add the products. Combining these with PHP features, it was a good way to practice. Now, in object orientation we have lots of additional things. It's not just a matter of thinking about an algorithm, but of analysing requirements more deeply, writing use cases, figuring out class diagrams, properties and methods, setting up dependency injection and lots of other things. The main point is that in the way I've been learning object orientation, it seems that a good design is crucial, while in procedural languages one vague idea was enough. I'm not saying that in procedural languages we can write good software without design, just that for the sake of practicing it is feasible, while in object orientation it seems not feasible to go without a good design, even for practicing. This seems to be a problem, because if each time I want to practice I need to figure out tons of requirements, use cases and so on, it doesn't seem to be a good way to get better at object orientation, because it requires me to have one whole idea for an app every time I'm going to practice. Because of that, what's a good way to practice object orientation?

    Read the article

  • How to manage an issue tracker's backlog

    - by Josh Kelley
    We've been dutifully using Trac for several years now, and our "active tickets" list has grown to almost 200 items. These include bugs that are too low priority and too complicated to fix for now, feature requests that have been deferred, issues that have never really generated complaints but everyone agrees ought to be fixed someday, planned code refactorings and other design infelicities that we don't want to lose track of, etc. As a result, with almost 200 of these issues, the list is almost overwhelming; it's no longer useful as a source of what needs to be worked on right now. What's the best way to keep track of issues of this sort? Part of the problem is that some of these issues are such a low priority that they may never get done. I hate to lose track of these items (similar to not wanting to throw something out of my house in case I might need it someday); do I need to throw them out regardless (by marking them as wontfix) and assume I can find them in the future if I ever do need them?

    Read the article

  • Type checking and recursive types (Writing the Y combinator in Haskell/Ocaml)

    - by beta
    When explaining the Y combinator in the context of Haskell, it's usually noted that the straightforward implementation won't type-check in Haskell because of its recursive type. For example, from Rosettacode [1]: "The obvious definition of the Y combinator in Haskell cannot be used because it contains an infinite recursive type (a = a -> b). Defining a data type (Mu) allows this recursion to be broken."

      newtype Mu a = Roll { unroll :: Mu a -> a }

      fix :: (a -> a) -> a
      fix = \f -> (\x -> f (unroll x x)) $ Roll (\x -> f (unroll x x))

    And indeed, the “obvious” definition does not type-check:

      ?> let fix f g = (\x -> \a -> f (x x) a) (\x -> \a -> f (x x) a) g

      <interactive>:10:33:
          Occurs check: cannot construct the infinite type: t2 = t2 -> t0 -> t1
          Expected type: t2 -> t0 -> t1
            Actual type: (t2 -> t0 -> t1) -> t0 -> t1
          In the first argument of `x', namely `x'
          In the first argument of `f', namely `(x x)'
          In the expression: f (x x) a

      <interactive>:10:57:
          Occurs check: cannot construct the infinite type: t2 = t2 -> t0 -> t1
          In the first argument of `x', namely `x'
          In the first argument of `f', namely `(x x)'
          In the expression: f (x x) a
      (0.01 secs, 1033328 bytes)

    The same limitation exists in OCaml:

      utop # let fix f g = (fun x a -> f (x x) a) (fun x a -> f (x x) a) g;;
      Error: This expression has type 'a -> 'b
             but an expression was expected of type 'a
             The type variable 'a occurs inside 'a -> 'b

    However, in OCaml one can allow recursive types by passing the -rectypes switch:

      -rectypes   Allow arbitrary recursive types during type-checking. By default, only
                  recursive types where the recursion goes through an object type are supported.

    By using -rectypes, everything works:

      utop # let fix f g = (fun x a -> f (x x) a) (fun x a -> f (x x) a) g;;
      val fix : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b = <fun>
      utop # let fact_improver partial n = if n = 0 then 1 else n*partial (n-1);;
      val fact_improver : (int -> int) -> int -> int = <fun>
      utop # (fix fact_improver) 5;;
      - : int = 120

    Being curious about type systems and type inference, I have some questions I'm still not able to answer. First, how does the type checker come up with the type t2 = t2 -> t0 -> t1? Having come up with that type, I guess the problem is that the type (t2) refers to itself on the right side? Second, and perhaps most interesting, what is the reason for the Haskell/OCaml type systems to disallow this? I guess there is a good reason, since OCaml also will not allow it by default, even though it can deal with recursive types when given the -rectypes switch. If these are really big topics, I'd appreciate pointers to relevant literature. [1] http://rosettacode.org/wiki/Y_combinator#Haskell

    Read the article

  • WCF service and security

    - by Gaz83
    I've been building a WP7 app and now I need it to communicate with a WCF service I made, in order to make changes to an SQL database. I am a little concerned about security, as the user name and password for accessing the SQL database are in the App.Config. I have read in places that you can encrypt the user name and password in the config file. As the username and password are never exposed to the clients connected to the WCF service, would security in my situation be much of a problem? Just in case anyone suggests a method of security, I do not have SSL on my web server.

    Read the article
