Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 208/563 | < Previous Page | 204 205 206 207 208 209 210 211 212 213 214 215  | Next Page >

  • still about perl vs python but (to me) slightly different from what has been asked [closed]

    - by B Chen
    Being a newbie to coding, I read on this site that Perl is still as viable as it has ever been, while Python, quoted from someone else's post, is good but just "snake oil" (not sure what this refers to exactly, though). So from the responses in that post, I got the gist that Perl is good and worth learning. My question is - pardon me for phrasing it in this "non-programmer's" way - which one should I learn FIRST? (I am actually currently learning R.) Here is the background info: (a) I will be using it mostly for data mining and statistical analysis. (b) Will there be a "first" and "later" issue with learning either Perl or Python? That is, after I become competent with one language, would there be a need to learn the second one (for a similar task)? (c) If there are circumstances where I must learn the second one, would learning Perl FIRST be better than learning Python? I hope to learn as much as I can from exchanging info here, so please provide more than just "it depends" answers. Many thanks to all who choose to respond to my query.

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API and for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start. The first approach simply declares getCurrentTime as follows: getCurrentTime():number where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following: myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){ console.log("current time = " + evt.getDispatcher().getCurrentTime()); }); The second approach declares getCurrentTime differently: getCurrentTime():CustomNumber where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event - just listen to the returned object for value changes! Listening to this event would look like the following: myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){ console.log("current time = " + evt.getDispatcher().valueOf()); }); Note that CustomNumber has a valueOf method which returns a native number and lets the returned CustomNumber object be used as a number, so: var result = myVideoPlayer.getCurrentTime() + 5; will work! So in the first approach, we listen to an object for a change in its property's value. In the second one, we directly listen to the property for a change in its value. There are multiple pros and cons for each approach; I just want to know which one developers would prefer to use.
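
    The following is only a minimal sketch, with assumed names (the hand-rolled listener list and the internal setValue method are not from the post), of how the CustomNumber wrapper in the second approach could combine valueOf with a VALUE_CHANGED event:

    ```javascript
    // Minimal sketch (assumed names; setValue is hypothetical) of the
    // CustomNumber wrapper from the second approach.
    function CustomNumber(initialValue) {
      var value = initialValue;
      var listeners = {};
      var self = this;

      this.addEventListener = function (type, callback) {
        (listeners[type] = listeners[type] || []).push(callback);
      };

      // valueOf lets the object take part in arithmetic: getCurrentTime() + 5 works.
      this.valueOf = function () {
        return value;
      };

      // The video player would call this internally as playback progresses.
      this.setValue = function (newValue) {
        if (newValue === value) return;
        value = newValue;
        (listeners["VALUE_CHANGED"] || []).forEach(function (cb) {
          cb({ getDispatcher: function () { return self; } });
        });
      };
    }

    // Usage, mirroring the question:
    var t = new CustomNumber(0);
    t.addEventListener("VALUE_CHANGED", function (evt) {
      console.log("current time = " + evt.getDispatcher().valueOf());
    });
    t.setValue(12.5);        // logs "current time = 12.5"
    console.log(t + 5);      // 17.5, thanks to valueOf
    ```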

    Read the article

  • Pirate Problem In Interview Question

    - by Hafiz
    Someone asked me this question in an interview, so I want to know what technical, algorithmic, or strategic solution we can provide. Suppose I am the leader of a band of pirates who have looted 100 kg of gold. Every pirate has one bullet in his gun, and every pirate wants to take the others' shares. There are 5 of us in total, including me. What strategy should I use to kill the others while staying safe myself - or is there a way to decrease the probability of being killed?

    Read the article

  • How to avoid checked exceptions in Java?

    - by deamon
    I consider checked exceptions a design mistake in the Java language. They lead to leaky abstractions and a lot of clutter in the code. It seems that they force the programmer to handle exceptions early, although in most cases they are better handled later. So my question is: how can checked exceptions be avoided? My idea is to execute the actual code inside an exception translator using lambda expressions. Example: ExceptionConverter.convertToRuntimeException(() -> { // do things that could throw checked exceptions here }); If, for example, an IOException occurs, it gets rethrown as an exception with the same name but from a different class hierarchy (based on RuntimeException). This approach would effectively remove the need to handle or declare checked exceptions. Exceptions could then be handled where and if it makes sense. Another solution would be to simply declare throws Exception on each method. Which solution do you think is better? Do you know any better approach to avoid (suppress) checked exceptions in Java?
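
    ExceptionConverter is not a standard JDK class, so, purely as a sketch of the idea described above (the class name and the functional interface are assumptions), such an exception translator might look roughly like this - note that this simplified version wraps everything in a plain RuntimeException rather than mapping each checked exception to a same-named unchecked counterpart, as the question envisions:

    ```java
    import java.io.IOException;

    // Minimal sketch of the "exception translator" idea; ExceptionConverter and
    // CheckedRunnable are assumed names, not standard library classes.
    public class ExceptionConverter {

        @FunctionalInterface
        public interface CheckedRunnable {
            void run() throws Exception;
        }

        public static void convertToRuntimeException(CheckedRunnable block) {
            try {
                block.run();
            } catch (RuntimeException e) {
                throw e;                        // already unchecked, pass through
            } catch (Exception e) {
                throw new RuntimeException(e);  // wrap checked exceptions
            }
        }

        public static void main(String[] args) {
            try {
                convertToRuntimeException(() -> {
                    throw new IOException("pretend some I/O failed");  // a checked exception
                });
            } catch (RuntimeException e) {
                System.out.println("Caught unchecked wrapper, cause: " + e.getCause());
            }
        }
    }
    ```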

    Read the article

  • handling a GET error properly

    - by Andrew Heath
    I have a website that takes two primary GET strings: ?type=GAME&id=SomeGameID and ?type=SCENARIO&id=SomeScenarioID. For reasons unknown, I have recently begun receiving requests for erroneous GET strings from both Yandex and Baidu. They are always of the form ?type=GAME&id=SomeScenarioID. None of my users are triggering these errors, so I am (sort of) confident that this is not due to an HTML template error somewhere on my part. There is also no HTTP_REFERER showing up in the $_SERVER array, so I'm guessing these are direct requests built from bad database data on their part. I see two options for dealing with these bad requests, and would like to know which is recommended - or whether there are other, better options I have not thought of: (1) simply 404 the request, since it is incorrect, or (2) redirect the request to ?type=SCENARIO&id=SomeScenarioID, because the scenario IDs are always valid and the breakage is due to asking for the wrong type.
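
    As a minimal sketch of the two options (the validation helpers isValidGameId and isValidScenarioId are hypothetical placeholders for whatever lookups the site already does), the handling could look something like this in PHP:

    ```php
    <?php
    // Option 2 first: permanently redirect requests that pair type=GAME with a
    // valid scenario ID to the canonical SCENARIO URL; otherwise fall back to a
    // plain 404 (option 1). isValidGameId/isValidScenarioId are assumed helpers.
    $type = isset($_GET['type']) ? $_GET['type'] : '';
    $id   = isset($_GET['id'])   ? $_GET['id']   : '';

    if ($type === 'GAME' && !isValidGameId($id) && isValidScenarioId($id)) {
        header('Location: ?type=SCENARIO&id=' . urlencode($id), true, 301);
        exit;
    }

    if (!isValidGameId($id) && !isValidScenarioId($id)) {
        http_response_code(404);
        exit;
    }
    ```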

    Read the article

  • How long should one wait before going for an MS?

    - by Jungle Hunter
    Removed duplicate aspects of the question. Hi! I'm an undergraduate Master's student. I have what seems to be a good offer in hand in Singapore (if location plays a role). Is an undergraduate Master's good enough, or should one go for an MS? How long should one wait (in a job) before going for an MS, if that's the decision? Does one lose the progress made on the job before the Master's? Note: an undergraduate Master's is when my degree is called a Master's but it is my first degree; it is a four-year degree.

    Read the article

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed. Think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.), or the new ways of handling multitasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equivalent application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it is becoming appealing to promote the re-use of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

    Read the article

  • Non-Profit Technology for Non-Profits?

    - by TomJ
    I've been looking around for a way to give back to the community, but I haven't found the right fit yet, so an idea came to mind: a non-profit technology "company" that targets non-profits. Do these exist? I've been doing some Google searches and can only find software targeted at non-profits that is created by for-profit companies or that charges what I believe to be an outrageous amount, conferences directed towards non-profits and the technology they may use - or articles complaining about the digital divide and how non-profits view technology as key but don't have the funds or the knowledge to employ it. Pseudo "business model": an open source 501(c)(3) organization that directly targets non-profits to help bridge the "digital divide." Most services would be free, consulting fees would be charged for customization, donations would be accepted, and government grants would be sought. This would enable non-profits to keep pace with the for-profits in the technology sector, but at little to no cost. Perhaps the first "industry" to be targeted would be those that address key social needs, like unemployment services or food banks.

    Read the article

  • Learning the GO programming language and its prospects [closed]

    - by SHOUBHIK BOSE
    Possible Duplicate: What are the chances of Google's Go becoming a mainstream language? Recently I've started experimenting with the Go programming language from Google. It's a programmer-friendly language with the simplicity of Python. I was wondering whether companies other than Google will also start using Go for development, and if they do, what the prospects of being a Go programmer would be.

    Read the article

  • Task Management - How important is it for an entry-level developer?

    - by Naveen Kumar
    I hold a master's in CS and I'm now an entry-level mobile apps developer. I always plan things when starting or doing any project, both at work and on the projects I do at home (for passion), so that I can deliver the project on time - but sometimes I run out of time, e.g. 10 tasks land on a day that my time forecast said would only take 2. As I'm at a beginner level, I want your suggestions: how important is task management for a person like me and for achieving my goals? My target for the next 3 years is a Project Manager or similar role - I believe these time-management skills will be a needed quality.

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation. I currently use a moderately complex system to serialize the data used in my projects: "marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface). Those objects can then be serialized automatically (e.g. obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' has a reference to, either directly (s.t. a.b = b) or indirectly (s.t. a.c.b = b for some object 'c'). This is very simple and basically assigns specific storage entries to specific objects. I then have "container"-type objects that can be serialized similarly (in fact they are, or can be, "marked"), but they don't serialize the directly referenced "marked" objects into their own storage entries: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving can only work through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading. I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in Python, and use Pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with the "attach to last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about databases.
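
    For concreteness, here is a minimal sketch (with hypothetical helpers: get_references(obj) yields an object's outgoing references, is_marked(obj) tests for a storage entry) of the plain "attach to last marked node" traversal the question refers to:

    ```python
    def cluster(marked_roots, get_references, is_marked):
        """Group every reachable unmarked object with the closest marked object
        on the traversal path from a marked root ("attach to last marked node")."""
        clusters = {root: set() for root in marked_roots}
        seen = set()

        def visit(obj, owner):
            if id(obj) in seen:
                return                      # first marked owner to reach it wins,
            seen.add(id(obj))               # so shared unmarked objects are not duplicated
            if is_marked(obj):
                owner = obj
                clusters.setdefault(owner, set())
            else:
                clusters.setdefault(owner, set()).add(obj)
            for ref in get_references(obj): # follow the directed edges of the graph
                visit(ref, owner)

        for root in marked_roots:
            visit(root, root)
        return clusters
    ```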

    Read the article

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a student of computer science trying to soak up as much information on the topic as I can in my free time. I keep returning to algorithms time and again in various formats (online courses, books, web tutorials), but the concept fails to hold my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome, and the scientific applications of all of the above. Heck, I even love Newton's laws and conic sections. But when it comes to algorithms, I'm just not astounded. They don't strike me as offering new insight into human cognition at all. I was expecting algorithms to shatter my preconceptions and be mind-altering, but time and time again they fail miserably. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

    Read the article

  • design pattern for unit testing? [duplicate]

    - by Maddy.Shik
    This question already has an answer here: Unit testing best practices for a unit testing newbie (4 answers). I am a beginner at developing test cases, and I want to follow good patterns for developing them rather than following some person's or company's specific ideas. Some people don't write test cases at all and just develop the way their seniors did in their projects. I am facing a lot of problems, such as object dependencies (when I want to test a method which persists object A, I first have to persist object B, since A is a child of B). Please suggest some good books or sites, preferably ones for learning design patterns for unit test cases. A reference to some good source code, or a discussion of dos and don'ts, would also do wonders, so that I can avoid mistakes by learning from the experience of others.
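
    One pattern often suggested for exactly the parent/child set-up problem described above is the Test Data Builder. The sketch below is only an illustration with hypothetical Parent/Child classes (nothing here comes from the original question); in a real project the builder would also take care of persisting the parent so each test states only what it cares about:

    ```java
    // Hypothetical entities used only to illustrate the Test Data Builder idea.
    class Parent {
        final String name;
        Parent(String name) { this.name = name; }
    }

    class Child {
        final Parent parent;
        final String label;
        Child(Parent parent, String label) { this.parent = parent; this.label = label; }
    }

    // The builder supplies sensible defaults, so the "must create B before A"
    // ceremony disappears from individual tests.
    class ChildBuilder {
        private Parent parent = new Parent("default parent");
        private String label = "default label";

        ChildBuilder withParent(Parent parent) { this.parent = parent; return this; }
        ChildBuilder withLabel(String label)   { this.label = label;   return this; }

        Child build() { return new Child(parent, label); }
    }

    class ChildBuilderDemo {
        public static void main(String[] args) {
            // A test only spells out the details it actually asserts on.
            Child child = new ChildBuilder().withLabel("invoice #42").build();
            System.out.println(child.parent.name + " / " + child.label);
        }
    }
    ```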

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that repeatedly pushes conflicting appointments later until there are no more conflicts.

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    The constraints are that appointments cannot end after 15:00 and cannot start before 9:00. This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    This results in the following resolution, which obviously breaks those constraints, so the last appointment would have to be pushed to the following day:

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    This is clearly sub-optimal: it performs 3 appointment moves when the conflict could have been resolved with one - if we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that have to be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or depth-first solution search? When do I know if a solution is "good enough"?

    Read the article

  • How do you put a database online?

    - by Dezrik
    I have a very beginner question regarding web development. I've had some experience with JSP, Hibernate, and MAMP, creating a simple system for tracking inventory and sales. But this was all done locally on one computer. This time, I want to create a system that is accessible online, to help my mother track her business wherever she goes, so it would have similar aspects like tracking inventory and sales. I understand that you have to have a server on which to host all the files. But I don't understand how you can access your database online, or what sorts of applications or products should be used. Currently the host of my database is localhost. How do you put it online such that you can still do CRUD operations? Are there any guides for doing this?
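
    To make the "localhost vs. online" part concrete, here is a minimal sketch (the host name, database name, and credentials are hypothetical): once the database runs on a hosted server, the CRUD code stays the same and only the connection URL changes from localhost to the remote host. The same idea applies to the hibernate.connection.url setting when Hibernate is used.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;

    // Minimal sketch with a hypothetical remote host; requires the MySQL
    // Connector/J driver on the classpath.
    public class RemoteDatabaseCheck {
        public static void main(String[] args) throws Exception {
            // Previously something like "jdbc:mysql://localhost:3306/inventory".
            String url = "jdbc:mysql://db.example.com:3306/inventory";
            try (Connection conn = DriverManager.getConnection(url, "appUser", "appPassword")) {
                System.out.println("Connected to remote database: " + !conn.isClosed());
            }
        }
    }
    ```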

    Read the article

  • Getting that buzz back?

    - by kyndigs
    I have been working in development at a good company for a while now, since graduating from university. I really enjoy it, have some great fun in the office, and enjoy everything I am doing. But recently I have lost that old buzz: I can't bring myself to code outside of work. A while back I could be outside of work, come up with a nice idea, and go away and develop it, but I feel that buzz has gone. I still love developing and technology, but I just can't find the energy to do it when I am not at work. Has anyone else gone through a phase like this? What did you do to combat it and get that energy and buzz back? Maybe I need a new technology, or a holiday!

    Read the article

  • What makes signed integers behave differently?

    - by 000
    In this example of x86_64 hex/disassembled code I see:

        48B80000000000000000    mov rax, 0x0

        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       943863860
        Unsigned Int     943863860
        Signed Int64     3472328296363079732
        Unsigned Int64   3472328296363079732
        Float            4.630555e-05
        Double           1.39804332763832e-76
        String           48B80000000000000000

    which to me appears to have the same functionality as:

        48C7C000000000    mov rax, 0x0

        48C7C000000000
        Signed Byte      52
        Unsigned Byte    52
        Signed Short     14388
        Unsigned Short   14388
        Signed Int       927152180
        Unsigned Int     927152180
        Signed Int64     3472328377950746676
        Unsigned Int64   3472328377950746676
        Float            1.163599e-05
        Double           1.39806836023098e-76
        String           48C7C000000000

    How is the first example treated differently from the second example?

    Read the article

  • Is it typical for a provider of web services to also provide client libraries?

    - by HDave
    My company is building a corporate Java web app and we are leaning towards using GWT-RPC as the client-server protocol, for performance reasons. However, in the future we will need to provide an API for other enterprise systems to access our data as well. For this, we were thinking of a SOAP-based web service. In my experience it is common for commercial providers of enterprise web applications to provide client libraries (Java, .NET, C#, etc.). Is this generally the case? I ask because, if so, why bother using SOAP or REST or any standard web services protocol at all? Why not just create client libraries that communicate via GWT-RPC?

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.Net MVC application and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test". I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know half an hour later when my integration tests break; I want to know immediately. Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned. What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified. What I'm stuck doing: to test the UI, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified. I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or, if not bad, then not the expected way to get said feedback?)

    Read the article

  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the (Microsoft-)supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things, it lags behind Microsoft. And Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (which might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem like they could build on Silverlight and/or buy Mono and quickly be there. There's no need for native Visual Studio; cross-compiling and remote debugging are fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand new projects are choosing to use C++. Nobody wants to tell management 18–24 months from now "sorry" should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.

    Read the article

  • Why does this static field always get initialized over-eagerly?

    - by TheSilverBullet
    I am looking at this excellent article from Jon Skeet. In it, Jon Skeet says that when executing the demo code we can expect three different kinds of behaviour. To quote that article: "The runtime could decide to run the type initializer on loading the assembly to start with... Or perhaps it will run it when the static method is first run... Or even wait until the field is first accessed..." When I try this out (on framework 4), I always get the first result - that is, the type initializer runs as soon as the assembly is loaded, before the static method is first run. I have tried running this multiple times and get the same result (I tried both the debug and release builds). Why is this so? Am I missing something?
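
    For reference, the demo boils down to a type of roughly this shape (this is only a stand-in sketch, not the article's exact code), where the question is at which point the "Type initializer ran" line appears relative to the lines printed from Main:

    ```csharp
    using System;

    // Stand-in sketch (not Jon Skeet's exact demo): a type with a static field
    // whose initializer logs when it runs.
    class Test
    {
        static readonly string Value = Init();

        static string Init()
        {
            Console.WriteLine("Type initializer ran");
            return "initialized";
        }

        public static void PrintValue()
        {
            Console.WriteLine("PrintValue: " + Value);
        }
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine("Main started");  // the questioner observes the initializer
            Test.PrintValue();                  // line appearing even before this point
        }
    }
    ```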

    Read the article

  • Is "Interface inheritance" always safe?

    - by Software Engeneering Learner
    I'm reading "Effective Java" by Josh Bloch and in there is Item 16 where he tells how to use inheritance in a correct way and by inheritance he means only class inheritance, not implementing interfaces or extend interfaces by other interfaces. I didn't find any mention of interface inheritance in the entire book. Does this mean that interface inheritance is always safe? Or there are guidlines for interface inheritance?

    Read the article

  • What should a PHP programmer know?

    - by emchinee
    I've dug through the database here and didn't find any answer to my question. What is the standard for a PHP programmer to know? I mean, literally, what group of language functions, mechanisms, and variables should a person know to consider oneself a (good) PHP programmer? (I know "being good" is about more than language syntax; still, I'm considering the syntax of plain PHP only.) To give an example of what I mean: functions to control HTTP sessions and cookies, functions to control connections with databases, functions to control file handling, functions to control XML, etc. I intentionally omit phrases like "security" or "patterns" or "framework", as they apply to every programming language. I hope I made myself clear; any input appreciated :) Note: Michael J.V. is right in claiming that databases are independent of the language, so to put my question more precisely and emphasise the difference: practices or security are ideas to implement (there is no 'Pattern' object with a 'Decorator()' method, is there?), while using databases means knowing mysqli and its set of methods.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves a transformer, a pipeline, a processing algorithm, and a processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            engine.Process(record);
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            Result Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error(
                            "Validation of subrecord {0} failed for record {1}",
                            i, record
                        );
                    }
                }
            }
        }

    There are a few important things to notice:

    - A single error at the deepest level causes three log entries (ugly? DoS?)
    - Thrown exceptions contain all important and useful information
    - Logging only happens when failure to do so would cause loss of useful information at a lower level

    Thoughts and concerns:

    - I don't like having so many log entries for each error
    - I don't want to lose important, useful data; the exceptions contain all the important information, but the stack trace is typically the only thing displayed besides the message
    - I can log at different levels (e.g., warning, informational)
    - The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced)
    - The information available at higher levels should not be passed to the lower levels

    So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

    Read the article
