Search Results

Search found 14074 results on 563 pages for 'programmers'.

Page 125/563 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • Are the criticisms against Dart valid?

    - by Hassan
    According to this Wikipedia article, Microsoft, Apple, Mozilla, and others criticize Dart, a programming language Google introduced to run in web browsers, because they feel "it seems harmful (cf. VBScript in IE)". But Dart also compiles to JavaScript, so a web application written in Dart can run on any modern browser. So are their concerns valid? Can Dart really be a threat to the web's openness?

    Read the article

  • web server response code 500

    - by Bryan Kemp
    I realize that this may spur a religious discussion, but I have discussed this with friends and gotten great but conflicting answers, and the actual documentation is of little help. What do the 500-series response codes from the web server mean? "Internal Server Error", but that is vague. My assumption is that it means something bad happened to the server (file system corruption, no connection to the database, network issue, etc.) but not specifically a data-driven error (divide by zero, record missing, bad parameter, etc.). Something to note: there are some web client implementations (the default Android and BlackBerry HTTP clients) that do not allow access to the HTML body if the server response is 500, so there is no way to determine from the client what caused the issue. What I have been implementing recently is a web service that returns a JSON payload wrapped in a response object that contains more specific error information if the error is data-related, but the server response will be 200 since it finished the actual processing. Thoughts?
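
    For illustration, here is a minimal sketch of that kind of envelope in Python (the field names are invented): the HTTP status stays 200 for data-level failures, and the payload itself carries the specific error information.

        import json

        def make_envelope(payload=None, error=None):
            # error describes an application-level failure (bad parameter,
            # record missing, ...); transport-level failures would still
            # surface as 5xx from the server itself.
            return json.dumps({
                'ok': error is None,
                'error': error,
                'data': payload,
            })

        print(make_envelope(payload={'result': 42}))
        print(make_envelope(error={'code': 'RECORD_MISSING',
                                   'detail': 'no row with id 7'}))

    A client can then branch on the ok flag without touching the HTTP status line at all, which sidesteps the clients that hide the body on a 500.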

    Read the article

  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. So one idea I have for a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB. So with that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so that obviously has to apply to a database too, and it clearly does--whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there comes a point of scale where loading the entire database into primary memory just won't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory in order for you to be able to do anything with it. I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large, too. So my question is: how is I/O generally done with large databases like the one I described above, at least 1 TB storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work--what data structures should be involved?
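
    As a toy illustration of the offset-index idea (not how any real engine is implemented), here is a sketch in Python: records are appended to a file on disk, and only a small key-to-offset index stays in primary memory, so a read costs one seek instead of a scan.

        import json

        class OffsetIndexedStore:
            """Append-only record file plus an in-memory key -> offset index."""

            def __init__(self, path):
                self.path = path
                self.index = {}                  # key -> byte offset of the record

            def put(self, key, record):
                line = (json.dumps({'key': key, 'record': record}) + '\n').encode()
                with open(self.path, 'ab') as f:
                    self.index[key] = f.tell()   # remember where this record starts
                    f.write(line)

            def get(self, key):
                with open(self.path, 'rb') as f:
                    f.seek(self.index[key])      # one seek, no scan of the file
                    return json.loads(f.readline())['record']

    Real engines go one step further: the index itself is paged out to disk (typically as a B-tree), so only the hot part of the index needs to be resident, which is what keeps a multi-terabyte database usable from a few gigabytes of RAM.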

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as in orthogonal instruction set. Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system. C. Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice versa. Now there is a hypothesis published in ACM Queue by Tim Bray - which some have called the Bánffy-Bray Type System Criteria - that he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability. Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing. My question is: What is the evidence that an API has exceeded its orthogonality in the context of types? Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - because the type system of these languages isn't sophisticated, but that doesn't matter since you're dealing with strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs that are all 'different' from the web server API he was working on previously. Because you're not just dealing with strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective preference actually varies objectively with context).

    Read the article

  • HTML parsing for multiple input files using Java code [closed]

    - by mkp
    String temp1;
    FileReader f0 = new FileReader("123.html");
    StringBuilder sb = new StringBuilder();
    BufferedReader br = new BufferedReader(f0);
    while ((temp1 = br.readLine()) != null) {
        sb.append(temp1);
    }
    br.close();
    String para = sb.toString().replaceAll("<br>", "\n");
    String textonly = Jsoup.parse(para).text();
    System.out.println(textonly);
    FileWriter f1 = new FileWriter("123.txt");
    for (int i = 0; i < textonly.length(); i++) {
        char ch = textonly.charAt(i);
        if (ch == '\n') {
            f1.write("\r\n");    // convert to Windows line endings
        } else {
            f1.write(ch);
        }
    }
    f1.close();

    I have this code, but it takes only one file at a time. I want to process multiple files. I have 2000 files, named "1.html" through "2000.html". So I want to wrap this in a loop like for (i = 1; i <= 2000; i++), and after executing, a separate .txt file should be generated for each input.

    Read the article

  • How to promote code reuse and documentation?

    - by Graviton
    As the team lead of 10+ developers, I want to promote code reuse. Over the past few years we have written a lot of code, and much of it is repetitive. The problem now is that a lot of this code is just a duplicate of some other code, or a slight variation of it. I have started a movement (discussion) on how to turn code into components so that they can be reused in future projects, but the problem is that I am afraid new developers, or other developers who are ignorant of the components, will just go ahead and write their own thing. Is there any way to remind developers to reuse the components, improve the documentation, and contribute to the underlying component, instead of duplicating the existing code and tweaking it or just writing their own? How can the components be made easily discoverable and easily usable, so that everyone will use them? Edit: I think every developer knows about the benefit of reusable components and wants to use them; it's just that we don't know how to make them discoverable. Also, when developers are writing code, they know they should write reusable code but lack the motivation to do so.

    Read the article

  • Best way for a technical manager to stay up to date on technology

    - by JoelFan
    My manager asked for a list of technical blogs he should follow to stay current on technology. His problem is that he keeps hearing terms he hasn't heard of (e.g. NoSQL, sharding, Azure, service bus, etc.) and he would prefer to at least have a fighting chance of knowing something about them without having to look them up reactively. I also think he wants a big picture of all the emerging technologies and where they fit together, instead of just learning about each thing in isolation. He asked about blogs, but I'm thinking print magazines may also help.

    Read the article

  • C# books for the experienced programmer

    - by Michael Dmitry Azarkevich
    So I've been programming in C# for 3 years now (and programmed in various languages for 3 years before that as well), and most of the stuff I learned I pieced together from the internet. The thing is, I want to understand C# more formally and in depth, so I would like to get some books on the subject. Any books you'd recommend? Also, I've heard good things about "C# 4.0 in a Nutshell", "Pro C# 2010 and the .NET 4 Platform" and "CLR via C#". What do you think of these? (The people at Stack Overflow told me to take it here. Please, please tell me I'm in the right place this time.)

    Read the article

  • How popular is ITIL in the rest of the world?

    - by Oz123
    I am sorry if this question is not 100% programming-related; I just didn't know where else to ask. Consider yourself lucky if you don't know what ITIL is. You can tell from my tone that I don't like it - I find ITIL the complete opposite of how an IT company should work, being too bureaucratic and complicated. In Germany, where I work, it seems to be very popular, and I have been asked in several job interviews whether I know ITIL. Do you know how popular it is in the rest of the world? Should I worry about ITIL, or can I snub it? I must also ask my European colleagues: why do you think ITIL is so popular? Is there strong empirical evidence that ITIL works? By empirical, I mean not personal experiences of the kind "We are a company that is working with ITIL...". I can hardly imagine a multi-million dollar company like Apple or Google working with ITIL, but I can also hardly see how it can benefit small companies...

    Read the article

  • How to deal with colleagues who refuse to follow practices?

    - by Adrian Shum
    I was discussing with a colleague what should be used when one DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, my colleague says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but obviously it is not good practice to use a non-PK unique column as a "foreign key" just to gain a bit of ease in writing SQL during support, because it means fewer table joins. Though I pointed out that his approach is conceptually incorrect and causes practical problems too, he seems willing to trade off correctness in the data model in exchange for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is doubtless not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to overall development direction and conventions.

    Read the article

  • mp3 file streaming/download - Apache server memory issue

    - by Manolis
    I have a website on which users can upload mp3 files (uploadify), stream them using an HTML5 player (jPlayer) and download them using a PHP script (www.zubrag.com/scripts/). When a user uploads a song, the path to the audio file is saved in the database, and I'm using that data to play the song and show a download link for it. The problem I'm experiencing is that, according to my host, this method is using a lot of memory on the server, which is dedicated. Link to script: http://pastebin.com/Vus8SRa7 How should I handle the script properly? And what would be the best way to track down the problem? Any ideas on cleaning up the code? Any help much appreciated.

    Read the article

  • Is there ever a reason to use C++ in a Mac-only application?

    - by Emil Eriksson
    Is there ever a reason to use C++ in a Mac-only application? I'm not talking about integrating external libraries which are C++; what I mean is using C++ because of its advantages in a particular application. While the UI code must be written in Obj-C, what about logic code? Because of the dynamic nature of Objective-C, C++ method calls tend to be ever so slightly faster, but does this have any effect in any imaginable real-life scenario? For example, would it make sense to use C++ over Objective-C for simulating large particle systems where some methods would need to be called over and over in a short time? I can also see some cases where C++ has a more appropriate "feel". For example, when doing graphics, it's nice to have vector and matrix types with appropriate operator overloads and methods. This, to me, seems like it would be a bit clunkier to implement in Objective-C. Also, Objective-C objects can never be treated as plain old data structures in the same manner as C++ types, since Objective-C objects always have an isa pointer. Wouldn't it make sense to use C++ instead in something like this? Does anyone have a real-life example of a situation where C++ was chosen for some parts of an application? Does Apple use any C++ except in the kernel? (I don't want to start a flame war here; both languages have their merits, and I use both equally, though in different applications.)

    Read the article

  • What to do if I am working in a language that I don't like

    - by Sayem Ahmed
    Hi there. I really don't know if this is the right place to ask this question, but if it isn't, then I guess someone will let me know. Anyway, I am working in a software development firm which is currently using PowerBuilder to develop a mid-size ERP solution. The work environment and company management are so great that they may be the best in the whole of Bangladesh. The only problem is the technology currently being used, namely PowerBuilder. Now, I am a guy who tends to prefer modern development technologies, like DI containers, ORM, TDD, jQuery, etc. PowerBuilder is a great tool too, but I have never liked the techniques used to build PB applications. These techniques are so inheritance-dependent that they often create a great deal of suffering. I remember two days ago I had to change some processing logic in a core user object, and as a result I had to test and re-test all the forms the application has (apparently there are almost 20 forms, each with 3-4 kinds of functionality). Also, learning PB is tough, because online material on it is very, very scarce. I can't afford to read all the documentation PB provides, because I have hard deadlines on the work I have to do. Another problem with PB is that applications tend to rely on business logic implemented in the database, which makes debugging a nightmare. As a result, I don't feel motivated to work in this IDE/system/framework (or whatever) anymore. My productivity has greatly decreased, and I am not delivering quality code. I think I have the following options: 1) Remain in the current job, keep delivering poor code, and let my productivity decrease day by day, taking salaries and bonuses but not doing my job the way I should; 2) Search for a new job. At this point, number 2 seems a good option, but there are also some issues. As I mentioned before, our management may be the best in the country. Our company owner is himself a software developer with 24 years of experience in software development. He is currently our team leader and system analyst. He is by far the greatest manager and boss I have ever seen. He understands a developer's mentality very well (as he IS himself a developer). He is also a great, kind and generous guy. Our company is only a start-up with 10 developers. Among them, only 3-4 people know the business logic behind the ERP, and I am one of them. If I leave my current job, it may hamper the development of this product, which I really don't want. I couldn't decide what to do in this situation, so I turned to the community for advice.

    Read the article

  • How to refactor a Python “god class”?

    - by Zearin
    Problem: I’m working on a Python project whose main class is a bit of a “God Object”. There are so friggin’ many attributes and methods! I want to refactor the class. So Far… For the first step, I want to do something relatively simple; but when I tried the most straightforward approach, it broke some tests and existing examples. Basically, the class has a loooong list of attributes—but I can clearly look over them and think, “These 5 attributes are related…These 8 are also related…and then there’s the rest.” getattr: I basically just wanted to group the related attributes into a dict-like helper class. I had a feeling __getattr__ would be ideal for the job. So I moved the attributes to a separate class, and, sure enough, __getattr__ worked its magic perfectly well… at first. But then I tried running one of the examples. The example subclass tries to set one of these attributes directly (at the class level). But since the attribute was no longer “physically located” in the parent class, I got an error saying that the attribute did not exist. @property: I then read up on the @property decorator. But I also read that it creates problems for subclasses that want to do self.x = blah when x is a property of the parent class. Desired: Have all client code continue to work using self.whatever, even if the parent’s whatever property is not “physically located” in the class (or instance) itself. Group related attributes into dict-like containers. Reduce the extreme noisiness of the code in the main class. For example, I don’t simply want to change this: larry = 2 curly = 'abcd' moe = self.doh() into this: larry = something_else('larry') curly = something_else('curly') moe = yet_another_thing.moe() …because that’s still noisy. Although that successfully turns a simple attribute into something that can manage the data, the original had 3 variables and the tweaked version still has 3 variables. However, I would be fine with something like this: stooges = Stooges() And if a lookup for self.larry fails, something would check stooges and see if larry is there. (But it must also work if a subclass tries to do larry = 'blah' at the class level.) Summary: Want to replace related groups of attributes in a parent class with a single attribute that stores all the data elsewhere. Want to work with existing client code that uses (e.g.) larry = 'blah' at the class level. Want to continue to allow subclasses to extend, override, and modify these refactored attributes without knowing anything has changed. Is this possible? Or am I barking up the wrong tree?
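
    For what it's worth, here is one minimal sketch of the stooges idea, using the hypothetical names from the question. A __getattr__ on the metaclass covers class-level reads, the one on the class covers instance reads, and a subclass that assigns larry = 'blah' in its body simply shadows both fallbacks through normal lookup:

        class Stooges:
            """Related attributes, grouped into one helper object."""
            def __init__(self):
                self.larry = 2
                self.curly = 'abcd'

        class GroupedMeta(type):
            # Lookups on the class object itself (e.g. Parent.larry) land
            # here, and only after normal class lookup has failed.
            def __getattr__(cls, name):
                return getattr(cls.stooges, name)

        class Parent(metaclass=GroupedMeta):
            stooges = Stooges()

            # Lookups on instances (self.larry) land here, and only after
            # the instance dict and the class hierarchy come up empty.
            def __getattr__(self, name):
                return getattr(self.stooges, name)

        class Kid(Parent):
            larry = 'blah'         # shadows the fallback via normal lookup

        print(Parent().curly)      # 'abcd', served from the Stooges container
        print(Kid.larry)           # 'blah'

    Note this only covers reads: an assignment like self.larry = 5 lands in the instance dict and shadows the container rather than updating it, so if writes need to propagate back, __setattr__ would have to join the party as well.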

    Read the article

  • C# return variables

    - by pb01
    In a debate regarding return variables, some members of the team prefer a method to return the result directly to the caller, whereas others prefer to declare a return variable that is then returned to the caller (see the code examples below). The argument for the latter is that it allows a developer who is debugging the code to find the return value of the method before it returns to the caller, thereby making the code easier to understand; this is especially true where method calls are daisy-chained. Are there any guidelines as to which is the most efficient, and/or are there any other reasons why we should adopt one style over the other? Thanks

    private bool Is2(int a)
    {
        return a == 2;
    }

    private bool Is3(int a)
    {
        var result = a == 3;
        return result;
    }

    Read the article

  • The best Drupal and JavaScript developer?

    - by hakanito
    I've read a lot of JS articles and books by Nicholas Zakas and Addy Osmani, in my opinion evangelists in the field. But I am also a Drupal developer, and these guys are not. Many of the techniques they talk about, such as AMD and RequireJS, are great, but it's hard to know how to integrate them when it comes to Drupal (and do it right, of course). So my question is whether there are any recognized developers out there with strong JavaScript AND Drupal experience?

    Read the article

  • Reverse horizontal and vertical for an HTML table

    - by porton
    There is a two-dimensional array describing an HTML table. Each element of the array consists of: the cell content, rowspan, and colspan. Every row of this two-dimensional array corresponds to the <td> cells of a <tr> of the table which my software should generate. I need to "reverse" the array (interchange the vertical and horizontal directions). So far I have considered an algorithm based on this idea: make a rectangular matrix of the size of the table and store in every element of this matrix the corresponding index of the element of the above-mentioned array. (Note that two elements of the matrix may be identical due to rowspan/colspan.) Then I could use this matrix to calculate rowspan/colspan for the inverted table. But this idea seems bad to me. Any other algorithms? Note that I program in PHP.
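
    That matrix idea can be condensed: record each cell's origin coordinates while laying the table out, then mirror the origins across the diagonal and swap the spans. A sketch in Python rather than PHP, with invented (content, rowspan, colspan) tuples for cells:

        def transpose(table):
            # table: list of rows; each cell is a (content, rowspan, colspan) tuple
            occupied = set()   # (row, col) grid slots already covered by some span
            placed = []        # (row, col, content, rowspan, colspan) cell origins
            for r, row in enumerate(table):
                c = 0
                for content, rs, cs in row:
                    while (r, c) in occupied:        # skip slots spanned from above
                        c += 1
                    placed.append((r, c, content, rs, cs))
                    for dr in range(rs):
                        for dc in range(cs):
                            occupied.add((r + dr, c + dc))
                    c += cs
            # Mirror each origin across the diagonal, swap the spans, and
            # bucket the cells back into rows of the transposed table.
            rows = {}
            for nr, nc, content, nrs, ncs in sorted(
                    (c, r, content, cs, rs) for r, c, content, rs, cs in placed):
                rows.setdefault(nr, []).append((content, nrs, ncs))
            return [rows.get(i, []) for i in range(max(rows, default=-1) + 1)]

    This assumes the input describes a proper rectangular grid. An output row that is entirely covered by rowspans comes back as an empty list, which still needs its own <tr> when the HTML is generated.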

    Read the article

  • Why are PHP function signatures so inconsistent?

    - by Shamim Hafiz
    I was going through some PHP functions and I could not help noticing the following:

    <?php
    function foo(&$var) { }

    foo($a); // $a is "created" and assigned to null

    $b = array();
    foo($b['b']);
    var_dump(array_key_exists('b', $b)); // bool(true)

    $c = new StdClass;
    foo($c->d);
    var_dump(property_exists($c, 'd')); // bool(true)
    ?>

    Notice the array_key_exists() and property_exists() functions. In the first, the property name (the key, for an array) is the first parameter, while in the second it is the second parameter. Intuitively, one would expect them to have similar signatures. This can lead to confusion, and development time may be wasted making corrections of this kind. Shouldn't PHP, or any language for that matter, consider making the signatures of related functions consistent?

    Read the article

  • What strategy to use when starting in a new project with no documentation?

    - by Amir Rezaei
    What is the best way to proceed when there is no documentation? For example, how do you learn the business rules? I have taken the following steps: Since we are using an ORM tool, I have printed a copy of the database schema, where I can see the relations between objects. I have made a list of short names/table names that I will get explained. The project is a client/server enterprise application using the MVVM pattern.

    Read the article

  • Creating reproducible builds to verify Free Software

    - by mikkykkat
    Free Software is about freedom and privacy. Open Source software is great, but making that fully practical usually doesn't happen. Most Free Software developers publish binaries that we can't verify were really compiled from the source code, rather than having something bad injected already! We have the freedom to change the code, but privacy for ordinary users is missing. For desktop software there are a lot of languages and opportunities to create Free Software with a reproducible build process (compiling the source code always produces the exact same binary), but for mobile computing I don't know if the same thing is possible. Mobile devices are probably the future of computing, and Android is the only Open Source environment so far which accepts Java for coding. Compiling the same Android application won't result in the exact same binary every time. For Open Source Android apps, how can we verify that the produced binary (.apk) was really compiled from the source code? Is there any way to create reproducible builds with the Android SDK, or does Java fail here for Free Software? Is there any Java software ever written with a reproducible build?
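
    Verification itself is the easy half: build the same source twice, independently, and compare digests. A sketch in Python, with hypothetical artifact paths:

        import hashlib

        def digest(path):
            """SHA-256 of a build artifact, read in chunks."""
            h = hashlib.sha256()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            return h.hexdigest()

        # Two builds of the same source tree, ideally on different machines.
        # The build is reproducible only if the digests match bit for bit.
        if digest('build-a/app.apk') == digest('build-b/app.apk'):
            print('reproducible: binaries are identical')
        else:
            print('not reproducible: binaries differ')

    The hard half is exactly what the question asks: getting the toolchain to emit bit-identical output in the first place, since .apk files are zip archives that embed entry timestamps and signatures.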

    Read the article

  • Does relying a lot on IntelliSense and documentation while coding make you a bad programmer? [duplicate]

    - by sharp12345
    This question already has an answer here: Forgetting basic language functions due to use of IDE, over-reliance? [duplicate] 4 answers. Is a programmer required to learn and memorize all syntax, or is it OK to keep some documentation handy? Would it affect the way managers look at coders? What are the downsides of depending on IntelliSense, auto-complete technologies, and PDF documentation?

    Read the article

  • Beginner's tips to learn Vim

    - by Nathan Campos
    I'm the type of developer that only uses fully-featured GUI programmer's editors: on Windows I use Notepad++, on my Mac I use TextMate, and on Linux I use gedit. But now I'm starting to develop inside my Linux server, which doesn't have any window manager installed, and I see this as a beautiful time to learn how to use Vim, which I have always had trouble understanding - I can't even open a file to edit in Vim. So I want to know: which is the best ebook for a complete beginner to learn how to use this editor? I really came to love Vim after I saw all the awesome things you can do with it, and this is the perfect moment to learn it. PS: It would be a lot better if it had a Kindle or ePub version.

    Read the article

  • How was Git designed?

    - by Mark Canlas
    My workplace recently switched to Git and I've been loving (and hating!) it. I really do love it, and it is extremely powerful. The only part I hate is that sometimes it's too powerful (and maybe a bit terse/confusing). My question is... how was Git designed? Just using it for a short amount of time, you get the feeling that it can handle many obscure workflows that other version control systems could not. But it also feels elegant underneath. And fast! This is no doubt due in part to Linus's talent. But I'm wondering, was the overall design of Git based on something? I've read about BitKeeper, but the accounts are scant on technical details. The compression, the graphs, getting rid of revision numbers, emphasizing branching, stashing, remotes... where did it all come from? Linus really knocked this one out of the park, on pretty much the first try! It's quite good to use once you're past the learning curve.

    Read the article

  • How to check for undocumented methods using tools provided by Apple?

    - by Mahbubur R Aaman
    The following tools are provided by Apple: dlopen, dlsym, objc_getClass, sel_registerName, objc_msgSend. These deal with Objective-C selectors, which are strings. Objective-C selectors are stored in a special region of the binary, and therefore Apple can extract the content from there and check whether you've used some undocumented Objective-C methods. How can I utilize these tools to find undocumented Objective-C methods in my own binary? EDIT: Recently, one of my apps was rejected for using an undocumented method: -[UIDevice setOrientation]. Since selectors are independent of the class you're messaging, even if my custom class defines a -setOrientation: that has nothing to do with UIDevice, there is a possibility of being rejected.
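
    Since selectors end up as plain strings in the compiled binary (which is exactly what makes them checkable, as described above), a crude self-audit is just a string scan. A sketch in Python, with a hypothetical binary path and a hypothetical private-selector list:

        # Scan a compiled binary for selector strings known to be private.
        PRIVATE_SELECTORS = [b'setOrientation:']   # extend as needed

        with open('MyApp.app/MyApp', 'rb') as f:
            blob = f.read()

        for sel in PRIVATE_SELECTORS:
            if sel in blob:
                print(sel.decode(), 'appears in the binary')

    As the EDIT notes, this has the same false-positive problem as Apple's check: a string match cannot tell which class the selector is actually sent to.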

    Read the article

  • What's the better user experience: Waiting once at startup for a long time or waiting frequently for a short time?

    - by Roflcoptr
    I'm currently designing an application that involves a lot of calculation. I generally have two possibilities, both of which I have tested: 1) During startup the application calculates only the most important values and those values that take a long time to compute. So the user has to wait approximately 15 seconds during startup, but on the other hand many user interactions require recalculation, so the user often has to wait 2-3 seconds after clicking somewhere until the application has calculated and loaded all the values. 2) I load everything during startup. This takes 90 to 120 seconds... quite a long time, but the big advantage is that all user interactions execute immediately. So what would you generally consider the better approach: running all time-consuming operations during startup, or running them when needed?
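
    For concreteness, a sketch of the two strategies in Python, with a hypothetical heavy calculation behind a cache so each value is only ever computed once:

        from functools import lru_cache

        @lru_cache(maxsize=None)        # lazy: computed on first use, then cached
        def value(name):
            ...                         # placeholder for one heavy calculation

        def startup_option_1(important_names):
            for n in important_names:   # ~15 s now; 2-3 s per value on first use later
                value(n)

        def startup_option_2(all_names):
            for n in all_names:         # ~90-120 s now; every later access is instant
                value(n)

    Framed this way, the two options differ only in how much of the cache is warmed up front.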

    Read the article
