Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building/SEO work, but I'm actually a web developer and took the job out of desperation to have one (I'm still quite young and have just finished my second year of university). By the third day my boss had realised that I'm not into that work at all, and since he had an idea for a web-based app, we started to plan it.

    I estimated it shouldn't take me longer than two months, but as I built it we soon realised we wanted to add more and more to make it even better. The development on my own lasted about four months, by which point it had become an enterprise-sized app, and we hired another programmer to work alongside me. He was excellent at what he did, but because I was assigned to be programmer/project manager I had to set milestones with deadlines, and we missed most of them: there was usually too much work, and my lack of experience kept me setting overly optimistic deadlines. We also kept adding features and changed the architecture of the application twice. My boss is a great guy and understands that adding features expands the time frame in which things get done, so he wasn't angry at me or the other developer. But I was (and still am) feeling bad that I'm poor at planning. I've gained loads of experience on the programming side, but I still lack the management and planning skills, and that drives me nuts.

    Over the last year I've dedicated probably eight months of work to this app (my studies took up the rest), and we're launching a closed beta this month. So my question is: how do I get better at planning and managing a project? How do you estimate times? What do you take into consideration when setting goals? I'm working alone again because the other developer moved away, but I'm sure we'll hire someone to help me maintain the app, so I need to get better at this. Any hints or pointers on the topic are appreciated.

    Read the article

  • 0.00006103515625 GB of RAM. Is .NET MicroFramework part of Windows CE?

    - by Rocket Surgeon
    The .NET Micro Framework claims to run in 64 KB of RAM and lists compatible target vendors. At the same time, the same vendors who ship the hardware and create board support packages (vendors like Adeneo) keep releasing something called a Windows CE 7 BSP for the same hardware targets. Obviously an OS as heavy as WinCE needs more than 64 KB of RAM. So the .NET Micro Framework is somehow related to WinCE, but how? Is it part of the bigger OS, is it the base of it, or are the two mutually exclusive?

    Background: 0.00006103515625 GB of RAM is the same as 64 KB of RAM. I am looking for a way to use Microsoft development tools for a small target like the BeagleBone. http://www.adeneo-embedded.com/About-Us/News/Release-of-TI-BeagleBone Nice. Now, where is a Micro Framework port for the same BeagleBone? Is it inside the released pile?
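
    For reference, the figure in the title is just the same quantity in different units; a quick sanity check (binary units, i.e. 1 GB = 2^30 bytes):

        using System;

        class RamCheck
        {
            static void Main()
            {
                // 64 KB expressed in GB, using binary units (1 GB = 2^30 bytes).
                double gb = 64.0 * 1024.0 / (1024.0 * 1024.0 * 1024.0);
                Console.WriteLine(gb); // prints 6.103515625E-05, i.e. 0.00006103515625
            }
        }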

    Read the article

  • UI Controls Copyright

    - by user3692481
    I'm developing cross-platform software that will run on Windows and Mac OS X. For user-experience reasons, I want it to have the same graphics on both platforms. I really like the Mac OS UI controls and I'd love to see them in the Windows version too. My question is: is it legal to "copy" UI components? I'm not going to copy icons or reproduce an existing Apple application; I would only "copy" some standard UI components such as buttons, progress bars, tree views, list views, etc. You can see them here: http://i.stack.imgur.com/9YzYQ.png http://i.stack.imgur.com/MWR6B.jpg IMHO they should not be copyrighted, for two reasons: they are implicitly used by any Mac OS application, and there are a lot of apps (for Windows and even web apps) that are "inspired by" the Mac look. Am I right?

    Read the article

  • Can you have too many DTO/BO mapping methods?

    - by Fredou
    I have a Windows service, two web services and a web interface that all need to follow the same path (data-wise), so I came up with two ways of structuring my solution. My concern is that the UI, the web services, etc. will each have their own kind of DTO (say, the model in ASP.NET MVC) that should be mapped to a service-layer DTO, which the service layer can then map to a BO, which in turn is mapped to the proper EF6 entity so that I can save it to the database. So I'm thinking of doing it this way to remove one level of mapping. Which one should I take? Or is there a third solution?
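
    For context, the chain described above looks roughly like this (a minimal sketch; the type names are hypothetical placeholders, not the real project types):

        // Hypothetical illustration of the mapping chain:
        // MVC model -> service-layer DTO -> business object -> EF6 entity.
        public class CustomerModel  { public string Name { get; set; } } // ASP.NET MVC model
        public class CustomerDto    { public string Name { get; set; } } // service-layer DTO
        public class Customer       { public string Name { get; set; } } // business object
        public class CustomerEntity { public string Name { get; set; } } // EF6 entity

        public static class CustomerMappings
        {
            public static CustomerDto ToDto(CustomerModel model)
            {
                return new CustomerDto { Name = model.Name };
            }

            public static Customer ToBusinessObject(CustomerDto dto)
            {
                return new Customer { Name = dto.Name };
            }

            public static CustomerEntity ToEntity(Customer bo)
            {
                return new CustomerEntity { Name = bo.Name };
            }
        }

    Each extra representation adds one more mapping like these, which is exactly the overhead the question is weighing.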

    Read the article

  • Switch or a Dictionary when assigning to a new object

    - by KChaloux
    Recently I've come to prefer mapping 1:1 relationships using dictionaries instead of switch statements. I find them a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this:

        var fooDict = new Dictionary<int, IBigObject>()
        {
            { 0, new Foo() }, // Creates an instance of Foo
            { 1, new Bar() }, // Creates an instance of Bar
            { 2, new Baz() }  // Creates an instance of Baz
        };

        var quux = fooDict[0]; // quux references Foo

    Given that construct, I've wasted CPU cycles and memory creating three objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead:

        var fooDict = new Dictionary<int, Func<IBigObject>>()
        {
            { 0, () => new Foo() }, // Returns a new instance of Foo when invoked
            { 1, () => new Bar() }, // Ditto Bar
            { 2, () => new Baz() }  // Ditto Baz
        };

        var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();'

    Is this getting to the point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch:

        IBigObject quux;
        switch (someInt)
        {
            case 0: quux = new Foo(); break;
            case 1: quux = new Bar(); break;
            case 2: quux = new Baz(); break;
        }

    Which approach is more acceptable?

    - Dictionary: faster lookups and fewer keywords (case and break)
    - Switch: more commonly found in code, and doesn't require a Func<> object for indirection

    Read the article

  • Pair programming and unit testing

    - by TheSilverBullet
    My team follows the Scrum development cycle. We have received feedback that our unit-test coverage is not very good. A team member is suggesting adding an external testing team to assist the core team, but I feel this will backfire badly. I am thinking of suggesting a pair programming approach instead. I have a feeling that this should help the code be more "test-worthy", and soon the team can move to test-driven development! What are the potential problems that might arise out of pair programming?

    Read the article

  • Motivation and use of move constructors in C++

    - by Giorgio
    I have recently been reading about move constructors in C++ (see e.g. here) and I am trying to understand how they work and when I should use them. As far as I understand, a move constructor is used to alleviate the performance problems caused by copying large objects. The Wikipedia page says: "A chronic performance problem with C++03 is the costly and unnecessary deep copies that can happen implicitly when objects are passed by value." I normally address such situations by passing the objects by reference, or by using smart pointers (e.g. boost::shared_ptr) to pass the object around (the smart pointers get copied instead of the object). In which situations are those two techniques not sufficient, and when is a move constructor more convenient?

    Read the article

  • Area of testing

    - by ?????? ??????????
    I'm trying to understand which parts of my code I should test. Below is an example, just to illustrate the idea: depending on some parameters, I put one currency or another into the "Event" and return its serialization from the controller. Which parts of this code should I test? Just the final serialization, only "Event", or every method: getJson, getRows, fillCurrency, setCurrency?

        class Controller
        {
            public function getJson()
            {
                $rows = $this->eventManager->getRows();
                return new JsonResponse($rows);
            }
        }

        class EventManager
        {
            public function getRows()
            {
                // some code here
                if ($parameter == true) {
                    $this->fillCurrency($event, $currency);
                }
            }

            public function fillCurrency($event, $currency)
            {
                // some code here
                if ($parameters == true) {
                    $event->setCurrency($currency);
                }
            }
        }

        class Event
        {
            public function setCurrency($currency)
            {
                $this->updatedAt = new DateTime();
                $this->currency = $currency;
            }
        }

    Read the article

  • Getting into C# and MVC4 coming from Javascript

    - by Stefan V.
    Let me know if this is the wrong place to ask this, but I am trying to get into a backend/server language, coming from a front-end JavaScript background (vanilla, Angular, jQuery and a bit of Node and MongoDB, plus some experience with PHP and MySQL). Why C#? My company's entire server side is MVC4. Occasionally I go through the backend guys' commits and have asked them all sorts of questions. A lot of what I have heard and seen just seems appealing. Anyway, I'd rather start with C# first and gradually adopt ASP.NET MVC. Does anybody have advice, tips, recommended books, etc. for somebody trying to learn C# coming from a JS background?

    Read the article

  • Language Niches and Niche Libraries

    - by Roman A. Taycher
    "Everyone Knows" ... ... that c is widely used for low level programs in large part because operating system/device apis are usually in c. ... that Java is widely used for enterprise applications in large part because of enterprise libraries and ide support. ... that ruby is widely used for webapps thanks in large part because of rails and its library ecosytem But lets go into to details what are the specific niches and subniches. Especially with respect to libraries. Where might you embed lua for application scripting versus python. Where would you use Java vs C#. Which languages do different scientists use? Also which languages have libraries for these subniches? Things like bioperl/scipy/Incanter. Please no flamewars about how nice each language or environment is. This is where they used. Also no complaints about marketing/PHBs. (Manually migrated) I asked this question again after it was closed on stackoverflow.com

    Read the article

  • Text comparison algorithm using java-diff-utils

    - by java_mouse
    One of the features in our project is a comparison algorithm between two versions of a text that reports the percentage change between them. While researching, I came across Google's java-diff-utils project. Has anyone used java-diff-utils for comparing text? Using this utility I can get a list of "deltas", which I assume I can use to compute the percentage difference between the two versions of the text. Is this a correct way of doing it? If you have implemented any text-comparison algorithm in Java, could you give me some pointers?

    Read the article

  • Designing classes with similar goals but widely different decisional cores

    - by Stefano Borini
    I am puzzled about how to model this situation. Suppose you have an algorithm operating in a loop. At every iteration, a procedure P must take place, whose role is to transform input data I into output data O, such that O = P(I). In reality there are different flavours of P, say P1, P2, P3 and so on. The choice of which P to run is user-dependent, but all flavours of P have the same purpose: producing O from I. This called for a base class PBase with a method PBase::apply, with specific reimplementations P1::apply(I), P2::apply(I) and P3::apply(I). The actual P class gets instantiated in a factory method, and the loop stays simple. Now I have a case, P4, which follows the same principle but this time needs additional data from the loop (such as the current iteration number, and the average value of O over the previous iterations). How would you redesign for this case?
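
    For concreteness, the original arrangement might be sketched like this (shown in C# even though the question's notation is C++-style; all names are placeholders):

        using System;

        public class InputData { }
        public class OutputData { }

        // The common contract shared by P1, P2, P3, ...
        public interface IProcedure
        {
            OutputData Apply(InputData input);
        }

        public class P1 : IProcedure
        {
            public OutputData Apply(InputData input)
            {
                return new OutputData(); // flavour-specific work goes here
            }
        }

        public static class ProcedureFactory
        {
            public static IProcedure Create(string userChoice)
            {
                switch (userChoice)
                {
                    case "P1": return new P1();
                    // "P2", "P3", ... analogously
                    default: throw new ArgumentException("unknown procedure");
                }
            }
        }

        // The loop stays simple:
        //     var p = ProcedureFactory.Create(choice);
        //     for (...) { output = p.Apply(input); }

    P4 breaks this contract because Apply would also need the loop's state, which is the crux of the question.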

    Read the article

  • What triggered the popularity of lambda functions in modern programming languages?

    - by Giorgio
    In the last few years, anonymous functions (a.k.a. lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or plans to introduce them in an upcoming revision of its standard. Yet anonymous functions are a very old and well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958; see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon?

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which combines Go/Obj-C interfaces/protocols with the ability to add virtual methods from any unit, not just the one which defines a class. The idea is to allow Ruby-ish open classes, so you can take a minimalist approach to library development and attach small pieces of functionality as they are actually needed by the whole program. Implementing this involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping, and that vtable is passed off so you get performance comparable to C/C++. In this model, methods may be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems very flexible (disregarding the potential for spaghetti code, which can happen with just about any model). By wrapping the system calls for binding methods in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have, I can't quite come up with a use case that justifies the overhead other than "it could make some kinds of code more extensible in future scenarios."

    Read the article

  • Should I start MCPD training now or wait for new exams?

    - by lunchmeat317
    I apologize if this question has been asked before, or if this is the wrong place for it. I'm beginning my study track for the MCPD certification in Web Development. However, Microsoft plans to retire this certification on July 31st, 2013, along with two of the tests necessary to receive it. On MS's site I can't find a newer certification path to take. I imagine Microsoft will release new certification paths and new tests for its new software, but I don't know when that will happen, and I don't really know anything about Microsoft's process, as this is the first Microsoft certification I'll be studying for. The bottom line is this: I don't want to lose six months waiting for a new test that won't expire, but I also don't want to rush into a certification that will be invalid in six months (or have to reset my progress due to new study material). To those with experience in matters like this: what is the best course to take, and how can I make the most of the time I have now rather than waiting for new testing material? Is there any way to find material for the new tests that Microsoft will be rolling out? Thank you for your patience. If this is the wrong place for this question, I'd ask that it be moved to the correct StackExchange site instead of being closed. Thanks for your help!

    Read the article

  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client, a server, and a third box which sees all packets from the server to the client (but not the other way around). When the client requests a file from the server (over HTTP), the third box sees the response, and I am trying to reconstruct the file there. I am using libpcap to capture the TCP segments. Here is what I did:

    - Listen for packets on an interface
    - Group all packets which have the same ACK number
    - Sort each group by SEQ number
    - Extract the data from each packet, concatenate it, and write it to disk

    The problem is that the file generated this way is not exactly the same as the original file. Does everything sound correct here? Some more details:

    - I am using C++
    - The packet data is stored as std::vector<char>
    - I did convert the byte order while reading the ACK and SEQ numbers from the packet, using ntohl
    - I am not sure whether I need to change the byte order of the data as well; I tried reversing the data from each packet before combining them, and even that did not work

    Is there something I am missing?
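
    For what it's worth, a reassembly loop along these lines is the usual approach (sketched here in C# for illustration, although the question's code is C++; the Segment type is hypothetical). Two details it highlights: segments belonging to one direction of one connection are ordered by SEQ, not grouped by ACK, and payload bytes are never byte-swapped; ntohl applies only to header fields:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Segment
        {
            public uint Seq;       // sequence number of the first payload byte (host order)
            public byte[] Payload; // TCP payload only, headers stripped
        }

        static class Reassembler
        {
            // Reassembles one direction of one TCP connection.
            // Simplified: ignores sequence-number wraparound and true gaps.
            public static byte[] Reassemble(IEnumerable<Segment> segments)
            {
                var output = new List<byte>();
                long nextExpected = -1;

                foreach (var seg in segments.OrderBy(x => x.Seq))
                {
                    long start = seg.Seq;
                    if (nextExpected < 0) nextExpected = start;

                    long overlap = nextExpected - start;         // retransmitted bytes already seen
                    if (overlap >= seg.Payload.Length) continue; // full duplicate: drop it
                    if (overlap < 0) overlap = 0;                // a gap: data was lost upstream

                    output.AddRange(seg.Payload.Skip((int)overlap));
                    nextExpected = start + seg.Payload.Length;
                }
                return output.ToArray();
            }
        }

    Retransmissions and overlapping segments are a common reason a reconstructed file differs from the original, since concatenating raw payloads counts the duplicated bytes twice.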

    Read the article

  • C#: replacing out parameters with a struct

    - by Jonathan
    I'm encountering a lot of methods in my project that have a bunch of out parameters, and it's making them cumbersome to call, as I have to declare all the variables before the call. I would like to refactor the code to return a struct instead, and I'm wondering whether this is a good idea. One example from an interface:

        void CalculateFinancialReturnTotals(FinancialReturn fr,
                                            out decimal expenses,
                                            out decimal revenue,
                                            out decimal levyA,
                                            out decimal levyB,
                                            out decimal profit,
                                            out decimal turnover,
                                            out string message);

    If I were to refactor this, I would put all the out parameters in a struct, so that the method signature becomes much simpler:

        [structName] CalculateFinancialReturnTotals(FinancialReturn fr);

    Please advise.
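
    The refactored shape might look like this (a sketch; the struct name is a placeholder, and whether to use a mutable struct, an immutable struct, or a class is a separate design choice):

        // Hypothetical result type bundling the seven out parameters.
        public struct FinancialReturnTotals
        {
            public decimal Expenses;
            public decimal Revenue;
            public decimal LevyA;
            public decimal LevyB;
            public decimal Profit;
            public decimal Turnover;
            public string Message;
        }

        // The interface method then returns the bundle:
        //     FinancialReturnTotals CalculateFinancialReturnTotals(FinancialReturn fr);
        //
        // and the call site shrinks from seven declarations plus a call to:
        //     var totals = calculator.CalculateFinancialReturnTotals(fr);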

    Read the article

  • Is having functionality in the DB a roadblock to scalability?

    - by Estefany Velez
    I may not have given the question the right title, but here it is. We are developing a financial portal for wealth management and expect over 10,000 clients to use the application. The portal calculates various performance analytics based on technical analysis of the stock market. We implemented a lot of the functionality through stored procedures, user-defined functions, triggers, etc. in the database, thinking we could gain a huge performance boost by doing the work directly in the database rather than in C# code; and we actually did get a huge performance boost. When I tried to brag about the achievement to our CTO, he questioned my decision to implement functionality in the database rather than in code. According to him, such applications suffer scalability problems. In his words: "These days things are kept in memory/cache. Clustered data is hard to manage over time. Facebook, Google have nothing in the database. It is the era of thin servers and thick clients. The DB is used only to store plain data, and functionality should be completely decoupled from the database." Can you give me some guidance as to whether what he says is right, and how to go about architecting such an application?

    Read the article

  • What is the use of universal character names in identifiers in C++?

    - by Jan Hudec
    The C++ standard (I noticed it in the new one, but it already existed in C++03) specifies universal character names, written as \uNNNN and \UNNNNNNNN and representing the characters with Unicode code points NNNN/NNNNNNNN. This is useful in string literals, especially since explicit UTF-8, UTF-16 and UCS-4 string literals are also defined. However, universal character names are also allowed in identifiers. What is the motivation behind that? The syntax is obviously totally unreadable, the identifiers may be mangled for the linker, and it's not as if there were any standard function to retrieve symbols by name anyway. So why would anybody actually use an identifier with universal character names in it? Edit: Since this already existed in C++03, an additional question: have you actually ever seen code that used it?

    Read the article

  • How to learn programming in Kindergarten?

    - by Kinder
    Last time, I asked for a peer review of a new language called KinderScript, whose succinct Code Division Multiple Access style looked like white noise that saturated two police reviewers' narrow band. The question lived for only an hour, with 38 views, before the shouting of shut-up-leave-now. OK, that's totally off topic; that is not the question. I'm asking for a peer review of the design of KinderScript [1], within the context of an intriguing question: "How to learn programming in kindergarten?" [1] http://code.google.com/p/ac-me/downloads/detail?name=kinder.pdf&can=2&q= Thanks for any feedback. No police, please. I chose this forum to ask because it has not only many professionals but also many new learners. Both views are appreciated.

    Read the article

  • Community to discuss project ideas

    - by Auxiliary
    Although I can already predict the downvotes, this question has been stuck in my throat for a while now, and I think it has happened to many of us. Sometimes we find a great idea for a project and obviously think it is THE GREATEST idea ever, but then one of the following happens:

    - The project is a small one, so you might actually give it a try and see how it goes.
    - The project is a big one, even a risk, and you just need a good programmers' community where you can discuss your idea, see what people say, and even get some help to make it happen. And there's always the possibility of others stealing your idea, which is really bad.

    So could anyone suggest an online community, place, or even a method for talking about ideas and ways of developing them? And do you think it's a good thing to tell others about your idea?

    Read the article

  • Inspiring the method of teaching. Example: C++ :)

    - by Ashwin
    A year ago I graduated with a degree in computer science and engineering. With C++ as my first choice of programming language, I have been learning it in many ways. At first, five years back, I held many conceptions that were quite abstract to me. It started when I knew almost everything about structs in C and nothing about classes in C++. I had a great time experimenting with them all and learned a lot. I also had a hard time weighing procedural programming against object-oriented programming; deciding when to choose one or the other took a great deal of patience, and I knew I could not underestimate either style. Though procedural programming is often a better choice than simple sequential unstructured programming, when solving problems procedurally we usually divide one problem into several ordered steps, regarded as functions, and then call these functions one by one to get the result. When solving problems with object-oriented principles, we divide one problem into several classes and shape the interaction between them. Evaluating these two at the beginning, as a learner, required a lot of inspiration and thought: instruction to think step by step, related concepts to understand deeply, and an intense interest in contrasting solutions in both styles. If you were ever a mentor: what ideas or methods would you teach students that would inspire them to learn a programming language (and computer science in general)?

    Read the article

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: the Windows portion and the Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag in the Linux project (using the Parameterized Trigger plugin), then copies the artifacts from the Linux project (using the Copy Artifact plugin) into the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that the Parameterized Trigger started, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is painful when a build takes an hour or two to succeed or fail). How should I set this up? How does the build selector work?

    Read the article

  • Declaring an interface in the same file as the implementing class: is it a good practice?

    - by Louis Rhys
    To be interchangeable and testable, services containing logic normally need to have an interface, e.g.

        public class FooService : IFooService
        {
            ...
        }

    Design-wise I agree with this, but one of the things that bothers me about this approach is that for one service you need to declare two things (the class and the interface), and in our team normally two files (one for the class and one for the interface). Another discomfort is the difficulty of navigation, because using "Go to Definition" in the IDE (VS2010) points to the interface (since other classes refer to the interface), not the actual class. I was thinking that writing IFooService in the same file as FooService would reduce the above awkwardness. After all, IFooService and FooService are very closely related. Is this a good practice? Is there a good reason why IFooService must live in its own file?
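
    Concretely, the single-file arrangement being proposed is just (a sketch with a placeholder member):

        // FooService.cs: the interface and its implementation, declared together.
        public interface IFooService
        {
            void DoWork();
        }

        public class FooService : IFooService
        {
            public void DoWork()
            {
                // actual service logic here
            }
        }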

    Read the article

  • Sharing Authentication Across Subdomains using cookies

    - by Jordan Reiter
    I know that, in general, cookies themselves are not considered robust enough to store authentication information. What I am wondering is whether there is an existing design pattern or framework for sharing authentication across subdomains without having to use something more complex like OpenID. Ideally, the process would be that the user visits abc.example.org, logs in, and continues on to xyz.example.org, where they are automatically recognized (ideally the reverse should also be possible: a login via xyz means automatic login at abc). The snag is that abc.example.org and xyz.example.org are on different servers and different web application frameworks, although they can both use a shared database. The web application platforms include PHP, ColdFusion, and Python (Django), though I'm also interested in this from a more general perspective (i.e. language-agnostic).
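
    One common pattern that fits these constraints: set the cookie's Domain attribute to the parent domain (.example.org) so every subdomain receives it, store only an HMAC-signed session identifier in it, and have each application verify the signature with a shared secret and look the session up in the shared database. A minimal sketch follows (in C# purely for illustration, since the question is language-agnostic; the secret, token format, and helper names are all assumptions):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class SharedAuthToken
        {
            // Every subdomain's app must load the same secret from shared config.
            static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-shared-secret");

            // Issued at login; stored in a cookie with Domain=.example.org.
            public static string Make(string sessionId)
            {
                return sessionId + "." + Sign(sessionId);
            }

            // Each app verifies the signature before trusting the session id,
            // then loads the session row from the shared database.
            public static bool TryParse(string token, out string sessionId)
            {
                sessionId = null;
                int dot = token.LastIndexOf('.');
                if (dot < 0) return false;
                string id = token.Substring(0, dot);
                if (Sign(id) != token.Substring(dot + 1)) return false; // real code: constant-time compare
                sessionId = id;
                return true;
            }

            static string Sign(string data)
            {
                using (var hmac = new HMACSHA256(Secret))
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(data)));
            }
        }

    The same logic ports directly to PHP (hash_hmac), ColdFusion, or Django, which is what makes the shared-database-plus-signed-cookie approach workable across heterogeneous servers.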

    Read the article
