Search Results

Search found 14074 results on 563 pages for 'programmers'.

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential and some half-done projects frozen in my GitHub. But due to social pressures I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working on contracts across the world. I've started finding it extremely boring - boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it is happening. It's as if all the excitement has been drained. What can I do about it?

    Long version:

    School - I was in third standard. Only students in 6th grade and above had access to the computer labs. I once peeked into the lab through the little door opening: no hard disks, MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy, I was so excited - I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing to taking input, file I/O, game programming, machine-level support, etc. In 6th standard I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. We got internet soon after, and I got hooked on the QuickBasic programming community. I made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). I taught myself some Perl, found it annoying, and learnt PHP and a bit of SQL instead. I also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard I had created some evil tools using Visual Basic, PHP and MySQL, and eventually landed an unpaid side job at a government facility, building evil tools for them. It was a dream come true for crackers of that time, and so was I - still very excited. Things changed soon: the last two years of school were not so great, as I was balancing preps for college, work at the govt. facility, and studies for school at the same time.

    College - College was the opposite of all I had wished it to be. I had imagined it as a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning: attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers around you, and shit like that. We had to take permission even to introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. We showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, intern at the same govt. facility, run a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at respected colleges, solve a good deal of programming contest problems, etc.
    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college I interned at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me - even a good friend - were just skipping time. I thought that in a few weeks he would put some serious effort into the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, and engage people in talk if they tried to question his progress. I tried a few times to get him on track, but it seems all he wanted was "not to work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, where I come from, HR doesn't give much value to what you have done over the past 4 years. So towards the end of our internship we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly: "Is this how the corporate world treats someone who does fruitful, if not extraordinary, work for them for 6 months?" Yes, I did try to negotiate and debate. The bigcos seem blind due to the departmentalization of responsibilities and the many layers of management. I decided not to stay in touch with any characters of that depressing play.

    Probably the busy time I had at college - ignoring friends, ignoring fun, and squeezing every bit of free time for myself - is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring, yet I wish to keep it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side projects (which may be profitable), though I don't have much in savings to cover food, rent, internet bills, etc. I still have my day job, and even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. How do I get rid of this and reboot myself back to the way I was in my school days - excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • COM Interop support - which is better, C# or VB?

    - by dot
    I keep hearing that C# is "better" than VB... but as far as I can see, aside from syntactic differences, both compile down to the same IL. I've found some good articles by googling that explain what the differences are between the two, so I feel comfortable "defusing" conversations between developers arguing over VB / C#. =) But I did read an article that said VB.NET 2005 had better support for COM interop. I'm wondering if this is still the case? This is of interest to me because we are in the middle of redesigning an old VB6 app that communicates with some older COM components. Does anyone have recent experience with .NET and COM interop? Thanks.
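    (For context, my understanding is that C# 4.0's dynamic keyword brought VB-style late binding to C#, which closed much of that historical gap. A minimal sketch of late-bound COM automation - the Excel ProgID and members here are only illustrative, not our actual components, and this assumes the component is registered on a Windows machine:)

        using System;

        class ComInteropSketch
        {
            static void Main()
            {
                // Look up the COM class by its ProgID and create an instance.
                Type excelType = Type.GetTypeFromProgID("Excel.Application");
                dynamic excel = Activator.CreateInstance(excelType);

                // With dynamic, member calls are resolved at runtime via
                // IDispatch, much like VB's late binding - no interop
                // assembly or cast ceremony required.
                excel.Visible = true;
                excel.Workbooks.Add();

                excel.Quit();
            }
        }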

    Read the article

  • Real performance of node.js

    - by uther.lightbringer
    I've got a question concerning node.js performance. There are quite a lot of "benchmarks" and a lot of fuss about the great performance of node.js. But how does it stand in the real world? Not just processing empty requests at high speed. If someone could try to compare this scenario: a Java (or equivalent) server running an application with complex business logic between receiving the request and sending the response. How would node.js deal with it? If there were a need for a lot of JavaScript processing on the server side, is node.js really so fast that it can execute that JavaScript and stand a chance against more heavyweight competitors?

    Read the article

  • Book about TCP, HTTP, named pipes, shared memory, WCF and other inter-process communication protocols

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast quantity of protocols that exist. But now I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what its pros and cons are in different contexts. Here is a list of nice protocols that I found: shared memory, TCP, named pipes, file mapping, mailslots, MSMQ (Microsoft Message Queuing), and WCF. I know that these protocols are not specific to a language; it would be nice if examples could be in .NET. Thank you very much.
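    (To make one of the listed options concrete, here is a minimal named-pipe round trip in .NET via System.IO.Pipes - a sketch only, with a made-up pipe name, and the server and client inlined into one program for brevity where they would normally be two executables:)

        using System;
        using System.IO;
        using System.IO.Pipes;
        using System.Threading.Tasks;

        class NamedPipeSketch
        {
            static async Task Main()
            {
                // Server side: wait for a client, then read one line.
                var server = Task.Run(async () =>
                {
                    using var pipe = new NamedPipeServerStream("demo-pipe");
                    await pipe.WaitForConnectionAsync();
                    using var reader = new StreamReader(pipe);
                    Console.WriteLine($"server got: {await reader.ReadLineAsync()}");
                });

                // Client side: connect to the local server (".") and write one line.
                using (var pipe = new NamedPipeClientStream(".", "demo-pipe"))
                {
                    await pipe.ConnectAsync();
                    using var writer = new StreamWriter(pipe) { AutoFlush = true };
                    await writer.WriteLineAsync("hello from the other process");
                }

                await server;
            }
        }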

    Read the article

  • flat files vs. RDBMS database, few read/writes, few changes

    - by Bob Lapique
    I have to handle data from long-term (years, decades) climate monitoring stations. The data flow usually starts with raw data (voltages, etc.) plus quality-check information (pressure, temperature, flow rate, etc.), generally recorded at 1 Hz. The data are then assigned a quality flag (by human and/or program), processed (calibration curves applied) and flagged again. So we basically end up with 2 datasets: raw and processed data. New data are typically added once a day (~500 KB/day/instrument). Simultaneous queries are unlikely to ever happen. I wanted to go for an RDBMS (we have a MySQL server) and have some experience in database design, but the IT guy keeps telling me that flat files will do the job just as well. I suspect he is trying to make his life easier when it comes to backing up / upgrading MySQL. There are not many links between the data and they don't change much, but the quality flags will change. An RDBMS makes it much easier to compare data from different instruments over a many-day scale than daily text files do. Well, what would you advise? Thanks.

    Read the article

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am a newbie to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++, probably with most of it meta-programmed, i.e. part of the C code being generated. I am still in the thinking/designing phase, and I might give up. For reference, I am the main architect and implementor of GCC MELT, a domain-specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT-to-C/C++ translator is written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.) I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled free software project like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilations (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL bases). So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understand that it should provide some web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start (e.g. a dozen) such services. But I don't have a clear picture of what OpenStack brings. So far, I have noticed its ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc. Do you have any references or examples of C/C++ programming for the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?

    Read the article

  • Nested entities in Google App Engine. Am I doing it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but I have some nagging doubts. Say I have the model:

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question: am I doing this right? Thanks.

    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

        user = User('[email protected]', 'password', 'First', 'Last')
        user.put()
        stamp = UserLogin(parent=user)
        stamp.put()

    I understand that Model was given the wrong arguments, BUT why is this in the docs?

    EDIT 2: OK, I used keyword arguments instead, but then it raised this:

        Expected Key instance, got User(key=Key('User', 5418393301680128),
            created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928),
            email=u'[email protected]', first_name=u'First', last_name=u'Last',
            password=u'password')

    That is clear enough to understand, but I don't get why the docs are misleading. They implicitly propose to use:

        # Set Employee as Address entity's parent directly...
        address = Address(parent=employee)

    But Model expects a key. And what's worse, parent=user.key() complains that key() isn't callable, while I found that user.key works.

    Read the article

  • Push-Based Events in a Service-Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a service-oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was to create an EventService to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using ZooKeeper-based service discovery. So I'd probably use JMS inside EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners).

    When I started considering this, I began looking into the differences between queues and topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message, even if they were down at the time the event was pushed, or hadn't made a subscription yet because they hadn't completed startup (during deployment, for example); messages should be queued until the service is available.

    However, I don't want EventService to be responsible for handling all of the events; I don't think the code to react to events belongs inside it. Each of the services should do what it needs with a given event. This would mean each service needs a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather have a single service that's responsible for determining how to distribute events).

    What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file - irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event becomes 3 events in JMS). Then another thread in EventService (which is replicated, running on multiple hosts) pulls from the queue, attempts to make the service call to the "listener", and returns the message to the queue (if the service is down) or discards the message (if the listener completed successfully).

    tl;dr: If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
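    (To picture the "generic Event object" option concretely, here is one possible shape for the shared envelope - a sketch only, every field name is my own invention, with the payload carried as opaque serialized data so EventService never needs the concrete schema of any service's messages:)

        using System;
        using System.Collections.Generic;

        public sealed class Event
        {
            public Guid Id { get; } = Guid.NewGuid();
            public string Type { get; set; }          // e.g. "user.created"
            public string SourceService { get; set; } // who published it
            public DateTime OccurredAtUtc { get; set; } = DateTime.UtcNow;

            // Serialized payload plus free-form headers, so the event bus
            // stays ignorant of each service's message schemas.
            public string PayloadJson { get; set; }
            public Dictionary<string, string> Headers { get; } = new();

            // Delivery bookkeeping for the queue-per-listener replication
            // scheme described above.
            public string ListenerEndpoint { get; set; }
            public int DeliveryAttempts { get; set; }
        }

    With an envelope like this, EventService only routes and retries; each listener deserializes PayloadJson according to the Type it subscribed to.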

    Read the article

  • How to convert a Bazaar repository to a Git repository?

    - by Naruto Uzumaki
    We have a large Bazaar repository that we want to convert to a Git repository. The Bazaar repository contains a folder for each of the interns. Any documentation/code prepared by the interns is committed in their directory, so there is a huge number of commits. What steps should be performed to safely convert the Bazaar repository to a Git repository so that we do not lose any commit information? We first need to create a backup of the existing Bazaar repository and then convert it.

    Edit: I followed this link: http://librelist.com/browser//cville/2010/2/9/migrate-repository-bzr-to-git/ It works fine on my system with Ubuntu, but when I try to run it on the actual server it gives me an EOF error and crashes:

        Starting export of 1036 revisions ...
        fatal: EOF in data (1825 bytes remaining)
        fast-import: dumping crash report to .git/fast_import_crash_11804

    Edit 2: I also tried it on a new CentOS system and received the following error:

        fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree.
        Use '--' to separate paths from revisions

    Read the article

  • Are XML Comments Necessary Documentation?

    - by Bob Horn
    I used to be a fan of requiring XML comments for documentation. I've since changed my mind, for two main reasons:

    1. Like good code, methods should be self-explanatory.
    2. In practice, most XML comments are useless noise that provide no additional value.

    Many times we simply use GhostDoc to generate generic comments, and this is what I mean by useless noise:

        /// <summary>
        /// Gets or sets the unit of measure.
        /// </summary>
        /// <value>
        /// The unit of measure.
        /// </value>
        public string UnitOfMeasure { get; set; }

    To me, that's obvious. Having said that, if there were special instructions to include, then we should absolutely use XML comments. I like this excerpt from this article: "Sometimes, you will need to write comments. But, it should be the exception not the rule. Comments should only be used when they are expressing something that cannot be expressed in code. If you want to write elegant code, strive to eliminate comments and instead write self-documenting code."

    Am I wrong to think we should only be using XML comments when the code isn't enough to explain itself on its own? I believe this is a good example where XML comments make pretty code look ugly. It takes a class like this...

        public class RawMaterialLabel : EntityBase
        {
            public long Id { get; set; }
            public string ManufacturerId { get; set; }
            public string PartNumber { get; set; }
            public string Quantity { get; set; }
            public string UnitOfMeasure { get; set; }
            public string LotNumber { get; set; }
            public string SublotNumber { get; set; }
            public int LabelSerialNumber { get; set; }
            public string PurchaseOrderNumber { get; set; }
            public string PurchaseOrderLineNumber { get; set; }
            public DateTime ManufacturingDate { get; set; }
            public string LastModifiedUser { get; set; }
            public DateTime LastModifiedTime { get; set; }
            public Binary VersionNumber { get; set; }
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }

    ... and turns it into this:

        /// <summary>
        /// Container for properties of a raw material label
        /// </summary>
        public class RawMaterialLabel : EntityBase
        {
            /// <summary>
            /// Gets or sets the id.
            /// </summary>
            /// <value>
            /// The id.
            /// </value>
            public long Id { get; set; }

            /// <summary>
            /// Gets or sets the manufacturer id.
            /// </summary>
            /// <value>
            /// The manufacturer id.
            /// </value>
            public string ManufacturerId { get; set; }

            /// <summary>
            /// Gets or sets the part number.
            /// </summary>
            /// <value>
            /// The part number.
            /// </value>
            public string PartNumber { get; set; }

            /// <summary>
            /// Gets or sets the quantity.
            /// </summary>
            /// <value>
            /// The quantity.
            /// </value>
            public string Quantity { get; set; }

            /// <summary>
            /// Gets or sets the unit of measure.
            /// </summary>
            /// <value>
            /// The unit of measure.
            /// </value>
            public string UnitOfMeasure { get; set; }

            /// <summary>
            /// Gets or sets the lot number.
            /// </summary>
            /// <value>
            /// The lot number.
            /// </value>
            public string LotNumber { get; set; }

            /// <summary>
            /// Gets or sets the sublot number.
            /// </summary>
            /// <value>
            /// The sublot number.
            /// </value>
            public string SublotNumber { get; set; }

            /// <summary>
            /// Gets or sets the label serial number.
            /// </summary>
            /// <value>
            /// The label serial number.
            /// </value>
            public int LabelSerialNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order number.
            /// </summary>
            /// <value>
            /// The purchase order number.
            /// </value>
            public string PurchaseOrderNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order line number.
            /// </summary>
            /// <value>
            /// The purchase order line number.
            /// </value>
            public string PurchaseOrderLineNumber { get; set; }

            /// <summary>
            /// Gets or sets the manufacturing date.
            /// </summary>
            /// <value>
            /// The manufacturing date.
            /// </value>
            public DateTime ManufacturingDate { get; set; }

            /// <summary>
            /// Gets or sets the last modified user.
            /// </summary>
            /// <value>
            /// The last modified user.
            /// </value>
            public string LastModifiedUser { get; set; }

            /// <summary>
            /// Gets or sets the last modified time.
            /// </summary>
            /// <value>
            /// The last modified time.
            /// </value>
            public DateTime LastModifiedTime { get; set; }

            /// <summary>
            /// Gets or sets the version number.
            /// </summary>
            /// <value>
            /// The version number.
            /// </value>
            public Binary VersionNumber { get; set; }

            /// <summary>
            /// Gets the lot equipment scans.
            /// </summary>
            /// <value>
            /// The lot equipment scans.
            /// </value>
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }

    Read the article

  • iOS Variable vs Property

    - by William Smith
    I've just started diving into Objective-C and iOS development and was wondering when, and where, I should be declaring variables/properties. The main piece of code I need explained is below: why and when should I declare variables inside the @interface block, and why is there the same variable with an underscore prefix alongside a property of the same name? And then in the implementation they do @synthesize tableView = _tableView (I understand what @synthesize does). Thanks :-)

        @interface ViewController : UIViewController <UITableViewDataSource, UITableViewDelegate>
        {
            UITableView *_tableView;
            UIActivityIndicatorView *_activityIndicatorView;
            NSArray *_movies;
        }

        @property (nonatomic, retain) UITableView *tableView;
        @property (nonatomic, retain) UIActivityIndicatorView *activityIndicatorView;
        @property (nonatomic, retain) NSArray *movies;

    Read the article

  • Need help eliminating dead code paths and variables from C source code

    - by Anjum Kaiser
    I have legacy C code on my hands, and I have been given the task of filtering dead/unused symbols and paths from it. Over time there were many insertions and deletions, leaving lots of unused symbols. I have identified many dead variables which were only written to once or twice but never read from. Blackbox, whitebox, and regression testing all proved that the dead-code removal did not affect any procedures (we have a comprehensive test suite). But this removal was done on only a small part of the code, and now I am looking for some way to automate the work. We rely on GCC to do it. P.S. I'm interested in removing stuff like: variables which are read just for the sake of reading from them, and variables which are spread across multiple source files and only ever written to. For example:

        /* file1.c */
        int i;

        /* file2.c */
        extern int i;
        ...
        i = x;

    Read the article

  • What do you need to know to get a job as a web developer

    - by Alex Foster
    What do you need to know to, at the very least, get your foot in the door? We're assuming someone who doesn't have a college degree (yet) but will eventually get one. My guess is HTML, CSS, JavaScript, PHP, Photoshop, Dreamweaver, and SQL, plus being familiar with using a web host to keep sites live, like knowing how to use cPanel. It's probably a very inaccurate and narrow guess, but that's what I think right now. I don't know exactly.

    Read the article

  • Masters for Artificial Intelligence

    - by Frenchie
    I am very interested in going to graduate school to study artificial intelligence. I am currently an undergrad majoring in Computer Science; I will be finished in a year and a half, so I figured now is a good time to start looking. I do not have a competitive GPA (3.3), so I am not looking at the top 25 graduate schools in AI such as those listed in this ranking. So far I am considering the University of Georgia; they have a masters in AI separate from their masters in CS. If you know of any other schools that have a decent masters in CS with an emphasis on AI, or a masters in AI by itself, please let me know. Thank you. USA/Canada schools only.

    Read the article

  • What is this algorithm for converting strings into numbers called?

    - by CodexArcanum
    I've been doing some work in Parsec recently, and for my toy language I wanted multi-base fractional numbers to be expressible. After digging around in Parsec's source a bit, I found its implementation of a floating-point number parser and copied it to make the needed modifications. So I understand what this code does, and vaguely why (I haven't worked out the math fully yet, but I think I get the gist). But where did it come from? This seems like a pretty clever way to turn strings into floats and ints; is there a name for this algorithm? Or is it just something basic that's a hole in my knowledge? Did the folks behind Parsec devise it? Here's the code, first for integers:

        number' :: Integer -> Parser Integer
        number' base = do
            { digits <- many1 (oneOf (sigilRange base))
            ; let n = foldl (\x d -> base * x + toInteger (convertDigit base d)) 0 digits
            ; seq n (return n)
            }

    So the basic idea here is that digits contains the string representing the whole-number part, i.e. "192". The foldl converts each digit individually into a number, then adds it to the running total multiplied by the base, which means that by the end each digit has been multiplied by the correct factor (in aggregate) to position it. The fractional part is even more interesting:

        fraction' :: Integer -> Parser Double
        fraction' base = do
            { digits <- many1 (oneOf (sigilRange base))
            ; let base' = fromIntegral base
            ; let f = foldr (\d x -> (x + fromIntegral (convertDigit base d)) / base') 0.0 digits
            ; seq f (return f)
            }

    Same general idea, but now a foldr and using repeated division. I don't quite understand why you add first and then divide for the fraction, but multiply first and then add for the whole number. I know it works; I just haven't sorted out why. Anyway, I feel dumb not working it out myself; it's very simple and clever looking at it. Is there a name for this algorithm? Maybe the imperative version using a loop would be more familiar?
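    (For comparison, here is a rough imperative rendering of the same accumulation - which is essentially Horner's method for evaluating positional notation. The sketch is in C# rather than Haskell, and DigitValue is my own stand-in for convertDigit:)

        using System;

        static class PositionalParse
        {
            static int DigitValue(char c) =>
                c <= '9' ? c - '0' : char.ToLowerInvariant(c) - 'a' + 10;

            public static (long whole, double frac) ParseParts(
                string wholeDigits, string fracDigits, int radix)
            {
                // Whole part (the foldl): multiply the running total by the
                // base, then add the digit, so each earlier digit ends up
                // scaled by a higher power of the base.
                long n = 0;
                foreach (char d in wholeDigits)
                    n = radix * n + DigitValue(d);

                // Fractional part (the foldr): walk right to left, adding the
                // digit and then dividing, so the rightmost digit is divided
                // the most times and lands in the smallest place value.
                double f = 0.0;
                for (int i = fracDigits.Length - 1; i >= 0; i--)
                    f = (f + DigitValue(fracDigits[i])) / radix;

                return (n, f);
            }

            static void Main() =>
                Console.WriteLine(ParseParts("192", "25", 10)); // (192, 0.25)
        }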

    Read the article

  • Using an open source non-free license

    - by wagglepoons
    Are there any projects/products out there that use an open source license that basically says "free for small companies" and "costs money for larger companies", in addition to "make modifications available"? (And are there any standard licenses with such wording?) If I were to release a project under such a license, would it automatically be shunned by every developer on the face of the earth, or, assuming it is actually a useful project, does it have a fair chance of getting contributions from Joe Programmer? The second part of this question can easily become subjective, but any well-argued point of view will be highly appreciated. For example, do dual-licensed projects made by commercial entities have success with the open source communities?

    Read the article

  • KISS principle applied to programming language design?

    - by Giorgio
    KISS ("keep it simple stupid", see e.g. here) is an important principle in software development, even though it apparently originated in engineering. Citing from the wikipedia article: The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the 'stupid' refers to the relationship between the way things break and the sophistication available to fix them. If I wanted to apply this to the field of software development I would replace "jet aircraft" with "piece of software", "average mechanic" with "average developer" and "under combat conditions" with "under the expected software development / maintenance conditions" (deadlines, time constraints, meetings / interruptions, available tools, and so on). So it is a commonly accepted idea that one should try to keep a piece of software simple stupid so that it easy to work on it later. But can the KISS principle be applied also to programming language design? Do you know of any programming languages that have been designed specifically with this principle in mind, i.e. to "allow an average programmer under average working conditions to write and maintain as much code as possible with the least cognitive effort"? If you cite any specific language it would be great if you could add a link to some document in which this intent is clearly expressed by the language designers. In any case, I would be interested to learn about the designers' (documented) intentions rather than your personal opinion about a particular programming language.

    Read the article

  • Business case for decentralized version control systems

    - by Keyo
    I searched and couldn't find any business reasons why Git/Mercurial/Bazaar systems are better than centralized systems (Subversion, Perforce). If you were trying to sell a DVCS to a non-technical person, what arguments would you provide for the DVCS increasing profit? I will shortly be pitching Git to my manager; it will take some time to convert our Subversion repositories, and some expense in buying SmartGit licences.

    Read the article

  • Write own messaging system vs. utilize existing ones

    - by A.Rashad
    We are trying to build our own startup, with a middleware application to glue small applications to enterprise legacy systems. For such middleware to function properly, we will need some sort of messaging system to make the different components talk to each other in a reliable way. The alternatives are: use an existing messaging system, such as 0MQ, JBoss, WebSphere MQ, etc., or build our own messaging system the way we see the problem. I am biased towards the latter option, for the following reasons: to have more control over our final product; to avoid any licensing problems later on; to learn about messaging while writing the code; and to invent something new rather than reuse an existing system that might cost us lots of $$$. What would you do in my shoes?

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems for handling online deposits to stored-value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Stage 2 involves using a proprietary protocol to communicate with a legacy system to conduct the actual deposit. Stage 2 requires that each transaction have a unique sequence number and that the requests' seqnums arrive in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency in the return from the off-site payment processor is beyond our control, there's always the chance of a race condition in the order of requests passed back to the proprietary system - and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single-process, single-threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We handle timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
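    (To make the shape of one possible fix concrete: assign the sequence number only inside a single dispatcher loop, so numbers can never be observed out of order, while each waiting web request still gets its result back. The sketch is C# rather than Ruby, and every name in it - SequencedDispatcher, CallLegacySystem - is my own invention, not any real API:)

        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class SequencedDispatcher
        {
            private readonly BlockingCollection<(string payload, TaskCompletionSource<bool> done)>
                _queue = new();
            private long _seq;

            public SequencedDispatcher() =>
                Task.Factory.StartNew(Loop, TaskCreationOptions.LongRunning);

            // Web request handlers enqueue and await the result; ordering is
            // decided inside the loop, not at enqueue time.
            // Usage: bool ok = await dispatcher.SendAsync("deposit ...");
            public Task<bool> SendAsync(string payload)
            {
                var tcs = new TaskCompletionSource<bool>();
                _queue.Add((payload, tcs));
                return tcs.Task;
            }

            private void Loop()
            {
                foreach (var (payload, done) in _queue.GetConsumingEnumerable())
                {
                    long seq = ++_seq; // taken at dispatch time, strictly increasing
                    bool ok = CallLegacySystem(seq, payload);
                    done.SetResult(ok); // unblocks the waiting web request
                }
            }

            // Stub standing in for the proprietary-protocol call.
            private bool CallLegacySystem(long seq, string payload) => true;
        }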

    Read the article

  • May I remove ads from feed in my news reader app?

    - by Mahdi Ghiasi
    I'm creating a news reader app for tablets and PCs. My app fetches data from news sources via the RSS feeds of websites (on the server side). But some of these sites show advertising banners at the end of each article. Should I remove those banners from the feed? Am I legally/ethically allowed to do this? And what if I want to put some other ads in my application, right at the end of each article - I mean, if I want to have my own advertising service? Update: And what if I use the feed only for content titles and summaries, but use something else, like the Readability API, to show the full article, and then put my own ads below the content? (Readability takes the HTML page and gives you a clean page without any ads and such.)

    Read the article

  • Reasons for Pair Programming

    - by Jeff Langemeier
    I've worked in a few shops where management has passed the idea of pair programming to me or another manager/developer, and I can't get behind it at all. From a developer standpoint I can't find a reason why moving to this coding style would be beneficial, nor, as the manager of a small team, have I seen any benefit. I understand that it helps with basic syntax errors and can be helpful if you need to hash something out, but managers who are out of the programming loop seem to keep seeing it as a way of keeping their developers from going to Facebook or Reddit rather than as a design tool. As someone close to the development floor who apparently can't quite get it from a book tossed my way or a wiki page on the subject: from a high-level management position, what are the benefits of pair programming when dealing with Scrum or Agile environments?

    Read the article

  • How much time does it take for a new language like D to become popular? [closed]

    - by Adrián Pérez
    I was reading about new languages to learn and I found very good comments about D, like "it's the new C" or "what C++ should have been". Knowing that many people say wonders about the language, I'm wondering how much time it usually takes for a language to become popular - that is, having libraries ported to or written natively for the language, and being used in serious software development. I have read about the histories of Java and Python to try to figure it out, but maybe they are too different from D for their timelines to apply.

    Read the article

  • Asynchronous update design/interaction patterns

    - by Andy Waite
    These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than waiting for the round trip to the server, the app can hide the one you deleted, giving immediate feedback; the actual deletion on the server happens in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what happens when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?
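    (One way to sketch the Command-pattern angle - often called an optimistic update with rollback: apply the UI change immediately, keep enough state to undo it, and roll back if the background call ultimately fails. All names below are my own invention:)

        using System.Collections.Generic;
        using System.Threading.Tasks;

        interface IOptimisticCommand
        {
            void ApplyLocally();   // immediate UI feedback
            Task CommitAsync();    // the real server round trip
            void Rollback();       // restore pre-command UI state on failure
        }

        class DeleteWidgetCommand : IOptimisticCommand
        {
            private readonly IList<string> _widgets;
            private readonly string _widget;
            private int _index;

            public DeleteWidgetCommand(IList<string> widgets, string widget)
                => (_widgets, _widget) = (widgets, widget);

            public void ApplyLocally()
            {
                _index = _widgets.IndexOf(_widget);
                _widgets.RemoveAt(_index); // hide it right away
            }

            // Stub standing in for the real HTTP DELETE call.
            public Task CommitAsync() => Task.CompletedTask;

            // Reinsert at the old position; the caller would also surface
            // an error message to the user here.
            public void Rollback() => _widgets.Insert(_index, _widget);
        }

    The runner then reads: ApplyLocally(); try { await CommitAsync(); } catch { Rollback(); } - which also gives a natural place to aggregate multiple simultaneous failures.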

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing a client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language- and framework-agnostic. This is the first time I'm doing a real MVC application, and I'm having trouble deciding exactly what each component means. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such.) I have a server.js that ties the whole application together. It does the initialization of all other components (including the database connection, and what I think are the "models" and the "views"), receives HTTP requests and decides which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing in code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file:

        var connection = mysql.createConnection({
            host     : 'localhost',
            user     : 'root',
            password : 'sqlrevenge',
            database : 'blog'
        });

        //...

        // Handles a GET request for login forms
        app.get("/login", function (req, res) {
            if (process.env.NODE_ENV == 'DEVELOPMENT') {
                mu.clearCache();
            }
            session.session_from_request(connection, req, function (err, session) {
                if (err) {
                    console.log('index.js session error', err);
                    session = null;
                }
                // I named my view functions "html" in case I want to add other
                // output types (such as a JSON API) - or should I opt for
                // completely separate views then?
                login_view.html(res, user_model, post_model, session, mu);
            });
        });

    I have another file named session.js. It receives a cookies object and reads the stored data to decide whether it's a valid user session or not. It also includes a function named login that changes the value of the cookies. First I thought it would be part of the controller, since it kind of deals with user input and supplies data to the models. Then I thought that maybe it was a model, since it deals with the application data/database and the data it supplies is used by views. Now I'm even wondering if it could be considered a view, since it outputs data (cookies are part of HTTP headers, which are output).

    Read the article
