Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Why use sealed instead of static on a class?

    - by sq33G
    Our system has several utility classes. Some people on our team use (A) a class with all-static methods and a private constructor. Others (mostly the juniors) use (B) a class with all-static methods. On code analysis, (A) and (B) raise warning CA1052, which recommends marking the class as sealed. The MSDN documentation includes the following advice: If you are targeting .NET Framework 2.0 or earlier, a better approach is to mark the type as static. Why does this make any sense? I would have thought the opposite; AFAIK, prior to 2.0 there was no way to mark a class as static.
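
    For context, here is a minimal sketch of the two shapes being compared; the class and method names are made up for illustration, not taken from the question:

        // (B)-style utility written as a static class (available since C# 2.0):
        // no instances and no inheritance, both enforced by the compiler.
        public static class MathUtil
        {
            public static int Square(int x) { return x * x; }
        }

        // (A)-style utility: sealed with a private constructor.
        // This shape works on any framework version, including pre-2.0.
        public sealed class LegacyMathUtil
        {
            private LegacyMathUtil() { } // prevents instantiation
            public static int Square(int x) { return x * x; }
        }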

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential and some half-done projects frozen in my GitHub. But due to social pressure, I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working on contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it is happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only 6th-grade students had access to the computer labs. I once peeked into the lab through the little door opening. No hard disks, MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing to taking input, file I/O, game programming, machine-level support, etc. When I was in 6th standard, I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon, and got hooked on the QuickBasic programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). Taught myself some Perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL and eventually landed an unpaid side job at a government facility, building evil tools for them. It was a dream come true for crackers of that time, and so it was for me; I was still very excited. Things changed soon: the last two years of school were not so great, as I was balancing prep for college, work at the govt. facility and studies for school at the same time. College - College was the opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers surrounding you, and shit like that. We had to take permission even to introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. Showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an internship at the same govt. facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at the most respected colleges, solve a good deal of programming contest problems, etc.
At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer waste of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just killing time. I thought that maybe, in a few weeks, he would put in some serious effort on the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, engage people in talk if they tried to question his progress, etc. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, in the place where I come from, HR doesn't give much value to what you have done over the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly - "Is this how the corporate world treats someone who has done fruitful, if not extraordinary, work for them for the past 6 months?". Yes, I did try to negotiate and debate. The bigcos seem blind due to the departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring, but at the same time I wish to keep it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side projects (which may be profitable), though I haven't got much in savings to support myself with food, office, internet bills, etc. I still have my day job, but I don't find it very interesting; even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. I wonder how to get rid of this and reboot myself back to the way I was in my school days - excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Real performance of node.js

    - by uther.lightbringer
    I've got a question concerning node.js performance. There are quite a lot of "benchmarks" and a lot of fuss about the great performance of node.js. But how does it stand in the real world? Not just processing empty requests at high speed. Could someone try to compare this scenario: a Java (or equivalent) server running an application with complex business logic between receiving the request and sending the response. How would node.js deal with it? If there was a need for a lot of JavaScript processing on the server side, is node.js really so fast that it can execute the JavaScript and stand a chance against more heavyweight competitors?

    Read the article

  • Reasons for Pair Programming

    - by Jeff Langemeier
    I've worked in a few shops where management has pitched the idea of pair programming either to me or to another manager/developer, and I can't get behind it at all. From a developer standpoint I can't find a reason why moving to this coding style would be beneficial, nor as a manager of a small team have I seen any benefit. I understand that it helps with basic syntax errors and can be helpful if you need to hash something out, but managers who are out of the programming loop seem to keep seeing it as a way of keeping their designers from going to Facebook or Reddit rather than as a design tool. As someone close to the development floor who apparently can't quite understand it from a book tossed my way or a wiki page on the subject... from a high-level management position, what are the benefits of pair programming when dealing with Scrum or Agile environments?

    Read the article

  • Nested entities in Google App Engine. Am I doing it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but some doubts are drilling my head. Say I have the model:

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question: am I doing it right? Thanks. EDIT 2 OK, I used the typed arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). It's clear enough to understand, but I don't get why the docs are misleading. They implicitly propose to use: # Set Employee as Address entity's parent directly... address = Address(parent=employee) But the Model constructor expects a key. And what's worse, parent=user.key() complains that key() isn't callable, while I found out that user.key works. EDIT 1 After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used: user = User('[email protected]', 'password', 'First', 'Last') user.put() stamp = UserLogin(parent=user) stamp.put() I understand that the Model was given the wrong arguments, BUT why is it in the docs?

    Read the article

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am very much a newbie to cloud programming) I am interested in designing (and developing) a (free software) program in C or C++ (probably, most of it being meta-programmed, i.e. part of the C code being generated). I am still in the thinking / designing phase. And I might perhaps give up. For reference, I am the main architect and implementor of GCC MELT, a domain-specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack). I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least a massive GCC-compiled piece of free software like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL databases). So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understand that it should provide some web service, probably through a RESTful interface, so it should use an HTTP server library like onion. And that OpenStack is able to start (e.g. a dozen of) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc. Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?

    Read the article

  • Share Multiple Classes as one DLL or a Lib with Multiple Projects

    - by JNL
    Currently I have some shared class files (.cpp and .h) which I include in around 20 projects, and I have to include them in each project separately. So if I get some new business requirements and change some of the shared (.cpp or .h) files, I have to update them in all 20 projects, which is kind of tedious. Is there a way I can create a shared DLL or library and include it in all of my projects? Then, if I have to change something, I just change it once and Add Reference to include the DLL or library which contains all the shared (.cpp, .h) files. Any help/recommendations regarding this will be highly appreciated. I am using VS2012 for VC++.

    Read the article

  • Push-Based Events in a Service-Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a service-oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was, "create an EventService" to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So, I'd probably use JMS inside EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners). When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message (even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example) - messages should be queued until the service is available). However, I don't want EventService to be responsible for handling all of the events. I don't think it should have the code to react to events inside of it. Each of the services should do what it needs with a given event. This would indicate that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). However, it also couples all of the services to JMS (when I'd rather there be a single service that's responsible for determining how to distribute events). What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event would become 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would be pulling from the queue, attempting to make the service call to the "listener", and returning the message to the queue (if the service is down) or discarding the message (if the listener completed successfully). tl;dr If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
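
    For the tl;dr question, one hedged illustration (not tied to Thrift or JMS, shown in C# purely as a sketch) of what a generic event envelope shared by all services might look like; every name below is an assumption, not part of the original design:

        using System;
        using System.Collections.Generic;

        // Hypothetical generic event envelope; listeners decide what to do with it.
        public sealed class Event
        {
            public string EventType { get; set; }        // e.g. "order.created"
            public string SourceService { get; set; }    // which service raised it
            public DateTime OccurredAtUtc { get; set; }
            public IDictionary<string, string> Payload { get; set; }  // serialized event data
        }

    With something like this, the EventService only routes and persists Event instances, and each listener interprets the Payload itself.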

    Read the article

  • How to convert a Bazaar repository to a Git repository?

    - by Naruto Uzumaki
    We have a large Bazaar repository and we want to convert it to a Git repository. The Bazaar repository contains the folders of each of the interns. Any documentation/code prepared by the interns is committed in their directory, so there is a huge number of commits. What steps should be performed to safely convert the Bazaar repository to a Git repository so that we do not lose any commit information? We first need to create a backup of the existing Bazaar repository and then convert it. Edit: I followed this link: http://librelist.com/browser//cville/2010/2/9/migrate-repository-bzr-to-git/ It works fine on my system with Ubuntu, but when I try to run it on the actual server it gives me an EOF error and crashes: Starting export of 1036 revisions ... fatal: EOF in data (1825 bytes remaining) fast-import: dumping crash report to .git/fast_import_crash_11804 Edit 2: I also tried it on a new CentOS system and received the following error: fatal: ambiguous argument 'HEAD': unknown revision or path not in the working tree. Use '--' to separate paths from revisions

    Read the article

  • COM Interop support - which is better, C# or VB?

    - by dot
    I keep hearing that C# is "better" than VB... but as far as I can see, aside from syntactical differences, both compile down to the same IL. I've found some good articles by googling that explain what the differences are between the two, so I feel comfortable defusing conversations between developers arguing over VB / C#. =) But I did read an article that said VB.NET 2005 had better support for COM interop. I'm wondering if this is still the case? This is of interest to me because we are in the middle of redesigning an old VB6 app that communicates with some older COM components. Does anyone have recent experience with .NET and COM interop? Thanks.

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing a client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language and framework agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what each component means. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such.) I have a server.js that ties the whole application together. It does initialization of all other components (including the database connection, and what I think are the "models" and the "views"), receives HTTP requests and decides which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file:

        var connection = mysql.createConnection({
            host     : 'localhost',
            user     : 'root',
            password : 'sqlrevenge',
            database : 'blog'
        });
        //...
        app.get("/login", function (req, res) { //Function handles a GET request for login forms
            if (process.env.NODE_ENV == 'DEVELOPMENT') {
                mu.clearCache();
            }
            session.session_from_request(connection, req, function (err, session) {
                if (err) {
                    console.log('index.js session error', err);
                    session = null;
                }
                login_view.html(res, user_model, post_model, session, mu);
                //I named my view functions "html" for the case I might want to add other
                //output types (such as a JSON API), or should I opt for completely separate views then?
            });
        });

    I have another file named session.js. It receives a cookies object and reads the stored data to decide if it's a valid user session or not. It also includes a function named login that changes the value of cookies. First, I thought it would be part of the controller, since it kind of dealt with user input and supplied data to the models. Then, I thought that maybe it was a model, since it dealt with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a View, since it outputs data (cookies are part of HTTP headers, which are output).

    Read the article

  • An alternative to WebForms for developing in ASP.NET - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now but often get annoyed with all the overhead (like ViewState and all the JavaScript it generates), and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well and you still have the ability to use code-behind pages and many ASP.NET controls such as repeaters. Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls. But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy, you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls, and then you can access the control in your code-behind like this: ((HtmlInputText)txtName).Value = "blah"; Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    //Default values
                    string name = string.Empty;
                    string state = string.Empty;
                    if (Request.RequestType == "POST")
                    {
                        //If form submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        //Server side form validation would go here
                        //and actions to process form and redirect
                    }
                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and do you have any thoughts on its usefulness? I have further details and tests on my blog.

    Read the article

  • How to get feedback on a mobile application

    - by Jason Crosby
    I am relatively new to programming. I have been programming with Java and Android for about 2 years now and just recently released my first app to the Google Play app store. I have passed the word on to everyone I know and posted a few times on Facebook about it, but I am not really seeing anyone install it. I love to code; I'm not looking to have the next big-time app, but it would be nice to get some installs and feedback/ratings so I can get an idea of how well it's doing and whether there are any fixes or improvements I can make. I thought about doing an AdMob campaign a couple of times here and there at about $10 to $20 per day, but I'm not sure if that will generate any kind of worthwhile feedback. What other things could I be doing in order to get some feedback on my application? Thanks in advance for your help and suggestions.

    Read the article

  • Spreadsheet or writing an application?

    - by Lenny222
    When would you keep simple to medium-complexity personal calculations in a spreadsheet (Excel, etc.), and when would you write a small program or script for them? For example, when you want to calculate what size of mortgage you can afford to buy a house. I could create a spreadsheet and have a nice tabular representation. On the other hand, if I wrote a small script in a nice language (in my case Haskell), I'd have the security of a nice type system, preventing typos etc. What are the pros/cons in your opinion?

    Read the article

  • How much time does it take for a new language like D to become popular? [closed]

    - by Adrián Pérez
    I was reading about new languages to learn and I found very good comments about D, like that it's the new C or what C++ should have been. Knowing that many people say wonders about the language, I'm wondering how much time it usually takes for a language to become popular. That is, having libraries ported or written natively for the language and being used in serious software development. I have read about the history of Java and Python to figure it out, but maybe their cases are too complex to say that their adoption took the same time as it will take for D.

    Read the article

  • Asynchronous update design/interaction patterns

    - by Andy Waite
    These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them, then rather than waiting for the round trip to the server, the app can hide the one you deleted, giving immediate feedback. The actual deletion on the server happens in the background. This can be seen in web apps, desktop apps, iOS apps, etc. But what about when the background operation fails? How should you feed that back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together? Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?
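
    The behaviour described is often called an optimistic update. Below is a rough C# sketch of the delete-then-roll-back-on-failure flow; the Widget type, the URL, and how the error is surfaced are all made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        public class Widget { public int Id; public string Name; }

        public class WidgetList
        {
            private readonly List<Widget> widgets = new List<Widget>();
            private readonly HttpClient http = new HttpClient();

            // Optimistic delete: hide the widget immediately, restore it if the server call fails.
            public async Task DeleteAsync(Widget widget)
            {
                int index = widgets.IndexOf(widget);
                widgets.RemoveAt(index);                 // immediate feedback in the UI
                try
                {
                    var response = await http.DeleteAsync("https://example.org/widgets/" + widget.Id);
                    response.EnsureSuccessStatusCode();  // background server call
                }
                catch (Exception)
                {
                    widgets.Insert(index, widget);       // background call failed: roll back
                    // ...and surface the failure to the user (toast, banner, retry option).
                }
            }
        }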

    Read the article

  • What is this algorithm for converting strings into numbers called?

    - by CodexArcanum
    I've been doing some work in Parsec recently, and for my toy language I wanted multi-based fractional numbers to be expressible. After digging around in Parsec's source a bit, I found their implementation of a floating-point number parser and copied it to make the needed modifications. So I understand what this code does, and vaguely why (I haven't worked out the math fully yet, but I think I get the gist). But where did it come from? This seems like a pretty clever way to turn strings into floats and ints; is there a name for this algorithm? Or is it just something basic that's a hole in my knowledge? Did the folks behind Parsec devise it? Here's the code, first for integers: number' :: Integer -> Parser Integer number' base = do { digits <- many1 ( oneOf ( sigilRange base )) ; let n = foldl (\x d -> base * x + toInteger (convertDigit base d)) 0 digits ; seq n (return n) } So the basic idea here is that digits contains the string representing the whole number part, i.e. "192". The foldl converts each digit individually into a number, then adds that to the running total multiplied by the base, which means that by the end each digit has been multiplied by the correct factor (in aggregate) to position it. The fractional part is even more interesting: fraction' :: Integer -> Parser Double fraction' base = do { digits <- many1 ( oneOf ( sigilRange base )) ; let base' = fromIntegral base ; let f = foldr (\d x -> (x + fromIntegral (convertDigit base d))/base') 0.0 digits ; seq f (return f) } Same general idea, but now a foldr and using repeated division. I don't quite understand why you add first and then divide for the fraction, but multiply first and then add for the whole. I know it works, I just haven't sorted out why. Anyway, I feel dumb for not working it out myself; it looks very simple and clever. Is there a name for this algorithm? Maybe the imperative version using a loop would be more familiar?
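
    Since the question asks whether an imperative loop would be more familiar, here is a rough C# translation of the two folds; the DigitValue helper is hypothetical, standing in for convertDigit:

        static class PositionalParsing
        {
            // Whole part (the foldl): multiply the running total by the base, then add the digit,
            // so earlier digits end up multiplied by higher powers of the base.
            static long ParseWhole(string digits, int numBase)
            {
                long n = 0;
                foreach (char d in digits)
                    n = n * numBase + DigitValue(d);
                return n;
            }

            // Fractional part (the foldr): walk right to left, adding the digit and then dividing,
            // so the rightmost digit is divided by the base the most times. That is why the
            // fraction adds first and divides after, while the whole part multiplies first.
            static double ParseFraction(string digits, int numBase)
            {
                double f = 0.0;
                for (int i = digits.Length - 1; i >= 0; i--)
                    f = (f + DigitValue(digits[i])) / numBase;
                return f;
            }

            // Hypothetical digit-to-value helper, e.g. '7' -> 7, 'a' -> 10.
            static int DigitValue(char d) =>
                d <= '9' ? d - '0' : char.ToLowerInvariant(d) - 'a' + 10;
        }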

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

    Read the article

  • Associative array challenges, or examples?

    - by Aerovistae
    I understand well when and how to use an associative array, but I'm trying to teach a friend to program and I'm having some trouble with this particular concept. I need a good set of problems whose solutions are best implemented through the use of maps/hashes/associative arrays/dictionaries. I googled all over and couldn't find any. I was hoping someone might know of some, or perhaps give a community-wiki sort of answer. That way I can say: here's our problem, and here's how we could effectively solve it through the use of an associative array. It's one of those cases where, when I'm programming and run into a situation that calls for a dictionary, I recognize it, but I can't seem to make up any such situations to use for a demonstration.
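
    One classic demonstration problem, sketched here in C# (the sample text is made up), is counting word frequencies, which is clumsy with parallel arrays but natural with a dictionary keyed by the word itself:

        using System;
        using System.Collections.Generic;

        static class WordCount
        {
            static void Main()
            {
                string text = "the quick brown fox jumps over the lazy dog the end";
                var counts = new Dictionary<string, int>();
                foreach (string word in text.Split(' '))
                {
                    // The word itself is the key, so lookup and update are O(1) on average.
                    counts.TryGetValue(word, out int seen);
                    counts[word] = seen + 1;
                }
                foreach (var pair in counts)
                    Console.WriteLine(pair.Key + ": " + pair.Value);
            }
        }

    Other stock examples in the same spirit: grouping records by a field, memoizing a recursive function, or building an index from IDs to objects.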

    Read the article

  • Sharing Authentication Across Subdomains using cookies

    - by Jordan Reiter
    I know that in general cookies themselves are not considered robust enough to store authentication information. What I am wondering is if there is an existing design pattern or framework for sharing authentication across subdomains without having to use something more complex like OpenID. Ideally, the process would be that the user visits abc.example.org, logs in, and continues on to xyz.example.org where they are automatically recognized (ideally, the reverse should also be possible -- a login via xyz means automatic login at abc). The snag is that abc.example.org and xyz.example.org are both on different servers and different web application frameworks, although they can both use a shared database. The web application platforms include PHP, ColdFusion, and Python (Django), although I'm also interested in this from a more general perspective (i.e. language agnostic).
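
    One common ingredient, regardless of framework, is a cookie scoped to the parent domain that carries an opaque token which both applications validate against the shared database. A hedged sketch in classic ASP.NET style follows; the cookie name, domain, and lifetime are illustrative assumptions, not a recommendation of cookies as the whole answer:

        using System;
        using System.Web;

        public static class SsoCookie
        {
            // Issues a token cookie that abc.example.org and xyz.example.org can both read,
            // because it is scoped to the parent domain. The token value should be random,
            // stored in the shared database, and re-validated server-side on every request.
            public static void Issue(HttpResponse response, string token)
            {
                var cookie = new HttpCookie("sso_token", token)
                {
                    Domain = ".example.org", // parent domain => visible on all subdomains
                    HttpOnly = true,
                    Secure = true,
                    Expires = DateTime.UtcNow.AddHours(2)
                };
                response.Cookies.Add(cookie);
            }
        }

    The same idea maps onto PHP's setcookie() domain parameter or Django's SESSION_COOKIE_DOMAIN setting; the important parts are the parent-domain scope and the server-side validation of the token.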

    Read the article

  • Getting solutions off the internet. Bad or Good? [closed]

    - by Prometheus87
    I was looking on the internet for common interview questions. I came upon one that was about finding the occurrences of certain characters in an array. The solution written right below it was, in my opinion, very elegant. But then another thought struck me: this solution is way better than anything I would have come up with, so even though I now know the solution (that most probably someone with a better IQ had provided), my IQ is still the same. That means that even though I may know the answer, it still isn't mine. Hence, if that question were asked and I were hired based on my answer to it, I couldn't reproduce that same elegance in my other ventures within the organization. My question is: what do you guys think about such "borrowed intelligence"? Is it good? Do you feel that if solutions are found off the internet, it makes you think in that same, more elegant way?

    Read the article

  • May I remove ads from feeds in my news reader app?

    - by Mahdi Ghiasi
    I'm creating a news reader app for tablets and PCs. My app fetches data from news sources via the RSS feeds of websites (on the server side). But some of these sites show advertising banners at the end of each article. Should I remove those banners from the feed? Am I legally/ethically allowed to do this? And what about if I want to put some other ads in my application (right at the end of each article)? I mean, if I want to have my own advertising service... Update: And what if I use the feed for content titles and summaries, but use something else, like the Readability API, to show the full article, and then put my own ads below the content? (Readability takes the HTML page and gives you a clean page without any ads and such.)

    Read the article

  • DDD and validation of aggregate root

    - by Mik378
    Suppose an aggregate root: MailConfiguration (wrapping an AddressPart object). The AddressPart object is a simple immutable value object with some fields like senderAddress and recipientAddress (to keep the example simple). Being an invariant-holding object, AddressPart should logically wrap its own validator (by way of an external AddressValidator of sorts, to respect the Single Responsibility Principle). I read some articles that claimed an aggregate root must validate its 'children'. However, if we follow this principle, one could create an AddressPart with an incohesive/invalid state. What is your opinion? Should I move the collaborator AddressValidator (used in the constructor in order to validate the cohesion of an AddressPart immediately) out of AddressPart and assign it to the aggregate root (MailConfiguration)?
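
    For what it's worth, one common reading of value-object invariants is that validation lives in the value object's constructor, so an invalid AddressPart simply cannot exist, whatever the aggregate root does. A hedged C# sketch of that option (field names and checks are illustrative only):

        using System;

        // Immutable value object that refuses to be constructed in an invalid state.
        public sealed class AddressPart
        {
            public string SenderAddress { get; }
            public string RecipientAddress { get; }

            public AddressPart(string senderAddress, string recipientAddress)
            {
                // Validation sits with the invariant, so no caller (including the
                // MailConfiguration aggregate root) can create an incoherent instance.
                if (string.IsNullOrWhiteSpace(senderAddress) || !senderAddress.Contains("@"))
                    throw new ArgumentException("Invalid sender address", nameof(senderAddress));
                if (string.IsNullOrWhiteSpace(recipientAddress) || !recipientAddress.Contains("@"))
                    throw new ArgumentException("Invalid recipient address", nameof(recipientAddress));

                SenderAddress = senderAddress;
                RecipientAddress = recipientAddress;
            }
        }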

    Read the article

  • KISS principle applied to programming language design?

    - by Giorgio
    KISS ("keep it simple stupid", see e.g. here) is an important principle in software development, even though it apparently originated in engineering. Citing from the wikipedia article: The principle is best exemplified by the story of Johnson handing a team of design engineers a handful of tools, with the challenge that the jet aircraft they were designing must be repairable by an average mechanic in the field under combat conditions with only these tools. Hence, the 'stupid' refers to the relationship between the way things break and the sophistication available to fix them. If I wanted to apply this to the field of software development I would replace "jet aircraft" with "piece of software", "average mechanic" with "average developer" and "under combat conditions" with "under the expected software development / maintenance conditions" (deadlines, time constraints, meetings / interruptions, available tools, and so on). So it is a commonly accepted idea that one should try to keep a piece of software simple stupid so that it easy to work on it later. But can the KISS principle be applied also to programming language design? Do you know of any programming languages that have been designed specifically with this principle in mind, i.e. to "allow an average programmer under average working conditions to write and maintain as much code as possible with the least cognitive effort"? If you cite any specific language it would be great if you could add a link to some document in which this intent is clearly expressed by the language designers. In any case, I would be interested to learn about the designers' (documented) intentions rather than your personal opinion about a particular programming language.

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems for handling online deposits to stored-value accounts (think campus card accounts for students). Here's my dilemma: stage 1 of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance of a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single-process, single-threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
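
    One way to frame the constraint: the sequence number has to be assigned at the moment a single, serialized worker dispatches the request to the legacy system, not when the web request happens to arrive. The sketch below (in C#, with entirely hypothetical Deposit and LegacySystem types) only illustrates that ordering idea; it does not solve the separate problem of reporting success back to the user:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        public class Deposit { public long SequenceNumber; public decimal Amount; }

        public static class LegacySystem
        {
            public static void Send(Deposit d) { /* proprietary, order-sensitive protocol */ }
        }

        public class DepositDispatcher
        {
            private readonly BlockingCollection<Deposit> queue = new BlockingCollection<Deposit>();
            private long nextSeq = 1;

            // Web requests enqueue in any order; ordering is decided later.
            public void Enqueue(Deposit deposit) => queue.Add(deposit);

            // A single consumer assigns seqnums as it sends, so the legacy system
            // always sees them strictly increasing, regardless of web-request races.
            public Task RunAsync() => Task.Run(() =>
            {
                foreach (var deposit in queue.GetConsumingEnumerable())
                {
                    deposit.SequenceNumber = nextSeq++;
                    LegacySystem.Send(deposit);
                }
            });
        }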

    Read the article
