Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 105/663

  • Android programming vs iPhone Programming?

    - by geena
    Hi, I am doing my final project and thinking of a mobile app to develop, but I am new to the mobile OS world and don't know which way to go. I mean, in the long term, which will be more beneficial to me, Android or iPhone programming, both in general and for my final project? :) Thanks for all your suggestions, guys :) I am, if not brilliant, then pretty good at Java and C++ :) Objective-C is a little different from standard C/C++, but I think I can cope with it. Owning a Mac or running Snow Leopard in VMware is not going to make much difference in iOS development... or is it? Actually, as it is the final project for my BS degree, I am wondering whether an iPhone or Android app is worth taking on as a final project at all... Or is it better to stick with web/desktop development? and what this means that i have to be a

    Read the article

  • Should we use RSS or Atom for feed generation?

    - by Henrik Söderlund
    For various reasons we are required to add feeds to our product. The main reason is to be able to say to potential buyers that "yes, we have feeds". We do not actually expect the feature to be used that much. Ideally we would like to provide both RSS and Atom feeds. However, at the moment we are severely pressed for time and are forced to select just one of these. Should we use Atom or RSS? Feature-wise we are fine with either, so I am only looking for information about the popularity and support for the various formats. Are there many feed readers out there without Atom support?
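
    For a sense of how little work either format is, here is a minimal sketch of serving an Atom 1.0 feed from PHP with DOMDocument (the feed title, URLs and item array are invented for illustration; a strictly valid feed would also want an author element):

        <?php
        // Hypothetical item data pulled from the product.
        $items = array(
            array('title' => 'First post', 'url' => 'http://example.com/posts/1', 'updated' => '2012-01-01T00:00:00Z'),
        );

        $ns   = 'http://www.w3.org/2005/Atom';
        $doc  = new DOMDocument('1.0', 'UTF-8');
        $feed = $doc->createElementNS($ns, 'feed');
        $doc->appendChild($feed);

        // The three elements Atom requires at feed level: id, title, updated.
        $feed->appendChild($doc->createElementNS($ns, 'id', 'http://example.com/feed'));
        $feed->appendChild($doc->createElementNS($ns, 'title', 'Example product feed'));
        $feed->appendChild($doc->createElementNS($ns, 'updated', gmdate('Y-m-d\TH:i:s\Z')));

        foreach ($items as $item) {
            $entry = $doc->createElementNS($ns, 'entry');
            $entry->appendChild($doc->createElementNS($ns, 'id', $item['url']));
            $entry->appendChild($doc->createElementNS($ns, 'title', $item['title']));
            $entry->appendChild($doc->createElementNS($ns, 'updated', $item['updated']));
            $link = $doc->createElementNS($ns, 'link');
            $link->setAttribute('href', $item['url']);
            $entry->appendChild($link);
            $feed->appendChild($entry);
        }

        header('Content-Type: application/atom+xml; charset=utf-8');
        echo $doc->saveXML();

    An RSS 2.0 version is structurally the same exercise with channel/item elements, which is partly why the choice matters less than it first appears; virtually every modern reader handles both formats.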

    Read the article

  • Paper on Linux memory access techniques sought

    - by James
    Over on stackoverflow someone posted a link to a paper written by a Linux kernel engineer about how to use computers and RAM. He started off by explaining how RAM works (right down to the flip-flops) and then went on to discuss performance problems associated with operations on matrices (column vs row accesses), offered solutions and then dealt with some stuff MMX instructions can do. Sorry it's a bit vague but I can't find it anywhere. I think the guy had a Scandinavian name, possibly Anders

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? Right now I'm handling all session management, and whether or not a user wants to log out, in my config.inc file. As I was writing my Auth class I started wondering whether the Auth class should be taking care of most of the logic that currently lives in config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in config.inc (a large chunk of this code is based on a reply I found on SO, except I can't find the source):

        ini_set('session.name', 'SID'); # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                        // destroy session data
            $_SESSION = array();                      // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60);  // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) {     // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();              // reset SID value
                $_SESSION['SID_CREATED'] = time();    // update creation time
            }
            if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                        // destroy session data
                $_SESSION = array();                      // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60);  // destroy session cookie data
            }
            $_SESSION['SID_MODIFIED'] = time();       // update last activity time stamp
        }
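
    One way to answer the "pretty and proper OO" part is to push the mechanics into a small session wrapper and let an Auth class decide only when those mechanics run, so config.inc just wires the two together. The sketch below is illustrative only (class and method names are invented, not from the question), but it keeps the same rules as the code above:

        <?php
        // Hypothetical wrapper: owns cookie parameters, regeneration and teardown.
        class SessionManager
        {
            const LIFETIME = 86400; // 24-hour cookie lifetime, as in the original code

            public function start()
            {
                ini_set('session.name', 'SID');
                session_set_cookie_params(self::LIFETIME);
                session_start();
            }

            public function destroy()
            {
                $_SESSION = array();
                session_destroy();
                setcookie('SID', '', time() - self::LIFETIME);
            }

            public function regenerate()
            {
                session_regenerate_id(true);
                $_SESSION['SID_CREATED'] = time();
            }
        }

        // Hypothetical Auth class: decides *when* the session operations happen.
        class Auth
        {
            private $session;

            public function __construct(SessionManager $session)
            {
                $this->session = $session;
            }

            public function handleRequest()
            {
                $this->session->start();
                if (isset($_SESSION['LOGOUT'])) {
                    $this->session->destroy();
                    return;
                }
                if (isset($_SESSION['SID_AUTH'])) {
                    // the 6-hour regeneration and 12-hour expiry rules would live here
                    $_SESSION['SID_MODIFIED'] = time();
                }
            }
        }

        // config.inc then shrinks to two lines:
        $auth = new Auth(new SessionManager());
        $auth->handleRequest();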

    Read the article

  • Best way of Javascript web development in Netbeans (Hot deployment)

    - by marcelocbf
    I'm beginning JavaScript development, and as a beginner I make a lot of mistakes. The way I'm developing is very counter-productive, because for every mistake I fix I have to shut down GlassFish, rebuild the app and redeploy it. My app is a Java back end with REST services, plus HTML, JavaScript and CSS for the frontend, everything packed in a .ear file. Right now I'm only working on the frontend, but I still have to go through this whole process to update the files. My question is: is there a better way of doing this? Can somebody tell me how you work in a similar setup for everyday development?

    Read the article

  • When to favor ASP.NET WebForms over MVC

    - by P.Brian.Mackey
    I know Microsoft has said "ASP.NET MVC is not a replacement for WebForms". Some developers say WebForms is faster to develop with than MVC, but I believe that comes down to comfort level with the technology, so I don't want any answers in that direction. Given that ASP.NET MVC gives the developer more control over the application, why is WebForms not considered obsolete? When should I favor WebForms over MVC for new development?

    Read the article

  • Relationship between TDD and Software Architecture/Design

    - by Christopher Francisco
    I'm new to TDD and have been reading the theory, since applying it is more complicated than it sounds when you're learning by yourself. As far as I know, the objective is to write a test case for each requirement and run it so it fails first (to prevent a false positive). Afterward, you write the minimum amount of code that can pass the test and move on to the next one. That said, you may get fast development, but what about the code itself? This approach makes me think you are not considering things like abstraction, delegation of responsibilities, design patterns, or architecture, since you're just writing "the minimum amount of code that can pass the test". I know I'm probably wrong, because if this were true we'd have a lot of crappy developers producing poor software architecture and documentation. So I'm asking for guidance here: what's the relationship between TDD and software architecture/design?
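
    To make "the minimum amount of code that can pass the test" concrete, here is a hedged PHPUnit-style sketch (class names invented, assuming PHPUnit is installed). The step most introductions under-emphasize is the third one: after each green bar you refactor, and that refactoring step is where abstraction, responsibilities and patterns enter, driven by duplication the tests have made safe to remove:

        <?php
        use PHPUnit\Framework\TestCase;

        // Red: the test states the requirement before any production code exists.
        class ShippingCalculatorTest extends TestCase
        {
            public function testOrdersOfOneHundredOrMoreShipFree()
            {
                $calc = new ShippingCalculator();
                $this->assertSame(0, $calc->cost(100));
            }

            public function testSmallerOrdersPayFlatRate()
            {
                $calc = new ShippingCalculator();
                $this->assertSame(5, $calc->cost(40));
            }
        }

        // Green: the simplest thing that passes. Design pressure (a strategy object,
        // a rate table, an interface) is added later, in the refactor step, once a
        // real second requirement makes the naive version start to hurt.
        class ShippingCalculator
        {
            public function cost($orderTotal)
            {
                return $orderTotal >= 100 ? 0 : 5;
            }
        }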

    Read the article

  • A list of pros and cons to giving developers “Local Admin” privileges to their machines? [closed]

    - by Boden
    Possible Duplicate: Is local “User” rights enough or do developers need Local Administrator or Power User while coding? I currently work for a large utilities company which does not grant “Local Admin” access to developers. This is causing a lot of grief, as anything that requires elevated privileges has to be done by the Desktop Support/Server teams. In some cases this can take several days and requires our developers to justify why they need the access. I personally think that all developers should have local administration rights and am currently fighting with management to achieve this, but I would like to know what other people think. To that end, I would like to hear what people believe are the pros and cons of letting developers have local admin access to their machines. Here are some I have come up with:

    Pros:
      - Lost time is kept low, as developers can resolve issues that would normally require Local Admin
      - Tools and software can be evaluated to improve productivity
      - Desktop Support time is not wasted installing services and software on developers' PCs

    Cons:
      - Developers may install software on the local PC that is malicious to others or inappropriate in a business environment
      - Desktop Support has to support a PC that is not the standard build
      - Development done with admin access may fail when promoted to another environment that does not have the same access level

    Read the article

  • How do I deal with analysis paralysis?

    - by Anne Nonimus
    Very frequently I get stuck when choosing the best design decision. Even for small details, such as function definitions, control flow, and variable names, I spend unusually long periods weighing the benefits and trade-offs of my choices. I feel like I am losing a lot of efficiency by spending my hours on insignificant details like these. Even though I know in the back of my mind that I can change these things if my current design doesn't work out, I have trouble settling firmly on one choice. What should I do to combat this problem?

    Read the article

  • How to learn how the web works? [closed]

    - by Goma
    I was thinking of starting to learn ASP.NET Web Forms, and some of my friends told me that I should learn something else, such as ASP.NET MVC or PHP, because ASP.NET Web Forms won't teach me how the web works and I will come away with misunderstandings about the web if I learn it. To what extent is that true? Must I change my learning path towards ASP.NET MVC or PHP, or is it OK to start with Web Forms?

    Read the article

  • Where to submit my Java blog to get more visitors?

    - by Javin Paul
    Hi guys, I have started writing about my Java experience on my blog, JAVA, TIBCO RV and FIX Protocol Tutorial. I want to share this blog with the community so that more people can benefit from it, and so that my site gets a decent number of visitors to keep me motivated to write :) Can you please suggest where I can submit my Java blog and how I can share it with the Java communities? If this is not the right place, please point me to the correct place to ask this question. Thanks in advance.

    Read the article

  • GCC 4.2.1 Compiling on Cygwin(Win7 64bit) for iPhone [closed]

    - by Kenneth Noland
    Hey This is going to take a long while to explain, but the short version is that I am currently attempting to compile the LLVM GCC frontend for ARMv7 to compile apps for the Cortex-A8(iPhone 3GS). I'm running into an error from LD when compiling libgcc(part of the gcc compilation process) that has been driving me mad! The command is this: /usr/llvm-gcc-4.2-2.8.source/build/./gcc/xgcc \ -B/usr/llvm-gcc-4.2_2.8.source/build/./gcc \ -B/usr/local/arm-apple-darwin/bin \ -B/usr/local/arm-apple-darwin/lib \ -isystem /usr/local/arm-apple-darwin/include \ -isystem /usr/local/arm-apple-darwin/sys-include \ -O2 -g -W -Wall -Wwrite-strings -wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -fno-inline -dynamiclib -nodefaultlibs -W1,-dead_strip \ -marm \ -install_name /usr/local/arm-apple-darwin/lib/libgcc_s.1.dylib \ -single_module -o ./libgcc_s.1.dylib.tmp \ -W1,-exported_symbols_list,libgcc/./libgcc.map -compatibility_version 1 -current_version 1.0 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED -Dinhibit_libc \ ... long list of .o files ... \ -lc And the result is typically a lot of undefined references to malloc, free, exit, etc. which typically indicate that libc is not getting compiled in. After going through the list of errors that ld is throwing, I see at the top that it is attempting to pull in /usr/lib/libc.a and complains that it is not the correct platform. Okay, that makes sense, so I spent 5 minutes on google and found an answer. Turns out that if I copy the libSystem.dylib and rename it to libc.dylib, that should solve the problem, but it doesn't. I couldn't find a copy of that file on my phone, so I pulled it directly from the SDK. I then get this strange error: ld64: in /usr/local/arm-apple-darwin/lib/libc.dylib, can't re-map file, errno=22 At this point, I did everything I could think of. I grabbed a fresh copy of my /usr/lib folder from my iphone and confirmed that libSystem.dylib(and libSystem.B.dylib) wasn't there. I unpacked the raw .ipsw package for iOS 4.2.1 and once again, I could not find a copy of libSystem.dylib there either. I unpacked the iPhoneSDK and MacOS SDK and I managed to find a copy of it in both, but that error just kept persisting. I copied libSystem.dylib, libSystem.B.dylib, tried all sorts of combinations of renaming to libc.dylib and still nothing but errors. I can't find a way to get it to recognize the file and link against it. I also tried linking against the libc.a located in the iphone SDK and that didn't work either. I checked what ./xgcc was firing off, and it was my freshly built copy of arm-apple-darwin-ld64 which should be fine. A little bit of background here. I built LLVM+Clang 2.8 with no errors, and I rebuilt the ODCCTools with some light modifications to get it to compile on Cygwin(I'll post my changes in a patch along with a tutorial if I can get this to work). I also grabbed the iphone-dev "includes" and "csu" project and those completed successfully, although there really is no point to them since I can't get it to link against crt0.a. I'm running out of ideas here. Can anyone help me out on this?

    Read the article

  • What rules of etiquette should be followed at software conferences?

    - by shemnon
    Whether as an attendee, a speaker, or a vendor, I wanted to know what the unspoken rules of etiquette are at software conferences, other than the blindingly obvious ones (like don't assault the winner of the iPad raffle because you didn't win). What are some of the rules that should be followed, even if you feel they don't need to be said? For example: which t-shirts are acceptable (from competing technologies or conferences?), playing Wii at vendor booths at Microsoft-sponsored conferences, cancelling sessions due to low (one hand) attendance, showering, 'Attendees of Size' spilling over into your seat, talking and eating during sessions or keynotes, etc. Please, one rule per answer, with a summary in bold leading the answer. Post multiple answers if you have multiple rules.

    Read the article

  • "Build your own website" - Developing a CMS with Vague Requirements on a Tight Deadline

    - by walnutmon
    I'm a Java developer in charge of making a product which allows clients to "build their own site". I've spent a lot of time looking into Liferay, as I don't have any experience in building CMSs and want to either use it or get ideas of how to build a decent system. The timeline is short and the requirements are vague, yada yada. Is Liferay a good technology to work with when showing the client (who may have very little computer expertise) a user interface to build a site? The thing is, I want the power and flexibility to avoid the learning curve of building a CMS-like product, but I don't want to waste time learning a new technology only to find it's overkill, or that it can't do the simple - but uncommon and unimplemented - things we are asked to add as features. Ideally I'd like to provide multiple web interfaces to the core API to build the sites - one that is very powerful, and another that is watered down and easy to use.

    Read the article

  • Is “Application Programming Interface” a bad name?

    - by Taylor Hawkes
    Application programming interface seems like a bad name for what it is. Is there a reason it was named that? I understand that people used to call them Advanced Programming Interfaces and then renamed them to Application Programming Interfaces. Is that why it is poorly named? Why is it not named Application (to) Programmer Interface? I guess I'm just confused about the meaning behind the name. I write more about my confusion around the name here: BREAKING DOWN THE WORD “APPLICATION PROGRAMMING INTERFACE”. It is a very confusing term. We mostly understand what the word “interface” means, but “application programming”, what even is that? Honestly, I'm confused. Is it supposed to be two words, “application” and “programming”, with the “interface” sitting between the two? Would a “computer human interface” be an interface between a “computer” and a “human” (monitor, keyboard, mouse), or is a “computer human” a real thing, perhaps the Terminator? In that case a CHI is our boy Kyle Reese, who is the only way we are able to work with the computer human. I think it's more likely that “Application Programming Interface” was simply poorly named and doesn't really make sense. It was originally called an “Advanced Programming Interface” but, perhaps being a bit too ostentatious, merged into the now widely accepted “Application Programming Interface”. So now, not wanting to change an acronym has confused the living heck out of everyone... Any thoughts or clarification would be great; I'm giving a lecture on this topic in a month, so I would prefer not to BS my way through it.

    Read the article

  • Is inline SQL still classed as bad practice now that we have Micro ORMs?

    - by Grofit
    This is a bit of an open-ended question, but I wanted some opinions. I grew up in a world where inline SQL scripts were the norm; then we were all made very aware of SQL injection issues and of how fragile that SQL was when string manipulation happened all over the place. Then came the dawn of the ORM, where you described the query to the ORM and let it generate its own SQL, which in a lot of cases was not optimal but was safe and easy. Another good thing about ORMs and database abstraction layers was that the SQL was generated with its database engine in mind, so I could use Hibernate/NHibernate with MSSQL or MySQL and my code never changed; it was just a configuration detail. Now, fast forward to the current day, where micro ORMs seem to be winning over more developers, and I was wondering why we have seemingly taken a U-turn on the whole inline SQL subject. I must admit I do like the idea of no ORM config files and being able to write my query in a more optimal manner, but it feels like I am opening myself back up to the old vulnerabilities, such as SQL injection, and I am also tying myself to one database engine, so if I want my software to support multiple database engines I would need to do some more string hackery, which starts to make the code unreadable and more fragile. (Before someone mentions it: I know you can use parameter-based arguments with most micro ORMs, which offers protection in most cases from SQL injection.) So what are people's opinions on this sort of thing? I am using Dapper as my micro ORM in this instance and NHibernate as my regular ORM, although most offerings in each field are quite similar. What I call inline SQL is SQL strings within source code. There used to be design debates over SQL strings in source code detracting from the fundamental intent of the logic, which is why statically typed LINQ-style queries became so popular: it's still just one language. But with, let's say, C# and SQL on one page, you now have two languages intermingled in your raw source code. Just to clarify, SQL injection is only one of the known issues with using SQL strings; I already mentioned that you can stop it with parameterized queries. I am also highlighting other issues with having SQL queries ingrained in your source code, such as the lack of DB vendor abstraction and the loss of any compile-time error checking on string-based queries. These are all issues we managed to sidestep with the dawn of ORMs and their higher-level querying functionality, such as HQL or LINQ (not all of the issues, but most of them). So I am less focused on the individual issues and more on the bigger picture: is it now becoming more acceptable to have SQL strings directly in your source code again, given that most micro ORMs use this mechanism? Here is a similar question with a few different viewpoints, although it is more about inline SQL without the micro ORM context: http://stackoverflow.com/questions/5303746/is-inline-sql-hard-coding
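
    On the injection point specifically, parameterization is what removes it, whichever side of the ORM debate you land on; Dapper's anonymous-object parameters express the same idea. A rough sketch in PHP with PDO (connection details, table and column names are invented), where the query string itself never contains user input:

        <?php
        // Inline SQL, but parameterized: user input travels separately from the SQL text.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));

        $stmt = $pdo->prepare(
            'SELECT id, name, email FROM customers WHERE country = :country AND active = :active'
        );
        $stmt->execute(array(
            ':country' => $_GET['country'],  // untrusted input, bound as a value, never concatenated
            ':active'  => 1,
        ));

        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            echo $row['name'], "\n";
        }

    What parameterization does not buy back, as the question notes, is vendor abstraction or compile-time checking of the SQL string; those trade-offs remain whichever micro ORM sits on top.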

    Read the article

  • Best Practices To Build a Product Registration System?

    - by Volomike
    What are some practices I should use in a product registration system I'm building? I likely can't stop all malicious hacking, but I'd like to slow attackers down a great deal. (Note: I know only PHP.) I'm talking about things like encrypting traffic, testing the encryption against attacks such as a man-in-the-middle attack, etc. The other concern I have is that this needs to work on most PHP5-based web hosting environments, which may not have mcrypt installed.
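
    One building block that satisfies the "no mcrypt" constraint is to make registration keys self-verifying with an HMAC: hash_hmac() is part of the bundled hash extension in PHP 5.1+, so it works on nearly any PHP5 host. A rough sketch (the field layout and function names are invented, and the secret must only ever exist on the licensing server):

        <?php
        define('LICENSE_SECRET', 'replace-with-a-long-random-secret'); // server-side only

        // Issue a key that encodes who it was sold to, plus a MAC over that payload.
        function issue_key($email, $productId)
        {
            $payload = $email . '|' . $productId . '|' . time();
            $mac = hash_hmac('sha256', $payload, LICENSE_SECRET);
            return base64_encode($payload) . '.' . $mac;
        }

        // Verify a key without a database lookup: recompute the MAC and compare.
        function verify_key($key)
        {
            $parts = explode('.', $key, 2);
            if (count($parts) !== 2) {
                return false;
            }
            $payload = base64_decode($parts[0], true);
            if ($payload === false) {
                return false;
            }
            $expected = hash_hmac('sha256', $payload, LICENSE_SECRET);
            // hash_equals() (PHP 5.6+) avoids timing leaks; fall back to === on older hosts.
            return function_exists('hash_equals') ? hash_equals($expected, $parts[1]) : $expected === $parts[1];
        }

    For the transport concerns in the question (man-in-the-middle), the usual answer is simply to require HTTPS for the activation endpoint rather than inventing a custom encryption layer.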

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over. The general data model: there are records identified by record IDs (RIDs), grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), a few of them mandatory and the rest optional, and thus sparse. The general usage model: there are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes with existing GIDs. There aren't as many reads, but when they happen they need to be pretty fast, or at least constant speed regardless of the database size, and a read needs to retrieve all the records/RIDs with a certain GID. I don't need to search by the record field values; I will primarily query by the GID and maybe the RID. What database implementation should I use? I did some initial research on document-oriented and column-oriented databases, and the document-oriented ones seem a good fit model-wise: I could store all the records together under the same document key using the GID, but I don't really have any use for their ability to search inside the document contents. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, with a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID, with each row holding a column for the GID value? Which results in faster writes and reads under this model?
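
    If the column-oriented route wins, the access pattern described (write-heavy, "fetch everything for a GID") maps naturally onto one wide partition per group: make the GID the partition key and the RID the clustering column, so a group's records are stored together and the read is a single partition scan. A hedged CQL sketch (CQL 3 syntax, column names invented; the handful of mandatory fields become real columns, the sparse optional ones go in a map):

        -- One partition per group; records are clustered by rid within it.
        CREATE TABLE records_by_group (
            gid     bigint,
            rid     bigint,
            field_a text,
            field_b text,
            extras  map<text, text>,
            PRIMARY KEY (gid, rid)
        );

        -- Write path: one insert per record, grouped under its gid.
        INSERT INTO records_by_group (gid, rid, field_a, extras)
        VALUES (42, 1001, 'mandatory value', {'optional_field': 'x'});

        -- Read path: all records for a group in one query against one partition.
        SELECT * FROM records_by_group WHERE gid = 42;

    The main thing to watch with this model is partition size: hundreds of thousands of records under one GID is workable, but unbounded growth of a single partition eventually hurts, so very large groups may need a bucketing component added to the partition key.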

    Read the article

  • What are the legal risks if any of using a GPL'ed Web Application Framework/CMS?

    - by Seth Spearman
    Tried to ask this on SO but was referred here... Am I correct in saying that using a GPL'ed web application framework such as Composite C1 would NOT obligate a company to share the source code we write against that framework? Closing that gap is the purpose of the AGPL, am I correct? Does this also apply to JavaScript frameworks like KendoUI? The GPL would require any changes we make to the framework itself to be made available to others only if we were to offer it for download; in other words, merely loading a web site's content into my browser is not "conveying" or "distributing" that software. I have been arguing that we should avoid GPL web frameworks, and after researching I am now pretty sure I am wrong, but I wanted to get other opinions. Seth

    Read the article

  • How do you decide site availability requirements?

    - by Nathan Long
    I work on a web application to file a specific kind of county taxes. Our company wants our state to mandate that counties must accept electronic filings (as opposed to paper) from any system that meets some sensible requirements for uptime, security, data validation, etc. (Yes, this would help us as a business, but it would also force county governments to be more efficient.) We're creating a draft of those requirements to be reviewed and tweaked with the state. One of the sections is "availability." We want to specify something reasonably high, but not so high that any unexpected problem will get us (or a competitor) penalized. How do we decide what's reasonable for availability requirements?
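
    For calibration, the arithmetic behind the usual "number of nines" is straightforward: allowed downtime per year is (1 - availability) x 8,760 hours. So 99% permits roughly 87.6 hours (about 3.7 days) of downtime a year, 99.9% about 8.8 hours, and 99.99% about 53 minutes. Choosing the requirement then becomes a question of which of those outage budgets a county filing deadline can actually absorb.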

    Read the article

  • What Contents in a Young Programmer's Personal Website

    - by DotNetStudent
    I recently stumbled upon this question, in which the contents of a professional programmer's website were discussed, and I agree with most of the answers there. However, I am by no means a professional programmer (I just came out of university), so I am a bit lost as to what content I should provide on the personal website I am now designing for myself. I do have a pretty nice job at a fast-growing software company, but I would really like to present myself to the outside world in a nice but humble manner, since my CV is by no means a long one. Any ideas?

    Read the article

  • Opinion on LastPass's security for the Average Joe [closed]

    - by Rook
    This is borderline objective/subjective, but I'm posting it here since I'm more interested in objective facts, without going into too much technical detail, than in user reviews of LastPass. I've always used offline methods for storing passwords and sensitive data, but lately I keep hearing good things about LastPass. It is certainly more practical to have everything accessible from every computer you use, without syncing and its related problems, but the security aspect still troubles me. How, in a nutshell and for dummies, does LastPass keep your data secure? Can their employees see your data? And what is your opinion of this kind of storage for more than the usual sensitive data (bank PIN codes, some financial and business-related material and so on; you know, the things that would really hurt if lost or phished)? What are your opinions of it, and do you trust it for such data? Any bad experiences? If someone is, for example, sniffing your wifi network, would such data be easier than usual to sniff out?
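
    In broad strokes, the model services like this describe is "host-proof" storage: an encryption key is derived from your master password on the client, the vault is encrypted before it leaves your machine, and the provider stores only ciphertext, so neither their employees nor someone sniffing the (already TLS-protected) traffic sees plaintext. The sketch below illustrates that general pattern only; it is emphatically not LastPass's actual implementation, and the parameters are placeholders (PHP 5.5+ for hash_pbkdf2):

        <?php
        // Host-proof pattern: derive the key locally, store only ciphertext remotely.
        $masterPassword = 'correct horse battery staple';
        $salt           = 'per-account-random-salt';   // stored alongside the ciphertext
        $iterations     = 100000;

        // Key derivation happens on the client, never on the provider's servers.
        $key = hash_pbkdf2('sha256', $masterPassword, $salt, $iterations, 32, true);

        $iv         = openssl_random_pseudo_bytes(16);
        $ciphertext = openssl_encrypt('bank PIN: 1234', 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);

        // Only the salt, IV and ciphertext ever reach the provider.
        $stored = base64_encode($iv . $ciphertext);

        // Decryption reverses it locally with the re-derived key.
        $raw   = base64_decode($stored);
        $plain = openssl_decrypt(substr($raw, 16), 'aes-256-cbc', $key, OPENSSL_RAW_DATA, substr($raw, 0, 16));
        echo $plain;

    Under that model, the practical risk concentrates on the strength of the master password and on the client itself (browser extension, malware, phishing) rather than on bulk reads of the provider's database.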

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
    I'm starting an internship program for our software department, and I am considering creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities:

      - Train interns
      - Coach interns
      - Manage projects and tasks for interns
      - Supervise interns' work in terms of rhythm and quality
      - Act as a liaison between the main team's needs and the interns' performance/aspirations
      - Evaluate and facilitate an intern's progress when they want to take on a higher-level domain-specific task (at which point a main dev team member can do the mentoring)
      - Get freely involved in the main team's software development tasks so that he himself can grow, with full mentorship from the main dev team

    I'm thinking that an apprentice-level engineer (below Jr., or Jr., but a graduate working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happens:

      1. He or she decides to move on to the main dev team, recommending an appropriate replacement (or I find another one as a new hire)
      2. He or she keeps leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng.

    I know the notion of a "chief intern" is common in the medical world, but I don't really know about it in the software world (I was a freelancer for most of my university years). A side intention is that, if this ends up being a higher-rotation position (organically) because the intern supervisor wants to join the main dev team, it could help interns who aspire to the position emerge as leaders. My main intention, though, is removing distractions from the main team without making the interns suffer from a lack of attention, which could lead to boredom and poor intern retention. Is this "chief intern" idea common (or at least good)? Are there any obvious risks to it that I might not be seeing? Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns? Edit #2: My intention is not to keep them isolated, but to have someone focus on giving them attention when we cannot. Edit #3: I'm now convinced it is a good idea, but I will take the organic approach to filling the position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • PHP Aspect Oriented Design

    - by Devin Dixon
    This is a continuation of this Code Review question. The takeaway from that post, and from aspect-oriented design in general, is that it is hard to debug. To counter that, I implemented the ability to turn tracing of the design patterns on. Turning trace on works like this:

        // This can be added anywhere in the code
        Run::setAdapterTrace(true);
        Run::setFilterTrace(true);
        Run::setObserverTrace(true);
        // Execute the function
        echo Run::goForARun(8);

    In the actual log with the trace turned on, it outputs like so:

        adapter 2012-02-12 21:46:19 {"type":"closure","object":"static","call_class":"\/public_html\/examples\/design\/ClosureDesigns.php","class":"Run","method":"goForARun","call_method":"goForARun","trace":"Run::goForARun","start_line":68,"end_line":70}
        filter 2012-02-12 22:05:15 {"type":"closure","event":"return","object":"static","class":"run_filter","method":"\/home\/prodigyview\/public_html\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":51,"end_line":58}
        observer 2012-02-12 22:05:15 {"type":"closure","object":"static","class":"run_observer","method":"\/home\/prodigyview\/public_html\/public\/examples\/design\/ClosureDesigns.php","trace":"Run::goForARun","start_line":61,"end_line":63}

    When the information is broken down, the data translates to:

      - Called by an adapter, filter or observer
      - The function called was a closure
      - The location of the closure
      - The Class::method the adapter was implemented on
      - The trace of where the method was called from
      - Start line and end line

    The code has been proven to work in production environments and features various examples of how to implement it, so the proof of concept is there. It is not DI and accomplishes things that DI cannot. I wouldn't call the code boilerplate, but I would call it bloated. In summary, the weaknesses are bloated code and a learning curve, in exchange for aspect-oriented functionality. Beyond the normal fear of something new and different, what are the other weaknesses in this implementation of aspect-oriented design, if any? PS: More examples of AOP here: https://github.com/ProdigyView/ProdigyView/tree/master/examples/design
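
    For readers who have not seen the closure-based flavour of this, the core mechanism can be shown without the framework: wrap a target callable so that filters may rewrite its return value and observers get notified after the call. This is a stripped-down generic sketch, not ProdigyView's actual API:

        <?php
        // Minimal closure-based "advice": filters transform the result, observers react to it.
        function with_aspects(callable $target, array $filters = array(), array $observers = array())
        {
            return function () use ($target, $filters, $observers) {
                $args   = func_get_args();
                $result = call_user_func_array($target, $args);
                foreach ($filters as $filter) {
                    $result = $filter($result, $args);   // filter: may rewrite the return value
                }
                foreach ($observers as $observer) {
                    $observer($result, $args);           // observer: side effects only, e.g. trace logging
                }
                return $result;
            };
        }

        $goForARun = with_aspects(
            function ($miles) { return "ran $miles miles"; },
            array(function ($result) { return strtoupper($result); }),
            array(function ($result) { error_log('trace: ' . $result); })
        );

        echo $goForARun(8); // RAN 8 MILES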

    Read the article

  • Is there a tool to do round trip software engineering between a sequence diagram and a group of objects that message back and forth?

    - by DeveloperDon
    Is there a tool to do round-trip software engineering between a sequence diagram and a group of objects that message back and forth? Perhaps this seems a little exotic, but it seems like a function that makes message calls, or even method invocations on other objects, could be automatically converted to a sequence diagram, given that doing it manually is not hard. Similarly, when a sequence diagram is modified, should it not be possible, based on the message name and type of message, to add the corresponding message or method to the calling object?

    Read the article
