Search Results


  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt & access it in C on an embedded system with little RAM. I will do that once & fetch all info; I will not be continuously accessing the INI file whenever my program needs data from the file. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi & C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure if that will be the final choice. It has 512 kB flash & 128 kB RAM. For an INI file, I am talking of max 8 sections, with a total of max 256 entries, each max 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on PC), but suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need to get more than one line into RAM at a time ...
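
    A hedged sketch of that last idea, written in C-compatible C++ so it could drop into the embedded project as C: the file is decrypted one 8-byte XTEA block at a time, and at most one key=value line is ever held in RAM. XTEA is only one suggestion for a lightweight FOSS cipher; the file name, key and line format are illustrative assumptions, not details from the question.

    ```cpp
    /* Decrypt an XTEA-encrypted key=value file one 8-byte block at a time,
     * assembling at most one line in RAM. Assumes encryptor and decryptor
     * agree on byte order within each block. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_LINE 64               /* max 8-char keys and values fit easily */

    static void xtea_decipher(uint32_t v[2], const uint32_t key[4]) {
        uint32_t v0 = v[0], v1 = v[1], delta = 0x9E3779B9u, sum = delta * 32;
        for (unsigned i = 0; i < 32; i++) {
            v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
            sum -= delta;
            v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
        }
        v[0] = v0; v[1] = v1;
    }

    /* Called once per decrypted line; splits key=value in place. */
    static void handle_line(char *line) {
        char *eq = strchr(line, '=');
        if (eq) { *eq = '\0'; printf("key=%s value=%s\n", line, eq + 1); }
    }

    int main(void) {
        static const uint32_t key[4] = {1, 2, 3, 4};  /* placeholder key */
        char line[MAX_LINE];
        size_t len = 0;
        uint32_t block[2];
        FILE *f = fopen("config.enc", "rb");          /* illustrative name */
        if (!f) return 1;

        while (fread(block, 1, 8, f) == 8) {          /* one cipher block */
            xtea_decipher(block, key);
            const char *p = (const char *)block;
            for (int i = 0; i < 8; i++) {
                if (p[i] == '\n' || len == MAX_LINE - 1) {
                    line[len] = '\0';                 /* overlong lines truncate */
                    handle_line(line);
                    len = 0;
                } else if (p[i] != '\r' && p[i] != '\0') {
                    line[len++] = p[i];
                }
            }
        }
        fclose(f);
        return 0;
    }
    ```

    Peak RAM here is one 64-byte line buffer plus one 8-byte block, which stays comfortably below the 128 kB available.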

  • Experience with Python's “PEP-302 New Import Hooks”

    - by Koichi Sasada
    I'm one of the developers of Ruby (CRuby). We are working on the Ruby 2.0 release (planned for release in February 2012). Python has "PEP 302: New Import Hooks" (2003): This PEP proposes to add a new set of import hooks that offer better customization of the Python import mechanism. Contrary to the current import hook, a new-style hook can be injected into the existing scheme, allowing for finer-grained control of how modules are found and how they are loaded. We are considering introducing a feature similar to PEP 302 into Ruby 2.0 (CRuby 2.0), and I want to make a proposal that can persuade Matz. Currently, CRuby can only load scripts from the file system in a standard way. If you have any experience with or thoughts on PEP 302, please share them. For example: "It's a great spec; no need to change it." "It is almost good, but it has this problem..." "If I could go back to 2003, I would change the spec to..." I'm sorry if such a question is not suitable here; I posted it here because I'm not sure I can ask it on python-dev (of course, that list is not for CRuby development). This post was moved from http://stackoverflow.com/questions/11188229/experience-of-pythons-pep-302-new-import-hooks.


  • Benchmarking CPU processing power

    - by Federico Zancan
    Although many computer benchmarking tools are already available, I'd like to write my own, starting with processing power measurement. I'd like to write it in C under Linux, but other language alternatives are welcome. I thought of starting with floating-point operations per second, but that is just one idea. I also thought it would be sensible to keep track of the number of CPU cores, the amount of RAM and the like, to more consistently associate results with the CPU architecture. How would you approach the task of measuring CPU computing power? And on top of that: I worry about the background workload induced by concurrently running services; is it correct to run the benchmark as a standalone process, possibly detached from the OS environment?
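
    A minimal sketch of the floating-point starting point, in C++ since the question allows alternatives to C; the iteration count and the two-FLOPs-per-iteration accounting are assumptions, and a serious benchmark would also pin the thread, control frequency scaling, and average several runs.

    ```cpp
    // Estimate single-core floating-point throughput with a timed multiply-add loop.
    #include <chrono>
    #include <cstdio>

    int main() {
        const long iterations = 200000000L;
        volatile double x = 1.000000001;   // volatile keeps the loop from being optimized away
        double acc = 0.0;

        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; ++i)
            acc += x * 1.0000001;          // one multiply + one add per iteration
        auto stop = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(stop - start).count();
        std::printf("%.3g FLOPS (%.3f s, acc=%f)\n", 2.0 * iterations / seconds, seconds, acc);
        return 0;
    }
    ```

    Printing acc matters: if the result is never used, the compiler is free to delete the whole loop, and the benchmark ends up measuring nothing.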


  • How to convince management to deal with technical debt?

    - by Desolate Planet
    This is a question that I often ask myself when working with developers. I've worked at four companies so far, and I've noticed a lack of attention to keeping code clean and dealing with technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than use something like MySQL, and that created hell for the team when refactoring or extending the app. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there, and it's horrible to see the impact it has on team morale and on their attitude towards others. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving, and the company becomes a revolving door with developers coming and going and making the code worse. How do you communicate this to management to get them interested in sorting out technical debt?


  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white-box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and manipulate. I'm just wondering if anyone using the GoogleTest library has done any white-box testing, and if they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?
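
    FRIEND_TEST is a real Google Test macro, declared in gtest/gtest_prod.h; the Tree below is only a stand-in for the poster's class, with toy logic. A minimal sketch:

    ```cpp
    #include "gtest/gtest.h"
    #include "gtest/gtest_prod.h"   // defines FRIEND_TEST

    class Tree {
     public:
        void Insert(int value) { ++size_; last_ = value; }   // toy stand-in logic

     private:
        struct Node { int value; Node *left, *right; };      // private internals
        int size_ = 0;
        int last_ = 0;

        FRIEND_TEST(TreeTest, InsertUpdatesPrivateState);    // one line per test
    };

    TEST(TreeTest, InsertUpdatesPrivateState) {
        Tree t;
        t.Insert(42);
        EXPECT_EQ(1, t.size_);    // friendship lets the test inspect private state
        EXPECT_EQ(42, t.last_);
    }

    int main(int argc, char **argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }
    ```

    If one FRIEND_TEST per test gets unwieldy, a common workaround is a single friend class TreeTestPeer; declaration, with a peer class that re-exposes the private pieces to every test, keeping the friend list at one line per tested class.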


  • How many questions is it appropriate to ask as an intern?

    - by Casey Patton
    So, I just started an internship, and I'm worried that I'm asking too many questions. I've been assigned a mentor who has been assigning me projects and helping me learn all the company's technologies and methodologies. However, there's so much new material for me to learn while doing this project that I have a lot of questions. I generally ask questions over instant message or e-mail (those are the primary modes of communication for my company). I'm trying to be careful not to ask too many questions: I don't want to come off as annoying or dumb. How many questions is it appropriate to ask? Once an hour? More? Less? Keep in mind, my mentor is also a fellow programmer who has his own responsibilities.


  • How can I justify software testing to management?

    - by Nate
    I work for a small company (less than 200 employees) whose software group only makes up a small part of our staff (4 employees, occasionally with a few contractors). The four of us have been making strides in transitioning to better practices, and one of the next logical steps is to improve our testing. As anyone who has done any meaningful tests knows, testing takes a lot of time - and at my company, it takes too much time to justify to management, so we generally do what little we do on the sly. I don't think this is serving us well, as we keep coming up against otherwise avoidable problems when we ship under-tested software. I would like to be able to come to management with a justification for hiring a dedicated software test engineer (someone who can both write automated tests and perform manual ones). Are there any good published studies that show the benefits of adding such a position to a small company? Where can I find information about costs associated with the position? I plan on doing a little number crunching on our own history, but having some external sources to point to would help bolster my case.


  • How can I separate the user interface from the business logic while still maintaining efficiency?

    - by Uri
    Let's say that I want to show a form that presents 10 different objects in a combobox. For example, I want the user to pick one hamburger from 10 different ones that contain tomatoes. Since I want to separate UI and logic, I'd have to pass the form a string representation of the hamburgers in order to display them in the combobox. Otherwise, the UI would have to dig into the objects' fields. Then the user would pick a hamburger from the combobox and submit it back to the controller. Now the controller would have to find that hamburger again based on the string representation used by the form (maybe an ID?). Isn't that incredibly inefficient? You already had the objects you wanted to pick one from. If you submitted the whole objects to the form, and it then returned a specific object, you wouldn't have to find it again later, since the form would already return a reference to that object. Moreover, if I'm wrong and you actually should send the whole object to the form, how can I isolate UI from logic?
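
    A minimal sketch of the ID approach, with a Burger type and plain function calls standing in for a real form (the names are illustrative): if the controller keys its collection by ID, "finding the object again" is an average O(1) hash lookup rather than a search, so the separation costs almost nothing.

    ```cpp
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Burger {
        int id;
        std::string name;
        bool has_tomato;
    };

    // The view only ever sees (id, label) pairs, never Burger internals.
    using ComboItem = std::pair<int, std::string>;

    int main() {
        std::unordered_map<int, Burger> burgers = {
            {1, {1, "Classic", true}},
            {2, {2, "BBQ", true}},
        };

        // Controller -> view: build the display list from the objects it already has.
        std::vector<ComboItem> items;
        for (const auto& [id, b] : burgers)
            items.emplace_back(id, b.name);

        // View -> controller: the form hands back only the chosen ID...
        int picked = items.front().first;

        // ...and the controller re-finds the object with one cheap lookup.
        const Burger& chosen = burgers.at(picked);
        (void)chosen;
        return 0;
    }
    ```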


  • Becoming a "maintenance developer"

    - by anon
    So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this. I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months. Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking that kind of support, I'm talking more of a second level support guy (when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of work place this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code, they are more or less just knowing the system architecture, working with making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work maintenance, these are basically just bug fixes/improving bad code, so this sucks as well, however at least it's related to coding. Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything, I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy that knows it's not that great of an opportunity. I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being, no point making a bad impression on anybody, but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it/how you guys would feel. Thanks guys.


  • MonoGame: reliable enough to be accepted on the iOS, Win 8 and Android stores?

    - by Serguei Fedorov
    I love XNA; it simplifies rendering code to the point where I don't have to deal with it, it runs on C#, and it has a fairly large community and documentation. I would love to be able to use it for games across many platforms. However, I am a little bit concerned about how well it will be received by platform owners; Apple has very tight rules about code bases, but Android does not. Microsoft's new Windows 8 platform seems to be pretty lenient, but I am not sure how they would respond to an XNA project being pushed to the app store (given that they suddenly decided to dump XNA and push developers towards C++/Direct3D). So the bottom line is: is it safe to invest time and energy into a project that runs on MonoGame? In the end, is it possible to see my game on multiple platforms and not be shot down with a useless product?


  • Visual Studio 2010 on MacBook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or the VS2012 RC) on a MacBook Air? I have the current model with 4GB RAM, a 13" screen, and a 256GB SSD. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done it, and: Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMware? I suspect that Boot Camp would be necessary to make the best use of the memory, but I am not sure it is strictly necessary; I'd prefer to use a VM, but wasn't sure if this was practical, and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt it applied more directly to the programming community.


  • Books, resources and so on about GUI architecture [on hold]

    - by Moses
    I'm taking my first steps in GUI programming. I've had a little experience with GUIs before, and I remember it being kind of painful: the code was either tightly coupled or too verbose, with tons of "Listeners". It seems to me that the problem is with me and not with the library I used (Swing). So, could you recommend some books, tutorials or other resources on how to design GUI programs? I emphasize that I'm interested in architecture, not in how to use the components of some framework (which is what about 90% of the tutorials I've seen cover).
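
    One of the architectures such books usually converge on is Model-View-Presenter. A minimal sketch in C++ (the class names are illustrative): the presenter owns the logic and talks to the view only through an interface, so listeners shrink to thin forwarding calls and the logic never touches a framework type such as Swing's.

    ```cpp
    #include <iostream>
    #include <string>

    // What the presenter needs from any view; no framework types appear here.
    class ICounterView {
     public:
        virtual ~ICounterView() = default;
        virtual void ShowCount(const std::string& text) = 0;
    };

    // All logic lives here and is testable with a fake view.
    class CounterPresenter {
     public:
        explicit CounterPresenter(ICounterView& view) : view_(view) {}
        void OnIncrementClicked() { view_.ShowCount("Count: " + std::to_string(++count_)); }
     private:
        ICounterView& view_;
        int count_ = 0;
    };

    // A "humble" view with no logic; a Swing/Qt view would forward its
    // button listener straight to the presenter in the same way.
    class ConsoleView : public ICounterView {
     public:
        void ShowCount(const std::string& text) override { std::cout << text << '\n'; }
    };

    int main() {
        ConsoleView view;
        CounterPresenter presenter(view);
        presenter.OnIncrementClicked();   // the real app wires this to a button
    }
    ```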


  • When is a 'core' library a bad idea?

    - by Alex Angas
    When developing software, I often have a centralised 'core' library containing handy code that can be shared and referenced by different projects. Examples: a set of functions to manipulate strings; commonly used regular expressions; common deployment code. However, some of my colleagues seem to be turning away from this approach. They have concerns such as the maintenance overhead of retesting code used by many projects once a bug is fixed. Now I'm reconsidering when I should be doing this. What are the issues that make using a 'core' library a bad idea?


  • Optimization ended up casting an object at each method call

    - by Aybe
    I've been doing some optimization on the following piece of code:

    ```csharp
    public void DrawLine(int x1, int y1, int x2, int y2, int color)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color);
    }
    ```

    After profiling, about 70% of the time was spent getting a context for drawing and disposing of it. I ended up sketching the following overload:

    ```csharp
    public void DrawLine(int x1, int y1, int x2, int y2, int color, BitmapContext bitmapContext)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, bitmapContext);
    }
    ```

    Up to here, no problems: all the user has to do is pass a context, and performance is really great, as a context is created/disposed only once (previously it was a thousand times per second). The next step was to make it generic, in the sense that it doesn't depend on a particular framework for rendering (besides .NET, obviously). So I wrote this method:

    ```csharp
    public void DrawLine(int x1, int y1, int x2, int y2, int color, IDisposable bitmapContext)
    {
        _bitmap.DrawLineBresenham(x1, y1, x2, y2, color, (BitmapContext)bitmapContext);
    }
    ```

    Now every time a line is drawn the generic context is cast, which was unexpected for me. Are there any approaches to fixing this design issue? Note: _bitmap is a WriteableBitmap from WPF; BitmapContext is from the WriteableBitmapEx library; DrawLineBresenham is an extension method from WriteableBitmapEx.


  • How does one handle an incorrect resource file?

    - by AedonEtLIRA
    I'm starting on the parser that will handle one of the key features of my app, and I'm realizing exactly how easy it would be for me to screw up a resource file that is provided to the application. For example, a simple resource that I provide to my app is a JSON file that contains an entity layout (name, fascia, location, etc.). It would be easy for me to leave out the name of the entity or misspell a JSON key. Obviously catastrophic failures during parsing are to be handled in a try/catch, but how would subtle failures (such as a dyslexic spelling of name) be handled?
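
    A hedged sketch of one approach: validate the required keys up front and report everything that is missing in one pass, so a misspelled key surfaces as a clear resource error instead of a mystery deep in the parser. The nlohmann/json C++ library is an assumed stand-in for whatever parser the app uses, and the key list is illustrative.

    ```cpp
    #include <iostream>
    #include <string>
    #include <vector>
    #include <nlohmann/json.hpp>

    // Collect every required key the entity is missing, rather than failing on the first.
    std::vector<std::string> MissingKeys(const nlohmann::json& entity) {
        static const std::vector<std::string> kRequired = {"name", "fascia", "location"};
        std::vector<std::string> missing;
        for (const auto& key : kRequired)
            if (!entity.contains(key)) missing.push_back(key);
        return missing;
    }

    int main() {
        const std::string raw = R"({"nmae": "door", "fascia": "wood"})";  // note the typo
        try {
            auto entity = nlohmann::json::parse(raw);
            for (const auto& key : MissingKeys(entity))
                std::cerr << "resource error: missing key '" << key << "'\n";
        } catch (const nlohmann::json::parse_error& e) {
            std::cerr << "catastrophic failure: " << e.what() << '\n';  // malformed file
        }
    }
    ```

    The same idea extends to optional keys with defaults, e.g. entity.value("location", "origin"), so a subtly broken file degrades loudly in the log instead of silently.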


  • Is it typical for a provider of a web services to also provide client libraries?

    - by HDave
    My company is building a corporate Java web app, and we are leaning towards using GWT-RPC as the client-server protocol for performance reasons. However, in the future we will need to provide an API for other enterprise systems to access our data as well. For this, we were thinking of a SOAP-based web service. In my experience it is common for commercial providers of enterprise web applications to provide client libraries (Java, .NET, C#, etc.). Is this generally the case? I ask because if so, then why bother using SOAP or REST or any standard web services protocol at all? Why not just create client libraries that communicate via GWT-RPC?


  • On github (gist) what is the star-button used for?

    - by user unknown
    On github::gist, what is the star button used for? For example here: https://gist.github.com/1372818 I have 3 buttons in the headline: edit, download, and (un)star, with the last one changing from star to unstar. My first idea was an upvote system. But I can star my own code, and I didn't notice a star appearing anywhere. Next idea: a favourite, a bookmark. But I can select 'starred gists', and that page came up even before I starred anything. Reading the Help didn't turn up anything about 'star', and the search box is for searching in code and in pages, not for searching the help pages of github::gist.


  • Should the main method consist only of object creation and method calls?

    - by crucified soul
    A friend of mine told me that the best practice is that the class containing the main method should be named Main and contain only the main method. Also, the main method should only parse inputs, create other objects, and call other methods; the Main class and main method shouldn't do anything else. Basically, what he is saying is that the class containing the main method should look like:

    ```java
    public class Main {
        public static void main(String[] args) {
            // parse inputs
            // create other objects
            // call methods
        }
    }
    ```

    Is this the best practice?


  • Rationale behind freeware projects

    - by VexXtreme
    I've seen some freeware projects in the past where the author(s) invested a significant amount of their personal time and resources and never even considered charging for the software. A lot of these projects were donation based, and from what I've heard, donationware can never be a viable business model (even to simply support development costs) because most people choose not to donate if given an option. A lot of these projects eventually shut down because their authors could not sustain them further. Granted, some people simply like making the community happy (or something), but if you're struggling to keep your project alive, why not charge some small amount such as $10 simply to stay operational? If people find your software useful (and a lot of people found those projects VERY useful) they won't have a problem paying such a small amount. The question is: if you have a popular app that people like and download in great numbers, why not put a price tag on it? Why do it for free?


  • Developing a PHP system that tracks other websites' analytics

    - by CodeCrack
    I want to develop a PHP website feature where users sign up, get a JavaScript snippet that displays an image on their site, and let me track the number of visitors, unique hits, clicks and average visitor duration on their page. Is that something that should be done with some open-source analytics software such as http://piwik.org/, or is it pretty doable on your own? If I had to do it myself from scratch, I would use an image/pixel to track the visit, drop a cookie with the JavaScript snippet to track uniques, track clicks based on image clicks and redirects, and I'm not sure about the bounce rate. Any thoughts or opinions are welcome.


  • Explanation of the definition of interface inheritance as described in the GoF book

    - by Geek
    I am reading the first chapter of the GoF book. Section 1.6 discusses class versus interface inheritance: It's important to understand the difference between an object's class and its type. An object's class defines how the object is implemented. The class defines the object's internal state and the implementation of its operations. In contrast, an object's type only refers to its interface, the set of requests on which it can respond. An object can have many types, and objects of different classes can have the same type. Of course, there's a close relationship between class and type. Because a class defines the operations an object can perform, it also defines the object's type. When we say that an object is an instance of a class, we imply that the object supports the interface defined by the class. Languages like C++ and Eiffel use classes to specify both an object's type and its implementation. Smalltalk programs do not declare the types of variables; consequently, the compiler does not check that the types of objects assigned to a variable are subtypes of the variable's type. Sending a message requires checking that the class of the receiver implements the message, but it doesn't require checking that the receiver is an instance of a particular class. It's also important to understand the difference between class inheritance and interface inheritance (or subtyping). Class inheritance defines an object's implementation in terms of another object's implementation. In short, it's a mechanism for code and representation sharing. In contrast, interface inheritance (or subtyping) describes when an object can be used in place of another. I am familiar with Java and JavaScript and not really familiar with C++, Smalltalk or Eiffel as mentioned here, so I am trying to map the concepts discussed here to Java's way of doing classes, inheritance and interfaces. This is how I think of these concepts in Java: in Java a class is always a blueprint for the objects it produces, and what interface (as in "the set of all possible requests that the object can respond to") an object of that class possesses is defined at compilation time, because the class of the object will have implemented those interfaces. The requests that an object of that class can respond to are the set of all the methods in the class (including those implemented for the interfaces that the class implements). My specific questions are: Am I right in saying that Java's way is more similar to C++ as described in the third paragraph? I do not understand what is meant by interface inheritance in the last paragraph. In Java, interface inheritance is one interface extending another interface, but I think the word interface has some other, overloaded meaning here. Can someone provide an example in Java of what is meant by interface inheritance here so that I understand it better?
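
    Since the book's terms are C++-centric, here is a small sketch in C++ (the types are illustrative) of the two kinds of inheritance it contrasts. In Java terms: deriving from the pure-virtual Shape is what implements does (interface inheritance, i.e. subtyping), while deriving from Circle to reuse its code is what extends does (class inheritance).

    ```cpp
    #include <iostream>

    // Interface inheritance (subtyping): Shape only specifies the requests an
    // object can answer; it shares no implementation at all.
    class Shape {
     public:
        virtual ~Shape() = default;
        virtual double Area() const = 0;
    };

    class Circle : public Shape {
     public:
        explicit Circle(double r) : r_(r) {}
        double Area() const override { return 3.14159265 * r_ * r_; }
     private:
        double r_;
    };

    // Class inheritance (implementation reuse): LoggingCircle inherits Circle's
    // representation and code, not just its type.
    class LoggingCircle : public Circle {
     public:
        using Circle::Circle;
        double Area() const override {
            std::cout << "Area() called\n";
            return Circle::Area();       // reuses the inherited implementation
        }
    };

    // Programs to the type: any object supporting the Shape interface works here.
    void Print(const Shape& s) { std::cout << s.Area() << '\n'; }

    int main() {
        LoggingCircle c(2.0);
        Print(c);   // usable wherever a Shape is expected
    }
    ```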


  • How to explain to a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question, "Why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone" and why it takes an average of 15 minutes to achieve productivity when participating in complex, software development related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they took from your time. Obviously you can't explain it by sentences like "I have to juggle a lot of variable names in my short term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that will make them clearly understand - without being offensive, elitist or too technical.


  • Is there a SUPPORTED way to run .NET 4.0 applications natively on a Mac?

    - by Dan
    What, if any, are the Microsoft-supported options for running C#/.NET 4.0 code natively on the Mac? Yes, I know about Mono, but among other things, it lags Microsoft. And Silverlight only works in a web browser. A VMware-type solution won't cut it either. Here's the subjective part (which might get this closed): is there any semi-authoritative answer to why Microsoft just doesn't support .NET on the Mac itself? It would seem they could build on Silverlight and/or buy Mono and quickly be there. No need for native Visual Studio; cross-compiling and remote debugging are fine. The reason I ask is that where I work there is a growing amount of uncertainty about the future, which is causing a lot more development to be done in C++ instead of C#; brand-new projects are choosing C++. Nobody wants to tell management 18–24 months from now "sorry" should the Mac (or iPad) become a requirement. C++ is seen as the safer option, even if it (arguably) means a loss in productivity today.



  • How do you end up with event sourcing if you use an xDD approach?

    - by Tomas Jansson
    When working in a TDD or BDD manner, your unit tests are supposed to drive your design. But how do you end up with event sourcing using xDD techniques? As I see it, event sourcing is something you need to adopt early on to take full advantage of it. Let's say that you start without event sourcing and do a release. Later on, when you are releasing version 2.0, you realize that it would be great to use event sourcing, but at that point you have already missed all the events from version 1.0, which makes it much harder to implement. Or do you take some kind of backup of your db from before event sourcing, use that as a baseline, and then add event sourcing on top of it?
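
    A minimal sketch of the "baseline" idea from the last sentence: the pre-2.0 database state is folded in once as the first event, and everything afterwards is appended and replayed. The types are illustrative, in C++ only for concreteness.

    ```cpp
    #include <iostream>
    #include <numeric>
    #include <vector>

    struct Event {
        enum class Kind { BaselineSnapshot, Deposited, Withdrawn } kind;
        long amount;   // full pre-event-sourcing balance for the snapshot, delta otherwise
    };

    // Current state is a fold over the event log.
    long Replay(const std::vector<Event>& log) {
        return std::accumulate(log.begin(), log.end(), 0L,
            [](long balance, const Event& e) {
                switch (e.kind) {
                    case Event::Kind::BaselineSnapshot: return e.amount;
                    case Event::Kind::Deposited:        return balance + e.amount;
                    case Event::Kind::Withdrawn:        return balance - e.amount;
                }
                return balance;
            });
    }

    int main() {
        std::vector<Event> log = {
            {Event::Kind::BaselineSnapshot, 1000},   // the v1.0 database, captured once
            {Event::Kind::Deposited, 250},           // every v2.0 change is an event
            {Event::Kind::Withdrawn, 100},
        };
        std::cout << "balance: " << Replay(log) << '\n';   // prints 1150
    }
    ```

    The events from version 1.0 are gone for good, but with the snapshot as the first entry the log still replays to a correct current state, which is usually all the later features need.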

