
  • Is WCF suitable for writing an application which is shared among applications?

    - by RPK
    I have developed and deployed a few ASP.NET applications. Sometimes I want to stop users from inserting or updating a record when: maintenance is going on, or operations must be stopped due to a payment delay. In one of my recent applications I implemented this feature by first checking whether database operations are in a locked status. If either of the above conditions is met, database operations like insert and update are not carried out. I now need this feature in all the old applications and in the future applications I build. I want to know whether WCF is suitable here, as I want to share these methods, or an independent locking application, among various other applications. Is WCF appropriate for this type of scenario?

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In versus building it Inside Out using TDD? These are the books I have read about TDD and unit testing: Test Driven Development: By Example; Test-Driven Development: A Practical Guide; Real-World Solutions for Developing High-Quality PHP Frameworks and Applications; Test-Driven Development in Microsoft .NET; xUnit Test Patterns: Refactoring Test Code; The Art of Unit Testing: With Examples in .NET; and Growing Object-Oriented Software, Guided by Tests (this one was really hard to understand, since Java isn't my primary language). Almost all of them explain TDD basics and unit testing in general, but with little mention of the different ways an application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed how people approach TDD: there are two schools out there, Outside In versus Inside Out. Sadly, the book doesn't elaborate further on this point. I wish to know the main difference between these two approaches. When should I use each one? To a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Are there any materials out there that discuss this topic specifically?

  • Productivity vs Security [closed]

    - by nerijus
    Really, I do not know if this is the right place to ask such a question, but it is about programming, in a different light. So: I'm currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. Let me give you an example: the company VPN is configured so that if you have VPN, then HTTP traffic is banned. Bearing this in mind, can you imagine my workflow? Morning. OK, time to get the latest source. Oops, no VPN. Let's connect. Click-click. 3 sec. wait time. OK, getting source. Do I have emails? Oops. VPN is on, can't check my emails. Need to wait for the source to come up. Finally, here it is! OK, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let's track it down. It is in TFS already. Oh, damn, I need VPN. Click-click. OK, there is the description. Yeah, I have seen this issue on stackoverflow.com. Let's go there. Oops, no internet. Click-click. No internet. What? ipconfig... The DHCP server kicked me out. Damn. Renew IP. 1..2..3. OK, internet is back. Google: site:stackoverflow.com. 3 min. I have the solution. Great, I love stackoverflow.com. Don't want to remember the days when there was no stackoverflow.com. OK. Copy-paste this link to the studio. Damn, the studio has stalled, can't reach files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let's see what the other comments about the issue on stackoverflow.com say. Hmm, there is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened and closed, my internet goes down solid. The only way to get internet back is to reboot. All my browser tabs/SQL windows/studio will be gone. This happened just now, while I was typing this. Back to the issue I am solving right now: I am getting frustrated - I no longer care about the better solution for this issue. Let's do it somehow and forget. This click-click barrier between the internet and TFS kills me... Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I am not allowed to make changes. I am very, very lucky to have admin privileges on my machine. Most developers don't. So I just learned to live with this frustration. It takes away 40-60 minutes daily. I tried emailing company support and the admins. They are too important and too busy with something, so my little man's problem was simply ignored. Politely ignored. The question is: is this normal in the corporate world? (I have been in the States, Canada, Germany. Never seen this.)

  • Difference between pseudocode and an algorithm?

    - by Vamsi Emani
    Technically, is there a difference between these two words, or can we use them interchangeably? Both of them more or less describe the logical sequence of steps to follow in solving a problem, don't they? So why do we actually have two such words if they are meant to describe the same thing? Or, in case they aren't synonymous, what is it that differentiates them? In what contexts are we supposed to use the word "pseudocode" versus the word "algorithm"? Thanks.
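
    One way to draw the line: the algorithm is the abstract procedure itself, pseudocode is just one informal notation for writing that procedure down, and a program is one concrete implementation of it. As a sketch, here is Euclid's gcd algorithm rendered both ways (pseudocode in the comment, C below it); any other language would implement the same algorithm equally well:

        #include <stdio.h>

        /* Euclid's algorithm - the abstract procedure. One pseudocode
         * rendering of it:
         *
         *     while b is not zero:
         *         a, b <- b, a mod b
         *     return a
         *
         * The function below is one concrete C implementation of that
         * same algorithm.
         */
        static unsigned gcd(unsigned a, unsigned b)
        {
            while (b != 0) {
                unsigned r = a % b;
                a = b;
                b = r;
            }
            return a;
        }

        int main(void)
        {
            printf("gcd(12, 18) = %u\n", gcd(12, 18));  /* prints 6 */
            return 0;
        }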

  • Why is there a perception that VB.NET is good for small to medium size applications but not for enterprise-class projects?

    - by Gens
    I love VB.NET very much. Coding VB.NET with Visual Studio is just like typing messages: smooth, fast and simple. Any error is flagged instantly. The OO capability of VB.NET is good enough. But often, in any .NET languages discussion, there is a perception that VB.NET is good for small to medium size applications but not for large-scale projects. Why is there such a perception? Or am I missing something regarding VB.NET?

  • A good interpreted language for a small embedded project

    - by Earlz
    I have an mbed, which has a small ARM Cortex M3 on it. Basically, my effective resources for the project are ~25KB of RAM and ~400KB of flash. For I/O I'll have a PS/2 keyboard, a VGA framebuffer (with character output), and an SD card for saving/loading programs (up to a couple of MB, maybe). The reason I ask this here is because I'm trying to figure out what programming language to implement on the thing. I'm looking for an interpreted language that's easy for me to implement and won't break the bank on my resources. I also intend for it to be at least possible to write programs on the device itself, though the editor can be interpreted (yay bootstrapping). Anyway, I've looked at a few simple languages. Some nice candidates: Forth, BASIC, Scheme? Has anyone done something like this, or does anyone know of any languages that fit this bill, or have comments about my three candidates so far?
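
    For a sense of how little code the inner loop of such an interpreter needs, here is a minimal token-threaded sketch in C; the opcode set, the fixed 32-slot stack and the byte-sized literals are assumptions of this sketch, not taken from Forth, BASIC or Scheme:

        #include <stdint.h>
        #include <stdio.h>

        /* Each program byte is an opcode; literals follow inline. The
           stack is fixed-size, so RAM use is known up front - important
           when the whole budget is ~25KB. */
        enum { OP_LIT, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

        #define STACK_DEPTH 32   /* assumption: plenty for a demo */

        static void run(const uint8_t *pc)
        {
            int32_t stack[STACK_DEPTH];
            int sp = 0;                          /* next free slot */

            for (;;) {
                switch (*pc++) {
                case OP_LIT:   stack[sp++] = (int8_t)*pc++;        break;
                case OP_ADD:   sp--; stack[sp - 1] += stack[sp];   break;
                case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];   break;
                case OP_PRINT: printf("%ld\n", (long)stack[--sp]); break;
                case OP_HALT:  return;
                }
            }
        }

        int main(void)
        {
            /* (2 + 3) * 4 -> prints 20 */
            const uint8_t prog[] = { OP_LIT, 2, OP_LIT, 3, OP_ADD,
                                     OP_LIT, 4, OP_MUL, OP_PRINT, OP_HALT };
            run(prog);
            return 0;
        }

    A real Forth adds little more than a dictionary and a return stack on top of a loop like this, which is part of why it keeps coming up for machines with this kind of RAM budget.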

  • How do developers get rid of silly requirements?

    - by sugar
    Hmm! First of all, let me give you the note of the requirement, so that you can get an idea of the kind of problem I am facing. Words from the project manager: "Hey Sugar, I am assigning you the task of developing a framework. This framework is to be developed for all iOS applications. Please go through the brief of the required framework:

    - It should be able to detect the thickness of my thumb.
    - It should be able to detect whether the user is using a thumb or fingers.
    - If the user is using thumbs/fingers, the framework should calculate the size of the thumb/fingers.
    - Once the size has been calculated, all elements of the user interface should be arranged and resized automatically. (Not specified how and where; as it's a framework, it should be smart enough to arrange them automatically.)
    - If the thumb size is larger, elements should be arranged near the center area of the iPad/iPhone.
    - If the thumb size is smaller, elements should be arranged near the corners of the iPad/iPhone.
    - If the thumb size is larger, the fonts of all elements should get smaller (assuming a larger thumb = an aged person).
    - If the thumb size is smaller, the fonts of all elements should get larger (assuming a smaller thumb = a younger person).

    Summary: this framework is required for creating user-friendly user interfaces programmatically. We need to develop a very developer-friendly framework, built in such a way that we can use it in as many projects as needed." Well, I am a developer. What I want to have as an answer is as follows: How do I tell them that their way of thinking is a bit ridiculous? How do I explain to them that we could better concentrate on developing actual projects? How do I convince them that this kind of thing, even if possible, is not something anyone would recommend developing? How do I say NO to this politely, gently and respectfully? What should I say so that they cannot point at my experience (e.g. "you are a guy with 3 years of experience and you must have the ability to develop such things")? Feeling horror. Please help. Thanks in advance, Sugar. Note: please help me tag this question properly. I am stuck, and this is a real situation. Frustrated and tense. You guys might have faced such requirements from the top level; I'm requesting your help, from your experience. Well, I came to know that those top-level guys don't have any idea of iPad, iPhone, Apple etc. I would do one thing: "Sir, before we go further with framework development, it is strongly recommended that we read the Apple Human Interface Guidelines."

  • Is there a constant for "end of time"?

    - by Nick Rosencrantz
    For some systems, the time value 9999-12-31 is used as the "end of time", the end of the time that the computer can calculate. But what if it changes? Wouldn't it be better to define this time as a built-in constant? In C and other programming languages there is usually a constant such as INT_MAX (or similar) to get the largest value an integer can have. Why is there no similar MAX_TIME constant, i.e. a constant set to the "end of time", which for many systems is 9999-12-31? To avoid the problem of hardcoding a wrong year (9999), couldn't these systems introduce a constant for the "end of time"?
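
    Some environments do expose such a constant: .NET's DateTime.MaxValue is 9999-12-31, and Java's LocalDate.MAX is even later. Portable C, however, has INT_MAX but no TIME_MAX. A minimal sketch of the application-level workaround the question hints at (the year 9999 sentinel is the question's convention, not anything standardized):

        #include <stdio.h>
        #include <time.h>

        /* Application-defined "end of time" sentinel. The year 9999 is
           an assumption copied from the question: C has INT_MAX in
           <limits.h> but no portable TIME_MAX for time_t. */
        static const struct tm END_OF_TIME_TM = {
            .tm_year = 9999 - 1900,  /* tm_year counts from 1900 */
            .tm_mon  = 11,           /* December */
            .tm_mday = 31
        };

        int main(void)
        {
            struct tm eot = END_OF_TIME_TM;  /* mktime may normalize fields */
            time_t end_of_time = mktime(&eot);

            if (end_of_time == (time_t)-1) {
                /* With a 32-bit time_t, 9999-12-31 is not representable,
                   which is exactly the fragility the question points at. */
                puts("end-of-time sentinel does not fit in time_t here");
            } else {
                printf("sentinel as time_t: %lld\n", (long long)end_of_time);
            }
            return 0;
        }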

  • Program error trying to generate Outlook 2013 email from Visual Basic 2010 [on hold]

    - by Dewayne Pinion
    I am using VB to send emails through Outlook. Currently we have a mix of Outlook versions at our office: 2010 and 2013, with a mix of 32-bit and 64-bit (a mess, I know). The code I have works well for Outlook 2010:

        Private Sub btnEmail_Click(sender As System.Object, e As System.EventArgs) Handles btnEmail.Click
            CreateMailItem()
        End Sub

        Private Sub CreateMailItem()
            Dim application As New Application
            Dim mailItem As Microsoft.Office.Interop.Outlook.MailItem = _
                CType(application.CreateItem(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem), _
                      Microsoft.Office.Interop.Outlook.MailItem)
            'Me.a(Microsoft.Office.Interop.Outlook.OlItemType.olMailItem)
            mailItem.Subject = "This is the subject"
            mailItem.To = "[email protected]"
            mailItem.Body = "This is the message."
            mailItem.Importance = Microsoft.Office.Interop.Outlook.OlImportance.olImportanceLow
            mailItem.Display(True)
        End Sub

    However, I cannot get this to work for 2013. I have referenced the version 15 DLL for 2013, and it seems to be backward compatible, but when I try to use the above code with 2013 (it is 64-bit), it says it cannot start Microsoft Outlook: "A program error has occurred." This happens on the Dim application statement line. I have tried googling around, but there doesn't seem to be much out there referencing 2013; I feel the problem here probably has more to do with 64-bit than with the software version. Thank you for any suggestions!

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally, in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is: what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template, which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down, or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with the series of variants that fits that set of files. This is because, more often than not, most of the file will remain consistent over a long period, marred only by one or two errant sections, but within that period, which section is inconsistent is itself inconsistent. For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three fields, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for the different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members; etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either, to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
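
    That range-to-variant matching can be as simple as a sorted lookup table. A minimal sketch of the idea, shown here in C for concreteness (the questioner's stack is Python/MongoDB, and every range and variant id below is invented from the example in the question):

        #include <stdio.h>

        /* Hypothetical variant ids per section; the ranges mirror the
           example in the question (files 7000-8000) but are otherwise
           made up. */
        typedef struct {
            int first_file, last_file;   /* inclusive file-number range */
            int variant[4];              /* variant id for sections 1..4 */
        } RangeMapping;

        static const RangeMapping mappings[] = {
            /* files          s1 s2 s3 s4 */
            { 7000, 7250, { 0, 0, 0, 1 } },  /* section 4 shifted one row down */
            { 7251, 7500, { 0, 0, 0, 1 } },  /* plus a new section 5 (not shown) */
            { 7501, 7635, { 0, 1, 0, 1 } },  /* section 2 grows to five fields */
            { 7636, 8000, { 0, 0, 0, 0 } },  /* everything back in order */
        };

        /* Per-section variant ids for a given file number, or NULL. */
        static const int *variants_for(int file_no)
        {
            for (size_t i = 0; i < sizeof mappings / sizeof mappings[0]; i++)
                if (file_no >= mappings[i].first_file &&
                    file_no <= mappings[i].last_file)
                    return mappings[i].variant;
            return NULL;
        }

        int main(void)
        {
            const int *v = variants_for(7300);
            if (v)
                printf("file 7300: section 4 uses variant %d\n", v[3]);
            return 0;
        }

    In Python the same table collapses to a list of ((first, last), {section: variant}) entries consulted once per file before extraction.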

  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (I'm lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments would happen every few months without being on a set schedule, as things were added to or removed from each developer's task list. Recently, our CIO (semi-technical, but not really a programmer) decided we will do deployments every three weeks; because of this, I instantly thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else to do) would go a long way towards helping manage expectations properly, so there isn't this far-fetched idea that any new feature will be done in three weeks. IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (among other things, like no CI or automated tests whatsoever). We don't even use waterfall; we use "tell developer X to do a task, expect him to do everything and get it done". Are there any pointers that would help me start to ease us towards an iterative approach and A) get the other developers on board with it and B) get management to understand how iterative development works? So far my idea involves setting up a CI server and getting our build process automated (it takes about 10-20 minutes right now simply to build the application and put it on our development server), since pushing tests and/or TDD will be met with a LOT of resistance at this point, and constantly forcing us to break larger projects into smaller chunks that can be done iteratively in a three-week cycle. My only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software (depending on the project scope, you might have "working" software after three weeks, but not enough of it works to let users make use of it), while I think the expectation here from management is that there will always be something "ready to go" in three weeks, and that disconnect could cause problems. On that note, is there any literature that explains the agile/iterative approach from a business standpoint? Everything I've seen focuses only on the developers and how to do it, but nothing seems to describe it from the perspective of actually getting buy-in from the businesspeople.

  • Is monkeypatching considered good programming practice?

    - by vartec
    I've been under the impression that monkeypatching is more in the quick-and-dirty-hack category than standard, good programming practice. While I'd used it from time to time to fix minor issues with third-party libs, I considered it a temporary fix and would submit a proper patch to the third-party project. However, I've seen this technique used as "the normal way" in mainstream projects, for example in gevent's gevent.monkey module. Has monkeypatching become a mainstream, normal, acceptable programming practice? See also: "Monkeypatching For Humans" by Jeff Atwood

  • Releasing an open source project without getting embarrassed

    - by Hopeful
    I've been working by myself on a fairly large open source project for quite a while, and it's nearing the point where I'd like to release it. However, I'm self-taught, and I don't really know anyone who could adequately review my project. A few years ago I released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I released it. Even though the code worked, the criticism was accurate but brutal. It prompted me to begin searching for best practices for everything, and in the end I feel that it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people, and I feel like I've done some cool things with it in interesting ways. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. I think my two biggest fears with releasing this project, which I've poured countless hours into, are being absolutely embarrassed because I missed some patently obvious things due to my self-education or, worse, releasing it to the sound of crickets. Is there anyone who has been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on Stack Exchange, but it's not really set up for large projects, and I didn't feel like the community there is large enough yet to give good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devastated in the process?

  • Inheritance, commands and event sourcing

    - by Arthis
    In order not to redo things several times, I wanted to factor out common stuff. For instance, let's say we have a cow and a horse. The cow produces milk, the horse runs fast, but both eat grass.

        public class Herbivorous
        {
            public void EatGrass(int quantity)
            {
                var evt = Build.GrassEaten.WithQuantity(quantity);
                RaiseEvent(evt);
            }
        }

        public class Horse : Herbivorous
        {
            public void RunFast()
            {
                var evt = Build.FastRun;
                RaiseEvent(evt);
            }
        }

        public class Cow : Herbivorous
        {
            public void ProduceMilk()
            {
                var evt = Build.MilkProduced;
                RaiseEvent(evt);
            }
        }

    To eat grass, the command handler should be:

        public class EatGrassHandler : CommandHandler<EatGrass>
        {
            public override CommandValidation Execute(EatGrass cmd)
            {
                Contract.Requires<ArgumentNullException>(cmd != null);

                var herbivorous = EventRepository.GetById<Herbivorous>(cmd.Id);
                if (herbivorous.IsNull())
                    throw new AggregateRootInstanceNotFoundException();

                herbivorous.EatGrass(cmd.Quantity);
                EventRepository.Save(herbivorous, cmd.CommitId);
            }
        }

    So far so good. I get a Herbivorous object and have access to its EatGrass function; whether it is a horse or a cow doesn't really matter. The only problem is here: EventRepository.GetById<Herbivorous>(cmd.Id). Indeed, let's imagine we have a cow that has produced milk during the morning and now wants to eat grass. The EventRepository contains a MilkProduced event, and then comes the EatGrass command. In the command handler we are no longer in the presence of a cow, and the Herbivorous doesn't know anything about producing milk. What should it do? Ignore the event and continue, thus allowing the inheritance and "general" commands? Or throw an exception to forbid execution, which would mean that only CowEatGrass and HorseEatGrass could exist as commands? Thanks for your help. I am just beginning with these kinds of problems, and I would be glad to hear from someone more experienced.

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise: I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is done on the .NET or Java stacks, I have also recently been delving into Node.js. This new stack has got me thinking: I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind the best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure, and in order to do that, I feel that I should learn to think like a malevolent hacker. What motivates a malevolent hacker: what is their prime mover? What is it that they are most after? Ultimately, the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc. As an extension of question #1, but more specific: what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here: this is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker, regardless of our individual judgement. What are some heuristics followed to accomplish hacker goals? Ultimately, specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for low-hanging fruit such as HTTP spoofing", or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there", or possibly "A hacker will try to attack a site via browser Foo first, as it is known for the Bar vulnerability." What are the most common hacks employed when following the common heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc. Disclaimer: I am not a hacker, nor am I judging hackers (heck, I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.

  • Business Analyst role in development process

    - by Ryan
    I work as a business analyst, and I currently oversee much of the development effort on an internal project. I'm responsible for the requirements, specs, and overall testing. I work closely with the developers (onshore and offshore). The offshore team produces all of the reports. Version 1.0 had a 9-month development cycle, and I had about 4-5 months to test all the reports. There was the usual back and forth to get the implementation right. Version 2.0 had a much shorter development cycle (3 months). I received the first version of the reports about 3 weeks ago and noticed a lot of things wrong with them. Many of the requirements were implemented incorrectly, and the performance of the queries was horrendous, at 5x-6x longer than it should have been. The onshore lead developer was out and did not supervise the offshore development team in generating the reports. Without consulting management, I took a look at the SQL in the reports and was able to improve performance greatly (by a factor of 6x), which is acceptable for this version. I sent the updated queries as guidelines to the offshore team and told them they should look at doing X instead of Y to improve performance, and also to fix some specific logic issues. I then spoke to my managers about this, because it doesn't feel right that I was developing SQL queries, but given our time crunch I saw no other way. We were able to fix the issue quite fast, which I'm happy with. Current situation: the onshore managers aren't too pleased that the offshore team did not code for performance. I know there are some things I could have done better throughout this process, and I do not in any way consider myself a programmer. My question is: if an offshore team that works apart from the onshore project resources fails to deliver an acceptable release, is it appropriate to clean up their work to meet a deadline? What kind of problems could this create in the future?

  • Questions to ask the interviewer in an interview

    - by chota
    Hello all, I have an SDET interview coming up next week, and I have been preparing for a long time. It is a good company. I have been working as an SDET for two years. I wonder what questions I should ask my interviewer regarding testing and other things, and I would appreciate your help with some sample questions to ask during the interview. Some I have thought of are: a) What type of testing methodologies do you use? b) Do you have a triage meeting every day? c) What percentage of code coverage is achieved by unit tests? I do not find these questions very effective; I would appreciate it if somebody could help me come up with better ones.

  • What am I allowed to do programmatically with pictures that have a Creative Commons "don't modify" license

    - by nist
    I'm working on a project that uses some icons that are under a Creative Commons license (ND) that forbids modification of the picture. What can I do with these icons as a programmer? Can I modify the look of the image in the program, as long as I don't change anything in the file that contains the icon? Have I modified the image if I put a colored transparent layer over it, so that the color of the icon changes?

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group test of questions, from basic questions such as PK and FK constraints and simple queries (FizzBuzz style) to more advanced things such as indexes, collation, isolation levels and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that all DBAs should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower-running queries (someone the devs can go to for help) and trace some of the deadlock issues that we are having. Any help would be most appreciated!

  • Returning null vs Throwing exceptions

    - by Svish
    I'm in a bit of a disagreement with a more experienced developer on this issue, and was wondering what you guys here think about it. The environment is Java, EJB 3, services, etc. The code I wrote calls a service to get things and to create things. The problem was that I got null pointer exceptions in places that didn't make sense. For example, when I asked the service to create an object, I got null back. And when I tried to look up an object with an id I knew existed, I still got null back. It was like it was ignoring me. I spent some time trying to figure out what was wrong in my code (since I'm less experienced, I usually assume I have messed up). It turns out the reason was security. If the user principal using my service didn't have the right permissions to use the service I called from my service, then that service simply returned null. The services that are already here are usually not documented either, so this is just something you have to know... somehow... So here is the thing: I find this rather confusing as a developer interacting with the service. To me it would make much more sense if that service threw an exception telling me: hey, you don't have the proper permissions to get info about this thing or to create this new thing. I would then immediately know why my service wasn't working as expected. However, he argued that asking is not wrong. Exceptions should only be thrown when there is an error, and asking for a thing is not an error, even if you don't have permission to "see" the thing you asked for. The things are often looked up in a GUI by users, and for those users without the right permissions, these things simply "do not exist". So, in short: asking is not wrong, hence no exception. Get methods return null because for those users those things "don't exist". Create methods return null because nothing was created, since the user wasn't allowed to create anything. So, what do you guys think? Is this normal and/or good practice? I prefer exceptions, as I find it much easier to know what's going on when exceptions are thrown and caught. So I would, for example, also prefer to throw a NotFoundException if you asked for an id that didn't exist, rather than returning null. Anyway, just curious what others think about this, as I'm not the most experienced developer yet.
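
    The same design tension can be sketched in any language. Here is a minimal sketch in C (the question's actual stack is Java/EJB; the function and status names below are invented for illustration), contrasting a bare NULL return, which conflates "not found" with "not permitted", against a return value that keeps the two reasons apart, the role the exceptions would play in the Java version:

        #include <stdio.h>

        typedef struct { int id; const char *name; } Thing;

        typedef enum { LOOKUP_OK, LOOKUP_NOT_FOUND, LOOKUP_FORBIDDEN } LookupStatus;

        /* Style A: return NULL on any failure. The caller cannot tell a
           missing record from a permission problem - the ambiguity the
           question runs into. */
        static const Thing *find_thing_nullable(int id, int caller_may_read)
        {
            static const Thing demo = { 42, "demo" };
            if (!caller_may_read) return NULL;
            return (id == demo.id) ? &demo : NULL;
        }

        /* Style B: report the reason explicitly, analogous to throwing a
           NotFoundException vs. a security exception. */
        static LookupStatus find_thing(int id, int caller_may_read,
                                       const Thing **out)
        {
            static const Thing demo = { 42, "demo" };
            *out = NULL;
            if (!caller_may_read) return LOOKUP_FORBIDDEN;
            if (id != demo.id)    return LOOKUP_NOT_FOUND;
            *out = &demo;
            return LOOKUP_OK;
        }

        int main(void)
        {
            const Thing *t;
            if (find_thing_nullable(42, 0) == NULL)
                puts("style A: got NULL - but why?");
            switch (find_thing(42, 0, &t)) {
            case LOOKUP_OK:        puts("style B: found it");          break;
            case LOOKUP_NOT_FOUND: puts("style B: no such id");        break;
            case LOOKUP_FORBIDDEN: puts("style B: permission denied"); break;
            }
            return 0;
        }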

  • How to be a responsible early adopter?

    - by Gaurav
    A disconcertingly high number of new technologies/paradigms are overhyped (as is evident from replies to this question). Combine this with the fact that being an early adopter is inherently risky, and evaluating a technology before adopting it becomes critical. So how exactly do the early adopters among you go about it? I can think of the following general criteria:

    1. For early adoption, the technology must address a previously unaddressed issue; and/or
    2. for acceptance beyond early adoption, the technology should address performance.
    3. One indicator that something is not right is when there is too much jargon around a technology which is either irrelevant or has no evidence to back it up.

    What is your opinion?

    Update: 4. By the way, the biggest reason to fear a technology is when it is imposed by tech-unaware management, based entirely on the number of buzzwords.

  • Can the Clojure set and map syntax be added to other Lisp dialects?

    - by Cedric Martin
    In addition to creating lists using parentheses, Clojure allows you to create vectors using [ ], maps using { } and sets using #{ }. Lisp is always said to be a very extensible language in which you can easily create DSLs etc. But is Lisp so extensible that you could take any Lisp dialect and relatively easily add support for Clojure's vectors, maps and sets (which are all functions in Clojure)? I'm not necessarily asking about cons or similar actually working on these structures: what I'd like to know is whether other dialects could be modified so that the source code would look like Clojure source code (that is: using matching [ ], { } and #{ } in addition to ( )). Note that if it cannot be done, this is not a criticism of Lisp: what I'd like to know is, technically, what would have to be done, or what cannot be done, if one were to add such a thing.

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string on each instance of a given object. The string will receive about two appends per minute and will be retrieved only infrequently. The worst-case scenario is a 5-10MB string. I'll have thousands of instances of my object, and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In that case, which one? Any other thoughts?
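
    For a sense of scale: thousands of objects at two appends per minute each is at most a few hundred writes per second, which plain append-only files handle comfortably, because appending never rewrites the data already on disk. A minimal sketch of that baseline in C (the strings/<id>.log path scheme is an assumption of this sketch):

        #include <stdio.h>

        /* Append one chunk to a per-object log file. Opening in "a" mode
           seeks to the end regardless of how large the file already is,
           so a 5-10MB string costs the same per append as an empty one.
           Assumes the strings/ directory exists. */
        static int append_chunk(int object_id, const char *chunk)
        {
            char path[64];
            snprintf(path, sizeof path, "strings/%d.log", object_id);

            FILE *f = fopen(path, "a");
            if (!f) return -1;

            int ok = fputs(chunk, f) >= 0;
            fclose(f);
            return ok ? 0 : -1;
        }

        int main(void)
        {
            /* Two appends, as one object would see per minute. */
            append_chunk(17, "PASS temp=21.4C\n");
            append_chunk(17, "PASS volt=3.31V\n");
            return 0;
        }

    A key-value store would mainly buy easier management of thousands of such blobs, not faster appends.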

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing object-oriented desktop app work. I keep running across very large scripts (thousands of lines) and wanting to refactor them in some way. I am learning that SQL is a different sort of beast, and it's probably fine to have these big scripts for the most part, but while explaining this to me, people also insist that the whole idea of refactoring is bad: that things like the .NET compiler are actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, and run too many CPU instructions compared to older, "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler, like a C compiler, is modestly more "efficient" (whatever that means at this high a level, without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC, then decrypt and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure that will be the final choice. It has 512KB flash and 128KB RAM. For the INI file, I am talking of a maximum of 8 sections, with a total of at most 256 entries, each at most 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need more than one line in RAM at a time ...
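
    One cipher that fits the "light but good enough" brief is XTEA: the reference implementation is public domain, about a dozen lines long, keeps its whole state in two 32-bit words, and has both C and Delphi ports. A minimal sketch of the embedded side, decrypting one 8-byte block at a time so RAM use stays tiny (the little-endian block layout and the feed callback are assumptions of this sketch, not part of XTEA):

        #include <stdint.h>

        /* XTEA block decryption (public-domain algorithm, 64-bit blocks,
           128-bit key, 32 rounds) - small enough for a UC3. */
        static void xtea_decipher(uint32_t v[2], const uint32_t key[4])
        {
            uint32_t v0 = v[0], v1 = v[1];
            const uint32_t delta = 0x9E3779B9;
            uint32_t sum = delta * 32;

            for (int i = 0; i < 32; i++) {
                v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + key[(sum >> 11) & 3]);
                sum -= delta;
                v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + key[sum & 3]);
            }
            v[0] = v0; v[1] = v1;
        }

        /* Decrypt the file 8 bytes at a time, handing each plaintext
           block to a line-oriented INI parser, so only one block plus
           the current line (max ~20 bytes here: 8-char key, '=',
           8-char value) is ever held in RAM. */
        void decrypt_ini(const uint8_t *ciphertext, uint32_t len,
                         const uint32_t key[4],
                         void (*feed)(const uint8_t *block, uint32_t n))
        {
            for (uint32_t off = 0; off + 8 <= len; off += 8) {
                uint32_t block[2];
                /* assumes blocks were stored little-endian by the Delphi side */
                for (int b = 0; b < 8; b++)
                    ((uint8_t *)block)[b] = ciphertext[off + b];
                xtea_decipher(block, key);
                feed((const uint8_t *)block, 8);
            }
        }

    As written this is ECB mode, which leaks repeated blocks; for a config file protected by "security through obscurity" that may be acceptable, and CBC costs only one extra 8-byte XOR per block if not.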
