Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all words related to that word, e.g.:

      Thesaurus (database)
        happy (table)
          excited (row) | cheerful (row) | lively (row)

    Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
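
    A common relational alternative is a single table of terms plus a self-referencing link table, rather than one table per word; the standardized relationship indicators then become data instead of schema. A minimal sketch (table and column names are illustrative, not from the question):

        CREATE TABLE term (
            term_id INT PRIMARY KEY,
            word    VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE term_relation (
            from_term_id  INT NOT NULL REFERENCES term(term_id),
            to_term_id    INT NOT NULL REFERENCES term(term_id),
            relation_type VARCHAR(20) NOT NULL, -- e.g. 'equivalence', 'hierarchical', 'associative'
            PRIMARY KEY (from_term_id, to_term_id, relation_type)
        );

        -- All words related to 'happy':
        SELECT t2.word, r.relation_type
        FROM term t1
        JOIN term_relation r ON r.from_term_id = t1.term_id
        JOIN term t2 ON t2.term_id = r.to_term_id
        WHERE t1.word = 'happy';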

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives to refactor a driver management. In my multitier architecture I have:

      Base class:
        DAL.Device        // my entity
      Interfaces:
        BL.IDriver        // handles the data processing between application and device
        BL.IDriverCreator // creates an IDriver from a Device
        BL.IDriverFactory // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs (a) changing code within the DriverFactory and (b) referencing the new IDriver implementation / assembly. From the customer's point of view that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):

      BL.RestDriver
      BL.RestDriverCreator
      DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do a name splitting/comparison (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
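
    A minimal C# sketch of the convention-based lookup (assuming the naming scheme above; Device, IDriver and IDriverCreator are the question's types, while the Create(Device) method on IDriverCreator and the factory wiring are assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        // Hypothetical convention-based factory: maps DAL.RestDevice to BL.RestDriverCreator
        // by stripping the "Device" suffix and scanning the loaded driver assemblies.
        public class ConventionDriverFactory
        {
            private readonly IEnumerable<Assembly> _driverAssemblies;

            public ConventionDriverFactory(IEnumerable<Assembly> driverAssemblies)
            {
                _driverAssemblies = driverAssemblies;
            }

            public IDriver CreateDriver(Device device)
            {
                string prefix = device.GetType().Name.Replace("Device", string.Empty); // "RestDevice" -> "Rest"
                string creatorName = prefix + "DriverCreator";

                Type creatorType = _driverAssemblies
                    .SelectMany(a => a.GetTypes())
                    .FirstOrDefault(t => t.Name == creatorName &&
                                         typeof(IDriverCreator).IsAssignableFrom(t));

                if (creatorType == null)
                    throw new InvalidOperationException("No driver creator found for " + device.GetType().Name);

                var creator = (IDriverCreator)Activator.CreateInstance(creatorType);
                return creator.Create(device); // assumes IDriverCreator exposes Create(Device)
            }
        }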

    Read the article

  • C# String.format extension method

    - by Paul Roe
    With the addition of extension methods to C# we've seen a lot of them crop up in our group. One debate revolves around extension methods like this one:

        public static class StringExt
        {
            /// <summary>
            /// Shortcut for string.Format.
            /// </summary>
            /// <param name="str"></param>
            /// <param name="args"></param>
            /// <returns></returns>
            public static string Format(this string str, params object[] args)
            {
                if (str == null)
                    return null;
                return string.Format(str, args);
            }
        }

    Does this extension method break any programming best practices that you can name? Would you use it anyway? If not, why? If I renamed the function to "F" but left the XML comments, would that be epic fail or just a wonderful savings of keystrokes?
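
    For reference, a call-site sketch (assuming the extension class above is in scope):

        // "user Paul has 3 messages"
        string msg = "user {0} has {1} messages".Format("Paul", 3);

        // A null receiver short-circuits rather than throwing, because extension
        // methods are static calls under the covers:
        string none = ((string)null).Format("ignored"); // returns null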

    Read the article

  • What is Mozilla's new release management strategy?

    - by RonK
    I saw today that Firefox released a new version (5). I tried reading about what was added and ran into this link: http://arstechnica.com/open-source/news/2011/06/firefox-5-released-arrives-only-three-months-after-firefox-4.ars It states that:

      Mozilla has launched Firefox 5, a new version of the popular open source Web browser. This is the first update that Mozilla has issued since adopting a new release management strategy that has drastically shortened the Firefox development cycle.

    I find this very intriguing - any idea what this new strategy is?

    Read the article

  • How is the RIP loaded when an interrupt arrives in an IA-32e 64-bit IDT Gate Descriptor?

    - by Vern
    I need some help with the programming of an IA-32e Interrupt Descriptor as I'm pretty new to it. I don't think I quite understand how the RIP is loaded when an interrupt arrives. There is a Segment Selector in Intel's 64-bit IDT Gate Descriptor. However, from my understanding across the 5-part Intel manuals, the linear address of the interrupt handler is loaded into RIP from the 64-bit offset specified in the IDT Gate Descriptor. The only use of the segment selector is to check:

      - whether there is a change in privilege levels
      - whether the interrupt handler is truly pointing to a code segment

    My questions are:

      - Is RIP taken from the 64-bit offset only? Or is RIP = offset (sign-extended to 64 bits) + segment selector base?
      - Is the base address pointed to by the segment selector in the IDT Gate Descriptor ignored? Or does it have a use?
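
    For reference, a sketch of the 16-byte IA-32e IDT gate descriptor as a C struct (field names are illustrative; Intel SDM Vol. 3 has the normative layout). In 64-bit mode the delivered RIP is assembled from the three offset fields alone; the selector is used for the privilege and code-segment checks and loaded into CS, whose base is treated as zero in IA-32e mode:

        #include <stdint.h>

        /* Illustrative layout of a 16-byte IA-32e IDT gate descriptor. */
        struct idt_gate64 {
            uint16_t offset_low;  /* handler address bits 15:0 */
            uint16_t selector;    /* code segment selector: checks only, loaded into CS */
            uint8_t  ist;         /* bits 2:0 = Interrupt Stack Table index */
            uint8_t  type_attr;   /* gate type, DPL, present bit */
            uint16_t offset_mid;  /* handler address bits 31:16 */
            uint32_t offset_high; /* handler address bits 63:32 */
            uint32_t reserved;
        };

        /* RIP loaded on interrupt delivery in 64-bit mode: offset only, no segment base. */
        static inline uint64_t gate_target_rip(const struct idt_gate64 *g)
        {
            return (uint64_t)g->offset_low
                 | ((uint64_t)g->offset_mid  << 16)
                 | ((uint64_t)g->offset_high << 32);
        }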

    Read the article

  • What is the best way to design a table with an arbitrary id?

    - by P.Brian.Mackey
    I have the need to create a table with a unique ID as the PK. The ID is a surrogate key. Originally I had a natural key, but requirement changes have undermined this idea. Then I considered adding an auto-incrementing identity, but this presents problems:

      A. I can't specify my own ID.
      B. The IDs are difficult to reset.

    Both of these together make it difficult to copy over this table with new data or move the table across domains, e.g. Dev to QA. I need to refer to these IDs from the front end (JavaScript), so they must not change. So the only way I am aware of to meet all these challenges is to make the ID a GUID. This way, I can overwrite the IDs when I need to, or I can generate a new one without concern for order (e.g. an int-based ID would require that I know the last inserted ID). Is a GUID the best way to accomplish my goals? Considering that a GUID is a string, and joining on a string is an expensive task, is there a better way?
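
    A sketch of the GUID-keyed table in T-SQL, as one concrete example (table and column names are illustrative; note that SQL Server stores a uniqueidentifier as a 16-byte binary value rather than a character string, so joins on it are wider than int joins but are not string comparisons):

        CREATE TABLE Widget (
            WidgetId UNIQUEIDENTIFIER NOT NULL
                CONSTRAINT PK_Widget PRIMARY KEY
                CONSTRAINT DF_Widget_Id DEFAULT NEWSEQUENTIALID(), -- sequential to reduce index fragmentation
            Name     NVARCHAR(100) NOT NULL
        );

        -- Caller-supplied IDs remain possible, which keeps Dev -> QA copies stable:
        INSERT INTO Widget (WidgetId, Name)
        VALUES ('8f14e45f-ceea-467f-a0e6-d5d2f9b8c001', 'imported row');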

    Read the article

  • How can I justify software testing to management?

    - by Nate
    I work for a small company (less than 200 employees) whose software group only makes up a small part of our staff (4 employees, occasionally with a few contractors). The four of us have been making strides in transitioning to better practices, and one of the next logical steps is to improve our testing. As anyone who has done any meaningful tests knows, testing takes a lot of time - and at my company, it takes too much time to justify to management, so we generally do what little we do on the sly. I don't think this is serving us well, as we keep coming up against otherwise avoidable problems when we ship under-tested software. I would like to be able to come to management with a justification for hiring a dedicated software test engineer (someone who can both write automated tests and perform manual ones). Are there any good published studies that show the benefits of adding such a position to a small company? Where can I find information about costs associated with the position? I plan on doing a little number crunching on our own history, but having some external sources to point to would help bolster my case.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work that's performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However, this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options of ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
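
    One option for the fragility concern is to let the test framework guarantee the cleanup: PHPUnit's setUp()/tearDown() run around every test, so created users are removed even when an assertion fails mid-test. A sketch under the question's assumptions (ServiceClientFactory is a hypothetical stand-in for however $client is obtained):

        class UserServiceIntegrationTest extends PHPUnit_Framework_TestCase
        {
            private $client;
            private $createdUserIds = array();

            protected function setUp()
            {
                $this->client = ServiceClientFactory::create(); // hypothetical factory
            }

            protected function tearDown()
            {
                // Delete anything this test created, even if its assertions failed.
                foreach ($this->createdUserIds as $id) {
                    $this->client->DeleteUser($id);
                }
                $this->createdUserIds = array();
            }

            private function createUser($name)
            {
                $id = $this->client->CreateUser($name);
                $this->createdUserIds[] = $id;
                return $id;
            }

            public function testCreateUserReturnsValidUserId()
            {
                $newUserId = $this->createUser("user1");
                $this->assertNotNull($newUserId);
                $this->assertNotNull($this->client->FindUser($newUserId));
            }
        }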

    Read the article

  • Developing for Windows CE platform?

    - by grmbl
    I'm looking at creating some applications for workers to use on the work floor. They'll be using Psion NEO devices running Windows CE 5.0. My skill set allows for C#, PHP, ASP.NET (+ web services). Application requirements:

      - should connect to our ERP system running on IBM iSeries (AS400)
      - should run in fullscreen (effectively hiding the OS)
      - usability / touch functionality

    I have tried the following:

    Full WinForms application run through an RDP session:
      [+] easy deployment using .rdp file
      [+] application can be run in a desktop environment too
      [+] RDP host can easily access DB2 using IBM drivers
      [+] GUI works OK on small screen
      [-] environment = terminal server (which is already under heavy use)

    Full WinForms application running on the device OS:
      [+] environment = local
      [+] responsive
      [-] must use a web service to access DB2
      [-] deployment...
      [-] fixed platform (no desktop)

    Console application running on the device OS:
      [+] environment = local
      [+] very responsive
      [-] must use a web service to access DB2
      [-] no fullscreen or other window options?
      [-] deployment...
      [-] fixed platform (no desktop)

    I'm considering creating a web application, but it seems the OS comes with IE 5? I don't want to alter the OS in any way (install other browsers, etc.). I would like to have an application that's responsive, easy to deploy, fullscreen, and optionally multiplatform. I have seen handheld devices using terminal (emulation?) with a console-like interface. This seems to be native to the device, but I'm afraid this requires modest knowledge of C++? It seems that using RDP is the way to go but, I came here for advice and am looking for people that have been in the same situation willing to share their experience. There do not seem to be many "best practices" on the web that could help me decide the best way of working. Greetings

    Read the article

  • Why use link classes in OQL instead of classes that contain links

    - by Isaac
    itop abstracts its very complex database design with an object query language (OQL). For this there are classes defined, like 'Ticket' and 'Server'. Now a Ticket usually is linked to a Server. In my naive way, I would give the Ticket class an attribute 'affected_server_list', where I could reference the affected servers. itop does it differently: neither Servers nor Tickets know of each other. Instead there is a class 'linkTicketToServer', which provides the link between the two. The first thing I noticed is that it makes OQL queries more complex. So I wondered why they designed it this way. One thing that occurred to me is that it allows for more flexibility, in that I can add links without modifying the original classes. Is this already why one would implement it this way, or are there other reasons for this kind of design?
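
    In plain relational terms, a link class corresponds to a join (association) table. A sketch of the two designs (names are illustrative; this is an interpretation of the pattern, not itop's actual schema):

        CREATE TABLE server (server_id INT PRIMARY KEY);
        CREATE TABLE ticket (ticket_id INT PRIMARY KEY);

        -- Alternative A (attribute on Ticket): ticket would carry a server_id
        -- column, giving many-to-one and requiring a schema change on Ticket
        -- for every new kind of link.

        -- Alternative B (link class, itop-style): many-to-many, and attributes
        -- of the relationship itself get a natural home.
        CREATE TABLE link_ticket_to_server (
            ticket_id INT NOT NULL REFERENCES ticket(ticket_id),
            server_id INT NOT NULL REFERENCES server(server_id),
            impact    VARCHAR(50), -- attribute of the relationship itself
            PRIMARY KEY (ticket_id, server_id)
        );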

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that pretend to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but I also understand that, as usual, it comes with a price. In particular:

      1. Live binding is quite heavy: either dirty watching (high CPU consumption) or Object.observe() (high memory consumption, with high CPU load in some scenarios).
      2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. The usual time split: 90% of features are made in 10% of project time, but the remaining 10% take 90% of project time.

    I suspect (a.k.a. an educated guess) that those MVC things are not helping to implement more functionality in less time. If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself. So here is my question: what benefits does that MVC stuff or data binding give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, what exactly?

    Read the article

  • Moore's law and quadratic algorithms

    - by damon
    I was going through a video (from Coursera, by Sedgewick) in which he argues that you cannot sustain Moore's law using a quadratic algorithm. He elaborates like this:

      - In year 197*, you build a computer of power X and need to count N objects. This takes M days.
      - According to Moore's law, you have a computer of power 2X after 1.5 years. But now you have 2N objects to count.
      - If you use a quadratic algorithm, in year 197*+1.5 it takes (4M)/2 = 2M days: 4M because the algorithm is quadratic, and division by 2 because of the doubling of computer power.

    I find this hard to understand. I tried to work through it as below. To count N objects using comp = X, it takes M days -> N/X = M. After 1.5 years, you need to count 2N objects using comp = 2X -> 2N/(2X) -> N/X -> M days. Where do I go wrong? Can someone please help me understand?
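
    For comparison, here is the same computation with the quadratic cost written out explicitly. The derivation in the question (N/X = M) assumes the work is linear in N; with a quadratic algorithm the work scales with N^2:

        N^2 / X = M                       (year 197*: quadratic work on N objects, power X)
        (2N)^2 / (2X) = 4N^2 / (2X)
                      = 2 * (N^2 / X)
                      = 2M                (year 197*+1.5: twice as many days, as the video claims)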

    Read the article

  • Where To Begin To Make A Website

    - by lolyoshi
    I'm a newbie in web programming. I haven't done anything that relates to websites before. Now my new task is creating a website using Java, JSP, HTML, CSS, MySQL, Apache and the Spring Framework (MVC model). I want to know what I should research if I want my website to have functions such as posting entries, commenting on entries, deleting entries, and editing entries, like a forum. What do I need to know besides the above? I also don't know how to update my website automatically when there are changes, such as the top-viewed products or the best products; I don't think I'll input or change them manually. So which tools or languages can support that? Thanks in advance.

    Read the article

  • How can one manage thousands of IF...THEN...ELSE rules?

    - by David
    I am considering building an application which, at its core, would consist of thousands of if...then...else statements. The purpose of the application is to predict how cows move around in any landscape. They are affected by things like the sun, wind, food sources, sudden events, etc. How can such an application be managed? I imagine that after a few hundred if-statements it would be as good as unpredictable how the program would react, and debugging what led to a certain reaction would mean having to traverse the whole if-statement tree every time. I have read a bit about rules engines, but I do not see how they would get around this complexity.
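
    One way a rules engine gets a handle on this is to turn each if...then into a data record (name, condition, action) evaluated by a single loop, so every firing can be logged and rules can be listed and tested individually. A minimal C# sketch (CowState and the sample rule are illustrative, not from the question):

        using System;
        using System.Collections.Generic;

        // Hypothetical world state for one cow.
        class CowState
        {
            public double Temperature;
            public double DistanceToFood;
            public bool   SuddenNoise;
        }

        class Rule
        {
            public string Name;
            public Func<CowState, bool> Condition;
            public Action<CowState> Action;
        }

        class RuleEngine
        {
            private readonly List<Rule> _rules = new List<Rule>();

            public void Add(string name, Func<CowState, bool> cond, Action<CowState> act)
                => _rules.Add(new Rule { Name = name, Condition = cond, Action = act });

            public void Evaluate(CowState state)
            {
                foreach (var rule in _rules)
                {
                    if (rule.Condition(state))
                    {
                        Console.WriteLine("firing: " + rule.Name); // built-in trace for debugging
                        rule.Action(state);
                    }
                }
            }
        }

        // Usage sketch:
        //   var engine = new RuleEngine();
        //   engine.Add("seek shade", s => s.Temperature > 30.0,
        //              s => Console.WriteLine("cow moves to shade"));
        //   engine.Evaluate(new CowState { Temperature = 32.0 });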

    Read the article

  • How can I let prospective employers know I'm a great developer?

    - by Zoe Gagnon
    I've recently read through Joel's guide to finding great developers (http://www.joelonsoftware.com/articles/FindingGreatDevelopers.html), and I feel really strongly that I am smart and get things done. The problem is, I didn't learn how to get things done until about halfway through college, so my GPA is less than stellar. Additionally, I've got a few other things going against me: late into the job market (~30), no internship, state college instead of university, and when I graduated I pretty much had to take the first job that was offered. With all of these things piled together, my resume (the first step to getting a job) is not terribly impressive. What can I do to let people know that I'm a great developer and would complement the best companies in the world?

    Read the article

  • Nearly technical books that you enjoyed reading

    - by pablo
    I've seen questions about "what books you recommend" in several of the Stack Exchange verticals. Perhaps these two questions (a and b) are the most popular. But I'd like to ask for recommendations of a different kind of book. I have read "The Passionate Programmer" in the past, and I am now reading "Coders at Work". Both of them, I would argue, are almost biographies (or collections of biographies, in the case of "Coders at Work"), or even a bit of a "self-help" book (that is more the case with "The Passionate Programmer"). And please don't get me wrong: I loved reading the first one, and I am loving reading the second one. There's a lot of value in them, mostly in a "lessons of the trade" kind of way. So here is what I'd like to know: what other books have you read that are similar to these ones in intent, and that you enjoyed? Why?

    Read the article

  • Software Life-cycle of Hacking

    - by David Kaczynski
    At my local university, there is a small student computing club of about 20 students. The club has several small teams with specific areas of focus, such as mobile development, robotics, game development, and hacking / security. I am introducing some basic agile development concepts to a couple of the teams, such as user stories, estimating complexity of tasks, and continuous integration for version control and automated builds/testing. I am familiar with some basic development life-cycles, such as waterfall, spiral, RUP, agile, etc., but I am wondering if there is such a thing as a software development life-cycle for hacking / breaching security. Surely, hackers are writing computer code, but what is the life-cycle of that code? I don't think that they would be too concerned with maintenance, as once the breach has been found and patched, the code that exploited that breach is useless. I imagine the life-cycle would be something like:

      1. Find gap in security
      2. Exploit gap in security
      3. Procure payload
      4. Utilize payload

    I propose the following question: what kind of formal definitions (if any) are there for the development life-cycle of software when the purpose of the product is to breach security?

    Read the article

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company who mainly uses PHP to develop their front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features, without much thought for the overall design. The apps themselves output XML to render on small mobile devices.

    I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example interfacing with databases in such a way as to make SQL injection impossible, would be very useful too.

    The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP itself). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB and generate about 5 different XML screens; probably around 1 or 2 thousand lines of code. The time it takes just to configure the framework may not be worth it. The final problem I can see is that developers in the company, who have to go over my code and who do not know the PHP framework I may use, will have a much harder time understanding it.

    Given those pros and cons, I'm still not sure what the best course of action will be, so any advice will be greatly appreciated.

    Read the article

  • Resources for up-to-date Delphi programming

    - by Dan Kelly
    I'm a developer in a small department and have been using Delphi for the last 10 years. Whilst I've tried to keep up-to-date with movements there are a lot of changes that have occurred between Delphi 7 and (current for us) 2010. Stack Exchange and here have been great for answering the "how do you" questions, but what I'd like is a resource that shows great examples of larger scale programming. For example is there anywhere that hosts examples of well written, multi form applications? Something that can be looked at as a whole to illustrate why things should be done a certain way?

    Read the article

  • Are there good resources for leading documentation for an existing software product having none?

    - by Ben Rose
    Hello. I'm a software developer at a technology company. I have been tasked with leading the documentation effort for the product I work on, both internal to developers as well as spilling over into facilitating the business side of requirements documentation. This internal product has been around for at least 6 years. One challenge is that this software application has no form of documentation other than some small, outdated pieces here and there. There are comments in the code, but they are technical and do not convey any over-arching behavior (even on the technical side). As a consequence of having little to no documentation, this product is often unnecessarily complex under the covers, adding to the challenge. We are very limited on time that will be given to us to work on documentation. Another thing about me is that I've displayed some ability in writing/communication around the office, but I'm not coming from any sort of documentation or formal writing background (beyond my academic career). Please share your advice or recommend resources (book/website/forum/whatever) for helping me come up with a plan with milestones, best practices, task delegation, templates, buy-in, etc. I'm hoping for a resource targeting, or giving special mention of, introducing good documentation on existing projects where there previously was none. I would be very grateful for your responses. Ben

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • Understanding the problem when things break in production

    - by bitcycle
    Scenario:

      - You push to production.
      - The push broke multiple things.
      - That same build did not break QA or dev.
      - As a developer, you don't have prod access.
      - There is lots of pressure from above to get things working again.

    Specifics: a PHP/MVC application that is API-driven in Zend, deployed to a few servers.

    My question: while investigating, let's say I have a hunch that something is wrong, but I don't know for sure, and of course I can't test things in production. If I have a suggested fix based on that hunch, would it be wise to try to apply it and see if it works before understanding what the problem is?

    Read the article

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.

    Read the article

  • Pointer initialization doubt

    - by Jestin Joy
    We can initialize a character pointer like this in C:

        char *c = "test";

    where c points to the first character ('t'). But when I wrote the code below, it gave a segmentation fault:

        #include <stdio.h>
        #include <stdlib.h>
        main()
        {
            int *i;
            *i = 0;
            printf("%d", *i);
        }

    But when I wrote:

        #include <stdio.h>
        #include <stdlib.h>
        main()
        {
            int *i;
            i = (int *)malloc(2);
            *i = 0;
            printf("%d", *i);
        }

    it worked (gave output 0). It also worked when I used malloc(0) (gave output 0). Please tell me what is happening.
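
    For context on what the two snippets do: the first dereferences an uninitialized pointer, and the second writes an int into a 2-byte allocation; both are undefined behavior in C, so a segfault and an apparent success (as with malloc(2) or malloc(0)) are equally permissible outcomes. A well-defined version of the sketch:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Allocate enough space for one int; malloc(2) is too small on
               platforms where sizeof(int) == 4, even if it happens to "work". */
            int *i = malloc(sizeof *i);
            if (i == NULL)
                return 1;

            *i = 0;
            printf("%d\n", *i);

            free(i);
            return 0;
        }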

    Read the article

  • Have unit test generators helped you when working with legacy code?

    - by Duncan Bayne
    I am looking at a small (~70kLOC including generated) C# (.NET 4.0, some Silverlight) code-base that has very low test coverage. The code itself works in that it has passed user acceptance testing, but it is brittle and in some areas not very well factored. I would like to add solid unit test coverage around the legacy code using the usual suspects (NMock, NUnit, StatLight for the Silverlight bits). My normal approach is to start working through the project, unit testing & refactoring, until I am satisfied with the state of the code. I've done this many times in the past, and it's worked well. However, this time I'm thinking of using a test generator (in particular Pex) to create the test framework, then manually fleshing it out. My question is: have you used unit test generators in the past when commencing work on a legacy codebase, and if so, would you recommend them? My fear is that the generated tests will miss the semantic nuances of the code-base, leading to the dreaded situation of having tests for the sake of the coverage metric, rather than tests which clearly express the intended behaviour in code.

    Read the article
