Search Results

Search found 2372 results on 95 pages for 'relational theory'.

Page 9 of 95

  • What are the processes of true Quality assurance?

    - by user970696
    Having read that quality assurance (QA) is focused on processes (while quality control (QC) is focused on the product), the books often mention that QA is the verification process - doing peer reviews, inspections, etc. I still tend to think these are also QC, as they check intermediate products. Elsewhere I have read that a QA activity is, for example, choosing the right bug tracker. That sounds better to me in terms of process improvement. The question, which the close-voting person obviously missed, is pretty clear: what are the activities that true QA should perform? I would appreciate a reference, as I am working on a thesis dealing with all these discrepancies and inconsistencies in the software quality world.

    Read the article

  • Why do many designs ignore normalization in RDBMS?

    - by Yosi
    I have seen many designs in which normalization was not the first consideration in the decision-making phase. In many cases those designs included more than 30 columns, and the main approach was "to put everything in the same place." As far as I remember, normalization is one of the first and most important design principles, so why is it dropped so easily sometimes? Edit: is it true that good architects and experts choose a denormalized design while non-experienced developers choose the opposite? What are the arguments against starting your design with normalization in mind?
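
    For illustration, here is a minimal sketch of the contrast the question describes: one wide table versus a normalized pair of tables (all table and column names are invented for the example):

    -- Denormalized: everything about an order and its customer in one wide table.
    CREATE TABLE orders_flat (
        order_id      INT PRIMARY KEY,
        order_date    DATE,
        customer_name VARCHAR(255),   -- repeated on every order by the same customer
        customer_city VARCHAR(255)
    );

    -- Normalized: customer data stored once and referenced by key.
    CREATE TABLE customers (
        customer_id INT PRIMARY KEY,
        name        VARCHAR(255),
        city        VARCHAR(255)
    );

    CREATE TABLE orders (
        order_id    INT PRIMARY KEY,
        order_date  DATE,
        customer_id INT REFERENCES customers(customer_id)
    );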

    Read the article

  • Is ORM an Anti-Pattern?

    - by derphil
    I had a very stimulating and interesting discussion with a colleague about ORM and its pros and cons. In my opinion, an ORM is useful only in the rarest cases, at least in my experience. But I don't want to list my own arguments at this time. So I ask you: what do you think about ORM? What are the pros and the cons? P.S. I posted this "question" yesterday on Stack Overflow, but some of the users think it is better suited here.

    Read the article

  • Verification vs. validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and it created a lot of controversy, so I tried to collect some data and ask a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp According to ISO 12207, testing is done in validation:

    • Prepare Test Requirements, Cases and Specifications
    • Conduct the Tests

    For verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery." and "The software components and units of each software item have been completely and correctly integrated into the software item." I am not sure how one would verify these without testing, but testing is not listed there as a technique.

    From IEEE: Verification: "The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase." [IEEE-STD-610]. Validation: "The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements." [IEEE-STD-610]. At the end of the development process? That would mean UAT.

    So the question is: which testing (unit, integration, system, UAT) is considered verification and which validation? I do not understand why some say dynamic verification is testing, while others say that only validation is. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When checking the presence of the fields and the Save button, I would say it is verification. When I follow the use case, it is validation. So it is both together, done on the system as a whole.
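
    Purely as an illustrative sketch of the distinction drawn in the example above (the Form type, its 64-character limit, and the checks are hypothetical, invented for the illustration):

    #include <cassert>
    #include <cstddef>
    #include <string>

    // Hypothetical model of the form from the example: two fields and a save action.
    struct Form {
        static const std::size_t kMaxLen = 64;
        std::string firstName, lastName;
        bool saved = false;
        bool setFirstName(const std::string& v) { if (v.size() > kMaxLen) return false; firstName = v; return true; }
        bool setLastName(const std::string& v)  { if (v.size() > kMaxLen) return false; lastName = v; return true; }
        void save() { saved = true; }
    };

    int main() {
        // "Verification" flavour: check the product against the stated specification
        // (fields must reject input longer than 64 characters).
        Form f;
        assert(!f.setFirstName(std::string(65, 'x')));  // over the limit is rejected
        assert(f.setFirstName(std::string(64, 'x')));   // exactly at the limit is accepted

        // "Validation" flavour: walk through the use case as the user would
        // (fill in first and last name, then save).
        Form g;
        g.setFirstName("John");
        g.setLastName("Doe");
        g.save();
        assert(g.saved);
        return 0;
    }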

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet it did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from ISO, where validation mentions testing as the activity. Not to mention a lot of contradictory information on the net. I would really appreciate a reference, e.g. to a standard, in the answer, or an explanation of what I missed in the ISO text. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's Algorithm Design Manual), i.e. when we have an array structure and, each time we run out of space, we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, by describing that half the elements move once, a quarter of the elements twice, and so on, it gives the total number of movements M as:

    M = sum for i = 1 to lg n of i * (n / 2^i)

    This seems to me to add more copies than actually happen. E.g. if we have the following:

    array of 1 element
    +--+
    |a |
    +--+

    double the array (2 elements)
    +--++--+
    |a ||b |
    +--++--+

    double the array (4 elements)
    +--++--++--++--+
    |a ||b ||c ||c |
    +--++--++--++--+

    double the array (8 elements)
    +--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x |
    +--++--++--++--++--++--++--++--+

    double the array (16 elements)
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x element copied 4 times, the c element copied 4 times, the b element copied 4 times, and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2) + 2*(16/4) + 3*(16/8) + 4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-limit approximation? Or am I misunderstanding something here?
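
    A quick way to sanity-check the counting above is to simulate the doubling strategy directly; a minimal sketch (not from the book):

    #include <cstdio>

    int main() {
        // Append n elements to an array that doubles its capacity when full,
        // counting how many times existing elements are moved to a new buffer.
        const int n = 16;
        long long moves = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; ++i) {
            if (size == capacity) {   // out of space: allocate 2x and copy everything
                moves += size;        // every existing element moves once
                capacity *= 2;
            }
            ++size;                   // writing the new element is not counted as a move
        }
        std::printf("elements: %d, total moves: %lld\n", n, moves);  // prints 15 for n = 16
        return 0;
    }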

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically: The 1st would be represented by a line. The 2nd would be represented by a square. The 3rd would be represented by a cube. Simple enough until we get to the 4th -- It is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time. The Question: Now, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: How does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc... arrays in many different programming languages, but I want to know how that works.
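
    By way of illustration (a sketch, not tied to any particular language's internals): most languages ultimately lay a multi-dimensional array out as one flat block of memory and compute an offset from the indices, so a fourth "dimension" is just one more index in that arithmetic.

    #include <cstdio>
    #include <vector>

    int main() {
        // A "4D array" with extents D1 x D2 x D3 x D4, stored as one flat block.
        const int D1 = 2, D2 = 3, D3 = 4, D4 = 5;
        std::vector<int> a(D1 * D2 * D3 * D4);

        // Element (i, j, k, l) lives at a single linear offset; the fourth index
        // is handled exactly like the first three, just one more multiply and add.
        int i = 1, j = 2, k = 3, l = 4;
        int offset = ((i * D2 + j) * D3 + k) * D4 + l;
        a[offset] = 42;

        std::printf("a[%d][%d][%d][%d] -> flat index %d, value %d\n",
                    i, j, k, l, offset, a[offset]);
        return 0;
    }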

    Read the article

  • Severity and relation to occurrence - priority?

    - by user970696
    I have been browsing through some web pages related to testing and found one dealing with testing metrics. It says: "The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence)." I do not think this is correct; or what am I missing? Usually it is the priority that is the result of such a calculation (a severe bug that occurs rarely is still severe but does not have to be fixed immediately). Also, going by this description, what is the difference between the effect on the end user and the business impact?

    Read the article

  • Is excessive indirection and/or redundant encapsulation a recognized concept?

    - by Omega
    I'm curious whether there is a recognized tendency or anti-pattern whereby a developer will always locally re-wrap external dependencies when consuming them. A slightly less vague example might be, say, consuming an implementation of an interface or abstract class and mapping every touch-point locally before interacting with it - like an overcomplicated take on composition. Given my example, would the interface not be reliable enough, and would any change to it never be surmountable at any level of indirection? Is this a good or a bad practice? Can it ever go too far? Does it have a proper name?
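
    A contrived sketch of the kind of local re-wrapping the question seems to describe (all names invented for the example):

    #include <iostream>
    #include <string>

    // External dependency (e.g. from a library): an already usable abstraction.
    class Logger {
    public:
        virtual ~Logger() = default;
        virtual void log(const std::string& msg) = 0;
    };

    class ConsoleLogger : public Logger {
    public:
        void log(const std::string& msg) override { std::cout << msg << '\n'; }
    };

    // Local re-wrap: adds no behaviour, only forwards; every touch-point of
    // Logger is mirrored here "just in case" the dependency changes.
    class MyLogger {
    public:
        explicit MyLogger(Logger& inner) : inner_(inner) {}
        void log(const std::string& msg) { inner_.log(msg); }
    private:
        Logger& inner_;
    };

    int main() {
        ConsoleLogger console;
        MyLogger wrapper(console);   // indirection on top of an already abstract type
        wrapper.log("hello");
        return 0;
    }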

    Read the article

  • Quality Assurance tools discrepancies

    - by Roudak
    It is a bit ironic: yesterday I answered a question related to this topic that was marked as good, and today I am the one asking. These are my thoughts and a question. Let's agree on the terms: QA is a set of activities that defines and implements processes during software development; the common tool is the process audit. However, my colleague at work holds the opinion that reviews and inspections are also quality assurance tools, although most sources classify them as quality control. I would say both sides are partially right: during inspections we evaluate a physical product (clearly QC), but we see it as a white box, so we can check its compliance with the defined processes (QA). Do you think this is the reason for the dichotomy among the authors? I know it is more of an academic question, but it deserves an answer :)

    Read the article

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about drawbacks of the SQL language (the basic language specification, not vendor-specific), and one of the drawbacks was that the language does not allow creating a set of tuples that do not come from a table. For instance, SELECT firstName, lastName FROM people; creates a set of tuples coming from the table people. Now, if I don't have this table people and I want to return a constant, I'd need something like this to return a set of two tuples (without requiring a table): SELECT VALUES('james', 'dean'), ('tom', 'cruise'); Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays as well) in any advanced programming language. Workarounds: yes, I could create a temporary table, fill in the data, and SELECT from that table. This is a hack to get around the drawback. I think I read about this somewhere in The Third Manifesto, but I can no longer find the paragraph/example discussing this concrete drawback. Do you know a reference for it?
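
    For what it is worth, later SQL standards and several implementations do provide a row/table value constructor; a sketch of the usual forms (vendor support varies, so treat these as examples to check against a specific DBMS):

    -- Table value constructor (SQL:2003 feature; works, e.g., in PostgreSQL):
    VALUES ('james', 'dean'), ('tom', 'cruise');

    -- The same rows as a named derived table:
    SELECT t.firstName, t.lastName
    FROM (VALUES ('james', 'dean'), ('tom', 'cruise')) AS t(firstName, lastName);

    -- Portable fallback using UNION ALL (no table needed):
    SELECT 'james' AS firstName, 'dean' AS lastName
    UNION ALL
    SELECT 'tom', 'cruise';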

    Read the article

  • Isn't class scope purely for organization?

    - by Di-0xide
    Isn't scope just a way to organize classes, preventing outside code from accessing certain things you don't want accessed? More specifically, is there any functional gain to having public, protected, or private-scoped methods? Is there any advantage to classifying method/property scope rather than, say, just making everything public? My presumption is no, simply because in binary code there is no sense of scope (other than r/w/e, which isn't really scope at all, but rather global permissions for a block of memory). Is this correct? What about in languages like Java and C# [.NET]?
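
    As one concrete illustration of the "functional gain" being asked about (a made-up example): access modifiers let the compiler guarantee that an invariant can only be broken inside the class itself. The check happens at compile time, even though, as the question notes, nothing of it survives into the binary.

    #include <cassert>

    class BankAccount {
    public:
        // The only way to change the balance is through methods that
        // preserve the invariant "the balance never goes negative".
        bool withdraw(long amount) {
            if (amount < 0 || amount > balance_) return false;
            balance_ -= amount;
            return true;
        }
        void deposit(long amount) { if (amount > 0) balance_ += amount; }
        long balance() const { return balance_; }

    private:
        long balance_ = 0;   // if this were public, any code could set it to -1
    };

    int main() {
        BankAccount acc;
        acc.deposit(100);
        // acc.balance_ = -1;        // would not compile: balance_ is private
        assert(!acc.withdraw(200));  // rejected, invariant preserved
        assert(acc.withdraw(50) && acc.balance() == 50);
        return 0;
    }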

    Read the article

  • How can I implement a database TableView-like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, the ability to freely add and remove items, and transformations (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item:

    #include <iostream>
    #include <string>
    #include <utility>

    #include <boost/multi_index_container.hpp>
    #include <boost/multi_index/member.hpp>
    #include <boost/multi_index/ordered_index.hpp>
    #include <boost/multi_index/random_access_index.hpp>

    using boost::multi_index_container;
    using namespace boost::multi_index;

    struct Data {
        Data() {}
        int id;
        std::string name;
    };

    struct row {};
    struct id {};
    struct name {};

    typedef boost::multi_index_container<
        Data,
        indexed_by<
            random_access<tag<row> >,
            ordered_unique<tag<id>, member<Data, int, &Data::id> >,
            ordered_unique<tag<name>, member<Data, std::string, &Data::name> >
        >
    > TDataTable;

    class DataTable {
    public:
        typedef Data item_type;
        typedef TDataTable::value_type value_type;
        typedef TDataTable::const_reference const_reference;
        typedef TDataTable::index<row>::type TRowIndex;
        typedef TDataTable::index<id>::type TIdIndex;
        typedef TDataTable::index<name>::type TNameIndex;
        typedef TRowIndex::iterator iterator;

        DataTable()
            : row_index(rule_table.get<row>()),
              id_index(rule_table.get<id>()),
              name_index(rule_table.get<name>()),
              row_index_writeable(rule_table.get<row>()) {}

        TDataTable::const_reference operator[](TDataTable::size_type n) const { return rule_table[n]; }

        std::pair<iterator, bool> push_back(const value_type& x) { return row_index_writeable.push_back(x); }

        iterator erase(iterator position) { return row_index_writeable.erase(position); }

        bool replace(iterator position, const value_type& x) { return row_index_writeable.replace(position, x); }

        template<typename InputIterator>
        void rearrange(InputIterator first) { return row_index_writeable.rearrange(first); }

        void print_table() const;

        unsigned size() const { return row_index.size(); }

        TDataTable rule_table;
        const TRowIndex& row_index;
        const TIdIndex& id_index;
        const TNameIndex& name_index;

    private:
        TRowIndex& row_index_writeable;
    };

    class DataTableView {
        DataTableView(const DataTable& source_table) {}
        // How can I implement this?
        // I want filtering, sorting, signaling the upper GUI layer, and sorting, and ...
    };

    int main() {
        Data data1;
        data1.id = 1;
        data1.name = "name1";

        Data data2;
        data2.id = 2;
        data2.name = "name2";

        DataTable table;
        table.push_back(data1);

        DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
        table.erase(it1);

        table.push_back(data1);

        Data new_data(table[0]);
        new_data.name = "new_name";
        table.replace(table.row_index.iterator_to(table[0]), new_data);

        for (unsigned i = 0; i < table.size(); ++i)
            std::cout << table[i].name << std::endl;

    #if 0
        // Usage scenarios:
        DataTableView table_view(table);
        table_view.fill_from_source();   // synchronization with the source
        table_view.remove(data_item1);   // remove an item from the view
        table_view.add(data_item2);      // add an item from the source table
        table_view.filter(filterfunc);   // filtering
        table_view.sort(sortfunc);       // sorting
        // When modifying the source table, how do we signal the table_view?
        // FYI: the table view is attached to a GUI item.
        table.erase(data);
        table.replace(data);
    #endif

        return 0;
    }

    Read the article

  • ISO 12207 - is testing only a validation activity? [closed]

    - by user970696
    Possible Duplicate: How come verification does not include actual testing? The ISO 12207 standard states that testing is only a validation activity, while all static inspections are verification (checking that a requirement, code, etc. is complete, correct, and so on). I did find some articles saying this is not correct, but you know, they are not "official". I would like to understand, because there are two different concepts (in books and articles): 1) Verification is all testing except UAT (because only the user can really validate the use), e.g. here. OR 2) Verification is everything but testing; all testing is validation, e.g. here. The definitions are mostly the same, as in Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer's expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, because I just follow the requirements. But the ISO standard does not agree.

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks that testing is not enough. For example, he points out that it would be impossible to test a multiplication function f(x,y) = x*y for large values of x and y across the entire ranges of x and y. My question concerns his miscellaneous remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens, or whether there are any statistics on this?

    Read the article

  • What is the aim of software testing?

    - by user970696
    Having read many books, I see a basic contradiction: some say "the goal of testing is to find bugs," while others say "the goal of testing is to assess the quality of the product," meaning that bugs are its by-products. I would also argue that if testing were aimed primarily at bug hunting, who would do the actual verification and actually provide the information that the software is ready? Even Kaner, for example, changed his original definition of the goal of testing from bug hunting to providing a quality assessment, but I still cannot see a clear difference. I perceive both as equally important. I can verify software against its specification to make sure it works, and in that case the bugs found are just by-products. But I also perform tests just to break things. So which definition is more accurate?

    Read the article

  • Semantic algorithms

    - by Mythago
    I have a question that is more theoretical than practical. I'll start with an example: when I get an email and open it on my iPad, there is a feature that recognizes the timestamp in the text and offers to create an event in the calendar. Simply put, I want to know in theory how this is done. I believe it is some kind of semantic parsing, and I would appreciate it if someone could point me to resources where I can read more about this.
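
    In its simplest incarnations this kind of feature can be little more than pattern matching plus normalization (real mail clients use more elaborate entity-recognition pipelines); a toy sketch of the pattern-matching step:

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        // Toy "date spotter": find substrings that look like a date or a time,
        // the first step before normalizing them into a calendar event.
        std::string email = "Let's meet on 2024-05-17 at 14:30 in room B.";

        std::regex date_or_time(R"((\d{4}-\d{2}-\d{2})|(\b\d{1,2}:\d{2}\b))");
        auto begin = std::sregex_iterator(email.begin(), email.end(), date_or_time);
        auto end = std::sregex_iterator();

        for (auto it = begin; it != end; ++it)
            std::cout << "possible event data: " << it->str() << '\n';
        return 0;
    }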

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to Algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context.) Problem 1.1-5 in the book Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is 'approximately' the best is good enough." I am interested in its first statement. As I understand it, it asks for a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK.

    So what is the difference between an exact and a good-enough solution? Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes at best complex and unsolvable. Virtually any particle in the universe has some influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means the model is not close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It is worth noting the difference between an exact-solution algorithm and an exact computation result. When considering real-world problems and real-world computing machines, I believe that no solution to a physical problem involving calculation can be exact, because universal physical constants are represented approximately in the computer. Any numbers are represented with limited precision, limited at least by the amount of memory available to the computing machine.

    I can imagine plenty of problems where a good-enough solution, good to some degree, will work: train scheduling, automated trading, satellite orbit calculation, health-care expert systems. In those cases exact solutions cannot be derived due to constraints on computation time, limitations in computer memory, or the nature of the problems.

    I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a small note here: because the question is taken from the book Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that is probably more of theoretical interest.

    So I would like to narrow the question down to: what are the real-world practical problems where only the best (exact) solution algorithm or program will do, and not a good-enough solution? There are problems like breaking cryptographic ciphers where only an exact solution matters in practice, and yet in practice the process of deciphering without knowing the secret should also take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there is no practical need for an instant crack, though it is desired. So the quality "best" can be understood in any sense: exact, fastest, requiring the least memory, having minimal possible network traffic, etc. And still I would like this question to be theoretical if possible.
    In the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that is the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical problems are required to be good enough; in rare cases really very, very good, but still not the best ones. Isn't that so? :) If it is not, can you provide an example, or name any such unsolved problem of practical interest?

    Read the article

  • When should a database table be broken into multiple tables with relations?

    - by GSto
    I have an application that needs to store client data, and part of that is some data about their employer as well. Assuming that a client can only have one employer, and that the chance of people having identical employer data is slim to none, which schema would make more sense to use?

    Schema 1

    Client table:
    id int
    name varchar(255)
    email varchar(255)
    address varchar(255)
    city varchar(255)
    state char(2)
    zip varchar(16)
    employer_name varchar(255)
    employer_phone varchar(255)
    employer_address varchar(255)
    employer_city varchar(255)
    employer_state char(2)
    employer_zip varchar(16)

    Schema 2

    Client table:
    id int
    name varchar(255)
    email varchar(255)
    address varchar(255)
    city varchar(255)
    state char(2)
    zip varchar(16)

    Employer table:
    id int
    name varchar(255)
    phone varchar(255)
    address varchar(255)
    city varchar(255)
    state char(2)
    zip varchar(16)
    patient_id int

    Part of me thinks that since these are clearly two different 'objects' in the real world, separating them into two different tables makes sense. However, since a client will always have an employer, I am also not seeing any real benefit to separating them, and it would make querying data about clients more complex. Is there any benefit or reason to create two tables in a situation like this instead of one?
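
    For concreteness, a sketch of what the "more complex" querying looks like with the second schema (names follow the question; the direction of the foreign key is one reasonable choice, shown here as an alternative to the patient_id column above):

    CREATE TABLE employer (
        id      INT PRIMARY KEY,
        name    VARCHAR(255),
        phone   VARCHAR(255),
        address VARCHAR(255),
        city    VARCHAR(255),
        state   CHAR(2),
        zip     VARCHAR(16)
    );

    CREATE TABLE client (
        id          INT PRIMARY KEY,
        name        VARCHAR(255),
        email       VARCHAR(255),
        address     VARCHAR(255),
        city        VARCHAR(255),
        state       CHAR(2),
        zip         VARCHAR(16),
        employer_id INT REFERENCES employer(id)
    );

    -- Fetching a client together with employer data now needs a join:
    SELECT c.name, c.email, e.name AS employer_name, e.phone AS employer_phone
    FROM client c
    JOIN employer e ON e.id = c.employer_id
    WHERE c.id = 42;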

    Read the article

  • Storing translation data as a JSON column

    - by j0ntech
    We're deciding how to store translations of some descriptions of database items. We could go the traditional way and keep a translations table (and a language table and an object_translation linking table), or we thought it might be better to just have a Description column that contains JSON like the following: { "EN": "This is the translation in English", "EE": "See on kirjeldus eesti keeles" } Are there any serious downsides, i.e. reasons why we shouldn't use this? (I haven't seen it being used anywhere else.)
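
    If the JSON-column route is taken, pulling out one language is straightforward in engines with JSON support; a sketch assuming PostgreSQL's jsonb type (table and column names invented for the example):

    -- Hypothetical table with the JSON description column:
    CREATE TABLE item (
        id          SERIAL PRIMARY KEY,
        description JSONB
    );

    INSERT INTO item (description) VALUES
        ('{"EN": "This is the translation in English",
           "EE": "See on kirjeldus eesti keeles"}');

    -- Fetch the English text, falling back to Estonian if it is missing:
    SELECT COALESCE(description ->> 'EN', description ->> 'EE') AS text
    FROM item
    WHERE id = 1;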

    Read the article

  • Validation and Verification explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully this is my last thread about V&V. I found a text by B. Boehm which I just do not understand well (likely my technical English is not that good): http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that deviations lead only to changes in those derived products (design, code). But he also says it begins with design and ends with acceptance tests (you can check the V-model inside). The thing is, I have accepted the ISO 12207 view that all testing is validation, yet that does not make any sense here: in order to be sure the product complies with the requirements (acceptance test), I need to test it. The text also says that validation problems mean the requirements are bad and need to be changed - which is not what happens with the testing that testers do, who just check correspondence with the requirements.

    Read the article

  • Quality Assurance=inspections, reviews..?

    - by user970696
    Having studied this subject extensively, I find that most books state the following. Quality assurance: a prevention activity; the act of inspection, reviewing, etc. Quality control: testing. There are some exceptions which say that QA deals only with processes (planning, strategy, application of standards, etc.), which is IMHO much closer to real QA, yet I cannot find a good reference in Google Books. I believe that inspections, reviews, and testing are all quality control, as they are about checking products, no matter whether it is the final product or work products. The problem is that so many authors do not agree. I would be grateful for a detailed explanation, ideally with a reference.

    Read the article

  • Is there such a thing these days as programming in the small?

    - by WeNeedAnswers
    With all the programming languages that are out there, what exactly does it mean to program "in the small", and is it still possible without the option of re-purposing to "the large"? The original article that mentions "in the small" dates to 1975 and referred to scripting languages (as glue languages). Maybe I am missing the point, but any language that you can build code components out of, I would regard as able to handle "the large". Is there confusion about what objects are, and do they really figure as mandatory for being able to handle "the large"? Many have argued that this is the true meaning of "in the large" and that the concept of objects is the best fit for the job.

    Read the article

  • Bug severity classification issues

    - by KyleMinn
    In a book I have, there is the following classification of defects:

    Critical: a defect receives a "critical" severity level if one or more critical system functionalities are impaired and there is no workaround.
    High: a defect receives a "high" severity level if some fundamental system functionalities are impaired but a workaround exists.
    Medium: a defect receives a "medium" severity level if no critical functionality is impaired and a workaround exists for the defect.
    Low: a defect receives a "low" severity level if the problem involves a cosmetic feature of the system.

    To be honest, I do not get it. For example, point 2: what if a fundamental but not critical feature is impaired and there is NO workaround? The same for point 3: what if no critical functionality is affected but there is no workaround? E.g. an optional field in the registration form does not work - no workaround, but barely an issue.

    Read the article

  • How to use lists in equivalence partitioning?

    - by KhDonen
    I have read that equivalence partitioning can typically be used for intervals or lists, so I assume it can be used for any set of inputs. Anyway, if the requirement says that the allowed colors are (RED, BLUE, BLACK, GREEN), I cannot treat them like a list, right? I mean, testing one of them would not be enough, because the developers most likely used some switch-case, and thus it is not a real "set" where one value could also represent the others. So how is this meant for lists? Also, something that is not clear to me: I do not think it is always possible to do the initial partitioning first and then design the test cases. What about checking the intersection of two lines Y = MX + C (two inputs)? 1) The lines are parallel: M1 = M2, but C1 must be different from C2. 2) The lines intersect: M1 must be different from M2. 3) The lines are coincident: they are the same. How can I use partitioning here? This is actually taken from a book, and it says that these sets are equivalence classes.
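
    To make the book's partitioning concrete, here is a small sketch with one representative test per class (the classify function is invented for the illustration; it treats each line as Y = MX + C):

    #include <cassert>

    enum class Relation { Parallel, Intersecting, Coincident };

    // Classify two lines y = m*x + c by their slopes and intercepts.
    Relation classify(double m1, double c1, double m2, double c2) {
        if (m1 == m2)
            return (c1 == c2) ? Relation::Coincident : Relation::Parallel;
        return Relation::Intersecting;
    }

    int main() {
        // One representative input per equivalence class:
        assert(classify(2.0, 1.0, 2.0, 5.0) == Relation::Parallel);      // same M, different C
        assert(classify(2.0, 1.0, 3.0, 1.0) == Relation::Intersecting);  // different M
        assert(classify(2.0, 1.0, 2.0, 1.0) == Relation::Coincident);    // same M, same C
        return 0;
    }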

    Read the article
