Search Results

Search found 1783 results on 72 pages for 'computation theory'.

Page 5/72 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • New book in the style of Advanced Programming Language Design by R. A. Finkel [closed]

    - by mfellner
    I am currently researching visual programming language design for a university paper and came across Advanced Programming Language Design by Raphael A. Finkel from 1996. Other, older discussions in the same vein on Stackoverflow have mentioned Language Implementation Patterns by Terence Parr and Programming Language Pragmatics* by Michael L. Scott. I was wondering if there is even more (and especially up-to-date) literature on the general topic of programming language design. *) http://www.cs.rochester.edu/~scott/pragmatics/

    Read the article

  • Why some consider static analysis a testing and some do not?

    - by user970696
    While preparing for the ISTQB certification, I found that they actually call static analysis "static testing", while some engineering books distinguish between static analysis and testing, treating testing as the dynamic activity. I tend to think that static analysis is not testing in the true sense, because it does not test; it checks/verifies. But I would love to hear the opinion of the true experts here. Thank you

    Read the article

  • What is Granularity?

    - by tonyrogerson
    Granularity defines “the lowest level of detail”; but what is meant by “the lowest level of detail”? Consider the Transactions table below:
    create table Transactions (
        TransactionID int not null primary key clustered,
        TransactionDate date not null,
        ClientID int not null,
        StockID int not null,
        TransactionAmount decimal(28, 2) not null,
        CommissionAmount decimal(28, 5) not null
    )
    A Client can Trade in one or many Stocks on any date – there is no uniqueness to ClientID, Stock and TransactionDate...(read more)
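    A tiny illustration (made-up rows, not from the article) of how the grain changes when you aggregate: the raw table is one row per trade, while grouping by client, stock and date gives the coarser grain of one row per client per stock per day.

    from collections import defaultdict

    transactions = [
        # (ClientID, StockID, TransactionDate, TransactionAmount)
        (1, 10, "2012-05-01", 100.00),
        (1, 10, "2012-05-01", 250.00),   # same client, stock and date: allowed, there is no uniqueness
        (1, 20, "2012-05-01", 75.00),
    ]

    # Coarser grain: one row per (client, stock, date).
    daily_totals = defaultdict(float)
    for client, stock, date, amount in transactions:
        daily_totals[(client, stock, date)] += amount

    print(dict(daily_totals))
    # {(1, 10, '2012-05-01'): 350.0, (1, 20, '2012-05-01'): 75.0}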

    Read the article

  • Data validation best practices: how can I better construct user feedback?

    - by Cory Larson
    Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. I know that this topic is subjective and argumentative. StackOverflow might not be the proper channel for diving into this subject, but like I've mentioned, we all run into this at some point or another. There are so many StackExchange sites now; if there is a better one, feel free to share! Basically, I'm looking for good resources on data validation and the user feedback that results from it, at a theoretical level. Topics and questions I'm interested in are:
    Content: Should I be describing what the user did correctly or incorrectly, or simply what was expected? How much detail can the user read before they get annoyed? (e.g. Is "Username cannot exceed 20 characters." enough, or should it be described more fully, such as "The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters."?)
    Grammar: How do I decide between phrases like "must not," "may not," or "cannot"?
    Delivery: This can depend on the project, but how should the information be delivered to the user? Should it be obtrusive (e.g. JavaScript alerts) or friendly? Should it be displayed prominently? Immediately (i.e. without confirmation steps, etc.)?
    Logging: Do you bother logging validation errors?
    Internationalization: Some cultures prefer or better understand directness over subtlety and vice versa (e.g. "Don't do that!" vs. "Please check what you've done."). How do I cater to the majority of users?
    I may edit this list as I think more about the topic, but I'm genuinely interested in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me.
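    For what it's worth, a minimal sketch of the "state what was expected" style discussed above; the limits and wording here are made-up examples, not recommendations.

    def validate_username(username: str) -> list[str]:
        problems = []
        if not username:
            problems.append("Please enter a username.")
        elif not (6 <= len(username) <= 30):
            # State the rule the input must satisfy rather than scolding the user.
            problems.append("Usernames must be between 6 and 30 characters long.")
        return problems

    print(validate_username("bob"))   # ['Usernames must be between 6 and 30 characters long.']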

    Read the article

  • Is finding graph minors without single node pinch points possible?

    - by Alturis
    Is it possible to robustly find all the graph minors within an arbitrary node graph where the pinch points are generally not single nodes? I have read some other posts on here about how to break up your graph into a Hamiltonian cycle and then find the graph minors from that, but it seems such an algorithm would require that each "room" have "doorways" consisting of single nodes. To explain a bit more, a visual aid is necessary. Let's say the nodes below are an example of the typical node graph. What I am looking for is a way to automatically find the different colored regions of the graph (or graph minors).
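    This is not graph-minor computation in the strict sense, but one practical stand-in for "finding the rooms" when the doorways are wider than a single node is community detection over the graph. A minimal sketch using networkx (the two-clique graph below is a made-up example):

    import networkx as nx
    from networkx.algorithms import community

    # Two dense "rooms" (4-cliques) joined by a two-edge "doorway",
    # so no single node is a pinch point.
    G = nx.complete_graph(4)                       # room A: nodes 0-3
    G.add_edges_from((i, j) for i in range(4, 8)   # room B: nodes 4-7
                     for j in range(i + 1, 8))
    G.add_edges_from([(0, 4), (1, 5)])             # the wide doorway

    rooms = community.greedy_modularity_communities(G)
    print([sorted(r) for r in rooms])              # expect roughly [[0, 1, 2, 3], [4, 5, 6, 7]]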

    Read the article

  • Theoretically bug-free programs

    - by user2443423
    I have read a lot of articles which state that code can't be bug-free, and they refer to these theorems: the halting problem, Gödel's incompleteness theorem, and Rice's theorem. Actually, Rice's theorem looks like a consequence of the halting problem, and the halting problem is closely related to Gödel's incompleteness theorem. Does this imply that every program will have at least one unintended behavior? Or does it mean that it's not possible to write code to verify it? What about recursive checking? Let's assume that I have two programs. Both of them have bugs, but they don't share the same bug. What will happen if I run them concurrently? And of course most of the discussions talk about Turing machines. What about linear-bounded automata (real computers)?
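    For reference, a minimal sketch of the diagonalization argument behind the halting problem; the halts oracle here is hypothetical, which is exactly the point.

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually terminates."""
        raise NotImplementedError("cannot be implemented for all programs")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about running
        # the program on its own source.
        if halts(program, program):
            while True:          # loop forever if the oracle says "it halts"
                pass
        return "halted"          # halt if the oracle says "it loops"

    # Asking whether paradox(paradox) halts contradicts the oracle either way,
    # which is why no such total `halts` function can exist.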

    Read the article

  • Is code that terminates on a random condition guaranteed to terminate?

    - by Simon Campbell
    If I had code which terminated based on whether a random number generator returned a particular result (as follows), would it be 100% certain that the code would terminate if it was allowed to run forever?
    while random(MAX_NUMBER) != 0:   # random returns a random number between 0 and MAX_NUMBER
        print('Hello World')
    I am also interested in any distinctions between purely random and the deterministic random that computers generally use. Assume the seed is not able to be known in the case of the deterministic random. Naively it could be suggested that the code will exit; after all, every number has some possibility and all of time for that possibility to be exercised. On the other hand it could be argued that there is the random chance it may never meet the exit condition -- the generator could generate 1 'randomly' until infinity. (I suppose one would question the validity of the random number generator if it was a deterministic generator returning only 1's 'randomly', though.)
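    Assuming random() is an independent, uniform draw over 0..MAX_NUMBER, the probability of still running after n iterations is (MAX_NUMBER/(MAX_NUMBER+1))^n, which tends to 0, so the loop terminates "almost surely" and the expected number of draws is MAX_NUMBER+1. A quick simulation of that idealized case (a sketch, not the poster's code):

    import random

    MAX_NUMBER = 10

    def trial():
        """Count how many non-zero draws happen before a uniform draw from 0..MAX_NUMBER hits 0."""
        draws = 0
        while random.randint(0, MAX_NUMBER) != 0:
            draws += 1
        return draws

    samples = [trial() for _ in range(100_000)]
    # Average total draws (including the final 0) should be close to MAX_NUMBER + 1.
    print(sum(samples) / len(samples) + 1)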

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning html/lat/long from my MySQL database.
    Currently:
    • A user picks a business category, e.g. "Video Production".
    • An ajax call is sent to a CodeIgniter controller.
    • The controller then queries the db and returns the following data via JSON: the lat/long of the marker and the HTML for the popup window (this is approximately 34 rows in the database, across two tables, per business).
    • The ajax call receives this data and then plots the marker along with the html onto the map.
    The data that is returned from the controller is one big json object. This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very taxing on the server. My question is: would it be more beneficial to modify the process flow as follows?
    • A user picks a business category, e.g. "Video Production".
    • An ajax call is sent to a CodeIgniter controller.
    • The controller then queries the database for the location-based information only: lat/long and level (used to change the marker icon color). This would be a single row per business with several columns.
    • The ajax call receives this data and then plots the marker on the map.
    • When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
    • The controller queries the database for the HTML and additional data based on business_id.
    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data for each business, I would submit only minimal location information and then re-query for the rest when each business marker is clicked.
    Potential downsides: longer load times when a user clicks a marker icon; more code??; more queries to the database.
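    Purely to illustrate the trade-off being proposed, here are hypothetical payload shapes (field names are made up, not taken from the actual application):

    # Current approach: one large JSON response per category, markers plus popup HTML.
    eager_payload = [
        {"business_id": 17, "lat": 51.5007, "lng": -0.1246,
         "popup_html": "<div>...34 rows' worth of details...</div>"},
        # ... repeated for ~40 businesses in the category
    ]

    # Proposed approach: minimal marker data up front...
    marker_payload = [
        {"business_id": 17, "lat": 51.5007, "lng": -0.1246, "level": 2},
    ]

    # ...and a second, on-demand request when a marker is clicked, e.g.
    # GET /business/details/17  ->  {"business_id": 17, "popup_html": "<div>...</div>"}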

    Read the article

  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form of A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here, or vice versa. (Opinions are of course welcome, but references / links would be better!) Context: Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model. That's the software / algorithms / programming side of things. When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but using it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21. Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems). Which leads me to the headlined question... Question:To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very scientific, but I think it's a common problem, and seasoned developers and programmers hopefully will have some advice to avoid the problem I mention in the title. Btw., what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all cost. By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and what this UI represents, some data whose values are partly displayed in the UI, and I receive and handle some external triggers (represented by callbacks from sensors). I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but it was enough to run a few brute-force tests in the UI and I quickly realized that there are still gaps in the behavior... I patched them. However, as each component depends on and behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events, state changes, etc. I have several components and each behaves like this: trigger received on input - trigger and its sender analyzed - output something (a message, a state change) based on the analysis. The problem is, this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or desirable scenario (the phone crashes, the battery is empty, the phone turns off suddenly). This will introduce an invalid situation into the system, from which the system potentially COULD NOT recover. I see this (although people do not realize this is the problem) in many of my competitors' apps that are on the Apple store; customers write things like "I added three documents, and after going there and there, I cannot open them, even if I see them." or "I recorded videos every day, but after recording a too long video, I cannot turn off captions on them, and the button for captions doesn't work". These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has an FSM breakdown. So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself? EDIT: I am talking in the context of one view controller's view on the phone, I mean one part of the application. I understand the MVC pattern, I have separate modules for distinct functionality; everything I describe is relevant to one canvas on the UI.
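    One common defence, shown as a minimal sketch (not the poster's architecture): make the transition table explicit and treat any event that is not legal in the current state as a signal to fall back to a known-good state and resynchronise, instead of assuming the in-memory state is still true. The states and events below are hypothetical.

    VALID_TRANSITIONS = {
        ("idle", "start_recording"): "recording",
        ("recording", "stop_recording"): "idle",
        ("recording", "sensor_lost"): "recovering",
        ("recovering", "sensor_back"): "idle",
    }
    SAFE_STATE = "idle"

    class Session:
        def __init__(self):
            self.state = SAFE_STATE

        def handle(self, event: str) -> None:
            next_state = VALID_TRANSITIONS.get((self.state, event))
            if next_state is None:
                # Unknown (state, event) pair: the world changed behind our back
                # (crash, dead battery, deleted data), so reset to a known-good
                # state rather than blocking forever in a state that is no longer valid.
                self.state = SAFE_STATE
                return
            self.state = next_state

    s = Session()
    s.handle("start_recording")   # idle -> recording
    s.handle("sensor_back")       # not legal while recording -> reset to the safe state
    print(s.state)                # idle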

    Read the article

  • What would a database look like if it were normalized to be completely abstracted? Let's call it Max(n) normal form

    - by Doug Chamberlain
    edit: By simplest form I was not implying that it would be easy to understand. For instance, developing in low-level assembly language is the simplest way you can develop code, but it is far from the easiest. Essentially, what I am asking is: in math you can simplify a fraction to a point where it can no longer be simplified. Can the same be true for a database, and what would a database look like in its simplest form?

    Read the article

  • How can I make sure that I'm actually learning how to program rather than simply learning the details of a language?

    - by Ryan
    I often hear that a real programmer can easily learn any language within a week. Languages are just tools for getting things done, I'm told. Programming is the ultimate skill that must be learned and mastered. How can I make sure that I'm actually learning how to program rather than simply learning the details of a language? And how can I develop programming skills that can be applied towards all languages instead of just one?

    Read the article

  • Why not expose a primary key

    - by Angelo Neuschitzer
    In my education I have been told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it to be a security problem (because an attacker could attempt to read stuff not their own). Now I have to check if the user is allowed to access anyway, so is there a different reason behind it? Also, as my users have to access the data anyway I will need to have a public key for the outside world somewhere in between. Now that public key has the same problems as the primary key, doesn't it?
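    A minimal sketch of the usual mitigation, with hypothetical names and in-memory storage: expose an opaque public identifier instead of the sequential primary key, and still authorize every lookup, since an unguessable id is not authorization.

    import uuid

    documents = {}        # internal_pk -> row
    public_index = {}     # public_id -> internal_pk

    def create_document(owner_id: int, body: str) -> str:
        internal_pk = len(documents) + 1          # e.g. an auto-increment PK, never shown outside
        public_id = uuid.uuid4().hex              # what the outside world sees
        documents[internal_pk] = {"owner_id": owner_id, "body": body}
        public_index[public_id] = internal_pk
        return public_id

    def fetch_document(requesting_user: int, public_id: str) -> str:
        row = documents[public_index[public_id]]
        # The access check is still required; the opaque id only avoids leaking
        # record counts and ordering, it does not replace authorization.
        if row["owner_id"] != requesting_user:
            raise PermissionError("not your document")
        return row["body"]

    doc = create_document(owner_id=1, body="hello")
    print(fetch_document(1, doc))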

    Read the article

  • Quality Assurance activities

    - by MasloIed
    I asked this before but deleted the question because it was a bit misunderstood. If quality control is the actual testing, what are the commonest true quality assurance activities? I have read that it is verification (reviews, inspections...), but that does not make much sense to me, as it looks more like quality control, as mentioned here: DEPARTMENT OF HEALTH AND HUMAN SERVICES ENTERPRISE PERFORMANCE LIFE CYCLE FRAMEWORK Practices guide: Verification - “Are we building the product right?” Verification is a quality control technique that is used to evaluate the system or its components to determine whether or not the project’s products satisfy defined requirements. During verification, the project’s processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. Some Verification activities may include items such as:
    • Verification of requirement against defined specifications
    • Verification of design against defined specifications
    • Verification of product code against defined standards
    • Verification of terms, conditions, payment, etc., against contracts

    Read the article

  • bug: deviation from requirements vs deviation from expectations

    - by user970696
    I am not clear on this one. No matter the terminology, in the end a software fault/bug causes (according to a lot of sources):
    • Deviation from requirements
    • Deviation from expectations
    But if the expectations are not in the requirements, then a stakeholder could see a bug everywhere, because he expected it to be like this or that... So how can I really know? I did read that a specification can miss things, and then of course something is expected but not specified (by mistake).

    Read the article

  • Is verification and validation part of testing process?

    - by user970696
    Based on many sources, I do not believe the simple definition that the aim of testing is to find as many bugs as possible - we test to ensure that it works, or that it does not. E.g. the following are goals of testing from ISTQB:
    • Determine that (software products) satisfy specified requirements (I think this is verification)
    • Demonstrate that (software products) are fit for purpose (I think that is validation)
    • Detect defects
    I would agree that testing is verification, validation and defect detection. Is that correct?

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out the fact that it would be impossible to test a multiplication function f(x,y) = x*y for any large values of x and y across the entire ranges of x and y. My question concerns his misc. remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens or if there are any statistics on this?

    Read the article

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a student of computer science trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online course, book, web tutorial), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome and scientific applications of all the above. Heck I even love Newton's laws and conical sections. But when it comes to algorithms, I'm just not astounded. They are not insightful in new ways about human cognition at all. I was expecting algorithms to be shattering preconceptions and mind-altering but time and time again they fail miserably. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

    Read the article

  • Is there a real difference between dynamic analysis and testing?

    - by user970696
    Often testing is regarded as dynamic analysis of software. Yet while I was writing my thesis, the reviewer noted to me that dynamic analysis is about analyzing the program behind the scenes - e.g. profiling - and that it is not the same as testing, because it is "analysis" which looks inside and observes. I know that "static analysis" is not testing; should we then also separate this "dynamic analysis" from testing? Some books do refer to dynamic analysis in this sense. I would maybe say that testing is one means of dynamic analysis?

    Read the article

  • What are the processes of true Quality assurance?

    - by user970696
    Having read that Quality Assurance (QA) is focused on processes (while Quality Control (QC) is focused on the product), I find the books often mention that QA is the verification process - doing peer reviews, inspections etc. I still tend to think these are also QC, as they check intermediate products. Elsewhere I have read that a QA activity is e.g. choosing the right bug tracker. That sounds better to me in terms of process improvement. The question that the close-voting person obviously missed is pretty clear: what are the activities that true QA should perform? I would appreciate a reference, as I am working on a thesis dealing with all these discrepancies and inconsistencies in the software quality world.

    Read the article

  • Is ORM an Anti-Pattern?

    - by derphil
    I had a very stimulating and interesting discussion with a colleague about ORM and its pros and cons. In my opinion, an ORM is useful only in the rarest cases, at least in my experience. But I don't want to list my own arguments at this time. So I ask you: what do you think about ORM? What are the pros and the cons? P.S. I posted this "question" yesterday on Stackoverflow, but some of the users think that it is better posted here.

    Read the article

  • Verification vs validation again, does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and ask a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp According to ISO 12207, testing is done in validation:
    • Prepare Test Requirements, Cases and Specifications
    • Conduct the Tests
    In verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery." and "The software components and units of each software item have been completely and correctly integrated into the software item." I am not sure how to verify that without testing, but testing is not listed there as a technique. From IEEE: Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] At the end of the development process? That would mean UAT... So the question is: what testing (unit, integration, system, UAT) will be considered verification or validation? I do not understand why some say dynamic verification is testing, while others say that only validation is. An example: I am testing an application. The system requirements say there are two fields with a max. length of 64 characters, and a Save button. The use case says: the user will fill in first and last name and save. When checking the presence of the fields and the Save button, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Functional testing in the verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet it did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. To me that is more high level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirements verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on the V-model graph it describes system testing against a high-level description as verification, mentioning that it includes all kinds of testing, like functional, load etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it is, I am unable to tell where the testers' work belongs.

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's Algorithm Design Manual), i.e. where we have an array structure and each time we are out of space we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, by describing that half the elements move once, a quarter of the elements twice, and so on, the total number of movements M is given by M = 1·(n/2) + 2·(n/4) + 3·(n/8) + ..., i.e. the sum of i·n/2^i for i from 1 to lg n. This seems to me to add more copies than actually happen. E.g. if we have the following:
    array of 1 element
    +--+
    |a |
    +--+
    double the array (2 elements)
    +--++--+
    |a ||b |
    +--++--+
    double the array (4 elements)
    +--++--++--++--+
    |a ||b ||c ||c |
    +--++--++--++--+
    double the array (8 elements)
    +--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x |
    +--++--++--++--++--++--++--++--+
    double the array (16 elements)
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
    |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
    +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
    We have the x elements copied 4 times, the c elements copied 4 times, the b element copied 4 times and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2)+2*(16/4)+3*(16/8)+4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-limit approximation? Or am I misunderstanding something here?
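    As a sanity check, a straight simulation of the usual double-when-full policy performs fewer copies than the sum in the question, which is consistent with reading the formula as an upper bound rather than an exact count (a sketch, assuming that growth policy):

    def copies_when_doubling(n: int) -> int:
        capacity, size, copies = 1, 0, 0
        for _ in range(n):
            if size == capacity:        # full: allocate double and copy everything over
                copies += size
                capacity *= 2
            size += 1
        return copies

    n = 16
    bound = sum(i * n // 2**i for i in range(1, n.bit_length()))   # 1*(n/2) + 2*(n/4) + ...
    print(copies_when_doubling(n), bound)                          # 15 actual copies vs. a bound of 26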

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically: The 1st would be represented by a line. The 2nd would be represented by a square. The 3rd would be represented by a cube. Simple enough until we get to the 4th -- It is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time. The Question: Now, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: How does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc... arrays in many different programming languages, but I want to know how that works.
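    A minimal sketch of the standard answer for arrays (not specific to any one language): a 4-D array is either nested containers or, equivalently, one flat block of d1*d2*d3*d4 cells addressed by row-major index arithmetic.

    d1, d2, d3, d4 = 2, 3, 4, 5

    # Nested view: d1 of these, each a (d2 x d3 x d4) "cube".
    nested = [[[[0 for _ in range(d4)] for _ in range(d3)]
               for _ in range(d2)] for _ in range(d1)]

    # Flat view: the same d1*d2*d3*d4 cells in one contiguous list,
    # which is essentially what sits in memory.
    flat = [0] * (d1 * d2 * d3 * d4)

    def offset(i, j, k, l):
        # Row-major layout: the last index varies fastest.
        return ((i * d2 + j) * d3 + k) * d4 + l

    flat[offset(1, 2, 3, 4)] = 42
    nested[1][2][3][4] = 42
    print(flat[offset(1, 2, 3, 4)], nested[1][2][3][4])   # 42 42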

    Read the article
