Search Results

Search found 42001 results on 1681 pages for 'type theory'.


  • SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator

    - by pinaldave
    Today is Monday, so let us start the week with an interesting puzzle. Yesterday I also posted a quick question here: SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers. Run the following code:

        SELECT SUM(data) FROM (SELECT NULL AS DATA) t

    It throws the following error:

        Msg 8117, Level 16, State 1, Line 1
        Operand data type void type is invalid for sum operator.

    I can easily fix this by using the ISNULL function, as displayed below:

        SELECT SUM(data) FROM (SELECT ISNULL(NULL,0) AS DATA) t

    The above script will not throw an error. However, there is one more method by which this can be fixed. Can you come up with another method that will not generate the error? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
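
    A hedged sketch of one commonly suggested alternative (not necessarily the author's intended answer): give the NULL a concrete type with CAST, so SUM no longer receives an untyped (void) operand.

        -- Assumption: typing the NULL explicitly avoids Msg 8117;
        -- SUM over the single NULL row then simply returns NULL.
        SELECT SUM(data) FROM (SELECT CAST(NULL AS INT) AS data) t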

    Read the article

  • Entity-type-specific updates in an entity component system

    - by Nathan
    I am currently familiarizing myself with the entity component paradigm. As an example, take a collision system that detects whether entities collide and, if they do, makes them explode. So the collision system has to test for collisions based on the position component and then set the state of the colliding entities to exploding. But what if the "effect" (setting the state to exploding) differs between entities? For example, a ship fades out, while for an asteroid a particle system must be created. Since entities and components are only data, this must happen in some system. The collision system could do it, but then it would have to switch over the entity type, which in my opinion is cumbersome and difficult to extend. So how do I trigger "entity-type-dependent" updates on an entity? (One common answer is sketched below.)
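
    One answer often given is to make the effect itself data: attach a component describing how the entity dies, and let a dedicated system interpret it, so no system ever switches on entity type. A minimal Python sketch under that assumption; all names (DeathEffect, DeathSystem, the "fade"/"particles" kinds) are illustrative, not from the original question:

        # Each entity is a plain bag of components; systems stay generic.
        class Entity:
            def __init__(self, **components):
                self.__dict__.update(components)

        class DeathEffect:
            def __init__(self, kind):
                self.kind = kind  # e.g. "fade" for ships, "particles" for asteroids

        class DeathSystem:
            # Interprets the DeathEffect component data, not the entity's type.
            def update(self, entities):
                for e in entities:
                    if getattr(e, "state", None) == "exploding" and hasattr(e, "death_effect"):
                        if e.death_effect.kind == "fade":
                            print("start fade-out at", e.position)
                        elif e.death_effect.kind == "particles":
                            print("spawn particle system at", e.position)

        ship = Entity(position=(0, 0), state="exploding", death_effect=DeathEffect("fade"))
        asteroid = Entity(position=(5, 2), state="exploding", death_effect=DeathEffect("particles"))
        DeathSystem().update([ship, asteroid])

    Adding a new entity type then means authoring a new component value, not extending a switch statement inside the collision system.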

    Read the article

  • Theoretically bug-free programs

    - by user2443423
    I have read a lot of articles which state that code can't be bug-free, and they are talking about these theorems:

    - the halting problem
    - Gödel's incompleteness theorem
    - Rice's theorem

    Actually, Rice's theorem looks like an implication of the halting problem, and the halting problem is closely related to Gödel's incompleteness theorem. Does this imply that every program will have at least one unintended behavior? Or does it mean that it's not possible to write code to verify it? What about recursive checking? Let's assume that I have two programs. Both of them have bugs, but they don't share the same bug. What will happen if I run them concurrently? And of course, most such discussions talk about Turing machines. What about linear-bounded automata (real computers)?

    Read the article

  • Is code that terminates on a random condition guaranteed to terminate?

    - by Simon Campbell
    If I had code that terminated based on the result returned by a random number generator (as follows), would it be 100% certain to terminate if it were allowed to run forever?

        while (random(MAX_NUMBER) != 0):  # random returns a number between 0 and MAX_NUMBER
            print('Hello World')

    I am also interested in any distinctions between purely random generators and the deterministic pseudo-random generators that computers generally use. Assume the seed is not knowable in the deterministic case. Naively it could be suggested that the code will exit; after all, every number has some possibility, and there is all of time for that possibility to be exercised. On the other hand, it could be argued that there is a random chance it may never meet the exit condition: the generator could generate 1 'randomly' until infinity. (I suppose one would question the validity of a deterministic random number generator that returned only 1s 'randomly', though.)
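
    For intuition, here is a small numeric sketch, assuming random(MAX_NUMBER) is uniform over {0, ..., MAX_NUMBER}: the probability the loop is still running after n iterations is (MAX_NUMBER/(MAX_NUMBER+1))^n, which is positive for every finite n but tends to 0, so the loop terminates "almost surely" rather than with certainty.

        # Probability the loop survives n iterations, for an illustrative MAX_NUMBER = 9.
        MAX_NUMBER = 9
        p_continue = MAX_NUMBER / (MAX_NUMBER + 1)  # P(one draw != 0)
        for n in (10, 100, 1000):
            print(n, p_continue ** n)  # shrinks toward 0 but never reaches it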

    Read the article

  • Why prefer a wildcard to a type discriminator in a Java API (Re: Effective Java)

    - by Michael Campbell
    In the generics section of Bloch's Effective Java (which handily is the "free" chapter available to all: http://java.sun.com/docs/books/effective/generics.pdf), he says: if a type parameter appears only once in a method declaration, replace it with a wildcard (see pages 31-33 of that pdf). The signature in question is:

        public static void swap(List<?> list, int i, int j)

    vs.

        public static <E> void swap(List<E> list, int i, int j)

    He then proceeds to use a private static "helper" function with an actual type parameter to perform the work; the helper function's signature is EXACTLY that of the second option (see the sketch below). Why is the wildcard preferable, since you need to NOT use a wildcard to get the work done anyway? I understand that in this case he's modifying the List, and you can't add to a collection through an unbounded wildcard, so why use it at all?
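
    For reference, here is the pattern Bloch describes, sketched as a self-contained Java class (the helper name swapHelper follows the book):

        import java.util.Arrays;
        import java.util.List;

        public class SwapDemo {
            // Public API: the wildcard tells callers "works on a List of anything".
            public static void swap(List<?> list, int i, int j) {
                swapHelper(list, i, j); // the compiler captures ? as some concrete E
            }

            // Private helper: the type parameter E lets us both read and write the list.
            private static <E> void swapHelper(List<E> list, int i, int j) {
                list.set(i, list.set(j, list.get(i))); // set returns the old element
            }

            public static void main(String[] args) {
                List<String> xs = Arrays.asList("a", "b", "c");
                swap(xs, 0, 2);
                System.out.println(xs); // [c, b, a]
            }
        }

    The usual rationale is that the wildcard keeps the public signature simpler for callers, while the capture-helper trick stays a private implementation detail.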

    Read the article

  • Abstract Data Type and Data Structure

    - by mark075
    These terms are quite difficult for me to understand. I searched on Google and read a little on Wikipedia, but I'm still not sure. I've determined so far that:

    - An abstract data type (ADT) is a definition of a new type; it describes the type's properties and operations.
    - A data structure is an implementation of an ADT. Many ADTs can be implemented as the same data structure.

    If I understand correctly, an array as an ADT means a collection of elements, and as a data structure, how that collection is stored in memory. A stack is an ADT with push and pop operations, but can we speak of a stack data structure if I mean that I used a stack implemented as an array in my algorithm? And why isn't a heap an ADT? It can be implemented as a tree or an array.
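
    To make the distinction concrete, here is a minimal Python sketch (names illustrative): the ADT is the push/pop contract, while the backing array is the data structure chosen to implement it; a linked list would implement the same ADT.

        class ArrayStack:
            # The ADT: push and pop, LIFO order.
            # The data structure: a contiguous array (a Python list).
            def __init__(self):
                self._items = []
            def push(self, x):
                self._items.append(x)
            def pop(self):
                return self._items.pop()

        s = ArrayStack()
        s.push(1); s.push(2)
        print(s.pop())  # 2 - the LIFO behavior is the ADT's contract, not the array's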

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning HTML/lat/long from my MySQL database. Currently:

    1. A user picks a business category, e.g. "Video Production".
    2. An ajax call is sent to a CodeIgniter controller.
    3. The controller then queries the db and returns the following data via JSON: the lat/long of the marker and the HTML for the popup window (this is approximately 34 rows in the database, across two tables, per business).
    4. The ajax call receives this data and then plots the marker along with the HTML onto the map.

    The data that is returned from the controller is one big JSON object. This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (100s of businesses) can get very, very taxing on the server. My question is: would it be more beneficial to modify the process flow as such (see the payload sketch below)?

    1. A user picks a business category, e.g. "Video Production".
    2. An ajax call is sent to a CodeIgniter controller.
    3. The controller then queries the database for the location-based information only: lat/long and level (used to change the marker icon color). This would be a single row, with several columns, per business.
    4. The ajax call receives this data and then plots the marker on the map.
    5. When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
    6. The controller queries the database for the HTML and additional data, based on business_id.

    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data for each business, the first response contains only minimal location information, and the rest is re-queried when each business marker is clicked. Potential downsides:

    - longer load times when a user clicks a marker icon
    - more code??
    - more queries to the database
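
    A sketch of the two payload shapes being compared (field names are illustrative, not from the original post):

        # Current: everything shipped up front for every marker in the category.
        full_payload = {"business_id": 7, "lat": 51.5074, "lng": -0.1278,
                        "html": "<div class='popup'>...detail from ~34 rows...</div>"}

        # Proposed: a lean first response; the HTML is fetched only on marker click.
        lean_payload = {"business_id": 7, "lat": 51.5074, "lng": -0.1278, "level": 2}

    The trade-off is classic lazy loading: a much smaller initial response for the whole category, at the cost of one extra round trip per clicked marker.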

    Read the article

  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or on each other's work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form "A and B here suggest that Moore might have borrowed or been influenced by C and D from Knuth here", or vice versa. (Opinions are of course welcome, but references / links would be better!)

    Context: Until fairly recently, I had been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe Knuth might perhaps have chosen Forth as his computing model. That's the software / algorithms / programming side of things.

    When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but running it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21.

    Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare-metal hardware (i.e. hardware without the clutter of operating systems). Which leads me back to the headlined question: to what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or on each other's work on algorithms?

    Read the article

  • Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found

    To associate an assembly with a strong-name key file so that it can be stored in the GAC, we should include the following line after all the imports and before defining the namespace.

        For VB.NET: <Assembly: AssemblyKeyFile("c:\path\mykey.snk")>
        For C#:     [assembly: AssemblyKeyFile(@"c:\path\mykey.snk")]

    However, you might encounter the following two errors when creating the assembly for the GAC:

    1. The type or namespace name 'AssemblyKeyFileAttribute' could not be found (are you missing a using directive or an assembly reference?)
    2. The type or namespace name 'AssemblyKeyFile' could not be found (are you missing a using directive or an assembly reference?)

    How to resolve these errors: just include the System.Reflection namespace (using System.Reflection; in C#, or Imports System.Reflection in VB.NET). That resolves both errors.

    Read the article

  • How to change the default editor of a specific file type in JDeveloper

    - by [email protected]
    When you open a file in JDeveloper, the mode that is used as the default might not be what you as a developer want. If, for example, every time you open a .jsp(x) file you click on the Source tab at the bottom of the window so that you can edit the file in source-code mode, you may want to consider changing the default editor for that file type. This is easy to do in the JDeveloper tool preferences and can be a time saver in the long run, since some editors can take a while to start up; if you don't need them often, that is just lost time. Here are the steps:

    1. From the JDeveloper menu, select Tools->Preferences...
    2. Select "File Types" in the tree component on the left side of the preferences dialog.
    3. Click on the "Default Editors" tab.
    4. Scroll to the file type you want to change.
    5. In the details section at the bottom of the dialog, use the "Default Editor" select list to change the default to your liking.

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very academic, but I think it describes a common problem, and seasoned developers and programmers will hopefully have some advice for avoiding the problem I mention in the title. By the way, what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all costs.

    By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and to what it represents, some data whose values are partly displayed in the UI, and some external triggers that I receive and handle (represented by callbacks from sensors). I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application.

    As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but a few brute-force tests in the UI quickly showed that there were still gaps in the behavior, so I patched them. However, as each component depends on and behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events and state changes, etc. I have several components, and each behaves like this: a trigger is received on input; the trigger and its sender are analyzed; something (a message, a state change) is output based on the analysis.

    The problem is that this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or the desirable scenario (the phone crashes, the battery empties, the phone turns off suddenly). This will introduce an invalid situation into the system, from which the system potentially COULD NOT recover. I see this (although people do not realize this is the problem) in many of my competitors' apps on the Apple store. Customers write things like "I added three documents, and after going there and there, I cannot open them, even if I see them," or "I recorded videos every day, but after recording a too long video, I cannot turn off captions on them, and the button for captions doesn't work." These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has had an FSM breakdown.

    So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself? (One defensive pattern is sketched below.) EDIT: I am talking in the context of one view controller's view on the phone; I mean one part of the application. I understand the MVC pattern, and I have separate modules for distinct functionality; everything I describe is relevant to one canvas in the UI.
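
    A minimal sketch of one defensive pattern, assuming persisted state can be corrupted by crashes or power loss (all names are hypothetical): on launch, validate whatever state survived against the set of states the machine actually allows, and fall back to a known-good default instead of trusting it.

        # Hypothetical recovery guard for a single view controller's FSM.
        VALID_STATES = {"idle", "recording", "reviewing"}

        def restore_state(persisted_state):
            # Never resume into a state the diagram does not allow; recover to a
            # safe default so the machine cannot start out wedged.
            if persisted_state not in VALID_STATES:
                return "idle"
            return persisted_state

        print(restore_state("recording"))   # resumes normally
        print(restore_state("c0rrupt3d"))   # falls back to "idle"

    The same idea extends to transitions: treat any (state, event) pair that is not in the transition table as a signal to reset to a safe state, rather than as an impossible case.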

    Read the article

  • What would a database look like if it were normalized to be completely abstracted? Let's call it Max(n) normal form

    - by Doug Chamberlain
    Edit: by simplest form I was not implying that it would be easy to understand. For instance, developing in low-level assembly language is the simplest way to develop code, but it is far from the easiest. Essentially, what I am asking is: in math you can simplify a fraction to a point where it can no longer be simplified. Can the same be true for a database, and what would a database look like in its simplest form?

    Read the article

  • How can I make sure that I'm actually learning how to program rather than simply learning the details of a language?

    - by Ryan
    I often hear that a real programmer can easily learn any language within a week. Languages are just tools for getting things done, I'm told. Programming is the ultimate skill that must be learned and mastered. How can I make sure that I'm actually learning how to program rather than simply learning the details of a language? And how can I develop programming skills that can be applied towards all languages instead of just one?

    Read the article

  • Why not expose a primary key

    - by Angelo Neuschitzer
    In my education I have been told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it to be a security problem (because an attacker could attempt to read stuff not their own). Now, I have to check whether the user is allowed access anyway, so is there a different reason behind it? Also, since my users have to access the data anyway, I will need a public key for the outside world somewhere in between. That public key then has the same problems as the primary key, doesn't it?
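
    One common compromise (a sketch, not the only answer): expose an opaque surrogate identifier and keep the internal key private. The public id then leaks no ordering or row-count information, but, as the question notes, the access check is still required either way. The record layout below is hypothetical.

        import uuid

        # The internal auto-increment PK stays server-side only.
        record = {"pk": 42, "public_id": str(uuid.uuid4()), "owner": "alice"}

        def fetch(public_id, user):
            # Look up by the exposed surrogate, then authorize; exposing a UUID
            # does not remove the need for this check.
            if record["public_id"] == public_id and record["owner"] == user:
                return record
            return None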

    Read the article

  • Quality Assurance activities

    - by MasloIed
    I asked this before but deleted the question, as it was a bit misunderstood. If quality control is the actual testing, what are the commonest true quality assurance activities? I have read that verification (reviews, inspections, ...) is, but that does not make much sense to me, as it looks more like quality control, as described in the DEPARTMENT OF HEALTH AND HUMAN SERVICES ENTERPRISE PERFORMANCE LIFE CYCLE FRAMEWORK practices guide:

    Verification - "Are we building the product right?" Verification is a quality control technique that is used to evaluate the system or its components to determine whether or not the project's products satisfy defined requirements. During verification, the project's processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. Some verification activities may include items such as:

    - Verification of requirements against defined specifications
    - Verification of design against defined specifications
    - Verification of product code against defined standards
    - Verification of terms, conditions, payment, etc., against contracts

    Read the article

  • bug: deviation from requirements vs deviation from expectations

    - by user970696
    I am not clear on this one. No matter the terminology, in the end a software fault/bug causes (according to a lot of sources):

    - deviation from requirements
    - deviation from expectations

    But if the expectations are not in the requirements, then a stakeholder could see a bug everywhere, since he expected it to be like this or that. So how can I really know? I did read that a specification can miss things, and then of course something is expected but not specified (by mistake).

    Read the article

  • Are verification and validation part of the testing process?

    - by user970696
    Based on many sources, I do not believe the simple definition that the aim of testing is to find as many bugs as possible - we test to ensure that the software works, or that it does not. E.g. the following are goals of testing from the ISTQB:

    - Determine that (software products) satisfy specified requirements (I think this is verification)
    - Demonstrate that (software products) are fit for purpose (I think this is validation)
    - Detect defects

    I would agree that testing is verification, validation, and defect detection. Is that correct?

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out the fact that it would be impossible to test a multiplication function f(x,y) = x*y for any large values of x and y across the entire ranges of x and y. My question concerns his misc. remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens or if there are any statistics on this?

    Read the article

  • I don't understand why algorithms are so special

    - by Jessica
    I'm a student of computer science trying to soak up as much information on the topic as I can during my free time. I keep returning to algorithms time and again in various formats (online course, book, web tutorial), but the concept fails to sustain my attention. I just don't understand: why are algorithms so special? I can tell you why fractals are awesome, why the golden ratio is awesome, why origami is awesome, and the scientific applications of all of the above. Heck, I even love Newton's laws and conic sections. But when it comes to algorithms, I'm just not astounded. They offer no new insights into human cognition at all. I was expecting algorithms to shatter preconceptions and alter my thinking, but time and time again they fail miserably. What am I doing wrong in my approach? Can someone tell me why algorithms are so awesome?

    Read the article

  • Is there a real difference between dynamic analysis and testing?

    - by user970696
    Testing is often regarded as dynamic analysis of software. Yet while writing my thesis, the reviewer noted to me that dynamic analysis is about analyzing the program behind the scenes - e.g. profiling - and that it is not the same as testing, because it is "analysis" that looks inside and observes. I know that static analysis is not testing; should we then also separate this "dynamic analysis" from testing? Some books do refer to dynamic analysis in this sense. I would perhaps say that testing is one means of dynamic analysis?

    Read the article

  • What are the processes of true Quality assurance?

    - by user970696
    Having read that quality assurance (QA) is focused on processes (while quality control (QC) is focused on the product), I note that the books often mention that QA is the verification process - doing peer reviews, inspections, etc. I still tend to think these are also QC, as they check intermediate products. Elsewhere I have read that a QA activity is e.g. choosing the right bugtracker. That sounds better to me in terms of process improvement. The question that the close-voting person obviously missed is pretty clear: what are the activities that true QA should perform? I would appreciate a reference, as I am working on a thesis dealing with all these discrepancies and inconsistencies in the software quality world.

    Read the article

  • Is ORM an Anti-Pattern?

    - by derphil
    I had a very stimulating and interesting discussion with a colleague about ORM and its pros and cons. In my opinion, an ORM is useful only in the rarest cases, at least in my experience. But I don't want to list my own arguments at this time. So I ask you: what do you think about ORM? What are the pros and the cons? P.S. I posted this "question" yesterday on Stack Overflow, but some of the users think it would be better posted here.

    Read the article

  • Verification vs. validation again: does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and am asking a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp. According to ISO 12207, testing is done in validation:

    - Prepare test requirements, cases and specifications
    - Conduct the tests

    Under verification, it mentions:

    - The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery.
    - The software components and units of each software item have been completely and correctly integrated into the software item.

    I am not sure how to verify that without testing, but testing is not listed there as a technique. From the IEEE:

    - Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
    - Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

    At the end of the development process? That would mean UAT. So the question is: which testing (unit, integration, system, UAT) is considered verification, and which validation? I do not understand why some say dynamic verification is testing, while others say it is only validation. An example: I am testing an application. The system requirements say there are two fields with a max length of 64 characters and a Save button. The use case says: the user will fill in the first and last name and save. When checking the fields and the presence of the Save button, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet it did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validation of the requirements of the intended use. For me that is more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirement verification, design verification, document verification, and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question states that verification is done by static techniques, yet on the V-model graph it describes system testing against a high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference to e.g. a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • How many copies are needed to enlarge an array?

    - by user10326
    I am reading an analysis of dynamic arrays (from Skiena's algorithm manual), i.e. the structure where, each time we run out of space, we allocate a new array of double the size of the original. It describes the waste that occurs when the array has to be resized. It says that elements (n/2)+1 through n will be moved at most once or not at all. This is clear. Then, observing that half the elements move once, a quarter of the elements move twice, and so on, it gives the total number of movements M as (formula reconstructed here in plain text from the worked arithmetic below):

        M = sum_{i=1}^{lg n} i * (n / 2^i)  <=  n * sum_{i=1}^{lg n} (i / 2^i)  <=  2n

    It seems to me that this counts more copies than actually happen. E.g. if we have the following:

        array of 1 element
        +--+
        |a |
        +--+

        double the array (2 elements)
        +--++--+
        |a ||b |
        +--++--+

        double the array (4 elements)
        +--++--++--++--+
        |a ||b ||c ||c |
        +--++--++--++--+

        double the array (8 elements)
        +--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x |
        +--++--++--++--++--++--++--++--+

        double the array (16 elements)
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+
        |a ||b ||c ||c ||x ||x ||x ||x ||  ||  ||  ||  ||  ||  ||  ||  |
        +--++--++--++--++--++--++--++--++--++--++--++--++--++--++--++--+

    We have the x elements copied 4 times, the c elements copied 4 times, the b element copied 4 times, and the a element copied 5 times, so the total is 4+4+4+5 = 17 copies/movements. But according to the formula we should have 1*(16/2) + 2*(16/4) + 3*(16/8) + 4*(16/16) = 8+8+6+4 = 26 copies of elements for the enlargement of the array to 16 elements. Is this a mistake, or is the aim of the formula to provide a rough upper-limit approximation? Or am I misunderstanding something here?
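
    For what it's worth, a quick simulation (a sketch under the usual doubling scheme: when the array is full, copy every current element into a new array of twice the capacity) suggests the formula is indeed a deliberate over-count, i.e. an upper bound:

        def count_copies(n):
            # Count element movements while appending n items to a doubling array.
            copies, capacity, size = 0, 1, 0
            for _ in range(n):
                if size == capacity:
                    copies += size      # every current element moves once
                    capacity *= 2
                size += 1
            return copies

        print(count_copies(16))  # 15 actual copies, comfortably below the 2n = 32 bound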

    Read the article
