Search Results

Search found 1068 results on 43 pages for 'degree'.

Page 40/43

  • Color Theory: How to convert Munsell HVC to RGB/HSB/HSL

    - by Ian Boyd
    I'm looking at a document that describes the standard colors used in dentistry to describe the color of a tooth. They quote hue, value, chroma values, and indicate they are from the 1905 Munsell description of color:

    The system of colour notation developed by A. H. Munsell in 1905 identifies colour in terms of three attributes: HUE, VALUE (Brightness) and CHROMA (saturation) [15]

    HUE (H): Munsell defined hue as the quality by which we distinguish one colour from another. He selected five principal colours: red, yellow, green, blue, and purple; and five intermediate colours: yellow-red, green-yellow, blue-green, purple-blue, and red-purple. These were placed around a colour circle at equal points and the colours in between these points are a mixture of the two, in favour of the nearer point/colour (see Fig 1.).

    VALUE (V): This notation indicates the lightness or darkness of a colour in relation to a neutral grey scale, which extends from absolute black (value symbol 0) to absolute white (value symbol 10). This is essentially how 'bright' the colour is.

    CHROMA (C): This indicates the degree of divergence of a given hue from a neutral grey of the same value. The scale of chroma extends from 0 for a neutral grey to 10, 12, 14 or farther, depending upon the strength (saturation) of the sample to be evaluated.

    There are various systems for categorising colour; the Vita system is most commonly used in dentistry. This uses the letters A, B, C and D to notate the hue (colour) of the tooth. The chroma and value are both indicated by a value from 1 to 4, A1 being lighter than A4, but A4 being more saturated than A1. If placed in order of value, i.e. brightness, the order from brightest to darkest would be: A1, B1, B2, A2, A3, D2, C1, B3, D3, D4, A3.5, B4, C2, A4, C3, C4. The exact values of Hue, Value and Chroma for each of the shades are shown below (16).

    So my question is, can anyone convert Munsell HVC into RGB, HSB or HSL?

        Hue   Value (Brightness)   Chroma (Saturation)
        ===   ==================   ===================
        4.5   7.80                 1.7
        2.4   7.45                 2.6
        1.3   7.40                 2.9
        1.6   7.05                 3.2
        1.6   6.70                 3.1
        5.1   7.75                 1.6
        4.3   7.50                 2.2
        2.3   7.25                 3.2
        2.4   7.00                 3.2
        4.3   7.30                 1.6
        2.8   6.90                 2.3
        2.6   6.70                 2.3
        1.6   6.30                 2.9
        3.0   7.35                 1.8
        1.8   7.10                 2.3
        3.7   7.05                 2.4

    They say that Value (Brightness) varies from 0..10, which is fine, so I take 7.05 to mean 70.5%. But what is Hue measured in? I'm used to hue being measured in degrees (0..360), but the values I see would all be red - when they should be more yellow, or brown. Finally, it says that Chroma/Saturation can range from 0..10, or even higher - which makes it sound like an arbitrary scale. So can anyone convert Munsell HVC to HSB or HSL, or better yet, RGB?
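    A proper Munsell-to-RGB conversion goes through the Munsell renotation data (the colour-science Python package implements this); the sketch below is only a crude illustration of the mapping the question is gesturing at, treating value as HSL lightness, chroma as a clamped saturation, and the hue number as a step within an assumed yellow-red hue family. The base angle and the chroma divisor are assumptions for illustration, not part of the Munsell standard.

        import colorsys

        def munsell_like_to_rgb(hue_step, value, chroma, base_hue_degrees=70.0):
            # Rough illustration only: a real conversion needs the Munsell
            # renotation tables. base_hue_degrees is an assumed angle for the
            # hue family (somewhere in the yellow-red range for tooth shades).
            h = ((base_hue_degrees + hue_step * 3.6) % 360.0) / 360.0
            l = value / 10.0               # value 0..10 -> lightness 0..1
            s = min(chroma / 14.0, 1.0)    # chroma is open-ended; 14 is a guessed cap
            r, g, b = colorsys.hls_to_rgb(h, l, s)
            return tuple(round(c * 255) for c in (r, g, b))

        print(munsell_like_to_rgb(4.5, 7.80, 1.7))   # first row of the table above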

    Read the article

  • Best practices concerning view model and model updates with a subset of the fields

    - by Martin
    By picking MVC for developing our new site, I find myself in the midst of "best practices" being developed around me in apparent real time. Two weeks ago, NerdDinner was my guide, but with the development of MVC 2 even it seems outdated. It's a thrilling experience and I feel privileged to be in close contact with intelligent programmers daily. Right now I've stumbled upon an issue I can't seem to get a straight answer on - from all the blogs anyway - and I'd like to get some insight from the community. It's about editing (read: the Edit action). The bulk of material out there, tutorials and blogs, deals with creating and viewing the model. So while this question may not spell out a single question, I hope to get some discussion going, contributing to my decision about the path of development I'm to take.

    My model represents a user with several fields like name, address and email. The name is actually three fields: one each for first name, last name and middle name. The Details view displays all these fields, but you can change only one set of fields at a time, for instance your names. The user expands a form while the other fields are still visible above and below. So the form that is posted back contains a subset of the fields representing the model. While this is appealing to us and our layout concerns, for various reasons, it is to be shunned by serious MVC developers. I've been reading about some patterns and best practices and it seems that this is not in keeping with the paradigm of view model == view. Or have I got it wrong?

    Anyway, NerdDinner dictates using FormCollection and UpdateModel. All the null fields are happily ignored. Since then, the MVC community has abandoned this approach to such a degree that a bug in MVC 2 was not discovered: UpdateModel does not work without a complete model in your FormCollection. The view model pattern receiving most praise seems to be the dedicated view model that contains a custom view model entity, and it is the only one that my design issue could be made compatible with. It entails a tedious amount of mapping, albeit lightened by the use of AutoMapper and the ideas of Jimmy Bogard, that may or may not be worthwhile. He also proposes a 1:1 relationship between view and view model.

    In keeping with these design paradigms, I am to create a view and an associated view model for each of my expanding sets of fields. The view models would each be nearly identical, differing only in the fields which are read-only, and the views would contain much repeated markup. This seems absurd to me. In the future I may want to be able to display two, more or all sets of fields open simultaneously. I will most attentively read the discussion I hope to spark. Many thanks in advance.

    Read the article

  • Really "wow" them in the interview

    - by Juliet
    Let me put it to you this way: I'm a top-notch programmer, but a notoriously bad interviewee. I've flunked 3 interviews consecutively because I get so nervous that my voice tightens at least 2 octaves higher and I start visibly shaking -- mind you, I can handle whatever technical questions the interviewer throws at me in that state, but I think it looks bad to come off as a quivering, squeaky-voiced young woman during a job interview. I've just got the personality type of a shy computer programmer. No matter how technical I am, I'm going to get passed up in favor of a smooth talker. I have another interview coming up shortly, and I want to really impress the company. Here are my trouble spots:

    1. What can I do to be less nervous during my interview? I always get really excited when I hear I have a face-to-face interview, but get more and more anxious as D-Day (the interview) approaches.
    2. My employer wants me to explain what I used to do at my prior employment. I'm a very chatty person and tend to talk/squeak for 10 minutes at a time. How long or short should I time my answers?
    3. On that note, when I'm explaining what I did at prior jobs, what exactly is my interviewer looking for?
    4. At some point, my interviewer will ask "do you have any questions for me while you're here?" I should, but what kinds of questions should I ask to show that I'm interested in being employed?
    5. My interviewer always asks why I'm looking for a new job. The real reason is that my present salary is $27K/yr [Edit to add: and I've yet to get a raise since I started], and I want to make more money -- otherwise the work environment is fine. How do I sugarcoat "I want to make more money" into something that sounds nicer?
    6. I have only one prior programming job, and I've worked there for 18 months, but I have the skill of someone with 4 to 6 years of experience. What can I say to compete against applicants with more work experience?

    I took a low-paying $27K/yr programming job just to get my foot in IT, and I've been trying to leverage that job as a stepping stone to better opportunities. I get interviews because I consistently out-score senior-level developers in aptitude tests, and my desired salary range is right in the ballpark of what most companies want to offer. Unfortunately, while I've been programming as a hobby for 10 years and I'm set to graduate with my BA in Comp Sci in May '09, employers see me as a junior-level programmer with no degree. I want to prove them wrong and get a job that matches my skill level. I'd appreciate any advice anyone has to offer, especially if they can help me get a better job in the process.

    Read the article

  • Scared of Calculus - Required to pass Differential Calculus as part of my Computer science major

    - by ke3pup
    Hi guys. I'm finishing my computer science degree at university, but my fear of maths (lack of background knowledge) made me leave all my maths units till the very end, which is now. I either take them on and pass or have to give up. I've passed all my programming units easily, but knowing my poor maths skills won't do, I've been staying clear of the maths units. I have to pass Differential Calculus and Linear Algebra first. With the help of a book named "Linear Algebra: A Modern Introduction" I'm finding myself on track and I think I can pass the Linear Algebra unit. But for differential calculus I can't find a book to help me. They're either too advanced or just too simple for what I have to learn.

    The things I'm required to know for this unit are:

    - Set notation, the real number line, complex numbers in cartesian form. Complex plane, modulus.
    - Complex numbers in polar form. De Moivre's Theorem. Complex powers and nth roots.
    - Definition of e^(iθ) and e^z for z complex. Applications to trigonometry.
    - Revision of domain and range of a function. Working in R^3. Curves and surfaces.
    - Functions of 2 variables. Level curves. Partial derivatives and tangent planes.
    - The derivative as a difference quotient. Geometric significance of the derivative. Discussion of limit.
    - Higher order partial derivatives. Limits of f(x,y). Continuity. Maxima and minima of f(x,y).
    - The chain rule. Implicit differentiation. Directional derivatives and the gradient.
    - Limit laws, l'Hôpital's rule, composition law. Definition of sinh and cosh and their inverses.
    - Taylor polynomials. The remainder term. Taylor series.

    Is there a book to help me get on track with the above? Being a student I can't buy too many books, hence why I'm looking for a book that covers the topics I need to know. The university library has a fairly limited collection, which I took out on loan but didn't find useful as it was too complex.

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history"?

    - by oxbow_lakes
    First note the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective but it is a genuine question; I've made it CW and I'd like some opinions on the matter.

    Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially).

    I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base).

    - Is this going to put people off coming to Scala?
    - Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off?
    - Was the library re-design a sensible idea?
    - If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?

    Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held beliefs that the proposal represented a mistake.

    Read the article

  • Where to put a glossary of important terms and patterns in documentation?

    - by Tetha
    Greetings. I want to document certain patterns in the code in order to build up a consistent terminology (in order to ease communication about the software). I am, however, unsure where to define the terms in question.

    In order to get on the same level, an example: I have a code generator. This code generator receives a certain InputStructure from the Parser (yes, the name InputStructure might be less than ideal). This InputStructure is then transformed into various subsequent datastructures (like an abstract description of the validation process). Each of these datastructures can either be transformed into another value of the same datastructure or it can be transformed into the next datastructure. This should sound like Pipes and Filters to some degree. Given this, I call an operation which takes a datastructure and constructs a value of the same datastructure a transformation, while I call an operation which takes a datastructure and produces a different follow-up datastructure a derivation. The final step of deriving a string containing code is called emitting. (So, overall, the code generator takes the input structure and transforms, transforms, derives, transforms, derives and finally emits.)

    I think emphasizing these terms will be beneficial in communication, because then it is easy to talk about things. If you hear "transformation", you know "OK, I only need to think about these two datastructures"; if you hear "emitting", you know "OK, I only need to know this datastructure and the target language."

    However, where do I document these patterns? The current code base uses visitors and offers classes like ValidatorTransformationBase<ResultType> (or InputStructureTransformationBase<ResultType>, and so on). I do not really want to add the definition of such terms to the interfaces, because in that case I'd have to repeat myself on each and every interface, which clearly violates DRY. I am considering emphasizing the distinction between Transformations and Derivations by adding further interfaces (I would have to think about a better name for the TransformationBase classes, but then I could do things like ValidatorTransformation extends ValidatorTransformationBase<Validator>, or ValidatorDerivationFromInputStructure extends InputStructureTransformation<Validator>). I also think I should add a custom page to the doxygen documentation already existing, as in "Glossary" or "Architecture Principles", which contains such principles. The only disadvantage of this would be that a contributor will need to find this page in order to actually learn about these terms.

    Am I missing possibilities or am I judging something wrong here in your opinion?

    -- Regards, Tetha
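    To make the glossary concrete, here is a minimal sketch of the three terms in Python (hypothetical names, not the poster's actual classes): a transformation keeps the datastructure type, a derivation moves to the next one, and emitting is the final derivation into code.

        class InputStructure: ...
        class Validator: ...

        def transform_input(structure: InputStructure) -> InputStructure:
            """Transformation: same datastructure in and out."""
            ...

        def derive_validator(structure: InputStructure) -> Validator:
            """Derivation: produces the next datastructure in the pipeline."""
            ...

        def emit(validator: Validator) -> str:
            """Emitting: the final derivation into target-language source code."""
            ...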

    Read the article

  • Need to add totals of select list, and subtract if taken away

    - by jeremy
    This is the code I am using to calculate the sum of two values (example below): http://www.healthybrighton.co.uk/wse/node/1841

    This works to a degree. It will add the two together, but only if #edit-submitted-test is selected first, and even then it will show NaN until I select #edit-submitted-test-1. I want it to calculate the sum of the fields and show the amount live, i.e. the value of edit-submitted-test-1 will show 500, then if I select another field it will update to 1000. If I take one selection away it will then subtract it and will be back to 500. Any ideas would be helpful, thanks!

        Drupal.behaviors.bookingForm = function (context) {
          // some debugging here, remove when you're finished
          console.log('[bookingForm] started.');

          // get number of travelers and multiply it by 100
          function recalculateTotal() {
            var count = $('#edit-submitted-test').val();
            count = parseFloat( count );
            var cost = $('#edit-submitted-test-1').val();
            cost = parseFloat( cost );
            $('#edit-submitted-total').val( count + cost );
          }

          // run recalculateTotal every time user enters a new value
          var fieldCount = $('#edit-submitted-test');
          var fieldCount = $('#edit-submitted-test-1');
          fieldCount.change( recalculateTotal );
          // etc ...
        };

    EDIT: This is all working beautifully; however, I now want to add all the values together and automatically update a total cost field that is the sum of the field added with the code above and a field whose value is passed from the previous page. I did this where accomcost is the field that is added together, but the total cost field does update, but not automatically; it is always one selection behind. I.e. if I select options and the accommodation cost updates to £900, total cost remains empty. If I then change the selection and the accommodation updates to £300, the total cost updates to the previous selection of £900. Any help on this one, Felix? ;) Thanks.

        var accomcost = $('#edit-submitted-accomodation-cost').val();
        accomcost = accomcost ? parseFloat(accomcost) : 0;
        var coursescost = $('#edit-submitted-courses-cost-2').val();
        coursescost = coursescost ? parseFloat(coursescost) : 0;

        $('#edit-submitted-accomodation-cost').val( w1 + w2 + w3 + w4 + w5 + w6 + w7 + w8 + w9 + w10 + w11 + w12 + w13 + w14 + w15 + w16 + w17 + w18 + w19 + w20 + w21 + w22 + w23 + w24 + w25 + w26 + w27 + w28 + w29 + w30 );
        $('#edit-submitted-total-cost').val( accomcost + coursescost );

        var fieldCount = $('#edit-submitted-accomodation-cost, #edit-submitted-courses-cost-2')
            .change( recalculateTotal );

    Read the article

  • Linked List Design

    - by Jim Scott
    The other day in a local .NET group I attend, the following question came up: "Is it a valid interview question to ask about linked lists when hiring someone for a .NET development position?" Not having a computer science degree and being a self-taught developer, my response was that I did not feel it was appropriate, as in 5 years of developing with .NET I had never been exposed to linked lists and did not hear any compelling reason for a use for one. However, the person commented that it is a very common interview question, so I decided when I left that I would do some research on linked lists and see what I might be missing. I have read a number of posts on Stack Overflow and various Google searches and decided the best way to learn about them was to write my own .NET classes to see how they worked from the inside out. Here is my class structure.

    Single linked list:

        Constructor
            public SingleLinkedList(object value)
        Public properties
            public bool IsTail
            public bool IsHead
            public object Value
            public int Index
            public int Count
        Private fields not exposed to a property
            private SingleNode firstNode;
            private SingleNode lastNode;
            private SingleNode currentNode;
        Methods
            public void MoveToFirst()
            public void MoveToLast()
            public void Next()
            public void MoveTo(int index)
            public void Add(object value)
            public void InsertAt(int index, object value)
            public void Remove(object value)
            public void RemoveAt(int index)

    Questions I have:

    1. What are typical methods you would expect in a linked list?
    2. What is typical behaviour when adding new records? For example, if I have 4 nodes and I am currently positioned in the second node and perform Add(), should it be added after or before the current node? Or should it be added to the end of the list?
    3. Some of the designs I have seen explaining things seem to expose the Node object outside of the LinkedList class. In my design you simply add, get, remove values and know nothing about any node object.
    4. Should the Head and Tail be placeholder objects that are only used to define the head/tail of the list?
    5. I require my linked list to be instantiated with a value, which creates the first node of the list and is essentially the head and tail of the list. Would you change that?
    6. What should the rules be when it comes to removing nodes? Should someone be able to remove all nodes?

    Here is my double linked list:

        Constructor
            public DoubleLinkedList(object value)
        Properties
            public bool IsHead
            public bool IsTail
            public object Value
            public int Index
            public int Count
        Private fields not exposed via property
            private DoubleNode currentNode;
        Methods
            public void AddFirst(object value)
            public void AddLast(object value)
            public void AddBefore(object existingValue, object value)
            public void AddAfter(object existingValue, object value)
            public void Add(int index, object value)
            public void Add(object value)
            public void Remove(int index)
            public void Next()
            public void Previous()
            public void MoveTo(int index)
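    For comparison with the API above, here is a minimal, hypothetical sketch of a singly linked list in Python showing the operations most implementations expose (append, remove, traverse). It is not tied to .NET's LinkedList<T>, and it uses iteration instead of the MoveTo/Next cursor state in the design above.

        class Node:
            def __init__(self, value, next=None):
                self.value = value
                self.next = next

        class SinglyLinkedList:
            def __init__(self):
                self.head = None               # an empty list is usually allowed

            def add_last(self, value):         # append at the tail
                node = Node(value)
                if self.head is None:
                    self.head = node
                    return
                cur = self.head
                while cur.next is not None:
                    cur = cur.next
                cur.next = node

            def remove(self, value):           # remove the first occurrence, if any
                prev, cur = None, self.head
                while cur is not None:
                    if cur.value == value:
                        if prev is None:
                            self.head = cur.next
                        else:
                            prev.next = cur.next
                        return True
                    prev, cur = cur, cur.next
                return False

            def __iter__(self):                # traversal
                cur = self.head
                while cur is not None:
                    yield cur.value
                    cur = cur.next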

    Read the article

  • What is a good dumbed-down, safe template system for PHP?

    - by Wilhelm
    (Summary: My users need to be able to edit the structure of their dynamically generated web pages without being able to do any damage.)

    Greetings, ladies and gentlemen. I am currently working on a service where customers from a specific demographic can create a specific type of web site and fill it with their own content. The system is written in PHP. Many of the users of this system wish to edit how their particular web site looks, or, more commonly, have a designer do it for them. Editing the CSS is fine and dandy, but sometimes that's not enough. Sometimes they want to shuffle the entire page structure around by editing the raw HTML of the dynamically created web pages.

    The templating system used by WordPress is, as far as I can see, perfect for my use. Except for one thing, which is critically important: in addition to being able to edit how comments are displayed or where the menu goes, someone editing a template can have that template execute arbitrary PHP code. As the same codebase runs all these different sites, with all content in the same database, allowing my users to run arbitrary code is clearly out of the question.

    So what I need is a dumbed-down, idiot-proof templating system where my users can edit most of the page structure on their own, pulling in the dynamic sections wherever they like, without being able to even echo 1+1;. Observe the following pseudocode:

        <!DOCTYPE html>
        <title><!-- $title --></title>
        <!-- header() -->
        <!-- menu() -->
        <div>Some random custom crap added by the user.</div>
        <!-- page_content() -->

    That's the degree of power I'd like to grant my users. They don't need to do their own loops or calculations or anything. Just include my variables and functions and leave the rest to me. I'm sure I'm not the only person on the planet that needs something like this. Do you know of any ready-made templating systems I could use? Thanks in advance for your reply.
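    The core idea - substitute only a whitelisted set of placeholders and never evaluate anything coming from the template - fits in a few lines. A sketch in Python (the questioner's stack is PHP, so this only illustrates the concept; the placeholder syntax follows the pseudocode above):

        import re

        ALLOWED = {"title", "header", "menu", "page_content"}

        def render(template, values):
            """Replace <!-- $name --> / <!-- name() --> with precomputed strings.
            Unknown placeholders are dropped; template text is never executed."""
            def repl(match):
                name = match.group(1)
                if name not in ALLOWED:
                    return ""
                return str(values.get(name, ""))
            return re.sub(r"<!--\s*\$?(\w+)(?:\(\))?\s*-->", repl, template)

        page = render("<title><!-- $title --></title>\n<!-- menu() -->",
                      {"title": "My site", "menu": "<ul>...</ul>"})
        print(page)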

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java.

    It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far, far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary.

    So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy.

    So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.

    Read the article

  • Using VCL for the web (intraweb) as a trick for adding web interface to a legacy non-tiered (2 tiers

    - by user193655
    My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in components' event handlers; anyway, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... maintaining legacy applications someone else wrote is very frustrating, but this is a very common job).

    Now there is a request for a web interface. I have several options of course; in this question I want to focus on the VCL for the Web (IntraWeb) option. The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people who moved legacy apps from Delphi to IntraWeb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives to write specific code:

        {$IFDEF CLIENTSERVER}
          {here goes the thick client specific code}
        {$ELSE}
          {here goes the Intraweb specific code}
        {$ENDIF}

    Then another problem is the "migration plan". Let's say I have 300 features and on the first release I will have only 50 of them available in the web application. How do I keep track of it? I was thinking of (ab)using Delphi interfaces to handle this. For example, for user authentication I could move all the related code into a procedure and declare an interface like:

        type
          IUserAuthentication = interface['{0D57624C-CDDE-458B-A36C-436AE465B477}']
            procedure UserAuthentication;
          end;

    In this way, as I implement the IUserAuthentication interface in both applications (thick client and IntraWeb), I know that that feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense on a large application or whether this interface idea is only counter-productive and could backfire.

    My question is: does this approach make sense? (The interface idea is just an extra idea, it is not as important as the common code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; anyway, to be generic, mine is in the CRM/accounting domain, and the number of concurrent users on a single installation is typically less than 20, with peaks of 50.

    EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see IntraWeb as the only option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in this case I cannot take advantage of the common code in an easy way. Yes, I could maybe use DLLs, but my code is not suitable for that.

    Read the article

  • Function lfit in numerical recipes, providing a test function

    - by Simon Walker
    Hi, I am trying to fit collected data to a polynomial equation and I found the lfit function from Numerical Recipes. I only have access to the second edition, so I am using that. I have read about the lfit function and its parameters, one of which is a function pointer, given in the documentation as

        void (*funcs)(float, float [], int)

    with the help: "The user supplies a routine funcs(x,afunc,ma) that returns the ma basis functions evaluated at x = x in the array afunc[1..ma]." I am struggling to understand how this lfit function works. An example function I found is given below:

        void fpoly(float x, float p[], int np)
        /* Fitting routine for a polynomial of degree np-1, with coefficients in the array p[1..np]. */
        {
            int j;
            p[1]=1.0;
            for (j=2;j<=np;j++) p[j]=p[j-1]*x;
        }

    When I run through the source code for the lfit function in gdb I can see no reference to the funcs pointer. When I try to fit a simple data set with the function, I get the following error message:

        Numerical Recipes run-time error...
        gaussj: Singular Matrix
        ...now exiting to system...

    Clearly somehow a matrix is getting defined with all zeroes. I am going to involve this function fitting in a large loop, so using another language is not really an option; hence why I am planning on using C/C++. For reference, the test program is given here:

        int main()
        {
            float x[5] = {0., 0., 1., 2., 3.};
            float y[5] = {0., 0., 1.2, 3.9, 7.5};
            float sig[5] = {1., 1., 1., 1., 1.};
            int ndat = 4;
            int ma = 4; /* parameters in equation */
            float a[5] = {1, 1, 1, 0.1, 1.5};
            int ia[5] = {1, 1, 1, 1, 1};
            float **covar = matrix(1, ma, 1, ma);
            float chisq = 0;

            lfit(x,y,sig,ndat,a,ia,ma,covar,&chisq,fpoly);
            printf("%f\n", chisq);
            free_matrix(covar, 1, ma, 1, ma);
            return 0;
        }

    Also confusing the issue, all the Numerical Recipes functions are 1-array-indexed, so if anyone has corrections to my array declarations, let me know also! Cheers
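    As a sanity check outside the Numerical Recipes framework, the same least-squares polynomial fit can be reproduced in a few lines of Python with numpy (a sketch; it assumes the four data points above, reading the arrays as NR-style 1-based with an unused 0th element):

        import numpy as np

        x = np.array([0., 1., 2., 3.])
        y = np.array([0., 1.2, 3.9, 7.5])

        coeffs = np.polyfit(x, y, deg=3)     # cubic: 4 coefficients from 4 points
        print(coeffs)
        print(np.polyval(coeffs, x))         # should reproduce y almost exactly

    If this fits cleanly, the trouble is more likely in how the arrays are passed to lfit (1-based indexing, ndat/ma) than in the data itself.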

    Read the article

  • Matlab won't extract first row & column because of matrix dimensions!

    - by ZaZu
    Hey guys, I am tracking an object that is thrown in the air, and this object follows a parabolic path. I'm tracking the object through a series of 30 images. I managed to exclude all the background and keep the object apparent, then used its centroid to get its coordinates and plot them. Now I'm supposed to predict where the object is going to fall, so I used polyfit and polyval... the problem is, Matlab says

        ??? Index exceeds matrix dimensions.

    Now the centroid creates its own structure with a row and 2 columns. Every time the object moves in the loop, it updates the first row only. Here is part of the code:

        for N=1:30
            ...
            x=centroid(1,1); % extract first row and first column for x
            y=centroid(1,2); % extract first row and second column for y
            plot_xy=plot(x,y)
            set(plot_xy,'XData',x(1:N),'YData',y(1:N));
            fitting=polyfit(x(1:N),y(1:N),2);
            parabola=plot(x,nan(23,1));
            evaluate=polyval(fitting,x);
            set(parabola,'YData',evaluate)
            ...
        end

    The error message I get is "??? Index exceeds matrix dimensions." It seems that (1:N) is causing the problems; I honestly do not know why. But when I remove N, the object is plotted along with its points, but polyfitting won't work; it gives me an error saying:

        Warning: Polynomial is not unique; degree >= number of data points.
        > In polyfit at 72

    If I make it (1:N-1) or something, it plots more points before it starts giving me the same error (not unique...). Any ideas why? Thanks a lot!!

    Read the article

  • How do we, as a community, help encourage programming in public schools? (Or state Schools for the U

    - by NoMoreZealots
    PRIMARY MOTIVATION

    My office gets involved with the "FIRST Robotics" competitions, and one thing that lingers year to year is that the students typically have no preparation for doing even simple programming as part of the public school system. While the science classes provide some basic grasp of mechanical and electrical concepts, by and large computer programming gets no coverage in the curriculum. (This may be different in other areas of the country/world.) What makes it worse is there is only a short period of time you have to prepare the students and help them design the robot. Talking to some professors from local colleges, it's a problem because you can't assume even the most basic understanding from freshman CS majors. Languages like Python, Lua and BASIC are simple enough for at least high school level students, if not younger.

    SCOPE

    So how do you get public schools to support programming, at least to the level of the "Try it in BASIC" examples that used to be at the end of a chapter in my algebra book? At least enough to prepare them for events such as the FIRST Robotics competitions, whose primary objectives are to teach problem solving and teamwork, and possibly to foster an interest in math, science and engineering in general. (Not force-fed to them, as some people here seem to be implying.)

    Edit: Why teach kids? (Since 2000 CS enrollment in US colleges has decreased by 70% while college enrollment has increased; this is a PROBLEM.)

    - Saying there is no value in teaching someone programming in junior high/high school because they might think "they know programming" is like saying there's no value in teaching high school science and physics, because they might decide they "know physics", leading to abuse like: "I passed a high school physics class, I'm going to develop a Unified Quantum Gravitational Theory."
    - Better prepared students are better students. It would allow college programs to raise the bar on the entry-level courses, allowing students to be weeded out based on their understanding of more advanced material. Plus, people who did poorly in that topic in high school aren't as likely to say "I think there's money in computers, so I'll do computer science."
    - If people take it in high school and decide then that it's not for them, it's better than them wasting their money paying a college to figure that out. The result is that people who take the degree are more likely to succeed and be there for the RIGHT reasons (i.e. it's what they REALLY want to do, and that's REALLY the key to being good at anything).
    - Programming is like anything else: the more practice and genuine interest you have, the better you get. If you start them later, they get less practice. The earlier you give them the opportunity to start, the more practice they will get. All other things being equal, the more practice, the better the programmer.

    Read the article

  • Algorithm for finding symmetries of a tree

    - by Paxinum
    I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them). The sectors live in the complex plane, and for n even, sectors 0 and n/2 are bisected by the real axis, and the sectors are evenly spaced. These branches meet at certain points, called junctions. Each junction is adjacent to a subset of the sectors (at least 3 of them). Specifying the junctions (in prefix order, let's say, starting from the junction adjacent to sectors 0 and 1) and the distances between the junctions uniquely describes the tree.

    Now, given such a representation, how can I see if it is symmetric with respect to the real axis?

    For example, with n=6, the tree (0,1,5)(1,2,4,5)(2,3,4) has three junctions on the real line, so it is symmetric with respect to the real axis. If the distance between (0,1,5) and (1,2,4,5) is equal to the distance from (1,2,4,5) to (2,3,4), it is also symmetric with respect to the imaginary axis. The tree (0,1,5)(1,2,5)(2,4,5)(2,3,4) has 4 junctions, and it is never symmetric with respect to either the imaginary or the real axis, but it has 180-degree rotational symmetry if the distances between the first two and the last two junctions in the representation are equal.

    Edit: This is actually for my research. I have posted the question at MathOverflow as well, but my days in competition programming tell me that this is more like an IOI task. Code in Mathematica would be excellent, but Java, Python, or any other language readable by a human suffices.

    Here are some examples (pretend the double edges are single and we have a tree):

    http://www2.math.su.se/~per/files.php?file=contr_ex_1.pdf
    http://www2.math.su.se/~per/files.php?file=contr_ex_2.pdf
    http://www2.math.su.se/~per/files.php?file=contr_ex_5.pdf

    Example 1 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,3). Example 2 is described as (0,1,4)(1,2,4)(2,3,4)(0,4,5) with distances (2,1,1). Example 5 is described as (0,1,4,5)(1,2,3,4) with distances (2).

    So, given the description/representation, I want to find some algorithm to decide if it is symmetric with respect to the real axis, the imaginary axis, and rotation by 180 degrees. The last example has 180-degree symmetry. (These symmetries correspond to special kinds of potential in the Schroedinger equation, which have nice properties in quantum mechanics.)
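    A minimal sketch of one part of this, in Python, under the stated sector layout: reflection in the real axis relabels sector k as (n - k) mod n, and a 180-degree rotation relabels k as (k + n/2) mod n. Comparing the multiset of relabelled junction sector-sets with the original is only a necessary condition (it ignores the distances and the tree structure), so it can rule a symmetry out but not confirm it on its own.

        from collections import Counter

        def junction_multiset(junctions, relabel):
            return Counter(frozenset(relabel(s) for s in j) for j in junctions)

        def passes_necessary_check(junctions, relabel):
            """Necessary condition only: relabelled junctions must coincide."""
            return junction_multiset(junctions, relabel) == junction_multiset(junctions, lambda s: s)

        n = 6
        tree = [(0, 1, 5), (1, 2, 4, 5), (2, 3, 4)]
        print(passes_necessary_check(tree, lambda s: (n - s) % n))       # real axis
        print(passes_necessary_check(tree, lambda s: (s + n // 2) % n))  # 180-degree rotation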

    Read the article

  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second.

    The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached; however, this particular one doesn't fulfil all our requirements, which are listed below:

    1. The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    2. The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    3. The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as in memcached clients when a node goes down.
    4. Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    5. There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    6. Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    7. The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    8. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.

    Read the article

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi StackOverflow! I'm a CS student well on my way to graduating. I have had a difficult time finding relevant student jobs (they seem to be taken mere hours after the notice goes up on the board), so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS project that I use that needs help. That's why I'm asking you.

    I don't have any deep experience, but here are some of my biggest projects so far:

    - BitTorrent-ish client in Python (a subset of BitTorrent)
    - HTTP 1.1 webserver in Java
    - Compiler from a subset of Java to run on the JRE
    - Flash-framework project to model an iPad look and feel (not to run actual iPad programs), complete with an API for programs
    - Complete MySQL database for a booking system, with departure and arrival times, so you could only book valid tickets (with a Java frontend)

    I know Java, and languages like AS3 and C# feel natural; so does Python. I have done a fair bit of hacking around in C, but I don't feel very comfortable with it. Mostly I'm afraid of making a fuckup because I have such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code because I see optimizations. I love algorithms and making them run in the best possible time.

    I don't have any preferred domain to work in, but I wouldn't mind it being graphics- or math-heavy. Ideally I'm looking for a project in C++ to learn the ins and outs of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own, not one to review all my code (I'm sure someone will to start with anyway), but someone to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return I can work on my own, and I strive to write modular and maintainable code. I know how to read an API and how to use Google, Stack Overflow and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

    Read the article

  • Myself throwing NullReferenceException... needs help

    - by Amit Ranjan
    I know it might be a weird question (and its title too), but I need your help. I am a .NET dev, working on the platform for the last 1.5 years. I am a bit confused about the term we usually use: "a good programmer". I don't know what the qualities of a good programmer are. Is it the guy who writes bug-free code? Or the one who can develop applications solely on his own? Or blah blah blah... lots of points. I don't know... But as far as I am concerned, I know I am not a good programmer; I am still in the learning phase and need to learn a lot in the coming days. So you guys are requested to please help me with these two problems of mine.

    My first problem is regarding proper error handling, which is one of the most debatable aspects of programming. We all know we use try { } catch { } finally { } in our code to manage exceptions. But even if I use try { } catch (Exception ex) { throw ex; } finally { }, different guys have different views. I still don't know the good way to handle errors. I can write code and use try-catch, but still I feel I lack something. When I look at the code generated by .NET FX tools, even they use throw ex or throw new Exception("this is my exception")... I am just wondering what the best way is to achieve the above. They all mean the same thing, but why do we avoid some of them? If something has demerits, then it should be made obsolete. Anyway, I still don't have an answer to "how to handle errors efficiently?". I generally follow try { } catch (Exception ex) { throw ex; }, and usually get stuck in debates with leads about why I follow this and not that...

    My second problem is converting entire code blocks into modules using design patterns and OOP concepts. How do you guys decide what architecture or pattern will be best for an upcoming application, based on how it works, its flow, etc.? I need to know what you guys can see that I can't. I know I don't have that much experience, but I can say from my own experience that experience does not come from degrees/certificates or from the successes you've had; instead it comes from the failures you have faced and the situations you got stuck in. Please help me out.

    Read the article

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where 0.21 < k0 < 21, 0 < k1 < ~2000, and x is an integer < 2^14. k0 and k1 are constant. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot.

    I already have come up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to a lack of efficient instructions. My SIMD implementation uses 16-bit fixed point arithmetic to evaluate a 3rd degree polynomial (I use a least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rationale behind this is that the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately, since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed point approximation. I also software-pipelined the loop to get ~1.25x speedup, so further code optimizations are probably unlikely.

    What I'm asking is whether there's a more efficient approximation at a higher level. For example:

    - Can this function be decomposed into functions with a limited domain, like log2((2^x) * significand) = x + log2(significand), hence eliminating the need to deal with different ranges (table lookups)? The main problem, I think, is that adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it?
    - Iterative method? I don't think so, because the Newton method for log[x] is already a complicated expression.
    - Exploiting locality of neighboring pixels? If the range of the 8 inputs falls in the same approximation range, then I can look up a single coefficient instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property holds 70% of the time, which doesn't seem to make this method competitive.

    Please give me some opinions, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
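    Before worrying about the 16-bit fixed-point and SIMD details, it is easy to check in double precision how well per-range cubics can do against the 5*10^-4 target. A quick numpy sketch, assuming the example constants k0 = 2.12, k1 = 2660 and the eight ranges (64)2^i to (64)2^(i+1) described above:

        import numpy as np

        k0, k1 = 2.12, 2660.0

        def f(x):
            return np.log10(x**k0 + k1)

        worst = 0.0
        for i in range(8):                      # one cubic per octave range
            lo, hi = 64 * 2**i, 64 * 2**(i + 1)
            x = np.arange(lo, min(hi, 2**14), dtype=np.float64)
            coeffs = np.polyfit(x, f(x), deg=3)
            rel_err = np.abs(np.polyval(coeffs, x) / f(x) - 1.0)
            worst = max(worst, rel_err.max())

        print(worst)                            # headroom left for fixed-point error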

    Read the article

  • How to check for palindrome using Python logic

    - by DrOnline
    My background is only a 6 month college class in basic C/C++, and I'm trying to convert to Python. I may be talking nonsense, but it seems to me C, at least at my level, is very for-loop intensive. I solve most problems with these loops. And it seems to me the biggest mistake people do when going from C to Python is trying to implement C logic using Python, which makes things run slowly, and it's just not making the most of the language. I see on this website: http://hyperpolyglot.org/scripting (serach for "c-style for", that Python doesn't have C-style for loops. Might be outdated, but I interpret it to mean Python has its own methods for this. I've tried looking around, I can't find much up to date (Python 3) advice for this. How can I solve a palindrome challenge in Python, without using the for loop? I've done this in C in class, but I want to do it in Python, on a personal basis. The problem is from the Euler Project, great site btw. def isPalindrome(n): lst = [int(n) for n in str(n)] l=len(lst) if l==0 || l==1: return True elif len(lst)%2==0: for k in range (l) ##### else: while (k<=((l-1)/2)): if (list[]): ##### for i in range (999, 100, -1): for j in range (999,100, -1): if isPalindrome(i*j): print(i*j) break I'm missing a lot of code here. The five hashes are just reminders for myself. Concrete questions: 1) In C, I would make a for loop comparing index 0 to index max, and then index 0+1 with max-1, until something something. How to best do this in Python? 2) My for loop (in in range (999, 100, -1), is this a bad way to do it in Python? 3) Does anybody have any good advice, or good websites or resources for people in my position? I'm not a programmer, I don't aspire to be one, I just want to learn enough so that when I write my bachelor's degree thesis (electrical engineering), I don't have to simultaneously LEARN an applicable programming language while trying to obtain good results in the project. "How to go from basic C to great application of Python", that sort of thing. 4) Any specific bits of code to make a great solution to this problem would also be appreciated, I need to learn good algorithms.. I am envisioning 3 situations. If the value is zero or single digit, if it is of odd length, and if it is of even length. I was planning to write for loops... PS: The problem is: Find the highest value product of two 3 digit integers that is also a palindrome.

    Read the article

  • Working with friends. Poor career choice?

    - by a_person
    Hi all, hope you can help me solve somewhat of a moral dilemma. Some time ago, after just a few years of living in the U.S. and having to take any job I could get my hands on, a friend of mine recommended me for an open position at the company he was working for. I could not have been happier. I do not have a degree of any sort; however, by being passionate about CS and with a constant drive for self-education I've become somewhat of a strong generalist. Every place I have worked for recognized me for that quality and used me on various projects where the set of technology in hand had no overlap with the knowledge of the team members. Rapidly I advanced to a Sr. Programmer position, and the trend of me following a friend from one place to another has continued for a few years. My friend's goal has always been to become an IT Director; mine is to become the best programmer I can be. To my knowledge I've accommodated his goals as much as I could by taking a back seat and letting him take the lead.

    Fast forward to today. He's a manager, and I am on his team. I am unhappy and in a considerable amount of suffering. I am not being utilized to my potential; it's almost the exact opposite. I am being micromanaged to an unhealthy extent; my decisions and suggestions are constantly met with a negative connotation. Last week I had to hear about how my friend is a better programmer than I am. My ego was ecstatic about this one /s. In addition to that, working in the field of BI has exhausted itself for the most part. The only pleasure I derive from my work is making everything as dynamic and parameter-driven as possible. This is the only area where my friend does not feel competent enough to actually micromanage.

    Because of my situation I feel a fair amount of guilt and ever-growing resentment. I need your advice; maybe you've dealt with this expression of ego before, the needs of self vs the needs of your friend. Is working with a friend a poor career choice? Thank you for reading.

    Read the article

  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so:

        public class MyClass
        {
            private /*ideally-final*/ SomeObject someObject;

            MyClass() {
                someObject=null;
            }

            public void startup() {
                someObject=new SomeObject(...arguments from environment which are not available until startup is called...);
            }

            public void shutdown() {
                someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
            }
        }

    I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization.

    The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so:

        public class WoRmObject<T>
        {
            private T object;

            WoRmObject() {
                object=null;
            }

            public WoRmObject set(T val) {
                object=val;
                return this;
            }

            public T get() {
                return object;
            }
        }

    and then in MyClass, above, do:

        private final WoRmObject<SomeObject> someObject;

        MyClass() {
            someObject=new WoRmObject<SomeObject>();
        }

        public void startup() {
            someObject.set(new SomeObject(...arguments from environment which are not available until startup is called...));
        }

    Which raises some questions for me:

    1. Is there a better way, or an existing Java object (it would have to be available in Java 4)?
    2. Is this thread-safe provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this.
    3. Given the completely unsynchronized WoRmObject container, is it ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, has the JMM always guaranteed that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?

    Read the article

  • Which style is preferable when writing this boolean expression?

    - by Jeppe Stig Nielsen
    I know this question is to some degree a matter of taste. I admit this is not something I don't understand, it's just something I want to hear others' opinion about. I need to write a method that takes two arguments, a boolean and a string. The boolean is in a sense (which will be obvious shortly) redundant, but it is part of a specification that the method must take in both arguments, and must raise an exception with a specific message text if the boolean has the "wrong" value. The bool must be true if and only if the string is not null or empty. So here are some different styles to write (hopefully!) the same thing. Which one do you find is the most readable, and compliant with good coding practice?

        // option A: Use two if, repeat throw statement and duplication of message string
        public void SomeMethod(bool useName, string name)
        {
            if (useName && string.IsNullOrEmpty(name))
                throw new SomeException("...");
            if (!useName && !string.IsNullOrEmpty(name))
                throw new SomeException("...");
            // rest of method
        }

        // option B: Long expression but using only && and ||
        public void SomeMethod(bool useName, string name)
        {
            if (useName && string.IsNullOrEmpty(name) || !useName && !string.IsNullOrEmpty(name))
                throw new SomeException("...");
            // rest of method
        }

        // option C: With == operator between booleans
        public void SomeMethod(bool useName, string name)
        {
            if (useName == string.IsNullOrEmpty(name))
                throw new SomeException("...");
            // rest of method
        }

        // option D1: With XOR operator
        public void SomeMethod(bool useName, string name)
        {
            if (!(useName ^ string.IsNullOrEmpty(name)))
                throw new SomeException("...");
            // rest of method
        }

        // option D2: With XOR operator
        public void SomeMethod(bool useName, string name)
        {
            if (useName ^ !string.IsNullOrEmpty(name))
                throw new SomeException("...");
            // rest of method
        }

    Of course you're welcome to suggest other possibilities too. Message text "..." would be something like "If 'useName' is true a name must be given, and if 'useName' is false no name is allowed".

    Read the article

  • How to Integrate C++ compiler in Visual Studio 2008

    - by Kasun
    Hi, can someone help me with this issue? I am currently working on my final-year project for my honors degree, and we are developing an application to evaluate students' programming assignments (at first-year student level). I just want to know how to invoke a C++ compiler from C# code in order to compile C++ code.

    In our case we load a student's C++ code into a text area; then, with a click on a button, we want to compile the code. If there are any compilation errors, they will be displayed in a text area nearby. (The interface is attached herewith.) Finally, it should be able to execute the code if there aren't any compilation errors, and the results will be displayed in a console.

    We were able to do this for C# code (with C# code loaded into the text area instead of C++ code) using the built-in compiler, but we are still not able to do it for C++ code. Can anyone suggest a method to do this? Is it possible to integrate an external compiler from C# code? If so, how can it be achieved? I'd be very grateful to anyone who can contribute to solving this matter.

    This is the code for the Build button, with which we handle the C# code compilation:

        CodeDomProvider codeProvider = CodeDomProvider.CreateProvider("csharp");
        string Output = "Out.exe";
        Button ButtonObject = (Button)sender;

        rtbresult.Text = "";
        System.CodeDom.Compiler.CompilerParameters parameters = new CompilerParameters();
        //Make sure we generate an EXE, not a DLL
        parameters.GenerateExecutable = true;
        parameters.OutputAssembly = Output;
        CompilerResults results = codeProvider.CompileAssemblyFromSource(parameters, rtbcode.Text);

        if (results.Errors.Count > 0)
        {
            rtbresult.ForeColor = Color.Red;
            foreach (CompilerError CompErr in results.Errors)
            {
                rtbresult.Text = rtbresult.Text + "Line number " + CompErr.Line +
                    ", Error Number: " + CompErr.ErrorNumber +
                    ", '" + CompErr.ErrorText + ";" +
                    Environment.NewLine + Environment.NewLine;
            }
        }
        else
        {
            //Successful Compile
            rtbresult.ForeColor = Color.Blue;
            rtbresult.Text = "Success!";
            //If we clicked run then launch our EXE
            if (ButtonObject.Text == "Run")
                Process.Start(Output); // Run button
        }

    Read the article

  • WPF PathGeometry/RotateTransform optimization

    - by devinb
    I am having performance issues when rendering/rotating WPF triangles. If I have a WPF triangle being displayed, and it will be rotated to some degree around a centre point, I can do it one of two ways:

    1. Programmatically determine the points and their offset in the backend, and use XAML to simply place them on the canvas where they belong. It would look like this:

        <Path Stroke="Black">
          <Path.Data>
            <PathGeometry>
              <PathFigure StartPoint ="{Binding CalculatedPointA, Mode=OneWay}">
                <LineSegment Point="{Binding CalculatedPointB, Mode=OneWay}" />
                <LineSegment Point="{Binding CalculatedPointC, Mode=OneWay}" />
                <LineSegment Point="{Binding CalculatedPointA, Mode=OneWay}" />
              </PathFigure>
            </PathGeometry>
          </Path.Data>
        </Path>

    2. Generate the 'same' triangle every time, and then use a RenderTransform (Rotate) to put it where it belongs. In this case, the rotation calculations are being obfuscated, because I don't have any access to how they are being done.

        <Path Stroke="Black">
          <Path.Data>
            <PathGeometry>
              <PathFigure StartPoint ="{Binding TriPointA, Mode=OneWay}">
                <LineSegment Point="{Binding TriPointB, Mode=OneWay}" />
                <LineSegment Point="{Binding TriPointC, Mode=OneWay}" />
                <LineSegment Point="{Binding TriPointA, Mode=OneWay}" />
              </PathFigure>
            </PathGeometry>
          </Path.Data>
          <Path.RenderTransform>
            <RotateTransform CenterX="{Binding Centre.X, Mode=OneWay}"
                             CenterY="{Binding Centre.Y, Mode=OneWay}"
                             Angle="{Binding Orientation, Mode=OneWay}" />
          </Path.RenderTransform>
        </Path>

    My question is: which one is faster? I know I should test it myself, but how do I measure the render time of objects with such granularity? I would need to be able to time how long the actual rendering takes for the form, but since I'm not the one that's kicking off the redraw, I don't know how to capture the start time.

    Read the article
