Search Results

Search found 857 results on 35 pages for 'scientific notation'.

Page 14/35

  • P=NP problem: What are the most promising methods?

    - by phimuemue
    Hello everybody, I know that P=NP has not been solved yet, but can anybody tell me something about the following: what are currently the most promising mathematical or computer-scientific methods that could help tackle this problem? Or are no such potentially helpful methods known so far? Is there any (free) compendium on this topic where I can find all or most of the research done in this area?

    Read the article

  • What is your favorite NumPy feature?

    - by Gökhan Sever
    Share your favourite NumPy features / tips & tricks. Please try to limit yourself to one feature per answer. The question is posted in parallel at ask.scipy.org; we welcome you to join the conversation there, with the main idea of collecting the Scientific Python related questions under one roof. Feel free to dual-post or post at your favourite site...

    Read the article

  • How much one can trust the information published in the wikipedia? [closed]

    - by AKN
    Wikipedia has answers for many questions in almost all categories, be it technical topics, sports, personalities, important events (this day, that day), scientific terms, etc. I know the content comes from volunteers (please correct me if I'm wrong here). But what measures do they have to ensure that the content is properly written? Even if they have admins/moderators and all that, those people may not be experts in all areas. So how do they validate the appropriateness of the content?

    Read the article

  • Contour graphs in JS or PHP ?

    - by vince83000
    Hi everybody, for a web application I have to make a scientific graph. You can see an example here: http://www.ego-network.org/monitoring/plot_deployment.php?glider=eudoxus&deployment=Cascade&posti=4&postj=scaptemperature_lastweek&pposti=4&ppostj=scaoxygen_lastweek&hchk=&defsct=default_scatter I have two coordinates, time and depth, and I want the temperature to be represented by a color, exactly like in the example. Does anyone know how to make this kind of graph? Thanks!

    Read the article

  • C#: Perform Operations on GPU, not CPU (Calculate Pi)

    - by Alex
    Hello, I've recently read a lot about software (mostly scientific/math and encryption related) that moves part of its calculation onto the GPU, which yields a 100-1000-fold (!) increase in speed for supported operations. Is there a library, API or other way to run something on the GPU via C#? I'm thinking of a simple Pi calculation. I have a GeForce 8800 GTX if that's relevant at all (though I would prefer a card-independent solution). Any hints are appreciated!
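
    For reference, a minimal sketch of the kind of Pi calculation being asked about: a Monte Carlo estimate in plain C#. The per-sample work is independent, which is exactly the shape of computation GPU frameworks parallelise well; this is only the CPU baseline one would port, not any particular GPU library's API.

        using System;

        class PiEstimate
        {
            static void Main()
            {
                const int samples = 10000000;
                var rng = new Random(42);
                long inside = 0;

                for (int i = 0; i < samples; i++)
                {
                    double x = rng.NextDouble();
                    double y = rng.NextDouble();
                    if (x * x + y * y <= 1.0)
                        inside++; // point fell inside the quarter circle
                }

                // inside / samples approximates Pi / 4
                Console.WriteLine(4.0 * inside / samples);
            }
        }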

    Read the article

  • Taking performance as the only criterion for a small site, which framework should I choose on shared hosting?

    - by john
    Dear friends, I'm trying to set up a small, fully functional website for a small community on shared hosting. The scientific computing involved is quite heavy. Scalability is not important; the only criterion is performance. Which framework would you suggest among the following (or others from your own list)? 1) Ruby on Rails 2) Grails 3) ASP.NET 4) Zend. I'm really new to this area, only starting to read some books and googling different blogs... so your expertise is really appreciated! Thanks!

    Read the article

  • C++ Expression Templates

    - by yCalleecharan
    Hi, I currently use C for numerical computations. I've heard that using C++ Expression Templates is better for scientific computing. What are C++ Expression Templates in simple terms? Are there books around that discuss numerical methods/computations using C++ Expression Templates? In what way are C++ Expression Templates better than using pure C? Thanks a lot

    Read the article

  • Which computing publisher has the best refereed research resources for the working programmer?

    - by Stephen
    When I have a problem I often search the computing literature. Some of the resources[*] I use are: the professional associations (ACM Digital Library, IEEE Xplore) and the scientific publishers (Lecture Notes in Computer Science, HCI Bibliography). What do you use? What is the best source (if there is one) for the working programmer? [*] after stackoverflow and google of course :) PS what tags should I use for this question?

    Read the article

  • A free plot/graph .NET Windows Forms component

    - by user255946
    Hello all, I'm searching for a .NET plot component to draw a 2D line chart, given an array of data. It will be used with Windows Forms (C#), and it would be very helpful if it were freeware. It is for a scientific application. This is my first question on Stack Overflow; please excuse my poor written English.
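
    In case no free charting component turns out to fit, a 2D line chart for an array of data can also be owner-drawn with System.Drawing in a few lines. The sketch below only illustrates that fallback under the question's assumptions (an array of doubles, a WinForms host); it is not a recommendation of any specific library.

        using System;
        using System.Drawing;
        using System.Linq;
        using System.Windows.Forms;

        // Minimal owner-drawn line chart: scales the samples into the client area
        // and connects them with a single polyline.
        class LinePlotForm : Form
        {
            private readonly double[] _data;

            public LinePlotForm(double[] data)
            {
                _data = data;
                ResizeRedraw = true;
                Text = "Line chart";
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                if (_data.Length < 2) return;

                double min = _data.Min(), max = _data.Max();
                double range = max > min ? max - min : 1.0;

                // Map each sample to client coordinates (y is flipped for screen space).
                PointF[] points = _data.Select((v, i) => new PointF(
                    i * (ClientSize.Width - 1f) / (_data.Length - 1),
                    (float)((1.0 - (v - min) / range) * (ClientSize.Height - 1)))).ToArray();

                e.Graphics.DrawLines(Pens.SteelBlue, points);
            }
        }

        // Usage: Application.Run(new LinePlotForm(new[] { 1.0, 3.5, 2.2, 4.8, 3.1 }));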

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!”

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?
    There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you…

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you’ve worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
    If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

    Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases?

    It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you’re using actors, you presumably still have to write your code in a different way from how you would otherwise with single-threaded code.

    You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump.

    If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism.

    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection.
    C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much.

    When you say it doesn’t have some of the niggles?

    C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming.

    Pattern-matching, in other words.

    That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it.

    What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile?

    I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down.

    Is most of the time then spent in the debugger?
    Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see whether it looks right or wrong.

    We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase?

    I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase.

    If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

    Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is.

    And what about tests? Do you think they help at all?

    I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like!

    What about when you’re actually developing? How useful do you find tests in finding bugs or regressions?

    Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
    Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are.

    Does that mean that when you write your code the first time, there are no tests?

    Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code.

    So not writing regression tests for all of your code hasn’t affected you too badly?

    There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time.

    How do you think your programming style has changed over time?

    I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible.

    If you had to summarise the essence of programming in a pithy sentence, how would you do it?

    Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

    So you think it’s an art rather than a science?

    It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art.
    You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

    I suppose engineering is the foundation on which you build the art.

    Exactly.

    How do you think programming is going to change over the next ten years?

    There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

    What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

    It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript.

    If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

    The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help.

    Any other visions of the future?

    Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

    More Red Gater Coder interviews
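
    To make the "one thread owns the data, everybody else sends messages" idea from the interview concrete, here is a minimal, illustrative actor in plain C#. It is deliberately not NAct's API, just a sketch of the pattern Alex describes: fire-and-forget messages for writes, and a Task-returning query so callers get data back asynchronously instead of calling in and waiting.

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // All mutable state is touched only by the single message loop, so no locks are needed.
        class CounterActor
        {
            private readonly BlockingCollection<Action> _mailbox = new BlockingCollection<Action>();
            private int _count; // owned exclusively by the message loop

            public CounterActor()
            {
                Task.Factory.StartNew(() =>
                {
                    foreach (var message in _mailbox.GetConsumingEnumerable())
                        message(); // messages run one at a time, never concurrently
                }, TaskCreationOptions.LongRunning);
            }

            // Fire-and-forget message: no return value, the caller never blocks.
            public void Increment()
            {
                _mailbox.Add(() => _count++);
            }

            // Query: the result comes back via a Task, usable with async/await.
            public Task<int> GetCountAsync()
            {
                var tcs = new TaskCompletionSource<int>();
                _mailbox.Add(() => tcs.SetResult(_count));
                return tcs.Task;
            }
        }

    A caller would write something like "int n = await counter.GetCountAsync();", which mirrors the point above about using tasks to get data out of an actor asynchronously.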

    Read the article

  • Doug Crockford: Geek of the Week

    Doug Crockford is the man behind JavaScript Object Notation (JSON). He is a well-known critic of XML and guides the development of JavaScript on the ECMA Standards Committee, as well as being the senior JavaScript architect at Yahoo! He is also the author of the popular 'JavaScript: The Good Parts'. Richard Morris was dispatched to ask him which the good parts were.

    Read the article

  • Types of ER Diagrams

    - by syrion
    I'm currently taking a class for database design, and we're using the ER diagram style designed by Peter Chen. I have a couple of problems with this style: Keys in relationships don't seem realistic. In practice, synthetic keys like "orderid" seem to be used in almost all tables, including association tables, but the Chen style diagrams heavily favor (table1key, table2key) compound keys. There is no notation for datatype. The diamond shape for associations is horrible, and produces a cluttered diagram. In general, it just seems hard to capture some relationships with the Chen system. What ERD style, if any, do you use? What has been the most popular in your workplaces? What tools have you used, or do you use, to create these diagrams?

    Read the article

  • Chess as a team building exercise for software developers

    - by maple_shaft
    The last place I worked wasn't a particularly great place, and there were more than a few nights where we were working late into the evening trying to meet our sprints. The team, while stressed out, got pretty close, and people started bringing in little mind teasers and puzzles, just something we would all play around with and try to solve while a build/deploy was running for the test environment, or while we were waiting for the integration test run to finish. Eventually it turned into people bringing chess boards in and setting them at their desks. We would play by email, sending each other moves in chess notation, but at a very casual pace, with games lasting sometimes two or three days. Management tolerated this when we were putting in overtime, but as things were being managed better and people weren't working much more than 40 hours a week, they started cracking down on this and told us that we weren't allowed to have chess boards at our desks, although they were okay with the puzzle games. What are the pros and cons, in your opinion, of allowing chess during software development lull time?

    Read the article

  • Javascript: Safely upload a client data file

    - by Jeffrey Sweeney
    I'm (still) working on a template-based XML editing program. It's a GUI-based XML editor that only allows users to add certain tags and attributes based off the requirements. You can see the current version here for an idea. Now, I'd like to allow users to upload their own data templates, but I'm concerned about potential XSS hacks. Currently, the template file is in Javascript object literal notation, which unsurprisingly is a security nightmare if the user can upload their own. I was thinking of using XML instead, but is there an even better alternative?

    Read the article

  • How to learn to draw UML sequence diagrams

    - by PeterT
    How can I learn to draw UML sequence diagrams? Even though I don't use UML much, I find that type of diagram to be quite expressive and want to learn how to draw them. I don't plan to use them to visualise large chunks of code, hence I would like to avoid using tools and learn how to draw them with just pen and paper. Muscle memory is good. I guess I would need to learn some basics of notation first, and then just practice it, as in "take a piece of code, draw a sequence diagram visualising the code, then generate the diagram using some tool/website, then compare what I've drawn to the tool's result. Find the differences, correct them, repeat." Where do I start? Can you recommend a book or a website or something else?

    Read the article

  • 2D isometric: screen to tile coordinates

    - by Dr_Asik
    I'm writing an isometric 2D game and I'm having difficulty figuring precisely on which tile the cursor is. Here's a drawing: where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, W and H are tile width and tile height in pixels, respectively. My notation for coordinates is (y, x) which may be confusing, sorry about that. The best I could figure out so far is this: int xtemp = xs / (W / 2); int ytemp = ys / (H / 2); int xt = (xs - ys) / 2; int yt = ytemp + xt; This seems almost correct but is giving me a very imprecise result, making it hard to select certain tiles, or sometimes it selects a tile next to the one I'm trying to click on. I don't understand why and I'd like if someone could help me understand the logic behind this. Thanks!

    Read the article

  • Literature in programming and computer science

    - by Peter Turner
    I hope, gentle programmers, that you'll forgive me for not asking a "Soft Question" on theoreticalCS.SE and asking this here. It has recently come to my attention that bigendian came from Jonathan Swift's Gulliver's Travels. I was pretty surprised when listening to the book on my commute to hear something I'd only heard before in Comp Sci / Engineering classes. I thought it was some sort of nouveau-politically incorrect piece of holdover jargon like Master and Slave drives or Polish Notation. Are there any other incidents, not of politically incorrect jargon, but of literature influencing aspects of computers, programming or software development?

    Read the article

  • User defined type for healthcare / Medical Records variable name prefixes?

    - by Peter Turner
    I was reading Code Complete regarding variable naming while trying to find an answer to this question, and stumbled on a table of commonly accepted prefixes for programming word processor software. Well, I'm not a word processor software programmer, but if I was, I'd be happy to use those user-defined types. I'm a programmer for a smallish healthcare ISV and have no contact with the larger community of healthcare software programmers (other than the neglected and forsaken HealthCareIT.SE, where I never had the chance to ask this question), so I want to know if there is a coding convention for medical records. Like Patient = pnt and Chart = chrt and Medication = med or mdctn or whatever. I'm not talking full-on Hungarian notation, but just a standard that would fit in Code Complete in place of that wonderful chart of word processor UDTs which are of so little use to me.

    Read the article

  • How do I deal with code of bad quality contributed by a third party?

    - by lindelof
    I've recently been promoted into managing one of our most important projects. Most of the code in this project has been written by a partner of ours, not by ourselves. The code in question is of very questionable quality. Code duplication, global variables, 6-page long functions, hungarian notation, you name it. And it's in C. I want to do something about this problem, but I have very little leverage on our partner, especially since the code, for all its problems, "just works, doesn't it?". To make things worse, we're now nearing the end of this project and must ship soon. Our partner has committed a certain number of person-hours to this project and will not put in more hours. I would very much appreciate any advice or pointers you could give me on how to deal with this situation.

    Read the article

  • Sending JSON to an ASP.NET MVC Action Method Argument

    Javier G Money Lozano, one of the good folks involved with C4MVC, recently wrote a blog post on posting JSON (JavaScript Object Notation) encoded data to an MVC controller action. In his post, he describes an interesting approach of using a custom model binder to bind sent JSON data to an argument of an action method. Unfortunately, his sample left out the custom model binder and only demonstrates how to retrieve JSON data sent from a controller action, not how to send the JSON to the action method.
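
    Since the original sample omits the binder itself, here is a sketch of what such a custom model binder typically looks like in classic ASP.NET MVC: it reads the raw JSON request body and deserializes it into the action parameter's type. The OrderDto type and the registration call are illustrative assumptions, not Javier's original code.

        using System.IO;
        using System.Web.Mvc;
        using System.Web.Script.Serialization;

        // Binds a posted JSON body to an action method argument of type T.
        public class JsonModelBinder<T> : IModelBinder
        {
            public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
            {
                var request = controllerContext.HttpContext.Request;
                request.InputStream.Position = 0;
                string json = new StreamReader(request.InputStream).ReadToEnd();
                return new JavaScriptSerializer().Deserialize<T>(json);
            }
        }

        // Registered once in Application_Start (OrderDto is a hypothetical view model):
        //   ModelBinders.Binders.Add(typeof(OrderDto), new JsonModelBinder<OrderDto>());
        // after which an action like  public ActionResult Save(OrderDto order)  receives the posted JSON already bound.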

    Read the article

  • What's the best way to version CSS and JS URLs?

    - by David Eyk
    As per Yahoo's much-ballyhooed Best Practices for Speeding Up Your Site, we serve up static content from a CDN using far-future cache expiration headers. Of course, we need to occasionally update these "static" files, so we currently add an infix version as part of the filename (based on the SHA1 sum of the file contents). Thus styles.min.css becomes styles.min.abcd1234.css. However, managing the versioned files can become tedious, and I was wondering if a GET argument notation might be cleaner and better: styles.min.css?v=abcd1234 Which do you use, and why? Are there browser- or proxy/cache-related considerations that I should take into account?
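
    For illustration, a small sketch of the content-hash versioning scheme described above: derive a short fingerprint from the file contents and splice it into the filename. The helper name and the 8-character truncation are illustrative choices, not part of the original setup.

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class AssetVersioner
        {
            // "styles.min.css" -> "styles.min.abcd1234.css", fingerprint derived from the file contents.
            public static string VersionedName(string path)
            {
                using (var sha1 = SHA1.Create())
                using (var stream = File.OpenRead(path))
                {
                    byte[] hash = sha1.ComputeHash(stream);
                    string fingerprint = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant().Substring(0, 8);

                    string name = Path.GetFileNameWithoutExtension(path); // "styles.min"
                    string ext = Path.GetExtension(path);                 // ".css"
                    return Path.Combine(Path.GetDirectoryName(path) ?? "", name + "." + fingerprint + ext);
                }
            }
        }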

    Read the article

  • Reasons why crontab does not work

    - by Adam Matan
    Many a time and oft, crontab scripts are not executed as expected. There are numerous reasons for that, for example: wrong crontab notation, permissions, environment variables and many more. This community wiki aims to aggregate the top reasons for crontab scripts not being executed as expected. Write each reason in a separate answer. Please include one reason per answer - details about why it's not executed - and fix(es) for that one reason. Please write only cron-specific issues, e.g. commands that execute as expected from the shell but execute erroneously by cron.

    Read the article

  • Are the National Computer Science Academy certifications worth it?

    - by Horacio Nuñez
    Hi everyone. I have a question regarding the real value of having NCSA certifications. Today I reached their site and easily passed the JavaScript certification within minutes, but I never reached questions related to JavaScript Object Notation (JSON), closures or browser-specific APIs. These facts lead me to doubt a bit the real value of the test (and the certification you get if you pay them $34), but maybe I'm wrong and just earned a respected certification within the States for easy questions... in which case I can spend some time doing other certifications on the same site. Do you have an NCSA certification, and do you think it is worth having on your resume? Or do you know of a better certification program? Thanks in advance; I look forward to your thoughts.

    Read the article
