Search Results

  • Negative number representation across multiple architectures

    - by Donotalo
    I'm working with the OKI 431 microcontroller. It can communicate with a PC that has the appropriate software installed. An EEPROM connected to the micro's I2C bus serves as permanent memory, and the PC software can read from and write to this EEPROM. Consider two numbers, B and C, each a two-byte integer. B is a constant known to both the PC software and the micro. C will be a number so close to B that B-C fits in a signed 8-bit integer. After some testing, the PC determines an appropriate value for C and stores it in the micro's EEPROM for later use. The micro can store C in two ways: (1) store the whole two bytes representing C, or (2) store B-C as a one-byte signed integer and later derive C from B and B-C. I think two's complement representation of negative numbers is now universally accepted by hardware manufacturers. Still, I personally don't like negative numbers being stored in a medium that will be accessed by two different architectures, because negative numbers can be represented in different ways. For your information, the 431 also uses two's complement. Should I stop worrying about the different representations of negative numbers and accept the one-byte solution, as my other team member suggested? Or should I stick with the two-byte solution so that I never have to deal with negative numbers? Which one would you prefer, and why?
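
    For what it's worth, here is a minimal PC-side sketch of option (2) in Python (purely illustrative; the variable names and example values are made up). Python's struct module packs the 'b' format as a two's-complement signed byte regardless of the host architecture, which is exactly the property under discussion:

        import struct

        B = 1000                  # constant known to both the PC and the micro
        C = 1005                  # value determined by the PC after testing

        delta = B - C             # -5 in this example
        assert -128 <= delta <= 127, "B-C must fit in a signed 8-bit integer"

        # The single byte written to the EEPROM; 'b' is two's complement
        # no matter which machine runs this code.
        raw = struct.pack("b", delta)

        # Later the stored byte is read back and C is recovered from B and B-C.
        (stored_delta,) = struct.unpack("b", raw)
        recovered_C = B - stored_delta
        assert recovered_C == C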

  • Is there really such a thing as "being good at math"?

    - by thezhaba
    Aside from gifted individuals able to perform complex calculations in their heads, I'm wondering whether proficiency in mathematics, namely calculus and algebra, really comes down to a natural inclination towards the sciences, if you can put it that way. A number of students in my calculus course pick up material in seemingly no time, whereas I personally have to spend time thinking about and understanding most concepts. Even then, if a question that requires a bit more 'imagination' comes up, I don't always recognize the concepts behind it, as is the case with calculus proofs, for instance. Nevertheless, I refuse to believe that I'm simply not made for it. I do very well in programming and software engineering courses where a lot of students struggle. At first I could not grasp what they found so difficult, but eventually I realized that having previous programming experience is a great asset: once I had seen and made practical use of the programming concepts, learning about them in depth in an academic setting became much easier, because I had already seen their use "in the wild". I suppose I'm hoping that something similar happens with mathematics. Perhaps once the practical idea behind a concept (which textbook authors sure do a great job of concealing...) is evident, understanding the seemingly dry, symbolic ideas and proofs will become more obvious? I'm really not sure. All I'm sure of is that I'd like to get better at calculus, but I don't yet understand why some of us pick it up easily while others spend considerable time on it and still lack a complete understanding when an unusual problem is given.

  • Advantages of moving an app from user space to the kernel

    - by jtzero
    Now, I know that developing an app that runs in kernel space should generally be avoided: it's hard to debug, complex, and so on. With that off the table, what are some advantages to moving an app from user space into the kernel?

  • Can a systems analyst come from outside of programming?

    - by WeNeedAnswers
    Back in the day, if you wanted to be a systems analyst you had to work up through the ranks from humble programmer. But these days there are university courses and programmes based on this topic alone. Is it therefore valid for a graduate or postgraduate to come into the working world and simply be a systems analyst, without first doing the graft of being a programmer and getting to know the nuances of what code is all about?

  • "Directly accessing" return values without referencing

    - by undocumented feature
    Look at this Ruby example:

        puts ["Dog","Cat","Gates"][1]

    This will output Cat, as Ruby lets me directly index the "anonymous" array just created. If I try the same in PHP, however:

        echo array("Dog","Cat","Gates")[1]

    it won't work. What is this called, not only for arrays but for all functions? Where else is it possible? Feel free to change the question title once you know what this "feature" is called.
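
    As a point of comparison, here is a small Python sketch of the same idea (illustrative only; the helper function name is made up): indexing a literal, or a function's return value, directly and without an intermediate variable:

        def animals():
            return ["Dog", "Cat", "Gates"]

        # Index an "anonymous" literal list directly.
        print(["Dog", "Cat", "Gates"][1])   # Cat

        # Index a function's return value the same way.
        print(animals()[1])                 # Cat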

  • Why do we have so many programming languages?

    - by ntsbjctve
    Most people would probably answer with "you won't build a house using only a hammer", but my argument against this is: there is also only one real mathematical language, used for everything from chemical to architectural calculations, and since programming languages are in many ways similar to maths, why should it be so different with them?

  • What is the best method of assessment for computer science students?

    - by Gavimoss
    This question is a bit more philosophical, so feel free to remove it if you like, but it's been bugging me for the last 4 years! As a final-year student I find that exams can often be passed with a couple of days of cramming, without necessarily retaining or understanding the content; i.e. a regurgitation of lecture notes is often enough to gain high marks. A friend of mine is about to graduate with an honours degree whose final-year evaluation was based solely on practical work (a project, assignment marks and the creation of a poster), yet all of this work could have been completed by a third party. Personally I don't think either of these methods of assessment is sufficient, as I am currently on track for first-class honours in artificial intelligence and computer science and believe this is mostly due to my skill in passing exams, not my skill as a programmer or any vast in-depth knowledge of the subjects I have "studied". Surely there is a better way to assess our skills - isn't there?

  • Implementation Details as a "Document" (in generic terms) - Python, C++

    - by mgj
    Hi :) For documentation and presentation purposes, professionals and students often create SRS documents, coding guidelines, and so on. For those there is usually some kind of checklist one can use to match what applies to a specific case, and the documentation is written accordingly. On those grounds, could you please give me some sort of checklist (any points or guidelines) one could follow when writing up Implementation Details in Python and C++? Although this might sound specific, since implementation details differ from case to case as one gets into the real implementation, I just want a set of guidelines to follow (preferably Python- or C++-specific, though any other language is welcome) if the implementation details also have to be documented or presented. I hope the question is clear; I'm sorry if it still sounds ambiguous, but this is the best I could do to frame my query. Thank you for your time :)

  • How much can one trust the information published on Wikipedia? [closed]

    - by AKN
    Wikipedia has answers for many questions in almost all categories, be it technical topics, sports, personalities, important events (this day, that day), scientific terms, and so on. I know the content comes from volunteers (please correct me if I'm wrong here), but what measures are in place to ensure that the content is properly written? Even if they have admins, moderators and the like, those people may not be experts in all areas. So how do they validate the appropriateness of the content?

  • Where do enumerations belong?

    - by griegs
    At my current place of employment they were putting enumerations into the class that used them. While I didn't see any duplication of these enumerations, I nonetheless thought they didn't belong in the class, so I moved them out into their own class. The reason I did that was that I wanted them to be reusable without needing to reference the model class they were originally in. The boss asked why I did that, disagreed with my reasons for moving them, and said there is nothing wrong with putting enumerations in a model class. So where should they be? Is it acceptable to leave enumerations in a class and hope that others in the project know to refactor your code if they want to reuse it elsewhere, or should you, as I did, create an enumerations class and keep them all in there?
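
    To make the two placements concrete, here is a small Python sketch (illustrative only; the module, class and member names are made up) of an enumeration nested inside the model class that uses it versus the same enumeration in its own module that anything can import:

        from enum import Enum

        # Option A: the enumeration lives inside the model class that uses it.
        class Order:
            class Status(Enum):
                PENDING = 1
                SHIPPED = 2
                CANCELLED = 3

            def __init__(self):
                self.status = Order.Status.PENDING

        # Option B: the enumeration lives in its own module (say, statuses.py),
        # so other code can reuse it without referencing the Order model:
        #
        #     # statuses.py
        #     from enum import Enum
        #
        #     class Status(Enum):
        #         PENDING = 1
        #         SHIPPED = 2
        #         CANCELLED = 3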

  • Application for making animated advertisement banners?

    - by Camran
    I don't want to learn anything like Adobe Premiere or similar. I have images, or frames as they are called. Is there any simple yet elegant application that can put those frames on a timeline and produce the animation? I'm after an easy application, so if you know of any, please let me know. Thanks

  • Using code from this site

    - by user553798
    Hi, the legal section of this site mentions that all code posted by contributors is licensed under the Creative Commons Attribution-Share Alike licence. Does this mean that I have to attribute the submitter even if the code in question is a few lines amongst hundreds or thousands of lines of code? What if I use code from a book or an SDK? For example, if I use the sample code from Google's Android SDK documentation, does that mean I need to attribute Google every time I write an app? After all, it would be impossible to write an app without consulting the SDK docs, at least initially... I am totally confused :S

  • Where should this check logic go?

    - by Benny
    I have a function that draws an image onto a Graphics object:

        private void DrawSmallImage(Graphics g)
        {
            if (this.SmallImage == null) return;
            var smallPicHeight = this.Height / 5;
            var x = this.ClientSize.Width - smallPicHeight;
            var y = this.ClientSize.Height - smallPicHeight;
            g.DrawImage(this.SmallImage, x, y, smallPicHeight, smallPicHeight);
        }

    Should the check if (this.SmallImage == null) return; live inside DrawSmallImage, or should it be in the caller? Which is better?
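
    As a language-agnostic illustration (a Python sketch with made-up names, not the asker's code), the two placements being compared look roughly like this:

        def draw_small_image(surface, small_image):
            # Option A: guard clause inside the function; callers never need to care.
            if small_image is None:
                return
            surface.append(small_image)    # stand-in for the real drawing call

        def redraw(surface, small_image):
            # Option B: the caller checks and only calls when there is something
            # to draw (draw_small_image would then drop its own guard).
            if small_image is not None:
                draw_small_image(surface, small_image)

        canvas = []
        draw_small_image(canvas, None)     # with option A this silently does nothing
        draw_small_image(canvas, "logo")
        print(canvas)                      # ['logo']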

  • VBScript - using IF statements in a mail script?

    - by Kenny Bones
    I really need some quick tips here. I've got a VBScript script which sends an e-mail, and I want to do several checks to see whether an attribute is true and, if it is, write an additional line in the mail. How can I do this? This is part of the script:

        obMessage.HTMLBody = "" _
            & "<MENU>" _
            & "<LI type = square>This is a line</i>." _

    I want something that looks like this:

        obMessage.HTMLBody = "" _
            & "<MENU>" _
            If statement1 = true Then
                & "<LI type = square>This is an additional line</i>." _
            End If

    Preferably, could some Select statements be used? I don't really mind what the code looks like, I just want it to work as soon as possible :)

  • Problems with usually short solutions for testing a programming language

    - by sub
    I'm currently creating an experimental programming language for fun and educational purposes, and I'm searching for tasks beyond the classic "Hello, World!" program. I've already come up with these ideas: print out the program's input, a calculator, and generating prime numbers or the Fibonacci series. What other interesting programming problems do you have for me to test? It would be good if they required the language to handle a broad spectrum of tasks. Take prime numbers, for example: you need variables, you increment them, divide them, perform actions under certain conditions, etc.
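
    As a rough reference point, here is a minimal sketch of the prime-number task in Python (purely illustrative). It exercises exactly the spectrum the question describes: variables, incrementing, division/remainder, and actions taken under certain conditions:

        def primes_up_to(limit):
            """Collect the primes up to 'limit' by trial division."""
            found = []
            candidate = 2
            while candidate <= limit:
                is_prime = True
                for p in found:
                    if candidate % p == 0:   # divisible, so not prime
                        is_prime = False
                        break
                if is_prime:
                    found.append(candidate)
                candidate += 1               # move on to the next number
            return found

        print(primes_up_to(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]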

  • Why are mainframes still around?

    - by ThaDon
    It's a question you've probably asked or been asked several times: what's so great about mainframes? The answer you've probably been given is "they are fast" or "normal computers can't process as many 'transactions' per second as they do". Jeez, it's not as if Google is running a bunch of mainframes, and look how many transactions per second they do! The question here really is "why?". When I put this question to the mainframe devs I know, they can't answer; they simply restate "it's fast". With the advent of cloud computing, I can't imagine mainframes being able to compete either cost-wise or mindshare-wise (aren't all the Cobol devs going to retire at some point, or will offshore just pick up the slack?). And yet I know a few companies that still pump out net-new Cobol/mainframe apps, even for things we could do easily in, say, .NET or Java. Does anyone have a really good answer as to why "the mainframe is faster", or can anyone point me to some good articles on the topic?

  • Reading programming tutorials while at work

    - by Teifion
    I work in a call centre and have been told by my manager that when there are no calls coming through they're happy for me to read things on the internet. There is a net filter in place, but I can access SO from work, which is always handy. I'm searching for articles that will make me a generally better programmer. I code in Python, but I combine it with Postgres and JS quite a lot, if that helps. I'm currently reading Dive Into Python 3, but I fear it won't last me much longer, and I would really like to expand my programming ability.
