Search Results

Search found 1947 results on 78 pages for 'science'.

Page 13 of 78

  • What are the benefits of a classical structure over a prototypal one?

    - by Rixius
    I have only recently started programming significantly, and being completely self-taught, I unfortunately don't have the benefit of a detailed computer science course. I've been reading a lot about JavaScript lately, and I'm trying to find the benefit of classes over the prototypal nature of JavaScript. The question seems to be drawn down the middle of which one is better, and I want to see the classical side of it. When I look at the prototype example:

        var inst_a = {
            "X": 123,
            "Y": 321,
            add: function () {
                return this.X + this.Y;
            }
        };
        document.write(inst_a.add());

    and then the classical version:

        function A(x, y) {
            this.X = x;
            this.Y = y;
            this.add = function () {
                return this.X + this.Y;
            };
        }
        var inst_a = new A(123, 321);
        document.write(inst_a.add());

    I began thinking about this because I'm looking at the new ECMAScript revision 5, and a lot of people seem up in arms that it didn't add a class system.

    Read the article

  • Objective measures of the expressiveness of programming languages [closed]

    - by Casebash
    I am very interested in the expressiveness of different languages. Everyone who has programmed in multiple languages knows that sometimes a language allows you to express concepts which you can't express in other languages. You can have all kinds of subjective discussion about this, but naturally it would be better to have an objective measure. There do actually exist objective measures. One is Turing-completeness, which means that a language is capable of generating any output that could be generated by following a sequential set of steps. There are also lesser levels of expressiveness, such as finite state automata. Now, except for domain-specific languages, pretty much all modern languages are Turing-complete. It is therefore natural to ask the following question: can we define any other formal measures of expressiveness which are greater than Turing-completeness?

    Now, of course we can't define this by considering the output that a program can generate, as Turing machines can already produce the same output that any other program can. But there are definitely different levels in which concepts can be expressed - surely no one would argue that assembly language is as powerful as a modern object-oriented language like Python. You could use your assembly to write a Python interpreter, so clearly any accurate objective measure would have to exclude this possibility. This also causes a problem with trying to define expressiveness using the minimum number of symbols. How exactly to do so is not clear, and indeed it appears extremely difficult, but we can't assume that just because we don't know how to solve a problem, nobody knows how to. It also doesn't really make sense to demand a definition of expressiveness before answering the question - after all, the whole point of this question is to obtain such a definition.

    I think that my explanation will be clear enough for anyone with a strong theoretical background in computer science to understand what I am looking for. If you do have such a background and you disagree, please comment why; but if you don't, that's probably why you don't understand the question.

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me; this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life." Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial-intelligence and machine-learning work, and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of them. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web-related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: how broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • [NEW VERSION] algorithm || method to write prog [UNSOLVED] [closed]

    - by fatai
    I am a computer science student. Everyone solves problems with a different method, or the same one, but actually I don't know whether they use a method at all, or whether there is a common method for approaching problems. If there is a common method, what is it? If there are different methods, which method are you using? Our teachers sometimes give us problems in simple form, but they do not introduce any approach or method(s), so we cannot choose a method, apply it to the problem, find the solution, and then write the code. There is no help from the teachers; they push us to find a method to solve the homework ourselves. For example, my friend uses no method; he says, "I start to construct the algorithm while I try to write the program." I found my own method after I failed the course. More precisely, my method is: when I encounter a problem in a language, I get some paper and then, first, the input/output step: my program will take this/these argument(s) and return, say, X. For example, in C, if the input length is not known and the elements are of the same type, I must use a pointer; if the desired output is in the form of a package, I use a structure. Second, the execution part: in this step, I write all the steps that lead to the final output. For example, in Python: 1) [+, [-, 4, [*, 1, 2]], 5]  2) [+, [-, 4, 2], 5]  3) [+, 2, 5]  4) 7, so return 7. Third, I write test code. For example, in C++: input: append 3 4 5 6 to vector_x, remove indices 0 and 1; desired output: vector_x holds 5 6. Now, my other question is: what other methods have been used by others - to construct classes (for C++, Python, Java), to make classes or computers communicate, or to solve embedded-system problems (for C)? Some programmers use a generalized method without considering the programming language (Java, Perl, ...); what is this method? I wonder because I have learned that if you don't construct the algorithm on paper, you may not achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". So feel free to describe your own method, or a way introduced by someone else that you use and find efficient.
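
    To make the "execution part" of the example concrete, here is a tiny sketch (in Java, with made-up names) that reduces the nested prefix expression above exactly as the numbered steps describe, evaluating sub-expressions first:

        public class PrefixEval {
            // An expression is either an Integer or an Object[] of {op, left, right}.
            static int eval(Object expr) {
                if (expr instanceof Integer) return (Integer) expr;
                Object[] node = (Object[]) expr;
                int left = eval(node[1]);    // reduce sub-expressions first
                int right = eval(node[2]);
                switch ((String) node[0]) {
                    case "+": return left + right;
                    case "-": return left - right;
                    case "*": return left * right;
                    default: throw new IllegalArgumentException("unknown op");
                }
            }

            public static void main(String[] args) {
                // [+, [-, 4, [*, 1, 2]], 5] -> [+, [-, 4, 2], 5] -> [+, 2, 5] -> 7
                Object expr = new Object[]{"+", new Object[]{"-", 4, new Object[]{"*", 1, 2}}, 5};
                System.out.println(eval(expr));  // prints 7
            }
        }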

    Read the article

  • How can I save my university's Computer Science & Engineering department? [closed]

    - by Blake
    I'm currently pursuing a B.S. in Computer Engineering at the University of Florida, and we're having a bit of a problem right now... The state recently passed a budget plan that cuts funding for higher education in Florida. The dean of UF's College of Engineering decided that the best way for us to absorb the blow is by executing the following plan:

    - All of the Computer Engineering degree programs (BS, MS and PhD) would be moved from the Computer & Information Science and Engineering Dept. to the Electrical and Computer Engineering Dept., along with most of the advising staff.
    - Roughly half of the faculty would be offered the opportunity to move to Electrical/Computer Eng., Biomedical Eng., or Industrial/Systems Eng.
    - Staff positions in CISE which are currently supporting research and graduate programs would be eliminated.
    - The activities currently covered by TAs would be reassigned to faculty, and the TA budget for CISE would be eliminated.
    - Any faculty member who wishes to stay in CISE may do so, but with a revised assignment focused on teaching and advising.

    In short: our department (at least as we know it) is being decimated. Computer & Information Sciences & Engineering (one of 9 departments in the College of Engineering) is taking more than 50% of the cuts. If you're interested in reading the full proposal, you can access it here. A vast, VAST majority of the students and faculty in the department are vehemently opposed to this plan, however the dean is already taking measures to implement it. This is the only proposal on the table right now, and she has not entertained our requests for alternatives. She sees it as an obvious (albeit drastic) solution to our budget problem, citing that many other universities have combined Computer and Electrical Engineering departments. I'll bet those universities didn't have to eliminate an established department to get there, though. The budget goes into effect July 1, 2012 (this is non-negotiable), and the dean's proposal is currently set to be finalized some time next week. We don't have much time! My question to everyone here is this: Are we overreacting to this plan, or are we justified? And could you explain why or why not? It's obvious that CISE students will resist any cuts to our department, but I'm curious to see what other people in the field have to say. Any feedback is greatly appreciated. I will select the answer that saves our department. Just kidding, I'll pick the one that best explains why this is a good or bad decision for the dean to make. Please note that anything you say can and will be used to further our cause (and we might track you down if you provide a compelling argument against us).

    Read the article

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all, we will understand what a Data Scientist is. In the new world of Big Data, I see that pretty much everyone wants to become a Data Scientist, and I have already met lots of people who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist? Data Scientists are experts who understand various aspects of the business and know how to strategize with data to achieve the business goals. They should have a solid foundation in data algorithms, modeling, and statistical methodology.

    What do Data Scientists do? Data Scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from the available data. They innovate and draw entirely new meaning from the existing data. They are artists in the guise of computer analysts. They look at the data traditionally as well as explore various new ways of looking at it. Data Scientists do not wait to build their solutions from existing data: they think creatively, and they think before the data has even entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time; this tremendously helps to build solutions rapidly. Besides being data experts, the major quality of Data Scientists is "curiosity". They always wonder what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with data, going beyond the job descriptions of Data Analysts or Business Analysts.

    Skills Required for Data Scientists. Here are a few of the skills a Data Scientist must have:

    - Expert-level skills with statistical tools like SAS, Excel, R, etc.
    - Understanding of mathematical models
    - Hands-on experience with visualization tools like Tableau, PowerPivot, D3.js, etc.
    - Analytical skills to understand business needs
    - Communication skills

    On the technology front, any Data Scientist should know the underlying technologies (Hadoop, Cloudera) as well as their entire ecosystem (programming languages, analysis and visualization tools, etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant field will not suffice.

    Final Note. Data Scientist is indeed a very exciting job profile. Research suggests there are not enough Data Scientists in the world to handle the current data explosion. In the near future, data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job to focus on if you like data and the science of statistics. Courtesy: EMC.

    Tomorrow: In tomorrow’s blog post we will discuss various Big Data learning resources.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Objective measures of the power of programming languages

    - by Casebash
    Are there any objective ways of measuring the power of programming languages? Turing-completeness is one, but it is not particularly discriminating. I also remember there being a few other, more limited measures of power (like finite state automata), but is there any objective measure that is more powerful than Turing-completeness?

    Read the article

  • Boosting my GA with Neural Networks and/or Reinforcement Learning

    - by AlexT
    As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble, I've got a genetic algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze. That being said, the GA alone is okay, but I'd like to beef it up with a neural network, even though I have no real working knowledge of neural networks (no formal theoretical CS education). After doing a bit of reading on the subject, I found that a neural network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as

        1 0 0 1 0 1 0 1 0 1 1 1 0 0 ...

    How could I use a neural network (I'm assuming an MLP?) to train and improve my genome? In addition to this, as I know nothing about neural networks, I've been looking into implementing some form of reinforcement learning using my maze matrix (2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me (from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm):

        1. Set the parameter γ (discount factor) and the environment reward matrix R
        2. Initialize matrix Q as the zero matrix
        3. For each episode:
           - Select a random initial state
           - Do while the goal state has not been reached:
             - Select one among all possible actions for the current state
             - Using this possible action, consider going to the next state
             - Get the maximum Q value of this next state, based on all possible actions
             - Compute Q(state, action) = R(state, action) + γ × max[Q(next state, all actions)]
             - Set the next state as the current state
           End Do
        End For

    The big problem for me is implementing the reward matrix R, understanding what exactly the Q matrix is, and getting the Q value. I use a multi-dimensional array for my maze and enum states for every move. How would these be used in a Q-learning algorithm? If someone could help out by explaining what I would need to do to implement this (preferably in Java, although C# would be nice too), possibly with some source code examples, it'd be appreciated.
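
    For concreteness, here is a minimal tabular Q-learning sketch in Java under stated assumptions: a toy five-state corridor stands in for the maze, transitions are deterministic, and the reward is 100 for entering the goal state. All names and the environment are illustrative, not the asker's maze; in the real application, next would index into the 2-D maze array and reward would come from the R matrix keyed by the move enum.

        import java.util.Arrays;
        import java.util.Random;

        public class QLearningSketch {
            // Toy stand-in for the maze: a corridor of states 0..4, goal at state 4.
            static final int STATES = 5, ACTIONS = 2, GOAL = 4;  // actions: 0 = left, 1 = right
            static final double GAMMA = 0.8;                     // the tutorial's discount parameter
            static final Random RNG = new Random();

            static int next(int s, int a) {                      // deterministic transition function
                return a == 1 ? Math.min(s + 1, STATES - 1) : Math.max(s - 1, 0);
            }

            static double reward(int s, int a) {                 // the R matrix: 100 for entering the goal
                return next(s, a) == GOAL ? 100.0 : 0.0;
            }

            static double maxQ(double[][] q, int s) {            // best Q value over all actions in s
                return Math.max(q[s][0], q[s][1]);
            }

            public static void main(String[] args) {
                double[][] q = new double[STATES][ACTIONS];        // step 2: initialize Q to zero
                for (int episode = 0; episode < 1000; episode++) { // step 3: for each episode
                    int s = RNG.nextInt(STATES);                   // select a random initial state
                    while (s != GOAL) {                            // do while goal not reached
                        int a = RNG.nextInt(ACTIONS);              // select one possible action
                        int s2 = next(s, a);                       // consider going to the next state
                        // compute Q(s,a) = R(s,a) + gamma * max Q(s', all actions)
                        q[s][a] = reward(s, a) + GAMMA * maxQ(q, s2);
                        s = s2;                                    // set the next state as current
                    }
                }
                System.out.println(Arrays.deepToString(q));        // argmax over Q now walks to the goal
            }
        }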

    Read the article

  • Two's complement: why the name "two"?

    - by lenatis
    I know unsigned, two's complement, ones' complement, and sign-magnitude, and the differences between them, but what I'm curious about is: why is it called two's (or ones') complement? Is there a more general N's complement? In what way did these geniuses deduce such a natural way to represent negative numbers?
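
    There is indeed a generalization, usually called the radix complement: for an n-digit value x in base b, the b's complement is b^n − x, and two's complement is just the b = 2 case (ones' complement is the "diminished radix complement", (b^n − 1) − x). A small sketch, with made-up names, illustrating the arithmetic:

        public class RadixComplement {
            // b's complement of an n-digit value x in base b: b^n - x.
            static long complement(long x, int base, int digits) {
                long power = 1;
                for (int i = 0; i < digits; i++) power *= base;  // b^n
                return power - x;
            }

            public static void main(String[] args) {
                // Two's complement of 5 in 8 bits: 2^8 - 5 = 251 = 11111011.
                System.out.println(Long.toBinaryString(complement(5, 2, 8)));
                // Ten's complement of 25 in 3 digits: 10^3 - 25 = 975.
                System.out.println(complement(25, 10, 3));
            }
        }

    The representation is "natural" because adding b^n − x is congruent to subtracting x modulo b^n, so the same adder hardware performs both addition and subtraction.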

    Read the article

  • Dynamic programming - Coin change decision problem?

    - by Tony
    I'm reviewing some old notes from my algorithms course, and the dynamic programming problems are seeming a bit tricky to me. I have a problem where we have an unlimited supply of coins, with some denominations x1, x2, ... xn, and we want to make change for some value X. We are trying to design a dynamic program to decide whether change for X can be made or not (not minimizing the number of coins, or returning which coins, just true or false). I've done some thinking about this problem, and I can see a recursive method of doing this where it's something like:

        MakeChange(X, x[1..n])   // x holds the coin denominations
            for (int i = 1; i <= n; i++) {
                if (X - x[i] == 0 || (X - x[i] > 0 && MakeChange(X - x[i], x)))
                    return true;
            }
            return false;

    Converting this to a dynamic program is not coming so easily to me. How might I approach this?
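
    As a sketch of the dynamic-programming direction (my wording, not the course notes'): tabulate the same recursion bottom-up over all values from 0 to X, so each subproblem is solved once, in O(n·X) time instead of the exponential blow-up of the plain recursion:

        public class CoinChange {
            // can[v] is true iff change for value v can be made from the denominations.
            static boolean makeChange(int X, int[] coins) {
                boolean[] can = new boolean[X + 1];
                can[0] = true;                       // base case: zero needs no coins
                for (int v = 1; v <= X; v++)         // solve subproblems in increasing order
                    for (int c : coins)
                        if (c <= v && can[v - c]) {  // use coin c; the remainder must be makeable
                            can[v] = true;
                            break;
                        }
                return can[X];
            }

            public static void main(String[] args) {
                System.out.println(makeChange(11, new int[]{4, 5}));  // false
                System.out.println(makeChange(13, new int[]{4, 5}));  // true (4 + 4 + 5)
            }
        }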

    Read the article

  • Converting EBNF to BNF

    - by Vivin Paliath
    It's been a few years since my computer-language class, so I've forgotten the finer points of BNFs and EBNFs, and I don't have a textbook next to me. Specifically, I've forgotten how to convert an EBNF into BNF. From what little I remember, I know that one of the main points is to convert { term } into <term> | <many-terms>. But I don't remember the other rules. I've tried to look this up online, but I can only find links to either homework questions or a small comment about converting terms with curly braces. I can't find an exhaustive list of rules that define the translation.
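
    For what it's worth, the usual rewrites are sketched below (nonterminal names are made up): repetition becomes recursion, and options become alternation over the two cases.

        EBNF repetition:  <list> ::= { <term> }
        BNF:              <list> ::= <empty> | <term> <list>

        EBNF option:      <stmt> ::= <expr> [ ";" ]
        BNF:              <stmt> ::= <expr> | <expr> ";"

        EBNF grouping:    <sum> ::= <term> { ("+" | "-") <term> }
        BNF:              <sum> ::= <term> | <sum> <add-op> <term>
                          <add-op> ::= "+" | "-"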

    Read the article

  • I don't understand access modifiers in OOP (Java)

    - by Imran
    I know this is a silly question, but I don't understand access modifiers in OOP. Why do we, for example in Java, make instance variables private and then use public getter and setter methods to access them? I mean, what's the reasoning/logic behind this? You still get to the instance variable, so why use setter and getter methods when you can just make your variables public? Please excuse my ignorance, as I'm simply trying to understand why we do this. Thank you in advance ;-)
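
    A small sketch of the usual justification (class and field names made up for illustration): a setter can enforce an invariant, and the hidden representation can change later without breaking callers, neither of which a public field allows.

        public class Account {
            private long balanceCents;           // representation is hidden and can change

            public long getBalanceCents() {
                return balanceCents;
            }

            public void deposit(long cents) {
                if (cents <= 0)                  // a public field could never enforce this
                    throw new IllegalArgumentException("deposit must be positive");
                balanceCents += cents;
            }
        }

    With a public field, every caller could write any value at any time; with the method, invalid states are rejected in one place.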

    Read the article

  • Dijkstra’s algorithms - a complete list

    - by baris_a
    Hi guys, I recently asked a question about one of Dijkstra's algorithms, but almost everyone thought it was the shortest-path algorithm. Therefore, I opened this post to gather all the algorithms that were invented by Dijkstra. Please add any you know. Thanks in advance. 1) Shunting-yard algorithm

    Read the article

  • NetLogo programming question - catalyst implementation, part 2

    - by user286190
    Hi, the catalyst speeds up the reaction but remains unchanged after the reaction has taken place. I tried the following code:

        breed [catalysts catalyst]
        breed [chemical-x chemical-x]

        ; the forward reaction is sped up by the existence of catalysts
        to react-forward
          let num-catalysts count catalysts
          ; speed up by num-catalysts
          ; ...
        end

    and it works fine, but I want to make it so that the catalyst can be switched on and off with a 'switch' button, so one can see the effects with and without the catalyst. I tried putting a switch in, but 'catalyst' has already been defined. Also, I want to make the catalyst visible, so one can see it in the actual implementation (in the world), like making it a turtle. Is there another way to implement this, apart from using breeds? I tried making the catalyst a turtle, but it doesn't work:

        ; make catalysts visible in the world
        clear-all
        create-catalysts 100
        ask catalysts [ set color white ]
        show [breed] of one-of catalysts   ; prints catalysts

    Any help will be greatly appreciated, thank you.

    Read the article

  • Papers on Software Methodology recommendation

    - by kunjaan
    Please recommend software engineering/methodology/practices papers. So far I have enjoyed:

    - 1968, Dijkstra: Go To Statement Considered Harmful (reasoning about program correctness)
    - Niklaus Wirth: Program Development by Stepwise Refinement (not worried about program structure)
    - 1971, David Parnas: Information Distribution Aspects of Design Methodology
    - 1972, Liskov: A Design Methodology for Reliable Software Systems
    - Extensible languages: Schuman and P. Jourrand; R. Balzer
    - Structured programming: Dahl, Hierarchical Program Structures
    - 1971, Jim Morris: Protection in Programming Languages
    - 1973, Bill Wulf and Mary Shaw: Global Variable Considered Harmful
    - 1974, Liskov and Zilles: ADTs

    Read the article

  • How do I work out IEEE 754 64-bit Floating Point Double Precision?

    - by yousef gassar
    Hello, I have done this in 32 bits, but I couldn't do it in 64 bits; please, I need help. I am stuck on this question and don't know how to work it out. This is the question: Below are two numbers represented in IEEE 754 64-bit floating point double precision; the bias of the signed exponent is -1023. Any particular real number N represented in 64-bit form (i.e. with the following bit fields: 1-bit sign, 11-bit exponent, 52-bit fraction) can be expressed in the form ±1.F2 × 2^X by substituting the bit-field values using formula (IV.I):

        N = (-1)^S × 1.F2 × 2^(E - 1023),  for 0 < E < 2047 .................... (IV.I)

    where N = the number represented, S = sign bit value, E = exponent = X + 1023, and F = fraction (mantissa) are the values in the 1-, 11- and 52-bit fields respectively of the IEEE 754 64-bit FP representation. Using formula (IV.I), express the 64-bit FP representation of each number as: (i) a binary number of the form ±1.F2 × 2^X; (ii) a decimal number of the form ±0.F10 × 10^Y (limit F10 to 10 decimal places).

        Number 1:  Sign (1 bit): 0
                   Exponent (11 bits): 1000 0001 001
                   Fraction (52 bits): 1111 0111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

        Number 2:  Sign (1 bit): 1
                   Exponent (11 bits): 1000 0000 000
                   Fraction (52 bits): 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000

    I know I have to use the formula for each of these, but how do I work it out? Is it like this?

        N = (-1)^S × 1.F2 × 2^(E - 1023)
          = 1 × 1.1111 0111 0000 ... 0000 × 2^(1000 0001 001₂ − 1023)?
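
    As a cross-check for this kind of exercise, here is a sketch that applies formula (IV.I) mechanically to a 64-bit pattern. The pattern below is an illustrative one (it decodes to 23.0), not either of the question's numbers:

        public class Ieee754Decode {
            public static void main(String[] args) {
                long bits = 0x4037000000000000L;          // illustrative pattern (= 23.0)
                long s = bits >>> 63;                     // 1-bit sign
                long e = (bits >>> 52) & 0x7FFL;          // 11-bit exponent, biased by 1023
                long f = bits & 0xFFFFFFFFFFFFFL;         // 52-bit fraction
                // formula (IV.I): N = (-1)^S * 1.F * 2^(E - 1023)
                double n = (s == 0 ? 1 : -1)
                         * (1.0 + f / Math.pow(2, 52))
                         * Math.pow(2, e - 1023);
                System.out.println(n);                              // 23.0
                System.out.println(Double.longBitsToDouble(bits));  // library check: 23.0
            }
        }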

    Read the article

  • Theory of Computation - Showing that a language is regular

    - by Tony
    I'm reviewing some notes for my course on Theory of Computation, and I'm a little bit stuck on showing the following statement; I was hoping somebody could help me out with an explanation :) Let A be a regular language. The language B = { ab | a is in A and b is not in A* }. Why is B a regular language? Some points are obvious to me. If b is simply a constant string, this is trivial: since we know a is in A and b is a string, and regular languages are closed under union, unioning the languages that accept these two strings is obviously regular. I'm not sure that b is constant, however. Maybe it is, and if so, then this isn't really an issue. I'm having a hard time making sense of it. Thanks!

    Read the article

  • Turing model vs. Von Neumann model

    - by Santhosh
    First, some background (based on my understanding). The Von Neumann architecture describes the stored-program computer, where instructions and data are stored in memory and the machine works by changing its internal state, i.e. an instruction operates on some data and modifies it. So inherently, there is state maintained in the system. The Turing machine architecture works by manipulating symbols on a tape, i.e. a tape with an infinite number of slots exists, and at any one point in time the Turing machine is at a particular slot. Based on the symbol read at that slot, the machine changes the symbol and moves to a different slot. All of this is deterministic. My questions are: Is there any relation between these two models (was the Von Neumann model based on or inspired by the Turing model)? Can we say that the Turing model is a superset of the Von Neumann model? Does functional programming fit into the Turing model? If so, how? (I assume FP does not lend itself nicely to the Von Neumann model.)

    Read the article
