Search Results

Search found 2838 results on 114 pages for 'considered harmful'.


  • What distinguishes a computer scientist/software engineer from regular people who learn a programming language and APIs?

    - by Amumu
    In university, we learn and reinvent the wheel a lot in order to truly learn programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply; memory management in C, for instance, is just an abstraction over manually managed memory contents and addresses. The problem is that once we go to work, productivity usually matters more. I could write my own containers, string class, or date/time handling (using POSIX system calls from C) to do the job, but that would take much longer than using the existing STL or Boost libraries, which abstract all of those things away and are very easy to use.
    This leads to an issue: a regular person who learns only one programming language and its APIs doesn't need to go through all the low-level, under-the-hood material. These people may eventually compete with mainstream computer science or software engineering graduates and call themselves programmers. At first, I didn't think it was valid to call them programmers; I used to think a real programmer needs to understand the computer deeply (though not at the electronics level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works? You may fall behind the "amateurs" if you spend too much time learning how things work inside. It is perfectly valid for those people to call themselves programmers. This confuses me.
    So, after all, should programming be considered a universal skill? Do the programming language and its concepts matter, or do the problems we solve matter? Take the many C/C++ vs. Java (and other high-level language) debates: one of the main arguments is that C/C++ offers performance as well as access to low-level facilities. Another reason, in my opinion, is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone, just my observation and my own experience as well; try googling "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Following Java's rationale, if programming languages keep evolving, one day everyone will be able to map their logic directly to natural language, and everyone will be able to program. On that day, maybe the real programmers will be mathematicians, who can express the most complex logic (business logic as well as academic logic) without worrying about installing or configuring compilers and IDEs.
    What is our job as computer scientists/software engineers: to solve computer-specific problems, or to solve problems in general? For example, take a look at this problem set: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf . It requires only basic knowledge of the programming language but focuses on problem solving with the language. In sum, what distinguishes a computer scientist/software engineer from regular people who learn a programming language and APIs? A mathematician can be considered a programmer if he is good enough at using a programming language to implement his formulas. Can we programmers do the reverse? Probably not, for most of us, since we specialize in computers, not math. An electronics engineer who learns how to use C to program his devices can also be considered a programmer.
    If programming languages keep being simplified, may the software engineers who implement business logic and create software one day become obsolete? (Not computer scientists, though, since many CS topics are scientific, and while technology will change, the science won't.)

    Read the article

  • Synchronous Actions

    - by Dan Krasinski-Oracle
    Since the introduction of SMF, svcadm(1M) has had the ability to enable or disable a service instance and wait for that service instance to reach a final state. With Oracle Solaris 11.2, we’ve expanded the set of administrative actions which can be invoked synchronously. Now all subcommands of svcadm(1M) have synchronous behavior. Let’s take a look at the new usage:

        Usage: svcadm [-v] [cmd [args ... ]]

        svcadm enable [-rt] [-s [-T timeout]] <service> ...
                enable and online service(s)
        svcadm disable [-t] [-s [-T timeout]] <service> ...
                disable and offline service(s)
        svcadm restart [-s [-T timeout]] <service> ...
                restart specified service(s)
        svcadm refresh [-s [-T timeout]] <service> ...
                re-read service configuration
        svcadm mark [-It] [-s [-T timeout]] <state> <service> ...
                set maintenance state
        svcadm clear [-s [-T timeout]] <service> ...
                clear maintenance state
        svcadm milestone [-d] [-s [-T timeout]] <milestone>
                advance to a service milestone
        svcadm delegate [-s] <restarter> <svc> ...
                delegate service to a restarter

    As you can see, each subcommand now has a ‘-s’ flag. That flag tells svcadm(1M) to wait for the subcommand to complete before returning. For enables, that means waiting until the instance is either ‘online’ or in the ‘maintenance’ state. For disable, the instance must reach the ‘disabled’ state. Other subcommands complete when:

    restart
        A restart is considered complete once the instance has gone offline after running the ‘stop’ method, and then has either returned to the ‘online’ state or has entered the ‘maintenance’ state.
    refresh
        If an instance is in the ‘online’ state, a refresh is considered complete once the ‘refresh’ method for the instance has finished.
    mark maintenance
        Marking an instance for maintenance completes when the instance has reached the ‘maintenance’ state.
    mark degraded
        Marking an instance as degraded completes when the instance has reached the ‘degraded’ state from the ‘online’ state.
    milestone
        A milestone transition can occur in one of two directions. Either the transition moves from a lower milestone to a higher one, or from a higher one to a lower one. When moving to a higher milestone, the transition is considered complete when the instance representing that milestone reaches the ‘online’ state. The transition to a lower milestone, on the other hand, completes only when all instances which are part of higher milestones have reached the ‘disabled’ state.

    That’s not the whole story. svcadm(1M) will also try to determine if the actions initiated by a particular subcommand cannot complete. Trying to enable an instance which does not have its dependencies satisfied, for example, will cause svcadm(1M) to terminate before that instance reaches the ‘online’ state. You’ll also notice the optional ‘-T’ flag which can be used in conjunction with the ‘-s’ flag. This flag sets a timeout, in seconds, after which svcadm gives up on waiting for the subcommand to complete and terminates. This is useful in many cases, but in particular when the start method for an instance has an infinite timeout but might get stuck waiting for some resource that may never become available. For the C-oriented, each of these administrative actions has a corresponding function in libscf(3SCF), with names like smf_enable_instance_synchronous(3SCF) and smf_restart_instance_synchronous(3SCF). Take a look at smf_enable_instance_synchronous(3SCF) for details.

    Read the article

  • C# using consts in static classes

    - by NickLarsen
    I was plugging away on an open source project this past weekend when I ran into a bit of code that confused me enough to look up the usage in the C# specification. The code in question is as follows:

        internal static class SomeStaticClass
        {
            private const int CommonlyUsedValue = 42;

            internal static string UseCommonlyUsedValue(...)
            {
                // some code
                value = CommonlyUsedValue + ...;
                return value.ToString();
            }
        }

    I was caught off guard because this appears to be a non-static field being used by a static function, which somehow compiled just fine in a static class! The specification states (§10.4):

        A constant-declaration may include a set of attributes (§17), a new modifier (§10.3.4), and a valid combination of the four access modifiers (§10.3.5). The attributes and modifiers apply to all of the members declared by the constant-declaration. Even though constants are considered static members, a constant-declaration neither requires nor allows a static modifier. It is an error for the same modifier to appear multiple times in a constant declaration.

    So now it makes a little more sense, because constants are considered static members, but the rest of the sentence is a bit surprising to me. Why is it that a constant-declaration neither requires nor allows a static modifier? Admittedly I did not know the spec well enough for this to immediately make sense in the first place, but why was the decision made not to force constants to use the static modifier if they are considered static members? Looking at the last sentence in that paragraph, I cannot figure out whether it refers directly to the previous statement, meaning there is some implicit static modifier on constants to begin with, or whether it stands on its own as another rule for constants. Can anyone help me clear this up?
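    A minimal sketch of the behavior being described, with made-up names (an illustration, not the project's actual code): a const compiles inside a static class because it is implicitly a static member, while explicitly writing 'static const' is rejected by the compiler.

        internal static class Config
        {
            // Legal: a const is implicitly a static member, so no 'static'
            // modifier is needed (or allowed), even inside a static class.
            private const int CommonlyUsedValue = 42;

            // Illegal: uncommenting the next line causes a compile error,
            // because 'static' may not be combined with a constant declaration.
            // private static const int AnotherValue = 7;

            internal static string UseCommonlyUsedValue(int offset)
            {
                // The constant is read exactly as if it were a static field.
                int value = CommonlyUsedValue + offset;
                return value.ToString();
            }
        }

    Calling Config.UseCommonlyUsedValue(1) returns "43", exactly as it would if CommonlyUsedValue were declared as a static readonly field.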

    Read the article

  • Techniques for sharing a value among classes in a program

    - by Kenneth Cochran
    I'm using Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) + @"\MyProgram" as the path to store several files used by my program. I'd like to avoid pasting the same snippet of code all over my application. I need to ensure that:

    - The path cannot be accidentally changed once it's been set.
    - The classes that need it have access to it.

    I've considered:

    - Making it a singleton
    - Using constructor dependency injection
    - Using property dependency injection
    - Using AOP to create the path where it's needed

    Each has pros and cons. The singleton is everyone's favorite whipping boy; I'm not opposed to using one, but there are valid reasons to avoid it if possible. I'm already heavily using constructor injection through Castle Windsor, but this is a path string, and Windsor doesn't handle system-type dependencies very gracefully. I could always wrap it in a class, but that seems like overkill for something as simple as passing around a string value, and in any case this route would add yet another constructor argument to each class where it is used. The problem I see with property injection in this case is that there is a large amount of indirection from where the value is set to where it is needed; I would need a very long line of middlemen to reach all the places where it's used. AOP looks promising, and I'm planning on using AOP for logging anyway, so this at least sounds like a simple solution. Are there any other options I haven't considered? Am I off base with my evaluation of the options I have considered?
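    One middle ground that can be sketched for the wrapper-class route (the AppPaths name is made up for illustration) is a tiny immutable class that computes the path once and exposes it read-only:

        using System;
        using System.IO;

        public sealed class AppPaths
        {
            private readonly string dataDirectory;

            public AppPaths()
            {
                dataDirectory = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                    "MyProgram");
            }

            // Read-only: the path cannot be accidentally changed once it has been set.
            public string DataDirectory
            {
                get { return dataDirectory; }
            }
        }

    Registered with Windsor as a single instance, it behaves like a singleton without the global-state drawbacks, and consumers take a dependency on AppPaths in their constructors rather than on a bare string.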

    Read the article

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi All, let me define the problem first and explain why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to implement my application from the ground up with this in mind. I have decided to tackle this problem by using the Microsoft Message Queue and performing inserts asynchronously, as time permits. However, I quickly ran into a problem: certain inserts that I perform may need to be recalled (i.e., retrieved) immediately (imagine this is for a POS system, and consider what happens if you need to recall the last transaction - one that still hasn’t been inserted). The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer (I have considered the other issues that occur in such a scenario, i.e. essentially dirty reads and such, and have concluded that for my purposes I can control them). However, this is where things get a little nasty... I’ve worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue - one where I can minimize the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly. Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered… Thanks again.
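    One rough sketch of the predicate idea (every type and member name here is invented for illustration, and in-memory collections stand in for the real SQL and MSMQ back ends): express each query once as a Func<T, bool> and let the data layer apply it to both the persisted rows and the still-queued messages, so callers see one merged result set.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Sale
        {
            public Guid Id;
            public DateTime When;
            public decimal Amount;
        }

        public interface ISaleSource
        {
            IEnumerable<Sale> Find(Func<Sale, bool> predicate);
        }

        // Stand-in for the SQL-backed store of already-inserted sales.
        public class PersistedSales : ISaleSource
        {
            private readonly List<Sale> rows = new List<Sale>();
            public void Add(Sale s) { rows.Add(s); }
            public IEnumerable<Sale> Find(Func<Sale, bool> predicate) { return rows.Where(predicate); }
        }

        // Stand-in for sales still sitting in the message queue awaiting insert.
        public class QueuedSales : ISaleSource
        {
            private readonly List<Sale> pending = new List<Sale>();
            public void Enqueue(Sale s) { pending.Add(s); }
            public IEnumerable<Sale> Find(Func<Sale, bool> predicate) { return pending.Where(predicate); }
        }

        // The data layer: one query, two sources, one merged answer.
        public class SaleRepository
        {
            private readonly ISaleSource persisted;
            private readonly ISaleSource queued;

            public SaleRepository(ISaleSource persisted, ISaleSource queued)
            {
                this.persisted = persisted;
                this.queued = queued;
            }

            public IEnumerable<Sale> Find(Func<Sale, bool> predicate)
            {
                // Queued items first, so the "last transaction" is recallable
                // even before it has reached the database.
                return queued.Find(predicate).Concat(persisted.Find(predicate));
            }
        }

    With the real back ends, QueuedSales would peek the queue and filter the deserialized messages, while PersistedSales would run the query against the database (an Expression<Func<Sale, bool>> would be needed if the predicate is to be translated into SQL rather than filtered in memory); the duplication shrinks because callers only ever write the predicate once.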

    Read the article

  • is it really necessary to run Apache as a front-end to Glassfish/JBoss/Tomcat?

    - by Caffeine Coma
    I'm primarily a Java developer, and I come to you with a question that straddles the divide between developers and sysadmins. Years ago, when it was a novel thing to run Tomcat as an app server, it was customary to front it with Apache. As I understand it, this was done because:

    - Java was considered "slow", and it was helpful to have Apache serve static content directly.
    - Tomcat couldn't listen on ports 80/443 unless run as root, which was dangerous.

    Java is no longer considered slow, and I doubt adding Apache to the mix will actually help speed things up. As for the ports issue, there are probably simpler ways to connect app servers to ports 80/443 these days. So my question is: is there really any benefit to fronting Java webapps with Apache these days? If so, is Apache still the way to go? Should I look at Nginx? Instead of Tomcat I'm using Glassfish, if that matters.

    Read the article

  • Rate-Limit affects All clients or single IP?

    - by Asad Moeen
    Well, up until now I've assumed that iptables rate-limit rules using the "recent" module work per IP address. For example, a rate-limit rule of 20k/s would trigger only if a single IP exceeds a 20k/s rate, and not if 4 different IPs each send at 5k/s. Please correct me if I have this wrong, as I've only used these rules for TCP/UDP. Today, however, I tried similar rules for ICMP and applied a 4/s limit on input/output. When I then ran a ping test from just-ping.com, I could see packet loss from almost all of its IP addresses. How could that happen? If the limit worked per IP address, the rule shouldn't have been triggered, because I believe each IP from just-ping sends at a rate of roughly 1/s. I still think the per-IP interpretation is right, because otherwise my game server would block everyone whenever the combined rate (with more players connected) exceeded the threshold - and that hasn't happened so far, which is why the ICMP behavior really confused me. Thank you.

    Read the article

  • Windows repair console, impossible?

    - by Daniel
    I found an old Windows XP SP2 CD in my -trash- CD can and tried installing it on a 30 GB FAT32 partition. Installation went fine until the copying operation was completed and XP asked for a reboot. After that, it either starts the installation over again or throws an "invalid disk" error. Starting over is an infinite loop, and the only way out I see is to choose the "Repair console", but I'm not used to working in a DOS box. Can anyone help me through this harmful installation?

    Read the article

  • Mysterious OS X FileVault-related home directory

    - by Nick
    I recently enabled FileVault on Snow Leopard, and after doing so, found a directory /Users/<myusername>.4529809818604982560, containing the original (unencrypted) contents of my home directory, owned by root:wheel with permissions 700, side-by-side with my normal home directory. Does anyone know why this was created (maybe a temporary backup that didn't get erased), or whether deleting it will be harmful?

    Read the article

  • high resolution on small screen size

    - by vishesh
    I have recently got an Intel ultrabook, but its screen size is 13.3" and the native resolution is 1600x900, so the letters that appear on screen are very small. Reducing the resolution blurs the display, and making everything bigger also doesn't feel very good. Is there a way to get around this problem without changing hardware? I am even OK with this high resolution, but I am concerned about the harmful effects it might have on my eyes in the long term. Any advice will be very useful. Please help.

    Read the article

  • Win2008 - restrict VPN user permissions

    - by Sebas
    Windows 2008 R2 SP1 Foundation file server with no AD, only a workgroup sharing some folders, and now an RRAS server. The shared folders are open to everyone in the office (XPs and Sevens) without accounts/passwords, but I was thinking about partially limiting access for the new "VPNuser" account. I'm new to Windows Server and its permission settings: I thought about denying vpnuser access to some folders through NTFS rights. That doesn't work, and now I'm guessing that vpnuser is not treated as a logged-on user (it doesn't appear as such) but as a "guest", like the rest of the people connecting in the office. I say that because of this: http://social.technet.microsoft.com/Forums/windowsserver/en-US/ff6d3726-ff41-4d3f-9d97-5361af0206dd/vpn-users-on-server-shows-as-guest?forum=winserverNIS Also, when I create a txt file over the VPN connection, the owner field shows as "guest". Am I right? How can I give the VPNuser different rights from the rest of the "guest" users in the office?

    Read the article

  • Is there a point in installing antivirus on Ubuntu?

    - by Borewitsch
    I have recently started using Ubuntu, and I am wondering about the point of installing antivirus programs. On SU, I found the opinion that antivirus on Linux only detects "Windows viruses" and removes them. Is there a point in installing antivirus if I don't have any other OS? As far as I know there are no viruses for Linux, but what about malware and other harmful programs? Is it safe not to install any protective software?

    Read the article

  • Advised auditing method for MS SQL to track changes made to a specific table by a specific user?

    - by scape
    What is the best method for tracking changes or logging the queries run against a table by a specific user when that person is using Management Studio? I'm using 2008 R2 Express Edition and want to track a single user who logs in through Management Studio and runs queries to make changes manually. I want to see what query was run and thus determine what was changed and how; I am not interested in restoring the information. I considered Change Tracking, but read that it is not ideal for auditing, and I am also unsure how to read the data. Then I considered the bulk-logged option on the database; however, I would then have to handle log files which may grow huge, as the database is used constantly by a web app. I am wondering whether there is a more concise method to do what I want?

    Read the article

  • Are programmers bad testers?

    - by jhsowter
    I know this sounds a lot like other questions which have already been asked, but it is actually slightly different. It seems to be generally considered that programmers are not good at performing the role of testing an application. For example, from Joel on Software - Top Five (Wrong) Reasons You Don't Have Testers (emphasis mine):

        Don't even think of trying to tell college CS graduates that they can come work for you, but "everyone has to do a stint in QA for a while before moving on to code". I've seen a lot of this. Programmers do not make good testers, and you'll lose a good programmer, who is a lot harder to replace.

    And in this question, one of the most popular answers says (again, my emphasis):

        Developers can be testers, but they shouldn't be testers. Developers tend to unintentionally/unconsciously avoid using the application in a way that might break it. That's because they wrote it and mostly test it in the way it should be used.

    So the question is: are programmers bad at testing? What evidence or arguments are there to support this conclusion? Are programmers only bad at testing their own code? Is there any evidence to suggest that programmers are actually good at testing? What do I mean by "testing"? I do not mean unit testing or anything that is considered part of the methodology used by the software team to write software. I mean some kind of quality assurance method that is used after the code has been built and deployed to whatever that software team would call the "test environment."

    Read the article

  • Why is C++ backward compatibility important / necessary?

    - by Giorgio
    As far as I understand, it is a well-established opinion within the C++ community that C is an obsolete language that was useful 20 years ago but cannot support many modern good programming practices, or even encourages bad practices; certain features that were typical of C++ (C with classes) during the nineties are also obsolete and considered bad practice in modern C++ (e.g., new and delete should be replaced by smart pointer primitives). In view of this, I often wonder why backward compatibility with C and obsolete C++ features is still considered important: to my knowledge there is no 100% compatibility, but most of C and C++ are contained in C++11 as a subset. Of course, there is a lot of legacy code and libraries (possibly containing templates) that are written using a previous standard of the language and which still need to be maintained or used in connection with new code. Nevertheless, maybe it would still be possible to drop obsolete C and C++ features (e.g. the mentioned new / delete) from a future C++ standard so that it is impossible to use them in new code. In this way, old and dangerous programming practices would be quickly banned from new code, and modern, better programming practices would be enforced by the compiler. Legacy code could still be maintained using separate compilation (having C alongside C++ source files is already a common practice). Developers would have to choose between one compiler supporting the old-style C++ that was common during the nineties and a compiler supporting the modern C++? style (the question mark indicates a future, hypothetical revision). Only mixing the two styles would be forbidden. Would this be a viable strategy for encouraging the adoption of modern C++ practices? Are there conceptual reasons or technical problems (e.g. compiling existing templates) that make such a change undesirable or even impossible? Has such a development been proposed in the C++ community? If there has been some extended discussion on the topic, is there any material on-line?

    Read the article

  • Distinguishing repetitive code with the same implementation

    - by KyelJmD
    Given this sample code:

        import java.util.ArrayList;
        import blackjack.model.items.Card;

        public class BlackJackPlayer extends Player {
            private double bet;
            private Hand hand01 = new Hand();
            private Hand hand02 = new Hand();

            public void addCardToHand01(Card c) { hand01.addCard(c); }
            public void addCardToHand02(Card c) { hand02.addCard(c); }
            public void bustHand01() { hand01.setBust(true); }
            public void bustHand02() { hand02.setBust(true); }
            public void standHand01() { hand01.setStand(true); }
            public void standHand02() { hand02.setStand(true); }
            public boolean isHand01Bust() { return hand01.isBust(); }
            public boolean isHand02Bust() { return hand02.isBust(); }
            public boolean isHand01Standing() { return hand01.isStanding(); }
            public boolean isHand02Standing() { return hand02.isStanding(); }
            public int getHand01Score() { return hand01.getCardScore(); }
            public int getHand02Score() { return hand02.getCardScore(); }
        }

    Is this considered repetitive code, given that each method operates on a separate field but has the same implementation? Note that hand01 and hand02 should be distinct. If this is considered repetitive code, how would I address it, given that each hand is a separate entity?

    Read the article

  • What do you think are the biggest software development issues, in small to medium businesses?

    - by Ron-Damon
    Hi, I own a small software development company that develops Web software for other small and medium companies in Chile. The business process is very complex, and it is hard to establish where to put our effort to make the company better, more efficient, and able to deliver better solutions. I'm also a master's student in IT, and I'm writing a paper on this subject, so any help would be great both for my company and for my paper. I have considered 3 areas for the problems: 1) software development problems, 2) Web development problems, and 3) small and medium company problems. I don't know about you, but at least this "business formula" in Chile has not received very much support. It is getting better, but today my company is far from being self-sufficient.

    UPDATE: Thanks guys for your support so far. I'm updating because I now have enough information to go deeper into the subjects, which I would like you to consider in your next answers/commentaries:

    1) Software development problems (3)
    1.1 Incomplete problem picture
    1.2 Useless delivered software
    1.3 Unrealistic or inadequate schedule

    2) Web development problems (3)
    2.1 Apparently non-viable implementation
    2.2 Inefficient module construction design
    2.3 Reduced result system inter-operability

    3) Small and medium companies problems (3)
    3.1 Very specific but narrow required system characteristics
    3.2 Developed system is not used
    3.3 Positivist demand for activities in project execution

    There are only 3 problems per category, to deliberately keep the scope thin. I have also considered that it would have been appropriate to split the third classification in two - 3) small and medium software development provider problems and 4) small and medium software development client problems - but I won't do that just now. That would widen the scope of the problem, and it is not what I want to do, at least until I'm thorough with the other two classifications. What do you think?

    Read the article

  • Universal navigation menu across domains

    - by Jon Harley
    I'd like to start by saying that I've searched for hours and could not find a definitive answer to my question. Across different sites on different second-level domains exists a universal navigation bar with a collection of roughly 30 links. This universal bar is exactly the same for every page on each domain. The bar's HTML, CSS and JavaScript are all stored in a subfolder for each domain, and the HTML is embedded upon serving the page and is not being injected on the client side. None of the links use any rel directives and are as vanilla as can be. My question is about Google's duplicate content rule. Would something like this be considered duplicate content? Matt Cutts's blog post about duplicate content mentions boilerplate repetition, but then he mentions lengthy legalese. Since the text in this universal bar is brief and uses common terms, I wonder if this same rule applies. If this is considered duplicate content, what would be a good way to correct the problem? Thank you for your help.

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • Multi language site - use of canonical link and link rel="alternate"

    - by julia
    I keep reading everywhere that if you have a multilanguage site, where the same page appears in, say, French and English, then this is considered duplicate content by Google. It is written that using a canonical link is the solution, but I do not understand how to use it in this case. Should I choose either the French URL or the English URL to be the canonical (main) one, and place the canonical link there? If so, how do I decide which of the two URLs must be canonical? Both languages are important to me, and I want the content in both languages to be indexed by Google and served to the user depending on the language in which he searches. Or should I place a canonical link on both the French and English URLs? If so, I do not understand the point of using the canonical link: in this case, would both URLs be indexed, and would both of them be considered "important" by Google rather than duplicates? Also, I read that link rel="alternate" can be used to indicate to Google that, for example, the French URL is the French-language equivalent of the English page. This makes sense and I understand how to use such links, but how are they combined with canonical links? Should I define both the canonical URL AND specify rel="alternate" on both URLs? Could someone help me clarify this, because I'm stuck and can't seem to find a good enough explanation in different sources.

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these:

        date, ticker, open, high, low, close, <and some other values>

    and you want to execute a pattern matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see: Doji). The pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern matching routine can take an arbitrary number of rows to perform an analysis and can contain an arbitrary number of subpatterns; in other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima, as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1] (see link). What would be the most natural choice of data structure? What about an interface that adapts a Ticker object containing one row of data to a collection of Ticker, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular list that has the following methods:

        push_front()
        push_back()
        pop_front()
        pop_back()
        []  // overloaded, can be used with negative parameters

    But that data structure seems very clumsy: with so much pushing and popping going on, I have to make a deep copy of the structure before running an analysis on it. So, I don't know if I made my question very clear -- but the main points are: What kinds of data structures should be considered when analyzing sequential data points to match a pattern that does NOT require random access? What kinds of data structures should be considered when classifying extrema of a set of data points?
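    Since each pattern only ever looks at a bounded run of consecutive rows, one sketch of an alternative (in C#; the Tick type, its field names, and the sample doji predicate are all made up for illustration) is to stream the rows through a fixed-size sliding window and test the pattern on each window, which avoids both random access and deep copies of the whole series:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Tick
        {
            public DateTime Date;
            public string Ticker;
            public double Open, High, Low, Close;
        }

        public static class PatternScanner
        {
            // Streams the rows once, yielding every run of 'size' consecutive
            // ticks that satisfies the supplied pattern predicate.
            public static IEnumerable<Tick[]> Scan(
                IEnumerable<Tick> rows, int size, Func<Tick[], bool> pattern)
            {
                var window = new Queue<Tick>(size);
                foreach (var row in rows)
                {
                    window.Enqueue(row);
                    if (window.Count > size)
                        window.Dequeue();                // drop the oldest row
                    if (window.Count == size)
                    {
                        var snapshot = window.ToArray(); // copy only the window, not the series
                        if (pattern(snapshot))
                            yield return snapshot;
                    }
                }
            }
        }

    A rough single-row doji test could then be written as a predicate, e.g. PatternScanner.Scan(rows, 1, w => Math.Abs(w[0].Open - w[0].Close) <= 0.1 * (w[0].High - w[0].Low)), and multi-row patterns simply use a larger window; extrema classification, which does need to look across a whole closed interval, can still be served separately by an indexable structure such as an array or a list.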

    Read the article

  • How should I manage persistent score in Game Center leaderboards?

    - by Omega
    Let's say that I'm developing an iOS RPG where the player gains 1 point per monster kill. The number of monsters killed is persistent data: it is an endless adventure, and the score keeps on growing. It isn't a "session score" like Fruit Ninja, but rather a "reputation score". There are Game Center leaderboards for that score. Keep killing monsters, your score goes up, and the leaderboards are updated. My problem is that, technically, you can log out and log in using a different Game Center account, kill one monster, and the leaderboards will be updated for the new GC account. Supposing that this score is a big deal, this could be considered cheating, because if you have a score of 2000, any of your friends who have never played the game can simply log into your iPhone, play the game, and the system will update the score for their accounts, essentially giving them 2000 points in the leaderboards for doing nothing. I have considered linking one GC account to a specific save game. It won't update your score unless you're using the linked GC account. But what if the player actually needs to change their GC account? Technically they would be forced to start a new game and link their account to that profile. How should I prevent this kind of cheat? Essentially, I don't want someone to distribute a high score to multiple GC accounts, given the fact that the game updates the score constantly since it isn't a "session score". I do realize that it isn't quite a big deal. But I'm curious about how to avoid this.
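    One way to sketch the "linked account" idea without forcing a brand-new game on a legitimate account change (all field and method names here are hypothetical, and the leaderboard call is a placeholder rather than real Game Center API): record who owns the save, and count only the kills made while that owner was signed in.

        using System;

        public class SaveGame
        {
            public string OwnerPlayerId;   // recorded the first time a score is reported
            public long MonstersKilled;    // total progress, never reset
            public long ScoreBaseline;     // kills made before the current owner took over
        }

        public static class ScoreReporter
        {
            // Only kills made while the current owner was signed in count toward
            // that owner's leaderboard entry, so switching accounts on the same
            // device does not hand the new account 2000 free points.
            public static void Report(SaveGame save, string signedInPlayerId,
                                      Action<long> submitToLeaderboard)
            {
                if (save.OwnerPlayerId != signedInPlayerId)
                {
                    save.OwnerPlayerId = signedInPlayerId;
                    save.ScoreBaseline = save.MonstersKilled;
                }
                submitToLeaderboard(save.MonstersKilled - save.ScoreBaseline);
            }
        }

    A player who genuinely switches accounts keeps playing from the same save; only the score reported for the new account starts from zero.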

    Read the article

  • What to include in metadata?

    - by shyam
    I'm wondering if there are any general guidelines or best practices regarding when to split data into a metadata format, as opposed to directly embedding it within the data. (Specific example below.) My understanding of metadata is that it describes data (without the need to actually look at the data), allowing data to be quickly searched/filtered for easy access. Let's take for example a simple 3D model format. The actual data file itself is a binary file containing vertices and colors. Things like creation date, modified date and author name would be things that describe the binary data, so I would say these belong as metadata (outside of the binary file). But what if the application had no need to search or filter by these fields? Would it be acceptable to embed these fields directly into the binary data itself? Could they be duplicated in both the binary data and the metadata, or would this be considered bad practice? What about more ambiguous fields such as the model name, which could be considered part of the data itself, but also as data describing the binary data?... How do you decide which data to embed in the actual binary file, as opposed to separating it into a more flexible metadata format? Thanks!
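    A small sketch of the sidecar approach for the 3D-model example (the file layout, field names, and ".meta" extension are all invented for illustration): the binary file carries only the vertex payload, while the describing fields live next to it in a form that can be scanned without opening the binary.

        using System;
        using System.IO;

        public class ModelMetadata
        {
            public string Name;
            public string Author;
            public DateTime Created;
            public DateTime Modified;
        }

        public static class ModelWriter
        {
            public static void Save(string path, float[] vertices, ModelMetadata meta)
            {
                // The data file: just the raw geometry.
                using (var writer = new BinaryWriter(File.Create(path)))
                {
                    writer.Write(vertices.Length);
                    foreach (var v in vertices)
                        writer.Write(v);
                }

                // The sidecar: tiny, human-readable, cheap to search or index.
                File.WriteAllLines(path + ".meta", new[]
                {
                    "name=" + meta.Name,
                    "author=" + meta.Author,
                    "created=" + meta.Created.ToString("o"),
                    "modified=" + meta.Modified.ToString("o")
                });
            }
        }

    Whether a field such as the model name also gets duplicated inside the binary then becomes a deliberate trade-off: duplication keeps the binary self-describing if it travels alone, at the cost of having two places to keep in sync.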

    Read the article

  • Woman Is the World's First Computer Programmer? [closed]

    - by Sveta Bondarenko
    This week, on 10th December, we celebrate the 197th anniversary of the birth of Ada Lovelace, often considered the world's first computer programmer. Ada became famous not only as the daughter of the Romantic poet Lord Byron but also as an outstanding 19th-century mathematician. Her work on the Analytical Engine is recognized as containing the first algorithm intended to be processed by a machine. Women have always played a crucial role in the evolution of computer science, but unfortunately they are often considered not as good at programming and engineering as men. Even though the fair sex makes up a growing portion of computer and Internet users, there is still a large gender gap in the field of Computer Science. But all is not lost! According to one study, women's enrollment in computer science rose from 7 percent in 1995 to 42 percent in 2000, and it is still increasing. Soon women will take a well-deserved position among the world's top computer programmers. After all, a number of notable female computer pioneers such as Ada Lovelace, Grace Hopper, and Anita Borg have proven that women make great computer scientists. But will women make great contributions to the modern technology industry? Or is the successful and famous female computer programmer just a pipe dream?

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC) without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:

    1. I work on my Laptop PC and make some changes to software configuration, for example:
       - configure vim to have a new plugin
       - update the Search Tracker / Recoll file search index
       - configure Thunderbird to have an additional IMAP account ('remember password')
       - add some new bookmarks in Firefox/Chrome
       - change the desktop background image
       - install new software with apt-get install
       - build and install new software with checkinstall
       - etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example:
       - add a new directory to the list of directories to be backed up by DejaDup
       - add a new spell-checking dictionary to LibreOffice Writer
       - configure the Terminator software to have colored fonts
       - install a new font into the Ubuntu system
       - configure Ekiga to make phone calls
       - etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.

    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services?

    Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one machine at once.

    Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup.

    Note: I also considered using a bootable USB stick with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.

    Read the article
