Search Results

Search found 11697 results on 468 pages for 'requires sense of humor'.

Page 192 of 468

  • Using Clojure instead of Python for scalability (multi core) reasons, good idea?

    - by Vandell
    After reading http://clojure.org/rationale and other performance comparisons between Clojure and many languages, I started to think that, ease of use aside, I shouldn't be coding in Python anymore, but in Clojure instead. Actually, I began to feel irresponsible for not learning Clojure, seeing its benefits. Does that make sense? Can't I make really efficient use of all cores using a more imperative language like Python, rather than a Lisp dialect or another functional language? It seems that all of Clojure's benefits come from using immutable data; can't I just do that in Python and get all the benefits? I once started to learn some Common Lisp, and read and did almost all the exercises from a book I borrowed from my university library (I found it to be pretty good, despite its low popularity on Amazon). But after a while I found myself struggling too much to do some simple things. I guess some things are more imperative in nature, which makes it difficult to model them in a functional way. The thing is: is Python as powerful as Clojure for building applications that take advantage of this new multi-core future? Note that I don't think that using semaphores, locks, or other similar concurrency mechanisms is a good alternative to Clojure's 'automatic' parallelization.
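    As a rough illustration of the imperative route the question asks about, here is a minimal Python sketch (the work function and the inputs are invented for the example) that spreads a CPU-bound job across all cores with the standard multiprocessing module. Whether this counts as "making efficient use of all cores" in the poster's sense is exactly the trade-off being debated, since each worker is a separate process that copies its data rather than sharing immutable structures:

        from multiprocessing import Pool

        def crunch(n):
            # stand-in for any CPU-bound, side-effect-free task
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            inputs = [2_000_000] * 8
            with Pool() as pool:          # one worker process per core by default
                results = pool.map(crunch, inputs)
            print(sum(results))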

    Read the article

  • Oracle is Child's Play… in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle's acquisition of Haley. We recently read an interesting report from Down Under, and here is our press release on the implementation of Oracle's Policy Automation software in New South Wales, which I thought I would share. We always love hearing about our software "at work", especially in the Public Sector social services area, where it makes a big difference to people's lives. Here are some of the reasons why NSW chose Oracle software: "One of the things Oracle's Policy Automation system is good at is allowing you to take decision trees and rules that are obviously written in English and code them up using very much a natural language approach," said Holling (CIO for Human Services). "So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system." "Another reason why we chose Oracle's automation tool is because it comes very tightly integrated with future versions of Siebel. It allows us then to basically take the results of the Policy Automation survey and populate our client management system database with that information," said Holling. According to Surend Dayal, North America VP, Oracle's Policy Automation has applications across a wide range of industries, including the public sector, especially health and human services, as well as financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved. Click here to read more about Oracle and Haley.

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works, how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it looks like it only provides two user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
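    One common alternative to a catch-all 'router' handler is a small base class that every restricted handler inherits, so the check runs before any page logic. This is only a hedged sketch: user_from_session, the role names, and the handler below are hypothetical stand-ins for whatever auth/ACL lookup the app actually uses:

        import webapp2

        def user_from_session(request):
            # hypothetical: look up the current user and roles from a cookie or session store
            return None

        class RestrictedHandler(webapp2.RequestHandler):
            required_role = None

            def dispatch(self):
                user = user_from_session(self.request)
                if user is None:
                    return self.redirect('/')            # not logged in: back to the login page
                if self.required_role and self.required_role not in user.get('roles', ()):
                    return self.abort(403)               # logged in but lacking the ACL privilege
                return super(RestrictedHandler, self).dispatch()

        class DashboardPage(RestrictedHandler):
            required_role = 'staff'

            def get(self):
                self.response.write('dashboard')

        app = webapp2.WSGIApplication([(r'/dashboard', DashboardPage)], debug=True)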

    Read the article

  • Get to No as fast as possible

    - by Tim Hibbard
    There is a sales technique where the strategy is to get the customer to say "No deal" as soon as possible. The idea is that the sooner you establish which terms your customer is not comfortable with, the sooner you can figure out what they will be willing to agree to. The same principle can be applied to code design. Instead of nested if…then statements, a code block should quickly eliminate the cases it is not equipped to handle and focus only on what it is meant to handle. This is code that will quickly become unmaintainable as requirements change:

        private void SaveClient(Client c)
        {
            if (c != null)
            {
                if (c.BirthDate != DateTime.MinValue)
                {
                    foreach (Sale s in c.Sales)
                    {
                        if (s.IsProcessed)
                        {
                            SaveSaleToDatabase(s);
                        }
                    }
                    SaveClientToDatabase(c);
                }
            }
        }

    If an additional requirement comes along that requires the Client to have Manager approval, or a Sale to be under $20K, this code will get messy and unreadable. A better way to meet the same requirements would be:

        private void SaveClient(Client c)
        {
            if (c == null) { return; }
            if (c.BirthDate == DateTime.MinValue) { return; }

            foreach (Sale s in c.Sales)
            {
                if (!s.IsProcessed) { continue; }
                SaveSaleToDatabase(s);
            }
            SaveClientToDatabase(c);
        }

    This version moves on quickly when it finds something it doesn't like, which makes it much easier to add a Manager approval constraint: we would just insert the new requirement check before the action takes place.

    Read the article

  • Do objects maintain identity under all non-cloning conditions in PHP?

    - by Buttle Butkus
    PHP 5.5. I'm doing a bunch of passing around of objects with the assumption that they will all maintain their identities: that any changes made to their state from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? I will give my basic structure here.

        class builder {
            protected $foo_ids = array(); // set in construct
            protected $foo_collection;
            protected $bar_ids = array(); // set in construct
            protected $bar_collection;

            protected function initFoos() {
                $this->foo_collection = new FooCollection();
                foreach ($this->foo_ids as $id) {
                    $this->foo_collection->addFoo(new foo($id));
                }
            }

            protected function initBars() {
                // same idea as initFoos
            }

            protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) {
                // arguments are passed in using $this->foo_collection and $this->bar_collection
                foreach ($foos as $foo_obj) { // (foo_collection implements IteratorAggregate)
                    $bar_ids = $foo_obj->getAssociatedBarIds();
                    if (!empty($bar_ids)) {
                        $bar_collection = new barCollection(); // sub-collection to be a component of each foo
                        foreach ($bar_ids as $bar_id) {
                            $bar_collection->addBar(new bar($bar_id));
                        }
                        $foo_obj->addBarCollection($bar_collection);
                        // now each foo_obj has a collection of bar objects, each of which is also in the
                        // main collection. Are they the same objects?
                    }
                }
            }
        }

    What has me worried is that foreach supposedly works on a copy of its array. I want all the $foo and $bar objects to maintain their identities no matter which collection object they become part of. Does that make sense?

    Read the article

  • Getting started on Large Projects

    - by Mercfh
    So I just graduated from my college with a B.S. in Comp. Science (although it was a good school, we're the only accredited CS department in our state... for whatever that means, lol). I feel like I'm a decent programmer, not amazing... but not terrible. Anyways, I got my first job about 2 weeks ago. It's a pretty entry-level job: firmware development/tester (I know, I know, people look down on testers... but I gotta start somewhere). Anyways, there isn't a whole lot of coding to be had right now (mostly simple stuff), but soon I'll have the option of helping out with development (which is what I want to do). Thing is... I have NEVER worked on a huge project. I mean, in school sure we had "group" projects, but nothing really big. So I'm not super familiar with HUGE classes and such (our main language was C++)... Is this something I'll just get used to with time? Some fellow students were used to that from internships and such... but I never got that chance. My job was mostly a "one man job" kind of thing. Mostly little things. Plus, in class we never did huge projects anyway. So how do you guys, I guess, "plan" out these things? Do you use a whiteboard and plan out classes and such... or what? Also... another worry of mine is that I have to use Google... a LOT... for examples of code, because sometimes I just don't get how something works. Is this normal? It makes me feel sort of... stupid, I guess. I mean, "technically" I've had 4-5 years of coding experience... but it really only feels like 2 years of REAL experience. If that makes any sense? Thanks

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). A pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended at having to perform repetitive, mundane tasks in this day and age when we have the technology to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing the env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge, or revolutionary; I am pretty sure that 20 years ago efficient shops did their CM like that. However, it requires that changes be made at the template level and then distributed across the different environments using the script, not made directly in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
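    The poster's generator is a Perl script, but the template-plus-environment-matrix idea is small enough to sketch. Here is a hedged Python version with invented settings and hostnames, just to show the shape of "one skeleton, one table of values per environment":

        from string import Template

        # hypothetical skeleton shared by every environment
        SKELETON = "server = $server\nuser = $user\nlog_level = $log_level\n"

        # hypothetical environment matrix: one row of values per environment
        ENVIRONMENTS = {
            "dev":  {"server": "dev01.example.com",  "user": "app_dev",  "log_level": "DEBUG"},
            "prod": {"server": "prod01.example.com", "user": "app_prod", "log_level": "WARN"},
        }

        def render(env):
            # substitute() raises KeyError if the matrix is missing a value the template needs
            return Template(SKELETON).substitute(ENVIRONMENTS[env])

        if __name__ == "__main__":
            for env in ENVIRONMENTS:
                print("# ---", env, "---")
                print(render(env))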

    Read the article

  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over some data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), then calls the interpreter; it returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The script should have access to some built-in functions I add to the scripting language myself, but nothing more. So I've stumbled upon Node.js as a JavaScript interpreter, and an hour or so ago upon Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to mind, since it looks nice and it's still JavaScript. I think JavaScript is widespread enough and more "sandboxable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl and PHP for the front end. To improve the question: I'm choosing JavaScript because I think it is secure and simple enough to implement with Node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.
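    The question is about choosing an interpreter rather than about code, but the cycle it describes (hand the script a JSON document, run it in a separate interpreter process, collect stdout) can be sketched in a few lines. This is a hedged Python illustration; the node binary, script path, and payload are assumptions taken from the question, and a subprocess with a timeout is only a first isolation layer, not a real sandbox: a real deployment would also drop privileges, limit memory, and ideally containerize the worker:

        import json
        import subprocess

        def run_user_script(script_path, payload, timeout_s=5):
            # feed the user script a JSON document on stdin, capture whatever it prints
            proc = subprocess.run(
                ["node", script_path],            # or any other interpreter binary
                input=json.dumps(payload).encode(),
                capture_output=True,
                timeout=timeout_s,                # raises TimeoutExpired on runaway scripts
            )
            return proc.returncode, proc.stdout.decode(errors="replace")

        # usage sketch (path and data invented):
        # rc, out = run_user_script("scripts/user_42.js", {"rows": [1, 2, 3]})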

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB), for example: A = B*C*D.Inverse() + E.Transpose(); My problem is how to go about dealing with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:

    1. As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types from the same getter function.

    2. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).

    3. Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j) and the operator functions as Matrix operator*(Matrix RHS), then specialize Matrix<double> and Matrix<complex> and overload the functions. But then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other and overload functions as necessary?

    Although this last option makes sense to me, there's an annoying voice inside my head saying it is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different: it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor, and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this or other terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." on the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." in the Haskell wikibook (where it's probably meant in a slightly more general sense), or "newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor. ..." in Real World Haskell.

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from a simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of an abstract Buff base class, where only the relevant events are overloaded, and each character has a vector of the buffs currently applied to it. Does this solution make sense? I can certainly see dozens of event types being necessary, it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, suppose I wanted to implement a cap on damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you would only do 100% extra instead of 250% extra. There are also more complicated situations that ideally I could control, and I'm sure everyone can come up with examples of how more sophisticated buffs could interact with each other in ways that, as a game developer, I may not want. As a relatively inexperienced C++ programmer (I have generally used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
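    The stacking-cap worry is easier to see in a sketch. The question is about C++, but the shape is language-agnostic, so here is a hedged Python illustration (the field names, the 100% cap, and the hook signature are all invented) in which buffs are plain data plus optional event hooks, and the interaction policy lives in the character rather than in each buff:

        from dataclasses import dataclass, field

        @dataclass
        class Buff:
            damage_bonus: float = 0.0           # +0.25 means "deal 25% extra damage"
            on_receive_damage: object = None    # optional hook: fn(owner, attacker, amount)

        @dataclass
        class Character:
            name: str
            buffs: list = field(default_factory=list)

            def outgoing_damage(self, base):
                # aggregation policy in one place: sum the bonuses, then cap at +100%
                bonus = min(sum(b.damage_bonus for b in self.buffs), 1.0)
                return base * (1.0 + bonus)

            def receive_damage(self, attacker, amount):
                for b in self.buffs:
                    if b.on_receive_damage:
                        b.on_receive_damage(self, attacker, amount)

        # usage sketch: five +25% buffs still cap out at double damage
        hero = Character("hero", buffs=[Buff(damage_bonus=0.25) for _ in range(5)])
        print(hero.outgoing_damage(40))   # 80.0, not 90.0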

    Read the article

  • setting up freedns with an existing domain

    - by romeovs
    I've been running a web server off of a PC at a static IP successfully for the past 5 months. Recently, however, I've moved into another apartment and my ISP only provides a dynamic IP (my IP changes from time to time). I'm not an internet genius, but I was thinking of fixing this by using a dynamic DNS provider. So I got on the web and found freedns. I'm a bit confused about how to set everything up, though. I've managed to successfully install the IP updater daemon on my web server. Then, in my registrar's control panel, I set the NS records to point at ns1 through ns4.afraid.org (removing the old NS records). I'm not certain what I should do with the A records, though (for now they still point to the old static IP address). I have A records for www, blog, irc, etc., but I cannot point them at my new IP address, because it isn't static. Could someone explain this in the clearest possible way (perhaps elaborating on what happens at each step of the DNS process)? I never really knew what the A records are for anyway. (Note that I haven't really found any documentation on the freedns website, or on Google.)

    Read the article

  • I can't program because the code I am using uses old coding styles. Is this normal for programmers? [closed]

    - by Renato Dinhani Conceição
    I'm in my first real job as a programmer, but I can't solve any problems because of the coding style used. The code here:
    - does not have comments
    - does not have functions (50, 100, 200, 300 or more lines executed in sequence)
    - uses a lot of if statements with a lot of paths
    - has variables that make no sense (e.g. cf_cfop, CF_Natop, lnom, r_procod)
    - uses an old language (Visual FoxPro 8, from 2002), even though there are newer releases from 2007
    I feel like I have gone back to 1970. Is it normal for a programmer familiar with OOP, clean code, design patterns, etc. to have trouble coding in this old-fashioned way? EDIT: All the answers are very good. Unhappily for me, it appears that there are a lot of code bases like this around the world. A point mentioned in all the answers is to refactor the code. Yeah, I'd really like to do that; in my personal projects I always do. But... I can't refactor the code. Programmers are only allowed to change the files involved in the task they are assigned to. Every change to old code must be kept commented in the code (even with Subversion as version control), plus meta information (date, programmer, task) related to that change (this has become a mess; there is code with 3 live lines and 50 old commented-out lines). I'm thinking this is not only a code problem, but a software development management problem.

    Read the article

  • How to approach scrum task burndown when tasks involve multiple people?

    - by AgileMan
    In my company, a single task can never be completed by one individual. There will be a separate person to QA and code review each task. What this means is that each individual gives their estimate, per task, of how much time it will take to complete. The problem is, how should I approach burndown? If I aggregate the hours together, assume the following estimate: 10 hrs dev time, 4 hrs QA, 4 hrs code review, for a task estimate of 18 hrs. At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining and then ADD the other effort estimates to that? How are you guys doing this? UPDATE: To help clarify a few things: at my organization, each task within a story requires three people.
    - Someone to develop the task (and write unit tests, etc.)
    - A QA specialist to review the task (they primarily do integration and regression tests)
    - A tech lead to do code review
    I don't think there is a wrong way or a right way, but this is our way... and that won't be changing. We work as a team to complete even the smallest part of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code before that either... so the best you can do is split things up into small logical slices so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" when they are set up like this. Unless a task has its own sub-tasks (which JIRA doesn't allow)... I'm not sure of the best way to track "what's left" on a daily basis.

    Read the article

  • How can I stop a process from moving to the background?

    - by Alex
    I have a machine running Ubuntu Server 12.04.3 LTS. On it, I'm attempting to run a node.js server that needs to stay up and running at all times. I'm running into an issue, however, where periodically I see this happen:

        [1]+ Stopped  sudo node server.js

    When this happens, I have to manually bring it back with fg, which works fine, at least until it stops again. As far as I can tell, it isn't functioning properly while stopped, since I get no log output in those windows of time. So my question is this: is there a way to prevent it from being stopped like that? I'm running it in a tmux window, if that changes anything. Also, to address the question before it gets asked: I'm running it as sudo due to some ecryptfs issues I've been having. I was originally running it in my home directory, but when it was left alive for too long things would get out of sync and the file writes it has to do would just stop working. To mitigate that, I moved it out of my home directory, but its new location requires me to use sudo permissions for everything to work correctly. Hopefully that isn't related to the whole background task thing. (sudo and tmux tags included in case one or both turn out to actually be relevant to the solution.)

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:
    - use a stream object to put the text file in memory as a string;
    - split the string into an array on spaces while ignoring punctuation;
    - use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.
    I got this answer wrong for two reasons:
    1. Streaming an entire text file into memory could be disastrous. What if it were an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't already exist, and otherwise incremented its count.
    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables, but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call the abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
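    The interviewer's preferred approach (stream a block at a time and build a hash table) is worth seeing next to the LINQ one-liner. The question is about .NET, but here is a hedged Python sketch of that lower-level idea, with the file name and tokenizer invented for the example:

        from collections import defaultdict
        import re

        def word_frequencies(path):
            counts = defaultdict(int)
            with open(path, encoding="utf-8") as f:
                for line in f:                               # stream the file, never hold it all in memory
                    for word in re.findall(r"[a-z']+", line.lower()):
                        counts[word] += 1                    # hash-table insert-or-increment
            return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

        # usage sketch: word_frequencies("encyclopedia.txt")[:10]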

    Read the article

  • How to refactor a myriad of similar classes

    - by TobiMcNamobi
    I'm faced with similar classes A1, A2, ..., A100. Believe it or not, there really are roughly a hundred classes that look almost the same. None of these classes are unit tested (of course ;-) ). Each of these classes is about 50 lines of code, which is not too much by itself, but it is still way too much duplicated code. I am considering the following options:
    1. Write tests for A1, ..., A100, then refactor by creating an abstract base class AA. Pro: I'm (nearly) totally safe, because the tests ensure nothing goes wrong. Con: a lot of effort, and duplication of test code.
    2. Write tests for A1 and A2, abstract the duplicated test code, and use the abstraction to create the rest of the tests; then create AA as in 1. Pro: less effort than 1 while maintaining a similar degree of safety. Con: I find generalized test code weird; it often seems... incoherent (is that the right word?). Normally I prefer specialized test code for specialized classes, but that requires a good design, which is the goal of this whole refactoring.
    3. Write AA first, testing it with mock classes, then inherit A1, ..., A100 from it successively. Pro: the fastest way to eliminate the duplication. Con: most Ax classes look very much the same, but where they don't, there is the danger of changing behaviour by inheriting from AA.
    4. Other options...
    At first I went for 3, because the Ax classes really are very similar to each other. But now I'm a bit unsure whether this is the right way (from a unit testing enthusiast's perspective).
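    Option 2's "abstract the duplicated test code" step is the least familiar of the three, so here is a minimal, hedged sketch of what it can look like with pytest; the Ax classes and the frobnicate method are invented stand-ins, since the question doesn't show the real ones:

        import pytest

        # stand-ins for the near-identical production classes A1..A100
        class A1:
            def frobnicate(self, x):
                return x + 1

        class A2:
            def frobnicate(self, x):
                return x + 1

        # one generalized test, parametrized over every class it should hold for;
        # new Ax classes get covered by adding them to the list
        @pytest.mark.parametrize("cls", [A1, A2])
        def test_frobnicate_adds_one(cls):
            assert cls().frobnicate(41) == 42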

    Read the article

  • Problem with missing JSON functions on PHP 5.2.6 / Plesk 8.4

    - by Drachenviech
    I have a vserver running openSUSE 10.3, Apache 2, and Plesk 8.4. I can update/upgrade neither: it is apparently not recommended to upgrade openSUSE 10.3 (and an update to the EOL 10.4 does not seem to make much sense), and Plesk fails to update no matter what version I try (it even fails to upgrade to 8.4.1). Still, I can live with that somehow, primarily because I don't have the time to do a fresh remote install on the vserver. What really is a problem is that, though the installed PHP is 5.2.6, it has no zip library and no JSON functions. The first is probably because PHP was not compiled with --enable-zip. The second is a big mystery, though. As I understand it, JSON support always comes with PHP unless it is compiled with the --disable-json configure option. That is, however, not the case here, and the json extension module is just not there. I even tried to enable it with extension=json.so, with no luck either. The configure options of my PHP (as shipped with Plesk 8.4) are:

        '../configure' '--prefix=/usr' '--datadir=/usr/share/php5' '--mandir=/usr/share/man' '--bindir=/usr/bin' '--with-libdir=lib' '--includedir=/usr/include' '--sysconfdir=/etc/php5/apache2' '--with-config-file-path=/etc/php5/apache2' '--with-config-file-scan-dir=/etc/php5/conf.d' '--enable-libxml' '--enable-session' '--with-mm' '--with-pcre-regex=/usr' '--enable-xml' '--enable-simplexml' '--enable-spl' '--enable-filter' '--disable-debug' '--enable-inline-optimization' '--disable-rpath' '--disable-static' '--enable-shared' '--program-suffix=5' '--with-pic' '--with-gnu-ld' '--with-system-tzdata=/usr/share/zoneinfo' '--with-apxs2=/usr/sbin/apxs2' '--disable-all' '--disable-cli'

    As I understand it, PECL is not an option with 5.2.6. Or am I mistaken? Even if I weren't, the openSUSE repository only goes as far as PHP 5.2.4. The openSUSE install even came without zypper, which I had to install manually. So is there a way to get zip and JSON support running in PHP 5.2.6 without having to recompile the binary?

    Read the article

  • How to deal with the need to know multiple programming languages? When to stop learning new languages?

    - by Raphael
    I am a relatively young programmer. I am 23 and I have been programming professionally for about 5 years. Like most programmers I started with C, learned some x86 assembly for fun, and then found C++, which turned out to be my greatest passion in the programming world. Programming with C and C++ forces you to learn platform-specific APIs, libraries, and frameworks, each of which requires constant study and experimentation. After some time I had to move on to Java and C#, as the demand in my region is basically for these languages. With these languages I entered the world of web development, and then I had to learn JavaScript. Developing for the .NET Framework was exciting at first, but I constantly felt as if I was getting tied to Microsoft (and of course the .NET Framework was driving me away from Linux). For desktop development I could do pretty much everything I did with .NET using C++ with Qt, but for web development I had to look for an alternative. I quickly found Django, and then proceeded to learn Python so I could use Django. Nowadays I am learning iOS development with Objective-C. So far it has been fairly easy to learn all these languages (C++ trained me well), but I am worried that someday I won't be able to keep track of them all. Just to clarify: the only languages I learned because I had to were C# and Java. All of the others I learned for fun, because I love programming and learning new things. Also, I like to keep my skills sharp across desktop, web, and mobile development. My questions are: how do you keep track of multiple programming languages (I mean, keep track of changes to these languages and keep your skills sharp), and is there such a thing as enough programming languages?

    Read the article

  • 2D Image Creator for a video game

    - by user1276078
    I need to make a few images for an arcade video game I'm making in Java. As of right now, I have drawings that animate, but there are two problems: the drawings are horrible, and as a result the game won't get enough attention; and it's a pain to have to change each coordinate for a drawing, as the drawings are fairly complex. I'd like to use images instead. I feel they could solve my problem: they would look better than the drawings, and each would only need an x and a y coordinate, rather than the many coordinates I need for each drawing. So, in a sense, I have two questions. First, would images actually help? Would they solve my two problems? Second, how would I make these images? I don't think I can copy them off the internet, because I plan on publishing this game. So, is there any software for making your own images? (It has to produce an image type that Java can support; I'm working with Java.) Also, as stated in the title, it needs to be a 2D image, not 3D.

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port). It's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should be under 3 minutes). I have two identical SanDisk Cruzer 8GB sticks, and I have the same problem with both. I have a Super Talent 32GB USB SSD in the neighboring port and it works at the expected speeds. The problem I see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little more slowly, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290] sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk

    Read the article

  • Being stupid to get better productivity?

    - by loki2302
    I've spent a lot of time reading different books about "good design", "design patterns", etc. I'm a big fan of the SOLID approach, and every time I need to write a simple piece of code, I think about the future. So, if implementing a new feature or a bug fix requires just adding three lines of code like this: if (xxx) { doSomething(); } it doesn't mean I'll do it that way. If I feel this piece of code is likely to become larger in the near future, I'll think about adding abstractions, moving the functionality somewhere else, and so on. The goal I'm pursuing is to keep the average complexity the same as it was before my changes. I believe that, from the code standpoint, it's quite a good idea: my code never gets too long, and it's quite easy to understand the meaning of the different entities, like classes, methods, and the relations between classes and objects. The problem is that it takes too much time, and I often feel it would be better if I just implemented the feature "as is". It's just "three lines of code" versus "a new interface plus two classes to implement that interface". From a product standpoint (when we're talking about the result), the things I do are quite senseless. I know that if we're going to work on the next version, having good code is really great. But on the other hand, the time you've spent making your code "good" could have been spent implementing a couple of useful features. I often feel very unsatisfied with my results: good code that can only do A is worse than bad code that can do A, B, C, and D. Are there any books, articles, blogs, or ideas of your own that might help me develop this "being stupid" approach?

    Read the article

  • Efficient path-finding in free space

    - by DeadMG
    I've got a game situated in space, and I'd like to issue movement orders, which requires pathfinding. Now, it's my understanding that A* and the like mostly apply to graphs of nodes, not to empty space that has no pathfinding nodes. I have some obstacles, which are currently expressed as fixed AABBs; that is, there is no unbounded "terrain" obstacle. In addition, I expect most obstacles to be reasonably approximable as cubes or spheres. So I've been thinking of applying a much simpler pathfinding algorithm: simply cast a ray from the current position to the target position, and then get a list of obstacles using spatial partitioning relatively quickly. What I'm not so sure about is how to determine the part where the ordered unit manoeuvres around the obstacles. What I've been thinking so far is that I will simply use potential fields; that is, all units will feel a strong repulsive force away from each other and a moderate force towards the desired point. This also has the advantage that, to issue group orders, I can simply apply a moderate attractive force towards another entity. But this obviously won't achieve the optimal solution. Will potential fields achieve a reasonable approximation given my parameters, or do I need another solution?
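    A potential-field step of the kind described is only a few lines. This is a hedged 2D Python sketch (the game is presumably 3D, and the constants, falloff, and obstacle model here are invented), just to show the attract-plus-repel shape of each movement tick:

        import math

        def seek_with_avoidance(pos, goal, obstacles, attract=1.0, repel=50.0):
            # pos and goal are (x, y); obstacles is a list of (x, y, radius) spheres
            fx = attract * (goal[0] - pos[0])
            fy = attract * (goal[1] - pos[1])
            for ox, oy, r in obstacles:
                dx, dy = pos[0] - ox, pos[1] - oy
                d = math.hypot(dx, dy)
                if d < 1e-6:
                    continue
                # repulsion falls off with distance and pushes directly away from the obstacle
                push = repel / max(d - r, 0.1) ** 2
                fx += push * dx / d
                fy += push * dy / d
            norm = math.hypot(fx, fy) or 1.0
            return fx / norm, fy / norm          # unit direction to move this tick

        # usage sketch:
        # step = seek_with_avoidance((0, 0), (10, 0), [(5, 0.5, 1.0)])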

    Read the article

  • Why don't languages use explicit fall-through on switch statements?

    - by zzzzBov
    I was reading "Why do we have to use break in switch?", and it led me to wonder why implicit fall-through is allowed in some languages (such as PHP and JavaScript), while there is no support (AFAIK) for explicit fall-through. It's not as if a new keyword would need to be created, as continue would be perfectly appropriate, and it would resolve any ambiguity about whether the author meant for a case to fall through. The currently supported form is:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...        // ambiguous: was break forgotten?
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    Whereas it would make sense for it to be written as:

        switch (s) {
            case 1:
                ...
                break;
            case 2:
                ...
                continue;  // unambiguous: the author was explicit
            case 3:
                ...
                break;
            default:
                ...
                break;
        }

    For the purposes of this question, let's ignore the issue of whether or not fall-throughs are a good coding style. Are there any languages that allow fall-through and have made it explicit? Are there any historical reasons that switch allows implicit fall-through instead of explicit?

    Read the article

  • Error loading libGL.so.1

    - by jdp407
    When attempting to run various pieces of software (notably Steam and Yenka), I have come across an error similar to this:

        error while loading shared libraries: libGL.so.1: cannot open shared object file: No such file or directory

    I'm running a 64-bit system with an NVIDIA Optimus card (I dual boot for certain Windows-only software that requires a dedicated graphics card). I have Bumblebee installed, and I am using the nvidia-current driver rather than one downloaded from NVIDIA, as recommended. The library (libGL.so.1) is not present in the top directory of /usr/lib; however, it is present in /usr/lib32/nvidia-current as a symlink to /usr/lib32/nvidia-current/libGL.so.304.64. A section of the output from ldconfig -p:

        libGL.so.1 (libc6,x86-64, OS ABI: Linux 2.4.20) => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64, OS ABI: Linux 2.4.20) => /usr/lib/x86_64-linux-gnu/libGL.so
        libGL.so (libc6,x86-64, OS ABI: Linux 2.4.20) => /usr/lib/x86_64-linux-gnu/mesa/libGL.so

    Obviously a library with that name is being loaded from /usr/lib/x86_64-linux-gnu, but installed software doesn't seem able to 'see' it. For Steam, running it with optirun makes it work, but this is not the case for Yenka. I assume that optirun causes the library stored in /usr/lib32/nvidia-current to be used, which allows Steam to run, so I can't understand why Yenka won't run. Can anyone explain why software can't see the normal Mesa library, and why Yenka refuses to run with the nvidia-current library?

    Read the article
