Search Results

Search found 10131 results on 406 pages for 'natural sort'.

Page 99/406 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Why do memory-managed languages retain the `new` keyword?

    - by Channel72
    The new keyword in languages like Java, Javascript, and C# creates a new instance of a class. This syntax seems to have been inherited from C++, where new is used specifically to allocate a new instance of a class on the heap, and return a pointer to the new instance. In C++, this is not the only way to construct an object. You can also construct an object on the stack, without using new - and in fact, this way of constructing objects is much more common in C++. So, coming from a C++ background, the new keyword in languages like Java, Javascript, and C# seemed natural and obvious to me. Then I started to learn Python, which doesn't have the new keyword. In Python, an instance is constructed simply by calling the constructor, like: f = Foo() At first, this seemed a bit off to me, until it occurred to me that there's no reason for Python to have new, because everything is an object so there's no need to disambiguate between various constructor syntaxes. But then I thought - what's really the point of new in Java? Why should we say Object o = new Object();? Why not just Object o = Object();? In C++ there's definitely a need for new, since we need to distinguish between allocating on the heap and allocating on the stack, but in Java all objects are constructed on the heap, so why even have the new keyword? The same question could be asked for Javascript. In C#, which I'm much less familiar with, I think new may have some purpose in terms of distinguishing between object types and value types, but I'm not sure. Regardless, it seems to me that many languages which came after C++ simply "inherited" the new keyword - without really needing it. It's almost like a vestigial keyword. We don't seem to need it for any reason, and yet it's there. Question: Am I correct about this? Or is there some compelling reason that new needs to be in C++-inspired memory-managed languages like Java, Javascript and C#?
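
    To illustrate the Python side of the argument above: because there is no new, a class is just a callable object, so construction looks like any other call and a class can be passed around like a factory. A minimal sketch; Foo and build are hypothetical names used only for illustration.

        class Foo:
            def __init__(self, value):
                self.value = value

        def build(factory, value):
            # 'factory' can be a class or a plain function; the call syntax is identical,
            # which is exactly why Python never needed a separate 'new' keyword.
            return factory(value)

        f = Foo(42)          # direct construction, no 'new'
        g = build(Foo, 99)   # the class itself passed around as an ordinary callable
        print(f.value, g.value)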

    Read the article

  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help in writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not be there (yet). What I'm thinking of would let me write a piece of the script's text, define a simplified workflow from that step, and then define the text of the next steps, and so on. Searching online, I found Inform http://inform7.com/ ("A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for, but when I tried to use it, it has this model of a space (a dungeon, a library) where you are picking up objects and exploring them. In my case I am designing more of a Sims-like game, so the flow is entirely different. As for non-specific software, mind-mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a work-flow, but the way I want to design it is more text-based than work-flow-based. So what I'm doing now is using a text editor, and I'll transform the text directly into code. Any suggestions?

    Read the article

  • Is there an open source clone of a game in the Total War Series?

    - by sinekonata
    I loved Shogun: Total War's gameplay and then later on spent weeks re-enacting historical wars and battles with Europa Barbarorum. It's a mod for Rome: TW that focuses on historical accuracy in the peoples, units, sounds, visuals, everything from macro mechanics to actual battles (e.g. a lot more missiles). Since that time I kind of turned my back on Windows cause it sucks and use Linux cause Mac sucks even worse. So as I miss that game (Eur. Barb.) and consider it the most realistic RTS to date, I'd like to know if there are any free and open source alternatives to it, because ever since I moved to Linux I've become addicted to FOSS, so I also turned my back on paying (even kickstarters) for closed source, pay-to-play games. I have found a clone/alternative for every one of the best games like Minecraft, CSS, Natural Selection, TA/SupCom etc... It's kind of the last one I need. The Spring engine is amazing, for example; is there another open source project of that sort in current development? Or would Spring itself be enough (it certainly looks capable) to make it? Thanks in advance guys...

    Read the article

  • Algorithm to measure how "diffused" 5,000 pennies are in an economy?

    - by makerofthings7
    Please allow me to use this example/metaphor to describe an algorithm I need. Objects: There are 5 thousand pennies. There are 50 cups. There is a tracking history (a passport "stamp", etc.) associated with each penny as it moves between cups. Definition: I'll define a "highly diffused" penny as one that passes through many cups. A "poorly diffused" penny is one that just passes back and forth between 2 cups. Question: How can I objectively measure the diffusion of a penny in terms of: the number of moves the penny has gone through; the number of cups the penny has been in; a unit of time (day, week, month)? Why am I doing this? I want to detect if a cup is hoarding pennies. Resistance from bad actors: Since hoarding is bad, the "bad cup" may simply solicit a partner and just move pennies back and forth with it. This reduces the amount of time a coin isn't in transit, and would skew hoarding detection. A solution might be to detect if a cup (or set of cups) are common "partners" with each other, though I'm not sure how to think through this problem. Broad applicability: Any assistance would be helpful, since I would think this kind of algorithm is common to economics, the study of migration patterns (of animals, or of the citizens of a country), and other naturally occurring phenomena... and probably exists as a term or concept I'm unfamiliar with.
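
    A rough sketch of one possible metric, in Python, assuming each penny's history is a time-ordered list of (timestamp, cup_id) stamps; the function and field names are made up for illustration. It normalizes the count of distinct cups visited by the number of moves and by elapsed time, so a penny bouncing between the same two cups scores low no matter how often it moves, and a penny that sits still scores low as well.

        def diffusion_score(history, window_days=30.0):
            """history: time-ordered list of (timestamp_in_days, cup_id) stamps."""
            if len(history) < 2:
                return 0.0
            moves = len(history) - 1
            distinct_cups = len({cup for _, cup in history})
            elapsed = max(history[-1][0] - history[0][0], 1e-9)
            # Distinct cups per move penalizes two cups passing a penny back and forth;
            # scaling by moves per time window penalizes pennies that rarely move at all.
            breadth = distinct_cups / moves
            rate = moves / (elapsed / window_days)
            return breadth * rate

        # Example: a penny ping-ponging between cups A and B scores lower than one
        # that visits five different cups in the same number of moves.
        ping_pong = [(0, 'A'), (5, 'B'), (10, 'A'), (15, 'B'), (20, 'A')]
        wanderer  = [(0, 'A'), (5, 'B'), (10, 'C'), (15, 'D'), (20, 'E')]
        print(diffusion_score(ping_pong), diffusion_score(wanderer))

    Cups whose pennies consistently score low on such a metric are the hoarding (or partner-trading) candidates worth inspecting further.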

    Read the article

  • Happy Tau Day! (Or: How Some Mathematicians Think We Should Retire Pi) [Video]

    - by Jason Fitzpatrick
    When you were in school you learned all about Pi and its relationship to circles and turn-based geometry. Some mathematicians are rallying for a new lesson, one about Tau. Michael Hartl is a mathematician on a mission, a mission to get people away from using Pi and to start using Tau. His manifesto opens: Welcome to The Tau Manifesto. This manifesto is dedicated to one of the most important numbers in mathematics, perhaps the most important: the circle constant relating the circumference of a circle to its linear dimension. For millennia, the circle has been considered the most perfect of shapes, and the circle constant captures the geometry of the circle in a single number. Of course, the traditional choice of circle constant is π—but, as mathematician Bob Palais notes in his delightful article "π Is Wrong!", π is wrong. It's time to set things right. Why is Pi wrong? Among the arguments: Tau is the ratio of a circle's circumference to its radius, and defining circles by their radius is more natural; and where Pi carries a factor of 2 through many formulas, with Tau everything is based on a single unit–three quarters of a turn around a Tau-defined circle is simply three quarters of a Tau radian. Watch the video above to see the Tau sequence (which begins 6.2831853071…) turned into a musical composition. For more information about Tau hit up the link below to read the manifesto. The Tau Manifesto [TauDay]

    Read the article

  • Good literature for teaching object-oriented thinking in C [closed]

    - by Dipan Mehta
    Quite often C is the primary platform for development, and when things are large scale, I have seen partitioning of the system into different objects as quite a natural thing. Some or many of the object-oriented analysis and design principles are used here very well. This is not a debate question about whether or not C is a good candidate for object-oriented programming. This is also NOT a question about how to do OO in C. You can refer to this question and there are probably many such citations. As far as I am concerned, I have learned some of these things while working with many open source and commercial projects (libjpeg, ffmpeg, GStreamer which is based on GObject). I can point to a few references that explain some of these concepts, such as: 1. an Event Helix article, 2. a Linux Mag article, 3. one of my answers which links Schreiner's reference. Unfortunately, when we induct younger folks, it seems too hard to make them learn all of it the hard way. Usually, when we say it's C, the general reaction is to throw away all of the "object thinking". Looking for help extending the above references from those who have been in similar areas of work. Is there any good formal literature that explains how object thinking can be put to use while you are working in C? I have seen tons of books on general "object-oriented paradigms", but they all focus on more advanced languages, not on C. You see lots of C books, but most focus only on the syntax and the obfuscated corners of C and that's it. There is hardly ANY good reference, especially a book or any systematic (I mean formal) literature, on how to apply OO in C. This is very surprising given that so many large scale open source projects written in C truly use this very well; yet we hardly see any good formal literature on the subject.

    Read the article

  • Why are embedded device apps still written in C/C++? Why not the Java programming language?

    - by hinkmond
    At the recent Black Hat 2014 conference in Sin City, the Black Hatters were focusing on Embedded Devices and IoT. You know? Make your networked-toaster burn your bread 10,000 miles away, over the Web for grins and giggles. Well, apparently the Black Hatters say it can be done pretty easily these days, which is scary. See: Securing Embedded Devices & IoT Here's a quote: All these devices are still written in C and C++. The challenges associated with developing securely in these languages have been fought for nearly two decades. "You often hear people say, 'Well, why don't we just get rid of the C and C++ language if it's so problematic. Why don't we just write everything in C# or Java, or something that is a little safer to develop in?'," DeMott says. Gah! Why are all these IoT devices still using C/C++? Of course they should be using Java SE Embedded technology! It's a natural fit to use for better security on embedded devices. Or, I guess, developers really don't mind if their networked-toasters do char their breakfast. If it can be burned, it will be... That's what I say. Unless they use Java. Hinkmond

    Read the article

  • Missed OpenWorld? Fear not. Customer Service Presentations for you!

    - by Tuula Fai
    As a Customer Service professional, you know the most frightening thing is having mission-critical systems go down when you’re trying to support customers. So while others are munching on candy this Halloween, why not spend your time listening to these Oracle OpenWorld sessions?   Oracle Service Vision and Roadmap Oracle RightNow Cross-Channel Contact Center Oracle RightNow Web Customer Service Oracle RightNow Chat Cloud Service & Oracle RightNow Virtual Assistant Cloud Service Oracle RightNow Social Customer Service Oracle RightNow Cloud Service - Upgrades Oracle Service – EBS Field Service Oracle Service – Siebel Service Oracle Service – Siebel Field Service In these presentations, you will learn the latest capabilities available in Oracle’s Service solutions for delivering a great customer experience. Like the ability to … Serve Your Customers Anywhere to maintain one seamless dialogue Turn Your Contact Center into a Profit Center by giving personal offers Use Social to Get Ahead of Service Issues by capturing and responding to posts Offer Help a Click Away on your support site at the point of need Humanize Web Self-Service with a Virtual Assistant that uses natural conversation As journalist Robert Liparulo said, “Knowledge was like candy: you never turned it down, especially if you didn't have to work too hard to get it.” It’s right here. Listen, Learn and Lead.

    Read the article

  • Why doesn't there seem to be any development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that way back around 1995, there was this big craze with VR in the media; a whole bunch of (mostly mediocre) games labeled as "virtual-reality-interactive-movie (...)" were published. If I recall correctly, the first 3D VR helmet was called VFX-1 and was sold bundled with Descent and some dedicated joystick. I never owned one, and I read just one review which was mostly enthusiastic, but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reasons it wasn't successful were that the hardware of the day was not powerful enough, the VR gear's design wasn't perfected enough to make it comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware mostly targeted at the consoles (Kinect, Wii Remote, TrackIR), but all projects aiming to create a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard of again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice but the movement scale translation is awkward) and other games with an FPP POV. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares / everyone is scared of repeating the '90s failure?

    Read the article

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network:

    > on the database nodes the RAC nodes communicate over the cluster_interconnect. This is the
    > 192.168.10.0 network on bondib0. (according to ./crs/install/crsconfig_params NETWORKS setting)
    > What is bondib1 used for? Is it an HA counterpart in case bondib0 dies?

    This is my response:

    Summary: bondib1 is currently only being used for outbound cluster interconnect traffic.

    Details: bondib0 is the cluster_interconnect

      $ oifcfg getif
      bondeth0  10.129.184.0  global  public
      bondib0   192.168.10.0  global  cluster_interconnect
      ipmpapp0  192.168.30.0  global  public

    bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively.

      # ipadm show-addr | grep bondi
      bondib0/v4static  static   ok           192.168.10.1/24
      bondib1/v4static  static   ok           192.168.10.2/24

    Hostnames tied to the IPs are node1-priv1 and node1-priv2

      # grep 192.168.10 /etc/hosts
      192.168.10.1    node1-priv1.us.oracle.com   node1-priv1
      192.168.10.2    node1-priv2.us.oracle.com   node1-priv2

    For the 4 node RAC interconnect: Each node has 2 private IP addresses on the 192.168.10.0 network. Each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4 node RAC interconnect is using a total of 8 IP addresses and 16 InfiniBand links.

    bondib1 isn't being used for the Virtual IP (VIP):

      $ srvctl config vip -n node1
      VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1
      VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1

    bondib1 is on bondib1_0 and fails over to bondib1_1:

      # ipmpstat -g
      GROUP       GROUPNAME   STATE     FDT       INTERFACES
      ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1)
      bondeth0    bondeth0    degraded  --        net2 [net5]
      bondib1     bondib1     ok        --        bondib1_0 (bondib1_1)
      bondib0     bondib0     ok        --        bondib0_0 (bondib0_1)

    bondib1_0 goes over net24

      # dladm show-link | grep bond
      LINK                CLASS     MTU    STATE    OVER
      bondib0_0           part      65520  up       net21
      bondib0_1           part      65520  up       net22
      bondib1_0           part      65520  up       net24
      bondib1_1           part      65520  up       net23

    net24 is IB Partition FFFF

      # dladm show-ib
      LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS
      net24        21280001A1868A  21280001A1868C  2    up     FFFF
      net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503
      net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503
      net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF

    On Express Module 9 port 2:

      # dladm show-phys -L
      LINK              DEVICE       LOC
      net21             ibp4         PCI-EM1/PORT1
      net22             ibp5         PCI-EM1/PORT2
      net23             ibp6         PCI-EM9/PORT1
      net24             ibp7         PCI-EM9/PORT2

    Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 & bondib1

      # netstat -rn
      Routing Table: IPv4
        Destination           Gateway           Flags  Ref     Use     Interface
      -------------------- -------------------- ----- ----- ---------- ---------
      192.168.10.0         192.168.10.2         U        16    6551834 bondib1
      192.168.10.0         192.168.10.1         U         9    5708924 bondib0

    There is a lot more traffic on bondib0 than bondib1

      # /bin/time snoop -I bondib0 -c 100 > /dev/null
      Using device ipnet/bondib0 (promiscuous mode)
      100 packets captured
      real        4.3
      user        0.0
      sys         0.0
      (100 packets in 4.3 seconds = 23.3 pkts/sec)

      # /bin/time snoop -I bondib1 -c 100 > /dev/null
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
      real       13.3
      user        0.0
      sys         0.0
      (100 packets in 13.3 seconds = 7.5 pkts/sec)

    Half of the packets on bondib0 are outbound (from self). The remaining packets are split evenly, from the other nodes in the cluster.

      # snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c
      Using device ipnet/bondib0 (promiscuous mode)
      100 packets captured
        49 node1-priv1.us.oracle.com
        24 node2-priv1.us.oracle.com
        14 node3-priv1.us.oracle.com
        13 node4-priv1.us.oracle.com

    100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0:

      # snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
       100 node1-priv1.us.oracle.com

    The destinations of the bondib1 outbound packets are split evenly between node3 and node4.

      # snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
        51 node3-priv1.us.oracle.com
        49 node4-priv1.us.oracle.com

    Conclusion: bondib1 is currently only being used for outbound cluster interconnect traffic.

    Read the article

  • Implementing `fling` logic without pan gesture recognizers

    - by KDiTraglia
    So I am trying to port over a simple game that I originally wrote for iPhone into cocos2d-x. I've hit a minor bump, however, in implementing the simple 'fling' logic I had in the iPhone version, which is difficult to port over to the C++. In iOS I could get the velocity of a pan gesture very easily: CGPoint velocity = [recognizer velocityInView:recognizer.view]; However now I basically only know where the touch began, where the touch ended, and all the touches that are logged in between. For now I logged all the points onto a stack, then pulled the last point and the 6th-to-last point (seemed to work the best), found the difference between those points, multiplied by a constant, and used that as the velocity. It works relatively well, but I'm wondering if anyone else has any better algorithms, when given a bunch of touch points, to figure out a new speed upon releasing an object that feels natural. (Note: speed in my game is just a constant x and y; there's no drag or spin or anything tricky like that.) Bonus points if anyone has figured out how to get pan gestures into the newest version (3.0 alpha) of cocos2d-x without losing the ability to build cross-platform.
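
    One common variation on the approach described above is to keep only a short ring of recent (x, y, t) samples and divide the displacement across that window by the elapsed time, which smooths jitter without the magic "6th-to-last point" constant. A hedged Python sketch of the idea (timestamps in seconds; the class and method names are illustrative, not cocos2d-x API):

        from collections import deque

        class FlingTracker:
            def __init__(self, window=6):
                # Keep only the most recent samples; older ones say little about the fling.
                self.samples = deque(maxlen=window)

            def add_touch(self, x, y, t):
                self.samples.append((x, y, t))

            def release_velocity(self):
                """Velocity in points per second at the moment the finger lifts."""
                if len(self.samples) < 2:
                    return (0.0, 0.0)
                x0, y0, t0 = self.samples[0]    # oldest sample still in the window
                x1, y1, t1 = self.samples[-1]   # newest sample (release point)
                dt = max(t1 - t0, 1e-6)         # guard against a zero time delta
                return ((x1 - x0) / dt, (y1 - y0) / dt)

        # Usage: feed every touch-move event in, then read the velocity on touch-end.
        tracker = FlingTracker()
        for i, (x, y) in enumerate([(0, 0), (4, 2), (10, 5), (18, 9), (30, 15)]):
            tracker.add_touch(x, y, i * 0.016)   # ~60 Hz touch events
        print(tracker.release_velocity())

    Dividing by real elapsed time (rather than multiplying a point difference by a tuned constant) also keeps the fling feeling the same when the touch event rate varies.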

    Read the article

  • Why is math taught "backwards"? [closed]

    - by Yorirou
    A friend of mine showed me a pretty practical Java example. It was a riddle. I got excited and quickly solved the problem. After it, he showed me the mathematical explanation of my solution (he proved why it is good), and it was completely clear to me. This seems like a natural approach to me: solve problems, then generalize. This is very familiar to me; I do it all the time when I am programming: I write a function. When I have to write a similar function, I generalize the problem, grab the generic parts, refactor them into a function, and solve the original problems as specializations of the general function. At the university (or at least where I study), things work backwards. The professors show just the highest possible level of the solutions ("cryptic" mathematical formulas). My problem is that this is too abstract for me. There is no connection to my previous knowledge (== reality in my sense), so even if I can understand it, I can't really learn it properly. Others learn these formulas word-by-word and get good grades, since they can write exactly the same thing on the test, but this is not an option for me. I am a curious person, I can learn interesting things, but I can't learn just text. My brain is for storing thoughts, not strings. There are proofs for the theories, but they are also really hard to understand because of this, and in most cases they are omitted. What is the reason for this? I don't understand why it is a good idea to show the really high level of abstraction and then leave the practical connections (or some important ideas / practical motivations) out.

    Read the article

  • Oracle is Child's Play… in NSW

    - by divya.malik
    A few weeks ago, my colleague Michael Seback posted a blog entry on Oracle’s acquisition of Haley.  We recently read  an interesting report from Down Under, and here was our press release on the  implementation of Oracle’s Policy automation software in New South Wales, which I thought I would share. We always love hearing about our software “at work”, and especially in the Public Sector- social services area, where it makes a big difference to people’s lives. Here were some of the reasons, why NSW chose Oracle software: “One of the things Oracle’s Policy Automation system is good at is allowing you take decision  trees and rules that are obviously written in English and code them up using very much a natural language approach,” said Holling (CIO for Human Services). “So it was quite a short process to translate the final set of rules that were written on paper into business rules that were actually embedded in the system.” “Another reason why we chose Oracle’s Automation tool is because with future versions of Siebel it comes very tightly integrated with that. It allows us to then to basically take the results of the Policy Automation survey and actually populate our client management system database with that information,” said Holling. As per Surend Dayal, North America VP, Oracle’s Policy automation has applications across a wide range of industries, including public sector—especially health and human services—also financial services, insurance, and even airline rewards programs. In other words, any business process that requires consistent, accurate decision-making where complex legislation and/or internal policies are involved. Click here to read more about Oracle and Haley.

    Read the article

  • Is ZeroMQ a good choice to make a Python app and a C# managed assembly work together?

    - by Alex Bausk
    I have a task that involves talking to a .NET-based API (namely AutoCAD) to retrieve data, send commands, and react to events. I want to separate the API operations and the proper program logic (largely already implemented in Python) by using natural tools for both: a C# DLL for the former and a Python app for the latter. To connect these two pieces, I began exchanging JSON in ZeroMQ messages. I'm at early development stages but having recently discovered that ZeroMQ does not guarantee message delivery/order, I have reservations about whether this is a feasible way to go. Right now my app is a very basic REQ/REP pair and I plan to handle reacting to events and executing different commands by adding some sort of 'recipient-function' field to my message format. The reason that I want to use ZMQ is that I might be able to scale the software into a larger, multi-user, distributed solution sometime. I am a lay programmer so I would ask for your advice about this architecture. Should I just go ahead with it and plan to deal with message reliability/ordering when problems appear? Should I consider developing some kind of a REST wrapper around ZMQ?
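
    For reference, the REQ/REP pair described above is only a few lines with pyzmq; the "recipient-function" field is the dispatch key the question proposes, and the handler names here are placeholders. This is a sketch, not a treatment of reliability: a REQ/REP pair that loses a message simply blocks, which is exactly the reservation raised above. The .NET DLL would pair it with a REQ socket through a ZeroMQ binding (NetMQ or clrzmq, for example) and send the same JSON shape.

        import zmq

        context = zmq.Context()

        # Python side: reply socket that dispatches on a 'recipient-function' field.
        rep = context.socket(zmq.REP)
        rep.bind("tcp://127.0.0.1:5555")

        handlers = {
            "echo": lambda args: args,          # placeholder handlers for illustration
            "sum":  lambda args: sum(args),
        }

        while True:
            msg = rep.recv_json()               # e.g. {"recipient-function": "sum", "args": [1, 2, 3]}
            func = handlers.get(msg.get("recipient-function"))
            if func is None:
                rep.send_json({"error": "unknown function"})
            else:
                rep.send_json({"result": func(msg.get("args", []))})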

    Read the article

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations. Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects. At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer). At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
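
    The "map of maps" grouping described above is language-agnostic; here is an illustrative Python toy (not Coherence API) showing how a partial hash collapses many tiny logical entries onto far fewer physical entries, so the per-entry overhead is paid per bucket rather than per key. In Coherence itself the lookup inside a bucket would typically be done with an entry processor, as noted above, so the whole bucket never travels to the client.

        BUCKETS = 1024

        def bucket_key(logical_key):
            # Partial hash: many logical keys collapse onto one coarse-grained cache key.
            return hash(logical_key) % BUCKETS

        def put(cache, logical_key, value):
            bucket = cache.setdefault(bucket_key(logical_key), {})
            bucket[logical_key] = value          # overhead is paid per bucket, not per key

        def get(cache, logical_key):
            return cache.get(bucket_key(logical_key), {}).get(logical_key)

        cache = {}                               # stand-in for a NamedCache of dict-valued entries
        for k in range(5000):
            put(cache, k, k * 2)
        print(get(cache, 1234), len(cache))      # 2468, and at most 1024 physical entries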

    Read the article

  • Prevent collisions between mobs/NPCs/units piloted by computer AI: How to avoid mobile obstacles?

    - by Arthur Wulf White
    Let's say we have character a starting at point A and character b starting at point B. Character a is headed to point B and character b is headed to point A. There are several simple ways to find the path (I will be using Dijkstra). The question is, how do I take preventative action in the code to stop the two from colliding with one another? Case 2: Characters a and b start from the same point at different times. Character b starts later and is the faster of the two. How do I make character b walk around character a without going through it? Case 3: Let's say we have m such characters on each side and there is sufficient room to pass through without the characters overlapping with one another. How do I stop the two groups of characters from "walking on top of one another" and allow them to pass around one another in a natural, organic way? A correct answer would be any algorithm that, given the path to the destination and a list of mobile objects that block the path, finds an alternative path or stops, without stopping all units when there is sufficient room to traverse.
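
    One simple pattern that fits the "sidestep or stop" requirement: plan with Dijkstra/A* as usual, but keep a shared set of reserved cells, and before each step check whether the next cell is held by another unit; if it is, re-plan with that cell temporarily blocked, and only wait in place when no detour exists. A hedged Python sketch on a grid (all names are illustrative, not an engine API):

        def try_step(unit, path, occupied, replan):
            """Advance one unit by one cell, avoiding cells held by other units.

            unit:     object with a .pos attribute (current cell)
            path:     remaining list of cells to the goal
            occupied: set of cells currently held by any unit
            replan:   callback(start, goal, extra_blocked) -> new path (e.g. Dijkstra)
            """
            if not path:
                return path                       # already at the goal
            nxt = path[0]
            if nxt not in occupied:
                occupied.discard(unit.pos)        # release the old cell
                occupied.add(nxt)                 # reserve the new one before moving
                unit.pos = nxt
                return path[1:]
            # Next cell is blocked by a mobile obstacle: try to route around it.
            detour = replan(unit.pos, path[-1], extra_blocked={nxt})
            if detour:
                return detour                     # found room to pass around
            return path                           # no room this tick: wait in place

        class Unit:
            def __init__(self, pos):
                self.pos = pos

        # Tiny demo: (1, 0) is held by another unit, so the mover gets a detour path.
        occupied = {(0, 0), (1, 0)}
        u = Unit((0, 0))
        fake_replan = lambda start, goal, extra_blocked: [(1, 1), (2, 0)]
        path = try_step(u, [(1, 0), (2, 0)], occupied, fake_replan)
        print(u.pos, path)   # still at (0, 0) this tick, will go via (1, 1) next tick

    Because only the unit whose next cell is taken re-plans or waits, the rest of the group keeps flowing around the obstruction instead of everyone stopping.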

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I've been working on a project for a little while and I'm unsure which is the better architecture. I'm interested in the consensus. The answer to me seems fairly obvious, but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application data, or both in one? The detailed version is: if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to update the application data within this database periodically, should this be split into two databases, so that one can simply download the updated application-data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason it just doesn't feel right, and I can't pick out quite why.

    Read the article

  • Unity gizmos vs. referenced game objects

    - by DuckMaestro
    I'm designing a Unity script that I intend to be highly reusable and as easy as possible to set up within the editor. To this end, a number of properties of this script really need some kind of visual representation on screen. It is an unresolved question to me whether the design of the script should require references to placeholder game objects, OR just Vector3's and float's that have associated gizmos drawn for them. Normally a gizmo would be a natural choice, except that Unity gizmos are not directly manipulable (as far as I can tell). Because of this shortcoming I'm having to consider whether depending on references to placeholder game objects is ultimately the more designer-friendly approach, in spite of the extra setup required and the fact that it might be counter-intuitive when the placeholder game objects disappear at run-time (which my script would make them do). Is there a community standard or preference here in this case? Can a Unity-experienced game programmer / designer speak to which approach they feel is more intuitive or more convenient to set up, when using a 3rd party script? Or is this just splitting hairs as long as I ship an example prefab with my script?

    Read the article

  • Why is quicksort better than other sorting algorithms in practice?

    - by Raphael
    This is a repost of a question on cs.SE by Janoma. Full credits and spoils to him or cs.SE. In a standard algorithms course we are taught that quicksort is O(n log n) on average and O(n²) in the worst case. At the same time, other sorting algorithms are studied which are O(n log n) in the worst case (like mergesort and heapsort), and even linear time in the best case (like bubblesort) but with some additional needs of memory. After a quick glance at some more running times it is natural to say that quicksort should not be as efficient as others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates).
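
    For concreteness, an in-place quicksort sketch in Python. The practical point usually made in answers to this question is visible in the partition loop: it scans the array sequentially and swaps in place, which is friendly to CPU caches, whereas mergesort needs an auxiliary array and heapsort jumps around memory; a randomized pivot keeps the O(n²) case unlikely in practice.

        import random

        def quicksort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            # Random pivot makes adversarial / already-sorted inputs behave like average inputs.
            pivot = a[random.randint(lo, hi)]
            i, j = lo, hi
            while i <= j:
                while a[i] < pivot:
                    i += 1
                while a[j] > pivot:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]   # sequential, in-place swaps: cache friendly
                    i += 1
                    j -= 1
            quicksort(a, lo, j)
            quicksort(a, i, hi)

        data = [5, 3, 8, 1, 9, 2, 7]
        quicksort(data)
        print(data)   # [1, 2, 3, 5, 7, 8, 9]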

    Read the article

  • Generating random tunnels

    - by IVlad
    What methods could we use to generate a random tunnel, similar to the one in this classic helicopter game? Other than that it should be smooth and allow you to navigate through it, while looking as natural as possible (not too symmetric but not overly distorted either), it should also: Most importantly - be infinite and allow me to control its thickness in time - make it narrower or wider as I see fit, when I see fit; Ideally, it should be possible to efficiently generate it with smooth curves, not rectangles as in the above game; I should be able to know in advance what its bounds are, so I can detect collisions and generate powerups inside the tunnel; Any other properties that let you have more control over it or offer optimization possibilities are welcome. Note: I'm not asking for which is best or what that game uses, which could spark extended discussion and would be subjective, I'm just asking for some methods that others know about or have used before or even think they might work. That is all, I can take it from there. Also asked on stackoverflow, where someone suggested I should ask here too. I think it fits in both places, since it's as much an algorithm question as it is a gamedev question, IMO.
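
    One way that satisfies the infinite and thickness-controlled requirements: generate the centre line as a damped random walk (or any low-frequency noise) and carry the half-width as a separately controlled value, emitting one column of (top, bottom) bounds per step, so the bounds double as collision limits and as the safe region for power-ups. A hedged Python sketch with made-up names and constants:

        import random

        class Tunnel:
            """Emits one (top, bottom) column at a time; thickness can be changed at any time."""
            def __init__(self, height=480, half_width=120.0, wander=6.0):
                self.height = height
                self.half_width = half_width
                self.wander = wander
                self.centre = height / 2.0
                self.velocity = 0.0

            def set_half_width(self, half_width):
                # Caller narrows or widens the tunnel whenever it sees fit.
                self.half_width = half_width

            def next_column(self):
                # Damped random walk: smooth, organic drift rather than jagged jumps.
                self.velocity = 0.9 * self.velocity + random.uniform(-self.wander, self.wander)
                self.centre += self.velocity
                # Clamp so the whole tunnel always stays inside the playfield.
                self.centre = min(max(self.centre, self.half_width),
                                  self.height - self.half_width)
                return self.centre - self.half_width, self.centre + self.half_width

        t = Tunnel()
        print([t.next_column() for _ in range(5)])

    Rendering the stored columns with a spline or by averaging neighbouring columns gives the smooth curves rather than the rectangles of the original game, and since every column is known before it scrolls on screen, bounds are available in advance for collisions and power-up placement.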

    Read the article

  • What to do if you find a vulnerability in a competitor's site?

    - by user17610
    While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, perform any script on the competitor's website. My natural feeling is to report the issue to them in the spirit of good-will. Exploiting the issue to gain advantage crossed my mind, but I don't want to go down that path. So my question is, would you report a serious vulnerability to your direct competition, in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue? Update (Clarification): Thanks for all your feedback so far, I appreciate it. Would your answers change if I were to add that the competition in question is a behemoth in the market (hundreds of employees in several continents), and my company only started a few weeks ago (three employees)? It goes without saying, they most definitely will not remember us, and if anything, only realize that their site needs work (which is why we entered this market in the first place). I confess this is one of those moral vs. business toss-ups, but I appreciate all the advice.

    Read the article

  • Web dev/programmer with 4.5 yrs experience. Better for career: self-study or master's degree? [closed]

    - by Anonymous Programmer
    I'm a 28 year-old web developer/programmer with 4.5 years of experience, and I'm looking to jump-start my career. I'm trying to decide between self-study and a 1-year master's program in CS at a top school. I'm currently making 65K in a high cost-of-living area that is NOT a hot spot for technology firms. I code almost exclusively in Ruby/Rails, PHP/CodeIgniter, SQL, and JavaScript. I've slowly gained proficiency with Git. Roughly half the time I am architecting/coding, and half the time I am pounding out HTML/CSS for static brochureware sites. I'd like to make more money while doing more challenging/interesting work, but I don't know where to start. I have an excellent academic record (math major with many CS credits, 3.9+ GPA), GRE scores, and recommendations, so I am confident that I could be admitted to a great CS master's program. On the other hand, there is the tuition and opportunity cost to consider. I feel like there are a number of practical languages/tools/skills worth knowing that I could teach myself - shell scripting, .NET, Python, Node.js, MongoDB, natural language processing techniques, etc. That said, it's one thing to read about a subject and another thing to have experience with it, which structured coursework provides. So, on to the concrete questions: What programming skills/knowledge should I develop to increase my earning potential and make me competitive for more interesting jobs? Will a master's degree in CS from a top school help me develop the above skills/knowledge, and if so, is it preferable to self-study (possibly for other reasons, e.g., the degree's value as a credential)?

    Read the article

  • At what point does programming become a useful skill?

    - by Elip
    This is probably a very difficult question to answer, because of its subjectivity, but even a vague guess would help me out. Now that Khan Academy is beginning to offer Computer Science lectures, I'm getting an itch to learn programming again. I am maybe a bit more technical than your average computer user, using Ubuntu as my OS, LaTeX for writing, and I know some small tricks like regular expressions or boolean search for Google. However, from my previous attempts to learn programming, I realized I do not have a natural aptitude for it and I also don't seem to enjoy the process. But I am fairly certain that a basic proficiency in programming could prove to be very beneficial for me career-wise; I also often get ideas for little scripts that I cannot implement. My question is: Let's say you study programming 1 hour / day on average. At what point will you become good enough that programming can be used for automating tasks and actually saving time? Do you think programming is worth picking up if you never have the ambition to make it your career or even your hobby, but use it strictly for utility purposes?

    Read the article

  • GLSL custom interpolation filter

    - by Cyan
    I'm currently building a fragment shader which uses several textures to render the final pixel color. The textures are not really textures, they are in fact "input data" to be used in the formula that generates the final color. The problem I've got is that the textures are getting bilinearly filtered, and therefore the input data is as well. This results in many unwanted side-effects, especially when the final rendered texture is "zoomed" compared to the original resolution. Removing the side effects is a complex task, and only results in "average" rendering. I was thinking: well, all my problems seem to come from the "default" bilinear filtering on these input data. I can't move to GL_NEAREST either, since it would create "blocky" rendering. So I guess the better way to proceed is to be fully in charge of the interpolation. For this to work, I would need the input data at their "natural" resolution (so that means 4 samples), and a relative position between the sampled points. Is that possible, and if yes, how? [EDIT] Since I started this question, I found this internet entry, which seems to (mostly) answer my needs: http://www.gamerendering.com/2008/10/05/bilinear-interpolation/ One aspect of the solution worries me though: the dimensions of the texture must be provided in an argument. It seems there is no way to "find this information transparently". Adding an argument into the rendering pipeline is unwelcome though, since it's not my responsibility, and it translates into adding complexity for others.
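
    The linked article's approach boils down to fetching the four nearest texels yourself (with nearest filtering) and blending them with the fractional part of the texture coordinate, which is why the texture dimensions have to be known. Expressed in Python/NumPy rather than GLSL, purely as a sketch of the math; swapping the weights at the end is where a custom filter would replace plain bilinear:

        import numpy as np

        def bilinear_sample(tex, u, v):
            """tex: 2D array of texels; (u, v) in [0, 1] texture coordinates."""
            h, w = tex.shape                      # the size the shader must be told
            x = u * w - 0.5                       # move into texel-centre space
            y = v * h - 0.5
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            fx, fy = x - x0, y - y0               # fractional position between texel centres
            def texel(ix, iy):                    # clamp-to-edge addressing
                return tex[min(max(iy, 0), h - 1), min(max(ix, 0), w - 1)]
            top    = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0)
            bottom = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1)
            return (1 - fy) * top + fy * bottom   # a custom filter changes these weights

        tex = np.array([[0.0, 1.0],
                        [2.0, 3.0]])
        print(bilinear_sample(tex, 0.5, 0.5))     # centre of a 2x2 texture -> 1.5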

    Read the article

  • On Developing Web Services with Global State

    - by user74418
    I'm new to web programming. I'm more experienced and comfortable with client-side code. Recently, I've been dabbling in web programming through Python's Google App Engine. I ran into some difficulty while trying to write some simple apps for the purpose of learning, mainly involving how to maintain some kind of consistent, universally-accessible state for the application. I tried to write a simple queueing management system, the kind you would expect to be used in a small clinic, or at a cafeteria. Typically, this is done with hardware. You take a number from a ticketing machine, and when your number is displayed or called you approach the counter for service. Alternatively, you could be given a small pager, which will beep or vibrate when it is your turn to receive service. The former is somewhat better in that you have an idea of how many people are still ahead of you in the queue. In this situation, the global state is the last number in the queue, which needs to be updated whenever a request is made to the server. I'm not sure how best to store and maintain this value in a GAE context. The solution I thought of was to keep the value in the Datastore, query it during a ticket request, update the value, and then re-store it with put. My problem is that I haven't figured out how to lock the resource so that other requests do not check the value while it is in the middle of being updated. I am concerned that I may end up with ticket requests that have the same queue number. Also, the whole solution feels awkward to me. I was wondering if there was a more natural way to accomplish this without having to go through the Datastore. Can anyone with more experience in this domain provide some advice on how to approach the design of the above application?
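
    On App Engine the usual way to get the "lock" is a datastore transaction rather than an explicit lock: the read-increment-write runs atomically and is retried if another request commits to the same entity first. A minimal sketch using the Python ndb API, with illustrative model and function names; note that a single global counter entity is also a write-throughput bottleneck, so busy deployments often shard such counters.

        from google.appengine.ext import ndb

        class TicketCounter(ndb.Model):
            last_number = ndb.IntegerProperty(default=0)

        @ndb.transactional
        def take_ticket(queue_name='front-desk'):
            # Read-increment-write inside one transaction: concurrent requests cannot
            # both observe the same value, so no two tickets share a number.
            counter = ndb.Key(TicketCounter, queue_name).get()
            if counter is None:
                counter = TicketCounter(id=queue_name)
            counter.last_number += 1
            counter.put()
            return counter.last_number

        # In a request handler, something like:
        #   number = take_ticket()
        #   self.response.write('Your ticket number is %d' % number)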

    Read the article
