Search Results

Search found 403 results on 17 pages for 'likewise'.

Page 9 of 17

  • Iphone 3d games using Open Gl Es

    - by Dave
    Hi, I want to make 3D games for the iPhone, and with all the doubt about Unity and Apple's new SDK agreement I'm wondering what the best way forward is. A lot of people recommend OpenGL ES and point me towards the OpenGL ES "bible" and the like, but the problem is that none of these actually talk about setting a game up, i.e. loading a character, a scene, AI, etc. Yet a lot of people are using OpenGL ES, so could someone please help me out? I really feel like I'm missing out on something. Are there any good tutorials/books that cover this? Thanks.

    Read the article

  • How do I switch the table that is queried with linq-to-sql

    - by Ian Ringrose
    We have two tables with the same set of columns; depending on the "type" of object, the value is stored in one of the two tables. I wish to use common code to access these two tables. If I were using raw SQL I could just use String.Format() to change the table name (likewise for updates etc.). The two separate tables are needed because the data access patterns for the common queries on the two tables are very different, so different indexes are needed. Views, "instead of" triggers and the like to make the tables look like a single table are not liked here. A lot of our customers use low-end editions of SQL Server, so we cannot use partitioned tables.
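
    A sketch of one way to share the query code (entity and property names here are invented, and it assumes the designer-generated LINQ-to-SQL classes already expose matching members): resolve the table generically via DataContext.GetTable<T>() and give both entity classes a small shared interface.

        using System.Data.Linq;
        using System.Linq;

        // Shared surface for the two generated entity classes (invented names).
        public interface IMeasurement
        {
            int Id { get; set; }
            string Value { get; set; }
        }

        // The designer-generated partial classes supply the mapped members;
        // these halves only add the shared interface.
        public partial class HotMeasurement : IMeasurement { }      // maps to the "hot" table
        public partial class ArchiveMeasurement : IMeasurement { }  // maps to the "archive" table

        public static class MeasurementStore
        {
            // The caller picks the concrete type, and with it the underlying table.
            public static Table<T> For<T>(DataContext db) where T : class
            {
                return db.GetTable<T>();
            }
        }

        // The same query shape then runs against either table:
        // var hot = MeasurementStore.For<HotMeasurement>(db).Where(m => m.Value == "x");
        // var old = MeasurementStore.For<ArchiveMeasurement>(db).Where(m => m.Value == "x");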

    Read the article

  • wordpress: previous_post_link() / next_post_link() placeholder

    - by superUntitled
    I am having trouble with the previous_post_link() and next_post_link() functions. When there is no previous post, previous_post_link() does not display a link, and likewise for next_post_link() on the last post. I would like to have a placeholder image so that the design stays consistent. Currently I have images of green arrows pointing left and right; I would like to show a grey arrow instead when there are no more posts to go back to. Is there a way to use the next_post_link()/previous_post_link() functions but not have the link removed? I also wonder if there is a way for the links to cycle, so that when you come to the most recent post, the next-post link brings you back to the first post.
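
    One way to get the placeholder behaviour (a sketch on my part, with made-up image paths) is to check get_previous_post()/get_next_post() in the template and fall back to the grey arrow when they return nothing:

        <?php
        // Previous link, or a grey placeholder when there is no earlier post.
        if ( get_previous_post() ) {
            previous_post_link( '%link', '<img src="/images/arrow-left-green.png" alt="Previous post" />' );
        } else {
            echo '<img src="/images/arrow-left-grey.png" alt="No previous post" />';
        }

        // Next link, or a grey placeholder when this is the latest post.
        if ( get_next_post() ) {
            next_post_link( '%link', '<img src="/images/arrow-right-green.png" alt="Next post" />' );
        } else {
            echo '<img src="/images/arrow-right-grey.png" alt="No next post" />';
        }
        ?>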

    Read the article

  • Unwanted Retriggering of Textbox Events

    - by user348917
    This is one of those things that seems obvious to do, but I came across an interesting side effect when implementing it. I'm trying to keep two text boxes synchronized when information is updated. In this example I will be using txtStartDate and txtEndDate. If txtStartDate is changed, then txtEndDate should be updated. Likewise, if txtEndDate changes, I want txtStartDate to be updated. The side effect I'm coming across is that when I set them up under the TextChanged event for both, the events seem to retrigger against each other indefinitely (endless loop?). Am I using the wrong event? Is this a task for a delegate?
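
    A common fix, sketched below for WinForms with assumed control names, is a simple re-entrancy guard so that a programmatic update from one handler does not bounce back through the other:

        private bool syncing;   // true while one handler is updating the other box

        private void txtStartDate_TextChanged(object sender, EventArgs e)
        {
            if (syncing) return;
            syncing = true;
            try { txtEndDate.Text = txtStartDate.Text; }
            finally { syncing = false; }
        }

        private void txtEndDate_TextChanged(object sender, EventArgs e)
        {
            if (syncing) return;
            syncing = true;
            try { txtStartDate.Text = txtEndDate.Text; }
            finally { syncing = false; }
        }

    Handling the Leave or Validated event instead of TextChanged also sidesteps the loop, since those fire on focus changes rather than on every programmatic Text assignment.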

    Read the article

  • C# log base 10 and rounding up to nearest power of 10?

    - by Tom
    Hi, if I have a number between 100 and 1000 I want to get the value 3, because 10^3 = 1000. Likewise, if I had a number between 10 and 100 I would want to get the value 2, because 10^2 is 100. In case you're wondering, it's to do with calculating a probability: I always need to divide through by 10^value to keep the probability between 0 and 1. For example, if I calculate 9256, I need to divide through by 10^4 so that I get a probability of 0.92. I'm not sure how to do the rounding up and how to do the base 10; could someone please help?
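
    A minimal C# sketch of one way to do it: take the base-10 logarithm and round it up, then divide by that power of ten.

        using System;

        class Normalise
        {
            // Smallest integer p such that 10^p >= n (for n > 0).
            static int PowerOfTenAbove(double n)
            {
                return (int)Math.Ceiling(Math.Log10(n));
            }

            static void Main()
            {
                int p = PowerOfTenAbove(9256);                 // 4, because 10^4 = 10000
                double probability = 9256 / Math.Pow(10, p);   // 0.9256
                Console.WriteLine($"10^{p}, probability {probability}");
            }
        }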

    Read the article

  • Call subclass constructor from abstract class in Java

    - by Joel
    public abstract class Parent {
        private Parent peer;

        public Parent() {
            peer = new ??????("to call overloaded constructor");
        }

        public Parent(String someString) { }
    }

    public class Child1 extends Parent { }

    public class Child2 extends Parent { }

    When I construct an instance of Child1, I want a "peer" to automatically be constructed which is also of type Child1 and stored in the peer property. Likewise for Child2, with a peer of type Child2. The problem is the assignment of the peer property in the Parent class: I can't construct the peer by calling new Child1(), because then it wouldn't work for Child2. How can I do this? Is there a keyword I can use that refers to the child class? Something like new self()?
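
    One hedged way around it (a sketch, not the asker's code) is to push peer creation into an abstract factory method and create the peer lazily, so the Parent constructor never has to name a child class:

    public abstract class Parent {
        private Parent peer;

        public Parent() { }

        public Parent(String someString) { }

        // Each concrete child knows how to build "another one of me".
        protected abstract Parent createPeer();

        public Parent getPeer() {
            if (peer == null) {
                peer = createPeer();   // uses the child's overloaded constructor, so no recursion
            }
            return peer;
        }
    }

    public class Child1 extends Parent {
        public Child1() { }
        public Child1(String someString) { super(someString); }

        @Override
        protected Parent createPeer() { return new Child1("peer"); }
    }

    public class Child2 extends Parent {
        public Child2() { }
        public Child2(String someString) { super(someString); }

        @Override
        protected Parent createPeer() { return new Child2("peer"); }
    }

    Deferring the call to getPeer() also avoids invoking an overridable method from the superclass constructor, which would otherwise run before the child's own constructor body.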

    Read the article

  • Why does `Array.length`, `Function.length`, `String.length`, etc return 1?

    - by zealoushacker
    While teaching my JavaScript class yesterday, my students and I came across some interesting functionality that I thought might be worth capturing in a question and the answer I've come to. Typing Array.length in the JS console in Chrome returns 1. Likewise, Function.length returns 1. This is important because every function in JavaScript is actually a Function object (MDN JS Ref: Function). Thus, Object.length and likely all other native objects will and should return 1 as the value of the length property. So, finally, why is this behavior occurring?
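
    A short illustration of what is going on (my own example, not from the original post): for any function, .length reports the number of declared parameters, and the built-in constructors are themselves functions.

        // A function's .length is its declared arity, not anything to do with arrays.
        function twoArgs(a, b) {}
        console.log(twoArgs.length);      // 2

        console.log(Array.length);        // 1 - Array is a function declared with one parameter
        console.log(Function.length);     // 1
        console.log(String.length);       // 1

        // The length of an array *instance* is a different property entirely.
        console.log([10, 20, 30].length); // 3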

    Read the article

  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
    It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing. Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities. The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected]
    Chip Manufacturers: Intel, Intel Enabled Server Acceleration Alliance, AMD
    Server vendors: Cisco Unified Computing System, Dawning, Dell, Egenera, Fujitsu, HP, Huawei, IBM, NEC, Sun/Oracle
    Storage Systems, Volume Management and File Systems: 3Par, Compellent, EMC VPLEX, FalconStor, Fusion-io, Hitachi Data Systems, HP Storage Array Systems, Lustre, Network Appliance, OCFS2, PillarData, Symantec Veritas Storage Foundation
    Networking (Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand): Brocade, Emulex, Mellanox, QLogic, Voltaire
    SOA and Middleware: ActiveState ActivePerl, ActivePython; Tibco; Zend
    Backup, Recovery & Replication: Arkeia Network Backup Suite; BakBone NetVault; CommVault Simpana 8; EMC Networker, Replication Manager; FalconStor Continuous Data Protector; HP Data Protector; NetApp Snapmanager; Quest LiteSpeed Engine; Steeleye Data Replication, Disaster Recovery; Symantec NetBackup, Veritas Volume Replicator, Symantec Backup Exec; Zmanda Amanda Enterprise
    Data Center Automation: BMC; CA Unicenter; HP Server Automation (formerly Opsware), System Management Homepage; Oracle Enterprise Manager Ops Center; Quest Vizioncore vFoglight Pro; TeamQuest Manager
    Clustering & High Availability: FUJITSU x10sure; NEC Express Cluster X; Steeleye Lifekeeper; Symantec Cluster Server; Univa UniCluster
    Virtualization Platforms and Cloud Providers: Amazon EC2; Citrix XenServer; Rackspace Cloud; VirtualBox; VMWare ESX
    Security Management: ArcSight Enterprise Security Manager, Logger; CA Access Control; Centrify Suite; Ecora Auditor; FoxT Manager; Likewise Unix Account Management; Lumension Endpoint Management and Security Suite; QualysGuard Suite; Quest Privilege Manager; McAfee Application Control, Change Control, Integrity Monitor, Integrity Control, PCI Pro; Solidcore S3; Symantec Enterprise Security Manager (ESM); Tripwire; Trusted Computer Solutions

    Read the article

  • Facebook Sponsored Results: Is It Getting Results?

    - by Mike Stiles
    Social marketers who like to focus on the paid aspect of the paid/earned hybrid Facebook represents may want to keep themselves aware of how the network’s new Sponsored Results ad product is performing. The ads, which appear when a user conducts a search from the Facebook search bar, have only been around a week or so. But the first statistics coming out of them are not bad. Marketer Nanigans says click-through rates on the Sponsored Results have been nearly 23 times better than regular Facebook ads. Some click-through rates have even gone over 3%. Just to give you some perspective, a TechCrunch article points out that’s the same kind of click-through rates that were being enjoyed during the go-go dot com boom of the 90’s. The average across the Internet in its entirety is now somewhere around .3% on a good day, so a 3% number should be enough to raise an eyebrow. Plus the cost-per-click price is turning up 78% lower than regular Facebook ads, so that should raise the other eyebrow. Marketers have gotten pretty used to being able to buy ads against certain keywords. Most any digital property worth its salt that sells ads offers this, and so does Facebook with its Sponsored Results product. But the unique prize Facebook brings to the table is the ability to also buy based on demographic and interest information gleaned from Facebook user profiles. With almost 950 million logging in, this is exactly the kind of leveraging of those users conventional wisdom says is necessary for Facebook to deliver on its amazing potential. So how does the Facebook user fit into this? Notorious for finding out exactly where sponsored marketing messages are appearing and training their eyeballs to avoid those areas, will the Facebook user reject these Sponsored Results? Well, Facebook may have found an area in addition to the News Feed where paid elements can’t be avoided and will be tolerated. If users want to read their News Feed, and they do, they’re going to see sponsored posts. Likewise, if they want to search for friends or Pages, and they do, they’re going to see Sponsored Results. The paid results are clearly marked as such. As long as their organic search results are not tainted or compromised, they will continue using search. But something more is going on. The early click-through rate numbers say not only do users not mind seeing these Sponsored Results, they’re finding them relevant enough to click on. And once they click, they seem to be liking what they find, with a reported 14% higher install rate than Marketplace Ads. It’s early, and obviously the jury is still out. But this is a new social paid marketing opportunity that’s well worth keeping an eye on, and that may wind up hitting the trifecta of being effective for the platform, the consumer, and the marketer.

    Read the article

  • Using SQL Source Control and Vault Professional Part 4

    - by Ajarn Mark Caldwell
    Two weeks ago I upgraded our installation of Fortress to the latest version, which is now named Vault Professional.  This is the version of Vault (i.e. Vault Standard 5.1 / Vault Professional 5.1) that will be officially supported with Red-Gate SQL Source Control 2.1.  While the folks at Red-Gate did a fantastic job of working with me to get SQL Source Control to work with the older Fortress version, we weren’t going to just sit on that.  There are a couple of things that Vault Professional cleaned up for us, such as improved integration with Visual Studio 2010, so it was a win all around. Shortly after that upgrade, I received notice from Red-Gate that they had a new Early Access version of SQL Source Control available that included the ability to source control static data.  The idea here is that you probably have a few fairly static lookup tables in your system, and those data values are similar in concept to source code, and should be versioned in your source control management system also.  I agree with this, but please be wise…somebody out there is bound to try to use this feature as their disaster recovery for their entire database, and that is NOT the purpose.  First off, you should never have your PROD (or LIVE, whatever you call it) system attached to source control.  Source Control is for development, not for PROD systems.  Second, use the features that are intended for this purpose, such as BACKUP and RESTORE. Laying that tangent aside, it is great that now you can include these critical values in your repository and make them part of a deployment process.  As you would guess, SQL Source Control uses SQL Data Compare to create the data change scripts just like it uses SQL Compare to create the schema change scripts.  Once again, they did a very good job with the integration to their other products.  At this point we are really starting to see some good payback on our investment in the full SQL Developer Bundle.  Those products were worth the investment back when we only used them sporadically for troubleshooting and DBA analysis, but now with SQL Source Control, they are becoming everyday-use products for the development team. I like this software (SQL Source Control) so much that I am about to break my own rules and distribute it to my team to use even though it is still in beta.  This is the first time that I have approved the use of any beta software in a production scenario (actively building our next versions of internal software) but I predict that the usability and productivity gain of using SQL Source Control over manual scripting is worth the risk.  Of course, I have also put this beta software through its paces pretty well to be comfortable with it, and Red-Gate has proven their responsiveness to issues that came up in my early beta testing, and so I am willing to bet on their continued support.  Likewise, SourceGear, the maker of Vault Professional, has proven itself to me as well, and so the combination of SQL Source Control with Vault Professional is the new standard for my development team.

    Read the article

  • What is Database Continuous Integration?

    - by SQLDev
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost to the development process. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control. This can alternatively be run on a regular schedule. This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' and touch the build at all. Such a 'broken build' will trigger an alert and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:
    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.
    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
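
    As a rough illustration of point 2 (my own sketch, not from the post; the server name and script paths are placeholders), a CI job can rebuild a throwaway database from the scripts held in source control with plain sqlcmd calls:

        REM Drop and recreate the CI database, then apply the scripts kept in source control.
        sqlcmd -S localhost -E -Q "IF DB_ID('MyApp_CI') IS NOT NULL DROP DATABASE MyApp_CI; CREATE DATABASE MyApp_CI;"
        sqlcmd -S localhost -E -d MyApp_CI -i .\database\create_schema.sql
        sqlcmd -S localhost -E -d MyApp_CI -i .\database\reference_data.sql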

    Read the article

  • AI for a mixed Turn Based + Real Time battle system - Something "Gambit like" the right approach?

    - by Jason L.
    This is maybe a question that's been asked 100 times in 1,000 different ways. I apologize for that :) I'm in the process of building the AI for a game I'm working on. The game is a turn-based one, in the vein of Final Fantasy, but it also has a set of things that happen in real time (reactions). I've experimented with FSMs, HFSMs, and behavior trees. None of them felt "right" to me, and all felt either too limiting or too generic/big. The idea I'm toying with now is something like a "rules engine" that could be likened to the Gambit system from Final Fantasy 12. I would have a set of predefined personalities. Each of these personalities would have a set of conditions it would check on each event (turn start, time to react, etc.). These conditions would be priority ordered, and the first one that returns true would be the action I take. These conditions can also point to a "choice" action, which is just an action that will make a choice based on some utility function. Sort of a mix of the FSM/HFSM and utility function approaches. So, a "gambit" with the personality of "Healer" may look something like this:
    (ON) Ally HP = 0% - Choose "Relife" spell
    (ON) Ally HP < 50% - Choose Heal spell
    (ON) Self HP < 65% - Choose Heal spell
    (ON) Ally Debuff - Choose Debuff Removal spell
    (ON) Ally Lost Buff - Choose Buff spell
    Likewise, a "gambit" with the personality of "Aggressor" may look like this:
    (ON) Foe HP < 10% - Choose Attack skill
    (ON) Foe any - Choose target - Choose Attack skill
    (ON) Self Lost Buff - Choose Buff spell
    (ON) Foe HP = 0% - Taunt the player
    What I like about this approach is that it makes sense in my head. It also would be extremely easy to build an "AI editor" with an approach like this. What I'm worried about is: would it be too limiting? Would it maybe get too complicated? Does anyone have experience with AI in turn-based games who could give me some insight into this approach, or suggest a different one? Many thanks in advance!
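
    For what it's worth, the priority-ordered rule list is easy to sketch; here is a rough Python illustration (all names invented) of "first condition that matches wins":

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Rule:
            condition: Callable[[dict], bool]   # inspects the battle context
            action: str                          # what to do when the condition holds

        def decide(rules, ctx):
            """Return the action of the first rule whose condition matches, in priority order."""
            for rule in rules:
                if rule.condition(ctx):
                    return rule.action
            return None   # nothing fired; fall back to a default behaviour

        # "Healer" personality, roughly mirroring the gambit list above (HP as percentages).
        healer = [
            Rule(lambda c: any(a["hp"] == 0 for a in c["allies"]), "Relife"),
            Rule(lambda c: any(a["hp"] < 50 for a in c["allies"]), "Heal ally"),
            Rule(lambda c: c["self"]["hp"] < 65, "Heal self"),
        ]

        ctx = {"self": {"hp": 80}, "allies": [{"hp": 45}, {"hp": 100}]}
        print(decide(healer, ctx))   # -> "Heal ally"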

    Read the article

  • AJAX event, prevents other page actions

    - by cobaltduck
    Here's a fairly average scenario, using JSF as an example, but I have observed this same concept in ASP.NET, Apache Wicket, and other frameworks with ajax capabilities.
    <h:inputText id="text1" value="#{myBacker.myBean.myStringVar}" styleClass="goodCSS">
        <f:ajax event="change" listener="#{myBacker.text1ChangeEventMethod}" update="someOtherField" />
    </h:inputText>
    <h:selectBooleanCheckbox id="check1" value="#{myBacker.myBean.myBoolVar}" />
    Let's suppose that the 'text1ChangeEventListener' is essential to 'someOtherField' and perhaps toggles its disabled attribute, or changes its available options, based on the value of 'myStringVar.' The particulars aren't important; let's just accept that for some reason we need an ajax call when the 'text1' value is changed. So Jane User is working her way down the form. She arrives at the 'text1' field and types some value. The cursor focus is still in the text field as she moves her mouse to the 'check1' box and clicks. It appears to her that nothing has happened. She clicks again, and this time the checkbox highlights and the icon indicating a selection appears in the box. Jane has to do several entries in the form today, sees this happen every time, and it becomes very frustrating for her. Likewise, Jeff Admin is also perusing this form and begins to type in 'text1.' He then realizes he doesn't really want to enter this data, so he moves his mouse to the "cancel" button elsewhere on the page and clicks. Nothing seems to happen. Jeff clicks again, and after confirming he really does want to cancel, is returned to the home page. Jeff scratches his head. The problem is simply that the first thing the system does after 'text1' loses focus is run the listener and perform the ajax operation. It may only take a fraction of a second, but still: you can click other buttons all you want, but until that ajax call has finished, everything else is ignored. I've spent the morning searching and reading, and it seems no one else has even noticed this. I could not find one article, blog, or past question here or at SO, or anything else that addresses this obvious and glaring deficiency in ajax. So first of all, am I truly alone in thinking this is a big problem? Second, does anyone have a solution?
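
    One partial mitigation (a sketch on my part, using the standard f:ajax onevent hook; the form and button ids are made up) is to at least make the in-flight request visible, so the "lost" click is less mysterious:

        <h:inputText id="text1" value="#{myBacker.myBean.myStringVar}" styleClass="goodCSS">
            <f:ajax event="change" listener="#{myBacker.text1ChangeEventMethod}"
                    update="someOtherField" onevent="handleAjaxStatus" />
        </h:inputText>

        <script>
            // JSF calls this with data.status set to 'begin', 'complete', then 'success'.
            function handleAjaxStatus(data) {
                var busy = (data.status === 'begin');
                document.body.style.cursor = busy ? 'wait' : 'default';
                var cancel = document.getElementById('mainForm:cancelBtn'); // made-up id
                if (cancel) { cancel.disabled = busy; }
            }
        </script>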

    Read the article

  • const vs. readonly for a singleton

    - by GlenH7
    First off, I understand there are folk who oppose the use of singletons. I think it's an appropriate use in this case as it's constant state information, but I'm open to differing opinions / solutions. (See The singleton pattern and When should the singleton pattern not be used?) Second, for a broader audience: C++/CLI has a similar keyword to readonly with initonly, so this isn't strictly a C# type question. (Literal field versus constant variable in C++/CLI) Sidenote: A discussion of some of the nuances on using const or readonly. My question: I have a singleton that anchors together some different data structures. Part of what I expose through that singleton are some lists and other objects, which represent the necessary keys or columns in order to connect the linked data structures. I doubt that anyone would try to change these objects through a different module, but I want to explicitly protect them from that risk. So I'm currently using a "readonly" modifier on those objects*. I'm using readonly instead of const with the lists as I read that using const will embed those items in the referencing assemblies and will therefore trigger a rebuild of those referencing assemblies if / when the list(s) is/are modified. This seems like a tighter coupling than I would want between the modules, but I wonder if I'm obsessing over a moot point. (This is question #2 below) The alternative I see to using "readonly" is to make the variables private and then wrap them with a public get. I'm struggling to see the advantage of this approach as it seems like wrapper code that doesn't provide much additional benefit. (This is question #1 below) It's highly unlikely that we'll change the contents or format of the lists - they're a compilation of things to avoid using magic strings all over the place. Unfortunately, not all the code has converted over to using this singleton's presentation of those strings. Likewise, I don't know that we'd change the containers / classes for the lists. So while I normally argue for the encapsulation advantages a get wrapper provides, I'm just not feeling it in this case. A representative sample of my singleton:
    public sealed class mySingl
    {
        private static volatile mySingl sngl;
        private static object lockObject = new Object();

        public readonly Dictionary<string, string> myDict = new Dictionary<string, string>()
        {
            {"I", "index"},
            {"D", "display"},
        };

        public enum parms
        {
            ABC = 10,
            DEF = 20,
            FGH = 30
        };

        public readonly List<parms> specParms = new List<parms>() { parms.ABC, parms.FGH };

        public static mySingl Instance
        {
            get
            {
                if (sngl == null)
                {
                    lock (lockObject)
                    {
                        if (sngl == null)
                            sngl = new mySingl();
                    }
                }
                return sngl;
            }
        }

        private mySingl()
        {
            doSomething();
        }
    }
    Questions:
    1. Am I taking the most reasonable approach in this case?
    2. Should I be worrying about const vs. readonly?
    3. Is there a better way of providing this information?

    Read the article

  • Why Healthcare Today Needs BPM and SOA by Avio

    - by JuergenKress
    Within the past couple years, the Patient Protection and Affordable Care Act has led to significant changes in the healthcare industry. A highly-complex supply chain between patients, providers, buyers and insurance companies has led to a lack of overall collaboration when it comes to processes. The first open enrollment deadline for products on the Health Insurance Exchange has passed. So what now? Let's take a brief look at how things have changed and what organizations can do to stay in (and ahead of) the game.
    New requirements, new processes
    Organizations that have not adapted processes to meet new regulatory requirements will fall further behind. New regulatory requirements effectively make some legacy applications obsolete, require batch process to move to real-time, and more. Business Process Management (BPM) can help organizations bring data processes in line while helping IT redesign processes rather than change code or replace existing applications. BPM fills in application gaps and links critical information systems for a more visible, efficient and auditable organization.
    Social and mobile solutions
    BPM technology also facilitates social and mobile solutions that can help meet new needs. Patients are dependent on a network of doctors, pharmacists, families and others. Social solutions can connect members of the patient's community in ways never seen before - enabling real-time, relevant communication. Likewise, mobile technology supports social solutions, and BPM is the most efficient way to make processes simple and role-based. It unties medical professionals from their offices by enabling them to access timely information and alerts anywhere.
    Why SOA is also needed
    Integrating BPM with Service-Oriented Architecture (SOA) also plays a critical role in the development of healthcare solutions that work. SOA can create a single end-to-end process, integrate applications and move them into a common workflow. While SOA enables the reutilization of existing IT infrastructure, BPM supports the process optimization, monitoring and social aspects. SOA and BPM applications support business analysts as they model, create and monitor processes - providing real-time insight and a unified workflow of process activities. Read "New" Solutions for a New Healthcare Landscape on our blog to learn more.
    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Domain Models (PHP)

    - by Calum Bulmer
    I have been programming in PHP for several years and have, in the past, adopted methods of my own to handle data within my applications. I have built my own MVC in the past and have a reasonable understanding of OOP within PHP, but I know my implementation needs some serious work. In the past I have used an is-a relationship between a model and a database table. I now know, after doing some research, that this is not really the best way forward. As far as I understand it, I should create models that don't really care about the underlying database (or whatever storage mechanism is to be used) but only care about their actions and their data. From this I have established that I can create models of, let's say, a Person, and this Person object could have some Children (human children) that are also Person objects held in an array (with addPerson and removePerson methods, accepting a Person object). I could then create a PersonMapper that I could use to get a Person with a specific 'id', or to save a Person. This could then look up the relationship data in a lookup table and create the associated child objects for the Person that has been requested (if there are any), and likewise save the data in the lookup table on the save command. This is now pushing the limits of my knowledge... What if I wanted to model a building with different levels and different rooms within those levels? What if I wanted to place some items in those rooms? Would I create a class for building, level, room and item with the following structure?
    A building can have one or many level objects held in an array.
    A level can have one or many room objects held in an array.
    A room can have one or many item objects held in an array.
    There would be mappers for each class, with higher-level mappers using the child mappers to populate the arrays (either on request of the top-level object, or lazy-loaded on request). This seems to tightly couple the different objects, albeit in one direction (i.e. a floor does not need to be in a building, but a building can have levels). Is this the correct way to go about things? Within the view I want to show a building with an option to select a level, and then show the level with an option to select a room etc., but I may also want to show a tree-like structure of items in the building and what level and room they are in. I hope this makes sense. I am just struggling with the concept of nesting objects within each other when the general concept of OOP seems to be to separate things. If someone can help it would be really useful. Many thanks.
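
    To make the nesting concrete, here is a rough PHP sketch (class and method names are my own, not from the question) of domain objects that only hold their children, with a mapper per class so the higher-level mapper delegates to the child mapper:

        <?php
        class Room
        {
            private array $items = [];
            public function addItem(object $item): void { $this->items[] = $item; }
            public function getItems(): array { return $this->items; }
        }

        class Level
        {
            private array $rooms = [];
            public function addRoom(Room $room): void { $this->rooms[] = $room; }
            public function getRooms(): array { return $this->rooms; }
        }

        class Building
        {
            private array $levels = [];
            public function addLevel(Level $level): void { $this->levels[] = $level; }
            public function getLevels(): array { return $this->levels; }
        }

        class LevelMapper
        {
            public function __construct(private PDO $db) {}

            /** @return Level[] levels (and their rooms) belonging to one building */
            public function findByBuilding(int $buildingId): array
            {
                // Query the level table here and build Level objects,
                // delegating to a RoomMapper for each level's rooms.
                return [];
            }
        }

        class BuildingMapper
        {
            public function __construct(private PDO $db, private LevelMapper $levels) {}

            public function find(int $id): Building
            {
                $building = new Building();
                foreach ($this->levels->findByBuilding($id) as $level) {
                    $building->addLevel($level);
                }
                return $building;
            }
        }

    The domain classes stay storage-agnostic; only the mappers know about PDO and the lookup tables.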

    Read the article

  • What are some good realistic programming related movies (docu-dramas, documentaries, accurate fiction, etc)?

    - by EpsilonVector
    A while ago I asked this question and the result was this. Following the response I got in the meta question, I'm re-asking the question with new guidelines to focus it on the direction I wanted it to have originally.
    ==================================================================
    The guidelines are as follows: by "programming related" I mean movies from which we can learn about stuff like the development process, the history of software/computers, or programming culture. In other words, they must be grounded in the industry. No tangential stuff. Good entries meet as many of the following criteria as possible:
    - Teach you about the history of the industry, the development process, or important industry-related topics (software patents, for example).
    - Are based on real-life events, companies, people, or practices, and those are the main focus of the movie.
    - After watching them, you feel like you understand or know something about the programmers' world that you didn't before (or you can see how someone could have such a response).
    - You can point to it and say "this faithfully represents the industry/programmer culture at some point in time". This might be something you would show laymen to explain to them what "your people" are like and what it is that you do.
    Examples of good entries include:
    - Pirates of Silicon Valley - the story of how Microsoft and Apple started the industry.
    - Revolution OS - the story of Linux's rise to fame, and a pretty good cover of the Free Software/Open Source world.
    - Aardvark'd: 12 Weeks with Geeks - the development process.
    Examples of bad entries:
    - Movies whose sole relevance is that they can be appreciated by programmers. The point of this question is not to be "what are some good movies" with "for a programmer" appended to it. Just because the writers got a few computer jokes right doesn't in itself make it about the industry.
    - Movies where there's a computer-related element, but which are not about the industry. For example, 24 (the TV series). It's a product of the information age but it isn't actually about it. Another example is movies where there's a really cool programmer character, but which are overall about something completely different. Likewise, The Big Bang Theory is not about physics, even though it has a cool physicist as a character.
    - Science fiction, even if it draws ideas from computers. For example, the Matrix trilogy.
    - Movies that you can't point to and say: this is a faithful representation of our world (at some point in time). If you can't do that, then it doesn't mirror the industry.
    Keep it to one entry per answer so that the voting can sort the entries out.

    Read the article

  • Can't save data for a member in a data form

    - by RahulS
    Implied sharing is an old topic; everyone knows the reasons for it and the workarounds, but still, a little theory first. With Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member's value. Essbase assumes (or implies) a shared member relationship in these situations:
    1. A parent has only one child.
    2. A parent has only one child that consolidates to the parent.
    In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member populated by the calculation script or load rule. The last value calculated or imported takes precedence. The result is the same whether you refer to the parent or the child as a variable in a calculation script. For more information have a look at: http://docs.oracle.com/cd/E17236_01/epm.1112/hp_admin_11122/ch14s11.html
    Now the issue we are going to talk about: we lose data on save even when the parent is dynamic calc and has a single child. Consider a dynamic calc parent with a single child, and a form designed with that selection. In the data form we will find the parent below the child member, and this is by design: whenever you make a selection using commands to select all the members below a parent, the children will always appear before the parent. Enter data, save it, and the value is lost. Now the question again: why this behavior?
    1. Data from a Planning data form passes to Essbase row by row.
    2. In the data form the child member appears before the parent.
    3. First, data goes to Essbase for the child (SingleStoreChild).
    4. Then, when Planning passes the data for the parent, there is #Missing or no data there,
    5. which overwrites the value with #Missing.
    PS: As we know, dynamic calc members are calculated on the fly and are not allocated any memory in Essbase. Here the parent was dynamic calc and was pointing to the same memory as the child in the background, so when Planning passed data to Essbase for the second row it updated the child with missing data. (A little confusing; let me know if you need more explanation.) As one of the solutions, just change the order of appearance of the parent and child. Cheers..!!! Rahul S. https://www.facebook.com/pages/HyperionPlanning/117320818374228

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows:
    float Q_rsqrt( float number )
    {
        long i;
        float x2, y;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        y  = number;
        i  = * ( long * ) &y;                        // evil floating point bit level hacking
        i  = 0x5f3759df - ( i >> 1 );                // what the f***?
        y  = * ( float * ) &i;
        y  = y * ( threehalfs - ( x2 * y * y ) );    // 1st iteration
        // y  = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed

        return y;
    }
    To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale:
    - For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game Programming book ever devoting the first two chapters to "Now just what is a variable anyway?"
    - In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection.
    - Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL.
    Thanks for everything! Before anyone gets on my case about it, Fast inverse square root was from Quake III Arena, not Doom 3!

    Read the article

  • Why can't a blendShader sample anything but the current coordinate of the background image?

    - by Triynko
    In Flash, you can set a DisplayObject's blendShader property to a pixel shader (flash.shaders.Shader class). The mechanism is nice, because Flash automatically provides your Shader with two input images, including the background surface and the foreground display object's bitmap. The problem is that at runtime, the shader doesn't allow you to sample the background anywhere but under the current output coordinate. If you try to sample other coordinates, it just returns the color of the current coordinate instead, ignoring the coordinates you specified. This seems to occur only at runtime, because it works properly in the Pixel Bender toolkit. This limitation makes it impossible to simulate, for example, the Aero Glass effect in Windows Vista/7, because you cannot sample the background properly for blurring. I must mention that it is possible to create the effect in Flash through manual composition techniques, but it's hard to determine when it actually needs updated, because Flash does not provide information about when a particular area of the screen or a particular display object needs re-rendered. For example, you may have a fixed glass surface with objects moving underneath it that don't dispatch events when they move. The only alternative is to re-render the glass bar every frame, which is inefficient, which is why I am trying to do it through a blendShader so Flash determines when it needs rendered automatically. Is there a technical reason for this limitation, or is it an oversight of some sort? Does anyone know of a workaround, or a way I could provide my manual composition implementation with information about when it needs re-rendered? The limitation is mentioned with no explanation in the last note in this page: http://help.adobe.com/en_US/as3/dev/WSB19E965E-CCD2-4174-8077-8E5D0141A4A8.html It says: "Note: When a Pixel Bender shader program is run as a blend in Flash Player or AIR, the sampling and outCoord() functions behave differently than in other contexts.In a blend, a sampling function will always return the current pixel being evaluated by the shader. You cannot, for example, use add an offset to outCoord() in order to sample a neighboring pixel. Likewise, if you use the outCoord() function outside a sampling function, its coordinates always evaluate to 0. You cannot, for example, use the position of a pixel to influence how the blended images are combined."

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm, and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue is once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • How to implement a component based system for items in a web game.

    - by Landstander
    Reading several other questions and answers on using a component based system to define items I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use a DB schema suggested in this series (part 5 describes the schema); http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/ This means I'll have an items table with generic item properties, a table listing all of the components for an item and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to do N queries for each component type. I'm kind of fine with this because I can cache the data into memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in so the implementation needs to be on the lean side even if they're pulled from memcache. But right there is where I feel confident about implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively and not end up writing a lot of specialized code to deal with each component. For example an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent which allows the item to be used and apply some effect onto a different set of targets filtered differently from the same battle context. Then not every part of an item is an active part, an AttributeBonusComponent might need to only kick in when the item is in an equipped state or when displaying the item details page. Ultimately, how should I bring all of the components together into the container so when I use an item as a weapon I get the correct list of targets? Know when a weapon can also be used as an item? Or to apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and I can't grasp onto the simple solution in front of me. (If that makes any sense at all.) Likewise if I were to implement the best answer from here I feel like I'd have a lot of the same questions. How to model multiple "uses" (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database.
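
    As a rough PHP sketch of the container idea (all class names here are mine, not from the linked schema), the item just aggregates whatever components its rows describe and exposes them by name; behaviour such as target filtering can then live on the components themselves:

        <?php
        interface Component
        {
            public function getName(): string;
        }

        class AttackComponent implements Component
        {
            public function __construct(public int $damage) {}
            public function getName(): string { return 'attack'; }
        }

        class AttributeBonusComponent implements Component
        {
            public function __construct(public string $attribute, public int $bonus) {}
            public function getName(): string { return 'attribute_bonus'; }
        }

        class Item
        {
            /** @var Component[] keyed by component name */
            private array $components = [];

            public function addComponent(Component $c): void
            {
                $this->components[$c->getName()] = $c;
            }

            public function has(string $name): bool
            {
                return isset($this->components[$name]);
            }

            public function get(string $name): ?Component
            {
                return $this->components[$name] ?? null;
            }
        }

        // Usage: rows fetched from the component tables become component objects.
        $katana = new Item();
        $katana->addComponent(new AttackComponent(12));
        $katana->addComponent(new AttributeBonusComponent('strength', 2));

        if ($katana->has('attack')) {
            echo $katana->get('attack')->damage; // 12
        }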

    Read the article

  • Remote Desktop - remote computer that was reached is not the one you specified

    - by Jim McKeeth
    We just setup some new Windows 2008 R2 servers and we are unable to Remote Desktop into them from our Windows 7 desktops. Remote desktop connects, but after we provide credentials we get: The connection cannot be completed because the remote computer that was reached is not the one you specified. This could be caused by an outdated entry in the DNS cache. Try using the IP address of the computer instead of the name. If we connect from Windows 7 to a machine not running Windows 2008 R2, or from a machine not running Windows 7 to the Windows 2008 R2 server, it works fine. Likewise if we connect to the Windows 2008 R2 server from Windows 7 via the IP address then it works fine (although that causes other problems later). I've only found one other mention of someone having this problem, so I don't think it is just our network. Any suggestions on how to connect from Windows 7 to Windows 2008 R2 via DNS? Both are 64-bit. Update: Turns out it does not need to be R2 to get the error. We have another server that is Windows 2008 R1 64-bit that also fails.

    Read the article

  • IMAP proxy as a POP3 hub?

    - by mailman stan
    Simple scenario, complicated technology: One family receiving mail from five email addresses via POP3 into one Outlook inbox on a single PC. Now we'd like to be able to replicate that single inbox across multiple devices (eg. desktop PC, laptop, netbook, smartphone). If we continue using POP3 as the mail transfer protocol, messages will be downloaded to one device and will not be visible to the others; replies will likewise be isolated on the sending machine. If we switch to IMAP, I understand that we can have multiple devices maintaining a shared view of an inbox hosted at the server end, but what about multiple accounts? I tried changing the account configuration in Outlook to fetch from the mail providers' IMAP service instead of POP3, which does give a shared view across multiple devices but also causes Outlook to create a separate inbox and PST for each account. This is awkward because it means there are five separate folders that need to be checked, and Outlook tools like search filters and rules don't seem to work across accounts. To get what I want (five accounts delivered into one shared mailbox) it seems that I would need some sort of intervening server that collects mail (using POP3) from all our accounts into a single inbox while preserving the original destination addresses, and then serves it up to all our devices using IMAP. Is this workable? Is it a good approach? Is there an easier way?

    Read the article
