Search Results

Search found 4940 results on 198 pages for 'understanding'.


  • glibc not found when trying to install Absinthe?

    - by RafLance
    I am trying to install Absinthe 2.0.4 on Ubuntu 11.10 on a netbook. When I try to run the install file, this keeps happening:

        rafael@RafLaptop:~/Desktop/absinthe-linux-2.0.4$ ./absinthe.x86
        ./absinthe.x86: /lib/i386-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by ./absinthe.x86)

    Do I need to upgrade GLIBC? If so, how do I do that? Since I'm on a netbook I can't use a LiveCD, so I wanted to know whether there is a way to fix this without reinstalling my whole OS. Any explanation of what GLIBC is exactly would be great too, since this is a learning experience for me. I know that GLIBC is part of libc.so.6, so I tried to run sudo apt-get install libc.so.6 but was told that it was up to date. But GLIBC isn't? I hope this articulates my problem well; if any information is missing or anything needs clarifying, please let me know! EDIT/UPDATE: After some help in the Ask Ubuntu chat room from user izx, I have gathered the following: I either need to run this program on Ubuntu 12.04 or recompile it from source, and upgrading libc on Oneiric to 2.15, while possible, is not an easy task and is not officially supported.
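    For reference, one quick way to check which glibc a system actually has is to ask the C library itself (running ldd --version in a terminal gives the same answer). This is only an illustrative sketch using Python's standard platform module:

        #!/usr/bin/env python
        # Ask the C library which version the interpreter was built against.
        import platform

        lib, version = platform.libc_ver()   # e.g. ('glibc', '2.13') on Ubuntu 11.10 (Oneiric)
        print(lib, version)
        # Absinthe 2.0.4 requires GLIBC_2.15, so any reported version below 2.15 explains the error above.

    If this prints a version below 2.15, the binary simply cannot run against the installed libc, which matches the conclusion above: move to Ubuntu 12.04 or rebuild the program from source.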

    Read the article

  • ADF Faces Skin Editor - How to Work with It

    - by Shay Shmeltzer
    The ODTUG Kscope11 conference was a great success, with lots of sessions about FMW running in a special track. I did several sessions and labs at the conference, and I thought it might be a good idea to at least give you a taste of what you might have missed. So here is most of what I demoed in my ADF Faces Skinning session (not all of it though - the session was 60 minutes long, and even with everyone leaving the building for about 5 minutes in the middle because of a fire drill, other things were covered as well). In the demo here you'll see how to generate new images and a default color scheme, how to identify a component class with Firebug, how to skin a component, how to identify the global selector of a property, how to change fonts and how to change strings. By the way, for more on ADF skinning you should also listen to the ADF Insider seminar that Frank Nimphius recorded on skinning; it will give you a better understanding of the overall skinning process. P.S. In the demo I add an entry to the web.xml file which prevents ADF Faces from compressing the generated HTML. The entry is org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and I set it to true. This is very useful while you work on creating the skin, but don't forget to un-set it before you go to production.

    Read the article

  • Developing momentum on open source projects

    - by sashang
    Hi, I've been struggling to develop momentum contributing to open source projects. In the past I tried with gcc and contributed a fix to libstdc++, but it was a one-off, and even though I spent months of my spare time on the dev mailing list and reading through things, I just never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project, and I want to know how people got started, because I find it difficult while working full time to develop any momentum with the code base. Other, more regular contributors who are on the project full-time are able to make changes at will and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those who come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, gcc, or clang/llvm, or anything else with, say, a developer head count of more than 10. How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? I know in Linus's case he had a chunk of time (6 months) to get it started. What barriers to entry did you encounter? Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly? Thanks

    Read the article

  • How bad is it to have two methods with the same name but different signatures in two classes?

    - by Super User
    I have a design problem related to a public interface, the names of methods, and the understanding of my API and code. I have two classes like this: class A has a public method collision(self), and class B has a private method _collision(self, another_object, l, r, t, b). The two methods differ in argument type and number. As an example, let's say that _collision checks whether the object is colliding with another object under certain conditions l, r, t, b (collide on the left side, right side, etc.) and returns true or false. The public collision method, on the other hand, resolves all the collisions of the object with other objects. The two methods have the same name because I think it's better to avoid overloading the design with different names for methods that do almost the same thing, but in distinct contexts and classes. Is this clear enough to the reader, or should I change the method names?
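    To make the question concrete, here is roughly how the two classes read when written out in Python; the method bodies are placeholders, and only the names and signatures come from the description above:

        class A:
            def collision(self):
                """Public: resolve all collisions of this object with the other objects."""
                ...  # placeholder body

        class B:
            def _collision(self, another_object, l, r, t, b):
                """Private helper: return True if this object collides with `another_object`
                on the requested sides (l=left, r=right, t=top, b=bottom)."""
                ...  # placeholder body

        # A caller only ever sees A.collision(); B._collision() stays an internal detail,
        # so the shared name never shows up twice in the public API.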

    Read the article

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references with pointers in C, I concluded the following. In C, say we declare a variable "var" of type integer with the value 5. The variable "var" will be stored somewhere in memory; say the address which holds it is 1000. Now we define a pointer "ptr" and assign it to our variable. So "ptr" will be 1000 and "*ptr" will be 5. Let's compare the above situation in SAP ABAP. Here we declare a field symbol "FS" and assign it to the variable "var". Now my question is: what does "FS" hold? I have searched rigorously on the internet and found that many ABAP consultants have the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong. While debugging I found that FS holds only 5. So a field symbol (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong. Now let's declare a data reference "dref" and another field symbol "fsym"; after creating the data reference we assign it to the field symbol, and then we can operate on the field symbol. So the difference between a data reference and a field symbol is: with a field symbol, we first declare a variable and assign it to the field symbol; with a data reference, we first create the data reference and then assign that to a field symbol. Then what is the use of a data reference? The same functionality can be achieved through field symbols alone.

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say, I have been programming for a while. A while ago I started to program in OpenGL via C++ on my Mac, and after a while my work got fairly intricate (3D models with a skeleton structure, terrain development). After a long time of tinkering I stopped, because all that heavy work yielded only a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went searching for a stable game engine with some features to grow on. My requirements:
    Licence: anything other than GPL (LGPL will do)
    OS: Mac & Windows
    Shader: GLSL or Cg (GLSL preferred due to experience)
    Models: any model structure with rigging (bone) support & animation
    I am currently looking at http://www.ogre3d.org/ and starting to work through some examples. However, I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you to point me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well-documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop you do something until you're notified that an event has occurred; you then handle the event and continue doing what you did before. To map that definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to listening as before. However, this event happening and us getting notified 'just like that' is too much for me to accept. You can say: "It's not 'just like that', you have to register an event listener." But what is an event listener other than a function which for some reason isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? Events are a nice abstraction to work with, but they are just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the language implementation or the OS) are doing it for us. It basically comes down to the following pseudocode, which is running somewhere low enough that it doesn't result in busy waiting:

        while(True):
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear whether it is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards
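    For what it's worth, this intuition matches how the lowest layer is usually written. Here is a rough Python sketch of a minimal event loop using the standard selectors module, which wraps the OS's select/epoll facilities; the address and port are just example values:

        import selectors
        import socket

        sel = selectors.DefaultSelector()

        server = socket.socket()
        server.bind(("127.0.0.1", 8080))            # example address only
        server.listen()
        server.setblocking(False)
        sel.register(server, selectors.EVENT_READ)  # "register an event listener"

        while True:                                 # the event loop itself
            # This is the "poll" step: the OS blocks here until something is ready,
            # so the loop does not busy-wait.
            for key, _ in sel.select():
                if key.fileobj is server:
                    conn, _addr = server.accept()   # a new connection is the "event"
                    conn.setblocking(False)
                    sel.register(conn, selectors.EVENT_READ)
                else:
                    data = key.fileobj.recv(4096)   # read and display the data
                    if data:
                        print(data)
                    else:
                        sel.unregister(key.fileobj)
                        key.fileobj.close()

    The "listener" is not a function stuck in its own loop; it is just an entry in a table, and the single select() call is where the waiting (and, underneath, the OS-level readiness checking) actually happens.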

    Read the article

  • How can I ensure my Collada model fits on an iPhone screen?

    - by rakeshNS
    Hi, I am new to game development. I have seen many examples and tried a few myself, like displaying a triangle, a cube, etc. Now I am looking to render a Collada object, so I created one using Google SketchUp and am trying to render it. The thing I am not understanding is that in all the examples the vertex values are between -1.0 and +1.0, but when I looked into my Collada file, the vertices ranged from -30.0 to 90.0. I know that any vertices greater than 1.0 will not display on the iPhone. So can you please tell me the secret behind converting object coordinates to normalized coordinates? My previous triangle was defined as:

        struct Vertex {
            float Position[3];
            float Color[4];
        };

        const Vertex Vertices[] = {
            {{-0.5, -0.866}, {1, 1, 0.5f, 1}},
            {{0.5, -0.866}, {1, 1, 0.5, 1}},
            {{0, 1}, {1, 1, 0.5, 1}},
            {{-0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0, -0.4f}, {0.5f, 0.5f, 0.5f}},
        };

    And the triangle from the Collada file is now:

        const Vertex Vertices[] = {
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
        };
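    To illustrate one way of getting arbitrary model coordinates into the [-1, 1] range (not necessarily what a real renderer does, where a model-view-projection matrix normally handles this), here is a rough Python sketch that rescales a model by its bounding box; the vertex data is just a few positions copied from the question:

        # Fit arbitrary model-space vertices into the [-1, 1] cube by uniform scaling.
        vertices = [
            (39.4202092, 90.1263924, 0.0),
            (-20.2205588, 90.1263924, 0.0),
            (-20.2205588, 176.3763924, 0.0),
        ]

        # Bounding box of the model.
        mins = [min(v[i] for v in vertices) for i in range(3)]
        maxs = [max(v[i] for v in vertices) for i in range(3)]
        center = [(mins[i] + maxs[i]) / 2.0 for i in range(3)]
        half_extent = max((maxs[i] - mins[i]) / 2.0 for i in range(3)) or 1.0

        # Translate to the origin and scale uniformly so the largest axis spans [-1, 1].
        normalized = [tuple((v[i] - center[i]) / half_extent for i in range(3)) for v in vertices]
        print(normalized)

    In practice, though, the usual approach is to leave the model data alone and set up a proper projection (and model-view) matrix so the GPU maps those coordinates into the visible range for you.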

    Read the article

  • Installing Ubuntu on Asus G75VW (UEFI)

    - by user101653
    You all are my last hope... help! I bought an Asus G75VW from Best Buy. It has the new UEFI firmware instead of the old-style (1980s) BIOS, and has Windows 8 preinstalled. I cannot get the G75VW to install Ubuntu 12.10 in EFI mode. I did get Ubuntu to load when I changed the firmware to CSM; the computer then sees and installs Ubuntu in "legacy mode". I attempted boot repair, and Ubuntu will now load after about 1 minute, but as legacy BIOS only. If I change the firmware back to UEFI, "Binary is whitelisted" is displayed and I get a purple screen. My goal: keep my preinstalled Windows 8 on internal drive bay 1, install Ubuntu 12.10 on internal drive bay 2, and somehow choose between them at boot. I am at a loss. I am a software programmer, but I am very bad at understanding BIOS settings and partitioning. Any ideas? Has anyone done what I want to do? This is my second full day on this "issue"! If I cannot get Ubuntu installed, I'm returning the laptop and will "wait" until these UEFI/EFI obstacles are properly handled so people can install EFI-based Ubuntu without a hitch. Thanks, Dave

    Read the article

  • Who should be the architect in an agile project?

    - by woni
    We have been developing the agile way for a few months now, and I have some trouble understanding the agile manifesto as interpreted by my colleagues. The project we are developing is a framework for future projects and will be reused many times over the next years. Code is only written to fulfill the needs of the current user story. The product owner tells us what to do, but not how to do it. Which is right, in my opinion, because he is not necessarily a programmer. The project advanced, and in my eyes it got a little messy. After I noticed an assembly that was responsible for 3 concerns (IoC container, communication layer and project-internal things), I raised this with my colleagues. They answered that this was the result of applying YAGNI, because no one told them that functionality has to be split up into different assemblies for future use. In my opinion, no one has to tell us that we should respect the Separation of Concerns principle. On the other hand, they said they prefer YAGNI over SoC because it is less effort to implement and therefore faster and cheaper. Requirements changed a lot at the beginning of the project, and we ended up in endless refactoring sessions because too much had to be adapted. Is it better to make such rather simple design decisions up front, even if there is no need in the current situation, or do we have to change a lot later in the project?

    Read the article

  • Vector vs Scalar velocity?

    - by Serguei Fedorov
    I am revamping an engine I have been working on, on and off, for the last few weeks, so that it uses a directional vector to dictate direction; this way I can dictate the displacement based on a direction. However, the issue I am trying to overcome is the following: the speed along X and the speed along Y are unrelated to one another. If gravity pulls the object down with an increasing velocity, my velocity along X should not change. This is very easy to implement if my speed is broken into a Vector datatype: Vector.X dictates one direction, Vector.Y dictates the other (assuming we are not concerned with the Z axis). However, this defeats the purpose of the directional vector, because:

        SpeedX = 10
        SpeedY = 15
        [1, 1] normalized = ~[0.7, 0.7]
        [0.7, 0.7] * [10, 15] = [7, 10.5]

    As you can see, my direction is now "scaled" by my speed, which is no longer the direction I want to be moving in. I am very new to vector math and this is a learning project for me. I looked around a little bit on the internet, but I still want to figure things out on my own (not just look at an example and copy it). Is there a way around this? Using a directional vector is extremely useful, but I am a little bit stumped by this problem. I am sorry if my mathematical understanding is completely wrong.
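    One way to see the distinction in code: store the velocity per axis and let forces act on individual components, then derive a unit direction only when you actually need "which way am I facing". A small Python sketch (the numbers and time step are arbitrary examples):

        import math

        # Velocity stored per axis: gravity can change vy without ever touching vx.
        vx, vy = 10.0, 15.0
        GRAVITY = -9.8
        dt = 1.0 / 60.0              # example fixed time step

        vy += GRAVITY * dt           # gravity only alters the Y component

        # Direction (a unit vector) and speed (a scalar) are derived values, not the storage.
        speed = math.hypot(vx, vy)
        direction = (vx / speed, vy / speed) if speed > 0 else (0.0, 0.0)

        print(speed, direction)      # speed scales; direction stays consistent with (vx, vy)

    Multiplying one normalized direction by two different per-axis speeds mixes the two representations, which is exactly where the [7, 10.5] result above comes from; pick one representation (per-axis velocity, or a direction plus a single scalar speed) and stick with it.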

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to use XNA's Vector2 type while keeping the spatial relationships straight. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check whether those projections overlap, but between the severe lack of XNA-specific help online and the pseudocode everywhere that omits key parts of the algorithm, googling has been of little help. I'm aware of HOW to project a vector, but the way I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as points.
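    For what it's worth, the projection step does not depend on a shared starting point as long as every vertex is expressed in the same (world) space: the dot product of each vertex with a normalized axis gives a scalar position along that axis, and SAT only compares the resulting [min, max] intervals. A language-neutral sketch of that step in Python (the rectangle corners and the axis are example values):

        import math

        def project(vertices, axis):
            """Project polygon vertices onto a normalized axis; return the interval (lo, hi)."""
            length = math.hypot(*axis)
            ax, ay = axis[0] / length, axis[1] / length
            dots = [vx * ax + vy * ay for vx, vy in vertices]   # scalar position of each corner
            return min(dots), max(dots)

        def overlaps(a, b):
            """True if two 1-D intervals overlap."""
            return a[0] <= b[1] and b[0] <= a[1]

        rect_a = [(50, 50), (90, 50), (90, 80), (50, 80)]       # world-space corners (examples)
        rect_b = [(85, 60), (120, 60), (120, 95), (85, 95)]
        axis = (1, 0)                                           # one candidate separating axis

        print(overlaps(project(rect_a, axis), project(rect_b, axis)))

    In XNA terms that per-vertex dot product is just Vector2.Dot(vertex, axis) with the axis normalized; since both polygons' vertices live in the same world space, their projected intervals are directly comparable, so no special "starting point" is needed.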

    Read the article

  • invitation: Oracle Endeca Information Discovery Bootcamp

    - by mseika
    The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID's features, and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop & implement solutions using a Data Discovery method. Training is in English.
    What will be covered?
    The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to give them valuable skills for the implementation of projects. The course will combine lectures and hands-on lab sessions, allowing participants to apply the knowledge they have gained by extracting from sample data sources and creating an end-user application that will be used to answer several business questions.
    What You Will Learn
    Architecture: OEID components, use of graphs, overview of clustering
    OEID Installation: architecture planning, infrastructure requirements, installation process, production hints & tips
    OEID Administration: data store management, administrative operations, portal configuration, data sources, system monitoring
    Indexing: Integration Suite, data source analysis, graph (ETL) creation, record design techniques
    Portlets: Studio portlets, custom portlet development, querying functions
    Reporting: Studio applications & best practices, visualizations, EQL
    Prerequisites
    You must bring a laptop with you for the hands-on labs.
    Environment - laptop requirements: participants will perform the hands-on lab exercises using a virtual machine image. These virtual machines will be provided within a cloud environment, so participants must bring a laptop that can access a Windows server using Microsoft RDP. Participants will not need to install any software onto their laptops, but must ensure that they have the proper software installed for their OS to connect to a server through RDP.
    Hardware: CPU: dual-core, x64, 1.8 GHz or higher; RAM: 2 GB
    Software: Microsoft Remote Desktop Client; Internet Explorer 7, Firefox, or Google Chrome
    This boot camp is intended for prospective implementers of Oracle Endeca Information Discovery (OEID), or those in a presales role looking to gain insight into the technical benefits of this new package. Attendees should have experience and familiarity with the basic concepts of business intelligence.
    Where and when?
    Monday, October 15th until Wednesday, October 17th (included), 9:00 - 18:00
    Oracle France, 15, boulevard Charles de Gaulle, 92715 Colombes
    Register Here. Limited number of seats!

    Read the article

  • Does the term "Learning Curve" include the knowing of the gotchas?

    - by voroninp
    When you learn a new technology you spend time understanding its concepts and tools. But when the technology meets real life, strange and unpleasant things happen. Requirements are often far from ideal and differ from the 'classic' scenario, and soon I find myself bending the technology to my real needs. At this point I begin to learn the bugs of the system, or that it is not as flexible as it seemed at the very beginning. This 'fighting' with the technology consumes a great part of the development time. What is more depressing is that such gotchas and workarounds are not collected in one place (a book, a site, etc.), and before you actually run into one you cannot ask the right question, because you do not even suspect the reason for the problem (an unknown unknown). So my question consists of three parts: 1) Do you manage to predict possible future problems (and how)? 2) How much time do you spend looking for a workaround/fix/solution before you give up and switch to other problems? 3) What are your criteria for considering yourself experienced in a technology? Do you take these gotchas into account?

    Read the article

  • Learning node.js

    - by john smith
    I am not sure if this is the right place to ask, but I thought it was the most suitable. I recently graduated from university, where I learned the full PHP stack; basically all the LAMP stuff, not counting all the other subjects. I had barely got my degree when this whole node.js thing boomed out of nowhere. You can imagine how that feels; the story is always the same: you never stop learning and studying. So I recently got my hands on node.js: reading books, tutorials, and everything imaginable on the internet. The problem is simple: none of this comes close to having a teacher next to you, helping you understand and solve your problems, especially when all you can do is post your doubts on a website and patiently wait for replies. It's not that it isn't good, it's just much slower than having a teacher. So, in short: is there a place where one can find someone willing to teach this material? This would obviously be done remotely, via Skype and the like. Can anyone here point me in the right direction? Or just downvote me for posting on the wrong website? Thanks in advance.

    Read the article

  • GeoTrust SSL brand name used by re-sellers

    - by Christopher
    I feel like I got the bait-and-switch from my web host provider, since they advertise "GeoTrust SSL" for $99. I purchased it thinking the certificate would be issued by geotrust.com, but then I got an email from Comodo saying they are providing it. My host provider says they get a discount by using Comodo. I purchased the certificate with the understanding it would be issued by GeoTrust. I called the host provider and they said they usually expect it from GeoTrust, but someone from email support responded saying the product name is "GeoTrust SSL" but they use Comodo to get a discount. I think this is bogus and an unfair trade practice. However, searching for "GeoTrust" on Google brings up a ton of websites selling "GeoTrust" certificates. How can companies get away with this? Since the host provider is part of the BBB, I plan to ask my host to update the purchase page on their website to state clearly that "This certificate is provided at a discount and may be issued by a provider other than GeoTrust.com, such as Comodo.com." Any feedback on this is appreciated.

    Read the article

  • How to apply verification and validation on the following example

    - by user970696
    I have been following the verification and validation questions here with my colleagues, yet we are unable to see the subtle differences, probably because of a language barrier in technical English. An example:
    Requirement specification: the user wants to control the lights in 4 rooms by a remote command sent from the UI for each room separately.
    Functional specification: the UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.
    Let me start with what I learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements: as the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (it should design 4 checkboxes...), and the source code against the design (is there code for the 4 checkboxes, functions to send the signals, etc.; is it traceable to the requirements?). Okay, the product is built and we need to test it, i.e. validate. Here comes our trouble in understanding: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work from the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product; isn't that verification? (It should not be; let's stick to ISO 12207, in which only validation is the actual testing.)

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application; the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person, but they may say their age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and that they'll have to try again. There may be a requirement to report every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). It seems more error-prone that way. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
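    As a concrete, hypothetical illustration of the trade-off being argued about, here is a small Python sketch of both styles for the age-is-"apple" case; the field names and helpers are made up:

        class ValidationError(Exception):
            """Raised when user input cannot be parsed; carries every problem found."""
            def __init__(self, problems):
                super().__init__("; ".join(problems))
                self.problems = problems

        def parse_person(fields):
            problems = []
            if not fields.get("name"):
                problems.append("name is required")
            age = None
            try:
                age = int(fields.get("age", ""))
            except ValueError:
                problems.append("age must be a number, got %r" % fields.get("age"))
            if problems:
                raise ValidationError(problems)   # exception style: the caller cannot ignore it
            return {"name": fields["name"], "age": age}

        # Return-value style: every caller has to remember to check the error list.
        def parse_person_checked(fields):
            try:
                return parse_person(fields), []
            except ValidationError as e:
                return None, e.problems

    Whichever term applies, the practical difference is who is forced to notice the failure: with the exception, forgetting to handle it aborts the operation instead of silently saving bad data, which is the point made above.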

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement: Time based movement Vs Frame rate based movement?, and Fixed time step vs Variable time step, but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel" But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation of how frame-independent movement works. Thank you.
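    A tiny Python sketch of the idea from that lesson, with no graphics library involved; the speed and the simulated frame time are arbitrary examples:

        import time

        SPEED = 200.0                 # pixels per second, as in the lesson
        x = 0.0
        last = time.monotonic()

        for _ in range(5):            # stand-in for the game loop
            time.sleep(0.016)         # pretend one frame of work (~60 fps here)
            now = time.monotonic()
            dt = now - last           # time since the last frame, in seconds
            last = now

            x += SPEED * dt           # move by "velocity * elapsed time", not a fixed step per frame
            print("dt=%.4f  moved to x=%.2f" % (dt, x))

    The 1/1000 in the tutorial comes from converting milliseconds (what SDL's timer returns) into seconds; the 1/200 is just the special case where the loop happens to run 200 times per second, so each frame's dt is 1/200 s and the dot moves 1 pixel that frame either way.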

    Read the article

  • A Great Work : ADF Architecture TV

    - by mustafakaya
    I would like to share some information about Oracle ADF Product Management's great work: ADF Architecture TV. This channel covers various subjects, such as what you will need before starting a new ADF (or any software) project, how to assess and select team members' skills, and how to design and implement ADF projects. When developing with a new technology, one of the challenges for technical staff is to learn both the features of the technology and how to implement them, and also to consider the broader concepts of design, engineering and architecture. Many an IT project has come undone because IT staff were focused on the nitty-gritty details of writing software rather than looking at the "bigger picture" of how it will all go together. Oracle's "ADF Architecture TV" plans to address this issue by focusing on architectural issues and developer guidelines for writing ADF software solutions. The goal is to give ADF developers an understanding of the decisions you need to make to build a successful ADF application, potential architectural blueprints to choose from when putting the ADF application together, and potential best practices to take back to your development team. You can click here for ADF Architecture TV.

    Read the article

  • NVIDIA X Server TwinView issues with 2 GeForce 8600GT cards

    - by Big Daddy
    Full disclosure: I am relatively new to Linux and loving the learning curve, but I am undoubtedly capable of ignorant mistakes. I have a rig I am building for my home office desktop. The basics:
    Gigabyte motherboard
    AMD 64-bit processor
    Ubuntu 12.04 64-bit
    two Nvidia GeForce 8600GT video cards using an SLI bridge
    two 22" DVI-input HP monitors
    So here is my issue with X Server (it is driving me nuts). If I plug both monitors into GPU 0, X Server auto-configures TwinView and all is grand, works like a charm, though both are running off of GPU 0. If I plug one monitor into GPU 0 and one monitor into GPU 1, X Server enables the monitor on GPU 0 and sees, but keeps disabled, the monitor on GPU 1. My presumption (we all know the saying about presumptions) is that all I would have to do is select the disabled monitor on GPU 1, open the Configure pull-down and select TwinView... the problem is that when the monitors are plugged in this way, the TwinView option is greyed out and cannot be selected. What am I not understanding here? Is there some sort of configuration I need to do elsewhere for Ubuntu to utilize both GPUs? Any help will be most appreciated, thanks in advance.

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character strafes along its 'right' axis. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get around this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails {
            public SplineDetails() {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }
            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a position and 'forward' based on a position passed in. It's not as accurate, but it's much faster. Now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, though I'm stuck on the logic. My first problem is: how can I determine which two points the object is between? Given that the points can be placed at different intervals along the spline, two points in front of or behind the object could both be closer to it. The other problem is figuring out the proportion between the two points it's between, i.e. if there is a point a at (0,0,0) and a point b at (1,0,0), and the object is at (0.5,0,0), then the result should be 0.5 (as it is equidistant from a and b). That's a simple example, but what if the object is at (0.5,3,0), for example?
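    One common approach to the proportion question (sketched here in Python rather than C#, with the sample points from the question) is to project the object's position onto the segment between two cached points; the clamped parameter t is exactly the 0-to-1 proportion asked about, and comparing each candidate segment's t and projected distance is also one way to decide which pair of cached points the object is between:

        def project_onto_segment(p, a, b):
            """Return (t, closest) where t in [0, 1] says how far along segment a->b
            the point p projects, and closest is the projected point itself."""
            ab = [b[i] - a[i] for i in range(3)]
            ap = [p[i] - a[i] for i in range(3)]
            denom = sum(c * c for c in ab)                 # |ab|^2
            t = 0.0 if denom == 0 else sum(ap[i] * ab[i] for i in range(3)) / denom
            t = max(0.0, min(1.0, t))                      # clamp to the segment
            closest = [a[i] + t * ab[i] for i in range(3)]
            return t, closest

        # The worked example from the question: both calls give t = 0.5, because the
        # sideways offset (0, 3, 0) is perpendicular to the segment and so does not matter.
        print(project_onto_segment((0.5, 0, 0), (0, 0, 0), (1, 0, 0)))
        print(project_onto_segment((0.5, 3, 0), (0, 0, 0), (1, 0, 0)))

    Interpolating the cached Forward and Alpha values with the same t then gives the estimated tangent and spline position.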

    Read the article

  • How to manage and estimate unstructured requirements received from customers

    - by user20358
    A lot of the time I receive a software system's requirements from our customers in a very unstructured format. It is usually a bunch of "product development" guys on the customer's side who come up with these "proposed solutions" to the business problems they have. While they are the experts in the business domain, a lot of the time they don't get the solutions right. This results in:
    multiple versions of the same requirement
    two requirements getting mixed up into one
    a few versions later, the requirements which were combined getting separated out again, each taking with it some of the new additions
    How do you work with such requirements as they come in and sort them out into proper use cases before development begins? What tools can we use to track a particular requirement's history, from the first time it was conceived until the time it crystallizes into a proper use case? Estimating work against requirements received in such a fashion is a nightmare, which ends up in mistakes in understanding the requirement correctly and in estimating the effort against it. Any tips, tools or tricks to make this activity more manageable? I'm just trying to get some insight from someone more experienced than I am in requirements management and effort estimation.

    Read the article

  • Compressing/compacting messages over websocket on Node.js

    - by icelava
    We have a websocket implementation (Node.js/Sock.js) that exchanges data as JSON strings. As our use cases grow, so has the size of the data transmitted across the wire. The websocket protocol does not natively offer any compression feature, so in order to reduce the size of our messages we'd have to do something about the serialisation ourselves. There appear to be a variety of LZW implementations in JavaScript, some of which confuse me as to whether they are suitable only for in-browser use or also for transmission across the wire, due to my lack of understanding of low-level encodings. More importantly, all of them seem to incur a noticeable performance drag when JavaScript is doing the compression/decompression work, which is not desirable for mobile devices. Looking instead at other forms of compact serialisation: MessagePack does not appear to have any active support in JavaScript itself; BSON does not have a JavaScript implementation; and an alternative BISON project that I tested does not deserialise everything back to its original values (large numbers), and it does not look like any further development will happen there either. What other options have people explored for Node.js?

    Read the article
