Search Results

Search found 6682 results on 268 pages for 'edge cases'.


  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:

      • Duplication & enlargement of the model with flipped normals (not an option for me)
      • Sobel filter / fragment shader approaches to edge detection
      • Stencil buffer approaches to edge detection
      • Geometry (or vertex) shader approaches that calculate face and edge normals

    Am I correct in assuming that the geometry-centric approach gives the greatest amount of control over lighting and line thickness, as well as, e.g., for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texture-map-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do? What I want: varying line thickness, with intrusive lines inside the silhouette... What I don't want...
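    For orientation, the heart of the screen-space route is only a few lines. A minimal illustration in plain Java, operating on a depth buffer (in a real renderer this logic would live in a fragment shader sampling depth or normals; the threshold is a made-up tuning value, not something from the question):

      // Sketch of Sobel edge detection over a depth buffer. Plain Java
      // purely to show the math; in practice this runs per-fragment.
      class SobelInk {
          static boolean[][] inkMask(float[][] depth, float threshold) {
              int h = depth.length, w = depth[0].length;
              boolean[][] ink = new boolean[h][w];
              for (int y = 1; y < h - 1; y++) {
                  for (int x = 1; x < w - 1; x++) {
                      // Horizontal and vertical Sobel gradients of the depth field
                      float gx = -depth[y-1][x-1] - 2*depth[y][x-1] - depth[y+1][x-1]
                               +  depth[y-1][x+1] + 2*depth[y][x+1] + depth[y+1][x+1];
                      float gy = -depth[y-1][x-1] - 2*depth[y-1][x] - depth[y-1][x+1]
                               +  depth[y+1][x-1] + 2*depth[y+1][x] + depth[y+1][x+1];
                      // A strong depth discontinuity marks a silhouette or interior crease
                      ink[y][x] = Math.sqrt(gx * gx + gy * gy) > threshold;
                  }
              }
              return ink;
          }
      }

    Because this keys off discontinuities in depth (or normals) rather than the mesh outline, it does ink interior silhouettes such as a hill against a plain; on the other hand, line width is tied to the kernel size, which tends to give less direct control than the geometry-centric approach.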

    Read the article

  • Limiting the speed of the mouse cursor

    - by idlewire
    I am working on a simple game where you can drag objects around with the mouse cursor. As I drag an object around quickly, I notice some juddering, which seems to be due to the fact that I can move the mouse cursor faster than the game's update/draw. So, although I maintain the offset from where the player initially clicked on the object, the mouse's position relative to the object shifts around slightly before settling when I move the object very quickly. The only way I have found to get smooth, exact 1:1 movement is to set both IsFixedTimeStep and SynchronizeWithVerticalRetrace to false. However, I'd rather not have to do that. I have also tried making a custom mouse cursor: hiding the real mouse, taking the real mouse delta and clamping it to a maximum speed (a sketch of this idea follows below). Here is the problem: In windowed mode, the "real" mouse cursor moves off the window while the custom mouse cursor (since its movement is being scaled) is still somewhere inside the game window. This becomes bizarre and is obviously not desired, as clicking at this point means clicking on things outside the game window. Is there any way to accomplish this in windowed mode? In fullscreen mode, the "real" mouse cursor is bounded by the edges of the screen, so I get to a point where there is no more mouse delta, yet my custom cursor is still somewhere in the middle of the screen and hence can't move further in that direction. If I clamp it to the edge of the screen when the real cursor is at the edge, then I get an abrupt jump to the edge of the screen, which isn't desired either. Any help would be appreciated. I'd like to be able to limit the speed of the mouse, but I would also appreciate help with the first issue (the non-smooth relative offset between mouse cursor movement and object movement).
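    One common way around both edge problems is to warp the real cursor back to the screen centre every frame and integrate the clamped deltas into a purely virtual cursor. A minimal sketch of that idea in plain Java (all names and the maxSpeed value are hypothetical; in XNA the read/warp would be Mouse.GetState and Mouse.SetPosition):

      // Sketch: integrate clamped mouse deltas into a custom cursor.
      // The caller warps the real OS cursor back to the screen centre
      // after each update, so its delta never saturates at an edge.
      class ClampedCursor {
          float cursorX, cursorY;        // custom (drawn) cursor position
          final int screenW, screenH;
          final float maxSpeed;          // hypothetical cap, in pixels per frame

          ClampedCursor(int screenW, int screenH, float maxSpeed) {
              this.screenW = screenW;
              this.screenH = screenH;
              this.maxSpeed = maxSpeed;
              this.cursorX = screenW / 2f;
              this.cursorY = screenH / 2f;
          }

          // realX/realY: OS cursor position read this frame; last frame the
          // OS cursor was warped back to the screen centre.
          void update(int realX, int realY) {
              float dx = realX - screenW / 2f;   // delta since last warp
              float dy = realY - screenH / 2f;
              float len = (float) Math.hypot(dx, dy);
              if (len > maxSpeed) {              // clamp magnitude, keep direction
                  dx *= maxSpeed / len;
                  dy *= maxSpeed / len;
              }
              cursorX = Math.max(0, Math.min(screenW - 1, cursorX + dx));
              cursorY = Math.max(0, Math.min(screenH - 1, cursorY + dy));
          }
      }

    Because the real cursor never strays from the centre, it can neither wander off the window in windowed mode nor pin against the screen edge in fullscreen; the custom cursor is the only one the player sees.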

    Read the article

  • How do you keep down your urge to learn many things [closed]

    - by devsundar
    One of the difficulties I have is lowering my urge to learn new things (languages, tools, frameworks, etc.). I know it's good to stay on the bleeding edge, but at the same time I want to learn things properly. I really see that I need to strike a balance between staying bleeding edge and knowing things properly. For example:

      • Before choosing Arch (desktop), Ubuntu (server) and Knoppix (portable) as favourite distributions, depending on the situation, I had tried virtually all popular Linux distributions. Name any popular Linux (Red Hat, Ubuntu, Arch, SUSE, Knoppix, Slax, Slackware) and I have tried it for some time. In fact I have spent a few years experimenting with operating systems.
      • Before choosing Python and JavaScript (Node.js), I tried all the languages I came across: Scala, Haskell, Erlang, Ruby, Python, Perl, Scheme.
      • The same applies to databases: all popular RDBMSs (Oracle, MySQL, Postgres, SQLite [favourite], etc.) and NoSQL stores (Mongo, Couch, Neo4j, etc.).

    Advantages I see:

      • We get an overall picture of the technologies/tools/languages, which is useful for selecting the right tool for the job.
      • We develop a taste and choose the one we like.

    Disadvantages:

      • I feel that I spend so much time on this, and I see a need to strike a balance.

    In summary: if I see a blog post on Hacker News about CoffeeScript, I will try it out irrespective of what I am currently learning (say, Haskell). I switch back to learning Haskell, then I see Dart and check it out. And this continues. Effectively I take more time to learn Haskell, but learn about other new stuff on the way. The question I have is: how do you strike a balance between staying bleeding edge and learning properly?

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?

    Remote IntraDoc Client (RIDC) is a Java API that leverages simple transport protocols like socket, HTTP and JAX-WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built on the same Map format (key and value pairs). The possibilities with RIDC are nearly endless, since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that needs WebCenter Content Services.
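    To make that concrete, a bare RIDC call outside any framework looks roughly like the sketch below (the connection URL, user and dDocName are placeholders):

      import oracle.stellent.ridc.IdcClient;
      import oracle.stellent.ridc.IdcClientManager;
      import oracle.stellent.ridc.IdcContext;
      import oracle.stellent.ridc.model.DataBinder;
      import oracle.stellent.ridc.protocol.ServiceResponse;

      public class RidcSketch {
          public static void main(String[] args) throws Exception {
              IdcClientManager manager = new IdcClientManager();
              // idc:// uses the socket protocol; http(s):// URLs work as well
              IdcClient client = manager.createClient("idc://contentserver.example.com:4444");
              IdcContext context = new IdcContext("sysadmin");       // placeholder user

              DataBinder request = client.createBinder();
              request.putLocal("IdcService", "DOC_INFO_BY_NAME");    // service to call
              request.putLocal("dDocName", "EXAMPLE_DOC_001");       // parameter (placeholder)

              ServiceResponse response = client.sendRequest(context, request);
              DataBinder result = response.getResponseAsBinder();    // same Map-style format
              System.out.println(result.getLocal("StatusMessage"));
          }
      }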
    WebCenter Portal and the example Accelerator RIDC adapter framework

    WebCenter Portal already integrates with and leverages WebCenter Content Services to enable use cases in the portal today, like Content Presenter and Document Library. However, the current use cases cover only a few of the scenarios the Content Server has to offer; in addition, it is not rare for customer requirements to call for steps and functionality that WebCenter Content provides but that are not part of the use cases shipped with WebCenter Portal. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million dollar question is: how can I leverage this infrastructure for my custom use cases? During its customer interactions, Oracle A-Team has produced an accelerator adapter framework that reuses the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), along with a comprehensive design pattern that minimizes the work involved in exposing new functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

    How do I get started?

    Through a few easy steps you will be on your way:

      1. Extract the zip file RIDCCommon.zip into the WebCenter Portal Application file structure (PortalApp)
      2. Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this adds the project as a member of the application
      3. Update the Portal project dependencies to include the new RIDCCommon project
      4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form)

    You should by this stage have a similar structure in your JDeveloper application:

      • Portal project
      • PortalWebAssets project
      • RIDCCommon project

    Since the API comes with some example operations that have already been exposed as DataControl actions, opening the Data Controls accordion should show them listed.

    How do I implement my own operation?

      1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation
      2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext)
      3. The best practice for handing object references to the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can force the implementing class to pass the right information; you can also overload the constructor with more or fewer parameters as required
      4. Implement the execute method. The work to do here is creating a new request binder and retrieving a response binder with the information in the request binder - in this case, for the dDocName whose DocInfo we want
      5. Process the response binder by extracting the information you need from the response and storing it in a simple POJO Java bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of GetDocInfoOperation, so we can return it from an access method
      6. Since the RIDCCommon API leverages the template pattern for its operations, you now add a method that gives access to the result after the execution of the operation. In the example below we added public SearchDataObject getDataObject(), which returns the SearchDataObject prepared in the execute method

    That is it - as you can see from the code below, you do not need more than 32 lines of very simple code:

      public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
          private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
          private String dDocName = null;
          private SearchDataObject sdo = null;

          public GetDocInfoOperation(String dDocName) {
              super();
              this.dDocName = dDocName;
          }

          public boolean execute(RIDCManager manager, IdcClient client,
                                 IdcContext userContext) throws Exception {
              DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
              dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);

              DataBinder responseData = getResponseBinder(dataBinder);
              processResult(responseData);
              return true;
          }

          private void processResult(DataBinder responseData) {
              DataResultSet rs = responseData.getResultSet("DOC_INFO");
              for (DataObject dobj : rs.getRows()) {
                  this.sdo = new SearchDataObject(dobj);
              }
              super.setMessage(responseData.getLocal(ATTR_MESSAGE));
          }

          public SearchDataObject getDataObject() {
              return this.sdo;
          }
      }

    How do I execute my operation?
    The previous section described how to create an operation, so by now you should be ready to execute it:

      1. Either add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or use a class of your own choice. Remember that RIDCManager is a very light object and can be created where needed
      2. Create a method signature like public SearchDataObject getDocInfo(String dDocName) throws Exception
      3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName)
      4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo)
      5. Return the result by accessing it from the executed operation: return docInfo.getDataObject()

      private RIDCManager rMgr = null;
      private String lastOperationMessage = null;

      public ContentServicesDC() {
          super();
          this.rMgr = new RIDCManager();
      }
      ....
      public SearchDataObject getDocInfo(String dDocName) throws Exception {
          GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
          boolean boolVal = rMgr.executeOperation(docInfo);
          lastOperationMessage = docInfo.getMessage();
          return docInfo.getDataObject();
      }

    Get the binaries!

    The enclosed code is an example that can be used as a reference on how to consume and leverage similar use cases; the user has to guarantee appropriate quality and support.

    Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip

    RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals" - and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I like to respond to his questions with this article. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer - even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but there are no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a Wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic: each "digit" can be taken at face value. No multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not - if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V - which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

      {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:

      1. Find all the factors
      2. Translate the factors found
      3. Compile the roman representation

    Translation is just a look-up.
    Finding, though, needs some calculation:

      1. Find the highest remaining factor fitting into the value
      2. Remember it and subtract it from the value
      3. Repeat with the remaining value and remaining factors

    Please note: This is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

      • Test the translation
      • Test the compilation
      • Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:

      1. First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
      2. Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
      3. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
      4. Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now - one by one. I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible - right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye - others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple: I don't need to test .NET framework functionality. But again: it serves the educational purpose. Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor - which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say: then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple - even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished - at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, together with the final production code; test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern - as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code - especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution - that's the leaves of the functional decomposition tree.
    Where things became fuzzy - because the design did not cover any more details, as with Find_factors() - I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

      • Encapsulation of the implementation details was not compromised. Private methods could naturally stay private. I did not need to make them internal or public just to be able to test them.
      • I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development - being a researcher, being an engineer, being a craftsman - are represented as different phases. First find out what there is. Then devise a solution. Then code the solution - manifest the solution in code. Writing tests first is a good practice. But it should not be taken as dogma, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable - and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
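    For readers who want to see the described solution as code: the conceptual approach above translates almost one-to-one. A minimal sketch in Java (the article's own implementation is in .NET and shown as screenshots in the original post, so this illustrates the algorithm rather than reproducing the author's code):

      // Sketch of the factor-finding approach described above: treat IV, IX,
      // etc. as "digits", walk the factors from highest to lowest, and keep
      // subtracting each factor while it still fits into the value.
      public class ArabicRomanConversions {
          private static final int[]    FACTORS = {1000, 900, 500, 400, 100, 90,
                                                   50, 40, 10, 9, 5, 4, 1};
          private static final String[] DIGITS  = {"M", "CM", "D", "CD", "C", "XC",
                                                   "L", "XL", "X", "IX", "V", "IV", "I"};

          public static String toRoman(int arabic) {
              StringBuilder roman = new StringBuilder();
              for (int i = 0; i < FACTORS.length; i++) {
                  while (arabic >= FACTORS[i]) {   // find the highest remaining factor...
                      roman.append(DIGITS[i]);     // ...translate it...
                      arabic -= FACTORS[i];        // ...and subtract it from the value
                  }
              }
              return roman.toString();             // e.g. 1954 -> "MCMLIV"
          }
      }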

    Read the article

  • Is there such a thing as too much experience?

    - by sunpech
    For modern software developers in today's world, is there such a thing as having too much experience with a certain technology or programming language? To a recruiter, interviewer, or hiring company: could there be cases where a particular candidate has so much experience in a certain area or technology that it works against the candidate being hired? I'm not talking about cases where a senior developer is applying for an entry-level developer position and simply has a lot of experience in that sense. Nor am I talking about cases where a candidate is outright lying (e.g. claiming 20+ years of experience with Ruby on Rails). I've overheard this in conversations between hiring managers and developers during happy hours, yet I'm not quite sure I fully understand what they mean.

    Read the article

  • Do we need use case levels or not?

    - by Gabriel Šcerbák
    I guess no one would argue for decomposing use cases - that is just wrong. However, sometimes it is necessary to specify use cases at a lower, more technical level, like for example authentication and authorization, which give the actor value but are further from his business needs. Cockburn argues for levels when needed and explains how to move use cases from/to different levels and how to determine the right level. On the other hand, e.g. Bittner argues against use case levels, although he uses subflows and, at the end of his book, mentions that at least two levels are needed most of the time. My question is: do you find use case levels necessary, helpful, or unwanted? What are the reasons? Am I missing some important arguments?

    Read the article

  • Behavior-Driven Development / Use case diagram

    - by Mik378
    Given the growth of Behavior-Driven Development and the acceptance testing it imposes, are use case diagrams useful, or do they lead to "over-documentation"? Acceptance tests represent specifications by example, much as use cases do, though in a more generic manner (cases, not scenarios) - so aren't the two too similar to maintain both at the start of a newly created project? From this link, one opinion is:

        Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are a more efficient way to work when compared to UseCases + AcceptanceTests.

    What do you think about this?

    Read the article

  • Dynamic tests with mstest and T4

    - by Victor Hurdugaci
    If you have used both MSTest and NUnit, you might be aware that the former doesn't support dynamic, data-driven test cases. For example, the following scenario cannot be achieved with out-of-the-box MSTest: given a dataset, create distinct test cases for each entry in it, using a predefined generic test case. The best result that can be achieved with MSTest is a single test case that iterates through the dataset. There is one disadvantage: if the test fails for one entry in the dataset, the whole test case fails. So, in order to overcome the previously mentioned limitation, I decided to create a text template that will generate the test cases for me. As an example, I will write some tests for an integer multiplication function that has 2 bugs in it: Read more >> [Cross post from victorhurdugaci.com]
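    For contrast, this is the shape of the data-driven style the author is after, sketched with JUnit 5 parameterized tests rather than MSTest (the multiply function and the data rows are invented for illustration). Each dataset row is reported as a distinct test case, so one failing entry does not fail the rest:

      import static org.junit.jupiter.api.Assertions.assertEquals;

      import org.junit.jupiter.params.ParameterizedTest;
      import org.junit.jupiter.params.provider.CsvSource;

      class MultiplicationTest {
          // Hypothetical function under test, standing in for the article's
          // integer multiplication function.
          static int multiply(int a, int b) {
              return a * b;
          }

          // Each CSV row becomes its own test case in the report - the
          // behaviour the author says out-of-the-box MSTest lacks.
          @ParameterizedTest
          @CsvSource({
              "2, 3, 6",
              "0, 5, 0",
              "-4, 2, -8"
          })
          void multipliesCorrectly(int a, int b, int expected) {
              assertEquals(expected, multiply(a, b));
          }
      }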

    Read the article

  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on a problem where my machine started to drop packets, with no sign of any system load or high interrupt usage, after an upgrade to Ubuntu 12.04. My server is a network monitoring sensor running Ubuntu LTS 12.04; it passively collects packets from 5 interfaces, doing network intrusion detection type work. Before the upgrade I managed to collect 200+ GB of packets a day while writing them to disk, with around 0% packet loss (depending on the day), with the help of CPU affinity and NIC-IRQ-to-CPU bindings. Now I lose a great deal of packets with none of my applications running, and at a very low PPS rate that a modern workstation NIC would have no trouble with.

    Specs:

      • x64 Xeon, 4 cores, 3.2 GHz
      • 16 GB RAM
      • NICs: 5 Intel Pro NICs using the e1000 driver (NAPI). [1] eth0 and eth1 are integrated NICs (on the motherboard); there are 2 other PCI-X network cards, each with 2 Ethernet ports. 3 of the interfaces are running at Gigabit Ethernet; the others are not, because they're attached to hubs.
      • Server specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm

      uptime
       17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05
      # uname -a
      Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on.

      # lspci -t -vv
      -[0000:00]-+-00.0  Intel Corporation E7520 Memory Controller Hub
                 +-02.0-[01-03]--+-00.0-[02]----0e.0  Dell PowerEdge Expandable RAID controller 4
                 |               \-00.2-[03]--
                 +-04.0-[04]--
                 +-05.0-[05-07]--+-00.0-[06]----07.0  Intel Corporation 82541GI Gigabit Ethernet Controller
                 |               \-00.2-[07]----08.0  Intel Corporation 82541GI Gigabit Ethernet Controller
                 +-06.0-[08-0a]--+-00.0-[09]--+-04.0  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                 |               |            \-04.1  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                 |               \-00.2-[0a]--+-02.0  Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V
                 |                            +-03.0  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                 |                            \-03.1  Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                 +-1d.0  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1
                 +-1d.1  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2
                 +-1d.2  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3
                 +-1d.7  Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller
                 +-1e.0-[0b]----0d.0  Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE]
                 +-1f.0  Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge
                 \-1f.1  Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller

    I believe neither the NICs nor the NIC driver is dropping the packets, because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up, that is where the drops would appear. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4. I tried all these sysctl values; I'm currently using the uncommented ones.

      # cat /etc/sysctl.conf
      # high
      net.core.netdev_max_backlog = 3000000
      net.core.rmem_max = 16000000
      net.core.rmem_default = 8000000
      # defaults
      #net.core.netdev_max_backlog = 1000
      #net.core.rmem_max = 131071
      #net.core.rmem_default = 163480
      # moderate
      #net.core.netdev_max_backlog = 10000
      #net.core.rmem_max = 33554432
      #net.core.rmem_default = 33554432

    Here's an example of an interface stats report with ethtool.
    They are all the same, nothing is out of the ordinary (I think), so I'm only going to show one:

      # ethtool -S eth2
      NIC statistics:
           rx_packets: 7498
           tx_packets: 0
           rx_bytes: 2722585
           tx_bytes: 0
           rx_broadcast: 327
           tx_broadcast: 0
           rx_multicast: 1504
           tx_multicast: 0
           rx_errors: 0
           tx_errors: 0
           tx_dropped: 0
           multicast: 1504
           collisions: 0
           rx_length_errors: 0
           rx_over_errors: 0
           rx_crc_errors: 0
           rx_frame_errors: 0
           rx_no_buffer_count: 0
           rx_missed_errors: 0
           tx_aborted_errors: 0
           tx_carrier_errors: 0
           tx_fifo_errors: 0
           tx_heartbeat_errors: 0
           tx_window_errors: 0
           tx_abort_late_coll: 0
           tx_deferred_ok: 0
           tx_single_coll_ok: 0
           tx_multi_coll_ok: 0
           tx_timeout_count: 0
           tx_restart_queue: 0
           rx_long_length_errors: 0
           rx_short_length_errors: 0
           rx_align_errors: 0
           tx_tcp_seg_good: 0
           tx_tcp_seg_failed: 0
           rx_flow_control_xon: 0
           rx_flow_control_xoff: 0
           tx_flow_control_xon: 0
           tx_flow_control_xoff: 0
           rx_long_byte_count: 2722585
           rx_csum_offload_good: 0
           rx_csum_offload_errors: 0
           alloc_rx_buff_failed: 0
           tx_smbus: 0
           rx_smbus: 0
           dropped_smbus: 01

      # ifconfig
      eth0      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8c
                UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                RX packets:373348 errors:16 dropped:95 overruns:0 frame:16
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:356830572 (356.8 MB)  TX bytes:0 (0.0 B)

      eth1      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8d
                UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                RX packets:13616 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:8690528 (8.6 MB)  TX bytes:0 (0.0 B)

      eth2      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6a
                UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                RX packets:7750 errors:0 dropped:471 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:2780935 (2.7 MB)  TX bytes:0 (0.0 B)

      eth3      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6b
                UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                RX packets:5112 errors:0 dropped:206 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:639472 (639.4 KB)  TX bytes:0 (0.0 B)

      eth4      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6c
                UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                RX packets:961467 errors:0 dropped:935 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:958561305 (958.5 MB)  TX bytes:0 (0.0 B)

      eth5      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6d
                inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:4264 errors:0 dropped:16 overruns:0 frame:0
                TX packets:699 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:572228 (572.2 KB)  TX bytes:124456 (124.4 KB)

    I tried the defaults, then started to play around with settings. I wasn't using any flow control, and I increased the RxDescriptors count to 4096 before the upgrade as well, without any problems.

      # cat /etc/modprobe.d/e1000.conf
      options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16

    Here's my network configuration file. I turned off checksumming and various offloading mechanisms, along with setting CPU affinity, with heavy-use interfaces getting an entire CPU and light-use interfaces sharing a CPU. I used these settings prior to the upgrade without problems.
      # cat /etc/network/interfaces
      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      iface eth0 inet manual
      pre-up /sbin/ethtool -G eth0 rx 4096 tx 0
      pre-up /sbin/ethtool -K eth0 gro off gso off rx off
      pre-up /sbin/ethtool -A eth0 rx off autoneg off
      up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
      post-up echo "4" > /proc/irq/48/smp_affinity
      down ifconfig eth0 down
      post-down /sbin/ethtool -G eth0 rx 256 tx 256
      post-down /sbin/ethtool -K eth0 gro on gso on rx on
      post-down /sbin/ethtool -A eth0 rx on autoneg on

      auto eth1
      iface eth1 inet manual
      pre-up /sbin/ethtool -G eth1 rx 4096 tx 0
      pre-up /sbin/ethtool -K eth1 gro off gso off rx off
      pre-up /sbin/ethtool -A eth1 rx off autoneg off
      up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
      post-up echo "4" > /proc/irq/49/smp_affinity
      down ifconfig eth1 down
      post-down /sbin/ethtool -G eth1 rx 256 tx 256
      post-down /sbin/ethtool -K eth1 gro on gso on rx on
      post-down /sbin/ethtool -A eth1 rx on autoneg on

      auto eth2
      iface eth2 inet manual
      pre-up /sbin/ethtool -G eth2 rx 4096 tx 0
      pre-up /sbin/ethtool -K eth2 gro off gso off rx off
      pre-up /sbin/ethtool -A eth2 rx off autoneg off
      up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
      post-up echo "1" > /proc/irq/82/smp_affinity
      down ifconfig eth2 down
      post-down /sbin/ethtool -G eth2 rx 256 tx 256
      post-down /sbin/ethtool -K eth2 gro on gso on rx on
      post-down /sbin/ethtool -A eth2 rx on autoneg on

      auto eth3
      iface eth3 inet manual
      pre-up /sbin/ethtool -G eth3 rx 4096 tx 0
      pre-up /sbin/ethtool -K eth3 gro off gso off rx off
      pre-up /sbin/ethtool -A eth3 rx off autoneg off
      up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
      post-up echo "2" > /proc/irq/83/smp_affinity
      down ifconfig eth3 down
      post-down /sbin/ethtool -G eth3 rx 256 tx 256
      post-down /sbin/ethtool -K eth3 gro on gso on rx on
      post-down /sbin/ethtool -A eth3 rx on autoneg on

      auto eth4
      iface eth4 inet manual
      pre-up /sbin/ethtool -G eth4 rx 4096 tx 0
      pre-up /sbin/ethtool -K eth4 gro off gso off rx off
      pre-up /sbin/ethtool -A eth4 rx off autoneg off
      up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
      post-up echo "4" > /proc/irq/77/smp_affinity
      down ifconfig eth4 down
      post-down /sbin/ethtool -G eth4 rx 256 tx 256
      post-down /sbin/ethtool -K eth4 gro on gso on rx on
      post-down /sbin/ethtool -A eth4 rx on autoneg on

      auto eth5
      iface eth5 inet static
      pre-up /etc/fw.conf
      address 192.168.1.1
      netmask 255.255.255.0
      broadcast 192.168.1.255
      gateway 192.168.1.1
      dns-nameservers 192.168.1.2 192.168.1.3
      up ifconfig eth5 up
      post-up echo "8" > /proc/irq/77/smp_affinity
      down ifconfig eth5 down

    Here are a few examples of packet drops; I ran one command after another, probably totaling 3 or 4 seconds. You can see increases in the drops between the 1st and 3rd runs. This was a non-busy time, with very little traffic.

      # awk '{ print $1,$5 }' /proc/net/dev
      Inter-| face drop
      eth3: 225
      lo: 0
      eth2: 505
      eth1: 0
      eth5: 17
      eth0: 105
      eth4: 1034

      # awk '{ print $1,$5 }' /proc/net/dev
      Inter-| face drop
      eth3: 225
      lo: 0
      eth2: 507
      eth1: 0
      eth5: 17
      eth0: 105
      eth4: 1034

      # awk '{ print $1,$5 }' /proc/net/dev
      Inter-| face drop
      eth3: 227
      lo: 0
      eth2: 512
      eth1: 0
      eth5: 17
      eth0: 105
      eth4: 1039

    I tried the pci=noacpi options. With and without, it's the same.
    This is what my interrupt stats looked like before the upgrade. Afterwards, with ACPI on PCI, it showed multiple NICs bound to one interrupt and shared with other devices such as USB drives, which I didn't like, so I think I'm going to keep ACPI off, as it's easier to designate sole-purpose interrupts. Is there any advantage to using the default, i.e. ACPI with PCI?

      # cat /etc/default/grub | grep CMD_LINE
      GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi"
      GRUB_CMDLINE_LINUX=""

      # cat /proc/interrupts
                CPU0     CPU1     CPU2     CPU3
        0:        45        0        0       16   IO-APIC-edge      timer
        1:         1        0        0     7936   IO-APIC-edge      i8042
        2:         0        0        0        0   XT-PIC-XT-PIC     cascade
        6:         0        0        0        3   IO-APIC-edge      floppy
        8:         0        0        0        1   IO-APIC-edge      rtc0
        9:         0        0        0        0   IO-APIC-edge      acpi
       12:         0        0        0     1809   IO-APIC-edge      i8042
       14:         1        0        0     4498   IO-APIC-edge      ata_piix
       15:         0        0        0        0   IO-APIC-edge      ata_piix
       16:         0        0        0        0   IO-APIC-fasteoi   uhci_hcd:usb2
       18:         0        0        0     1350   IO-APIC-fasteoi   uhci_hcd:usb4, radeon
       19:         0        0        0        0   IO-APIC-fasteoi   uhci_hcd:usb3
       23:         0        0        0     4099   IO-APIC-fasteoi   ehci_hcd:usb1
       38:         0        0        0    61963   IO-APIC-fasteoi   megaraid
       48:         0        0  1002319        4   IO-APIC-fasteoi   eth0
       49:         0        0    38772        3   IO-APIC-fasteoi   eth1
       77:         0        0   130076   432159   IO-APIC-fasteoi   eth4
       78:         0        0        0    23917   IO-APIC-fasteoi   eth5
       82:   1329033        0        0        4   IO-APIC-fasteoi   eth2
       83:         0  4886525        0        6   IO-APIC-fasteoi   eth3
      NMI:         5        6        4        5   Non-maskable interrupts
      LOC:     61409    57076    64257   114764   Local timer interrupts
      SPU:         0        0        0        0   Spurious interrupts
      IWI:         0        0        0        0   IRQ work interrupts
      RES:     17956    25333    13436    14789   Rescheduling interrupts
      CAL:     22436      607      539      478   Function call interrupts
      TLB:      1525     1458     4600     4151   TLB shootdowns
      TRM:         0        0        0        0   Thermal event interrupts
      THR:         0        0        0        0   Threshold APIC interrupts
      MCE:         0        0        0        0   Machine check exceptions
      MCP:        16       16       16       16   Machine check polls
      ERR:         0
      MIS:         0

    Here's sample output of vmstat, showing the system. It's a barebones system right now.

      root@nms:~# vmstat -S m 1
      procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
       r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
       0  0      0  14992    192   1029    0    0    56     2  419   29  1  0 99  0
       0  0      0  14992    192   1029    0    0     0     0  922   27  0  0 100 0
       0  0      0  14991    192   1029    0    0     0    36  763   50  0  0 100 0
       0  0      0  14991    192   1029    0    0     0     0  646   35  0  0 100 0
       0  0      0  14991    192   1029    0    0     0     0  722   54  0  0 100 0
       0  0      0  14991    192   1029    0    0     0     0  793   27  0  0 100 0
      ^C

    Here's the dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X, with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66MHz. Wouldn't they all drop to PCI:66MHz? If your integrated NICs are PCI, as labeled below (eth0, eth1), then wouldn't all devices on your bus drop down to that slower bus speed? If not, I still don't know why only one of my NICs (each card has two Ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds, or is it showing that it's merely capable?

      # dmesg | grep e1000
      [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
      [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation.
      [ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48
      [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096
      [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled
      [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled
      [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c
      [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection
      [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49
      [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096
      [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled
      [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled
      [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d
      [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection
      [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82
      [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096
      [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled
      [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled
      [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a
      [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection
      [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83
      [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096
      [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled
      [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled
      [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b
      [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection
      [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77
      [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096
      [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled
      [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled
      [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c
      [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection
      [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78
      [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d
      [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection
      [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
      [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
      [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
      [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
      [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
      [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

    At first I thought it was the NIC drivers, but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated, as I'm struggling with this. If you need more information, just ask. Thanks!

    [1] http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8
    [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me whether I have understood them accurately enough or not? Thank you.

      1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails in cases like this (for example, a graph with negative edge weights, where a cheaper path through a negative edge may only be discovered after its endpoint has already been finalized).
      2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles. (This is the most important one. I mean, this is the one I'm least sure about.)
      3. Bellman-Ford is used like Dijkstra's, when there is only one source. This can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right?

    If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):

    Bellman-Ford:

      procedure BellmanFord(list vertices, list edges, vertex source)
          // This implementation takes in a graph, represented as lists of vertices
          // and edges, and modifies the vertices so that their distance and
          // predecessor attributes store the shortest paths.

          // Step 1: initialize graph
          for each vertex v in vertices:
              if v is source then v.distance := 0
              else v.distance := infinity
              v.predecessor := null

          // Step 2: relax edges repeatedly
          for i from 1 to size(vertices)-1:
              for each edge uv in edges:   // uv is the edge from u to v
                  u := uv.source
                  v := uv.destination
                  if u.distance + uv.weight < v.distance:
                      v.distance := u.distance + uv.weight
                      v.predecessor := u

          // Step 3: check for negative-weight cycles
          for each edge uv in edges:
              u := uv.source
              v := uv.destination
              if u.distance + uv.weight < v.distance:
                  error "Graph contains a negative-weight cycle"

    Dijkstra:

      function Dijkstra(Graph, source):
          for each vertex v in Graph:            // Initializations
              dist[v] := infinity                // Unknown distance function from source to v
              previous[v] := undefined           // Previous node in optimal path from source

          dist[source] := 0                      // Distance from source to source
          Q := the set of all nodes in Graph     // All nodes in the graph are unoptimized - thus are in Q

          while Q is not empty:                  // The main loop
              u := vertex in Q with smallest distance in dist[]   // Start node in first case
              if dist[u] = infinity:
                  break                          // all remaining vertices are inaccessible from source

              remove u from Q
              for each neighbor v of u:          // where v has not yet been removed from Q
                  alt := dist[u] + dist_between(u, v)
                  if alt < dist[v]:              // Relax (u,v,a)
                      dist[v] := alt
                      previous[v] := u
                      decrease-key v in Q        // Reorder v in the Queue
          return dist

    Floyd-Warshall:

      /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
         (infinity if there is none).
         Also assume that n is the number of vertices and edgeCost(i,i) = 0. */

      int path[][];
      /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
         from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to
         edgeCost(i,j). */

      procedure FloydWarshall()
          for k := 1 to n
              for i := 1 to n
                  for j := 1 to n
                      path[i][j] = min( path[i][j], path[i][k]+path[k][j] );

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly.

    System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10). An icon on monitor 2 is present but elongated, almost an artifact: other icons from the primary screen are absent, but this one icon is there on the second monitor. Icons can be selected on both screens. Painting is weird on monitor 2. The Launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking on Text Editor in the screen 2 launcher will launch gedit there. If I drag it, it leaves a trail of after-images, as if repainting is failing. If I move the cursor along the launcher, help tags like "LibreOffice Writer" appear but stay on screen unless I drag the active gedit window over them. Then part of the help bubbles are overwritten, leaving after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that - maybe it's a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, then Apply and confirm, both monitors work normally (although the right monitor lets the cursor slide off its right edge onto the left edge of monitor 1 - which sounds correct). Painting of windows does not leave an after-image. However, the success is only temporary. The setting survives a reboot, but painting on the left monitor, now monitor 2, then replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner - which probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" move in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. The description bubbles that pop up are also overwritten properly on both screens, so none of those artifacts either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this came after positing that the motherboard VGA and HDMI ports could have been the issue. So I installed an e-GeForce 7600 GT Dual DVI card (I know the web thinks it is VGA, not DVI, but the connectors are DVI). No change to the weird behavior: the good parts continue to work, the weirdness persists, and swapping monitor positions seems to cure the issue. So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings - Displays makes it work, just not across a boot, I suspect so.

    Read the article

  • php script google maps points from mysql (google example)

    - by user1637477
    I have recently added style information to my maps script, and it stopped working. Have I done something wrong? Guess you can tell I'm very new to this. Any help appreciated.

      <!DOCTYPE html>
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
      <meta http-equiv="content-type" content="text/html; charset=utf-8">
      <title>Google Maps AJAX + mySQL/PHP Example</title>
      <script src="http://maps.google.com/maps/api/js?sensor=false" type="text/javascript"></script>
      <script type="text/javascript">
      //<![CDATA[
      ( ******************* I INSERTED HERE )
      var styles = [
        { stylers: [ { hue: "#00ffe6" }, { saturation: -20 } ] },
        { featureType: "road", elementType: "geometry",
          stylers: [ { lightness: 100 }, { visibility: "simplified" } ] },
        { featureType: "road", elementType: "labels",
          stylers: [ { visibility: "off" } ] }
      ];
      ( ******************* THROUGH TO HERE )
      map.setOptions({styles: styles});

      var customIcons = {
        restaurant: {
          icon: 'http://labs.google.com/ridefinder/images/mm_20_blue.png',
          shadow: 'http://labs.google.com/ridefinder/images/mm_20_shadow.png'
        },
        bar: {
          icon: 'http://labs.google.com/ridefinder/images/mm_20_red.png',
          shadow: 'http://labs.google.com/ridefinder/images/mm_20_shadow.png'
        }
      };

      function load() {
        var map = new google.maps.Map(document.getElementById("map"), {
          center: new google.maps.LatLng(-37.7735, 175.1418),
          zoom: 10,
          mapTypeId: 'roadmap'
        });
        var infoWindow = new google.maps.InfoWindow;
        // Change this depending on the name of your PHP file
        downloadUrl("mywebsite - no i did this just a minute ago", function(data) {
          var xml = data.responseXML;
          var markers = xml.documentElement.getElementsByTagName("marker");
          for (var i = 0; i < markers.length; i++) {
            var name = markers[i].getAttribute("name");
            var address = markers[i].getAttribute("address");
            var type = markers[i].getAttribute("type");
            var point = new google.maps.LatLng(
                parseFloat(markers[i].getAttribute("lat")),
                parseFloat(markers[i].getAttribute("lng")));
            var html = "<b>" + name + "</b> <br/>" + address;
            var icon = customIcons[type] || {};
            var marker = new google.maps.Marker({
              map: map,
              position: point,
              icon: icon.icon,
              shadow: icon.shadow
            });
            bindInfoWindow(marker, map, infoWindow, html);
          }
        });
      }

      function bindInfoWindow(marker, map, infoWindow, html) {
        google.maps.event.addListener(marker, 'click', function() {
          infoWindow.setContent(html);
          infoWindow.open(map, marker);
        });
      }

      function downloadUrl(url, callback) {
        var request = window.ActiveXObject ?
            new ActiveXObject('Microsoft.XMLHTTP') :
            new XMLHttpRequest;
        request.onreadystatechange = function() {
          if (request.readyState == 4) {
            request.onreadystatechange = doNothing;
            callback(request, request.status);
          }
        };
        request.open('GET', url, true);
        request.send(null);
      }

      function doNothing() {}
      //]]>
      </script>
      <!--Adobe Edge Runtime-->
      <script type="text/javascript" charset="utf-8" src="map500x1000_edgePreload.js"></script>
      <style>
        .edgeLoad-EDGE-12956064 { visibility:hidden; }
      </style>
      <!--Adobe Edge Runtime End-->
      </head>
      <body onload="load()">
      <div id="map" style="width: 1000px; height: 1000px;" class="edgeLoad-EDGE-12956064"></div>
      </body>
      </html>

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA clusters.

    Test cases for each component - Oracle Application Server 10G

    General Application Server test cases. This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.

    Test Case 1: Check if you can see AS instances in the console.
    Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: All the instances in the AS cluster should be listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall, then log on to the AS Console to see if the instance is listed as part of the cluster.

    Test Case 2: Check to see if you can start/stop each component.
    Implementation: Check each OC4J component on each AS instance. Start each and every component through the AS Console to see if they will start and stop. Do that for each and every instance.
    Result: Each component should start and stop through the AS Console. You can also verify that a component started by checking opmnctl status after logging onto each box associated with the cluster.

    Test Case 3: Add/modify a datasource entry through the AS Console on a remote AS instance (not on the instance where EM is physically running).
    Implementation: Pick an OC4J instance. Create a new data-source through the AS Console. Modify an existing data-source or connection pool (optional).
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS Console has successfully updated a remote file and MBeans are communicating correctly.

    Test Case 4: Start and stop AS instances using the opmnctl @cluster command.
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check for start and stop statuses.

    HTTP server test cases. This section deals with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL Console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole.

    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL Console and see the request routed to the second HTTP server in the cluster.
    Implementation: Access the BPELConsole. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user; the entry would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down this HTTP server using opmnctl stopproc - this is a graceful shutdown. Access the BPELConsole again (please note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for steps). Check access_log again for the new timestamp and URL.
    Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.

    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers so that the LBR routes the request to the surviving HTTP node - this simulates a network failure.

    Test Case 3: Test Case 1 simulated a graceful shutdown; in this case we will simulate an Apache crash.
    Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux, use kill -9 <PID> to kill the HTTP server, then access the BPEL Console.
    Result: As you shut down the HTTP server, OPMN will restart it. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL Console.

    BPEL test cases. This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.

    Test Case 1: Verify that jGroups has initialized correctly. There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly. Check the opmn log for the BPEL container on all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups-related information during startup and steady-state operation.
    Soon after startup you should find log entries for UDP or TCP.

    Example jGroups log entries for UDP:

        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: sockets will use interface 144.25.142.172
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: socket information:
        local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
        sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
        mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
        mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:1127
        -------------------------------------------------------

    Example jGroups log entries for TCP:

        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.172:7900
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:7900
        -------------------------------------------------------

    In the log below, "socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the logfile above by IP address and port.

        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.173:7901
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.173:7901
        -------------------------------------------------------
        Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
        INFO: created socket to 144.25.142.172:7900

    Result: By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and the jGroups channel is communicating.

    Test Case 2: Test connectivity between BPEL nodes.
    Implementation: Test connections between different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
    Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.

    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node.
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance, using ant, JDeveloper, or the BPEL Console. Then log on to the second BPEL Console to check whether the BPEL suitcase has been deployed.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment is "deployed" to each node, while only deploying to a single BPEL instance in the BPEL cluster.

    Test Case 4: Test whether the BPEL server fails over and all asynch processes are picked up by the secondary BPEL instance.
    Implementation: Deploy two asynch processes: a ParentAsynch process which calls a ChildAsynchProcess with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynchProcess that loops or sleeps or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server, call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one; please wait until this BPEL instance shuts down fully before starting the second one. Log on to the BPEL Console and see that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instances will fail over to the secondary node and complete the flow.

    ESB test cases. This section covers the use cases involved with testing an ESB cluster. For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.
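    A hedged cheat sheet of the commands these test cases lean on (10g syntax as described above; verify against your installation):

        opmnctl @cluster status      # confirm every instance has joined the cluster
        opmnctl @cluster stopall     # stop all instances across the cluster
        opmnctl @cluster startall    # start them again
        opmnctl status -l            # per-instance detail, including the PIDs used in the kill -9 test
        tail -f $ORACLE_HOME/Apache/Apache/logs/access_log   # watch which HTTP node serves BPELConsole requests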

    Read the article

  • How do DP and CC change in Piet?

    - by Paul Butcher
    According to the specification, Black colour blocks and the edges of the program restrict program flow. If the Piet interpreter attempts to move into a black block or off an edge, it is stopped and the CC is toggled. The interpreter then attempts to move from its current block again. If it fails a second time, the DP is moved clockwise one step. These attempts are repeated, with the CC and DP being changed between alternate attempts. If after eight attempts the interpreter cannot leave its current colour block, there is no way out and the program terminates. Unless I'm reading it incorrectly, this is at odds with the behaviour of the Fibonacci sequence example here: http://www.dangermouse.net/esoteric/piet/fibbig1.gif (from: http://www.dangermouse.net/esoteric/piet/samples.html). Specifically, why does the DP turn left at (0,3) ((0,0) being (top, left)) when it hits the left edge? At this point, both DP and CC are LEFT, so, by my reading, the sequence should then be:

    1. Attempt (and fail) to leave the block by going off the edge at (0,4);
    2. Toggle CC to RIGHT;
    3. Attempt (and fail) to leave the block by going off the edge at (0,2);
    4. Rotate DP to UP;
    5. Attempt (and succeed) to leave the block at (1,2) by entering the white block at (1,1).

    The behaviour indicated by the trace seems to be that DP gets rotated all the way, leaving CC at LEFT. What have I misunderstood?
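    To make the dispute concrete, here is a sketch of the exit search as I read the spec; every helper named here is a hypothetical stand-in, not taken from any real interpreter:

        # Sketch of the interpreter's exit search per the spec's wording.
        # furthest_codel, neighbour, is_blocked, toggle, rotate_cw are assumed helpers.
        def find_exit(block, dp, cc):
            for attempt in range(8):
                edge = furthest_codel(block, dp, cc)  # codel chosen by DP, then CC
                nxt = neighbour(edge, dp)             # one step in the DP direction
                if not is_blocked(nxt):               # not black, not off the edge
                    return nxt, dp, cc
                if attempt % 2 == 0:
                    cc = toggle(cc)                   # 1st, 3rd, ... failure: flip CC
                else:
                    dp = rotate_cw(dp)                # 2nd, 4th, ... failure: turn DP
            return None                               # eight failures: program halts

    On that reading, the failure at (0,3) should toggle CC before the DP ever rotates, which is exactly the discrepancy with the published trace.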

    Read the article

  • Can this method to convert a name to proper case be improved?

    - by Kelsey
    I am writing a basic function to convert millions of names (a one-time batch process) from their current form, which is all upper case, to proper mixed case. I came up with the following so far:

        public string ConvertToProperNameCase(string input)
        {
            TextInfo textInfo = new CultureInfo("en-US", false).TextInfo;
            char[] chars = textInfo.ToTitleCase(input.ToLower()).ToCharArray();
            // re-capitalize the letter after an apostrophe or hyphen (O'Brian, Doe-Smith)
            for (int i = 0; i + 1 < chars.Length; i++)
            {
                if ((chars[i].Equals('\'')) || (chars[i].Equals('-')))
                {
                    chars[i + 1] = Char.ToUpper(chars[i + 1]);
                }
            }
            return new string(chars);
        }

    It works in most cases, such as:

    JOHN SMITH - John Smith
    SMITH, JOHN T - Smith, John T
    JOHN O'BRIAN - John O'Brian
    JOHN DOE-SMITH - John Doe-Smith

    There are some edge cases that do not work, like:

    JASON MCDONALD - Jason Mcdonald (correct: Jason McDonald)
    OSCAR DE LA HOYA - Oscar De La Hoya (correct: Oscar de la Hoya)
    MARIE DIFRANCO - Marie Difranco (correct: Marie DiFranco)

    These are not captured, and I am not sure if I can handle all these odd edge cases. Can anyone think of anything I could change or add to capture more edge cases? I am sure there are tons of edge cases I am not even thinking of as well. All casing should follow North American conventions, too, meaning that if certain countries expect a specific capitalization format, and that differs from the North American format, then the North American format takes precedence.
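    A hedged sketch of one common approach - an exception list layered on top of the TextInfo pass. The lists below are illustrative, not exhaustive, and some prefixes are inherently ambiguous ("Macon" and "Dickens" would be miscapitalized), so no list will be fully correct without a human review pass:

        // Illustrative members to sit alongside ConvertToProperNameCase.
        private static readonly string[] LowercaseParticles = { "de", "la", "di", "van", "von" };
        private static readonly string[] CapPrefixes = { "Mc", "Mac", "Di", "Fitz" }; // "Mac"/"Di" are ambiguous

        public string FixEdgeCases(string titleCased)
        {
            string[] words = titleCased.Split(' ');
            for (int i = 0; i < words.Length; i++)
            {
                // particles stay lowercase unless they start the name
                if (i > 0 && Array.IndexOf(LowercaseParticles, words[i].ToLower()) >= 0)
                {
                    words[i] = words[i].ToLower();
                    continue;
                }
                // re-capitalize the letter following a known prefix: Mcdonald -> McDonald
                foreach (string prefix in CapPrefixes)
                {
                    if (words[i].StartsWith(prefix) && words[i].Length > prefix.Length)
                    {
                        words[i] = prefix + Char.ToUpper(words[i][prefix.Length]) +
                                   words[i].Substring(prefix.Length + 1);
                        break;
                    }
                }
            }
            return string.Join(" ", words);
        }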

    Read the article

  • In a digital photo, how can I detect if a mountain is obscured by clouds?

    - by Gavin Brock
    The problem: I have a collection of digital photos of a mountain in Japan. However, the mountain is often obscured by clouds or fog. What techniques can I use to detect that the mountain is visible in the image? I am currently using Perl with the Imager module, but am open to alternatives. All the images are taken from the exact same position - these are some samples.

    My naïve solution: I started by taking several horizontal pixel samples of the mountain cone and comparing the brightness values to other samples from the sky. This worked well for differentiating good image 1 and bad image 2. However, in the autumn it snowed and the mountain became brighter than the sky, like image 3, and my simple brightness test started to fail. Image 4 is an example of an edge case: I would classify this as a good image since some of the mountain is clearly visible.

    UPDATE 1: Thank you for the suggestions - I am happy you all vastly over-estimated my competence. Based on the answers, I have started trying the ImageMagick edge-detect transform, which gives me a much simpler image to analyze:

        convert sample.jpg -edge 1 edge.jpg

    I assume I should use some kind of masking to get rid of the trees and most of the clouds. Once I have the masked image, what is the best way to compare the similarity to a 'good' image? I guess the "compare" command is suited for this job? How do I get a numeric 'similarity' value from it?
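    For the numeric part, ImageMagick's compare tool can reduce two images to a single distortion score; a hedged sketch (filenames and crop geometry are assumptions):

        convert sample.jpg -crop 400x300+120+80 +repage -edge 1 edge_roi.jpg
        compare -metric RMSE edge_roi.jpg edge_roi_reference.jpg null: 2>&1

    compare prints the score to stderr (hence the redirect); smaller means more similar, so thresholding against a known-good reference gives a visible/not-visible call. Cropping to the cone first is a crude stand-in for a proper mask.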

    Read the article

  • NSPredicate as a constraint solver?

    - by Felixyz
    I'm working on a project which includes some slightly more complex dynamic layout of interface elements than what I'm used to. I always feel stupid writing complex code that checks if so-and-so is close to such-and-such and in that case moves it x% in some direction, etc. That's just not how programming should be done. Programming should be as declarative as possible! Precisely because what I'm going to do is fairly simple, I thought it would be a good opportunity to try something new, and I thought of using NSPredicate as a simple constraint solver. I've only used NSPredicate for very simple tasks so far, but I know that it is capable of much more. Are there any ideas, experiences, examples, warnings, insights that could be useful here? I'll give a very simple example so there will be something concrete to answer. How could I use NSPredicate to solve the following constraints:

        viewB.xmid = viewB.leftEdge + viewB.width / 2
        viewB.xmid = max(300, viewA.rightEdge + 20 + viewB.width / 2)

    ("viewB should be horizontally centered on coordinate 300, unless its left edge gets within 20 pixels of viewA's right edge, in which case viewB's left edge should stay fixed at 20 pixels to the right of viewA's right edge and viewB's horizontal center gets pushed to the right.")

    viewA.rightEdge and viewB.width can vary, and those are the 'input variables'.

    EDIT: Any solution would probably have to use the NSExpression method -(id)expressionValueWithObject:(id)object context:(NSMutableDictionary *)context. This answer is relevant.
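    A hedged, Foundation-only sketch of what that NSExpression route looks like for one of the constraints (values assumed). One caveat worth stating up front: NSExpression evaluates an expression against known values, but it does not solve for unknowns, so something still has to decide which constraint wins:

        // Evaluate the "pushed" candidate against a bag of current values.
        NSDictionary *values = @{ @"rightEdgeA" : @250.0, @"widthB" : @80.0 };
        NSExpression *pushed =
            [NSExpression expressionWithFormat:@"rightEdgeA + 20.0 + widthB / 2.0"];
        double candidate = [[pushed expressionValueWithObject:values context:nil] doubleValue];
        double xmidB = candidate > 300.0 ? candidate : 300.0;  // the max(...) from the question
        // xmidB == 310.0 for these inputs: viewB has been pushed right of centre 300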

    Read the article

  • SQL2k8 T-SQL: Output into XML file

    - by Nai
    I have two tables:

    Table Name: Graph

        UID1  UID2
        ----------
        12    23
        12    32
        41    51
        32    41

    Table Name: Profiles

        NodeID  UID  Name
        -----------------
        1       12   Robs
        2       23   Jones
        3       32   Lim
        4       41   Teo
        5       51   Zacks

    I want to get an XML file like this:

        <graph directed="0">
          <node id="1">
            <att name="UID" value="12"/>
            <att name="Name" value="Robs"/>
          </node>
          <node id="2">
            <att name="UID" value="23"/>
            <att name="Name" value="Jones"/>
          </node>
          <node id="3">
            <att name="UID" value="32"/>
            <att name="Name" value="Lim"/>
          </node>
          <node id="4">
            <att name="UID" value="41"/>
            <att name="Name" value="Teo"/>
          </node>
          <node id="5">
            <att name="UID" value="51"/>
            <att name="Name" value="Zacks"/>
          </node>
          <edge source="12" target="23" />
          <edge source="12" target="32" />
          <edge source="41" target="51" />
          <edge source="32" target="41" />
        </graph>

    Thanks very much!
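    A hedged sketch of one way to shape this with nested FOR XML PATH subqueries (table and column names taken from the question; untested against a live 2008 instance):

        SELECT
            0 AS [@directed],
            (SELECT p.NodeID AS [@id],
                    (SELECT 'UID' AS [@name], p.UID AS [@value]
                     FOR XML PATH('att'), TYPE),
                    (SELECT 'Name' AS [@name], p.Name AS [@value]
                     FOR XML PATH('att'), TYPE)
             FROM Profiles p
             ORDER BY p.NodeID
             FOR XML PATH('node'), TYPE),
            (SELECT g.UID1 AS [@source], g.UID2 AS [@target]
             FROM Graph g
             FOR XML PATH('edge'), TYPE)
        FOR XML PATH('graph');

    To land the result in an actual file, the usual routes are bcp ... queryout or sqlcmd -o from the command line.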

    Read the article

  • What's the best way to format this simple HTML form using CSS?

    - by GregH
    I have a simple HTML form with, say, four input widgets (see below): two lines with two widgets on each line. However, when this renders it is pretty ugly. I want the whole form to be indented from the left edge of the page by, say, 40px, and I want the left edges of the widgets to line up with each other and the right edges of the labels to line up. I also want to be able to specify a minimum distance between the right edge of the first widget and the label of the widget next to it. How would I do this using CSS? Basically, so it looks something like:

           Name: _____________    Common Names: _____________
        Version: _____________           Status: _____________

    See the current un-formatted HTML below:

        <form name="detailData">
          <div id="dataEntryForm">
            <label>
              Name: <input type="text" class="input_text" name="ddName"/>
              Common Names: <input type="text" class="input_text" name="ddCommonNames"><P>
              Version: <input type="text" class="input_text" name="ddVer"/>
              Status: <select name="ddStatus">
                        <option value="A" selected="selected">Active</option>
                        <option value="P">Planned</option>
                        <option value="D">Deprecated</option>
                      </select>
            </label>
          </div>
        </form>
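    A hedged sketch of one CSS approach; it assumes the markup is first adjusted so each label/input pair gets its own <label> element rather than one <label> wrapping the whole form:

        #dataEntryForm { margin-left: 40px; }
        #dataEntryForm label {
          display: inline-block;
          width: 110px;           /* right edge of every label lines up */
          text-align: right;
          margin-right: 6px;
        }
        #dataEntryForm input.input_text,
        #dataEntryForm select {
          margin-right: 40px;     /* minimum gap before the next label */
        }

    With fixed-width, right-aligned labels, the widgets' left edges fall into a column automatically; a two-column table is the other common route.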

    Read the article

  • Implementing list position locator in C++?

    - by jfrazier
    I am writing a basic Graph API in C++ (I know libraries already exist, but I am doing it for the practice/experience). The structure is basically that of an adjacency-list representation. So there are Vertex objects and Edge objects, and the Graph class contains:

        list<Vertex *> vertexList;
        list<Edge *> edgeList;

    Each Edge object has two Vertex* members representing its endpoints, and each Vertex object has a list of Edge* members representing the edges incident to the Vertex. All this is quite standard, but here is my problem. I want to be able to implement deletion of Edges and Vertices in constant time, so for example each Vertex object should have a Locator member that points to the position of its Vertex* in the vertexList. The way I first implemented this was by saving a list::iterator, as follows:

        vertexList.push_back(v);
        v->locator = --vertexList.end();

    Then if I need to delete this vertex later, rather than searching the whole vertexList for its pointer, I can call:

        vertexList.erase(v->locator);

    This works fine at first, but it seems that if enough changes (deletions) are made to the list, the iterators will become out-of-date and I get all sorts of iterator errors at runtime. This seems strange for a linked list, because it doesn't seem like you should ever need to re-allocate the remaining members of the list after deletions, but maybe the STL does this to optimize by keeping memory somewhat contiguous? In any case, I would appreciate it if anyone has any insight as to why this happens. Is there a standard way in C++ to implement a locator that will keep track of an element's position in a list without becoming obsolete? Much thanks, Jeff
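    For contrast, a self-contained sketch of the locator pattern working as intended with std::list, whose iterators to other elements are guaranteed to stay valid across erase (unlike vector or deque). If errors still appear with code equivalent to this, the usual suspects are erasing the same locator twice or copying a Vertex so its stored iterator goes stale:

        #include <iostream>
        #include <list>

        struct Vertex {
            int id;
            std::list<Vertex*>::iterator locator;  // this Vertex's position in the owner's list
        };

        int main() {
            std::list<Vertex*> vertexList;

            Vertex* v = new Vertex;
            v->id = 42;
            vertexList.push_back(v);
            v->locator = --vertexList.end();   // record the position just inserted

            vertexList.erase(v->locator);      // O(1) removal, no search
            delete v;

            std::cout << vertexList.size() << '\n';  // prints 0
            return 0;
        }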

    Read the article

  • Android game logic problem

    - by semajhan
    I'm currently creating a game and have a problem which I think I know why it is occurring, but I'm not entirely sure, and even if I knew, I don't know how to solve it. I have a 10 x 10 2D array and a "player" class that takes up a tile. I have created 2 instances of the player and move them around via swiping. Around the edges I have put "walls" that the player cannot walk through, and everything works fine until I remove a wall. Once I remove a wall and move the character/player to the edge of the screen, the player cannot go any further. The problem occurs when the second instance of the player is not at the edge of the screen but, say, 2 tiles from the first instance of "player", who is at the edge. If I try moving them further in the direction of the edge, I understand that the first instance of player wouldn't move or do anything, but the second instance of player should still move - yet it won't. This is the code that is executed when the user swipes:

        if (player.getArrayX() - 1 != player2.getArrayX()) {
            player.moveLeft();
        } else if (player.getArrayX() - 1 == player2.getArrayX()
                && player.getArrayY() != player2.getArrayY()) {
            player.moveLeft();
        }
        if (player2.getArrayX() - 1 != player.getArrayX()) {
            player2.moveLeft();
        } else if (player2.getArrayX() - 1 == player.getArrayX()
                && player2.getArrayY() != player.getArrayY()) {
            player2.moveLeft();
        }

    In the player class I have:

        public void moveLeft() {
            if (alive) {
                switch (levelMaster.getLevel1(getArrayX() - 1, getArrayY())) {
                case 0:
                    break;
                case 1:
                    subX(); // basically moves player left
                    setArrayX(getArrayX() - 1); // shifts x coord of player 1 within tilemap
                    Log.d("semajhan", "x: " + getArrayX());
                    break;
                case 9:
                    subX();
                    setArrayX(getArrayX() - 1);
                    setAlive(false);
                    break;
                }
            }
        }

    Any help on the matter or further insight would be greatly appreciated, thanks.
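    Not a definitive diagnosis, but one way to make the two blocking conditions easier to reason about is to check the destination tile in a single place; a hedged sketch reusing the names from the question:

        // Decide "can p step left?" once, covering walls and the other player.
        // Assumes 0 marks a wall, as in the question's switch statement.
        private boolean canMoveLeft(Player p, Player other) {
            int tx = p.getArrayX() - 1;
            int ty = p.getArrayY();
            boolean wall = levelMaster.getLevel1(tx, ty) == 0;
            boolean body = (tx == other.getArrayX() && ty == other.getArrayY());
            return !wall && !body;
        }

        // the swipe handler then becomes symmetric and order-independent:
        // if (canMoveLeft(player, player2))  player.moveLeft();
        // if (canMoveLeft(player2, player))  player2.moveLeft();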

    Read the article

  • javascript error of unterminated string

    - by OM The Eternity
    I want to pass a parameter to a JavaScript function, where the parameter is a string. This JavaScript function is a hintbox on mouse hover, and the string I am using is like this:

    Hemmed Finish: Every side/edge (1/2" to 2") of the banner is folded and glued (special vinyl solution) or heat pressed. This is the most common and best finish option.

    Stitched Finish: Every side/edge (1" to 2") of the banner is folded in the back and stitched/sewed with white thread. This is not a common option, as thread can be seen on the banner.

    In the hintbox on mouse hover, the text above has to be displayed as it is displayed here, along with the paragraph break. But when I pass it as a parameter, appending some backslashes to escape certain punctuation, it still gives me a JavaScript error of "unterminated string". I am doing this:

        onMouseover="showhint('Hemmed Finish\: Every side/edge \(1/2\'\' to 2\'\'\) of the banner are folded and glued \(special vinyl solution\) or heat pressed. This is the most common and best finish option.Stitched Finish\: Every side/edge \(1\'\' to 2\'\'\) of the banner are folded in the back and stitched/sewed with white thread. This is not a common option as thread can be seen on the banner', this, event, '250px')"

    Please could you help me rectify the issue?
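    A hedged sketch of one way to sidestep the quoting problem entirely: build the string in a script block, where only JavaScript quoting applies and nothing has to survive inside an HTML attribute (showhint is the function from the question; the variable name is made up):

        // \u0022 is a double quote, so inch marks cannot close the HTML attribute,
        // and concatenation avoids a string literal spanning a line break.
        var hemmedHint = 'Hemmed Finish: Every side/edge (1/2\u0022 to 2\u0022) of the banner '
                       + 'is folded and glued (special vinyl solution) or heat pressed. '
                       + 'This is the most common and best finish option.';
        // then, in the markup: onmouseover="showhint(hemmedHint, this, event, '250px')"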

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to get it to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error:

        System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.

    I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points:

    - The same user identity is running the process in both cases
    - The same working directory is used in both cases
    - The same dbdeploy code is running in both cases, and with the same supplied parameters
    - The same database connection string is being used in both cases
    - The same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)

    Here's the top of the stack trace during the error:

        at System.Data.OracleClient.OCI.DetermineClientVersion()
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
        at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
        at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
        at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OracleClient.OracleConnection.Open()
        at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager.GetCurrentVersionFromDb()

    My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?
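    A hedged first step, since build agents running as a service often see a different environment than an interactive shell: log what the build actually sees right before the dbdeploy task. environment::get-variable is a standard NAnt function; the second line assumes PROCESSOR_ARCHITECTURE is set, as it normally is on Windows:

        <echo message="PATH=${environment::get-variable('PATH')}" />
        <echo message="ARCH=${environment::get-variable('PROCESSOR_ARCHITECTURE')}" />
        <!-- a 64-bit process loading 32-bit Oracle client bits (or vice versa)
             is a classic cause of the 'requires version 8.1.7 or greater' message -->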

    Read the article

  • How to properly use glDiscardFramebufferEXT

    - by Rafael Spring
    This question relates to the OpenGL ES 2.0 extension EXT_discard_framebuffer. It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images in an undefined state, this means that either:

    - I don't care about the content anymore, since it has been used with glReadPixels() already,
    - I don't care about the content anymore, since it has been used with glCopyTexSubImage() already, or
    - I shouldn't have made the render in the first place.

    Clearly, only the first two cases make sense - or are there other cases in which glDiscardFramebufferEXT() is useful? If yes, which are these cases?
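    For comparison, a hedged sketch of the pattern this extension is commonly shown with on tile-based GPUs (typical iOS usage, not an exhaustive answer): the depth buffer is scratch data that is never read back at all, so telling the driver to drop it after drawing can avoid a wasteful tile store.

        #include <OpenGLES/ES2/gl.h>
        #include <OpenGLES/ES2/glext.h>

        void discard_depth_after_draw(GLuint framebuffer)
        {
            /* the colour buffer is about to be presented, but the depth
               contents will never be read again, so drop them */
            const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
            glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
            glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
            /* ...then hand the colour renderbuffer to the presentation layer... */
        }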

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >