Search Results

Search found 11141 results on 446 pages for 'base conversion'.

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct. Because this is all doable without any state or special tools (browser automation, etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question: where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit tests?
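
    In NUnit, such a pipeline test might look roughly like this (FakeHttpRequest, Application.Process and the response members are placeholder names for illustration, not the framework's real API):

    ```csharp
    using NUnit.Framework;

    // Rough sketch only: FakeHttpRequest, Application.Process and the
    // response members stand in for whatever the framework actually exposes.
    [TestFixture]
    public class FrontControllerIntegrationTests
    {
        [Test]
        public void Get_root_returns_200_with_expected_body()
        {
            var request = new FakeHttpRequest("GET", "/");   // fake HTTP request object

            var response = Application.Process(request);     // run it through the framework

            // check the response is correct
            Assert.AreEqual(200, response.StatusCode);
            StringAssert.Contains("Hello", response.Body);
        }
    }
    ```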

    Read the article

  • SharePoint Content Database Sizing

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information. SharePoint stores the majority of its content in SQL Server databases. Some of these databases are concerned with the overall configuration of the system or with managed-services support, but most are those that accept uploaded or collaborative content. These databases need to be sized with various factors in mind, such as: the ability to back up and restore the content quickly, allowing for tighter SLAs and isolation in the event of database failure. SharePoint as a system avoids SQL transactions in many instances. It does so to avoid locks, but at the cost of resultant orphan data or possible data corruption. Larger databases are known to have more orphan items than smaller ones; smaller databases also keep problems isolated. As a result, it is very important for any project to estimate content database sizing up front. This is especially important in collaborative, document-centric projects. Not doing this upfront planning can... Read full article ....

    Read the article

  • Improve the relevance of answers to customer requests and reduce DMT*

    - by Valérie De Montvallon
    Knowledge Management to improve the relevance of answers to customer requests and reduce DMT (average handling time). With a testimonial from SFR. Monday 2 July, 8:30 to 10:30, at the Automobile Club de France, Paris. Whether over the web, by voice call, or at an in-store appointment, your customers now expect a single, relevant, fast answer, whatever the contact channel. Your customer advisers and agents need easy access to the necessary information. The volume of data used by customer relations services (contact centres, in-store sales staff, community managers...) is impressive, often mixing several sources of information and requiring semantic searches. Join us on 2 July to discover how a Knowledge Management tool can optimise the relevance of search results. During this morning of discussion, Jocelyn Aubry, CIO for Consumer Customer Relations at SFR, will share his experience of integrating the Oracle InQuira solution to build a single, cross-channel knowledge base for his customer advisers. Registration: [email protected]

    Read the article

  • Oracle Security Webcast Slides and Replay now available

    - by Alex Blyth
    Hi everyone. Thanks for attending the "Oracle Database Security" session last week. Slides are available here: Oracle Database Security Overview (view more presentations from Oracle Australia). You can download the replay here. Next week's session is on Oracle Application Express. APEX is one of the best-kept secrets in the Oracle database and can be used to build anything from very simple apps, such as phone directories, all the way to complex knowledge-base-style apps that are driven heavily by data. You can enroll for this session here. Thanks again. Cheers, Alex

    Read the article

  • Interviewing a DBA

    - by kev
    Our company is in the process of recruiting a DBA. I have built a group test of questions, from basic questions such as PK and FK constraints and simple queries (FizzBuzz style) to more advanced things such as indexes, collation, isolation levels, and how to trace deadlocks. However, that is the limit of my knowledge. So my question to all the DBAs is: what is the base level of knowledge that all DBAs should have? We are really looking for someone who will be able to manage our replication, analyze some of our slower-running queries (someone the devs can go to for help), and trace some of the deadlock issues that we are having. Any help would be most appreciated!

    Read the article

  • Oracle BI and EPM Partner Blogs

    - by Mike.Hallett(at)Oracle-BI&EPM
    Below is a simple list of some of our specialist Oracle BI and EPM Partner Blogs, where there is lots of great material and discussion.

    http://www.aortabi.nl/news/ (Netherlands)
    http://www.clearpeaks.com/blog/ (English)
    http://www.peakindicators.com/index.php/knowledge-base (English)
    http://www.project.eu.com/blog/ (English)
    http://www.qubix.co.uk/insights (English)
    http://www.rittmanmead.com/blog/ (English)
    https://www.endecacommunity.com/ (English)

    If you are a specialist OPN EMEA BI and EPM Partner with hints and tips to share, and would like your blog to be added to this list, then just let me know @ [email protected].

    Read the article

  • Oracle ETPM is renamed Oracle Public Sector Revenue Management (PSRM)

    - by Rick Finley
    We are excited to announce that with the upcoming release of v2.4, we are renaming ETPM to Oracle Public Sector Revenue Management (Oracle PSRM). This is a pure name change, and all terms and conditions for existing customer licensing remain unchanged. We feel this updated naming better reflects our current customer base, which includes tax revenue for many Departments of Revenue, as well as agencies that manage non-tax revenue, such as regulatory fees, loans, and social benefits. Please note that, as part of this name change, related products in the Oracle ETPM family, such as Oracle Tax Analytics and Oracle ETPM Self Service, will be renamed at their next major product release to align with the Oracle PSRM theme.

    Read the article

  • How do I tell Ubuntu to install only what I asked for, and not pull in other dependencies that will break the whole system?

    - by YumYumYum
    How can I install only python-webkit, without the other packages it is offering to install? (No gstreamer packages; I do not want a single one of those files installed in my distro because of the GPL license, and it slows my machine down a lot.)

    $ sudo apt-get install libwebkitgtk-1.0-0 python-webkit
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      libgstreamer-plugins-base0.10-0 libgstreamer0.10-0
    Suggested packages:
      gstreamer-codec-install gnome-codec-install gstreamer0.10-tools gstreamer0.10-plugins-base
    The following NEW packages will be installed:
      libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 libwebkitgtk-1.0-0
    0 upgraded, 3 newly installed, 0 to remove and 333 not upgraded.
    Need to get 8,231 kB of archives.
    After this operation, 28.2 MB of additional disk space will be used.
    Do you want to continue [Y/n]? n
    Abort.

    Read the article

  • Dependency problems: now not able to install or remove any package

    - by Manish gour
    I was installing wvdial. I no longer need it, but in the process the machine ended up with dependency problems, and because of them I am unable to install any package. Please help. Below is the output:

    The following packages have unmet dependencies:
     libqt3-mt:i386 : Depends: libjpeg62 but it is not installed
     libusb-dev : Depends: libusb-0.1-4 (= 2:0.1.12-14) but 2:0.1.12-20 is installed
     wvdial:i386 : Depends: libc6 (>= 2.4) but 2.15-0ubuntu10.4 is installed
                   Depends: libuniconf4.4 but it is not installed
                   Depends: libwvstreams4.4-base but it is not installed
                   Depends: libwvstreams4.4-extras but it is not installed
                   Depends: libxplc0.3.13 but it is not installed
                   Depends: ppp (>= 2.3.0) but it is not installed

    Read the article

  • Multiple vulnerabilities in Oracle Java Web Console

    - by RitwikGhoshal
    CVE            Description                                                  CVSSv2 Base Score
    CVE-2011-0534  Resource Management Errors vulnerability                     5.0
    CVE-2011-1184  Permissions, Privileges, and Access Controls vulnerability   5.0
    CVE-2011-2204  Information Exposure vulnerability                           1.9
    CVE-2011-2526  Improper Input Validation vulnerability                      4.4
    CVE-2011-2729  Permissions, Privileges, and Access Controls vulnerability   5.0
    CVE-2011-3190  Permissions, Privileges, and Access Controls vulnerability   7.5
    CVE-2011-3375  Information Exposure vulnerability                           5.0
    CVE-2011-4858  Resource Management Errors vulnerability                     5.0
    CVE-2011-5062  Permissions, Privileges, and Access Controls vulnerability   5.0
    CVE-2011-5063  Improper Authentication vulnerability                        4.3
    CVE-2011-5064  Cryptographic Issues vulnerability                           4.3
    CVE-2012-0022  Numeric Errors vulnerability                                 5.0

    Component: Apache Tomcat. Product and Resolution: Solaris 10 (SPARC: 147673-04, X86: 147674-04).

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • 2D pixel/sprite game in Unity? [on hold]

    - by acidzombie24
    Hi, I'm an absolute newbie in Unity. In the past I was told Unity was terrible for 2D games, so I looked away after studying it for a few days. I don't remember if this was right before Unity 4 came out or after. I hear Unity is fairly good at 2D now. I tried googling for tutorials, but I must be doing it wrong: I could not find a good Tetris or tic-tac-toe tutorial. What assets/tutorials do I want for a 2D game? A side question: which tutorials are good if I want to make a Fire Emblem/Advance Wars type game (a HUD-heavy, grid-based game)?

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then... if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain: I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by doing the kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in the "requirement document", is not enough to start software development. Coding should only begin once the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer.

    Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks and activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning.

    Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The Arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The Roman "encoding" is different. There is no base (like 10 for Arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the Arabic: each "digit" can be taken at face value; no multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept Roman "digits" as not being limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult, because here some "digits" get subtracted. Here's the list of Roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an Arabic number becomes trivial. I just need to find the values of the Roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the Roman representation

    Translation is just a look-up. Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check whether an Arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check whether an Arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check whether a number consisting of the same factor twice (e.g. 2000=[M,M]) is processed correctly. Finally check whether an Arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red, because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. I do that by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency: some "see" the final code right away before their mental eye, others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
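
    (The original post shows each implementation step as a screenshot. As a stand-in, here is a compact sketch of the solution the text describes. The class and method names ArabicRomanConversions, ToRoman, Translate and Find_factors are taken from the article; the code itself is an approximation, not the author's exact listing.)

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class ArabicRomanConversions
    {
        // The map of factors and Roman "digits", including the subtractive
        // pairs (IV, IX, ...), ordered from highest to lowest.
        static readonly int[]    Values = { 1000, 900, 500, 400, 100, 90,
                                            50, 40, 10, 9, 5, 4, 1 };
        static readonly string[] Digits = { "M", "CM", "D", "CD", "C", "XC",
                                            "L", "XL", "X", "IX", "V", "IV", "I" };

        public static string ToRoman(int arabic)
        {
            // The three "assumed" functions in conjunction: find, translate, compile.
            return Compile(Translate(Find_factors(arabic)));
        }

        // Find the highest remaining factor fitting in the value,
        // subtract it, and repeat with the remaining value.
        static List<int> Find_factors(int value)
        {
            var factors = new List<int>();
            foreach (var v in Values)
                while (value >= v) { factors.Add(v); value -= v; }
            return factors;
        }

        // Translation is just a look-up.
        static IEnumerable<string> Translate(IEnumerable<int> factors)
        {
            return factors.Select(f => Digits[Array.IndexOf(Values, f)]);
        }

        static string Compile(IEnumerable<string> digits)
        {
            return string.Concat(digits);
        }
    }
    ```
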
    Normally I would not even have bothered to write that compilation test, because the implementation is so simple; I don't need to test .NET framework functionality. But again: it serves the educational purpose.

    Finally, the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one.

    The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple, even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required, but maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". This then is my final test class, and with it the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution: the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages: First, encapsulation of the implementation details was not compromised. Private methods could naturally stay private; I did not need to make them internal or public just to be able to test them. Second, I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken as dogma, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.

    Read the article

  • Multiple Denial of Service vulnerabilities in libpng

    - by chandan
    CVE            Description                              CVSSv2 Base Score
    CVE-2007-5266  Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5267  Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5268  Denial of Service (DoS) vulnerability    4.3
    CVE-2007-5269  Denial of Service (DoS) vulnerability    5.0
    CVE-2008-1382  Denial of Service (DoS) vulnerability    7.5
    CVE-2008-3964  Denial of Service (DoS) vulnerability    4.3
    CVE-2009-0040  Denial of Service (DoS) vulnerability    6.8

    Component: PNG reference library (libpng). Product and Resolution:
      Solaris 10 (SPARC: 137080-03, X86: 137081-03)
      Solaris 9 (SPARC: 139382-02 114822-06, X86: 139383-02)
      Solaris 8 (SPARC: 114816-04, X86: 114817-04)

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that "learning" can mean anything from "can give a ten-minute intro" to "can refactor and shrink this to 30% of the size". Let's leave that open for now.

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine), game objects are divided into two types: world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (for the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first look this idea seems reasonable, but it can take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
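
    In a C#-flavoured sketch, the pattern described above, plus the usual mitigation (entities schedule their next think time, so idle entities are skipped), looks roughly like this. This illustrates the idea only; it is not actual Source engine code:

    ```csharp
    using System.Collections.Generic;

    // Illustration of the think-function pattern, not real Source code.
    abstract class BaseEntity
    {
        // Entities say when they next want to run; the engine skips them
        // until then, so dormant entities cost almost nothing per frame.
        public float NextThinkTime;

        public abstract void Think(float now);
    }

    class Player : BaseEntity
    {
        public override void Think(float now)
        {
            // read input, move, collide, ...
            NextThinkTime = now;           // players think every frame
        }
    }

    class Door : BaseEntity
    {
        public override void Think(float now)
        {
            // continue opening/closing, then go back to sleep
            NextThinkTime = now + 0.1f;    // wake up again in 100 ms
        }
    }

    class Engine
    {
        readonly List<BaseEntity> entities = new List<BaseEntity>();
        float now;

        public void Frame(float deltaTime)
        {
            now += deltaTime;
            foreach (var entity in entities)
                if (entity.NextThinkTime <= now)   // only entities that are due
                    entity.Think(now);
        }
    }
    ```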

    Read the article

  • Why don't computers store decimal numbers as a second whole number?

    - by SomeKittens
    Computers have trouble storing fractional numbers whose denominator is something other than a power of 2. This is because the first bit after the binary point is worth 1/2 and the second is worth 1/4 (i.e. 1/2^1 and 1/2^2), etc. Why deal with all sorts of rounding errors when the computer could have just stored the decimal part of the number as another whole number (which is therefore accurate)? The only thing I can think of is dealing with repeating decimals (in base 10), but there could have been an edge solution to that (like we currently have with infinity).
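
    Storing the fractional part as a scaled whole number is in fact what decimal floating point does: .NET's decimal type, for example, keeps a base-10 significand as an integer plus a power-of-ten scale. A small C# illustration:

    ```csharp
    using System;

    class ScaledIntegers
    {
        static void Main()
        {
            // Binary floating point: 0.1 and 0.2 have no finite base-2
            // expansion, so the sum picks up rounding error.
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);          // False
            Console.WriteLine(d.ToString("R"));   // 0.30000000000000004

            // decimal stores a base-10 significand as an integer plus a
            // scale, which is exactly the "store the decimal part as a
            // whole number" idea from the question.
            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m);         // True

            // By hand: 12.34 stored as (1234, scale 2), i.e. 1234 / 10^2.
            // Addition stays exact; multiplication needs explicit
            // rescaling, and 1/3 still cannot be stored exactly (the
            // repeating-decimal case the question mentions).
            long units = 1234; int scale = 2;
            Console.WriteLine(units / (decimal)Math.Pow(10, scale));  // 12.34
        }
    }
    ```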

    Read the article

  • Oracle releases Oracle Application Express 4.2, its rapid development tool integrated into Oracle Database

    Oracle Application Express, Oracle's rapid application development tool, is now available in version 4.2. This tool, included with Oracle Database at no extra cost, is used to develop web applications that work with data in an Oracle database. This update is strongly focused on mobile and brings a number of new features for developing mobile applications with APEX. In particular: application rendering that adapts to offer an interface suited to the mobile device in use; the use of jQuery Mobile, which provides compatibility with most devices without modification; the ability to create HTML5 charts for devices without Flash; the use...

    Read the article

  • Migration from XNA to SharpDX

    - by Wouter
    My fear is that XNA has reached the end of the road. To keep up with the latest technology a shift to another game framework might be needed. We have many games in a large codebase, all based on XNA. My question is, how much work would it be to migrate to SharpDX and are there other possibilities? Our code base mainly uses basic 3D rendering and the SpriteBatch, no fancy shader stuff. Update: I should have mentioned we only use 2.5D, we have a simple engine that builds textured quads to render text and animated sprites. Also for sound we use XACT (what else..) with some effects.
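
    Whatever the target (SharpDX, or an alternative such as MonoGame, which reimplements the XNA API), the migration cost usually depends on how well framework calls are isolated. One hedged sketch of such a seam, using made-up interface names rather than any real SharpDX API:

    ```csharp
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Hypothetical seam: the game draws through this interface, so only the
    // implementing class changes when swapping XNA for SharpDX or MonoGame.
    // Note it deliberately exposes only plain types, not XNA types.
    public interface ISpriteRenderer
    {
        void Begin();
        void Draw(string textureId, float x, float y, float rotation);
        void End();
    }

    // XNA-backed implementation (today). A SharpDX/MonoGame-backed one
    // would replace this single class; game code that sees only
    // ISpriteRenderer stays untouched.
    public sealed class XnaSpriteRenderer : ISpriteRenderer
    {
        readonly SpriteBatch batch;
        readonly Dictionary<string, Texture2D> textures;

        public XnaSpriteRenderer(SpriteBatch batch,
                                 Dictionary<string, Texture2D> textures)
        {
            this.batch = batch;
            this.textures = textures;
        }

        public void Begin() { batch.Begin(); }

        public void Draw(string textureId, float x, float y, float rotation)
        {
            batch.Draw(textures[textureId], new Vector2(x, y), null,
                       Color.White, rotation, Vector2.Zero, 1f,
                       SpriteEffects.None, 0f);
        }

        public void End() { batch.End(); }
    }
    ```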

    Read the article

  • How important is it to sacrifice your free time to accomplish goals? [closed]

    - by Darf Zon
    I was reading a book about XP programming and agile teams. While reading, I came across the following scenario. I've never worked with a development team (just in school), so I would like to know what you think of this situation: Your boss has asked you to deliver software by a deadline that can only be met if the project team works overtime without pay. All team members have young children. Discuss whether you should accept this request from your boss, or should persuade the team to give their time to the organization rather than to their families. What could be significant factors in the decision? Also: as a programmer, you are offered a promotion to project manager, but your feeling is that you can make a more effective contribution in a technical role than in an administrative one. Discuss whether you should accept the promotion. Sometimes I sacrifice my free time to accomplish goals at work, so it's very important to me to know your opinion, based on your experience.

    Read the article

  • How to integrate game logic in game engines

    - by MahanGM
    Recently I've been working on a 2D game engine example in .NET with C#. My main problem is that I can't figure out how I should include the game logic within the game. Currently I have a base engine, which is a set of classes that run sub-systems such as rendering, sound, input, and core functionality. There is an editor which helps the user add resources, build levels, write scripts, and other stuff. I came up with the idea of using Reflection and CSharpCodeProvider from my editor to compile the written code. This way I can also get an executable of my product. This works quite well, but I would like to know what the proper solution and architecture for this is. My engine targets 2D platformers. The scripting language is C# right now, because I can't consider any other embeddable language for now. The game needs compilation, and CSharpCodeProvider is the only way for me to do it in the meantime.
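
    For reference, the compile-and-load round trip with CSharpCodeProvider is roughly as follows; the GameScript class name and Main entry point here are assumptions for illustration, not part of the API:

    ```csharp
    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    // Minimal sketch of the compile-then-reflect approach described above.
    class ScriptHost
    {
        public static void RunScript(string source)
        {
            using (var provider = new CSharpCodeProvider())
            {
                var options = new CompilerParameters
                {
                    GenerateInMemory = true   // load into this process, no .exe on disk
                };
                options.ReferencedAssemblies.Add("System.dll");

                CompilerResults results =
                    provider.CompileAssemblyFromSource(options, source);

                if (results.Errors.HasErrors)
                    throw new InvalidOperationException(
                        results.Errors[0].ErrorText);

                // Reflection: find the (hypothetical) entry point and call it.
                var type = results.CompiledAssembly.GetType("GameScript");
                type.GetMethod("Main").Invoke(null, null);
            }
        }
    }
    ```

    Setting GenerateExecutable on the CompilerParameters instead would produce the standalone executable mentioned above.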

    Read the article

  • How to create a fountain in UDK

    - by user36425
    I'm trying to make a fountain in my level in UDK. I made the base of the fountain using a cylinder builder brush, and now I'm trying to put water in it. I went to use the FluidSurfaceActor, but I noticed that it is square, while my fountain is a cylinder. Is there a way to change the shape of the FluidSurfaceActor to fit the builder brush shape, or another way to do this? Or is it hopeless, and do I have to make my fountain a cube? Here is a link/picture to the screenshot of what I'm talking about:

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base system and the /home directory. It started acting wonky, so I decided to do a reinstall with 12.10, intending to reinstall only to the base partition. After several seconds I realized that the installer was repartitioning the whole drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding about 100 unique partitions when I run it. They mostly tend to be HFS+ or Solaris /home (which I think is just an ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then do data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory. I'd rather not use photorec, since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

    Mon Dec 10 06:03:00 2012
    Command line: TestDisk
    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
    Compiler: GCC 4.4
    Compilation date: 2012-11-27T22:44:52
    ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
    /dev/sda: LBA, HPA, LBA48, DCO support
    /dev/sda: size 3907029168 sectors
    /dev/sda: user_max 3907029168 sectors
    /dev/sda: native_max 3907029168 sectors
    Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
    /dev/sr0 is not an ATA disk
    Hard disk list
    Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
    Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
    Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12
    Partition table type (auto): Intel
    Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
    Partition table type: EFI GPT
    Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
    Current partition structure:
    Bad GPT partition, invalid signature.
    search_part()
    Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
    recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
    recover_EXT2: s_blocksize=4096
    recover_EXT2: s_blocks_count 487593984
    recover_EXT2: part_size 3900751872
    MS Data 2048 3900753919 3900751872  EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
    Linux Swap 3900755968 3907028975 6273008  SWAP2 version 1, 3211 MB / 3062 MiB
    Results
    P MS Data 2048 3900753919 3900751872  EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
    P Linux Swap 3900755968 3907028975 6273008  SWAP2 version 1, 3211 MB / 3062 MiB
    interface_write()
    1 P MS Data 2048 3900753919 3900751872
    2 P Linux Swap 3900755968 3907028975 6273008
    search_part()
    Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
    recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
    recover_EXT2: s_blocksize=4096
    recover_EXT2: s_blocks_count 487593984
    recover_EXT2: part_size 3900751872
    MS Data 2048 3900753919 3900751872  EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
    block_group_nr 1
    recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
    recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
    recover_EXT2: s_blocksize=4096
    recover_EXT2: s_blocks_count 487593984
    recover_EXT2: part_size 3900751872
    MS Data 2046 3900753917 3900751872  EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
    block_group_nr 1
    recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
    recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
    recover_EXT2: s_blocksize=4096
    recover_EXT2: s_blocks_count 487593984
    recover_EXT2: part_size 3900751872
    MS Data 2048 3900753919 3900751872  EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
    block_group_nr 1
    recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
    recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
    recover_EXT2: s_blocksize=4096
    recover_EXT2: s_blocks_count 477915164
    recover_EXT2: part_size 3823321312
    MS Data 4094 3823325405 3823321312  EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
    block_group_nr 1
    ....snip......
    MS Data 2046 3900753917 3900751872  EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
    MS Data 2048 3900753919 3900751872  EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
    MS Data 4094 3823325405 3823321312  EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
    MS Data 4096 3823325407 3823321312  EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
    MS Data 7028840 7033383 4544  FAT12, 2326 KB / 2272 KiB
    Mac HFS 67856948 67862179 5232  HFS+ found using backup sector!, 2678 KB / 2616 KiB
    Mac HFS 67862176 67867407 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 67862244 67867475 5232  HFS+ found using backup sector!, 2678 KB / 2616 KiB
    Mac HFS 67867404 67872635 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 67867472 67872703 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 67872700 67877931 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 67937834 67948067 10234  [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
    Mac HFS 67938012 67948155 10144  HFS+ found using backup sector!, 5193 KB / 5072 KiB
    Mac HFS 67948064 67958297 10234  [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
    Mac HFS 67948070 67958303 10234  [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
    Mac HFS 67948152 67958295 10144  HFS+, 5193 KB / 5072 KiB
    Mac HFS 67958292 67968435 10144  HFS+, 5193 KB / 5072 KiB
    Mac HFS 67958300 67968533 10234  [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
    Mac HFS 67992596 67997827 5232  HFS+ found using backup sector!, 2678 KB / 2616 KiB
    Mac HFS 67997824 68003055 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 67997892 68003123 5232  HFS+ found using backup sector!, 2678 KB / 2616 KiB
    Mac HFS 68003052 68008283 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 68003120 68008351 5232  HFS+, 2678 KB / 2616 KiB
    Mac HFS 68008348 68013579 5232  HFS+, 2678 KB / 2616 KiB
    Solaris /home 84429840 123499141 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84429952 123499253 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84493136 123562437 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84493248 123562549 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84566088 123635389 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84566200 123635501 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84571232 123640533 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84571344 123640645 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84659952 123729253 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84660064 123729365 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84690504 123759805 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84690616 123759917 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84700424 123769725 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84700536 123769837 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84797720 123867021 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84797832 123867133 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84812544 123881845 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84812656 123881957 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84824552 123893853 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84824664 123893965 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84847528 123916829 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84847640 123916941 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84886840 123956141 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84886952 123956253 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84945488 124014789 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84945600 124014901 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84957992 124027293 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84958104 124027405 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84962240 124031541 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84962352 124031653 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84977168 124046469 39069302  UFS1, 20 GB / 18 GiB
    Solaris /home 84977280 124046581 39069302  UFS1, 20 GB / 18 GiB
    MS Data 174395467 178483851 4088385
    ..... snip (it keeps going on for quite a while)

    Read the article

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up, but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer was able to log into the site and see the progress of their projects. I do hope you know what I'm talking about, and that you'll be able to offer some suggestions, either of web-based services that offer something along these lines, or of some open source solution or the like? Thank you

    Read the article

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing and global supply chains much more complex today, and the pace of change ever increasing, leading organizations need better ways to orchestrate supply chain synchronization with their partner and customer base. The Mar/Apr '10 edition of EM magazine covers this topic in an article, "Strategising for Success" (pgs 26-27), and discusses the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and, ultimately, their end users: the customer! I'll paste the link to the article here as soon as I validate/confirm it.

    Read the article
