Search Results

Search found 28896 results on 1156 pages for 'simple greeter'.

Page 583 of 1156

  • How to keep track of previous scenes and return to them in libgdx

    - by MxyL
    I have three scenes: SceneTitle, SceneMenu, SceneLoad. (The difference between the title scene and the menu scene is that the title scene is what you see when you first turn on the game, and the menu scene is what you can access during the game, that is, after you've hit "play!" in the title scene.) I provide the ability to save progress and consequently load a particular game. An issue that I've run into is being able to easily keep track of the previous scene. For example, if you enter the load scene and then decide to change your mind, the game needs to go back to where you were before; this isn't something that can be hardcoded. Now, an easy solution off the top of my head is to simply maintain a scene stack, which basically keeps track of history for me. A simple transaction would be as follows:
    1. I'm currently in the menu scene, so the top of the stack is SceneMenu.
    2. I go to the load scene, so the game pushes SceneLoad onto the stack.
    3. When I return from the load scene, the game pops SceneLoad off the stack and initializes the scene that's currently at the top, which is SceneMenu.
    I'm coding in Java, so I can't simply pass around classes as if they were objects, so I've decided to implement an enum for each scene, put that on the stack, and then have my scene-managing class go through a list of if conditions to return the appropriate instance of the class. How can I implement my scene stack without having to do too much work maintaining it?
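    A minimal sketch of the enum-plus-stack idea follows (written in C# purely for illustration; a Java enum plus java.util.ArrayDeque maps onto it one-to-one, and the concrete scene constructors are only hinted at in comments):

        using System.Collections.Generic;

        enum SceneType { Title, Menu, Load }

        class SceneManager
        {
            private readonly Stack<SceneType> history = new Stack<SceneType>();

            public void Push(SceneType scene)
            {
                history.Push(scene);
                Show(scene);
            }

            public void Back()
            {
                history.Pop();              // discard the scene we are leaving
                Show(history.Peek());       // re-initialize whatever is now on top
            }

            private void Show(SceneType scene)
            {
                // The single place that maps the enum onto a concrete scene instance.
                switch (scene)
                {
                    case SceneType.Title: /* setScreen(new SceneTitle()); */ break;
                    case SceneType.Menu:  /* setScreen(new SceneMenu());  */ break;
                    case SceneType.Load:  /* setScreen(new SceneLoad());  */ break;
                }
            }
        }

    (As a side note, in Java you can in fact keep Class<? extends Scene> objects, or simple factory callbacks, on the stack instead of an enum, which removes the if/switch chain entirely.)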

    Read the article

  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel? For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however... It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites, maybe gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing, though. I'm also a front-end developer specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:
        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }
    Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question, rather than subclassing and overriding the method in question and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers it looks like this contrived example is a poor one for a number of different reasons. The main intent of the example was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
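    For reference, the extension route described above would look roughly like the sketch below. The subclass name is made up, and it assumes GetOvertimeFactor is declared virtual in the base class so it can be overridden:

        // Extend rather than modify: override the extension point in a subclass,
        // then re-wire the IoC container to hand out the new type.
        // CaliforniaPaycheckCalculator is an illustrative name only.
        class CaliforniaPaycheckCalculator : PaycheckCalculator {
            protected override decimal GetOvertimeFactor() { return 1.5M; }
        }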

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 types while maintaining spatial considerations. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check if those projections overlap, but between the severe lack of XNA-specific help online and pseudocode everywhere that omits key parts of the algorithm, googling has been of little help. I'm aware of HOW to project a vector, but the way that I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex but whatever XNA calculates. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA calculates Vector2s as actual vectors and not just as random points.
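    For what it's worth, a vertex position can be treated as a vector from the world origin, and for SAT only the relative overlap of the projected intervals matters, so the raw dot products are enough. A minimal sketch (not production code) of projecting a set of corners onto an axis:

        using Microsoft.Xna.Framework;

        static void Project(Vector2[] corners, Vector2 axis, out float min, out float max)
        {
            Vector2 unitAxis = Vector2.Normalize(axis);        // axis must be non-zero
            min = max = Vector2.Dot(corners[0], unitAxis);
            for (int i = 1; i < corners.Length; i++)
            {
                float p = Vector2.Dot(corners[i], unitAxis);
                if (p < min) min = p;
                if (p > max) max = p;
            }
        }

    Two convex shapes are separated if, on any tested axis, their [min, max] intervals from this projection fail to overlap.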

    Read the article

  • Steps to send patch to Launchpad

    - by Alois Mahdal
    With a Git/GitHub background and knowing very little about the Bazaar VCS, I would like to occasionally report a bug to Launchpad and even send a patch. I'd like to do it in a "proper" way so that it's ready for merging or improvement while not getting in the way. I can't seem to find a decent, simple how-to suited to my needs. So, what I did so far: I have created a Launchpad account, reported the bug, installed Bazaar and set up SSH keys, etc. Now, if it were GitHub, I'd fork the repo, clone the forked repo, create a sanely named branch and do the work, commit + push, and create a pull request using the GitHub web UI. But it's not GitHub, and both the LP and Bazaar architectures seem quite different from their GitHub/Git counterparts. So could a kind soul save me from drowning in tons of documents and compile a straightforward sequence of steps, mainly for the second part? Possibly including the relevant CLI commands where they are needed? Edit: It seems that I should clarify whether I'm asking specifically about Ubuntu packages (whatever that means) or Launchpad packages. I don't really care much about the distinction between Ubuntu packages and non-Ubuntu packages. Any software could be in Ubuntu today and out of it tomorrow, or vice versa. The development is what matters much more than the distribution. So I was assuming that not every single package distributed in Ubuntu is hosted on Launchpad, and that an "official" or "default" workflow for Launchpad exists (well, if all devs can agree on using Bazaar, why couldn't most of them agree on a patching workflow?), so I'm asking about the Launchpad way, not the Ubuntu way. And I chose AU because the intersection is vast, so I guess it's pretty on topic here.

    Read the article

  • Application Scope v's Static - Not Quite the same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider. What's the difference between storing, say, a Map of reference information as a static as opposed to storing the same map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference; in both cases the information is confined to the current JVM and potentially visible to your app code (note that Application Scope is not magically propagated across a cluster, you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class-loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An Application Scoped object, in JSF terms, is much more ring-fenced and is only visible to the web app itself, not other web apps running on the same server and not directly to the business model layer if that is running in the same VM. So given that I'm a big fan of coding applications to say what I mean, using Application Scope appeals because it explicitly states how I expect the data to be used and provides a more explicit statement about visibility and indeed dependency, as I'd generally explicitly inject it where it is needed. Alternative viewpoints / thoughts are, as ever, welcomed...

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. So I'm creating a floating platform that I would like to simply travel from one designated point to another, then return back to the first, and just pass between the two in a straight line. However, just to make it a little more interesting, I want to add a few rules to the platform. I'm coding it to travel in multiples of whole tile lengths of the world data, so if the platform is not stationary, it will travel at least one whole tile width or tile height. Within one tile length, I would like it to accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its destination, I would like it to slow to a stop at a given tile coordinate and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to a value storing the platform's current speed once it reaches one tile's length of distance (assuming that the platform is traveling more than one tile length, but to keep things simple, let's just assume it is). But then the question is: what would the correct acceleration value be to produce this effect? How would I find that value?
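    For reference, constant-acceleration kinematics give that value directly: starting from speed v and decelerating uniformly to rest over a distance d requires a deceleration of a = v^2 / (2d) (from v_f^2 = v_0^2 + 2ad with v_f = 0). A tiny illustrative sketch, assuming speed and distance use the same units:

        // Deceleration needed to come to rest exactly 'distance' units ahead
        // while currently moving at 'speed'; apply it opposite to the motion.
        static float StoppingDeceleration(float speed, float distance)
        {
            return (speed * speed) / (2f * distance);
        }

    In practice, discrete time steps won't land exactly on the target coordinate, so it's still worth snapping the platform to the tile coordinate (and zeroing its velocity) once it gets within a small epsilon.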

    Read the article

  • Limiting the speed of the mouse cursor

    - by idlewire
    I am working on a simple game where you can drag objects around with the mouse cursor. As I drag an object around quickly, I notice there is some juddering, which seems to be due to the fact that I can move the mouse cursor faster than the game's update/draw. So, although I maintain the offset from where the player initially clicked on the object, the mouse's relative position to the object shifts around slightly before settling as I move the object very quickly. The only way I have found to get smooth, exact 1:1 movement is to turn both IsFixedTimeStep and SynchronizeWithVerticalRetrace to false. However, I'd rather not have to do that. I have also tried making a custom mouse cursor, hiding the real mouse, taking the real mouse delta and clamping it to a maximum speed. Here is the problem: In windowed mode, the "real" mouse cursor moves off the window while the custom mouse cursor (since its movement is being scaled) is still somewhere inside the game window. This becomes bizarre and is obviously not desired, as clicking at this point means clicking on things outside the game window. Is there any way to accomplish this in windowed mode? In fullscreen mode, the "real" mouse cursor is bounded by the edges of the screen, so I get to a point where there is no more mouse delta, yet my custom cursor is still somewhere in the middle of the screen and hence can't move further in that direction. If I wanted to clamp it to the edge of the screen when the real cursor is at the edge, then I would get an abrupt jump to the edge of the screen, which isn't desired either. Any help would be appreciated. I'd like to be able to limit the speed of the mouse, but I would also appreciate help with the first issue (the non-smooth relative offset between mouse cursor movement and object movement).
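    For what it's worth, a common way around both edge problems is to re-center the hidden real cursor every frame and drive a virtual cursor purely from the clamped per-frame delta, so the real cursor never reaches a window or screen edge. A rough XNA-style sketch (illustrative only; virtualCursor, centerX/centerY and the viewport size are assumed fields, and the window must have focus for this to behave):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        void UpdateCursor()
        {
            MouseState state = Mouse.GetState();
            Vector2 delta = new Vector2(state.X - centerX, state.Y - centerY);

            const float maxSpeed = 25f;                 // max cursor travel per frame, in pixels
            if (delta.Length() > maxSpeed)
                delta = Vector2.Normalize(delta) * maxSpeed;

            virtualCursor += delta;
            virtualCursor.X = MathHelper.Clamp(virtualCursor.X, 0, viewportWidth);
            virtualCursor.Y = MathHelper.Clamp(virtualCursor.Y, 0, viewportHeight);

            Mouse.SetPosition(centerX, centerY);        // next frame's reading is a pure delta again
        }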

    Read the article

  • Functional programming compared to OOP with classes

    - by luckysmack
    I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now. I can see how I would build a fairly complex app in OOP. Each object would know how to do the things that object does, or anything its parent class does as well. So I can simply call Person().speak() to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items, but a function only does one specific thing. Would I simply have a say() function floating around and call it with the equivalent of a Person() argument so I know what kind of thing is saying something? So I can see the simple cases; how would I do the equivalent of OOP and objects in functional programming, so I can modularize and organize my code base? For reference, my primary experience with OOP is Python, PHP, and some C#. The languages I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala. Basic example (Python):
        class Animal(object):
            def say(self, what):
                print(what)

        class Dog(Animal):
            def say(self, what):
                super().say('dog barks: {0}'.format(what))

        class Cat(Animal):
            def say(self, what):
                super().say('cat meows: {0}'.format(what))

        dog = Dog()
        cat = Cat()
        dog.say('ruff')
        cat.say('purr')

    Read the article

  • How should I describe the process of learning someone else's code? (In an invoicing situation.)

    - by MattyG
    I have a contract to upgrade some in-house software for a large company. The company has requested multiple feature additions and a few bug fixes. This is my first freelance-style job. First, I needed to become familiar with how the application worked, so I learnt it as if I were a user. Next, I had to learn how the software worked. I started with broad concepts, and then narrowed down into the necessary detail before working on each bug fix and feature. At least at the start of the project, it took me a lot longer to learn the existing code than it did to write the additional features. How can I describe the process of learning the existing code on the invoice? (This part of the company usually does things in-house, so it doesn't have much experience dealing with software contractors like me, and I fear they may not understand the overhead of learning someone else's code.) I don't want to just tack the learning time onto the actual feature upgrade, because in some cases this would make a 'simple task' look like it took me way too long. I want to break the invoice into relevant steps and communicate that I'm charging for the large overhead of learning someone else's code before being able to add my own to it. Is there a standard way of describing this sort of activity when billing for a job?

    Read the article

  • Ubuntu Server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched for other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450. It only recognizes 1 of the 4 cores. I tried modifying the GRUB config, but it didn't work, and in the machine's BIOS I didn't find anything useful.
    CPU:
        root@darwin:~# cat /proc/cpuinfo
        processor : 0
        vendor_id : GenuineIntel
        cpu family : 6
        model : 23
        model name : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
        stepping : 10
        cpu MHz : 2992.180
        cache size : 6144 KB
        physical id : 0
        siblings : 1
        core id : 0
        cpu cores : 1
        apicid : 0
        initial apicid : 0
        fpu : yes
        fpu_exception : yes
        cpuid level : 13
        wp : yes
        flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips : 5984.36
        clflush size : 64
        cache_alignment : 64
        address sizes : 38 bits physical, 48 bits virtual
        power management:
    GRUB:
        root@darwin:~# cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX="noapic nolapic" #was with acpi=off
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
    Complete dmesg: too long, posted on pastebin: http://pastebin.com/bsKPBhzu

    Read the article

  • Learning node.js

    - by john smith
    I am not sure if this is the right place to ask, but I thought it was the most suitable. I recently graduated from university. I learned the full PHP stack, basically all the LAMP stuff, obviously without counting all the other subjects. I had not even got my degree and this whole node.js thing was booming out of nowhere. You can imagine how one can feel about this; the story is always the same: you never stop learning and studying. So I recently got my hands on node.js: reading books, tutorials, and everything imaginable on the internet. The problem is one and simple: this is nowhere near having a teacher standing next to you, helping you understand and solve your problems, especially when all you can do is post your doubts on a website and patiently wait for replies. It's not that it isn't good, it's just much slower than what I just described above. So, in short: is there a place where one can find someone willing to teach you about such content? This would obviously be done via remote means, like Skype and such. Can anyone here point me in the right direction? Or just downvote me for being on the wrong website? Thanks in advance.

    Read the article

  • Network: Incoming connections work, outgoing fails

    - by anirvan
    I recently set up my own server at home to run Ubuntu 12.04 server edition. On booting up, I noticed that a message related to networking came up and the boot process paused. The message read something like "waiting for network configuration" and, after a while, "waiting another 60 seconds...". After booting, I realised that any command which requires a network connection was not working: ping, apt-get install, etc. On firing the ifup eth0 command, I get the error "RTNETLINK answers: File exists. Failed to bring up eth0." I also realised, while searching the web for this problem, that this is probably one of the most common networking-related issues; however, most of the questions are about setting up multiple IPs for the same machine. ifdown eth0 also fails, stating that eth0 is not configured. My /etc/network/interfaces file has a simple configuration for a static IP:
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xx
            netmask xx.xx.xx.xx
            broadcast xx.xx.xx.xx
            gateway xx.xx.xx.xx
            dns-nameservers xx.xx.xx.xx
    The strangest part of this problem is that, while I can't connect to anything outside, I can ping this particular server using the static IP configured in the interfaces file, and I can even SSH into it! I'm really at my wits' end with this problem, and any guidance is much appreciated. Thanks!

    Read the article

  • How do you convince the client their application's backend needs a rewrite?

    - by Richard DesLonde
    I have been supporting a LOB winforms application for a client the last 3 years. The application is built with a simple monolithic architecture and uses .NET 2.0. The application is a core part of their operations and its longevity is paramount. It needs to evolve with their evolving business processes, as well as implement improved functionality etc....this brings me to believe that this application needs an overhaul of sorts on the back-end. The problem is changing a back-end is "invisible"...i.e. the user never actually sees it. It's a quality of the system that is changing (stability, maintainability, reliability, longevity), not some functional requirement that will be easily seen...i.e. the ROI is not obvious. There is a lot of new functionality to be added to the front-end as well (user experience). I am considering a strategy of changing the back-end over time...i.e. when making a change or adding a feature to the front-end, change those components in the back-end that are affected, eventually you get to everything. How do I convince the client that we need to rebuild the back-end?

    Read the article

  • Turning a problem into a data model

    - by Fogmeister
    OK, I have an app that I'm creating, but I'm just really not sure how to approach the problem. The idea is fairly simple; I'm just not sure how to wrap it in a data model (or even if I should). TBH I feel like I'm making it more complicated than it needs to be. How it works: the app will have circles along the top in a row that need to be connected to circles along the bottom in a row. 10 circles at the top, 10 at the bottom, one connection per pair of dots. Anyway, I can get the dots to connect; I'm just not sure how to wrap it in a data model so that I can analyse what has been connected and see whether it is right or not. The circles will be questions and answers. I can make an array of question objects with question and answer properties and display these as the dot pairs. I'm just not sure how to record which questions have been connected to which answers. It is valid for a user to connect a wrong answer, as they all get checked at the end. I was thinking of using SpriteKit, but this isn't a restriction; I could use UIKit or something else. TBH, this question is fairly language-free as I'm just after a way of modelling it.
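    Since the question is language-free, here is one minimal, illustrative way to model it (sketched in C#; every name is made up): keep the ordered list of question/answer pairs, a separately ordered list of the answers as shown on the bottom row, and record the user's connections as a map from top-circle index to bottom-circle index.

        using System.Collections.Generic;
        using System.Linq;

        class Pair
        {
            public string Question;
            public string CorrectAnswer;
        }

        class Board
        {
            public List<Pair> Pairs = new List<Pair>();                 // index = top circle
            public List<string> BottomRow = new List<string>();         // index = bottom circle (shuffled answers)
            public Dictionary<int, int> Connections = new Dictionary<int, int>();

            public void Connect(int topIndex, int bottomIndex)
            {
                Connections[topIndex] = bottomIndex;                    // one connection per dot
            }

            public int CountCorrect()
            {
                return Connections.Count(c => Pairs[c.Key].CorrectAnswer == BottomRow[c.Value]);
            }
        }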

    Read the article

  • Best Architecture for ASP.NET WebForms Application

    - by stack man
    I have written an ASP.NET WebForms portal for a client. The project has kind of evolved rather than being properly planned and structured from the beginning. Consequently, all the code is mashed together within the same project and without any layers. The client is now happy with the functionality, so I would like to refactor the code such that I will be confident about releasing the project. As there seem to be many differing ways to design the architecture, I would like some opinions about the best approach to take.
    FUNCTIONALITY: The portal allows administrators to configure HTML templates. Other associated "partners" will be able to display these templates by adding IFrame code to their site. Within these templates, customers can register and purchase products. An API has been implemented using WCF allowing external companies to interface with the system also. An Admin section allows Administrators to configure various functionality and view reports for each partner. The system sends out invoices and email notifications to customers.
    CURRENT ARCHITECTURE: It is currently using EF4 to read/write to the database. The EF objects are used directly within the aspx files. This has facilitated rapid development while I have been writing the site, but it is probably unacceptable to keep it like that as it is tightly coupling the db with the UI. Specific business logic has been added to partial classes of the EF objects.
    QUESTIONS: The goal of refactoring will be to make the site scalable, easily maintainable and secure.
    1) What kind of architecture would be best for this? Please describe what should be in each layer, whether I should use DTOs / POCO / the Active Record pattern, etc.
    2) Is there a robust way to auto-generate DTOs / BOs so that any future enhancements will be simple to implement despite the extra layers?
    3) Would it be beneficial to convert the project from WebForms to MVC?
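    On question 1, one common shape for this kind of refactor (illustrative only, all names made up) is a thin service/repository layer between the pages and EF, with DTOs crossing the boundary so the aspx files never touch EF entities directly:

        // Sketch of one possible layering, not a prescription.
        public class ProductDto
        {
            public int Id;
            public string Name;
            public decimal Price;
        }

        public interface IProductRepository
        {
            ProductDto GetById(int id);
            void Save(ProductDto product);
        }

        public class ProductService
        {
            private readonly IProductRepository _repository;

            public ProductService(IProductRepository repository) { _repository = repository; }

            public ProductDto GetProduct(int id) { return _repository.GetById(id); }
        }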

    Read the article

  • How to calculate square root in PHP [explained] [on hold]

    - by Enes Imsirovic
    First, the code! Don't forget to embed jQuery!
        <html>
        <head>
        <title>Simple jQuery and PHP Square Root example</title>
        <script src="js/jquery-1.10.1.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            $('#form').submit(function(){
                var number = $('#number').val();
                $.ajax({
                    type: "post",
                    url: "calculate.php",
                    data: "number=" + number,
                    success: function(msg){
                        $('#result').hide();
                        $("#result").html("<h3>" + msg + "</h3>").fadeIn("slow");
                    }
                });
                return false;
            });
        });
        </script>
        </head>
        <body>
        <form id="form" action="calculate.php" method="post">
            Enter number: <input id="number" type="text" name="number" />
            <input id="submit" type="submit" value="Calculate Square Root" name="submit"/>
        </form>
        <p id="result"></p>
        </body>
        </html>
    The second file, calculate.php, which the form above posts to:
        <?php
        if ($_POST['number'] == null) {
            echo "Please Enter a Number";
        } else {
            if (!is_numeric($_POST['number'])) {
                echo "Please enter only numbers";
            } else {
                echo "Square Root of " . $_POST['number'] . " is " . sqrt($_POST['number']);
            }
        }
        ?>
    This is chiefly for beginners, to see the power of PHP :) xD Load this on your localhost. PHP files and JS: https://mega.co.nz/#!Et8zWSBb!KX2PFxa2Pzw_l-wi6QU8xi_eKTlHbtQuBsT_DvXrifk In the end it looks like this: http://imgur.com/vNnDRQ3

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people would want to do something like var server = new Server(){ ... } and not have to worry about creating the many dependencies and the graph of dependencies that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further). Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So. I could hardcode the dependencies. I don't want to do that; we lose our testing! I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable. I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance. Having two constructors which are implicitly connected rather than explicitly would be bad design in general in my book. At the moment that's about the least evil I can think of. Opinions? Wisdom?
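    For reference, the "default constructor plus injectable constructor" option above usually ends up looking something like this sketch (the dependency types are hypothetical placeholders):

        // IRequestHandler, IConnectionListener and their defaults are placeholder names for this sketch.
        public class Server
        {
            private readonly IRequestHandler _handler;
            private readonly IConnectionListener _listener;

            // What almost every caller uses: sensible defaults, no container needed.
            public Server() : this(new DefaultRequestHandler(), new TcpConnectionListener()) { }

            // What the tests (and the IoC container) use.
            public Server(IRequestHandler handler, IConnectionListener listener)
            {
                _handler = handler;
                _listener = listener;
            }
        }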

    Read the article

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS for the experience, and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would contain information about what the parameters should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then in every function I would still need to run some sort of parameter validation that parses the parameter name to determine what the value can be and then validates against it. Granted, this could be handled by passing the list of parameters to a function, but that still needs to happen, and one of my goals is to remove the parameter validation from the function itself, so that the function contains only the code that accomplishes the intended task, without the additional validation code. Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function call anyway, so I should stick with doing that?
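    One illustrative direction (a sketch only, with made-up attribute and helper names) is to declare the constraints on the parameters themselves and funnel the checking through a single shared helper, so the rules live with the signature rather than in the body:

        using System;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Parameter)]
        class NotEmptyAttribute : Attribute { }

        static class Guard
        {
            // Check 'args' against the attributes declared on the method's parameters.
            public static void Check(MethodBase method, params object[] args)
            {
                ParameterInfo[] parameters = method.GetParameters();
                for (int i = 0; i < parameters.Length; i++)
                {
                    if (parameters[i].IsDefined(typeof(NotEmptyAttribute), false) &&
                        string.IsNullOrEmpty(args[i] as string))
                    {
                        throw new ArgumentException("Parameter '" + parameters[i].Name + "' must not be empty.");
                    }
                }
            }
        }

        // Usage inside a function: one line instead of a block of checks.
        //   void SavePage([NotEmpty] string title) {
        //       Guard.Check(MethodBase.GetCurrentMethod(), title);
        //       ...
        //   }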

    Read the article

  • Relationship between the model and the renderer

    - by acrilige
    I tried to build a simple graphics engine and ran into these problems: I have a list of models that I need to draw, and an object (renderer) that implements an IRenderer interface with the method DrawObject(Object* obj). The implementation of the renderer depends on the graphics library being used (OpenGL/DirectX). First question: the model should not know anything about the renderer implementation, but in that case where can I hold (cache) information that depends on the renderer implementation? For example, if the model has this definition:
        class Model
        {
        public:
            Model();
            Vertex* GetVertices() const;
        private:
            Vertex* m_vertices;
        };
    what is the best way to cache, for example, the vertex buffer of this model for DX11? Hold it in the renderer object? Second question: what is the best way for the model to tell the renderer HOW it must be rendered (for example with a texture, bump mapping, or maybe just in one color)? I thought it could be done with flags, like this:
        model->SetRenderOptions(RENDER_TEXTURE | RENDER_BUMPMAPPING | RENDER_LIGHTING);
    and then check each flag in the Renderer::DrawModel method. But it looks like this will become unwieldy as the number of options grows...
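    On the first question, one common arrangement (sketched below in C# for brevity, with made-up type names; Model stands in for the class from the question) is to let the renderer own a cache of API-specific resources keyed by model, so the model itself stays API-agnostic:

        using System.Collections.Generic;

        class Dx11VertexBuffer { /* would wrap the real ID3D11Buffer; placeholder here */ }

        class Dx11Renderer
        {
            private readonly Dictionary<Model, Dx11VertexBuffer> _cache =
                new Dictionary<Model, Dx11VertexBuffer>();

            public void DrawObject(Model model)
            {
                Dx11VertexBuffer buffer;
                if (!_cache.TryGetValue(model, out buffer))
                {
                    buffer = new Dx11VertexBuffer();    // built once from model.GetVertices()
                    _cache[model] = buffer;
                }
                // ... bind 'buffer' and issue the draw call ...
            }
        }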

    Read the article

  • Dojo and Separate JavaScript File

    - by Bunch
    For a project I needed to use the ArcGIS API for some mapping. To use this you need to use Dojo, but in this case all it really comes down to is adding some require lines and an addOnLoad call on your web page. At first everything was working great: the maps rendered and the various layers populated as needed. Once it was working I started moving the various JavaScript functions into their own files to keep everything nice and neat. Then the problems started; mainly, the map would not show up any more. So that was a pretty big problem. Luckily the fix was pretty simple: just move the dojo.addOnLoad line into its own script tag. If I had the dojo.addOnLoad in the same script block as the various require lines it would not work as expected.
    Works:
        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
        </script>
        <script type="text/javascript">
            dojo.addOnLoad(init);
        </script>
    Does not work:
        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
            dojo.addOnLoad(init);
        </script>
    Technorati Tags: JavaScript, Dojo

    Read the article

  • The Enterprise Side of JavaFX: Part Two

    - by Janice J. Heiss
    A new article, part of a three-part series, now up on the front page of otn/java, by Java Champion Adam Bien, titled “The Enterprise Side of JavaFX,” shows developers how to implement the LightView UI dashboard with JavaFX 2. Bien explains that “the RESTful back end of the LightView application comes with a rudimentary HTML page that is used to start/stop the monitoring service, set the snapshot interval, and activate/deactivate the GlassFish monitoring capabilities.”He explains that “the configuration view implemented in the org.lightview.view.Browser component is needed only to start or stop the monitoring process or set the monitoring interval.”Bien concludes his article with a general summary of the principles applied:“JavaFX encourages encapsulation without forcing you to build models for each visual component. With the availability of bindable properties, the boundary between the view and the model can be reduced to an expressive set of bindable properties. Wrapping JavaFX components with ordinary Java classes further reduces the complexity. Instead of dealing with low-level JavaFX mechanics all the time, you can build simple components and break down the complexity of the presentation logic into understandable pieces. CSS skinning further helps with the separation of the code that is needed for the implementation of the presentation logic and the visual appearance of the application on the screen. You can adjust significant portions of an application's look and feel directly in CSS files without touching the actual source code.”Check out the article here.

    Read the article

  • Do your own design jobs and make it look professional

    - by Webgui
    Looks and design are becoming more and more important to customers and organizations, even when we deal with internal enterprise applications. However, many web developers who work on business apps end up not investing resources in the design. The reason may be that they ran out of time, so under their client's pressure there was no choice but to skip past the design process. In some cases, especially in small software houses, there are no trained professional designers and the developers have to do both jobs. Since designing web applications can be very complex and requires mastering several languages and concepts, unless a big budget was allocated to the project it is very hard to produce a professional custom design. For those exact reasons, Visual WebGui integrated Point & Click Design Tools within its Web/Cloud Development Platform. Those tools allow developers to customize the UI look of the applications they build in a visual way that is fairly simple and doesn't require coding or mastering HTML, CSS and JavaScript in order to design. The development tools also give professional designers an easier way to work with the developers and quickly create new skins. So if you are interested in getting your design job done much more easily, you should probably tune in for about an hour and find out how. Click here to register: https://www1.gotomeeting.com/register/740450625

    Read the article

  • Three New Videos on Social Development

    - by Bob Rhubart
    By now it should be clear to even the most tenacious Luddite that the social media phenomenon is no mere fad. Those ubiquitous icons for Facebook and Twitter and other social networks are little beacons of disruptive change signalling yet again that the 20th century is over, dude. And that presents an opportunity for software developers with the necessary insight and expertise to tap into and expand social platforms for forward-thinking organizations. If you're a developer and you're interested in exploiting these emerging opportunities you'll want to check out three new videos that focus on software development for social platforms. Developing with Facebook: An Introduction to Social Design James Pearce, Facebook's head of Mobile Developer Relations, provides an overview of the Facebook platform and the underlying APIs that are available to the developer community. Building on the LinkedIn Platform: Content Amplified Adam Trachtenberg, Director of LinkedIn's Developer Network, discusses how you can make it simple for a professional audience to discover and distribute your content on LinkedIn. Emergence of the Social Enterprise Roland Smart, Oracle's VP of Social Marketing, shares Oracle’s vision for the social-enabled enterprise and highlights the role developers will play in the next phase of enterprise development. OTN has also created the Oracle Social Developer Community, a new Facebook page devoted to the promotion of community conversation and resources to support Social Developers. If you're working on a social development project, visit the page and tell us about it.

    Read the article

  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend. I increased the trace file upload size to 20MB. We’ve processed over half a million rows of trace data so far and I’m confident this won’t kill the server. I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file. I only added these two. I’m pretty sure average writes isn’t that important. I’m still thinking about average duration. I’m trying to balance showing you what you need with a clean, simple interface. Plus I have a way to see the averages that I describe further down. TraceTune now keeps the last 10 files that you’ve uploaded and will give you some basic details about each file. I think the last change I made is the most interesting: for each SQL statement, I show the history of that statement. You’ll see each trace file where this statement was found. It will list the averages for CPU, reads, writes and duration. This will quickly show you if you’re improving the performance of that query. In my screenshot above you can see that even though the execution counts are very different, the averages are consistent. If you want to see what queries are consuming the most resources on your server, give TraceTune a try.

    Read the article
