Search Results

Search found 3282 results on 132 pages for 'individual'.


  • Migrating a Windows Server to Ubuntu Server to provide Samba, AFP and Roaming Profiles

    - by Dan
    I'm replacing our old Windows XP Pro office server with an HP MicroServer running Ubuntu Server 12.04 LTS. I'm not a Linux expert, but I can find my way around a terminal prompt; I'm a Mac user by choice. The office uses a mix of Windows XP Pro machines and OS X Lion laptops. I included Samba during installation, and I'm planning on using Netatalk for AFP and Bonjour sharing. I'd quite like Samba to make the server appear in 'My Network Places' on the Windows machines the way Bonjour makes it appear in Finder on the Macs, if this is possible.

    I want to get to the point where a user logging into Windows gets connected to the Ubuntu server (do they need an Ubuntu user account?), which gets them their shares and their Windows user profile (though a standard profile across users would do). The upshot is centralised control of user accounts (e.g. if a person leaves, killing their account on the server stops their Windows logon and their access to Samba shares) and ensuring files aren't stored on the individual machines, for backup/security purposes. I want to make this as simple as possible, so I don't want loads of stuff I don't need. I just can't figure out:

    What I need at the server end:
    - Will Samba be enough (it's already installed as part of the initial installation), or will I need to mess around with LDAP (and how does LDAP interact with Samba)?
    - For someone of moderate Linux competence like me, is there a package that offers easy admin of user accounts, e.g. a GUI like phpLDAPadmin (if LDAP is necessary)?

    How to configure the XP machines:
    - Do the XP machines need to be joined to a domain (i.e. does the server need to act as a domain controller)? I've no idea, really.
    - Roaming profiles look like they offer putting the user's files on the server rather than the machine itself, along with a profile that follows the user from machine to machine.

    Syncing Mac users' home folders with the server: this is less of a concern because I can set up Time Machine if it comes to it, but I'd appreciate any recommendations on having the Mac home folders synced to the server.
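
    A minimal smb.conf sketch of the classic NT4-style Samba domain setup this describes (the workgroup name, paths and shares are assumptions, not a tested configuration):

        # Sketch only: NT4-style Samba domain with roaming profiles.
        # Workgroup name and paths are assumptions.
        [global]
           workgroup = OFFICE
           security = user
           domain logons = yes
           domain master = yes
           logon path = \\%L\profiles\%U
           logon drive = H:

        [netlogon]
           path = /srv/samba/netlogon
           read only = yes

        [profiles]
           path = /srv/samba/profiles
           read only = no
           create mask = 0600
           directory mask = 0700

    In this classic setup each Samba user is backed by a local Unix account, which also covers the centralised-control requirement: disabling the account on the server blocks both the Windows logon and access to the shares.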

    Read the article

  • Transient VO: A Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows, which essentially share a common set of parameters. Initially we resorted to pageFlowScope variables, but those are tightly coupled to their individual task flows. So how should the communication happen? The alternatives we brainstormed are:

    1. Use an ADF contextual event. This is a powerful feature for such use cases, but there is a considerable cost involved, so before resorting to it, make sure you have a good enough reason. It does a server round trip, and wiring up the event and its listener also requires your attention.

    2. Use a transient VO with shared data control scope. With shared data control scope, the transient VO rows are persistent across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating and deleting a row. You also have to make sure the VO row is initialized once per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render.

    Hope this helps; this should be a common use case across apps.
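
    A minimal sketch of the utility methods mentioned above, assuming the transient VO is exposed through a ViewObjectImpl subclass (class and attribute names are illustrative):

        import oracle.jbo.Row;
        import oracle.jbo.server.ViewObjectImpl;

        public class SharedParamsVOImpl extends ViewObjectImpl {

            // Create the single shared row if it does not exist yet,
            // e.g. called once per HTTP request from a bookmark method.
            public void initSharedRow() {
                executeQuery();              // prepares the (empty) transient row set
                if (first() == null) {
                    Row row = createRow();   // new transient row
                    insertRow(row);
                }
            }

            public void setParam(String attrName, Object value) {
                Row row = first();
                if (row != null) {
                    row.setAttribute(attrName, value);
                }
            }

            public Object getParam(String attrName) {
                Row row = first();
                return (row == null) ? null : row.getAttribute(attrName);
            }
        }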

    Read the article

  • Oracle Buys Compendium - Adds Leading Content Marketing Platform to Oracle Eloqua Marketing Cloud

    - by Richard Lefebvre
    News Facts

    Oracle today announced that it has acquired Compendium, a cloud-based content marketing provider that helps companies plan, produce and deliver engaging content across multiple channels throughout their customers’ lifecycle. Compendium’s data-driven approach aligns relevant content with customer data and profiles to help companies more effectively attract prospects, engage buyers, accelerate conversion of prospects to opportunities, increase adoption, and drive revenue growth. Compendium’s innovative solution complements Oracle’s industry leading Eloqua Marketing Cloud, which is a part of Oracle’s comprehensive Customer Experience solution. The combination of Oracle Eloqua Marketing Cloud with Compendium is expected to enable modern marketers to align persona-based content to customers’ digital body language to increase “top-of-funnel” customer engagement, improve the quality of sales leads, realize the highest return on their marketing investment, and increase customer loyalty. More information on this announcement can be found at http://www.oracle.com/compendium.

    Supporting Quotes

    “As customers increasingly access information through online and mobile channels, the buying process is shifting from sales-driven to marketing-driven. Now, more than ever, marketers are challenged to deliver relevant and engaging content across multiple channels and throughout the customer lifecycle,” said Thomas Kurian, Executive Vice President, Oracle Development. “By adding Compendium’s content marketing platform to Oracle Eloqua Marketing Cloud, customers will be able to capture more prospects, improve the customer experience and drive top line revenue.”

    “Oracle Eloqua Marketing Cloud is uniquely positioned to capture a prospect’s digital body language to help companies know each buyer’s demographics, behaviors and influencers,” said Chris Baggott, Compendium CEO. “By combining this buyer profile with Compendium’s data-driven content marketing platform, marketers will be able to deliver the right content, to the right individual across the right channel at the right time. We are very excited to now be a part of the industry’s most complete marketing cloud solution, giving us a global stage to deliver innovative content marketing solutions.”

    Supporting Resources
    - About Oracle and Compendium
    - General Presentation
    - Customer and Partner Letter
    - FAQ

    Read the article

  • Can unit tests verify software requirements?

    - by Peter Smith
    I have often heard that unit tests help programmers build confidence in their software. But are they enough for verifying that software requirements are met? I am losing confidence that software is working just because the unit tests pass. We have experienced some failures in production deployment due to an untested/unverified execution path. These failures are sometimes quite large, impact business operations, and often require an immediate fix. The failure is very rarely traced back to a failing unit test. We have large unit test bodies with reasonable line coverage, but almost all of these focus on individual classes and not on their interactions. Manual testing seems to be ineffective because the software being worked on is typically large, with many execution paths and many integration points with other software. It is very painful to manually test all of the functionality, and it never seems to flush out all the bugs. Are we doing unit testing wrong when it seems we still are failing to verify the software correctly before deployment? Or do most shops have another layer of automated testing in addition to unit tests?
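
    A hedged illustration of the gap described above: a test that exercises two classes together rather than one in isolation. The collaborators are hypothetical and stubbed inline so the example is self-contained:

        import static org.junit.Assert.assertEquals;
        import java.util.HashMap;
        import java.util.Map;
        import org.junit.Test;

        public class OrderFlowIntegrationTest {
            // Hypothetical minimal collaborators, defined here only to make the sketch runnable.
            static class InventoryClient {
                private final Map<String, Integer> reserved = new HashMap<>();
                void reserve(String sku, int qty) { reserved.merge(sku, qty, Integer::sum); }
                int reserved(String sku) { return reserved.getOrDefault(sku, 0); }
            }
            static class OrderService {
                private final InventoryClient inventory;
                OrderService(InventoryClient inventory) { this.inventory = inventory; }
                void place(String sku, int qty) { inventory.reserve(sku, qty); }
            }

            @Test
            public void placingAnOrderReservesStock() {
                InventoryClient inventory = new InventoryClient();   // real collaborator, not a mock
                OrderService orders = new OrderService(inventory);
                orders.place("SKU-1", 2);
                assertEquals(2, inventory.reserved("SKU-1"));        // verifies the cross-class path
            }
        }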

    Read the article

  • Dealing with selfish team member(s)

    - by thegreendroid
    My team is facing a difficult quandary: a couple of team members are essentially selfish (not to be confused with dominant!) and are cherry-picking stories/tasks that will give them the most recognition within the company (at sprint reviews etc., when all the stakeholders are present). These team members are very good at what they do and are fully aware of what they are doing. When we first started using agile about a year ago, I can say I was quite selfish too (coming from a very individual-focused past). I took ownership of certain stories and didn't involve anyone else in them, which in hindsight wasn't the right thing to do, and I learnt from that experience almost immediately. We are a young team of very ambitious twenty-somethings, so I can understand the selfishness to some extent (after all, everyone should be ambitious!). But the level this selfishness has reached of late has started to bother me and a few others within my team. The way I see it, agile/scrum is all about the team, not individuals. We should be looking out for each other and helping each other improve. I made this quite clear during our last retrospective: we should be fair and give everyone a chance. I'll wait and see what comes out of it in the next few sprints. In the meantime, what are some of the troubles you have faced with selfish team members, and how did you overcome them?

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:
    - the bitmap images that were used in Flash, like the head and body;
    - the links between the images, like where the head connects to the body;
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever.
    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movieclips. Does anyone know of a library I can use, or how I can learn more about the movieclip components?
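
    The container itself is readable with ordinary IO. As a small sketch, this reads only the publicly documented header fields (signature, version, uncompressed length); the movieclip tags are where the real work starts:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class SwfHeader {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                    byte[] sig = new byte[3];
                    in.readFully(sig);   // "FWS" = uncompressed, "CWS" = zlib-compressed body
                    int version = in.readUnsignedByte();
                    // file length is a little-endian u32; DataInputStream reads big-endian
                    long length = Integer.toUnsignedLong(Integer.reverseBytes(in.readInt()));
                    System.out.printf("%s v%d, %d bytes uncompressed%n",
                            new String(sig, "US-ASCII"), version, length);
                }
            }
        }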

    Read the article

  • RUEI 12.1.0.3.0 dependency requirement for php-soap-5.1.6

    - by sthieme
    Dear Readers,

    Please be aware of the new php-soap-5.1.6 dependency in RUEI 12.1.0.3. For a swift upgrade to RUEI 12.1.0.3 you should know about this prerequisite, as it can be a time-eater to obtain individual rpm packages inside a datacenter for an old OS revision once you have started the upgrade process. You may use the following procedure to retrieve the required package via http://public-yum.oracle.com:

    Check /etc/issue and /etc/issue.net (or /etc/redhat-release for RHEL-based OS) for your current release in order to obtain the fitting package version.

    Customers of OEL can download the packages from our public-yum.oracle.com server: http://public-yum.oracle.com/repo/, e.g. http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm

    Earlier releases (up to 5.5) are located under the EnterpriseLinux path instead of the OracleLinux path, e.g. http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/5/base/x86_64/php-soap-5.1.6-27.el5.x86_64.rpm

    Note: you will have to obtain the relevant Red Hat rpm packages via the login-protected RHN URLs. Oracle can only provide support for Oracle Enterprise Linux; RHEL packages are not, to my knowledge, publicly available via rpm-seek.com.

    Kind regards,
    Stefan

    Read the article

  • Copyrights, Trademarks, Patents - Oh My!

    - by kennedysteve
    Good references when looking to see if someone really legally owns a name, copyright, etc.:

    Copyrights: http://cocatalog.loc.gov/
    Trademarks: http://tess2.uspto.gov
    Patents: http://patft.uspto.gov/
    Website Address: http://www.internic.net/whois.html

    Copyright
    Copyright, a form of intellectual property law, protects original works of authorship including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture. Copyright does not protect facts, ideas, systems, or methods of operation, although it may protect the way these things are expressed.

    Trademark
    A trademark protects words, phrases, symbols, or designs identifying the source of the goods or services of one party and distinguishing them from those of others.

    Patents
    A set of exclusive rights granted to an inventor for a limited period of time in exchange for a public disclosure of an invention.

    Website Address (aka "Domain name")
    The core portion of a website name (such as "apple.com" or "msn.com"), which is uniquely registered to an individual or company (also found to the right of the @ sign in an email address such as "[email protected]").

    Side note #1: LLC company names appear to be registered and maintained by state only. If you want to reserve an LLC name nationwide, you may have to register with each state.

    Side note #2: The copyright office's FAQ has a question called "How do I protect my sighting of Elvis?". No kidding. Check it out. http://www.

    Read the article

  • XNA Masking Mayhem

    - by TropicalFlesh
    I'd like to start by mentioning that I'm just an amateur programmer of the past 2 years with no formal training and know very little about maximizing the potential of graphics hardware. I can write shaders and manipulate a multi-layered drawing environment, but I've basically stuck to minimalist pixel shaders. I'm working on putting dynamic point-light shadows in my 2D sidescroller, and have it working to a reasonable degree. Just chucking it in without working on serious optimizations beyond basic culling, I can get 50 lights or so onscreen at once and still hover around 100 fps. The only issue is that I'm on a very high-end machine and would like to target the game at as many platforms as I can, low and high end. The way I'm doing shadows involves a lot of masking before I can finally draw the light to my light layer. Basically, my technique for achieving such shadows is as follows (see the pics in this album: http://imgur.com/a/m2fWw#0). The dark gray represents the background tiles, the light gray represents the foreground tiles, and the yellow represents the shadow-emitting foreground tile.
    - I draw the light using a radial gradient and a color of choice.
    - I then exclude light from the mask by drawing some geometry extending through the tile from my point light. (I don't actually mask the light yet at this point; I'm just illustrating the technique in this image.)
    - Finally, I re-include the foreground layer in my mask, as I only want shadows to collect on the background layer, and multiply the light with its mask onto the light layer.
    My question is simple: how can I reduce the number of render target switches I need to do to achieve the following?
    a. Draw the mask that excludes shadows from the foreground to its own target, once per frame.
    b. For each light that emits shadows:
       - begin the light mask as full white;
       - render the shadow geometry as transparent with an opaque blend mode to eliminate shadowed areas from the mask;
       - render the foreground mask back over the light mask to reintroduce light to the foreground.
    c. Multiply each light texture with its individual mask onto the main light layer.

    Read the article

  • Is it better to hard code data or find an algorithm?

    - by OghmaOsiris
    I've been working on a board game that has a hex grid as the board (the upper right grid in the image below). Since the board will never change, and the spaces on the board will always be linked to the same other spaces around them, should I just hard code every space with the values that I need? Or should I use various algorithms to calculate links and traversals?

    To be more specific, my board game is a 4-player game where each player has a 5x5x5x5x5x5 hex grid (again, the upper right grid in the image above). The object is to get from the bottom of the grid to the top, with various obstacles in the way, and each player is able to attack other players from the edge of their grid based on a range multiplier. Since a player's grid will never change, and the distance of any arbitrary space from the edge of the grid will always be the same, should I just hard code this number into each of the spaces, or should I still use a breadth-first search algorithm when players are attacking? The only con I can think of for hard coding everything is that I'm going to code 9 + 2(5+6+7+8) = 61 individual cells. Is there anything else I'm missing that should make me consider using more complex algorithms?
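
    For scale: a single breadth-first pass from the goal edge computes every cell's distance once at startup, which yields the same constant numbers without hand-coding the 61 cells. A sketch, assuming the board model already links each cell to its neighbours:

        import java.util.ArrayDeque;
        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Queue;

        interface Cell {
            Collection<Cell> neighbours();   // the linked spaces around this one
        }

        public class BoardDistances {
            // BFS from all goal-edge cells at once; each cell ends up with its
            // distance to the nearest goal cell.
            public static Map<Cell, Integer> distancesFrom(Collection<Cell> goalEdge) {
                Map<Cell, Integer> dist = new HashMap<>();
                Queue<Cell> queue = new ArrayDeque<>();
                for (Cell c : goalEdge) {
                    dist.put(c, 0);
                    queue.add(c);
                }
                while (!queue.isEmpty()) {
                    Cell cur = queue.poll();
                    for (Cell next : cur.neighbours()) {
                        if (!dist.containsKey(next)) {
                            dist.put(next, dist.get(cur) + 1);
                            queue.add(next);
                        }
                    }
                }
                return dist;   // compute once, then treat as constant data
            }
        }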

    Read the article

  • Top 10 Posts in 2010

    - by dwahlin
    Blogging’s a lot of fun and a great way to share what you’ve learned. It’s also a great way to learn, based upon comments people leave that help you see things in an entirely new way in some cases. Since we’ve now moved on to 2011 (Happy New Year!) I wanted to list the top 10 posts from my blog during 2010, based on individual views. Thanks to everyone who follows my blog and adds comments from time to time. Here’s wishing everyone a great 2011!

    1. Reducing Code by Using jQuery Templates
    2. Integrating HTML into Silverlight Applications
    3. Silverlight is Dead, the Moon is Made of Cheese, and HTML 5 is Ready for Prime Time
    4. Understanding the Role of Commanding in Silverlight 4 Applications
    5. New Article – Getting Started with WCF RIA Services
    6. Simplify Your Code with LINQ
    7. My Favorite iPad Apps….So Far
    8. Final Release of Silverlight Tools for Visual Studio 2010 Released
    9. Handling WCF Service Paths in Silverlight 4 – Relative Path Support
    10. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    Honorable mention: Getting Started with the MVVM Pattern in Silverlight Applications – posted in late 2009, but still one of the most popular posts.

    Read the article

  • Developing a feature whose sole purpose is to be taken out?

    - by adib
    What is the name of the pattern in which individual contributors (programmers/designers) develop an artifact whose sole purpose is to serve as a diversion, so that management can remove that feature in the final product?

    This is folklore I heard from an ex-colleague who used to work at a large game development company. At that company it is well known that middle management is pressured to "give inputs" and "make changes" to the product, otherwise they risk being seen as not contributing to the project. This situation has delayed many projects because of these superfluous "management inputs".

    In one project at the above company, the artists and developers created a supernumerary animated character that appears in every cutscene and sticks out like a sore thumb. They designed it in such a way that it could easily be removed before the game shipped (this was when games were still sold on physical media and not as downloadable products). Naturally, management voted to remove the animation. On the positive side, management didn't introduce any unnecessary changes that would have delayed the project, because they could show they had provided constructive input to the product.

    This process pattern has a name among game programmers who work in corporations, but I forget the actual name. I believe it's duck-something. Can anybody help point out the name, and perhaps some credible reference to how the pattern develops?

    Read the article

  • Amazon Upgrades FreeTime; More Content for the Kid-Friendly Walled Garden

    - by Jason Fitzpatrick
    Earlier this year Amazon introduced FreeTime, a walled-garden area intended to provide a kids-only app gallery on the Kindle Fire. It was up to parents to populate the content, but with the recent update Amazon brings together unlimited books, movies, games, and apps. Intended for children ages 3-8, the upgraded service eschews the you-pick-it-all approach and goes with a hand-curated collection of games, educational apps, books and more. In addition to the pile of hand-curated content, FreeTime also has built-in time limits and individual profiles for different children. Every Kindle Fire, Kindle Fire HD, and Kindle Fire HD 8.9″ user can try out the service for thirty days without charge. After the thirty-day trial the subscription price is $4.99 per month ($2.99 for Prime members). Hit up the link below to check out the full description of the service.

    Amazon FreeTime [Amazon]

    Read the article

  • How to handle growing QA reporting requirements?

    - by Phillip Jackson
    Some Background: Our company is growing very quickly - in 3 years we've tripled in size, and there are no signs of stopping any time soon. Our marketing department has expanded, and our IT requirements have as well. When I first arrived, everything was managed in Dreamweaver and Excel spreadsheets, and we've worked hard to implement bug tracking, version control, continuous integration, and multi-stage deployment. It's been a long, hard road, and now we need to get more organized.

    The Problem at Hand: Management would like to track, per developer, who is generating the most issues at the QA stage (post-unit-testing, regression, and post-production issues specifically). This becomes a fine balance because many issues can't be reported granularly (e.g. per URL or per "page"), yet that's how Management would like reporting to be broken down. Further, severity has to be taken into account. We have drafted standards for each of these areas specific to our environment. Developers don't want to be nicked for 100+ instances of an issue if it was a problem with an include or inheritance... I had a suggestion to "score" bugs based on severity... but nobody likes that. We can't enter issues for every individual module affected by a global issue.

    [UPDATED] The Actual Questions:
    - How do medium-sized businesses and code shops handle bug tracking, reporting, and providing useful metrics to management?
    - What kinds of KPIs are better metrics for employee performance?
    - What is the most common way to provide per-developer reporting as far as time-to-close, reopens, etc.?
    - Do large enterprises ignore the efforts of individuals and rather focus on the team?

    Some other questions:
    - Is this too granular a level of reporting? Is this considered 'blame culture'?
    - If you were a developer working in this environment, what would you define as a measurable goal for this year to track your progress, with the reward for achieving the goal being a bonus?

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have a configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig).

    The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the available solutions are:

    1. Pass the whole MainConfig to AlgoA and AlgoB. This way AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained.
    2. Copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the same printLevel information repeated three times.
    3. Create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig.

    Have you ever encountered this situation? How did you solve it? Which option is stylistically clearer to a new reader of the code? (A sketch of option 3 follows below.)
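
    A minimal sketch of option 3, using the names from the description above; each algorithm receives only its own config plus the shared piece, so AlgoA never sees AlgoB's settings:

        class PrintingConfig {
            int printLevel;
        }

        class AlgoAConfig { /* AlgoA-specific parameters */ }
        class AlgoBConfig { /* AlgoB-specific parameters */ }

        class MainConfig {
            PrintingConfig printingConfig;
            AlgoAConfig algoAConfig;
            AlgoBConfig algoBConfig;
        }

        class AlgoA {
            private final AlgoAConfig config;
            private final PrintingConfig printing;   // shared, read-only use
            AlgoA(AlgoAConfig config, PrintingConfig printing) {
                this.config = config;
                this.printing = printing;
            }
        }

        class AlgoB {
            private final AlgoBConfig config;
            private final PrintingConfig printing;
            AlgoB(AlgoBConfig config, PrintingConfig printing) {
                this.config = config;
                this.printing = printing;
            }
        }

        class Main {
            private final AlgoA algoA;
            private final AlgoB algoB;
            Main(MainConfig cfg) {
                // the same PrintingConfig instance is handed to both algorithms
                this.algoA = new AlgoA(cfg.algoAConfig, cfg.printingConfig);
                this.algoB = new AlgoB(cfg.algoBConfig, cfg.printingConfig);
            }
        }

    Because both algorithms hold the same instance, a later change to printLevel is seen everywhere, which is the main practical edge over the copies in option 2.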

    Read the article

  • Nova Software Becomes Kentico Certified Partner

    - by chanva
    Nova Software was awarded Kentico Certified Partner status. The new status confirms that Nova Software is qualified to provide professional services using the Kentico CMS. Nova Software has earned a reputation for excellence thanks to our in-depth technology knowledge and business acumen. By consistently applying this expertise to customers' individual business needs, Nova Software helps provide a sustainable competitive advantage based upon unique industry knowledge and relationships.

    Nova Software chose Kentico CMS as the platform for their clients' websites for its robust feature set, affordable licensing and solid core structure. As a custom software developer, Nova Software is drawn to the Kentico CMS both for its developer-centric environment and for its user-friendly CMS Desktop, which will enhance the user experience of its clients. Commenting on the potential of this collaboration with Kentico Software, Nova Software said: "Our customers come to us for high-quality websites that can offer the most up-to-date features. By using Kentico CMS, we feel confident that we will be able to cover all the needs of our customers, deliver the project on time and provide them services at a very affordable price."

    Partner Manager at Kentico, Lenka Navratilova, says the partnership with Nova Software is important to her company: "Choosing the right platform for a web project is only a part of its way to success. The skills and expertise of the company that delivers it make the rest. With our partnership with Nova Software, we are sure that the end users of our product will be provided with top-level professional services."

    Kentico is currently used in 84 countries by more than 6,000 websites, including some of the world's biggest corporations such as McDonald's, Mazda and Vodafone. This is an exciting development for large businesses and organisations, as it will enable the building and management of websites of any size, from simple 'brochure' sites to comprehensive, data-hungry sites, on a robust and technically superior platform. Kentico is modular, so clients can start with a basic site and later add functions such as blogs, newsletters and e-commerce. Technical knowledge is not needed to update a Kentico website: if clients can use Microsoft Word, they can easily edit their site.

    Read the article

  • Taking our Friendships to the next level.

    - by RedAndTheCommunity
    Red Gate have been running the Friends of Red Gate program for years now, and over that time we've built some great relationships with some truly awesome members of the SQL and .NET communities. When I took over the running of the program from Annabel in 2011, I was overwhelmed by the enthusiasm and commitment of our Friends. There were just so many of them, however, that it was hard to make the most of the relationships we had with people, and I wanted to fix that. I decided to survey all our Friends, to find out what they wanted to get out of, and put into, being in the Friends of Red Gate (FoRG) program. From the results of that survey, I identified 30 FoRGs that were really willing and able to go that step further to help Red Gate improve their tools, improve their relationship with the community, and improve the Friends of Red Gate program. Those 30 Friends of Red Gate have been awarded 'FoRG+' status. That means they'll:
    - Have a closer relationship with the product teams, by getting involved in projects
    - Have even more access to the inside track about the tools they're interested in
    - Get the opportunity to come visit us at the Red Gate office and really influence the development of the tools
    Plus more, depending on how the individual FoRG+ wants to work with us. This doesn't mean I've forgotten our other Friends; I'm working on ways to improve their experience of the Friends of Red Gate program, and I'll write about them in another post. If you're an existing Friend of Red Gate and you're interested in finding out how to get involved in the FoRG+ program, I'd love to chat to you. For anyone that's interested in joining the Friends of Red Gate program, take a look at the web page dedicated to the program, and get in touch at [email protected] to be put on the waiting list for our 2013 program.

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as tags/subjects about the articles, but as strings in the form "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside of databases, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? (A sketch of one approach is below.) Please let me know if the problem isn't clear - I'm not entirely sure what kind of answer I'm looking for.
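
    One approach, as a hedged sketch: do the cleanup in a one-off migration script rather than in regex-heavy SQL. In Java the parsing step might look like this (table and column handling omitted; the format comes straight from the description above):

        import java.time.LocalDate;
        import java.time.format.DateTimeFormatter;

        public class DateFieldCleaner {
            // "20030115d" -> 2003-01-15; returns null for tags that are not dates.
            public static LocalDate parseLegacyDate(String tag) {
                if (tag == null || !tag.matches("\\d{8}d")) {
                    return null;   // a subject/tag entry, not a date
                }
                return LocalDate.parse(tag.substring(0, 8), DateTimeFormatter.BASIC_ISO_DATE);
            }
        }

    If you would rather stay in SQL, MySQL's STR_TO_DATE(SUBSTRING(col, 1, 8), '%Y%m%d') performs the same conversion once the rows are loaded.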

    Read the article

  • Level Creating Help

    - by Brandon oubiub
    I am making a little 2D overhead RPG-type game just for fun. I have almost all the basic stuff set up, but I just need a little help with level creation. I can already make a level and place each tile how I want it, but having to place each tile gets annoying after a while. I noticed that in a lot of games, even extremely simple ones, they have LOTS of levels with LOTS of tiles in each. Creating all that this way would take forever. So I guess my question is: as a game developer, am I supposed to do all that, or maybe make a little level editor so I can see things as I create them? What do game developers do? I'm using Java.

    EDIT: Okay, say I had an image for a map, made in MS Paint or Photoshop, where each pixel represents a tile value. Could I somehow detect in Java what color an individual pixel is? If so, that would be perfect. How?
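
    Yes: javax.imageio can load the image, and BufferedImage.getRGB reads individual pixels. A sketch, with a hypothetical colour-to-tile mapping:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public class MapLoader {
            public static int[][] loadTiles(File mapImage) throws IOException {
                BufferedImage img = ImageIO.read(mapImage);
                int[][] tiles = new int[img.getHeight()][img.getWidth()];
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        int rgb = img.getRGB(x, y) & 0xFFFFFF;   // drop the alpha byte
                        if (rgb == 0x000000) {
                            tiles[y][x] = 1;   // e.g. black = wall (the mapping is yours to choose)
                        } else if (rgb == 0x00FF00) {
                            tiles[y][x] = 2;   // e.g. green = grass
                        } else {
                            tiles[y][x] = 0;   // anything else = empty
                        }
                    }
                }
                return tiles;
            }
        }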

    Read the article

  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board they move smoothly, but when moving the whole board, the x and y coordinates sent to pan "jump", and increasingly so the longer the drag lasts. Here is an example of the deltaY values sent to pan while dragging smoothly downwards: 1.1156368, -0.13125038, -1.0500145, 0.98439217, -1.0500202, 0.91877174, -0.984396, 0.9187679, -0.98439026, 0.9187641, -0.13125038. This is how I move the camera:

    public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
        cam.translate(-deltaX, -deltaY);
    }

    I have tried using both the delta values passed to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera. What could the cause be, and what is the solution? When I move the camera by only half the delta values, it moves smoothly, but only at half the speed of the mouse pointer:

    cam.translate(-deltaX / 2, -deltaY / 2);

    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movement? (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)
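
    One hedged workaround, assuming the jitter comes from pan() reporting deltas in stage coordinates while the camera (and therefore the stage-to-screen mapping) moves mid-gesture: read the pointer delta in screen space instead, which is unaffected by camera translation, and scale it to world units yourself (camera zoom ignored in this sketch):

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.scenes.scene2d.InputEvent;
        import com.badlogic.gdx.scenes.scene2d.utils.ActorGestureListener;

        public class BoardPanListener extends ActorGestureListener {
            private final OrthographicCamera cam;

            public BoardPanListener(OrthographicCamera cam) {
                this.cam = cam;
            }

            @Override
            public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
                // Raw screen-pixel deltas, independent of camera position.
                float dx = Gdx.input.getDeltaX() * cam.viewportWidth / Gdx.graphics.getWidth();
                float dy = Gdx.input.getDeltaY() * cam.viewportHeight / Gdx.graphics.getHeight();
                cam.translate(-dx, dy);   // screen y points down, world y points up
                cam.update();
            }
        }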

    Read the article

  • Pathfinding and BSP with Box2D

    - by Amplify91
    I'm looking into implementing AI in my 2D side-scrolling platformer, using algorithms such as A*. For many kinds of pathfinding we need some sort of grid, system of nodes, or polygon areas. My problem is that I am using Box2D for physics, and I am not sure how best to create a structure that my AI can use, besides placing individual nodes manually (something I really want to avoid) and using some sort of steering behavior. My level design is tile-based, with each tile being about half the height/width of my main character. The tiles are not all square (some are sloped). I'd like to have a system that can see what the terrain looks like for pathfinding and also keep track of the positions of other actors, such as enemies. I'd like to avoid directly placing any nodes into my level design except for possible endpoints or goals. This question is related: How do you do AI path following within a 2d physics engine like farseer/box2d? But it doesn't specify what kind of structure could be used instead of a list of nodes. I'm looking for some kind of grid or type of BSP that I can query for algorithms like A*. (A sketch of a tile-derived grid is below.)
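
    A sketch of one common structure: derive a walkability grid from the tile map (names hypothetical) and let A* run on the grid while Box2D keeps doing the physics. Dynamic actors such as enemies can be stamped into the same grid each frame before a search:

        import java.util.ArrayList;
        import java.util.List;

        public class NavGrid {
            private final boolean[][] solid;   // true = blocked tile, built from the level's tile data

            public NavGrid(boolean[][] solid) {
                this.solid = solid;
            }

            // Platformer-style walkable node: an open tile with solid ground beneath it.
            public boolean isWalkable(int x, int y) {
                return open(x, y) && inBounds(x, y + 1) && solid[x][y + 1];   // assumes y grows downward
            }

            // Neighbours for A*; jump/fall arcs would be added as extra edges later.
            public List<int[]> neighbors(int x, int y) {
                int[][] steps = { { 1, 0 }, { -1, 0 }, { 0, -1 }, { 0, 1 } };
                List<int[]> out = new ArrayList<>();
                for (int[] s : steps) {
                    if (isWalkable(x + s[0], y + s[1])) {
                        out.add(new int[] { x + s[0], y + s[1] });
                    }
                }
                return out;
            }

            private boolean open(int x, int y) {
                return inBounds(x, y) && !solid[x][y];
            }

            private boolean inBounds(int x, int y) {
                return x >= 0 && y >= 0 && x < solid.length && y < solid[0].length;
            }
        }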

    Read the article

  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick

    A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers. Implementation team members need the skills and knowledge to ensure a smooth, rapid and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions made during set-up best align with your business. To that end, product-level implementation training is recommended for Oracle Global HR Cloud deployments.

    Training For Implementation Team Members and Consultants:
    - Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup.
    - Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data.
    - Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements.
    - Fusion Applications: HCM Benefits: This course teaches you to implement, configure and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.
    - Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers and payroll administrators. Learn how to process payroll to ensure accurate setup results.

    Learn More: See all Fusion HCM Training.

    Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.

    Read the article

  • A good tool for browser automation/client-side Web scripting

    - by hardmath
    I'm interested in adopting a tool/scripting language to automate some daily tasks connected with fighting forum spammers. A brief overview of these tasks: analyze new registrations and posts on a phpBB forum, and delete or deactivate spammers using a website/community that collects such spam reports. Typically such automation is integrated into the phpBB installation itself, which certainly has its advantages. My approach has the advantage of independent operation, etc. One way to think about this is in terms of browser automation. I've used iOpus iMacros for Firefox (the free version) in the past to respond to individual spammers, but current attacks are highly distributed. My "logic" for pigeonholing spammers vs. nonspammers seems beyond the easy reach of the free version of iMacros. From a more technical perspective one can think about dispensing with the browser altogether and programming GET/POST requests directed to my forum and other Web-based resources. I'm familiar with some scripting languages like Ruby and Lua, but I could be persuaded that a compiled application is better suited for these tasks. However in my experience the dynamic flexibility of interpreted environments is very useful in prototyping and debugging the application logic. So I'm leaning in the direction of scripting languages. Among browsers I favor Firefox and Chrome. I use both Windows and Linux platforms, and if the tool can adapt to an Android platform, it would make a neat demonstration of skills, yes? Thanks in advance for your suggestions!

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes-Oxley).

    From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information is available in TFS, and more. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well... I say that, but if you are using TFS properly you will have this information with no additional work, except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this to relate Work Items to Builds, and you know which Builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the things the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which Releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given Release, but also to know which Requirements were ever assigned to a particular Release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
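
    For example, this hedged sketch of such a query (using standard system field reference names; the date is illustrative) returns the Requirements as they stood at the start of 2010:

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '1/1/2010'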
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, every Task created should be a child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the file level

    What would be even more powerful would be to view these changes superimposed over the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this, you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release Build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and the branch's creation. A better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the source with the same name as the Build, but the Build itself also records the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets, and update the "Integrated In" field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" state.

    The second scenario is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and its results, for a Unit Test designed to prove the existence of a Bug and then guard against its reoccurrence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; but there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require being broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs/Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong introduces the potential for huge financial losses; the risk of delays and schedule and cost overruns is ever present. The resulting consequences can be significant, and directly impact both a company's profit outlook and its share price performance, which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public and private sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM).

    Stock Shock reviews several high-profile recent project failures and, combined with other research, reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization's project portfolio and compromise long-range capital planning goals.

    Failure to Deliver – In Search of ROI

    The report also cites figures from an Economist Intelligence Unit survey which found that more organizations (12 percent) expect to deliver planned ROI less than half the time than report delivering it 90 percent or more of the time (11 percent). This is linked to a recent report from Booz & Co. showing that the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. "Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success," according to Stock Shock author Phil Thornton. "Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform."

    Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article
