Search Results

Search found 16418 results on 657 pages for 'either'.


  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver either or both internal and external services is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources and the rise in the contingent worker (contractor, consultant, temporary, part time) has increased significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless integrated fashion not only saves organizations money and increases efficiency, but creates a better place for workers of all kinds to work. Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors. The term "contingent labor" is used interchangeably with "contractor." Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "What Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:   The enterprise licensed software model in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;  The software-as-a-service model in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing Partner; and  The managed service provider (MSP) model in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program. At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement. Option 1:  Enterprise Licensed Software In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors. 
Many customers choose this enterprise license software model because of the functionality and natural integration of the PeopleSoft applications and because the cost for the PeopleSoft software is explicit. There is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce. This model offers you the highest level of control and flexibility since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are... The ability to create a tiered network of preferred suppliers including competencies, pricing agreements, and elaborate candidate management capabilities. Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval for both resource and deliverable-based services. The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce. Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can: fill job openings with contingent labor; guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management; and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts - R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on. With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff or hires the expertise in these domains to utilize the software and interact with suppliers and contractors. Option 2:  Software as a Service (SaaS) The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and for which they can pay based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers. 
Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software is largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives. Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly on-boarding and off-boarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive to even the most computer-phobic users. The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains. Option 3:  Managed Services Provider (MSP) Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force. 
With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will to a certain extent affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations, it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects. One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers.  These are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines. Concluding Thoughts There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Question: what's the best way to handle sub-projects in Eclipse when using git for SCM? Here's the situation. I have a few git projects with a directory structure laid out more or less like this: simpleproj app www admin demo lib model orm view model user blah ... storeproj app www about mobile fbapp lib model orm view model user message cart product merchant. Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward Eclipse + egit, because some of our Windows guys who aren't used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules. So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this:

        ~/projects/
            bigproj
                .git
                app
                lib
                    model (-> ~/lib/model/src/)
                    orm (-> ~/lib/orm/src/)
            neatproj
                .git
                app
                lib
                    view (-> ~/lib/view/src/)
            oldproj
                .git
                app
                lib
                    orm (-> ~/lib/orm/src/)
        ~/lib/
            model
                .git
                src
                README.md
            orm
                .git
                src
                COPYING
            view
                .git
                src

    ...the symlinks link to a subdirectory of the directory containing the git repo, so Eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within Eclipse. After committing from Eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but Eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing. So this brings me to my question. I'd like to know how to either: get Eclipse + egit to see the symlinks as symlinks, if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D Note: tried to tag this as git-submodules, but was not allowed :( * Should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the later enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BPM 11gR1 now available on Amazon EC2

    - by Prasen Palvankar
    BPM 11gR1 now available on Amazon EC2. The new Oracle BPM 11gR1, including the latest Oracle SOA Suite 11gR1 Patchset-2, is now available as an Amazon Machine Image (AMI). This is a fully configured image which requires absolutely no installation and lets you get hands-on experience with the software within minutes. This image has all the required software installed and configured and includes the following:
    - Oracle 11g Database Standard Edition
    - Oracle SOA Suite 11gR1 Patch-set 2
    - Oracle BPM 11gR1
    - Oracle Webcenter with BPM Process Spaces
    - Oracle Universal Content Management
    - Oracle JDeveloper with SOA and BPM plugins
    Note: Use of this AMI requires acceptance of Oracle Technology Network (OTN) terms of use. To use this AMI, follow these steps:
    1. Log in to your Amazon account and browse to the Amazon AWS Console. If this is the first time you are using Amazon Web Services, please visit https://aws.amazon.com/ec2/ for information on Amazon Elastic Compute Cloud and how to get started with Amazon EC2.
    2. Make sure the security group you will be using to launch the instance allows the following ports to be opened: 22 (SSH), 1521, 7001, 8001, 8888, 9001.
    3. Click on AMIs.
    4. Change the Viewing filters to 64-bit and enter soa-bpm in the search box. You should see the following AMI: 083342568607/oracle-soa-bpm-11gr1-ps2-4.1-pub
    5. Select the AMI and click on Launch or Spot Request. For more information on spot requests, please visit the Amazon EC2 link above.
    6. Accept all the defaults and launch the instance.
    7. When the instance state changes to running, copy the assigned public host name and connect to it using either PuTTY or the SSH command. For PuTTY usage, refer to this document.
    8. Once you are connected to the instance using PuTTY or SSH, you will be presented with the terms of use. Accept the terms of use to proceed. This will prompt you to set passwords for your oracle OS login as well as for VNC. Note that the instance will not be usable until you have accepted the terms of use.
    The instance is now ready to use. The SOA/BPM and other servers are automatically started once you accept the terms of use. Initial startups can take about 5-10 minutes. If you would like to use the JDeveloper installed in the AMI, you can access it using either VNC or NX. You can get the NX client from NoMachine. /home/oracle/README.txt contains all the URLs that you can use to access the Enterprise Manager, BPM Composer, BPM Workspace, Webcenter, etc.

    Read the article

  • SO-overflow induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stack Overflow, and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though it often helps, even in this field). Issues that stuck out were:
    - numerous security issues (e.g. CSRF prevention and bcrypt eluded me)
    - not object-oriented (at least the PHP part; the JS part mostly is)
    - no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient
    - really poor MySQL usage (no prepared statements, mysql extension, heard about setting proper indices two days ago)
    - using MooTools even though jQuery seems to be fashionable, so there's probably always going to be better integration with services I'd like to use (like Google Visualization)
    So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished & unpublished project will never become popular, right?). There's no clear conscience without good security, though, and if I don't use a framework for auth and other complex stuff I'll regret having to do it myself. One obvious answer would probably be going open source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?

    Read the article

  • Color Profiles in Windows 7 vs. XP

    - by flxkid
    I have a Brother Color Laser Printer and an HP 8150DN. I have a local Windows 7 Pro machine that I do graphics work on. I created a letterhead that, when printed from my machine, looks dark and rich on either the mono HP or the color Brother laser. I take this same letterhead and move it onto our network for use by our users, who are all on XP. When they print the same file, it is washed out on either printer. I've confirmed that the printer settings we're using are identical. I've confirmed that it's not related to the program or even specifically to the letterhead. I can duplicate this with other files too. I'm down to XP vs. Windows 7 being the issue. I'm fairly certain now that color profiles are involved. I have no clue how to fix it though. Any suggestions would be much appreciated.

    Read the article

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered, such as: when you walk into a certain land area while holding a specific object, it will trigger a special creature to spawn. I was wondering, how do games deal with events such as this? Not in a specific game, but in general among games. The first thought I had was that each place has a hard-coded set of events that it will call when something happens there. However, that would be too inefficient to maintain, as when something new is added, that would require modification of every part of the game that could potentially cause the event to be called. Next up, I had the idea of doing it the way GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So, I was thinking that in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then handle what scene it is and, if it is a scene that pertains to the event, call the public method of the scene to bind the event to an action. Then, when the action takes place, the scene would call all events that are bound to that action. However, I'm sure that's not how games would operate either, as that would require a lot of creating of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
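
    For illustration, the callback/listener idea above can be reduced to a small observer-style event bus. Here is a minimal Java sketch of that shape; every class, event name and method is invented for the example, and real engines vary widely in how (or whether) they do this:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Minimal event-bus sketch: handlers register per event name, and game code
        // fires events without knowing who (if anyone) is listening.
        interface GameEventListener {
            void onEvent(String eventName, Object payload);
        }

        class EventBus {
            private final Map<String, List<GameEventListener>> listeners = new HashMap<>();

            void subscribe(String eventName, GameEventListener listener) {
                listeners.computeIfAbsent(eventName, k -> new ArrayList<>()).add(listener);
            }

            void fire(String eventName, Object payload) {
                for (GameEventListener l : listeners.getOrDefault(eventName, Collections.emptyList())) {
                    l.onEvent(eventName, payload);
                }
            }
        }

        public class EventDemo {
            public static void main(String[] args) {
                EventBus bus = new EventBus();

                // A scene (or a data-driven trigger loaded with it) binds a spawn handler.
                bus.subscribe("enteredSwamp", (name, payload) ->
                        System.out.println("Spawn special creature near " + payload));

                // Movement code only reports what happened; it knows nothing about spawning.
                bus.fire("enteredSwamp", "player1");
            }
        }

    In this shape, neither hard-coding events per area nor rebinding everything per scene is needed: areas (or scripts loaded with them) subscribe handlers, and gameplay code just fires named events.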

    Read the article

  • How can we protect the namespace of an object in Javascript?

    - by Eduard Florinescu
    Continuing from my previous question: Javascript simple code to understand prototype-based OOP basics. Let's say we run in the console these two separate objects (even if they are called child and parent, there is no inheritance between them):

        var parent = {
            name: "parent",
            print: function(){ console.log("Hello, "+this.name); }
        };
        var child = {
            name: "child",
            print: function(){ console.log("Hi, "+this.name); }
        };

        parent.print() // This will print: Hello, parent
        child.print()  // This will print: Hi, child

        temp = parent;
        parent = child;
        child = temp;

        parent.print() // This will now print: Hi, child
        child.print()  // This will now print: Hello, parent

    Now suppose that parent is a library. As an HTML5 application in a browser this cannot do much harm, because it is practically running sandboxed, but now with the advent of ChromeOS, FirefoxOS and other [Browser] OSes they will also be linked to a native API, and that would be a head out of the „sandbox”. Now if someone swaps the namespaces like this, it would be harder for a code reviewer (either automated or not) to spot an incorrect use. My question would be: Are there many ways in which the above situation can be done, and what can be done to protect these namespaces? (Either in the JavaScript itself or by some static code analysis tool)

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal: To leverage the richness of well vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs), if possible.
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load-time, that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture coordinate buffers are static.
    Current Considerations:
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC)
    Known Trade-Offs:
    In 1 above: Each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    In 2 above: There would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.
    Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
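
    One way to express option 2's per-character offset (a sketch only, assuming an LWJGL-style Java binding, that each frame's interleaved vertex/normal data sits back to back in one static VBO, and that the attribute pointers for that interleaved layout are specified elsewhere; the class and field names are invented): instead of rebasing attribute-pointer byte offsets each frame, keep the pointers at the start of the buffer and pass the first vertex of the current frame to glDrawArrays.

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL15;

        // Sketch: one static VBO per animation, frames concatenated, each frame holding
        // `verticesPerFrame` interleaved (position, normal) vertices. The vertex/normal
        // attribute pointers are assumed to be set up against this VBO before draw().
        class AnimatedCharacter {
            int animationVboId;    // VBO holding every frame of the current animation
            int verticesPerFrame;  // vertex count of one frame of this mesh
            int currentFrame;      // per-character playback position

            void draw() {
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, animationVboId);

                // Select the current frame purely by offset into the static buffer.
                int firstVertex = currentFrame * verticesPerFrame;
                GL11.glDrawArrays(GL11.GL_TRIANGLES, firstVertex, verticesPerFrame);
            }
        }

    Several characters can share the same animation VBO and differ only in currentFrame, which is the main appeal of option 2.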

    Read the article

  • Can't connect to sql server 2008 named instance

    - by hcsrpm
    I have sql server 2005 and 2008 running on a server on my local (and very straightforward) network. Using sql management studio 2008 and visual studio 2008, I can connect over the network to the 2005 instance which is the default instance. I can't connect to the 08 instance (named MC08). I can connect to both when logged in to the server. Remote connections have been enabled for MC08 and dynamic ports is turned off (assigned to port 1045). The sql browser service is running as well. This used to work so I'm not sure what has changed. I can't connect using the IP address either. Nothing unusual in the event log either. Any ideas?
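
    As one way to narrow this down (an illustration only; the host name and credentials below are made up, and it assumes Microsoft's JDBC driver is on the classpath): a named instance is located via the SQL Browser service over UDP port 1434, while connecting by an explicit TCP port bypasses the Browser entirely. If the port form works remotely but the instance-name form does not, blocked UDP 1434 traffic is the likely culprit.

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class NamedInstanceCheck {
            public static void main(String[] args) {
                // 1) By instance name: the client asks the SQL Browser (UDP 1434)
                //    which TCP port the MC08 instance is listening on.
                String byName = "jdbc:sqlserver://myserver\\MC08;databaseName=master";

                // 2) By explicit port: no SQL Browser lookup is involved.
                String byPort = "jdbc:sqlserver://myserver:1045;databaseName=master";

                for (String url : new String[] { byName, byPort }) {
                    try (Connection c = DriverManager.getConnection(url, "sa", "secret")) {
                        System.out.println("OK: " + url);
                    } catch (Exception e) {
                        System.out.println("Failed: " + url + " -> " + e.getMessage());
                    }
                }
            }
        }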

    Read the article

  • Display tabs in Firefox 3.6 across multiple rows?

    - by overtherainbow
    Hello, After upgrading to Firefox 3.6, I noticed that there no longer seems to be a way to have tabs all displayed in several rows when there are too many to fit in the window: http://www.ghacks.net/wp-content/uploads/2008/08/firefox_tabs_multiple_rows-500x179.jpg I find it very inconvenient to have to click on the down arrow at the far-right side to go through the list of open tabs. The TabMixPlus add-on is incompatible with ReloadEvery 3.6.2, so that's not a good solution either. Does someone know if it's possible to either configure FireFox 3.6 to go back to the previous way of displaying multiple tabs, or if there's an add-on that's compatible with ReloadEvery that would do the trick? Thank you.

    Read the article

  • PDF rendering issue on OSX

    - by 2Ti
    I came across some very odd rendering when trying to view a PDF file that I needed to print out. I was wondering whether anyone has come across a similar problem before or has any ideas as to what might be causing this. [Screenshot: the PDF when viewed on OSX 10.7.4 - Preview version 6.0.] I've tried opening the file in Skim, but that doesn't work either. [Screenshot: the PDF as it should be, and as Chrome renders it in the browser, but not if I download it onto my machine.] Illustrator complains about "an unknown imaging construct" when I open the file but renders it fine nevertheless; Photoshop doesn't have any problems either.

    Read the article

  • Where did this incorrect cached DNS lookup come from?

    - by Stephen Jennings
    Somehow, I've been having a chronic issue where my computer will get an invalid DNS lookup in its cache for either of the two Exchange servers I use from Mail.app. My workplace runs one of the Exchange servers and I run the other (they are totally unrelated, hosted by different companies, etc.). The problem manifests as a certificate domain error. When it happens, I can run nslookup mail.mydomain.com and I see the incorrect IP address (usually owned by either Apple or Akamai), but if I run nslookup mail.mydomain.com 8.8.8.8, I get the correct address. My real quest is to find out why this keeps happening, and to do that, I'd like to know which server is supplying me this bad DNS entry. Is there a way to check my DNS cache to see where this bad lookup came from?

    Read the article

  • DNS hijack - prevention tips

    - by user578359
    Hi there, Over the weekend it looks like the DNS was hijacked on two of my domains. My set up is I have the sites registered on 1and1.co.uk, with dns nameservers pointing to Hostgator in the US where the sites are hosted. I also had cloudflare CDN running on the sites (via hostgator cpanel). My question is any ideas as to how this happened, and how I could either monitor it so I know if it occurs again, or strengthen the set up/service to minimise the risk. History: I received a ping from my site monitoring service that the sites were down. When I checked the sites were up so I assumed it was local to the monitoring service I received a ping last night the sites were up When I checked, one site was redirecting to download-manual.com (and checking that URL now, the home page is not the same as the one I saw, so they too may have been hijacked/hacked) The other site URL remained the same but had one of those standard site search pages which bounce you off to either phishing or paid for search sites I notified Hostgator who told me Cloudflare or 1and1 were the issue. I removed cloudflare, and contacted both them and hostgator, and am awaiting a response, but am not holding my breath. Is this common? I've never heard of this or come across this before. It's pretty scary that this can happen so easily. Appreciate any input. **Update: I've now spoken to support at 1and1, Hostgator, and Cloudflare, and each one claims it has nothing to do with them, and must be one of the others. Larry, curly, moe.

    Read the article

  • PHP on Loading php.ini in directory or following error_reporting() on Windows 7

    - by Marcus
    Normally I develop under the E_ALL error level, but for sanity on this project I want notices and strict warnings off. So initially I tried: error_reporting(E_ALL & ~(E_STRICT|E_NOTICE)); and several other combinations of the same thing; nothing worked. Next I tried to create a local php.ini in the directory with error_reporting = E_ALL & ~E_NOTICE, but nope, that didn't work either. phpinfo() is reporting: Scan this dir for additional .ini files: (none). Can someone help me fix either of these problems? Preferably both! Thanks! I'm running PHP Version 5.2.13 on Apache/2.2.14 under Windows 7 x64.

    Read the article

  • Run Rails 3 app on a Rails 2 server/machine?

    - by chucknelson
    I'm trying to run a Rails 3 (3.0.10) app on a shared Joyent SmartMachine server (I don't have root access) which has Rails 2 (2.3.11) installed, and I'm not sure what to do after I freeze my Rails 3 app with bundle install --deployment. It seems like, with the Rails 3 and Bundler gems not being installed on the server locally, my app isn't even recognizing the local version of Rails I have frozen with my app. Has anyone gotten this to work, or have any advice? The server runs Apache, and I think I can get lighttpd installed too - but I'd rather stay with Apache if I can. Also, if it matters, Passenger is not an installed gem either... and I'm not sure I can freeze that with my app. Update 11/30/2011 12:30 PM EST: Bundler is not installed on this server, either. Not sure if having that would enable the new Rails 3 "freeze" (bundle --deployment) to work or not.

    Read the article

  • Keeping websites from knowing where I live

    - by D Connors
    This question is related to issues and practicality, not security. I live in Brazil and, apparently, every single website I visit knows about it. Usually that's okay. But there are quite a few sites that don't make use of that information adequately. For instance:
    - Bing keeps thinking that Brazilian pages are way more relevant to me than American ones (which they're not).
    - google.com always redirects me to google.com.br.
    - Microsoft automatically sends me to horribly translated support pages in Portuguese (which would just be easier to read in English).
    These are just a few examples. Usually it's stuff I can live with (or work around), but some of them are just plain irritating. I have geolocation disabled in Firefox, so I guess they're either getting this information from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing or make them think I live somewhere else?

    Read the article

  • Apache 2.0.55/PHP 5.3.5 Hangs

    - by Rushyo
    Recently inherited a Windows 2k3 server running XAMPP, including ancient copies of PHP, MySQL and Apache. I'm attempting to install a second, up-to-date PHP installation on the machine so I can reference that in future instead of XAMPP's old one. Apache starts up with this new PHP installation happily. Unfortunately, when Apache references the new PHP install, whenever I try a view a page (even a non-PHP page) it hangs. The server simply doesn't respond to any HTTP request - it doesn't crash either. It just sits there with the connection open. There are no errors/warnings/notices in either the PHP or Apache logs. I've pretty much ruled out PHP's extensions (by process of systematic elimination) and most of the INI settings. I've tried reinstalling PHP from scratch. The PHP installation is PHP 5.3.5 x86 thread-safe compiled w/ VC6. The Apache installation is Apache 2.0.55. Anyone encountered similar behaviour?

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a “select into” query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) would fail with:

        Msg 2714, Level 16, State 6, Line 1
        There is already an object named '#test' in the database.

    The only hypothesis we could come up with was that the “create table” part of the statement was committed independently from the actual “insert”. We can confirm that by capturing the “Transaction Log” event on Profiler (filtering by SPID0). The result is that when we run:

        SELECT * INTO #results FROM master.sys.objects

    we get the following output on Profiler: [Profiler screenshot] It is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around it if you feel the need (as we did in this case). You can either change it into independent “CREATE TABLE / INSERT SELECT” statements or you can enclose the SELECT INTO in an explicit transaction:

        SET XACT_ABORT ON
        BEGIN TRANSACTION
        SELECT * INTO #results FROM master.sys.objects
        COMMIT

    Read the article

  • Packing jar files into library jar files

    - by Hillel
    Firstly, this question is not about packing a simple jar file (e.g. lwjgl) into a runnable jar file. I know how to do this using JarSplice. So if I have a game which uses JInput, I will pack my game jar and jinput.jar using JarSplice and enter the natives in the process. The problem arises when I want to create a custom library that uses JInput, and then pack that into my games. See, the whole idea of writing a game library is that I don't ever have to even copy code like the wrapper I wrote for JInput Controller, and I always have a definitive version inside a library jar. Basically what I wanna do is create a jar file of my library, pack jinput.jar into it using JarSplice, possibly with the natives as well, and then when I want to export a jar of my game, I either export it automatically through Eclipse with the library jar, or, if that doesn't work, use JarSplice. I've tried several solutions, and nothing works. When I try to pack the game jar and the library jar using JarSplice, I get an error saying that there's either duplicate .project or .classpath. When I try to export my game through Eclipse with the library jar, it won't run (which is to be expected), but then, if I try to attach the natives with JarSplice, it doesn't give me any errors but the jar doesn't run. I'm not expecting anyone to solve this, but if anyone has an idea, something that will allow me to never look at the Gamepad code ever again, that would be awesome. I don't care if I have to package my library jar using JarSplice 5 times, and then do the same with the game jar, as long as it works. Otherwise I'll just have to copy the Gamepad class into every project alongside the library jar. :(

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, Interfaces instead of concrete classes. This leads to the use of Factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you --the Factory design pattern. My question, then, is, what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory backed Model instead of a database-backed Model I could just instantiate those classes, I don't need ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Peartrees data. What is nice about the Pearltrees data is that it has a very logical class system: A tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out if I should adopt the Jena philosophy in my library which uses Jena, or if I should disregard it, pick my own design philosophy, and stick with it.
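
    To make the distinction concrete, here is a small Java sketch of the factory-style API the question describes (the Pearltrees namespace and property names are invented; newer Apache Jena releases put these types under org.apache.jena.rdf.model, while older releases used com.hp.hpl.jena.rdf.model):

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;

        public class JenaFactoryDemo {
            public static void main(String[] args) {
                // Model, Resource and Property are interfaces; the factory decides which
                // concrete (here memory-backed) implementation you actually get.
                Model model = ModelFactory.createDefaultModel();

                String ns = "http://example.org/pearltrees#"; // invented namespace
                Resource tree = model.createResource(ns + "myTree");
                Property hasPearl = model.createProperty(ns, "hasPearl");
                Resource pagePearl = model.createResource(ns + "somePagePearl");

                tree.addProperty(hasPearl, pagePearl);

                // Code written against the interfaces does not change if the factory call
                // is swapped for, say, a database-backed model.
                model.write(System.out, "RDF/XML");
            }
        }

    The usual rationale offered for this style is exactly that last point: callers bind only to the interface contracts, and the factory is the one place where the storage and implementation choice is made, so backing stores can be swapped without touching client code. Whether that flexibility is worth the indirection for a small, fixed class system like the Pearltrees one is the judgment call the question is really about.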

    Read the article

  • How do I encrypt but share a number of folders?

    - by d3vid
    I want to achieve the following functionality. Is it possible?
    1. Boot up the computer (possibly via WakeOnLan or WakeOnPlan).
    2. Either be automatically logged in, or log in via the login screen, or log in remotely. I change this behavior occasionally, so full disk encryption wouldn't work for me because it requires a password on bootup (which would prevent the remote bootup options and the automatic login option). I am only interested in encrypting data, not the entire hard drive.
    3. Once logged in, either: a launcher/tray icon is available to launch the encryption app (preferred), or run the encryption app from the dash.
    4. Be prompted to unlock encrypted folder(s) individually.
    5. Unlocked folders are available to: me, and apps I am running (e.g. editors, SpiderOak). Ideally, folders that I share with bindfs can be locked/unlocked by other users too.
    A key point is that once I have unlocked an encrypted folder, I don't want to have to think about it again. I currently achieve this via TrueCrypt (except for the last part). Unfortunately TrueCrypt isn't well integrated with Ubuntu (licensing issues prevent Debian from including it in their repo, the interface isn't quite integrated with Unity, setting it as a startup app doesn't quite work, and sharing encrypted folders isn't really part of its design). Is there an alternative to TrueCrypt that is better integrated with the Ubuntu GUI and would suit this workflow?

    Read the article

  • NFS denies mount, even though the client is listed in exports

    - by ajdecon
    We have a couple of servers (part of an HPC cluster) in which we're currently seeing some NFS behavior which is not making sense to me. node1 exports its /lscratch directory via NFS to node2, mounted at /scratch/node1. node2 also exports its own lscratch, which is correspondingly mounted at /scratch/node2 on node1. Unfortunately, whenever I attempt to mount either NFS export on the opposite node, I get the following error: mount: node1:/lscratch failed, reason given by server: Permission denied This despite the fact that I have included first the IP range (10.6.0.0) and then the specific IPs (10.6.7.1, 10.6.7.2) in /etc/exports. Any suggestions? Edit to remove ambiguity: I've made sure that exports only contains either the range, or the specific IPs, not both at the same time.

    Read the article

  • Does SpinRite do what it claims to do?

    - by romandas
    I don't have any real (i.e. professional) experience with Steve Gibson's SpinRite, so I'd like to put this to the SF community. Does SpinRite actually do what it claims? Is it a good product to use? With a proper backup solution and RAID fault tolerance, I've never found a need for it, but I'm curious. There seem to be some conflicting messages regarding it, and no hard data to be found either way. On one hand, I've heard many home users claim it helped them, but I've heard home users say a lot of things -- most of the time they don't have the knowledge or experience to accurately describe what really happened. On the other hand, Steve's own description and documentation don't give me a warm fuzzy about it either. So what is the truth of the matter? Would you use it?

    Read the article
