Search Results

Search found 6580 results on 264 pages for 'require'.


  • Best Practices of Performance Management Plan (PMP)

    - by Robert Story
    Upcoming Webcast
    Title: Best Practices of Performance Management Plan (PMP)
    Date: April 22, 2010
    Time: 11 AM EST / 8 AM PST / 8:30 PM IST
    Product Family: EBS HRMS

    Summary: This webcast will cover the best practices of the Performance Management Plan (PMP) in very common scenarios. The best practices will address major issues around plan dates, new hires, manager transfers and related events. The session will also cover the HRMS patching strategy, key references and various customer communication channels. A short, live demonstration (only if applicable) and a question-and-answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Suite Communities in My Oracle Support. Note that all links require access to My Oracle Support.


  • Gradual approaches to dependency injection

    - by JW01
    I'm working on making my classes unit-testable using dependency injection. But some of these classes have a lot of clients, and I'm not ready to refactor all of them to start passing in the dependencies yet. So I'm trying to do it gradually: keeping the default dependencies for now, but allowing them to be overridden for testing. One approach I'm considering is just moving all the "new" calls into their own methods, e.g.: public MyObject createMyObject(args) { return new MyObject(args); } Then in my unit tests, I can just subclass this class and override the create functions, so they create fake objects instead. Is this a good approach? Are there any disadvantages? More generally, is it okay to have hard-coded dependencies, as long as you can replace them for testing? I know the preferred approach is to explicitly require them in the constructor, and I'd like to get there eventually. But I'm wondering if this is a good first step.
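
    A minimal sketch of that subclass-and-override idea in Java (all class names here are hypothetical, not from the original question):

        // Production class: the hard-coded dependency is isolated in a factory method.
        public class OrderService {
            protected PaymentGateway createPaymentGateway() {
                return new PaymentGateway(); // default dependency, unchanged for existing clients
            }

            public boolean process(String orderId) {
                return createPaymentGateway().charge(orderId);
            }
        }

        class PaymentGateway {
            boolean charge(String orderId) {
                // talks to the real payment service in production
                return true;
            }
        }

        class FakePaymentGateway extends PaymentGateway {
            @Override
            boolean charge(String orderId) {
                return true; // short-circuits instead of calling anything real
            }
        }

        // In the test suite: subclass and override the factory to supply the fake.
        class OrderServiceForTest extends OrderService {
            @Override
            protected PaymentGateway createPaymentGateway() {
                return new FakePaymentGateway();
            }
        }

    This technique is sometimes called "extract and override": existing callers keep working against the defaults, while tests get a seam to swap in fakes.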


  • Latest DSTv15 Timezone Patches Available for E-Business Suite

    - by Steven Chan
    If your E-Business Suite Release 11i or 12 environment is configured to support Daylight Saving Time (DST) or international time zones, it's important to keep your timezone definition files up-to-date. They were last changed in July 2010 and released as DSTv14. DSTv15 is now available and certified with Oracle E-Business Suite Release 11i and 12.

    Is Your Apps Environment Affected? When a country or region changes DST rules or their time zone definitions, your Oracle E-Business Suite environment will require patching if:

    - Your Oracle E-Business Suite environment is located in the affected country or region, OR
    - Your Oracle E-Business Suite environment is located outside the affected country or region but you conduct business or have customers or suppliers in the affected country or region.

    We last discussed the DSTv14 patches on this blog. The latest "DSTv15" timezone definition file is cumulative and includes all DST changes released in earlier time zone definition files. DSTv15 includes changes to the following timezones (with the affected years) since the DSTv14 release: Africa/Cairo (2010), Egypt (2010), America/Bahia_Banderas (2010), Asia/Amman (2002), Asia/Gaza (2010), Europe/Helsinki (1981-1982), Pacific/Fiji (2011), Pacific/Apia (2011), Hongkong (1977), Asia/Hong_Kong (1977), Europe/Mariehamn (1981-1982).


  • Outdoor Programming Jobs...

    - by Rodrick Chapman
    Are there any kinds of jobs that require programming (or at least competency) but take place outdoors for a significant portion of the time? As long as I'm fantasizing, an ideal job would involve programming in a high level language like Haskell, F#, or Scala* for, say, 50% of the time and doing something like digging an irrigation trench the rest of the time. My background: I triple majored in mathematics, philosophy, and history (BS/BA) and have been working as a web developer for the past six years. I love hacking but I'm feeling a bit burned out. *I only chose these languages as examples since, ideally, I'd want to work among high caliber people... but it really doesn't matter.


  • CSS Style Element if it does not contain another specific type of Element [migrated]

    - by Chris S
    My CSS includes the following: #mainbody a[href^='http'] { background: transparent url('/images/icons/external.svg') no-repeat top right; padding-right: 12px; } This places an "external" icon next to links that start with "http" (all internal site links are relative). It works perfectly, except that if I link an image, it also gets this icon. For example: <a href='http://example.com'><img src='whatever.jpg'/></a> would also get the "external" icon next to the image. I can live with this if necessary, but would like to eliminate it. This must be implemented in CSS (no JS), and must not require any special IDs, classes, or styling in the HTML for the image or the anchor around the image. Is this possible?
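
    One pure-CSS approach worth noting: the :has() pseudo-class can exclude anchors that contain an image. Browser support for :has() arrived long after this question was written, so treat this as a sketch rather than a drop-in fix for the era:

        /* Only decorate external links that do NOT wrap an image (requires :has() support). */
        #mainbody a[href^='http']:not(:has(img)) {
            background: transparent url('/images/icons/external.svg') no-repeat top right;
            padding-right: 12px;
        }

    This keeps the requirement intact: no extra IDs, classes, or inline styling on the image or its anchor.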


  • Where can I buy freely redistributable (creative commons) game assets?

    - by Erlend
    I'd like to know about any 3D asset shops out there that specialize in game assets and, most importantly, license their assets under an open license like Creative Commons or similarly permissive. We are looking to buy some professional looking assets for use and redistribution with our open source 3D game engine. The problem is that all the commercial 3D assets we've come by are only sold under very restrictive licenses, which won't allow us to include the models in our code repository (since free code hosting repositories require that all your data, including media, is open source or otherwise copyleft) nor in turn redistribute the assets as part of our downloadable SDK. I realize this sounds like a weak business idea, since users could just buy the asset and start sharing it with everyone. But somehow this has worked for hundreds of WordPress theme shops, so I was hoping maybe someone's trying similar things for commercial game assets.


  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT

    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment: constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three: source controlling a database, running continuous integration processes, then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages (source control, basic, then advanced CI) then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology: there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users." There's another really useful piece here on Simple-Talk, written by Phil Factor, about the need for continuous delivery and how it applies to the database, specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready? If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way.
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point, of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we'll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It's a cliché in the DevOps community that "it's not all about processes and tools, really it's all about culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group." Like most clichés, there's truth in there: if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge. For HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value, I'd strongly recommend just buying them a copy of this book: a great read, and a very practical guide to how it can really work at a large org.

    I've spoken to many customers who have implemented database CI who describe their deployment process as "the point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay isn't it?" isn't likely to go down well.
    There's some work here to bring him/her onside: to explain what you're doing, why there will still be control of the deployment process, and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do; or, if you're the DBA yourself, get the dev team up to speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? Really this question is: what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "what we want, to do continuous delivery properly" and "what we're currently stuck with". Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we've got: A single shared "development" environment, used by everyone at once.
    - What we want: An integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we've got: In fact, if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: A proper pre-production (or "staging") box that matches production as closely as possible. What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we've got: A production box which has drifted significantly from anything in source control.

    The big question is: how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application.
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases.
    - An integration server used for testing continuous integration and running unit tests. [NB: This is the point up to which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    - Pre-production – the environment you use to test the production release process.
    - Production.

    * A note on the use of the word "automatic": when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic, in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready.
    - Set access permissions appropriately.
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?":

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it in to pre-prod.
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates an upgrade script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all working, run the script on production.*

    * This assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you're interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment: whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process: the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"

    NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don't, and when things go wrong you need a rollback or recovery plan for what you're going to do in that situation. Once you move in to a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months; maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

    NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem (see the short sketch at the end of this section).

    There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup.
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!).
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons: some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.
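
    For the mid-transaction case mentioned in the NB above, a guarded deployment script can undo everything on error. A minimal sketch in T-SQL (the table and column are hypothetical, and THROW assumes SQL Server 2012 or later):

        BEGIN TRANSACTION;
        BEGIN TRY
            -- The schema change being deployed (illustrative only).
            ALTER TABLE dbo.Orders ADD Region NVARCHAR(50) NULL;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION; -- failure detected before commit: revert the whole change
            THROW;                -- re-raise so the release tooling sees the error
        END CATCH;

    The genuinely hard cases are the ones after the commit, which is why the options above matter.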
    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier.

    A good example is the pattern "branch by abstraction". Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily).

    There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there's a difference here between migrating old projects and starting afresh – with the latter it's much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to "best practice". It's a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having "best practice" changes imposed on them, so it's good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We've covered some of the checklist of areas to consider – mainly "getting the team ready for the changes that are coming" and "planning out your pipeline, environments, patterns and practices for development" – though there will be more detail, depending on where you're coming from, and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.


  • How do I remove webapps completely?

    - by user104106
    I'd like to be able to remove webapps from my system; I don't want any of the packages associated with it. But when I try to remove it, I am told there are several other packages that require it. Please change this: I don't like the idea of webapps and I want it gone. I'm tired of getting annoying popups that don't explain anything, asking me to install something! It feels like a huge security breach, and the user is given no information as to what they are choosing. This is a very bad decision on Canonical's part. I should be working for them; something like this would never ship.
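
    For what it's worth, a rough way to find and remove the webapps packages; the package name below is a guess for this Ubuntu release, so verify with the search first:

        apt-cache search webapps                  # list webapps-related packages on your release
        sudo apt-get purge unity-webapps-common   # example target; apt will list what else it would remove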


  • Why is trailing whitespace a big deal?

    - by EpsilonVector
    Trailing whitespace is enough of a problem for programmers that editors like Emacs have special functions that highlight it or get rid of it automatically, and many coding standards require you to eliminate all instances of it. I'm not entirely sure why, though. I can think of one practical reason for avoiding unnecessary whitespace: if people are not careful about avoiding it, they might change it in between commits, and then we get diffs polluted with seemingly unchanged lines, just because someone removed or added a space. This already sounds like a pretty good reason to avoid it, but I do want to see if there's more to it than that. So, why is trailing whitespace such a big deal?
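
    To make the diff-pollution point concrete, here is a hypothetical hunk where the removed line and the added line differ only by a trailing space at the end of the removed line; the code is otherwise identical:

        --- a/main.c
        +++ b/main.c
        @@ -1,3 +1,3 @@
         int main(void) {
        -    return 0; 
        +    return 0;
         }

    Multiply that by an editor that strips whitespace on save, and a one-line fix can show up as a change to every line someone once left a space on.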


  • Starting off with web dev with php

    - by pavan kumar
    I'm currently working with Java / C++. I'm interested in web development and am planning to shift my stream. I heard that PHP is a good platform to start off with, and that it does not require much knowledge of technologies like JSP / Servlets or frameworks like Spring / Struts / Hibernate. I have basic ideas about HTML and JavaScript as well. I have gone through previous posts on SO and found the relevant resources as well: http://net.tutsplus.com/tutorials/php/the-best-way-to-learn-php/ http://www.webhostingtalk.com/archive/index.php/t-1028265.html http://www.killerphp.com/ http://phpforms.net/tutorial/tutorial.html http://www.php5-tutorial.com/ etc. Now, my questions are: I have heard of PHP frameworks like CodeIgniter, Zend Framework and Yii. Doesn't learning PHP & MySQL implicitly make us aware of these frameworks? Am I making a good choice in starting with PHP? Is it a good idea to shift streams?


  • Can a loosely typed language be considered true object oriented?

    - by user61852
    Can a loosely typed programming language like PHP really be considered object oriented? I mean, the methods don't have return types, and method parameters have no declared types either. Doesn't class design require methods to have a return type? Don't method signatures have specifically-typed parameters? How can OOP techniques help you code in PHP if you always have to check the types of parameters received, because the language doesn't enforce types? Please, if I'm wrong, explain it to me. When you design things using UML, then code classes in PHP with no return-typed methods and untyped parameters... is the code really compliant with the UML design? You spend time designing the architecture of your software, then the compiler doesn't force the programmer to follow your design while coding, letting him/her assign any object variable to any other variable with no "type-mismatch" warning.
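
    Worth noting alongside the question: PHP 5 does enforce parameter types for classes, interfaces and arrays via type hints, even though return types cannot be declared. A small sketch (all names are illustrative):

        <?php
        interface Logger {
            public function log($message);
        }

        class OrderProcessor
        {
            private $logger;

            // PHP raises an error at call time if $logger is not a Logger.
            public function __construct(Logger $logger)
            {
                $this->logger = $logger;
            }

            // The array type hint is enforced too; the return type is not declared.
            public function process(array $items)
            {
                $this->logger->log('processing ' . count($items) . ' items');
                return count($items);
            }
        }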


  • Ubuntu 11.10 with KDE installed does not prompt for elevation for privileged ops in all apps

    - by Michael Goldshteyn
    I installed the KDE window manager on top of Ubuntu 11.10, and while I am using KDE I do not get an elevation dialog when I try to perform tasks that require root privileges. Instead, the operations silently fail, unless I launch apps from a terminal, in which case I get errors like:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/softwareproperties/gtk/SoftwarePropertiesGtk.py", line 649, in on_isv_source_toggled
            self.backend.ToggleSourceUse(str(source_entry))
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 143, in __call__
            **keywords)
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 630, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: com.ubuntu.SoftwareProperties.PermissionDeniedByPolicy: com.ubuntu.softwareproperties.applychanges

    Or from the muon package manager, a similar permission-denied error dialog. Does anyone know what I need to do to fix this, so that I get a proper dialog asking for elevation? Otherwise, I have to start each app that may need root privileges with sudo from a terminal, or gksudo. Thanks
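
    A likely cause is that no PolicyKit authentication agent is running in the KDE session, so authorization requests have nothing to prompt with. A sketch of a possible fix (the package name and path are from Ubuntu's KDE 4-era packaging; verify both on your system):

        sudo apt-get install polkit-kde-1
        # The agent normally autostarts on login; to start it by hand for the current session:
        /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1 &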


  • Apt-get update through tor

    - by Alexander
    I'm trying to update my apt-get list. In my country a lot of sites are blocked or have been blocked by companies. When I use a proxy for the whole system I get errors; tor works perfectly when browsing. My question is: can I update apt-get through a connection from tor? I mean, I want to unblock the blocked sites using the tor connection so I can perform "apt-get update" without errors. Thanks in advance. Edit: BTW, I'm using Ubuntu 13.10 and Tor 0.2.21.

        alexander@Alexander-PC:~$ sudo apt-get update
        [sudo] password for alexander:
        Ign http://extras.ubuntu.com saucy InRelease
        Ign http://security.ubuntu.com saucy-security InRelease
        Ign http://us.archive.ubuntu.com saucy InRelease
        Hit http://extras.ubuntu.com saucy Release.gpg
        Get:1 http://dl.google.com stable InRelease [1,540 B]
        Ign http://dl.google.com stable InRelease
        E: GPG error: http://dl.google.com stable InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
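
    One approach from that era is to point apt's HTTP proxy setting at a local HTTP-to-SOCKS proxy that forwards to Tor. A sketch, assuming privoxy on its default port 8118 and Tor's default SOCKS port 9050:

        sudo apt-get install privoxy
        # Tell privoxy to forward everything to Tor's SOCKS5 port:
        echo 'forward-socks5 / 127.0.0.1:9050 .' | sudo tee -a /etc/privoxy/config
        sudo service privoxy restart
        # Point apt at the local proxy:
        echo 'Acquire::http::Proxy "http://127.0.0.1:8118/";' | sudo tee /etc/apt/apt.conf.d/99tor
        sudo apt-get update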


  • Is finding graph minors without single node pinch points possible?

    - by Alturis
    Is it possible to robustly find all the graph minors within an arbitrary node graph where the pinch points are generally not single nodes? I have read some other posts on here about how to break your graph up via a Hamiltonian cycle and then from that find the graph minors, but it seems such an algorithm would require that each "room" have "doorways" consisting of single nodes. To explain a bit more, a visual aid is necessary. Let's say the nodes below are an example of the typical node graph. What I am looking for is a way to automatically find the different colored regions of the graph (or graph minors).


  • Incorrect instructions on Upgrading to 12.10 from 12.04LTS

    - by Russ F
    https://wiki.ubuntu.com/QuantalQuetzal/TechnicalOverview/Beta1 reports incorrect instructions for upgrading from Ubuntu 12.04 LTS. The correct steps are: Alt+F2, Update Manager, choose settings, updates tab and set notify to "For any new version." Close the manager. Press Alt+F2, Terminal, then enter "sudo update-manager -d" (without the quotes)... Sorry to pester this list, but the Ubuntu wiki has no provisions for "Talk" or "Discussion" that do not require registration and a login. I feel like I should be able to point out a problem without signing in.


  • 2D Tile-Based Concept Art App

    - by ashes999
    I'm making a bunch of 2D games (now and in the near future) that use a 2D, RPG-like interface. I would like to be able to quickly paint tiles down and drop character sprites to create concept art. Sure, I could do it in GIMP or Photoshop. But that would require manually adding each tile, layering on more tiles, cutting and pasting particular character sprites, etc. and I really don't need that level of granularity; I need a quick and fast way to churn out concept art. Is there a tool that I can use for this? Perhaps some sort of 2D tile editor which lets me draw sprites and tiles given that I can provide the graphics files.


  • 12.04 Ubuntu studio PREEMPT_RT kernel options

    - by Nate Iverson
    My audio processing needs require a preempt_rt kernel. I roughly followed the guide https://wiki.ubuntu.com/KernelTeam/GitKernelBuild with a little help from https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO. Currently I am using the 3.4 branch (the most recent at the time of this post): http://www.kernel.org/pub/linux/kernel/projects/rt/ I think I have a reasonable kernel config (for my machine at least). Multiple trials confirm I need the option CONFIG_PREEMPT_RT_FULL=y. I have the following questions:

    - Is anyone maintaining a recent CONFIG_PREEMPT_RT_FULL kernel in a PPA?
    - Is there any interest in providing CONFIG_PREEMPT_RT_FULL in the official Ubuntu Studio distribution?
    - Does anyone have recent config pointers for a CONFIG_PREEMPT_RT_FULL kernel?
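
    For anyone following the same guides, a couple of quick sanity checks that the running kernel really is the full-RT flavour (paths assume a standard install of a locally built kernel):

        grep 'CONFIG_PREEMPT_RT_FULL=y' /boot/config-$(uname -r)
        cat /sys/kernel/realtime    # present on PREEMPT_RT kernels; prints 1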


  • Chrome Apps + Native Client

    Did you know that you can use Native Client inside a Chrome App? Join +John Mccutchan and +Pete LePage as they introduce Native Client Acceleration Modules (NaCl AMs), which expose C++ libraries to JavaScript programs. NaCl AMs can be used for bulk data processing (compression, encryption), but they also work well in interactive applications that require low latency. We'll explain how to build a NaCl Acceleration Module and demo a Bullet Physics engine running inside a Chrome App, with a NaCl AM interacting with an HTML and JavaScript UI using three.js. We'll be live on Tuesday December 11th at 9am PT, showing you code and samples and answering your questions. From: GoogleDevelopers. Time: 45:00.


  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee, Part IV

    My third day was exhibition day for me! I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite. I found a number of interesting vendors and thought I would share what I found here. These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers.

    Highly Available EBS Upgrades
    A few years ago I worked with a customer that was a port authority. They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers. However, they only had a 2-hour downtime window to perform upgrades. This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily. It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage. This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business.

    Mobile on WebLogic
    I have come across a number of customers who want a comprehensive mobile solution, with connected and disconnected operation and so forth. ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure. The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don't have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations.

    Sharepoint Connector for WebCenter Content
    Obviously Sharepoint is an evil, pernicious intrusion into a company's IT estate, but it is widely deployed, many people like it, and they would also like to take advantage of Oracle products such as WebCenter Content. So I was encouraged to see that Fishbowl Solutions have created a connector for Sharepoint that allows it to bring in content from WebCenter. It looks like a valuable way to maintain the Sharepoint interface end users are used to, but extend the range of content by pulling stuff (technical term for content) from WebCenter.

    Load Balancing
    The Enterprise Deployment Guides are Oracle's bible on building highly available FMW environments, and each of them requires a front-end load balancer. I have been asked to help configure F5 load balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW. I like F5: they provide (relatively) easy-to-use products that do what they say on the side of the box. They may not have all the bells and whistles of some of their more expensive competitors, but they do the job and do it well! Besides which, I like their logo!

    Other Stuff
    I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
    A Quiet Night
    Wednesday night was the partner appreciation event, and I had decided to go back to the hotel and have an early night. I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial. I got the wrong hotel for the session, snuck in 20 minutes late at the back, and started working on the hands-on workshop. One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled, "Is that Antony?!" It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia. It was good to catch up with him. As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslam, Oracle ACE from the UK. After the tutorial Simon and I retired to the coffee shop to catch up and share stories. Two and a half hours later we decided it was time to retire; so much for an early night, but great to renew old friendships and find out what real customers are worrying about.


  • Authorizing a module in a framework

    - by Devon
    I've been studying PHP frameworks, and I've been looking at how you would go about properly authorizing a module for classes, methods, and database actions. For example, let's say I want a framework that includes different modules from different programmers: Some core classes may require special access; not all modules should have access to every core class unless authorized to. I do not want one module to be able to call another module's class/method if it is not supposed to be able to. I also don't want a security flaw in one module to be able to affect another module's database tables. I suppose an easy way to go about this is to have a database table for authorization to consult, but I doubt that is the best way to go about this. I'd appreciate any advice, or pointing me in the right direction for some reading.
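
    As a sketch of the non-database alternative: a static capability map shipped with the framework's configuration, consulted before dispatch. Everything here is hypothetical, just one possible shape for the idea:

        <?php
        // module name => core classes it is allowed to use
        $grants = array(
            'blog'  => array('Database', 'Cache'),
            'forum' => array('Database'),
        );

        function isAuthorized(array $grants, $module, $coreClass)
        {
            return isset($grants[$module])
                && in_array($coreClass, $grants[$module], true);
        }

        if (!isAuthorized($grants, 'forum', 'Cache')) {
            throw new RuntimeException('forum module may not use Cache');
        }

    A map like this can live in version-controlled config and be enforced by the framework's service locator, avoiding a database round-trip per check.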


  • In MATLAB, how can 'preallocating' cell arrays improve performance?

    - by Alex McMurray
    I was reading this article on MathWorks about improving MATLAB performance, and you will notice that one of the first suggestions is to preallocate arrays, which makes sense. But it also says that preallocating cell arrays (that is, arrays which may contain different, unknown datatypes) will improve performance. But how will doing so improve performance? The datatypes are unknown, so MATLAB doesn't know how much contiguous memory the contents will require even if it knows the shape of the cell array; surely it can't preallocate that memory? So how does this result in any improvement in performance? I apologise if this question is better suited for StackOverflow than Programmers, but it isn't asking about a specific problem so I thought it fit better here; please let me know if I am mistaken though. Any explanation would be greatly appreciated :)
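
    For reference, the pattern under discussion is just this (a minimal sketch): preallocating reserves the cell array itself, while each element's contents are still allocated when assigned:

        n = 100000;
        c = cell(1, n);                     % preallocate the cell array (one header per element)
        for k = 1:n
            c{k} = sprintf('item %d', k);   % element contents are allocated here, one by one
        end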


  • Learning c++ by contributing to open source projects

    - by user1189880
    I have some general programming experience with a few different languages; my most skilled is PHP. I want to spend a lot of time over the next year learning C++ in much more depth, and then eventually get to a good enough level to find a job as a junior developer working in C++. I really struggle to find things to develop as toy programs, so I want to contribute to an open source project in C++ to get really stuck into. But the projects I see on GitHub in C++ are very large and would require a lot of knowledge to even get started. Are there any smaller projects that I can contribute to, or are there any other good ideas for learning C++ at a practical level?


  • How to recover unavailable memory in /dev/shm

    - by Alain Labbe
    Good day to all. I have a question regarding the use of /dev/shm. I use it as a temporary folder for large files, to speed up processing and save IO on the HD. My problem is that some of my scripts sometimes require "forceful" interruption for a variety of reasons. I can then manually remove the files left over in /dev/shm, but the memory is not returned to available space (as seen by df -h). Is there any way to recover the memory without restarting the system? I'm using LTS 12.04 and most of the scripts are Perl, running system calls on C programs (bioinformatics tools). Thanks.
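
    A sketch of the usual checks in this situation (the file pattern is illustrative): tmpfs space is only returned once files are removed and no process still holds them open, so deleted-but-open files are worth looking for:

        df -h /dev/shm          # how much is currently used
        ls -la /dev/shm         # leftovers from interrupted runs
        rm -f /dev/shm/myjob_*  # remove your temporary files (pattern is illustrative)
        sudo lsof /dev/shm      # any process still holding files open keeps the memory reserved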


  • Using model tools as map editor

    - by cooky451
    I want to make a game which would require a 3D map editor. Of course, I would like to avoid creating such an editor. My idea is to use modeling tools (3DS Max, Maya, Blender) to create the map, and to give game-specific objects designated names. This way I'd just need to write a COLLADA-to-native-map-format converter. But I'm not sure if this is possible the way I imagine it, which is why I'd like to hear your thoughts on the matter. Are modeling tools suitable for creating big open-world maps? Can this "naming convention" idea for game-specific objects work? Are the modeling tools able to export a scene in chunks, in a way that occlusion culling and collision detection can be properly done? If not: is there a way to build a suitable data structure from the exported data?


  • Oracle E-Business Suite 12 Certified on Additional Linux Platforms

    - by John Abraham
    As a follow-up to our original certification announcement regarding Oracle Linux 6, Oracle E-Business Suite Release 12 (12.1.1 and higher) is now certified on the following additional Linux x86/x86-64 operating systems:

    - Oracle Linux 6 (32-bit)
    - Red Hat Enterprise Linux 6 (32-bit)
    - Red Hat Enterprise Linux 6 (64-bit)
    - Novell SUSE Linux Enterprise Server (SLES) version 11 (64-bit)

    New installations of the E-Business Suite on these operating systems require version 12.1.1 of the Release 12 media. Cloning of existing 12.1 Linux environments to this new OS is also certified using the standard Rapid Clone process. There are specific requirements to upgrade technology components such as the Oracle Database (to 11gR2) and Fusion Middleware as necessary. These and other requirements are noted in the Installation and Upgrade Notes (IUN) below.

    References:
    - Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 (My Oracle Support Document 761566.1)
    - Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86 (My Oracle Support Document 761564.1)
    - Cloning Oracle Applications Release 12 with Rapid Clone (My Oracle Support Document 406982.1)
    - Interoperability Notes: Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0) (My Oracle Support Document 1058763.1)
    - Oracle Linux website

