Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The Peoplesoft Update Manager

    - by John Webb
    The new PeopleSoft Update Manager in PeopleSoft 9.2 changes the way you manage updates to your PeopleSoft systems, putting you in control of all changes on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used. Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates such as new features, legal and regulatory changes, or a patch related to a specific issue, process or object. Any prerequisites are automatically identified. The process of generating a change package is driven by a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • CodePlex Daily Summary for Monday, June 24, 2013

    CodePlex Daily Summary for Monday, June 24, 2013Popular ReleasesVG-Ripper & PG-Ripper: VG-Ripper 2.9.43: changes NEW: Added Support for "ImageJumbo.com" links NEW: Added Support for "ImgTiger.com" links NEW: Added Support for "ImageDax.net" links NEW: Added Support for "HosterBin.com" links FIXED: "ImgServe.net" linksWPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable releaseAscend 3D: Ascend 2.0: Release notes: Implemented bone/armature animation Refactored class hierarchy and naming Addressed high CPU usage issue during animation Updated the Blender exporter and Ascend model format (now XML) Created AscendViewer, a tool for viewing Ascend modelsIndent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d(supports .NET 3.5, 4.0 and 4.5) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples (not for .NET 3.5) Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage) Resizing (ribbon reducing & enlarging principles) Galleries (Gallery in ContextMenu, InRibbonGallery) MVVM (shows how to use this library with Model-View-ViewModel pattern) KeyTips ScreenTips Toolbars ColorGallery *Walkthrough (do...Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta ReleaseOutlook 2013 Add-In: Email appointments: This new version includes the following changes: - Ability to drag emails to the calendar to create appointments. 
Will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one selected). - Increased maximum of numbers to display appointments to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...Caliburn Micro: WPF, Silverlight, WP7 and WinRT/Metro made easy.: Caliburn.Micro v1.5.2: v1.5.2 - This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages Available on Nuget Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro inversion of control container (IoC); source code...SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer Bug fixes: - Resolved issue where the application crashes when loading a database that contains tables without a primary key Features: - Displays database information (database version, filename, size, creation date) - Displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields) - Displays the information schema views - Displays column information (database type, clr type, max length, allows null, etc) - Support...CODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvement Use new connection controls (use of Microsoft.Xrm.Client.dll) New display capabilities for tools (size, image and colors) Added prerequisites check Added Most Used Tools feature Tools improvementNew toolSolution Transfer Tool (v1.0.0.0) developed by DamSim Updated toolView Layout Replicator (v1.2013.6.17) Double click on source view to display its layoutXml All tools list Access Checker (v1.2013.6.17) Attribute Bulk Updater (v1.2013.6.18) FetchXml Tester (v1.2013.6.1...Media Companion: Media Companion MC3.570b: New* Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. Fixed* Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...Document.Editor: 2013.24: What's new for Document.Editor 2013.24: Improved Video Editing support Improved Link Editing support Minor Bug Fix's, improvements and speed upsExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? 
ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout eventpatterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4New ProjectsActivity Injector: Activity Injector is mainly used in unit testing workflow activities.Application Starter Tremens: Application for starting ApplicationaAudiwikiREF: This project is created to finish my referral assignment because i faced many problems with the previous one where i couldn't connect the project and commit. AVTS GLUCK ANTIVIRUS SYSTEM: AVTS Gluck AntiVirus System - the best antivirus made by young developers from Ukraine. It was writed in C#, C++, VB.NET, Excutable files, PHP DS, .and more....Base64 File Converter: File converting tool that converts any file into Base64 TXT file and vice versa just by drag-and-drop gesture.Basket Builders Umbraco bootstrap packages: Our umbraco packages to speed up project Bugzy Bug Tracking and Work Management: Creating Bug Tracking and Work Managment system while helping the JSON-RPC standard become as strong as the Bloated XML-RPC(SOAP)Dalmatian Build Script: Dalmatian allows you to automate your build using C# of VB.NETDécouverte Majeure MISN Chrono-courses: Chronocourses projectDocToolTip Library: DocToolTip library allows creating screenshots of winform forms with the name and type of component controls.DotaPick: SummaryHiUpdateTools - easy publish and update your app: HiUpdateTools is a easy tools to use publish new version of your application Kinect Skeleton Recorder: Records and shows XBox 360 Kinect skeleton data using windows forms. Saves the data as XML that can be read by other applications for use in animation.MARIT (BETA): MARIT is a simple drawing app for Android. More features are on the todo-list so stay tuned! Feedback and requests are appreciated. MvcToolbox: MVC Toolbox is a library which contains lots of usefull methods which will help developer to write less code and to be more productive.PrototypeRef: list of testing toolsSmartPlay: SmartPlaySolid Geometry Simulation: Implements a simple fur/hair dynamic engine using physBAM. This package includes a semi-implicit solver for hair simulation and plugins for maya/houdini.Windows.Forms.Controls Revisited (SSTA.WinForms): SearchableListBox control built on the Window,Form.ListBox renders a list box items with multiple search terms highlighted.YiLeSystem: This is a dining system.

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those who use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information and more is available in TFS. Although all of this traceability is available within TFS you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the History of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “AsOf” clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
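    As a rough illustration (the specific fields and date below are hypothetical, not taken from the article), an historical query in this format might look like:

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-01-15'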
    Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset which is committed to TFS in a single operation. During this operation developers can associate a Work Item with the Changeset. Figure: Tasks are associated with Changesets Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of those Changesets, associate the Changesets with the Build, and change the “Integrated In” field of those Work Items. Figure: Find all of the Work Items to associate with The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business signs off on the Test Case moving from the “Design” to “Ready” states. The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test and the test results of a Unit Test designed to prove the existence of, and then the non-recurrence of, a Bug. Figure: Associating a Test Case with an automated Test Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Let's not have regressions In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don’t even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member. Business Analysts manage Requirements and Change Requests Programmers manage Tasks and check-in against Change Requests and Bugs Testers manage Bugs and Test Cases Build Masters manage Builds Although there is some crossover there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Upgrade 10.04LTS to 10.10 problem

    - by Gopal
    Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tools Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file A fatal error occurred Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade. Traceback (most recent call last): File "/tmp/tmpe_xVWd/maverick", line 7, in <module> sys.exit(main()) File "/tmp/tmpe_xVWd/DistUpgradeMain.py", line 158, in main if app.run(): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1616, in run return self.fullUpgrade() File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 1534, in fullUpgrade if not self.updateSourcesList(): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 664, in updateSourcesList if not self.rewriteSourcesList(mirror_check=True): File "/tmp/tmpe_xVWd/DistUpgradeController.py", line 486, in rewriteSourcesList distro.get_sources(self.sources) File "/tmp/tmpe_xVWd/distro.py", line 103, in get_sources source.template.official == True and AttributeError: 'Template' object has no attribute 'official' This is what i got when i tried to upgrade the desktop edition:sudo do-release-upgrade. One more info: I have kde installed.

    Read the article

  • Restoring an Ubuntu Server using ZFS RAID-Z for data

    - by andybjackson
    Having become disillusioned with hacking Buffalo NAS devices, I've decided to roll my own home server. After some research, I have settled on an HP Proliant Microserver with Ubuntu Server and ZFS (OS on 1 Ext4 disk, Data on 3 RAID-Z disks). As Joel Spolsky and Jeff Atwood say with regard to backup, I can't rest until I have done a restore in all of the failure scenarios that I am seeking to protect against. Q: How to configure Ubuntu Server to recognise a pre-existing RAID-Z array? Clearly if one of the data disks dies, then that is a resilvering scenario, which is well documented. If two of the data disks die, then I am into regular backup/restore land. If the OS dies and I can restore, also an easy scenario. But if the OS dies and I can't restore, then I need to recreate an Ubuntu server. But how do I get this to recognise my RAID-Z array? Is the necessary configuration information stored within and across the RAID-Z array and does it simply need to be found (if so, how)? Or does it reside on the OS ext4 disk (in which case how do I recreate it)?
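    (For reference, ZFS keeps the pool configuration on the member disks themselves, so a freshly reinstalled OS can usually rediscover an existing pool. A minimal sketch of the commands typically involved, assuming the ZFS tools are installed and using "tank" as a purely illustrative pool name:)

        sudo zpool import           # scan attached disks for importable pools
        sudo zpool import -f tank   # import the pool found above (-f if it was never exported)
        sudo zpool status           # check the RAID-Z vdev and disk health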

    Read the article

  • Rawr Code Clone Analysis&ndash;Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext).  It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication.  The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are.   People that know me know that I'm anal enthusiastic about both writing clean code, and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World Of Warcraft gear calculator engine program), run Code Clone Analysis against it, then go through the results one-by-one and refactor where appropriate, blogging along the way.  My goals with this blog series are twofold: Evaluate and demonstrate Code Clone Analysis Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base Here are the initial results:   Code Clone Analysis has found: 129 Exact Matches 201 Strong Matches 300 Medium Matches 193 Weak Matches Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring.  Considering the entire solution only has 109,763 lines of code, if true, the duplicated-lines number is pretty significant - roughly 41% of the code-base. In the next post we’ll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include: · Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season. · Interest in a promotion rises around the time advertising on TV airs. · Interest in the Sports section of a newspaper rises when there is a big football match. There are several ways of dealing with seasonality in predictions. Time Windows If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that: The time window is significantly shorter than the seasonal changes There is enough volume of data in the short time windows to produce an accurate model An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas. Input Data If available, it is possible to include the seasonality effect in the input data for the model. For example the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion – and by the way learn about inter-promotion cross feeding – by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that: The model sees enough events with the input present, for example by virtue of the model lifetime (or time window) being long enough to see several “seasons”, or by having enough volume for the model to learn seasonality quickly. Proportional Frequency If we create a model that ignores seasonality it is possible to use that model to predict how a specific person's likelihood differs from average. If we have a divergence from average then we can transfer that divergence proportionally to the observed frequency at the time of the prediction. Definitions: Ft = trailing average frequency of the event at time “t”. The average is done over a suitable period of time to achieve a statistically significant estimate. F = average frequency as seen by the model. L = likelihood predicted by the model for a specific person. Lt = predicted likelihood proportionally scaled for time “t”. If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as: Lt = L * (Ft / F) Considering that: L = (L – F) + F Substituting we get: Lt = [(L – F) + F] * (Ft / F) Which simplifies to: (i) Lt = (L – F) * (Ft / F) + Ft This last expression can be understood as “The adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the proportion of frequencies”. The formula above assumes a linear translation of the proportion.
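    As a quick worked example (the numbers are made up purely for illustration): suppose the model's average frequency is F = 2%, the trailing frequency at prediction time is Ft = 6%, and the model predicts L = 3% for a particular person. Then Lt = (0.03 – 0.02) * (0.06 / 0.02) + 0.06 = 0.09, i.e. 9%, which matches L * (Ft / F) = 0.03 * 3, as expected from the derivation.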
    It is possible to generalize the formula using a factor which we will call “a” as follows: (ii) Lt = (L – F) * (Ft / F) * a + Ft It is also possible to use a formula that does not scale the difference, like: (iii) Lt = (L – F) * a + Ft While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights: The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihood for different customers is preserved If F is equal to Ft then the formula reverts to “L” If (Ft = 0) then Lt in (i) and (ii) is 0 It is possible for Lt to be above 1. If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as: If X is 3%, then Y is 6% If X is 11%, then Y is 22% If X is 70%, then Y is 85% - in this case we interpret “twice as likely” as “half as likely to not happen” Applying this reasoning to (i) for example we would get: If (L < F) or (Ft < 1 / ((L/F) + 1)) Then Lt = L * (Ft / F) Else Lt = 1 – (F / L) + (Ft * F / L)
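    To see how the capped branch behaves, take made-up values F = 35%, Ft = 90% and L = 70%. Since L > F and Ft = 0.9 is not below 1 / ((L/F) + 1) = 1/3, the Else branch applies: Lt = 1 – (0.35 / 0.70) + (0.90 * 0.35 / 0.70) = 1 – 0.5 + 0.45 = 0.95, which stays below 1, whereas the linear formula (i) would have given (0.70 – 0.35) * (0.90 / 0.35) + 0.90 = 1.80.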

    Read the article

  • PublishingWeb.ExcludeFromNavigation Missing

    - by Michael Van Cleave
    So recently I have had to make the transition from the SharePoint 2007 codebase to the SharePoint 2010 codebase. Needless to say there hasn't been much difficulty in the changeover. However, in a set of code that I was playing around with and transitioning, I did find one change that might cause some pain to others out there who have been programming against the PublishingWeb object in the Microsoft.SharePoint.Publishing namespace. The 2007 snippet of code that worked just fine in 2007 looks like: using (SPSite site = new SPSite(url)) using (SPWeb web = site.OpenWeb()) {      PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);     publishingWeb.ExcludeFromNavigation(true, itemID);     publishingWeb.Update(); } The 2010 update to the code looks like: using (SPSite site = new SPSite(url)) using (SPWeb web = site.OpenWeb()) {     PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);     publishingWeb.Navigation.ExcludeFromNavigation(true, itemID); //--Had to reference the Navigation object.     publishingWeb.Update(); } The purpose of the code is to keep a page from showing up on the global or current navigation when it is added to the pages library. However, you can see that the update to the 2010 codebase actually makes more "object" sense. It specifies that you are technically affecting the Navigation of the Publishing Web, instead of the Publishing Web itself. I know that this isn't a difficult problem to fix, but I thought it would be something quick to help out the general public. Michael

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he established the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistency (mainly disk). Oracle Database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database - not vanilla, but still the same product. Let me tell you purist people out there: when you find no new ground-breaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer that launches a mini-van in the market and calls it a Sports Car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares - it still lets you put in flash amounts of data not possible up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind the Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • Fixing SharePoint 2010 Permission Problems on Windows 7

    - by Ricardo Peres
    I had a tough time trying to have SharePoint working perfectly on a Windows 7 development machine that was occasionally disconnected from the Active Directory (when I am home I must connect through a VPN). I mostly had problems with service applications such as User Profile, Managed Metadata, Business Connectivity Services and the like, and all I knew were cryptical messages such as “access denied” or “the service or application pool is not started”. I was sure that both the services and application pools were running under a domain account that had proper permissions on the SQL Server instance, and basically it was a fresh installation. Lots of people are having the same problem, apparently. After banging my head against the wall for several days, I remembered about farm (what I had) versus stand-alone (which I had never tried) installations. Bingo! Here’s what I did: I dropped all SharePoint databases and logins and reinstalled SP from scratch, only this time not in farm mode, but as stand-alone. After the SharePoint Configuration Wizard started, I cancelled it and started the Management Shell. I created the configuration database manually by using the New-SPConfigurationDatabase cmdlet where I specified a local account – something that the Configuration Wizard wouldn’t allow me to do. Then I restarted the Configuration Wizard and everything began working perfectly! Yes, I got some pre-configured service applications and also some content which I didn’t need, but I realized it was possible to drop and recreate everything the way I wanted to. All services and application pools are now running under local accounts, which is fine for my development needs. Really, Microsoft… I hope this will bring light to someone facing the same problems!
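    (For anyone attempting the same, a rough sketch of the kind of command involved follows - the database names, passphrase and local account below are placeholders, and the exact parameters should be double-checked against Get-Help New-SPConfigurationDatabase:)

        New-SPConfigurationDatabase -DatabaseName "SharePoint_Config" `
            -DatabaseServer "localhost\SHAREPOINT" `
            -AdministrationContentDatabaseName "SharePoint_AdminContent" `
            -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
            -FarmCredentials (Get-Credential "MYPC\sp_farm")   # a local, non-domain account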

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
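    (The original post shows these steps as screenshots; the sketch below is a rough reconstruction of what the calls look like, with the connection details assumed rather than taken from the post, and SimpleMWRR/myIRR being the function and package named in the article:)

        library(ORE)
        # Connection details here are placeholders
        ore.connect(user = "rquser", password = "secret", sid = "orcl",
                    host = "dbserver", all = TRUE)

        # Push the sample data into the database as table IRR_DATA
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        # Split IRR_DATA by ACCOUNT inside the database, apply SimpleMWRR to each
        # group in an embedded R engine, and combine the per-account results
        results <- ore.groupApply(IRR_DATA, IRR_DATA$ACCOUNT,
                                  function(dat) myIRR::SimpleMWRR(dat))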
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
    The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to view them full-screen): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above): The SQL completed in 110 seconds (1.8 minutes) We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation) We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation) We ran with 72 degrees of parallelism spread across 4 database servers Most of our 110 seconds was spent in the “External Procedure call” event On average, we performed 8,200 executions of our R function per second (911k accounts / 110s) On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts) On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods) On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s) R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Can windows XP be better than any Ubuntu (and Linux) distro for an old PC?

    - by Robert Vila
    The old laptop is a Toshiba 1800-100: CPU: Intel Celeron 800h RAM: 128 MB (works ok) HDD: 15GB (works ok) Graphics adapter: Integrated 64-bit AGP graphics accelerator, BitBIT, 3D graphic acceleration, 8 MB Video RAM Only Windows XP is installed, and works ok: it can be used, but it is slow (and hateful). I thought that I could improve performance (and its look) easily, since it is an old PC (drivers and everything known for years...) by installing a light Linux distro. So, I decided to install a light or customized Ubuntu distro, or Ubuntu/Debian derivative, but haven't been successful with any; not even booting LiveCDs: not even AntiX, not even Puppy. The Lubuntu wiki says that it won't work because the last two releases need more RAM (and some blogs say much more CPU - even Core Duo for new Lubuntu!), let alone Xubuntu. The problems I have faced are: 1. There are thousands of pages talking about the same 10/15 lightweight distros, and saying more or less the same things, but NONE talks about something as simple as what the RAM/swap-partition proportion should be for this kind of installation. NONE! 2. Loading the LiveCD I have tried several different boot options (I don't understand much about this and there's ALWAYS a line of explanation missing) and never receive error messages. Booting just stops at different stages, but often seems to stop just when the X server is going to start. I am able to boot to the command line. 3. I don't know whether the problem is RAM size or a problem with the graphics driver (which surprises me because it is a well known brand and line of computers). So I don't know if creating a swap partition would help the LiveCD boot. 4. I would like to try the graphical interface with the LiveCD before installing, if creating the swap partition for this purpose would help. How can I do the partition? I tried to use Boot Rescue CD, but it advises me against continuing forward. I would appreciate any ideas as regards these questions. Thank you
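    (For reference, a swap partition can be created and enabled from the live session's terminal once free space exists on the disk; a minimal sketch, assuming the new partition ends up as /dev/sda2 - the device name is only an example:)

        sudo fdisk /dev/sda         # create a new partition of type 82 (Linux swap) in free space
        sudo mkswap /dev/sda2       # format it as swap
        sudo swapon /dev/sda2       # start using it immediately
        swapon -s                   # confirm the swap space is active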

    Read the article

  • I cannot solve the "Install these packages without verification" problem

    - by Yonatan Orlev
    I Googled and Googled and I just cannot find a solution to this problem: sudo apt-get install <whatever> gives me: WARNING: The following packages cannot be authenticated! and Install these packages without verification [y/N]? I cannot find a decent solution. The closest I got was to run: sudo apt-get install debian-keyring debian-archive-keyring But then, even though, and against my better judgment, I agreed to install the packages without verification, I get: (I replaced http with XXXX because of forum limitations). Install these packages without verification [y/N]? y Err XXXX://il.archive.ubuntu.com gutsy/universe debian-archive-keyring 2007.02.19-0.1 404 Not Found Err XXXX://il.archive.ubuntu.com gutsy/universe debian-keyring 2005.05.28 404 Not Found Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-archive-keyring/debian-archive-keyring_2007.02.19-0.1_all.deb 404 Not Found Failed to fetch XXXX://il.archive.ubuntu.com/ubuntu/pool/universe/d/debian-keyring/debian-keyring_2005.05.28_all.deb 404 Not Found E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Trying to run apt-get update also does not help: I get tons of "404 Not Found" errors. Can someone please direct me to a good solution to this problem? I cannot understand why this issue is not better documented. There must be a simple solution which allows me to update my list of sources or whatever.

    Read the article

  • TechEd 2012: A Little Cloud And Too Little Windows Phone

    - by Tim Murphy
    It is Monday afternoon and the last couple of sessions have been disappointing.  I started out in the Nokia: Learning to Tile session.  I guess I should have read the summary more closely because it turned out to be more of a Nokia/WP7 history and sales pitch. “I’m outa here!” I made a quick venue change and now we are learning about Private Cloud Architecture.  The topic and the material were very informative.  The speaker even had a couple of quotable statements. The first quote was “You can trust me … I’m a doctor”.  The second was a new acronym (at least for me): CAVE – committee against virtually everything.  I am sure I have dealt with them more than once in my career. Unfortunately he didn’t just have a doctorate, the presentation was overdone like a medical journal.  While I didn’t enjoy the presentation, I am looking forward to getting my hands on the slides to review. Here is looking forward to the next sessions. del.icio.us Tags: Windows Phone,Cloud,Architecture

    Read the article

  • Security Alert For CVE-2010-4476 Released

    - by eric.maurice
    Hello, this is Eric Maurice again. Oracle just released a Security Alert with a fix for the vulnerability CVE-2010-4476, which affects Oracle Java SE and Oracle Java For Business. This vulnerability is present in Java running on servers as well as standalone Java desktop applications. Its successful exploitation by a malicious attacker can result in a complete denial of service for the affected servers. While only recently publicly disclosed, a number of Internet sites have since then reproduced details about this vulnerability, including exploit codes, which may result in allowing a malicious attacker to create a denial of service condition against the targeted system. Oracle therefore strongly recommends that affected organizations apply this fix as soon as possible. Please note that a fix for this vulnerability will also be included in the upcoming Java Critical Patch Update (Java SE and Java for Business Critical Patch Update - February 2011), which will be released on February 15th 2011. Note that the impact of this vulnerability on desktops is minimal: the affected applications or applets running in Internet browsers for example, might stop responding and may need to be restarted; however the desktop itself will not be compromised (i.e. no compromise at the desktop OS level). Oracle therefore recommends that consumers use the Java auto-update mechanism to get this fix. This will prompt them to install the latest version of the Java Runtime Environment 6 update 24 or higher (JRE), which includes the fix for this vulnerability. JRE 6 update 24 will also be distributed with the Java SE and Java for Business Critical Patch Update - February 2011. For More Information: The Critical Patch Updates and Security Alerts page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html The Advisory for Security Alert CVE-2010-4476 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2010-4476-305811.html More information on Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html Consumers can go to http://www.java.com/en/download/installed.jsp to ensure that they have the latest version of Java running on their desktops. More information on Java Update is available at http://www.java.com/en/download/help/java_update.xml

    Read the article

  • unable to upgrade to 12.10 beta 2 from 12.04 [closed]

    - by user85959
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do? authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg' exception from gpg: GnuPG exited non-zero, with code 2 Debug information: gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using DSA key ID 437D05B5 gpg: /tmp/update-manager-bpIptI/trustdb.gpg: trustdb created gpg: Good signature from "Ubuntu Archive Automatic Signing Key " gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 6302 39CC 130E 1A7F D81A 27B1 4097 6EAF 437D 05B5 gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using RSA key ID C0B21F32 gpg: Can't check signature: public key not found Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/UpdateManager/UpdateManager.py", line 1110, in on_button_dist_upgrade_clicked fetcher.run() File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/DistUpgradeFetcherCore.py", line 253, in run _("Authenticating the upgrade failed. There may be a problem " File "/usr/lib/python2.7/dist-packages/UpdateManager/DistUpgradeFetcher.py", line 41, in error return error(self.window_main, summary, message) File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/utils.py", line 384, in error d.window.set_functions(Gdk.FUNC_MOVE) RuntimeError: unable to get the value gpg: /tmp/tmplqoLDu/trustdb.gpg: trustdb created

    Read the article

  • Project Codenames - Yea or Nay?

    - by rmx
    Where I work, most of our projects have (or at least attempt) descriptive, useful names. However, we have a few with names that make no sense: I found an assembly named WiFi that actually has nothing whatsoever to do with wi-fi; the name is just a codename. When I asked why, I was told that it's to protect company secrets in case some intern has a few too many at the pub on Friday and starts chatting about the brand new 'WiFi' project he's been working on. It's clear that some people find enjoyment in coming up with silly / amusing codenames for their projects (like in this question). My question is: is it really a good idea to use codenames for your projects, or are you better off spending the time to decide upon a descriptive name? My opinion is that in the long run it's better to give your projects relevant names. My reasoning is that if you can't think of a decent name, perhaps you don't really know the requirements well enough. I think there are better ways to 'protect company secrets', and I find it quite confusing when the name does not correlate at all with the content. It's just common sense, surely?! So do you use codenames, and what are your reasons for or against this seemingly common, yet annoying (to me at least), practice?

    Read the article

  • links for 2011-02-14

    - by Bob Rhubart
    Glenn Fawcett: Solaris Eye for the Linux Guy, or how I learned to stop worrying about Linux and Love Solaris (Part 1) Glenn says: "This entry goes out to my Oracle techie friends that have been in the Linux camp for some time now and are suddenly finding themselves needing to know more about Solaris… hmmmm… I wonder if this has anything to do with Solaris now being an available option with Exadata?" (tags: linux solaris oracle) Enterprise Software Development with Java: High Performance JPA with GlassFish and Coherence - Part 2 Oracle ACE Director Markus Eisele describes "the steps you have to take to configure a JPA backed Cache with Coherence and how you could use it from within GlassFish as a high performance data store." (tags: oracle otn oracleace java glassfish coherence) TOGAF a Registered Trademark and Surpasses 15k Certifications (EA Blogs) Mike Walker relays news on the TOGAF standard. (tags: entarch togaf) Weblogic or wait? | Capping IT Off | Capgemini "So when would you move over to the new Oracle Technology?" asks Arjan Kramer. "Well, as always there can be several reasons..." (tags: oracle capgemini weblogic) Random Monday Thoughts (Art of SOA Governance) "Governance is what insurance is to new cars, be it to SOA, IT transformations and software development. Governance is an insurance policy against risk of failure." - Terry Goldman (tags: oracle otn soa soagovernance)

    Read the article

  • What are the best ways to professionally increase your online presence?

    - by Rob S.
    I've been hunting around the job market for a little bit now and I've been shocked by some of the things I am seeing. Software developers who make themselves more "known" online are getting far more and far better positions than people competing against them who are not as well "known" online. After doing some reading on the subject, I realized that I actually shouldn't be so shocked by this. We are living in the most fast-paced era of mankind, and employers want to be able to learn as much as they can about a potential employee before they hire them. The easier we as software developers make it for employers to find us, it seems, the better our chances of landing that dream job become. In some cases, employers are even finding us instead of us applying to them. So what are some of the best ways for me as a software developer to increase my online presence? I already hang around Stack Exchange sites such as Programmers and Stack Overflow, increasing my rep whenever I can. I maintain many open source projects as both a committer and project owner on Google Code and GitHub. I have a Twitter account, a website, and a blog. What else can I do to give myself a bigger online presence? Additionally, are there any good do's and don'ts for handling your web presence? Bashing your employer is an obvious don't, but I'm interested in everything from the most basic to the most subtle suggestions for giving yourself a more appealing online presence. Thank you very much in advance for your time.

    Read the article

  • How can I justify software testing to management?

    - by Nate
    I work for a small company (fewer than 200 employees) whose software group makes up only a small part of our staff (4 employees, occasionally with a few contractors). The four of us have been making strides in transitioning to better practices, and one of the next logical steps is to improve our testing. As anyone who has done any meaningful testing knows, testing takes a lot of time - and at my company, it takes too much time to justify to management, so we generally do what little we do on the sly. I don't think this is serving us well, as we keep coming up against otherwise avoidable problems when we ship under-tested software. I would like to be able to come to management with a justification for hiring a dedicated software test engineer (someone who can both write automated tests and perform manual ones). Are there any good published studies that show the benefits of adding such a position to a small company? Where can I find information about costs associated with the position? I plan on doing a little number crunching on our own history, but having some external sources to point to would help bolster my case.
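
    As a rough illustration of the kind of number crunching involved (the figures below are invented placeholders, not data from this post), the justification often comes down to a simple break-even comparison: the fully loaded cost of a dedicated test engineer against the cost of defects that currently escape to customers (support time, emergency fixes, lost goodwill). A minimal sketch:

        public class TestEngineerBreakEven {
            public static void main(String[] args) {
                // All figures are hypothetical; substitute numbers from your own history.
                double testerAnnualCost = 90_000;        // salary plus overhead
                double escapedDefectsPerYear = 40;       // defects that reached customers
                double avgCostPerEscapedDefect = 3_500;  // support, hotfixes, goodwill
                double expectedReduction = 0.60;         // share of escapes a tester would catch

                double currentDefectCost = escapedDefectsPerYear * avgCostPerEscapedDefect;
                double expectedSavings = currentDefectCost * expectedReduction;
                double netBenefit = expectedSavings - testerAnnualCost;

                System.out.printf("Cost of escaped defects today: %.0f%n", currentDefectCost);
                System.out.printf("Expected savings with a tester: %.0f%n", expectedSavings);
                System.out.printf("Net benefit (negative means not yet justified): %.0f%n", netBenefit);
            }
        }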

    Read the article

  • Explanation on how "Tell, Don't Ask" is considered good OO

    - by Pubby
    This blogpost was posted on Hacker News with several upvotes. Coming from C++, most of these examples seem to go against what I've been taught. Such as example #2: Bad: def check_for_overheating(system_monitor) if system_monitor.temperature > 100 system_monitor.sound_alarms end end versus good: system_monitor.check_for_overheating class SystemMonitor def check_for_overheating if temperature > 100 sound_alarms end end end The advice in C++ is that you should prefer free functions instead of member functions as they increase encapsulation. Both of these are identical semantically, so why prefer the choice that has access to more state? Example 4: Bad: def street_name(user) if user.address user.address.street_name else 'No street name on file' end end versus good: def street_name(user) user.address.street_name end class User def address @address || NullAddress.new end end class NullAddress def street_name 'No street name on file' end end Why is it the responsibility of User to format an unrelated error string? What if I want to do something besides print 'No street name on file' if it has no street? What if the street is named the same thing? Could someone enlighten me on the "Tell, Don't Ask" advantages and rationale? I am not looking for which is better, but instead trying to understand the author's viewpoint.
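
    The examples above are Ruby, but the same refactoring can be sketched in a statically typed language, which may make the contrast with the C++ free-function advice easier to see. This is not from the original post; it is a hypothetical Java paraphrase of the overheating example, where the "ask" style interrogates the object's state and decides externally, while the "tell" style pushes the decision into the object that owns the state.

        class Alarm {
            void sound() {
                System.out.println("ALARM: overheating");
            }
        }

        class SystemMonitor {
            private final double temperature;
            private final Alarm alarm;

            SystemMonitor(double temperature, Alarm alarm) {
                this.temperature = temperature;
                this.alarm = alarm;
            }

            // "Ask" style: expose state so that callers can branch on it.
            double getTemperature() {
                return temperature;
            }

            // "Tell" style: the object that owns the state makes the decision.
            void checkForOverheating() {
                if (temperature > 100) {
                    alarm.sound();
                }
            }
        }

        public class TellDontAskDemo {
            public static void main(String[] args) {
                SystemMonitor monitor = new SystemMonitor(120, new Alarm());

                // Asking: the caller pulls state out and duplicates the policy.
                if (monitor.getTemperature() > 100) {
                    new Alarm().sound();
                }

                // Telling: the caller delegates the decision to the object.
                monitor.checkForOverheating();
            }
        }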

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-600 or ORA-07445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions; it indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets: ORA-600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]". The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact on your system. The remaining arguments in the ORA-600 error text supply further information (e.g. the values of internal variables). Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool will lead you to applicable content in My Oracle Support on the problem; it can be used to investigate the problem with the argument data from the error message, or you can pull out the first 10 or 15 stack pointers from the associated trace file to match against known bugs. Note 153788.1 ORA-600/ORA-7445 Troubleshooter; Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]. Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • Interesting SSMS issue with waittype of PREEMPTIVE_OS_LOOKUPACCOUNTSID

    - by steveh99999
    Saw a recent issue with SQL Server 2008 SP1 where SQL Server Management Studio (SSMS) appeared to hang when a DBA expanded the database users tab of a database. When we looked at the wait type of the SSMS session, via sys.dm_os_waiting_tasks, it was waiting on PREEMPTIVE_OS_LOOKUPACCOUNTSID. I had not come across this wait type before, and there was very little information out on the web for it. Looking at this issue using SQL Profiler, it then became apparent that SSMS was waiting on the function SUSER_SNAME. I did a quick test using SET STATISTICS TIME to prove it was the SUSER_SNAME function causing our issue: SELECT * FROM sys.database_principals -- completes in 6 milliseconds; SELECT SUSER_SNAME(sid), * FROM sys.database_principals -- completes in 188941 milliseconds. SUSER_SNAME validates a SID against Active Directory. What I haven't mentioned so far is that the database in question had been restored from a server on a different domain, and that domain was not trusted by the current domain. However, it appeared that SSMS was still trying to validate the Windows login of each user when displaying the users list, and it took a long time to return NULL for each user from the untrusted domain. Removing all users that had a login from the untrusted domain fixed the issue. To find the users affected: SELECT name FROM sys.database_principals WHERE type_desc IN ('WINDOWS_USER', 'WINDOWS_GROUP') AND SUSER_SNAME(sid) IS NULL. Although I have not raised this as a case with Microsoft, I do think this is a bug in SSMS; I do not see why Windows logins should need to be validated when displaying only user details.

    Read the article

  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut: import Foreign.Marshal.Alloc(alloca) import Foreign.Ptr(Ptr) import Foreign.Storable(Storable, peek) import Graphics.Rendering.OpenGL.Raw import qualified Graphics.UI.GLUT as GLUT import Graphics.UI.GLUT(($=)) main :: IO () main = do (_progName, _args) <- GLUT.getArgsAndInitialize GLUT.initialDisplayMode $= [GLUT.SRGBMode] _window <- GLUT.createWindow "sRGB Test" -- To prove that I actually have freeglut working correctly. -- This will fail at runtime under classic GLUT. GLUT.closeCallback $= Just (return ()) glEnable gl_FRAMEBUFFER_SRGB colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv gl_FRAMEBUFFER gl_FRONT_LEFT gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING print colorEncoding allocaOut :: Storable a => (Ptr a -> IO b) -> IO a allocaOut f = alloca $ \ptr -> do f ptr peek ptr On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB. Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?

    Read the article

  • BI&EPM in Focus - November 2011

    - by Mike.Hallett(at)Oracle-BI&EPM
    Enterprise Performance Management A Thing of Beauty, by Alison Weiss: Avon's enterprise performance management system delivers accurate information and critical insight to managers at every level of the organization. Oracle Crystal Ball Helps Managers Guard Against Volatility, by Alison Weiss. The Insight Game, by Aaron Lazenby: Enterprise performance management can deliver insights crucial to navigating the volatility of the global economy—and that's no game of checkers. KPI vs. the Bottom Line, by Edward Roske: For managers, is tracking the key metrics for their departments enough to ensure success for the entire business? The CEO for Oracle partner interRel shares his opinion. Deep Integration, by Aaron Lazenby: The synthesis of Oracle Hyperion applications and core Oracle technologies can deliver deep benefits to analytics-driven businesses. Oracle Crystal Ball, Oracle's #1 Solution for Risk Management. Follow EPM Documentation at Hyperion EPM Info for news about EPM documentation releases and updates (twitter | facebook | LinkedIn). Whitepaper: Integrating XBRL Into Your Financial Reporting Process. Oracle Hyperion Disclosure Management Customer Story: StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day. Sherwin-Williams Argentina I.C.S.A. Accelerates Budget Preparation Process by 75%. BBDO Germany GmbH Consolidates Financial and Planning Processes for More Than 50 Agencies. Business Intelligence Webcast Replay: Oracle Data Mining & BI EE - Predictive Analytics (Part 2). Innovation Award Winners - BI/EPM: HealthSouth, State of MD, Clorox Company, Telenor and Dunkin Brands. Leeds Teaching Hospitals National Health Service Trust Builds Budget Reports Six Times Faster, Achieves 100% ROI in 12 Months with Oracle Business Intelligence. Home Credit Group Consolidates Reporting and Saves Time across All Business Units w/ Oracle Essbase & OBIEE. Autoglass Improves Business Visibility and Services to Customers and Partners with Oracle Business Intelligence. Events: Download Oracle OpenWorld Oct 2011 Presentations (select Middleware - BI or Applications - Hyperion). Oracle Business Analytics Summits: learn about the latest trends, best practices, and innovations in business intelligence, analytics applications, and data warehousing. Webcast Nov 15 9am PST: Running the Last Mile, Beyond Financial Consolidations - Streamlining the Close and Addressing the SEC's XBRL Mandate. Webcast Dec 13 1pm PST: Defining Your Mobile BI Strategy (BICG). New Training Available: Oracle BI Publisher 11g R1: Fundamentals. Webcast Replay: How to Expand the Usage of Analytics in your Organization while Driving Down IT Spend. Webcast Replay: Real-Time Decisions (RTD) Updated Use Cases for Ecommerce Personalization in Financial Services & Retail

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >