Search Results

Search found 657 results on 27 pages for 'metrics'.


  • Best Design for creating Historic Reports on GAE

    - by charming30
    My App requires daily reports based on various user activities. My current design does not store daily totals in the database, which means I must compute them every time. For example: a report that shows the top 100 users based on the number of submissions they have made on a given day. For such a report, if I have 50,000 users, what is the best way to create a daily report? How do I then create monthly and yearly reports from the same data? If this is not a good design, how should I handle the situation where the metrics of a report are not clear at database-design time, and by the time they are clear we already have huge amounts of data with limited parameters (fields)? Please advise.
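    A minimal sketch of the usual fix, maintaining pre-aggregated per-user daily counters at write time (legacy GAE Python db API; the entity and helper names are hypothetical):

        from google.appengine.ext import db

        class DailySubmissionCount(db.Model):
            # key_name = "<user_id>:<YYYY-MM-DD>", so each user/day pair is one entity
            user_id = db.StringProperty(required=True)
            day = db.DateProperty(required=True)
            count = db.IntegerProperty(default=0)

        def record_submission(user_id, day):
            # Increment the counter transactionally instead of recomputing totals at report time.
            key_name = '%s:%s' % (user_id, day.isoformat())
            def txn():
                counter = DailySubmissionCount.get_by_key_name(key_name)
                if counter is None:
                    counter = DailySubmissionCount(key_name=key_name, user_id=user_id, day=day)
                counter.count += 1
                counter.put()
            db.run_in_transaction(txn)

        def top_users(day, limit=100):
            # With a composite index on (day, -count), the Top-100 report is a single query.
            return DailySubmissionCount.all().filter('day =', day).order('-count').fetch(limit)

    Monthly and yearly reports can then be rolled up from the daily rows by a cron job, and a metric defined later can be backfilled once from the raw activity entities rather than computed on every request.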

    Read the article

  • Is there a generic open-source reporting system out there?

    - by syntactico
    I recently started a position at a new company, and one of the first projects they want is an internal reporting system that points at databases A, B, and C and reports various metrics/statistics/predictions. Basically, the same thing I've done or worked on at every company I've ever been hired by. Since this gets a bit boring after a while, I was wondering if there already exists some sort of open-source package that accomplishes this goal. Ideally, it would: work with multiple databases out of the box (PostgreSQL, MySQL, MSSQL, and Oracle minimally); determine relationships between tables (either automatically from their schemas, or by letting you manually set them up after pooling all the tables); and allow you to create reports based on a subset of tables, customizing what data you want displayed/calculated (I suppose this would be challenging, since you've no idea what every audience wants, and it would need to be flexible). I'm debating making something like this in my spare time if one does not already exist. Just curious.
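    A minimal sketch of the schema-discovery step such a tool would need, assuming SQLAlchemy and a hypothetical connection string; reflection pulls tables and foreign-key relationships from the live schema, which covers the "determine relationships automatically" requirement for databases that declare their constraints:

        from sqlalchemy import create_engine, MetaData

        engine = create_engine('postgresql://user:secret@host/dbname')  # hypothetical DSN
        meta = MetaData()
        meta.reflect(bind=engine)  # read table and column definitions from the database

        for table in meta.sorted_tables:
            for fk in table.foreign_keys:
                print('%s.%s -> %s' % (table.name, fk.parent.name, fk.target_fullname))

    Relationships that exist only by convention (no declared foreign keys) would still need the manual setup described above.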

    Read the article

  • Why would SQL be very slow when doing updates?

    - by ooo
    Suddenly, updates to a few tables have become 10 times slower than they used to be. What are some good recommendations for determining the root cause and optimizing? Could it be that indexing certain columns is causing updates to be slow? Any other recommendations? More important than guesses would be help on the process of identifying the root cause, or metrics around performance. Is there anything in Fluent NHibernate that you can use to help identify the root cause of performance issues?
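    As a first measurement step, a minimal sketch that turns the indexing guess into a number by timing the same batch of updates before and after dropping a suspect index (pyodbc against a test copy of the database; the DSN, table, and index names are all hypothetical):

        import time
        import pyodbc

        conn = pyodbc.connect('DSN=TestDb')  # hypothetical test database, never production
        cur = conn.cursor()

        def timed_updates(n=1000):
            start = time.time()
            for i in range(n):
                cur.execute("UPDATE Orders SET Status = ? WHERE OrderId = ?", 'shipped', i)
            conn.commit()
            return time.time() - start

        with_index = timed_updates()
        cur.execute("DROP INDEX IX_Orders_Status ON Orders")  # hypothetical suspect index
        conn.commit()
        without_index = timed_updates()
        print('with index: %.2fs  without index: %.2fs' % (with_index, without_index))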

    Read the article

  • How to place a symbol (path) relative to the far end of svg text?

    - by dugeen
    I'm working on a program which generates SVG maps. Some of the map items have captions which need a symbol after them (like a plane symbol for an airport caption). If I have a text element such as <text x="30" y="30">Pericles</text>, I can place another bit of text at the next character position by writing <text x="30" y="30">Pericles <tspan>!</tspan></text>, but I'd like to draw my own symbol at that position with a <path> element. What I'm doing at the moment is having the generating program guess the extent of the text from tables of font metrics etc., but this isn't accurate enough to place the symbol consistently. Is there any way around this, like specifying a <marker> to be used when drawing the text, and using a tspan with an invisible dash in it or something to get the marker placed?
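    A minimal sketch of the font-metrics estimate described above; the advance-width table is hypothetical, which is exactly why the approach drifts in practice (kerning, hinting, and font substitution all change the real rendered width):

        # Glyph advance widths in font units (hypothetical values for one font)
        ADVANCE = {'P': 600, 'e': 444, 'r': 333, 'i': 278, 'c': 444, 'l': 278, 's': 389}
        UNITS_PER_EM = 1000

        def text_advance(text, font_size):
            """Estimated x-offset (user units) of the position just after the text."""
            units = sum(ADVANCE.get(ch, 500) for ch in text)  # 500 = fallback guess
            return units * font_size / float(UNITS_PER_EM)

        # Place the symbol path at roughly x = 30 + text_advance('Pericles', 12)
        print(text_advance('Pericles', 12))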

    Read the article

  • When to give in and start The Big Rewrite?

    - by John Cromartie
    I've had my share of projects where the first thing I think is "let's just rewrite it in ." Everybody feels the urge at some point. In fact, I think I've had the urge to rewrite pretty much every project I've ever been on. However, it is accepted wisdom that a total rewrite is generally a bad idea. The question is: when do you look at a project and say, "OK, it's time to start over"? What sort of metrics or examples can you cite of where a rewrite was truly necessary? How bad does the code have to be? How old can a project get before there's too much invested?

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings more complexity. Why? By its very nature, there are more "moving parts" to monitor and manage, from physical, virtual, and logical hosts to fault detection and failover software to redundant networking components; the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great...

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly.

    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:

    - NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of proactively monitoring and optimizing database performance and availability (see the polling sketch below).
    - MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
    - MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best-practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
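    As a taste of the NDBINFO material mentioned above, here is a minimal polling sketch, assuming the MySQLdb driver, a hypothetical monitoring account, and the ndbinfo.memoryusage table layout from Cluster 7.1 (treat the column list as an assumption to verify against your version):

        import MySQLdb

        conn = MySQLdb.connect(host='mgm-host', user='monitor', passwd='secret')  # hypothetical
        cur = conn.cursor()
        cur.execute("SELECT node_id, memory_type, used, total FROM ndbinfo.memoryusage")
        for node_id, memory_type, used, total in cur.fetchall():
            print("node %s %-14s %5.1f%% used" % (node_id, memory_type, 100.0 * used / total))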

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded by modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: when the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode".

    Potential advantages include:
    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)

    Notes:
    - The documentation window can be collapsed
    - The workflow for viewing or comparing source files would not be affected

    How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc; a sketch of this convention follows below), or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation), and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?
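    A minimal sketch of the split-file convention mentioned above (filename.c plus filename.c.doc); the sidecar format of "line-number: note" is invented purely for illustration:

        def load_docs(path):
            """Parse a 'filename.c.doc' sidecar whose lines look like '42: why this line exists'."""
            docs = {}
            with open(path) as f:
                for line in f:
                    anchor, _, note = line.partition(':')
                    docs[int(anchor)] = note.strip()
            return docs

        def note_for_cursor(docs, cursor_line):
            # Fall back to the nearest preceding anchor, mimicking the "lock-step" highlight.
            anchors = [n for n in sorted(docs) if n <= cursor_line]
            return docs[anchors[-1]] if anchors else None

    The hard part the question hints at is keeping such anchors valid as the code is edited, which is the usual argument for leaving comments inline.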

    Read the article

  • Measuring Social Media Efforts

    - by David Dorf
    So you're on the bandwagon: you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. So that translates to:

    - Increase Facebook fans and Twitter followers
    - Increase comments/postings and retweets
    - Increase redemption of offers via Facebook and Twitter

    Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM On Demand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • On configuring GC 10.2.0.5 to monitor LISTENER SCAN using UDMs ...

    - by [email protected]
    Hi, it looks like Grid Control 10.2.0.5 is not fully prepared for monitoring the Grid Infrastructure (11gR2). Even though I'm pretty sure the upcoming version of GC (11g) will of course support all the new features of 11gR2, some customers are asking for "hand-made" procedures for monitoring all the new stuff. I think one of the most critical components that can't be monitored is the SCAN listener, so I have developed a little script for doing so using the GC User Defined Metrics (at host level). I am more than happy to share it with you:

        #!/bin/ksh
        ###  NAME
        ###    monitor_scan.sh
        ###
        ###  DESCRIPTION
        ###    SCAN Listener monitoring
        ###
        ###  RETURNS
        ###
        ###  NOTES
        ###
        ###  MODIFIED        (DD/MM/YY)
        ###    Oracle        25/03/10 - Creation
        ###
        export ORACLE_HOME=/opt/oracle/soft/11.2/grid
        RSC_KEY=$1
        AWK=/sbin/awk

        LISTENER_DOWN_COUNT=$(${ORACLE_HOME}/bin/crsctl status resource -w 'TYPE = ora.scan_listener.type' | grep OFFLINE | wc -l)
        if [ ${LISTENER_DOWN_COUNT} != 0 ]; then
          SCAN_DOWN_LIST=$(${ORACLE_HOME}/bin/crsctl status resource -w 'TYPE = ora.scan_listener.type' | $AWK \
            'BEGIN { FS="="; state = 0; }
             $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
             state == 0 {next;}
             $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
             $1~/STATE/ && state == 2 {appstate = $2; state=3;}
             state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}' | grep OFFLINE | awk '{ print $1 }')
          echo em_result=ALERT
          echo em_message=There are SCAN listeners with down status: [${SCAN_DOWN_LIST}]
        else
          echo em_result=NORMAL
          echo em_message=All SCAN listeners are UP
        fi

    Hope it helps.
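    A minimal usage sketch, assuming the script is saved on each cluster node and registered as a host-level User Defined Metric in Grid Control; the resource key argument is hypothetical:

        ./monitor_scan.sh LISTENER_SCAN1

    Grid Control then parses the em_result and em_message lines the script prints to raise or clear the alert.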

    Read the article

  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that the Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, a part of Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g, is better integrated to fit the way you work, with extreme performance and extreme scalability. Universal Content Management One click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs - all without having to rewrite the application or port it over to a new technology stack or framework. Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos & podcasts created today as well as contribute content within Flash files directly from the Web. Advanced manageability with extreme performance & scalability - centralized system monitoring, installation, logging, performance metrics & diagnostics, with new built in "fast check-in" features, redesigned component management interface - all running on Fusion Middleware infrastructure. Universal Records Management Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used part of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance. DoD 5015.02 v3: Oracle URM is fully certified against all part of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, & local governments & public agencies, as well as private companies, to maintain regulated compliance. Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out of the box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out of the box Oracle Secure Enterprise Search integration enabling real time full text discovery across disparate systems in an organization. Read the Press Release Watch the 3 Minute ECM 11g Video Get Up to Speed with the What's New in ECM Suite Datasheet Learn More on OTN with new tutorials, downloads and whitepapers

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time, so I decided to post it. Visual Studio 2010 editions will be available on April 12, 2010.

    Editions compared, in this order throughout: Professional with MSDN Essentials, Professional with MSDN, Premium with MSDN, Ultimate with MSDN, Test Professional with MSDN.

    Product features in the comparison matrix:

    - Debugging and Diagnostics: IntelliTrace (historical debugger); Static Code Analysis; Code Metrics; Profiling; Debugger
    - Testing Tools: Unit Testing; Code Coverage; Test Impact Analysis; Coded UI Test; Web Performance Testing; Load Testing (1); Microsoft Test Manager 2010; Test Case Management (2); Manual Test Execution; Fast-Forward for Manual Testing; Lab Management Configuration (3)
    - Integrated Development Environment: Multiple Monitor Support; Multi-Targeting; One Click Web Deployment; JavaScript and jQuery Support; Extensible WPF-Based Environment
    - Database Development: Database Deployment; Database Change Management (2); Database Unit Testing; Database Test Data Generation; Data Access
    - Development Platform Support: Windows Development; Web Development; Office and SharePoint Development; Cloud Development; Customizable Development Experience
    - Architecture and Modeling: Architecture Explorer; UML 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component); Layer Diagram and Dependency Validation; Read-only Diagrams (UML, Layer, DGML Graphs)
    - Lab Management: Virtual environment setup & tear down (3); Provision environment from template (3); Checkpoint environment (3)
    - Team Foundation Server: Version Control (2); Work Item Tracking (2); Build Automation (2); Team Portal (2); Reporting & Business Intelligence (2); Agile Planning Workbook (2); Microsoft Visual Studio Team Explorer 2010; Test Case Management (2)
    - MSDN Subscription (software and services for production use): Windows Azure Platform; Microsoft Visual Studio Team Foundation Server 2010; Microsoft Visual Studio Team Foundation Server 2010 CAL; Microsoft Expression Studio 3; Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)
    - MSDN Subscription (software for development and testing) (4): Windows 7, Windows Server 2008 R2 and SQL Server 2008; toolkits, software development kits, driver development kits; previous versions of Windows (client and server operating systems); previous versions of Microsoft SQL Server; Microsoft Office; Microsoft Dynamics; all other servers; Windows Embedded operating systems; Teamprise
    - MSDN Subscription (other benefits): technical support incidents; priority support in MSDN Forums; Microsoft e-learning collections (typically 10 courses or 20 hours); MSDN Flash newsletter; MSDN Online Concierge; MSDN Magazine

    Per-edition values (Essentials / Professional / Premium / Ultimate / Test Professional):

    - Windows Azure Platform: 20 / 50 / 100 / 250 / n/a hrs/mo (†)
    - TFS 2010 CALs included: 1 / 1 / 1 / 1 / 1
    - Technical support incidents: 0 / 2 / 4 / 4 / 2
    - Microsoft e-learning collections: 0 / 1 / 2 / 2 / 1
    - Buy from (MSRP): $799 / $1,199 / $5,469 / $11,899 / $2,169
    - Renew from (MSRP): $549 (upgrade) / $799 / $2,299 / $3,799 / $899

    † Availability varies by country and subscription level. Details available on the MSDN site.
    1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010.
    2. Requires Team Foundation Server and a Team Foundation Server CAL.
    3. Requires Microsoft Visual Studio Lab Management 2010.
    4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace (aka the historical debugger)
    - NEW: Architecture Tools
      - New project type: Modeling Project
      - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity, Component
      - Layer Diagram (with Team Build integration for layer validation)
      - Architecture Explorer
      - Dependency visualization
      - DGML
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools: read-only model viewer
    - Development Tools
      - Code Analysis: new rules like SQL injection detection; rule sets
      - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
      - Code Metrics
    - Test Tools
      - Code Coverage
      - NEW: Test Impact Analysis
      - NEW: Coded UI Test
    - Database Tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger
      - Mixed-mode debugging for 64-bit applications
      - Export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast-forward testing

    Visual Studio Team Foundation Server 2010
    - Work item tracking and project management
      - New MSF templates for Agile and CMMI (v5.0)
      - Hierarchical work items
      - Custom work item link types
      - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
      - Convert a work item query to an Excel report
      - MS Excel integration: support for work item hierarchies; formatting is preserved after a 'Refresh'
      - MS Project integration: hierarchy and successor/predecessor info is now synchronized
      - NEW: Test case management
    - Version control
      - Public workspaces
      - Branch & merge visualization
      - Tracking of changesets & work items
      - Gated check-in
    - Team Build
      - Build controllers and agents
      - Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project portal & reporting
      - Dashboards (on SharePoint Portal)
      - Burndown chart
      - TFS Web Parts (to show data from TFS)
    - Administration & operations
      - Topology enhancements: application-tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
      - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
      - Administration: consolidation of command-line tools, user rename support
      - Project collections: archive/restore individual project collections, move team project collections, server consolidation, team project collection split, team project collection isolation
      - Server request cancellation
    - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010):
    - Debug » Start With Application Verifier
    - Object Test Bench
    - IntelliSense for C++/CLI
    - Debugging support for SQL 2000

    Read the article

  • Architecture strategies for a complex competition scoring system

    - by mikewassmer
    Competition description: there are about 10 teams competing against each other over a 6-week period. Each team's total score (out of 1,000 total available points) is based on the total of its scores in about 25,000 different scoring elements. Most scoring elements are worth a small fraction of a point, and there will be about 10 x 25,000 = 250,000 total raw input data points. The points for some scoring elements are awarded at frequent, regular time intervals during the competition. The points for other scoring elements are awarded at either irregular time intervals or at just one moment in time. There are about 20 different types of scoring elements. Each of the 20 types has a different set of inputs, a different algorithm for calculating the earned score from the raw inputs, and a different number of total available points. The simplest algorithms require one input and one simple calculation. The most complex algorithms consist of hundreds or thousands of raw inputs and a more complicated calculation. Some types of raw inputs are automatically generated; others are manually entered. All raw inputs are subject to possible manual retroactive adjustments by competition officials.

    Primary requirements: the scoring system UI for competitors and other competition followers will show current and historical total team scores, team standings, team scores by scoring element, raw input data (at several levels of aggregation, e.g. daily, weekly, etc.), and other metrics. There will be charts, tables, and other widgets for displaying historical raw data inputs and scores. There will be a quasi-real-time dashboard that shows current scores and raw data inputs. Aggregate scores should be updated/refreshed whenever new raw data inputs arrive or existing raw data inputs are adjusted. There will be a "scorekeeper UI" for manually entering new inputs, manually adjusting existing inputs, and manually adjusting calculated scores.

    Decisions: should the scoring calculations be performed on the database layer (T-SQL/SQL Server, in my case) or on the application layer (C#/ASP.NET MVC, in my case)? What are some recommended approaches for calculating updated total team scores whenever new raw input arrives? Calculating each team's total score from scratch every time a new input arrives will probably slow the system to a crawl. I've considered some kind of "diff" approach (sketched below), but that approach may pose problems for ad-hoc queries and some aggregates. I'm trying to draw some sports analogies, but it's tough because most games consist of no more than 20 or 30 scoring elements per game (I'm thinking of a high-scoring baseball game; football and soccer have fewer scoring events per game). Perhaps a financial balance sheet analogy makes more sense, because a financial "bottom line" may be calculated from 250,000 or more transactions. Should I be making heavy use of caching for this application? Are there any obvious approaches or similar case studies that I may be overlooking?
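    A minimal sketch of the incremental ("diff") approach, in which only the touched scoring element is recomputed and its delta applied to the running team total; all names are hypothetical, and retroactive adjustments follow the same path as new inputs:

        from collections import defaultdict

        class ScoreKeeper(object):
            def __init__(self, scoring_fns):
                self.scoring_fns = scoring_fns      # element_type -> fn(raw_inputs) -> points
                self.raw = defaultdict(list)        # (team, element_id) -> raw inputs
                self.element_scores = {}            # (team, element_id) -> last computed score
                self.totals = defaultdict(float)    # team -> running total

            def apply_input(self, team, element_id, element_type, value):
                # Recompute only the touched element, then apply the delta to the total.
                key = (team, element_id)
                self.raw[key].append(value)
                new_score = self.scoring_fns[element_type](self.raw[key])
                delta = new_score - self.element_scores.get(key, 0.0)
                self.element_scores[key] = new_score
                self.totals[team] += delta
                return delta

    Because per-element scores are persisted, ad-hoc queries and aggregates can be served from the element-score table rather than from raw inputs, and a periodic full recomputation can verify that the running totals have not drifted.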

    Read the article

  • Oracle WebCenter: Social Networking & Collaboration

    - by kellsey.ruppel(at)oracle.com
    We've talked in previous weeks about the key goals of the new release of WebCenter: providing a modern user experience, unparalleled application integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We've provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks; this week, we'll focus on how the new release of Oracle WebCenter provides unprecedented social networking and collaboration.

    We recently talked with Carin Chan, Principal Product Manager at Oracle, around the topic of social networking and collaboration. In today's work environment, employees have come to expect social and collaborative services to augment their work environment. Whether it is to post a blog or to poll fellow coworkers, employees expect and demand access to highly integrated, collaborative work environments that allow them to quickly contribute at work -- whether it is to make informed decisions, contribute on projects, or share knowledge.

    Social and collaborative services from Oracle WebCenter add an immeasurable amount of value to achieving a modern user experience. Oracle WebCenter Services provides rich and comprehensive social computing services, including wikis, blogs, instant messaging, presence, activity streams and graphs, and polls/surveys, that offer employees access to rich collaborative services to work efficiently. Employees can create pages or spaces that mix and match collaborative services while bringing in data from other applications to share with groups, teams, or organizations. These out-of-the-box social and collaborative services include:

    - People Connections and Activity Streams enable users to quickly assemble and visualize their social business networks and track user activities.
    - Activity Graphs track all user activities in real time and gather intelligence about these users, their connections, and the way they use information, to make educated recommendations and provide on-the-spot information discovery.
    - Wikis and blogs enable the community authoring of documents and sharing of ideas, and also allow for the gathering of feedback and comments on those ideas.
    - Tags and links allow users to easily mark, connect, and share information with others.
    - RSS feeds are available to track new or changed information related to discussion forums, processes, or activities in an Oracle WebCenter environment.
    - Discussion forums enable sharing of group knowledge and easy creation of communities around specific topics.
    - Announcements allow you to manage and publish important news to your user base.
    - Instant Messaging and Presence enable real-time awareness and communication with available users in the context of a business task.
    - Web and Voice Conferencing enables real-time communication with internal and external business users.
    - Lists provide a way to manage list data directly on the web, as well as export and import it from and to Microsoft Excel.
    - Oracle WebCenter Analytics provides comprehensive reporting metrics on activity and content usage within portals or composite applications.
    - Activity Streams allow you to track activities and visualize your business networks.

    While being able to integrate into your portal deployment, these services are also integrated into how users are already working. This includes integration with software such as Microsoft Outlook and Microsoft Office, and mobile devices such as the Apple iPhone.
    These services are just the tip of the iceberg regarding the social and collaborative services that Oracle WebCenter has to offer your employees. Be sure to keep checking back this week: in future posts, we'll delve deeper into a few of these collaborative services and discuss how a combination of collaborative services offers a better portal deployment to empower business users. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, social, activity streams, blogs, wikis

    Read the article

  • Network is not working anymore - Ubuntu 12.04

    - by Jonathan
    Hello, I have a problem with my network connection. I have been using the same laptop with Ubuntu and the same connection for more than a year, and suddenly yesterday the connection stopped working (both wireless and wired). I've tested with another computer and the connection is fine (both wireless and wired). I've been reading similar posts but I haven't found a solution yet. I tried a few commands that I'm posting here (my system is in Spanish, so I have translated the output to English; some terms may not be exact):

    grep -i eth /var/log/syslog | tail
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): now managed
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): device state change: unmanaged - unavailable (reason 'managed') [10 20 2]
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): bringing up device.
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): preparing device.
        Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845743] forcedeth 0000:00:0a.0: irq 41 for MSI/MSI-X
        Jun 3 18:45:40 vanesa-pc kernel: [ 7351.845984] forcedeth 0000:00:0a.0: eth0: no link during initialization
        Jun 3 18:45:40 vanesa-pc kernel: [ 7351.847103] ADDRCONF(NETDEV_UP): eth0: link is not ready
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: (eth0): deactivating device (reason 'managed') [2]
        Jun 3 18:45:40 vanesa-pc NetworkManager[3584]: Added default wired connection 'Wired connection 1' for /sys/devices/pci0000:00/0000:00:0a.0/net/eth0
        Jun 3 18:45:40 vanesa-pc kernel: [ 7351.848817] ADDRCONF(NETDEV_UP): eth0: link is not ready

    ifconfig -a
        eth0  Link encap:Ethernet  HWaddr 00:1b:24:fc:a8:d1
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:16 dropped:0 overruns:0 frame:16
              TX packets:123 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:26335 (26.3 KB)
              Interrupt:41 Base address:0x2000

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1550 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1550 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:125312 (125.3 KB)  TX bytes:125312 (125.3 KB)

    iwconfig
        lo    no wireless extensions.
        eth0  no wireless extensions.

    sudo lshw -C network
        *-network
           description: Ethernet interface
           product: MCP67 Ethernet
           vendor: NVIDIA Corporation
           physical id: a
           bus info: pci@0000:00:0a.0
           logical name: eth0
           version: a2
           serial: 00:1b:24:fc:a8:d1
           capacity: 100Mbit/s
           width: 32 bits
           clock: 66MHz
           capabilities: pm msi ht bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=forcedeth driverversion=0.64 latency=0 link=no maxlatency=20 mingnt=1 multicast=yes port=MII
           resources: irq:41 memory:f6288000-f6288fff ioport:30f8(size=8) memory:f6289c00-f6289cff memory:f6289800-f628980f

    lsmod
        Module                  Size  Used by
        usbhid                 41906  0
        hid                    77367  1 usbhid
        rfcomm                 38139  0
        parport_pc             32114  0
        ppdev                  12849  0
        bnep                   17830  2
        bluetooth             158438  10 rfcomm,bnep
        binfmt_misc            17292  1
        joydev                 17393  0
        hp_wmi                 13652  0
        sparse_keymap          13658  1 hp_wmi
        nouveau               708198  3
        ttm                    65344  1 nouveau
        drm_kms_helper         45466  1 nouveau
        drm                   197692  5 nouveau,ttm,drm_kms_helper
        i2c_algo_bit           13199  1 nouveau
        psmouse                87213  0
        mxm_wmi                12859  1 nouveau
        serio_raw              13027  0
        k8temp                 12905  0
        i2c_nforce2            12906  0
        wmi                    18744  2 hp_wmi,mxm_wmi
        video                  19068  1 nouveau
        mac_hid                13077  0
        lp                     17455  0
        parport                40930  3 parport_pc,ppdev,lp
        forcedeth              58096  0

    Let me know if I can give you more information. Thank you very much in advance, Jonathan

    Read the article

  • Izenda Reports 6.3 Top 10 Features

    - by gt0084e1
    Izenda 6.3 Top 10 New Features and Capabilities

    1. Izenda Maps Add-On: allows rapid visualization of geographic or geo-spatial data. It is fully integrated with the rest of the Izenda report package and adds a Maps tab which allows users to add interactive maps to their reports. Contact your representative or [email protected] for limited-time discounts. Izenda Maps even has rich drill-down capabilities that allow you to dive deeper with a simple hover (also requires dashboards).
    2. Streamlined Pie Charts with "Other" Slices: the advanced properties of the pie chart now allow you to combine the smaller slices into a single "Other" slice. This reduces the visual complexity without throwing off the scale of the chart.
    3. Combined Bar + Line Charts: the bar chart now allows dual visualization of multiple metrics simultaneously by adding a line for secondary data. Enabled via AdHocSettings.AllowLineOnBar = true;
    4. Stacked Bar Charts: the stacked bar chart lets you see a breakdown of a measure based on categorical data. It is enabled with AdHocSettings.AllowStackedBarChart = true;
    5. Self-Joining Data Sources: the self-join feature allows parent-child relationships to be accessed from the Data Sources tab. The same table can be used as a secondary child table within the Report Designer.
    6. Report Design From Dashboard View: dashboards now sport both view and design icons to allow quick access to both.
    7. Field Arithmetic on Dates: differences between dates can now be used as measures with the arithmetic feature.
    8. Simplified Multi-Tenancy: integrating with multi-tenant systems is now easier than ever. The following APIs have been added to facilitate common scenarios: AdHocSettings.CurrentUserTenantId = value; AdHocSettings.SchedulerTenantID = value; AdHocSettings.SchedulerTenantField = "AccountID";
    9. Support for SQL 2008 R2 and SQL Azure: Izenda now supports the latest version of Microsoft's database as well as the SQL Azure service.
    10. Enhanced Performance and Compatibility for Stored Procedures: Izenda now supports more stored procedures than ever and runs them faster too.

    Read the article

  • Projected Results: Sound project management practices, combined with a complete technology platform, have an immediate and lasting impact on an organization’s bottom line.

    - by Melissa Centurio Lopes
    Article by Alan Joch, a business and technology writer who specializes in enterprise applications, cloud computing, mobile computing, and the Web.

    It's no secret that complex, large-scale projects need close management controls to ensure that they're delivered on time and on budget. But now there's growing evidence that failing to meet these goals can have far-reaching consequences, not only for the reputations and value of individual organizations but also for the tenure of their top executives. Government watchdogs forced one large contractor to suspend a multibillion-dollar defense program, and delay payment receipts, until a better management system was launched to more accurately track spending, project milestones, and other fundamental metrics. Significant delays in the opening of the £4.3 billion Terminal 5 at Heathrow Airport impaired an airline's operations and contributed to a drop in its share prices. These real-world examples are noteworthy because of the huge financial risks they created. They're also far from being isolated cases. Research by the Economist Intelligence Unit found that only 11 percent of companies claimed they delivered expected ROI on major capital projects 90 percent of the time or more. In addition, 12 percent of respondents said they achieved planned ROI less than half the time. According to Phil Thornton, lead consultant at the analyst firm Clarity Economics, the numbers demonstrate obvious challenges related to managing risks, accurately predicting ROI, and consistently delivering bottom-line growth for major capital investments. "Portfolio management is a path to improve your organization's competitive advantage. It helps make sure your organization is investing in the right things and not spending its time on things that are not delivering the intended results for the firm." Read the full article here

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it's no surprise that the accounting standards boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IASB and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts, on May 28, 2014 the FASB and IASB released the converged accounting standard IFRS 15 / ASU 2014-09 (Revenue from Contracts with Customers). This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations.

    Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate: after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs (a toy example of this allocation appears below). In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer's satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period.

    Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standards boards to deliver an adaptive, comprehensive, new generation revenue recognition solution," said Rondy Ng, Senior Vice President, Applications Development. "With the recently announced IFRS 15 / ASU 2014-09, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud," said Rondy. Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.
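    A minimal sketch of the relative-selling-price allocation described above, as an assumption about the arithmetic rather than a statement of how Oracle Revenue Management Cloud implements it:

        def allocate(contract_price, esps):
            """Allocate the total contract price to deliverables in proportion to their ESPs."""
            total_esp = sum(esps.values())
            return {d: round(contract_price * esp / total_esp, 2) for d, esp in esps.items()}

        # A $1,000 contract bundling a license (ESP $700) and support (ESP $500)
        # allocates $583.33 to the license and $416.67 to support; each piece is
        # then recognized when its obligation is satisfied.
        print(allocate(1000.0, {'license': 700.0, 'support': 500.0}))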

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

    Customers often modify applications to meet their specific business needs: varying regulatory requirements, unique business processes, product mix transitions, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end users to tailor their individual work environments to meet specific needs without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice.

    Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement, and easy to maintain and upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready-to-deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment, and risk evaluation to the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification, and Credit Rating. The process flow starts in Siebel when a customer applies for a loan, switches to OPA for eligibility verification and product recommendations, then hands off to BPM for approvals. The OPA Connector for Siebel simplifies integration with Siebel's web services framework by saving the results from the self-service interview directly into Siebel. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan underwriters have visibility into the product mix (loan categories), the status of loan applications (count of approved/rejected/pending), the volume and value of loans approved per processing center, processing times, requested vs. approved amounts, and other relevant business metrics.

    Summary: Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement, and easy to maintain and upgrade (by moving customizations away from applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM.

    Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • VirtualBox 4.2.14 is now available

    - by user12611829
    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog.

    - VMM: another TLB invalidation fix for non-present pages
    - VMM: fixed a performance regression (4.2.8 regression; bug #11674)
    - GUI: fixed a crash on shutdown
    - GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
    - VRDP: fixed a rare crash on guest screen resize
    - VRDP: allow changing VRDP parameters (including enabling/disabling the server) if the VM is paused
    - USB: fixed passing through devices on a Mac OS X host to a VM with 2 or more virtual CPUs (bug #7462)
    - USB: fixed a hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
    - USB: properly handle orphaned URBs (bug #11207)
    - BIOS: fixed the function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
    - BIOS: don't use the ENTER / LEAVE instructions in the BIOS, as these don't work in the real mode set up by certain guests (e.g. Plan 9 and QNX 4)
    - DMI: allow configuring DmiChassisType (bug #11832)
    - Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
    - Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
    - Storage: fixed a hang when creating a snapshot using Parallels disk images (bug #9617)
    - 3D: seamless + 3D fixes (bug #11723)
    - 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
    - Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
    - Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
    - Main/Display: don't lose seamless regions during screen resize
    - Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
    - Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
    - Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
    - Main/Metrics: fixed a Solaris file descriptor leak
    - Settings: limit the depth of the snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
    - VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
    - Linux hosts: work around for not crashing as a result of automatic NUMA balancing, which was introduced in Linux 3.8 (bug #11610)
    - Windows installer: force the installation of the public certificate in the background (i.e. completely prevent user interaction) if the --silent command line option is specified
    - Windows Additions: fixed problems with partial install in the unattended case
    - Windows Additions: fixed a display glitch with the Start button in seamless mode for some themes
    - Windows Additions: seamless mode and auto-resize fixes
    - Windows Additions: fixed trying to retrieve new auto-logon credentials if the current ones were not processed yet
    - Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
    - Linux Additions: fixed setting own timed-out and aborted texts in the information label of the lightdm greeter
    - Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
    - X11 Additions: reduced the CPU load of VBoxClient in drag and drop mode
    - OS/2 Additions: made the mouse wheel work (bug #6793)
    - Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)

    The full changelog can be found here. You can download binaries for Solaris, Linux, Windows and MacOS hosts at http://www.virtualbox.org/wiki/Downloads

    Technorati Tags: Oracle Virtualization VirtualBox

    Read the article

  • New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces

    - by John Abraham
    The Transportable Tablespaces (TTS) process was originally certified for the migration of E-Business Suite R12 databases going from a source database of 11gR1 or 11gR2 to a target of 11gR2. This certification has now been expanded to include a source database of 10gR2 (10.2.0.5). This will potentially save time for existing 10gR2 customers, as they can skip a crucial upgrade step prior to performing the platform migration. The migration process requires an updated Controlled patch delivered by the Oracle E-Business Suite Platform Engineering team, i.e., it requires a password obtainable from Oracle Support. We released the patch in this manner to gauge uptake, and to help identify and monitor any customer issues due to the nature of this technology. This patch has been updated to support 10gR2 as a source database.

    Does it meet your requirements? Note that for migration across platforms of the same endian format, users are advised to use the Transportable Database (TDB) migration process instead for large databases. The endian format of target platforms can be verified by querying the view V$DB_TRANSPORTABLE_PLATFORM using SQL*Plus (connected as sysdba) on the source platform:

        SQL> select platform_name from v$db_transportable_platform;

    If the intended target platform does not appear in the output, it is of a different endian format from the source. Consequently, database migration will need to be performed via Transportable Tablespaces (for large databases) or export/import.

    The use of Transportable Tablespaces can greatly speed up the migration of the data portion of the database. However, it does not affect metadata, which must still be migrated using export/import. We recommend that users initially perform a test migration on their database, using export/import with the 'metrics=y' parameter. This will help identify the relative amounts of data and metadata, and provide a basis for assessing likely gains in timing. In general, the larger the amount of data (compared to metadata), the greater the reduction in downtime that can be expected from using TTS as a migration process. For smaller databases, or for those that have relatively little data compared to metadata, TTS will not be as beneficial for cross-endian migration, and the use of export/import (datapump) for the whole database is recommended.

    Where can I find more information?
    - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 Enterprise Edition (My Oracle Support Document 1311487.1)
    - Oracle Database Administrator's Guide 11g Release 2 (11.2)

    Related Articles
    - Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12
    - New Source Databases Added for Transportable Tablespaces + EBS 11i
    - 10gR2 Transportable Tablespaces Certified for EBS 11i
    - Migrating E-Business Suite Release 11i Databases Between Platforms
    - Migrating E-Business Suite Release 12 Databases Between Platforms

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs due to wrong usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task to find the most frequent messages and eliminate those Log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps. The public version currently "only" parses the binaries and pdbs to give you, for a -whousesmethod query, the following columns. If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file (which was determined from the pdb), the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query against your AD domain or the GC (Global Catalog) to get from the user name his:

    - Full name
    - Email
    - Phone number
    - Department
    - ...

    ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default, since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll. Once you have this data you can generate metrics based on source file, developer, assembly, ... and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g., asking the developer whether the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way:

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to email to ask whether his code needs a closer inspection.

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.

    Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring these shared cloud resources and charging them back to the departments that use them. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user consuming those resources prevents a clear measure of business benefit and impacts the company's ability to measure Return on Investment (ROI).

    An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This contrasts with traditional IT accounting models, in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place.

    Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback comes with rich out-of-the-box reports. Trending reports show how charges and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers understand how their charges relate to their consumption, and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.
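    To make the plan types concrete, here is a small SQL sketch of how a usage-based charge plus a fixed management fee could be computed per cost center; the table and column names are invented for the example and do not reflect Enterprise Manager's actual repository schema:

        -- Hypothetical schema (assumptions): resource_usage(cost_center, metric, amount),
        -- rate_plan(metric, rate), cost_center_plan(cost_center, fixed_fee).
        SELECT u.cost_center,
               SUM(u.amount * r.rate) + MAX(p.fixed_fee) AS total_charge
        FROM   resource_usage u
               JOIN rate_plan r        ON r.metric = u.metric
               JOIN cost_center_plan p ON p.cost_center = u.cost_center
        GROUP  BY u.cost_center;

    In a showback model the same figures would simply be reported to each cost center rather than billed.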

    Read the article

  • What tools exist for assessing an organisation's development capability?

    - by Eric Smith
    I have a bit of a challenge at work at the moment. Presently (and in fact, for some time now), we have been experiencing the following problems with some in-house maintained applications:

    - Defects (sometimes quite serious) being released into production;
    - The Customer (that is, the relevant business unit) perpetually changing their minds (or appearing to do so) about what issue to work on next;
    - A situation where everyone seems to be in a "fire-fighting" mode a lot of the time;
    - Development staff responding to operational requests from business users ("operational" here means something that needs to be done in order to continue with business, or perhaps just to make a business user's life a little less painful, as opposed to fixing a bug in the application or enhancing the application).

    Now I'm sure this doesn't sound particularly new or surprising to most of the participants on this Q&A site, and no prizes for identifying the "usual suspects" when it comes to root causes. My challenge is that I have to persuade the higher-ups to do uncomfortable things in order to address all of this. The folk I need to persuade come from a mixture of the following two cultures: Accounting and IT Infrastructure.

    I have therefore opted for a strategy that draws from things with which folk from such cultures would be most comfortable (at least, in my estimation), namely numbers and tangibles. Of course, modern development practitioners know all too well that this sort of thing isn't easily solved using an analytical mindset (some would argue that that mindset is, in fact, entirely inappropriate). Nevertheless, this is the dichotomy with which I am faced, so that's the stake that I've put in the ground.

    I would like to be able to do research and use the outputs to present findings in the form of metrics and measures. I am finding it quite difficult, though, to find an agreed-upon methodology and set of templates for assessing an organisation's development capability - the only thing that seems applicable is the Software Engineering Institute's Capability Maturity Model. The latter, however, seems dated and even then rather vague. So, the question is: do any tools or methodologies (free or commercial) exist that would assist me in completing this assessment?

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for libraries is separate from that of the applications that depend on them (and which are not going to be open sourced).

    I'm setting up a workflow for multiple projects at work, and I was wondering what it would be like if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.

    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    core is standalone
    graph depends on core (sub-module at ./libs/core)
    network depends on core (sub-module at ./libs/core)
    studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch, you get the following directory structure:

    studio/
    studio/libs/                      (sub-module depth: 1)
    studio/libs/graph/
    studio/libs/graph/libs/           (sub-module depth: 2)
    studio/libs/graph/libs/core/
    studio/libs/network/
    studio/libs/network/libs/         (sub-module depth: 2)
    studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem, because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and a standalone build, without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion), and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

    studio/
    studio/libs/                      (sub-module depth: 1)
    studio/libs/core/
    studio/libs/graph/
    studio/libs/graph/libs/           (empty folder, sub-modules not fetched)
    studio/libs/network/
    studio/libs/network/libs/         (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake, as sketched below) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
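    For what it's worth, the "standalone" versus "dependency" switch could be sketched in CMake roughly as follows; the option and variable names are made up for the example:

        # Sketch of graph/CMakeLists.txt (names are assumptions).
        option(GRAPH_STANDALONE "Build graph against its own core sub-module" ON)

        if(GRAPH_STANDALONE)
            # Standalone build: use the locally fetched core sub-module.
            set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include")
        endif()

        # In "dependency" mode the parent project is expected to have set
        # CORE_INCLUDE_DIR to its own copy of core before add_subdirectory().
        include_directories("${CORE_INCLUDE_DIR}")

        # Sketch of studio/CMakeLists.txt: build core once, then build the
        # libraries in "dependency" mode against studio's copy of core.
        set(CORE_INCLUDE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/libs/core/include")
        set(GRAPH_STANDALONE   OFF CACHE BOOL "" FORCE)
        set(NETWORK_STANDALONE OFF CACHE BOOL "" FORCE)
        add_subdirectory(libs/core)
        add_subdirectory(libs/graph)
        add_subdirectory(libs/network)

    Since the cache entries are forced before add_subdirectory(), the option() calls in graph and network leave them untouched, so both libraries pick up studio's single copy of core.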

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >