Search Results

Search found 39701 results on 1589 pages for 'free society management software'.


  • Oracle Enterprise Manager Cloud Control 12c: Best Practices for Middleware Management

    - by JuergenKress
    This self-paced course teaches you best practices for using Oracle Enterprise Manager Cloud Control 12c to manage your WebLogic and SOA applications and infrastructure. It consists of interactive lectures, videos, review sessions, and optional demonstrations. The course covers Enterprise Manager Cloud Control 12c licensed with the WebLogic Server Management Pack Enterprise Edition and the SOA Management Pack Enterprise Edition. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community by visiting http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Mix | Forum | Wiki. Technorati Tags: EM12c, Enterprise Manager, EM12c training, education, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Programming language specific package management systems

    - by m0nhawk
    Some programming languages have their own package management systems: CTAN for TeX, CPAN for Perl, pip and eggs for Python, Maven for Java, Cabal for Haskell, and Gems for Ruby. Are there other languages with such systems? And what about C and C++? (That's the main question!) Why are there no such systems for them? And isn't it better to create packages for yum, apt-get or other general package management systems instead? UPD: And what about unification? Has anyone tried to unify this zoo? If so, it looks like that project didn't succeed.
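    For a concrete sense of what a language-specific package system looks like, here is a minimal sketch of a Python package declaration for pip/PyPI (the project name, version and dependency pin are made-up illustration values): the language-level tool resolves dependencies per project on any OS, whereas yum or apt-get packages target one distribution.

        # setup.py - a minimal package declaration for Python's packaging ecosystem.
        # "example-tool" and the dependency pin below are made-up illustration values.
        from setuptools import setup, find_packages

        setup(
            name="example-tool",
            version="0.1.0",
            packages=find_packages(),
            install_requires=[
                "requests>=2.0",  # resolved and installed by pip, not by the OS package manager
            ],
        )

    Publishing to an index like PyPI and installing with pip work the same way on every platform, which is the practical difference from maintaining separate yum and apt packages.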

    Read the article

  • New RUP Patch for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM)

    - by LuciaC
    Just released - the 12.1.3 Rollup (RUP) Patch 17525552:R12.PRC_PF.B for iSupplier Portal, Sourcing and Supplier Lifecycle Management (SLM). Who should apply this patch? Anyone on Release 12.1.3 who is using iSupplier Portal, Sourcing or Supplier Lifecycle Management (SLM) functionality. The following areas have had major fixes: Prospective Supplier Guided Navigation: train navigation is introduced for prospective supplier registration so that prospective suppliers can see all the steps needed to successfully register themselves. Supplier Registration Workflow Enhancement: this release provides Approval Management Engine (AME) action-required notifications for supplier approval, so that all workflow-related features can be enabled. Vacation rules can be set, approvals can be forwarded, and more information can be requested through the notification itself. Additionally, AME parallel approval support for Supplier Registration approvals has been added. Reinstate Supplier Request: allows the buyer to reopen/reinstate a rejected supplier; the supplier can access the previously rejected registration again, make changes and resubmit the request. Contact Address Association: the prospective supplier can associate addresses with contacts (including the primary contact) during the prospective supplier registration process. Primary Contact Enhancement: the prospective supplier can be registered without creating a user account for the primary contact. Mandatory Attributes: in the negotiation requirement creation page, the lookup meaning of 'Internal' has been changed to 'Internal Optional', and a new lookup value with the meaning 'Internal Required' has been added. The values available in the 'Type' dropdown are now Display Only, Internal Optional, Internal Required, Supplier Optional and Supplier Required. During supplier evaluations, an internal user response can now be made mandatory by using the Internal Required type during requirement creation. Notifications to Supplier: when the supplier saves and submits their supplier registration request, a notification with a link to the registration status page is sent for further access. When the buyer approves, rejects or returns the request, the supplier is notified by email of the current status. There are also 10 major enhancements included in this RUP. For information about this RUP, including the fixes and enhancements included, how to access and apply the patch, performing an impact analysis on your system, and testing recommendations, see Doc ID 1591198.1. Don't delay; apply the patch today!

    Read the article

  • April 10 EBS WEBCAST: Cost Management Intercompany Accounting for Internal Order and Drop Shipment

    - by Oracle_EBS
    ADVISOR WEBCAST: Cost Management Intercompany Accounting for Internal Order and Drop Shipment. PRODUCT FAMILY: Cost Management. April 10, 2012 at 11 am ET, 9 am MT, 8 am PT. This one-hour advisor webcast discusses Intercompany Accounting for Internal Order and Drop Shipments. This session is recommended for technical and functional users who work on the costing part of the Internal Order and Drop Shipment cycles. TOPICS WILL INCLUDE: Understand the various setups involved in Intercompany Accounting. Understand the accounting entries generated for different setups in Intercompany Accounting. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Current schedule can be found on Note 740966.1. Post-presentation recordings can be found on Note 740964.1.

    Read the article

  • Why can't non-admin users install software?

    - by fiftyeight
    This is probably something I don't understand since I am used to Windows and am only starting out with Ubuntu. I know that software in Linux comes in packages; what I don't understand is why non-admin users can't install software. I mean, every application is run by a specific user, and that user will only be able to run the application with his privileges, so if he has no admin privileges, the application also won't be able to access unauthorized directories etc. Most of the time I want to work on my PC as a non-admin user, since that seems safer to me; most of the time I have no need for admin privileges. And even though I know viruses in Linux are uncommon, I still think the best practice is to work on the computer in a state where you yourself can't make any changes to important files; that way viruses also can't harm any important files. But I need to install software for programming and web design etc., and first of all I don't want to switch users all the time. But it also sounds safer to me that everything done on the PC is done through the non-admin user. I'll be glad to know what misunderstanding I have here, because something here doesn't sound right.

    Read the article

  • The Latest In Master Data Management

    Today master data continues to expand while data quality becomes more important. The challenge of clean data is not new, but the stakes and complexities are higher than ever. Fortunately, Oracle has a solution -- Oracle Master Data Management. Hear from Pascal Laik, VP of Oracle MDM Product Strategy, about the benefits of Master Data Management, the solutions that Oracle offers, why they are unique, and what benefits customers are deriving from Oracle MDM products. Learn about the latest product in the Oracle MDM family and where the Oracle MDM strategy is heading.

    Read the article

  • Upcoming Webcast: Cost Management Intercompany Accounting for Internal Order and Drop Shipment

    - by Oracle_EBS
    ADVISOR WEBCAST: Cost Management Intercompany Accounting for Internal Order and Drop Shipment. PRODUCT FAMILY: Cost Management. April 10, 2012 at 11 am ET, 9 am MT, 8 am PT. This one-hour advisor webcast discusses Intercompany Accounting for Internal Order and Drop Shipments. This session is recommended for technical and functional users who work on the costing part of the Internal Order and Drop Shipment cycles. TOPICS WILL INCLUDE: Understand the various setups involved in Intercompany Accounting. Understand the accounting entries generated for different setups in Intercompany Accounting. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Current schedule can be found on Note 740966.1. Post-presentation recordings can be found on Note 740964.1.

    Read the article

  • SQL Server 2008 - Management Studio issue

    - by Phil Streiff
    This is a known, documented issue with SQL Server 2008 Management Studio: certain DDL operations, like altering a column datatype from Management Studio, fail. For example, in Object Explorer, navigate to a table column > right-click on the column > Modify. Then change the column datatype or length and save, and Management Studio displays an error and refuses to apply the change. To work around this problem, go to the Query Editor and issue the following DDL statement instead: ALTER TABLE dbo.FTPFile ALTER COLUMN CmdLine VARCHAR(100); GO. The column change is successfully applied now.

    Read the article

  • is Java free for mobile development?

    - by exTrace101
    Q1. I would like to know if it's free for a developer (I mean, whether I have to pay royalties to Sun/Oracle) to develop (Android) mobile apps in Java. After reading this snippet about the Java field of use, I'm getting the impression that Java is not free for mobile development; is that right? ... "General Purpose Desktop Computers and Servers" means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications, for example but not limited to: Software embedded in or bundled with industrial control systems, wireless mobile telephones, wireless handheld devices, netbooks, kiosks, TV/STB, Blu-ray Disc devices, telematics and network control switching equipment, printers and storage management systems, and other related systems are excluded from this definition and not licensed under this Agreement... and from http://www.excelsiorjet.com/embedded/: Notice: The Java SE Embedded technology license currently prohibits the use of Java SE in cell phones. Q2. How come this plethora of Android Java developers isn't paying Sun/Oracle a dime?

    Read the article

  • Reminder: Free, Global, Virtual Developer Day November 5th

    - by jeckels
    Just a quick reminder about the FREE virtual developer day focused on Coherence (and WebLogic) coming on November 5th. This day, with content tailored for developers, will guide you through tooling updates and best practices around creating applications with WebLogic and Coherence as target platforms. We'll also explore advances in how you can manage your build, deploy and ongoing management processes to streamline your application's life cycle. And of course, we'll conclude with some hands-on labs that ensure this isn't all a bunch of made-up stuff - get your hands dirty in the code! November 5, 2013, 9am PT / 12pm ET. REGISTER NOW. We're offering two tracks for your attendance, though of course you're free to attend any session you wish. The first will be for pure developers, with sessions around developing for WebLogic with HTML5, processing live events with Coherence, and looking at development tooling. The second is for developers who are involved in the building and management processes as part of the application life cycle. These sessions focus on using Maven for builds, using Chef and Puppet for configuration and more. We look forward to seeing you there - don't forget to invite a friend!

    Read the article

  • Robust way to keep records of software releases?

    - by japreiss
    We release a number of small plug-ins that go along with our software. Each plug-in allows our software to talk to a single manufacturer's hardware. I would like to devise a system for keeping track of plug-in releases. Example info that should be stored: hardware manufacturer name; 32-bit, 64-bit, or both; what modes of operation the hardware supports; which versions of the manufacturer's driver have been tested with the plug-in. Desirable properties of the system: able to synchronize with version control software; stores data in a human-readable text file (also good for a diff tool); a free visual, spreadsheet-like editor is available; able to do simple analysis like "What is the oldest plug-in?". I've got to imagine that someone else has tackled this problem already. Right now my best guess is XML/JSON with a visual editor, but I have been disappointed in the editors I've tried so far. I'd like to get input from some more experienced developers. Thanks!
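    As one illustration of the XML/JSON route mentioned above, here is a minimal sketch (not from the question; the field names and file layout are assumptions) that keeps each plug-in release as a JSON record in a version-controlled text file and answers a simple query such as "What is the oldest plug-in?":

        import json
        from datetime import date

        # Hypothetical record layout: one JSON object per plug-in release.
        # The file lives next to the source so it can be versioned and diffed.
        SAMPLE = """
        [
          {"plugin": "acme-widget", "manufacturer": "Acme", "arch": "both",
           "modes": ["pulse", "continuous"], "tested_drivers": ["2.1", "2.3"],
           "released": "2011-06-01"},
          {"plugin": "globex-sensor", "manufacturer": "Globex", "arch": "32-bit",
           "modes": ["continuous"], "tested_drivers": ["1.0"],
           "released": "2010-02-15"}
        ]
        """

        def load_releases(text: str) -> list[dict]:
            """Parse the release records from their JSON representation."""
            return json.loads(text)

        def oldest_plugin(releases: list[dict]) -> dict:
            """Simple analysis example: the plug-in with the earliest release date."""
            return min(releases, key=lambda r: date.fromisoformat(r["released"]))

        if __name__ == "__main__":
            releases = load_releases(SAMPLE)
            print(oldest_plugin(releases)["plugin"])  # -> globex-sensor

    A spreadsheet-like front end could be layered on top by exporting the same records to CSV, while the JSON file stays the version-controlled, diff-friendly source of truth.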

    Read the article

  • Why doesn't openSUSE Linux upgrade itself through its software repositories?

    - by Dougal
    openSUSE - fast becoming my favourite Linux distro on the client - doesn't seem to upgrade itself through its own configured software repositories. Do we know why this is the case? Is it a money-making thing where they can then sell upgrade CDs/DVDs? I mean, pretty much every other Linux distro upgrades itself through its normal software repositories. For example, Ubuntu can upgrade itself from 10.04 to 10.10 just through the normal software package upgrade procedure. Why must it be a huge procedure to upgrade openSUSE? Any knowledge or ideas appreciated. Thank you.

    Read the article

  • Domain Specific Software Engineering (DSSE)

    Domain Specific Software Engineering (DSSE) holds that creating every application from nothing is not advantageous when existing systems can be leveraged to create the same application in less time and at less cost. This belief is founded in the idea that forcing applications to recreate existing functionality is unnecessary. Why would we build a better wheel when we already have four really good and proven wheels? DSSE suggests that we take an existing wheel and just modify it to fit an existing need of a system. This allows developers to leverage existing codebases so that more time and expense are focused on creating more usable functionality rather than just creating more functionality. As an example, how many functions do we need to create to send an email when one can be created and used by all other applications within the existing domain? (A small sketch of this idea follows below.)

    Key Factors of DSSE: Domain, Technology, Business. A Domain in DSSE is used to control the problem space for a project. This control allows applications to be developed within specific constraints that focus development in a specific direction. Technology in DSSE offers a variety of technological solutions to be applied within a domain. Technology examples: tools, patterns, architectures and styles, legacy systems. Business is the motivator for any organization to use DSSE in their software development process. Business reasons to use DSSE: minimize costs, maximize market and profits. When these factors are used in combination, additional factors and benefits can be found.

    Results of combining the key factors of DSSE:
    Domain + Business = Corporate Core Competencies: domain expertise improved by market and business expertise.
    Domain + Technology = Application Family Architectures: all possible technological solutions to problems in a domain, without any business constraints.
    Business + Technology = Domain-Independent Infrastructure: tools and techniques for building systems independent of all domains.
    Domain + Business + Technology = Domain-Specific Software Engineering: applies technology to domain-related goals in the context of business and market expertise.
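    To make the email example above concrete, here is a minimal sketch (module and function names are hypothetical, not from the article) of a single domain-level email service that several applications in the same domain reuse instead of each re-implementing it:

        import smtplib
        from email.message import EmailMessage

        # Hypothetical shared domain service: one email function, reused by every
        # application in the domain instead of being re-implemented per application.
        def send_domain_email(sender: str, recipient: str, subject: str, body: str,
                              host: str = "localhost", port: int = 25) -> None:
            """Compose and send a plain-text email via the domain's SMTP relay."""
            msg = EmailMessage()
            msg["From"] = sender
            msg["To"] = recipient
            msg["Subject"] = subject
            msg.set_content(body)
            with smtplib.SMTP(host, port) as smtp:
                smtp.send_message(msg)

        # Two different applications in the domain leverage the same function.
        def billing_app_notify(customer_email: str, invoice_id: str) -> None:
            send_domain_email("billing@example.com", customer_email,
                              f"Invoice {invoice_id}", "Your invoice is ready.")

        def shipping_app_notify(customer_email: str, order_id: str) -> None:
            send_domain_email("shipping@example.com", customer_email,
                              f"Order {order_id} shipped", "Your order is on its way.")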

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspx

    What is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it.

    Focus on functionality
    Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency. But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java. But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased. No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”).

    Build software incrementally
    From this very natural focus of customers and users on functionality and its quality it follows that we should develop software incrementally. That´s what Agility is about. Deliver small increments quickly and often to get frequent feedback. That way less waste is produced, and learning can take place much more easily (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1] So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me if this sounds a bit broad-brush. But you get my point.)

    The need for abstraction
    In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants. That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale. We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That´s nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” button on an Amazon product page, for example. There are several reasons for that, I´d say. Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all tools are computationally complete. They are about logic which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree. Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this. Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality. That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That´s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That´s great. Except that they are domain-specific languages. They are not general purpose. (And they don´t scale either, I´d say.) Bummer. So to be more precise we need a scalable general purpose tool on a higher than code level of abstraction not neglecting data. Enter: Flow Design.

    Abstracting functionality using data flows
    I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in. That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2] Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too. The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhanced, transformed. At the top level of a data flow design there might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps. That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote.

    Summary
    I´ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I´m not using Clojure or F#. And I´m not an async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design.

    [1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code-based increments in functionality.
    [2] No, I´m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That´s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.
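    As a rough illustration of the data-flow idea described above, here is a minimal sketch (the pipeline helper and the step names are assumptions for illustration, not Flow Design tooling or code from the article) in which data flows through small black-box processing steps that are composed into a larger flow:

        from functools import reduce
        from typing import Callable

        # A "flow" here is just a left-to-right composition of processing steps.
        # Each step takes data in and hands data on; control stays inside the steps.
        def flow(*steps: Callable) -> Callable:
            """Compose processing steps into one data flow."""
            return lambda data: reduce(lambda value, step: step(value), steps, data)

        # Hypothetical processing steps for an order-handling flow.
        def parse_order(raw: dict) -> dict:
            return {"items": raw.get("items", []), "customer": raw.get("customer")}

        def price_order(order: dict) -> dict:
            order["total"] = sum(item["price"] * item["qty"] for item in order["items"])
            return order

        def format_confirmation(order: dict) -> str:
            return f"Order for {order['customer']}: total {order['total']:.2f}"

        # The top-level flow reads like a production line; each step is a black box
        # during design and gets fleshed out during coding.
        handle_order = flow(parse_order, price_order, format_confirmation)

        if __name__ == "__main__":
            raw = {"customer": "Alice",
                   "items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}]}
            print(handle_order(raw))  # Order for Alice: total 24.48

    Each step can later be refined into a nested flow of smaller steps without changing the steps around it, which is the scaling property the article argues for.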

    Read the article

  • Interview question: Develop an application that can display trial period expired after 30 days without external storage

    - by Algorist
    Hi, I saw this question in a forum about how an application can be developed that keeps track of the installation date and shows that the trial period has expired after 30 days of usage. The only constraint is not to use external storage of any kind. Question: how can this be achieved? Thanks Bala --Edit I think it's easy to figure out the place to insert a question mark. Anyway, I will write the question clearly. "External storage" means don't use any kind of storage like a file, the registry, the network or anything else. You only have your program.

    Read the article

  • Should tests be self written in TDD?

    - by martin
    We are running a project which we want to approach with test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should the unit test be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure with mini-iterations, so it would be too complex to have the tests written by another programmer. What would you say: should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?
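    For context, the mini-iteration rhythm the question refers to usually looks like this minimal sketch (the function under test and its name are made up for illustration): the same programmer writes a failing test first, implements just enough code to make it pass, and then refactors.

        import unittest

        # Step 2 of the TDD cycle: the simplest implementation that makes the test pass.
        def slugify(title: str) -> str:
            """Turn a title into a lowercase, dash-separated slug."""
            return "-".join(title.lower().split())

        # Step 1 of the TDD cycle: this test is written first and fails until
        # slugify() above is implemented.
        class SlugifyTest(unittest.TestCase):
            def test_replaces_spaces_with_dashes_and_lowercases(self):
                self.assertEqual(slugify("Hello TDD World"), "hello-tdd-world")

        if __name__ == "__main__":
            unittest.main()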

    Read the article

  • Conventions for modelling c programs.

    - by Hassan Syed
    I'm working with a source base written almost entirely in straight C (nginx). It does, however, make use of rich high-level programming techniques such as compile-time metaprogramming and OOP, including run-time dispatch. I want to draw ER diagrams, UML class diagrams and UML sequence diagrams. However, to have a clean mapping between the code and the diagrams, consistent conventions must be applied. So I am hoping someone has references to material that establishes or applies such conventions to similar-style C code.

    Read the article

  • In C, as free() knows an array size, why isn't there a function that gets the array size? [closed]

    - by user354959
    Possible Duplicate: If free() knows the length of my array, why can’t I ask for it in my own code? Searching around (including here at Stack Overflow), I gathered that malloc() allocates an array and also creates a header to keep track of info about it. This header also stores the array size, and free() uses that information to know how to deallocate the array. So, if the array size info is "there" (somewhere in memory), why isn't there a function that returns the array size by looking at this header? Or am I missing something?
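    As a rough illustration of the idea in the question, here is a toy model (purely illustrative; real malloc implementations differ, and standard C exposes no such query) of an allocator that stows the size in a hidden header so that only the allocator itself can read it back:

        import struct

        HEADER = struct.Struct("<Q")  # 8-byte size field, like a toy malloc header

        _heap: dict[int, bytearray] = {}   # handle -> header + payload
        _next_handle = 1

        def toy_malloc(size: int) -> int:
            """Return a handle to a block whose hidden header records its size."""
            global _next_handle
            handle = _next_handle
            _next_handle += 1
            _heap[handle] = bytearray(HEADER.pack(size)) + bytearray(size)
            return handle

        def toy_free(handle: int) -> None:
            """The allocator reads the hidden size from the header, then releases the block."""
            size = HEADER.unpack_from(_heap[handle])[0]
            print(f"freeing {size} bytes")
            del _heap[handle]

        if __name__ == "__main__":
            h = toy_malloc(100)
            toy_free(h)   # the size is "there", but no public API hands it to the caller

    In real C libraries the header layout is implementation-defined, which is one practical reason the standard offers no portable "allocated size" query (some platforms do expose non-standard ones, such as glibc's malloc_usable_size).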

    Read the article
