Search Results

Search found 2956 results on 119 pages for 'sources'.

Page 41/119 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Every week, a bit of code to go further with Windows 7: hardware acceleration

    To close out the year, the Developpez.com community has teamed up with Microsoft France to relay a series of questions and answers on Windows 7 development. Starting today, we will post a question every Monday about a feature specific to developing Windows 7 applications. The correct answer to each week's question will then be revealed the following week, along with a practical example. To kick off the series, here is a first question on federated search, with its answer as an example: which Windows 7 feature lets you search across several data sources...

    Read the article

  • Looking for enterprise web application design inspiration

    - by Farshid
    I've checked many websites for inspiration on what the look and feel of a serious enterprise web application should be. But everything I found was designed for individual users, not serious corporate users. The designs I saw were mostly happy-colored and looked as if they had been developed by a small team of eager, passionate young developers. What I'm looking for are showcases of serious web app designs (e.g. web apps developed by large corporations) that are built to be used by a large number of corporate users. Can you suggest sources for this kind of inspiration?

    Read the article

  • Are keywords in URLs good SEO or needlessly redundant?

    - by Blazemonger
    A coworker and I are locked in a debate over the value of SEO keywords in the URL of a page. She wants to change all the filenames of the HTML pages of a fencing company so they look like residential-home-chicago.html, contact-chicago-contractor.html, and so on. She is convinced that because Google highlights keywords in URLs in search results, putting keywords there is more valuable. My position is that these do not improve SEO, since Google doesn't seem to give keywords in the URL any more weight than keywords in the body of the page, and might even give them less weight. In the meantime, they make it harder for me to find the pages I want when it's time to edit them, and the site as a whole looks cheap and spammy. Google's own SEO guide suggests to me that yes, keywords in URLs are useful, but not superior, and that they are more useful for human readability than for search engine rankings. I'm looking for authoritative sources that support either position, not blog articles from SEO companies trying to promote themselves.

    Read the article

  • Articles on TFS Build Server / MSBuild

    - by MartinWatts
    I have decided to write some articles on using a TFS Build Server. During the past few years I have had the responsibility and challenge of keeping one running, and I found out that on some subjects there is very little to be found on the internet. So hopefully my experiences can help others. That is, before the VS 2010 build server makes everything we have learnt about MSBuild so far redundant. ;) The first article is about selectively getting the sources you need to get the build done. You can find the article here.

    Read the article

  • A Method for Reducing Contention and Overhead in Worker Queues for Multithreaded Java Applications

    - by Janice J. Heiss
    A java.net article, rich in practical resources, by IBM India Labs' Sathiskumar Palaniappan, Kavitha Varadarajan, and Jayashree Viswanathan, explores the challenge of writing code in a way that effectively makes use of the resources of modern multicore processors and multiprocessor servers. As the article states: "Many server applications, such as Web servers, application servers, database servers, file servers, and mail servers, maintain worker queues and thread pools to handle large numbers of short tasks that arrive from remote sources. In general, a 'worker queue' holds all the short tasks that need to be executed, and the threads in the thread pool retrieve the tasks from the worker queue and complete the tasks. Since multiple threads act on the worker queue, adding tasks to and deleting tasks from the worker queue needs to be synchronized, which introduces contention in the worker queue." The article goes on to explain ways that developers can reduce contention by maintaining one queue per thread. It also demonstrates a work-stealing technique that helps in effectively utilizing the CPU in multicore systems. Read the rest of the article here.
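
    The article's own code is not reproduced in this excerpt. As a rough illustration of the idea it describes - many short tasks fed through a single shared worker queue versus a work-stealing pool where each worker keeps its own queue - here is a minimal Java sketch using only standard JDK classes; the task body and thread counts are arbitrary assumptions, not the article's benchmark.

        import java.util.concurrent.*;

        public class QueueContentionDemo {
            public static void main(String[] args) throws Exception {
                int tasks = 100_000;

                // Single shared worker queue: every worker thread takes tasks from the
                // same LinkedBlockingQueue, so enqueue/dequeue is one contention point.
                ExecutorService shared = new ThreadPoolExecutor(
                        4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
                run(shared, tasks);

                // Work-stealing pool (a ForkJoinPool): each worker keeps its own deque
                // and idle workers steal from busy ones, reducing contention.
                ExecutorService stealing = Executors.newWorkStealingPool(4);
                run(stealing, tasks);
            }

            private static void run(ExecutorService pool, int tasks) throws Exception {
                long start = System.nanoTime();
                CountDownLatch done = new CountDownLatch(tasks);
                for (int i = 0; i < tasks; i++) {
                    pool.submit(() -> { Math.sqrt(42.0); done.countDown(); }); // a short task
                }
                done.await();
                pool.shutdown();
                System.out.printf("%s took %d ms%n",
                        pool.getClass().getSimpleName(),
                        (System.nanoTime() - start) / 1_000_000);
            }
        }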

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week's MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant downside consequences from poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and sources of income (employers, investments and other sources of taxable income and deductions) must often be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.

    The Problem of Constituent Data Management

    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.

    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent

    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle solution for mastering constituent data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and, optionally, Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role-based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems.

    With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out-of-the-box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service-oriented applications, all built around a SOA governance model that encourages effective design and reuse.

    I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector - Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric.

    Also, don't forget the upcoming webcast on Thursday, February 10th: "Deliver Improved Services to Citizens at Lower Cost to your Organization". Our guest speaker is Ruben Spekle from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch government agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Remote install of Ubuntu Server

    - by David Walker
    Hi all, I have a machine located 500 miles away that's running Ubuntu 8.04. I figure it's just about time that I upgrade to the latest LTS. However, there's a software RAID (md_raid) in there, and I'm afraid that just a dist-upgrade after I switch over the sources.list will end in catastrophic failure, like a panic on boot because the RAID'd disk can't be read, or something else. First, I'm hoping that's not the case; however, if it does happen, I'm wondering if there's a means of having someone drop in an Ubuntu 10.04 server install disk and flip on ssh, and some way for me to hop on and re-run the installer remotely. Is this feasible? If so, what would one need to do aside from running apt-get install ssh on the target machine? I do have friends who can be in front of the target machine to initiate the process, just not carry it out.

    Read the article

  • Lubuntu notify-send: remove limit of 21?

    - by giuspen
    When sending notifications with notify-send in Lubuntu, e.g. notify-send -i error -t 1000 "Error" "error notification", I can send only 21 of them; after that, no more notifications are sent. The only way to receive more notifications is to click on the panel, where there is a letter icon with the number 21, and then click the button "clear all notifications". Is there a way to avoid having to click that button, and also a way to remove that letter icon with the notification count altogether? UPDATE: I realized that notification-daemon (0.7.3) is being used. I downloaded the sources and edited the source code (nd-queue.c, on_bubble_destroyed) so that it does not buffer the bubbles but always destroys them, but I would prefer another way...

    Read the article

  • Master Data Management Implementation Styles

    - by david.butler(at)oracle.com
    In any Master Data Management solution deployment, one of the key decisions to be made is the choice of the MDM architecture. Gartner and other analysts describe several different Hub deployment styles, which must be supported by a best-of-breed MDM solution in order to guarantee the success of the deployment project.

    Registry Style: In a Registry Style MDM Hub, the various source systems publish their data and a subscribing Hub stores only the source system IDs, the foreign keys (record IDs on the source systems) and the key data values needed for matching. The Hub runs the cleansing and matching algorithms and assigns unique global identifiers to the matched records, but does not send any data back to the source systems. The Registry Style MDM Hub uses data federation capabilities to build the "virtual" golden view of the master entity from the connected systems.

    Consolidation Style: The Consolidation Style MDM Hub has a physically instantiated "golden" record stored in the central Hub. The authoring of the data remains distributed across the spoke systems, and the master data can be updated based on events, but is not guaranteed to be up to date. The master data in this case is usually not used for transactions, but rather supports reporting; however, it can also be used for reference operationally.

    Coexistence Style: The Coexistence Style MDM Hub involves master data that's authored and stored in numerous spoke systems, but includes a physically instantiated golden record in the central Hub and harmonized master data across the application portfolio. The golden record is constructed in the same manner as in the Consolidation Style, and, in the operational world, Consolidation Style MDM Hubs often evolve into the Coexistence Style. The key difference is that in this architectural style the master data stored in the central MDM system is selectively published out to the subscribing spoke systems.

    Transaction Style: In this architecture, the Hub stores, enhances and maintains all the relevant (master) data attributes. It becomes the authoritative source of truth and publishes this valuable information back to the respective source systems. The Hub publishes and writes back the various data elements to the source systems after the linking, cleansing, matching and enriching algorithms have done their work. Upstream, transactional applications can read master data from the MDM Hub, and, potentially, all spoke systems subscribe to updates published from the central system in a form of harmonization. The Hub needs to support merging of master records. Security and visibility policies at the data attribute level need to be supported by the Transaction Style Hub as well.

    Adaptive Transaction Style: This is similar to the Transaction Style, but additionally provides the capability to respond to diverse information and process requests across the enterprise. This style emerged most recently to address the limitations of the above approaches. With the Adaptive Transaction Style, the Hub is built as a platform for consolidating data from disparate third-party and internal sources and for serving unified master entity views to operational applications, analytical systems or both. This approach delivers a real-time Hub that has a reliable, persistent foundation of master reference and relationship data, along with all the history and lineage of data changes needed for audit and compliance tracking. On top of this persistent master data foundation, the Hub can dynamically aggregate transaction data on demand from different source systems to deliver the unified golden view to downstream systems. Data can also be accessed through batch interfaces, published to a message bus or served through a real-time services layer. New data sources can be readily added in this approach by extending the data model and by configuring the new source mappings and the survivorship rules, meaning that all legacy data hubs can be leveraged to contribute their records and rules to the new transaction hub. Finally, through rich user interfaces for data stewardship, it allows exception handling by business analysts to keep it current with business rules and practices while maintaining the reliability of best-of-breed master records.

    Confederation Style: In this architectural style, several Hubs are maintained at the departmental and/or agency and/or territorial level, and each of them is connected to the other Hubs either directly or via a central Super-Hub. Each domain-level Hub can be implemented using any of the previously described styles, but normally the central Super-Hub is a Registry Style one. This is particularly important for Public Sector organizations, where most of the time it is practically or legally impossible to store all the relevant constituent information from all departments in a single central hub.

    Oracle MDM Solutions can be deployed according to any of the above MDM architectural styles, and have been specifically designed to fully support the Transaction and Adaptive Transaction styles. Oracle MDM Solutions provide strong data federation and integration capabilities, which are key to enabling the use of the Confederated Hub as a possible architectural style approach. Don't lock yourself into a solution that cannot evolve with your needs. With Oracle's support for any type of deployment architecture, its ability to leverage the outstanding capabilities of the Oracle technology stack, and its open interfaces for non-Oracle technology stacks, Oracle MDM Solutions provide a low TCO and a quick ROI by enabling a phased implementation strategy.
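
    As a rough, hypothetical illustration of the Registry Style described above (this is not taken from any Oracle product; all class, field and method names are invented), a hub entry in that style holds only the global identifier, the per-source foreign keys, and the match keys, while the golden view stays virtual:

        import java.util.*;

        // Hypothetical sketch of a Registry Style hub entry: the hub keeps no full
        // golden record, only enough data to link and match records across systems.
        public class RegistryEntry {
            private final String globalId;                      // unique ID assigned by the hub
            private final Map<String, String> sourceRecordIds;  // source system -> foreign key
            private final Map<String, String> matchKeys;        // e.g. normalized name, tax number

            public RegistryEntry(String globalId) {
                this.globalId = globalId;
                this.sourceRecordIds = new HashMap<>();
                this.matchKeys = new HashMap<>();
            }

            public void linkSourceRecord(String sourceSystem, String foreignKey) {
                sourceRecordIds.put(sourceSystem, foreignKey);
            }

            public void putMatchKey(String name, String normalizedValue) {
                matchKeys.put(name, normalizedValue);
            }

            // The "virtual" golden view is assembled on demand by federating reads
            // from the linked source systems rather than being stored in the hub.
            public Map<String, String> linkedRecords() {
                return Collections.unmodifiableMap(sourceRecordIds);
            }

            public String globalId() { return globalId; }
        }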

    Read the article

  • Improve the relevance of responses to customer requests and reduce DMT* (average handling time)

    - by Valérie De Montvallon
    Knowledge Management to improve the relevance of responses to customer requests and reduce average handling times (DMT), with a testimonial from SFR. Monday, July 2, from 8:30 to 10:30 at the Automobile Club de France, Paris. Web, voice calls, in-store appointments: your customers now expect a single, relevant, fast answer, whatever the contact channel. Your customer advisers and agents need easy access to the necessary information. The volume of data used by customer relations teams (contact centers, in-store salespeople, community managers...) is impressive, often mixing several sources of information and requiring semantic search. Join us on July 2 to discover how a Knowledge Management tool can optimize the relevance of search results. During this morning of discussion, Jocelyn Aubry, who heads consumer customer relations IT (DSI Relation Client Grand Public) at SFR, will share his experience integrating the Oracle InQuira solution to build a single, cross-channel knowledge base for his customer advisers. Registration: [email protected]

    Read the article

  • Ubuntu 12.10 - Nvidia GTX 550 Ti proprietary drivers issue

    - by Phill
    I have installed Ubuntu 12.10 and own an Nvidia GTX 550 Ti (no integrated video). The install went just fine, but the buttons and text seem to get scrambled when using the nouveau drivers, I guess. I tried installing the proprietary drivers manually and from the Additional Drivers tab in Software Sources, but every time I do that, after the restart the menus and icons disappear, and the only way to get them back is to revert to the nouveau drivers. I've made numerous attempts, ending up reinstalling it. Any help would be appreciated, thanks.

    Read the article

  • Dofollow or Nofollow trackbacks?

    - by Ernesto Marrero
    Dofollow or nofollow trackbacks: yes or no? Authoritative sources, please. Thinking about SEO, is it better for trackbacks to be dofollow or nofollow? When I write a post on my blog, I automatically send a trackback to my site from http://bitacoras.com. That link is dofollow. Is it advisable to give relevance to this link by also handling it as a dofollow link? For example, some domain's homepage has PageRank 3, while the internal page that I send trackbacks to has PageRank 0. Is it worthwhile to send dofollow links to that page to increase its PageRank? Will my PageRank increase if I perform this operation?

    Read the article

  • Electronic circuit simulator four-way flood-filling issues

    - by AJ Weeks
    I've made an electronic circuit board simulator which has just three types of tiles: wires, power sources, and inverters. Wires connect to anything they touch, other than the sides of inverters; inverters have one input side and one output side; and finally power tiles connect in a similar manner to wires. In the case of an infinite loop, caused by the output of an inverter feeding into its input, I want inverters to oscillate (quickly turn on and off). I've attempted to implement a flood-fill algorithm to spread the power throughout the grid, but seem to have gotten something wrong, as only the tiles above the power source get powered (as seen below). I've attempted to debug the program, but have had no luck thus far. My code concerning the updating of power can be seen here.
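
    The questioner's code is linked rather than shown, so as a generic sketch only (the tile encoding and grid layout are assumptions, and inverters are left out), a four-way breadth-first flood fill that spreads power from every power tile could look like the following. Marking a tile as powered before enqueueing it is what keeps the fill from revisiting tiles on circular wire runs:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class PowerFloodFill {
            // Hypothetical tile codes: 0 = empty, 1 = wire, 2 = power source.
            // (Inverters would need extra direction handling and are left out here.)
            static final int EMPTY = 0, WIRE = 1, POWER = 2;

            /** Returns a grid of booleans marking which tiles are powered. */
            static boolean[][] computePower(int[][] tiles) {
                int h = tiles.length, w = tiles[0].length;
                boolean[][] powered = new boolean[h][w];
                Deque<int[]> queue = new ArrayDeque<>();

                // Seed the fill from every power source, not just one.
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        if (tiles[y][x] == POWER) {
                            powered[y][x] = true;
                            queue.add(new int[] {x, y});
                        }

                int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};  // four-way spread
                while (!queue.isEmpty()) {
                    int[] cur = queue.poll();
                    for (int[] d : dirs) {
                        int nx = cur[0] + d[0], ny = cur[1] + d[1];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;   // off the board
                        if (powered[ny][nx] || tiles[ny][nx] != WIRE) continue; // visited or not conductive
                        powered[ny][nx] = true;                                 // mark before enqueueing
                        queue.add(new int[] {nx, ny});
                    }
                }
                return powered;
            }
        }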

    Read the article

  • What programming language would you learn to re-engineer USB devices? [closed]

    - by user70113
    I currently work in IT support and am retraining in electrical engineering / electronics. I am also interested in reverse engineering; which language would be best for hardware RE? I have seen a few sources say C, C++ and Python. I am not familiar with Linux, but I installed Ubuntu to learn with. I am not a programmer, far from it, but I can understand enough basic VB, Java and PHP to edit them for simple things. One of my immediate projects is to learn to reverse engineer USB devices and write my own low-level drivers. I know there are porting kits, but I really want to understand it from the ground up. Thanks for any advice, folks. Most appreciated.

    Read the article

  • How do you Install the Latest Release of Miro?

    - by Brenton Horne
    In the Software Centre the latest release of Miro available is 4.0.4, whereas the latest release of Miro is 5.0.4. How do I download 5.0.4 on 12.10? I have tried following the guide at http://www.getmiro.com/download/for-ubuntu/ (and thus have already run sudo add-apt-repository ppa:pcf/miro-releases), but it failed, and when I tried to run sudo apt-get update I received the error:

        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Installing ZFS changed my sudoers file

    - by MaxMackie
    Following the wiki's advice, I installed ubuntu-zfs. However, once everything was installed correctly and I tried installing another application via apt-get, I get a weird issue with my sudoers file:

        max@host:~$ sudo apt-get install deluge deluge-web
        sudo: /etc/sudoers.d/zfs is mode 0644, should be 0440
        >>> /etc/sudoers.d/README: /etc/sudoers.d/zfs near line 18 <<<
        sudo: parse error in /etc/sudoers.d/README near line 18
        sudo: no valid sudoers sources found, quitting
        *** glibc detected *** sudo: double free or corruption (!prev): 0x08909d08 ***
        ======= Backtrace: =========
        ....

    Why has ZFS messed with the sudoers file? I can post the backtrace if needed.

    Read the article

  • Simple Project Templates

    - by Geertjan
    The NetBeans sources include a module named "simple.project.templates". In the module sources, Tim Boudreau turns out to be the author of the code, so I asked him what it was all about, and if he could provide some usage code. His response, from approximately this time last year (because it's been sitting in my inbox for a while), is below.

    Sure - though I think the javadoc in it is fairly complete. I wrote it because I needed to create a bunch of project templates for Javacard, and all of the ways that is usually done were grotesque and complicated. I figured we already have the ability to create files from templates, and we already have the ability to do substitutions in templates, so why not have a single file that defines the project as a list of file templates to create (with substitutions in the names) and some definitions of what should be in project properties? You can also add files to the project programmatically if you want.

    Basically, a template for an entire project is a .properties file. Any line which doesn't have the prefix 'pp.' or 'pvp.' is treated as the definition of one file which should be created in the new project. Any such line where the key ends in * means that file should be opened once the new project is created. So, for example, in the nodejs module, the definition looks like:

        {{projectName}}.js*=Templates/javascript/HelloWorld.js
        .npmignore=node_hidden_templates/npmignore

    So, the first line means:
    - Create a file with the same name as the project, using the HelloWorld template
      - I.e. the left side of the line is the relative path of the file to create, and the right side is the path in the system filesystem for the template to use
      - If the template is not one you normally want users to see, just register it in the system filesystem somewhere other than Templates/ (but remember to set the attribute that marks it as a template)
    - Include that file in the set of files which should be opened in the editor once the new project is created.

    To actually create a project, first you just create a new ProjectCreator:

        ProjectCreator gen = new ProjectCreator( parentFolderOfNewProject );

    Now, if you want to programmatically generate any files, in addition to those defined in the template, you can:

        gen.add (new FileCreator("nbproject", "project.xml", false) {
            public DataObject create (FileObject project, Map<String,String> substitutions) throws IOException {
                ...
            }
        });

    Then pass the FileObject for the project template (the properties file) to the ProjectCreator's createProject method (hmm, maybe it should be the string path to the project template instead, to save the caller trouble looking up the FileObject for the template). That method looks like this:

        public final GeneratedProject createProject(final ProgressHandle handle, final String name, final FileObject template, final Map<String, String> substitutions) throws IOException {

    The name parameter should be the directory name for the new project; the map is the strings you gathered in the wizard which should be used for substitutions. createProject should be called on a background thread (i.e. use a ProgressInstantiatingIterator for the wizard iterator and just pass in the ProgressHandle you are given). The return value is a GeneratedProject object, which is just a holder for the created project directory and the set of DataObjects which should be opened when the wizard finishes.

    I'd love to see simple.project.templates moved out of the javacard cluster, as it is really useful and much simpler than any of the stuff currently done for generating projects. It would also be possible to do much richer tools for creating projects in apisupport - i.e. choose (or create in the wizard) the templates you want to use, generate a skeleton wizard with a UI for all the properties you'd like to substitute, etc.

    Here is a partial project template from Javacard - for example usage, see org.netbeans.modules.javacard.wizard.ProjectWizardIterator in javacard.project (or the much simpler one in contrib/nodejs).

        #This properties file describes what to create when a project template is
        #instantiated.  The keys are paths on disk relative to the project root.
        #The values are paths to the templates to use for those files in the system
        #filesystem.  Any string inside {{ and }}'s will be substituted using properties
        #gathered in the template wizard.
        #Special key prefixes are
        #  pp. - indicates an entry for nbproject/project.properties
        #  pvp. - indicates an entry for nbproject/private/private.properties

        #File templates, in format [path-in-project=path-to-template]
        META-INF/javacard.xml=org-netbeans-modules-javacard/templates/javacard.xml
        META-INF/MANIFEST.MF=org-netbeans-modules-javacard/templates/EAP_MANIFEST.MF
        APPLET-INF/applet.xml=org-netbeans-modules-javacard/templates/applet.xml
        scripts/{{classnamelowercase}}.scr=org-netbeans-modules-javacard/templates/test.scr
        src/{{packagepath}}/{{classname}}.java*=Templates/javacard/ExtendedApplet.java
        nbproject/deployment.xml=org-netbeans-modules-javacard/templates/deployment.xml

        #project.properties contents
        pp.display.name={{projectname}}
        pp.platform.active={{activeplatform}}
        pp.active.device={{activedevice}}
        pp.includes=**
        pp.excludes=

    I will be using the above info in an upcoming blog entry and provide step-by-step instructions showing how to use them. However, anyone else out there should have enough info from the above to get started yourself!

    Read the article

  • TFS: Work Items values from External Databases

    - by javarg
    A common question in TFS forums is how to populate list items in Work Items from external sources. Well, there is no specific functionality for integrating Work Items with external databases or systems when designing them. Instead, you will need to associate your Work Item fields with Global Lists and then have some automated process update this global list regularly. Download this ImportGlobalList.zip file. I've put together a simple class (TfsGlobalList) that you can use to update global list items from a .NET application. You could, for example, create a simple console app and schedule it using Windows Scheduler. This app would query a database and then update a TFS Global List using the provided code. Note: the provided code must be run under an account with permissions to modify Global Lists in TFS. Note: remember to refresh Team Explorer in order to see the updated Work Item field values. Enjoy!

    Read the article

  • UML class diagram - can aggregated object be part of two aggregated classes?

    - by user970696
    Some sources say that aggregation means that the class owns the object and shares the reference. Let's assume an example where a Company class holds a list of cars, but departments of that company also hold lists of the cars they use.

        class Department {
            List<Car> listOfCars;
        }

        class Company {
            List<Car> listOfCars; // initialization of the list
        }

    So in a UML class diagram, I would do it like this. But I assume this is not allowed, because it would imply that both the company and the department own the objects.

        [COMPANY]<>------[CAR]
        [DEPARTMENT]<>---|      // imagine this goes up to the Car class
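
    For reference, the sharing the question describes is straightforward on the code side: both lists can hold references to the same Car instances. The sketch below reuses the class and field names from the question; the main method and the departments field are hypothetical additions for illustration only, and it does not settle the UML notation question.

        import java.util.ArrayList;
        import java.util.List;

        class Car { }

        class Department {
            List<Car> listOfCars = new ArrayList<>();
        }

        class Company {
            List<Car> listOfCars = new ArrayList<>();
            List<Department> departments = new ArrayList<>();
        }

        public class AggregationDemo {
            public static void main(String[] args) {
                Company company = new Company();
                Department sales = new Department();
                company.departments.add(sales);

                Car pool = new Car();
                company.listOfCars.add(pool);  // the company tracks every car
                sales.listOfCars.add(pool);    // the department references the same object

                // Both lists point at the identical Car instance; neither "owns" it exclusively.
                System.out.println(company.listOfCars.get(0) == sales.listOfCars.get(0)); // true
            }
        }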

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example of this is General Electric's acquisition and divestiture strategy (plan) designed to propel GE to the number 1 or 2 position (goal) in every business segment it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case, execution would be the steps followed for mergers, acquisitions or divestitures. The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best: "vision without execution is hallucination". Conversely, it can be said that "execution without vision" may well be "wishful thinking". Research overwhelmingly points towards a wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap.

    Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include:
    - Lack of clearly defined goals
    - Lack of a consistent measure of success
    - Lack of ownership
    - Lack of alignment
    - Lack of communication
    - Lack of proper execution
    - Lack of monitoring

    There are multiple approaches to solving the problem, including organizational development practices, technology enablement, etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology. Imagine an integrated, closed-loop technology platform that automates the entire management cycle, from defining strategy to assigning ownership to communicating goals to achieving alignment to collaboration to taking actions to monitoring progress and making mid-course corrections. Besides, for best ROI and lowest TCO, such a system should also have characteristics like:
    - Complete: full functionality, rich end user access
    - Open: any data source, any business application, any technology stack
    - Integrated: common metadata, common security, common system management

    From a capabilities perspective, the system should provide the following:
    - Define: strategy, objectives, ownership, KPIs
    - Communicate: pervasive, collaborative, role based, secure
    - Execute: integrated, intuitive, secure, ubiquitous
    - Monitor: multiple styles and formats, exception based, push and pull

    Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best-in-class BI platform is based on an architecturally integrated technology foundation that provides a unified end user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.

    The BI platform is:
    - Complete: it delivers all modes and styles of BI, including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards, with a rich end user experience that includes visualization, collaboration, alerts and notifications, search and mobile access.
    - Open: the BI platform integrates with any data source, ETL tool, business application, application server, security infrastructure or portal technology, as well as any ODBC-compliant third party analytical tool. The suite accesses data from multiple heterogeneous sources, including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP.
    - Integrated: the BI platform is based on an architecturally integrated technology foundation built on an open, standards-based service-oriented architecture. The platform features a common enterprise information model, a common security model and a common configuration, deployment and systems management framework.

    To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and make course corrections, all from a single interface and a single system. The platform's integrated metadata model and task-based design ensure that the entire workflow, from defining strategy to execution to monitoring, is completely integrated, delivering end-to-end visibility, transparency and agility. Click here to learn more about Oracle BI 11g.

    Read the article

  • Launch of a "Usine Logicielle" (software factory) forum dedicated to practices and tools that are part of an approach to...

    Hello, the Usine Logicielle forum was created to cover a field that has been growing strongly for several years and that parallels approaches well established in other engineering disciplines (e.g. mechanical engineering) relating to industrialization. You will therefore find forums dedicated to topics such as source control, builds, continuous integration, traceability, code quality metrics, etc. Since it is tricky and hard to conceive of separating theory and practice from tools on these themes, this forum and its sub-forums address both aspects (with forums specialized in particular tools, or the like, created according to their popularity). You will therefore find some existing forums...

    Read the article

  • Is there a difference between "." and "source" in bash, after all?

    - by ysap
    I was looking for the difference between the "." and "source" builtin commands, and a few sources (e.g. in this discussion, and the bash manpage) suggest that these are just the same. However, following a problem with environment variables, I conducted a test. I created a file testenv.sh that contains:

        #!/bin/bash
        echo $MY_VAR

    In the command prompt, I performed the following:

        > chmod +x testenv.sh
        > MY_VAR=12345
        > ./testenv.sh

        > source testenv.sh
        12345
        > MY_VAR=12345 ./testenv.sh
        12345

    [note that the 1st form returned an empty string]

    So, this little experiment suggests that there is a difference after all: for the "source" command, the child environment inherits all the variables from the parent one, whereas for the "." it does not. Am I missing something, or is this an undocumented/deprecated feature of bash? [GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)]

    Read the article

  • Free or Low Cost Web Hosting for Small Website [duplicate]

    - by etangins
    This question already has an answer here: How to find web hosting that meets my requirements? (5 answers)
    I have a small website (between 2,000 and 10,000 page-views a day). I'm looking for a free or low-cost web host. I tried 50webs.com, but their server breaks down. So as not to cause debate, I am also just looking for links to good information sources on web hosting, if just finding a good web host is too general. I currently only use HTML, CSS, and JavaScript, though I'm considering learning PHP and other more advanced languages to step up my game.

    Read the article

  • Google Analytics conversion tracking - referrals from payment provider

    - by martynas
    I have a question regarding conversion tracking using Google Analytics. My client uses an external payment service provider, SecureTrading. Problem: all website visitors who would like to make a purchase are taken to a payment form on https://securetrading.net and are redirected back after a successful payment. Google Analytics counts that as a referral and messes up the conversion tracking stats. Question: what needs to be changed or added in the payment forms or Google Analytics settings so that conversions are assigned to the right traffic sources? Screenshot:

    Read the article

  • apt-fast for Ubuntu 14.04?

    - by Jatin Kaushal
    I just upgraded to Ubuntu 14.04 from Ubuntu 12.04 (which I loved) and now I can't install apt-fast. My net connection is very slow and I want to download using apt-fast, but whenever I add the apt-fast PPA, update, and try to install it, it says "package apt-fast not found". How do I fix this? Thank you - I really appreciate the effort made by Ask Ubuntu (which includes you awesome people ;)) to help me. My problem has already been answered. Here is a screenshot of my software sources. The person who answered my question has apparently removed the comment that answered it, but here is the repository that helped me:

        sudo add-apt-repository ppa:saiarcot895/myppa

    So if this helps, I would like to upvote each and every person who helped me, and I will accept one answer too; just give me some time to test both of them ;) Anyway, thank you guys for helping me. You are awesome :-) XD

    Read the article
