Search Results

Search found 18396 results on 736 pages for 'oracle policy administration'.

Page 207/736 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • Time to Check Your Servers

    - by fatherjack
    Do you know how to find the time that your SQL Server started? Since SQL Server 2008 you can use: SELECT sqlserver_start_time FROM sys.dm_os_sys_info On one of my servers this gives me: This is great, and can be used in lots of ways. I happened across the [sys].[dm_exec_requests] view the other day and out of curiosity ran the query SELECT MIN(start_time) AS [start time] FROM [sys].[dm_exec_requests] AS der And I was surprised to see the result as: Almost exactly an hour different. Now as...(read more)
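    For convenience, both values can be compared in a single query. A minimal sketch, assuming SQL Server 2008 or later:

        -- Compare the instance start time with the oldest request start time
        -- currently reported in sys.dm_exec_requests
        SELECT si.sqlserver_start_time,
               MIN(der.start_time) AS oldest_request_start_time
        FROM sys.dm_os_sys_info AS si
        CROSS JOIN sys.dm_exec_requests AS der
        GROUP BY si.sqlserver_start_time;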

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup Tool; Note 1082674.1: A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file. An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces.

    Looking for the best way to diagnose? Whenever an ORA-7445 error is raised, a core file is generated. There may be a trace file generated with the error as well. Prior to 11g, core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data: diagnostic files are written into a root directory for all diagnostic data called the ADR home, and core files at 11g go to the ADR_HOME/cdump directory. For more information on the Oracle 11g Diagnosability feature see Note 453125.1 11g Diagnosability Frequently Asked Questions and Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1. Check the Alert Log
    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and you should review the first error and associated core file or trace file and work down the list of errors. See Note 1020463.6 DIAGNOSING ORA-3113 ERRORS, Note 1812.1 TECH: Getting a Stack Trace from a CORE file, and Note 414966.1 RDA Documentation Index. If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. Vital diagnostic information could be missing from the file, making it very difficult to discover the root issue. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error to capture complete trace information. For pointers on deeper analysis of these errors see Note 390293.1 Introduction to 600/7445 Internal Error Analysis and Note 211909.1 Customer Introduction to ORA-7445 Errors.

    2. Search the 600/7445 Lookup Tool
    Visit My Oracle Support to access the ORA-600/ORA-7445 Lookup Tool (Note 153788.1). The Lookup Tool may lead you to applicable content in My Oracle Support on the problem and can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.

    3. "Fine tune" searches in the Knowledge Base
    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more you can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match your problem. See Note 153788.1 ORA-600/ORA-7445 Troubleshooter and Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]. NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file. Core files are managed through 11g Diagnosability but are not packaged with other diagnostic data automatically. The core files can be quite large, but may be useful during analysis within Oracle Support.

    4. If assistance is required from Oracle
    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum: the alert log; associated trace file(s) or incident package at 11g; patch level information; core file(s); information about changes in configuration and/or application prior to the issues; if the error is reproducible, a self-contained reproducible testcase (Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors); and an RDA report or Oracle Configuration Manager information (Oracle Configuration Manager Quick Start Guide, Note 548815.1 My Oracle Support Configuration Management FAQ, Note 414966.1 RDA Documentation Index).

    For reference to the content in this blog, refer to Note 1092832.1 Master Note for Diagnosing ORA-600.
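    Where the trace and core files land, and whether traces are being truncated, can be confirmed from SQL*Plus. A minimal sketch, assuming an 11g or later instance (the directory values returned will of course differ per system):

        -- Locate the ADR home, trace, and core dump directories (11g and later)
        SELECT name, value
        FROM   v$diag_info
        WHERE  name IN ('ADR Home', 'Diag Trace', 'Diag Cdump');

        -- Check whether trace files can be truncated
        SHOW PARAMETER max_dump_file_size

        -- Illustrative only: allow full traces to be written while reproducing the error
        ALTER SYSTEM SET max_dump_file_size = 'UNLIMITED' SCOPE = BOTH;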

    Read the article

  • BI for Fast Analyses: A Successful Project for DAB bank, Carried Out by Riverland Reply

    - by A & C Redaktion
    Satisfied customers are the best marketing strategy. That is why we give specialized partners the opportunity to have professional customer success stories written about their own successful Oracle projects. Here on the blog we present, from time to time, reference reports that partners are already using successfully in their marketing. Today: Oracle partner Riverland Reply and its business intelligence project for DAB bank. For the Direkt Anlage Bank (DAB for short), BI solutions have long been an important management tool, and data streams from various sources must be controlled reliably. To take advantage of the latest CRM and business intelligence functionality, DAB bank decided on an upgrade to Siebel CRM 8.1.1.7 with CTI and the integration of Oracle Business Intelligence 11g. To realize its goal of simplification and standardization, DAB found a reliable partner in the Munich-based Riverland Reply GmbH, which specializes in technical consulting, implementation and system integration in the areas of processes, business solutions and technologies. Usability was improved significantly, which eases process workflows. A unified environment now makes data access easier, and the reporting, visualization, search and collaboration functions are consolidated in a single BI tool. Details on the exact course of the project and the specific requirements can be found here in the DAB customer report. Any specialized partner that has completed a representative Oracle project can take advantage of the opportunity to present itself and its work profitably. Experienced trade journalists interview both the partner and the end customer and produce a detailed, attractively prepared report. Publication takes place via various marketing channels. Of course, partners can also use the reports for their own marketing purposes, for example at events. Interested? Then please contact Ms. Renate Mayer. We need a few key facts from you, such as the customer name, contact person and Oracle products used, a three-to-four sentence description of the project, and your in-house contact. And then: let your good work speak for itself!

    Read the article

  • Where is my IIS change going to be applied?

    - by The Official Microsoft IIS Site
    When you make a change to a website in Internet Information Services (IIS) 6, it is straightforward to know where your change will be applied. All information is stored in the Metabase. Starting with IIS 7, information is stored in a variety of places. Sometimes the site or server change that you make is stored in a web.config file at the server level. Other times the change is stored in the main IIS configuration file, applicationHost.config. To complicate it further, sometimes the change is stored...(read more)

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • What can I do to give some more love and disk space to my database on Ubuntu?

    - by Yaron Naveh
    I'm new to Linux. I've deployed a DB to an Ubuntu server on Amazon and found out I'm low on disk space. I ran df (see below) and found that I'm at 89% capacity on one file system, but less on others. What does this mean? Do I have a few partitions and can now utilize others besides /dev/xvda1? Also, /dev/xvdb seems large; is it safe to put the DB on it and only use that? If so, do I need to mount it or do something special?

    $> df -lah
    Filesystem   Size  Used  Avail Use% Mounted on
    /dev/xvda1   8.0G  6.7G  914M  89%  /
    proc            0     0     0   -   /proc
    sysfs           0     0     0   -   /sys
    none            0     0     0   -   /sys/fs/fuse/connections
    none            0     0     0   -   /sys/kernel/debug
    none            0     0     0   -   /sys/kernel/security
    udev         3.7G  8.0K  3.7G   1%  /dev
    devpts          0     0     0   -   /dev/pts
    tmpfs        1.5G  164K  1.5G   1%  /run
    none         5.0M     0  5.0M   0%  /run/lock
    none         3.7G     0  3.7G   0%  /run/shm
    /dev/xvdb    414G  199M  393G   1%  /mnt

    Read the article

  • Cloud Evolving, SQL Server Responding

    - by KKline
    Brent Ozar ( blog | twitter ) and I did an interview with TechTarget’s Brendan Cournoyer at last summer's Tech-Ed, which has turned into a podcast titled “Cloud efforts advance, SQL Server evolves.” The podcast covers all the major trends at the conference (like BI), virtualization features in Quest’s products (like Spotlight), Brent’s new book and MCM certification, and more. Here’s a link to hear it, appearing on 6/11/10: http://searchsqlserver.techtarget.com/podcast/Cloud-efforts-advance-SQL-Server-evolves....(read more)

    Read the article

  • Parsing the sqlserver.sql_text Action in Extended Events by Offsets

    - by Jonathan Kehayias
    A couple of weeks back I received an email from a member of the community who was reading the XEvent a Day blog series and had a couple of interesting questions about Extended Events.  This person had created an Event Session that captured the sqlserver.sql_statement_completed and sqlserver.sql_statement_starting Events and wanted to know how to do a correlation between the related Events so that the offset information from the starting Event could be used to find the statement of the completed...(read more)
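    As a sketch of the kind of event session being discussed (the event, action, and target names are as shipped with SQL Server 2008 Extended Events; the ring_buffer target here is just for illustration):

        -- Capture both statement events with the sql_text action so the
        -- starting/completed pairs can later be correlated by their offsets
        CREATE EVENT SESSION [statement_correlation] ON SERVER
        ADD EVENT sqlserver.sql_statement_starting
            (ACTION (sqlserver.sql_text, sqlserver.session_id)),
        ADD EVENT sqlserver.sql_statement_completed
            (ACTION (sqlserver.sql_text, sqlserver.session_id))
        ADD TARGET package0.ring_buffer;
        GO

        ALTER EVENT SESSION [statement_correlation] ON SERVER STATE = START;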

    Read the article

  • Accenture Launches Smart Grid Data Management Platform

    - by caroline.yu
    Accenture announced today it has launched the Accenture Intelligent Network Data Enterprise (INDE), a data management platform to help utilities design, deploy and manage smart grids. INDE's functionality can be enabled by an array of third party technologies. In addition, Accenture plans to offer utilities the option of implementing the INDE solution based on a pre-configured suite of Oracle technologies. The Oracle-based version of INDE will accelerate the design of smart grids and help reduce the costs and risks associated with smart grid implementation. Stephan Scholl, Senior Vice President and General Manager of Oracle Utilities, said, "Oracle and Accenture share a common vision of how the smart grid will enable more efficient energy choices for utilities and their customers. Our combined expertise in delivering mission-critical smart grid applications, security, data management and systems integration can help accelerate utilities toward a more intelligent network now and as future needs arise." For the full press release, click here.

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to move a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, along with a tip for easily moving models from one machine to another using only Oracle Database rather than an external file transport mechanism such as FTP.

    In the first scenario, consider a company with geographically distributed business units, each collecting and managing its data locally for the products it sells. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1 (Target instance importing multiple remote models for local scoring).

    In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health/tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, who can score patient data locally. This is depicted in Figure 2 (Multiple target instances importing the same model from a source instance for local scoring).

    Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:

        create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';

    Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension, because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.

        connect <source_schema>/<password>@SOURCE1_LINK;
        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_SOURCE_DIR_OBJECT',
                                         'name =''MY_MINING_MODEL''');
        END;

        connect <target_schema>/<password>;
        BEGIN
          DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '01.dmp',
                                       'SOURCE1_LINK',
                                       'MY_TARGET_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '.dmp' );
          DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_TARGET_DIR_OBJECT');
        END;

    To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. For example: utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
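    For completeness, a hedged sketch of the directory objects the script above assumes (the paths and grantee names here are illustrative; creating directory objects was covered in the previous post):

        -- On the source database (path and grantee are illustrative)
        CREATE OR REPLACE DIRECTORY my_source_dir_object AS '/u01/app/oracle/dm_export';
        GRANT READ, WRITE ON DIRECTORY my_source_dir_object TO source_schema;

        -- On the target database (path and grantee are illustrative)
        CREATE OR REPLACE DIRECTORY my_target_dir_object AS '/u01/app/oracle/dm_import';
        GRANT READ, WRITE ON DIRECTORY my_target_dir_object TO target_schema;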

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel(at)oracle.com
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Is your organization a social enterprise? How are you using Web 2.0 and Enterprise 2.0 technologies? Read this white paper to learn how Oracle WebCenter Suite enables organizations to become social enterprises and is the modern user experience platform for the enterprise and the Web.

    Read the article

  • Top 5 Sites and Activities in San Francisco to Experience During Oracle OpenWorld

    - by kgee
    While Oracle OpenWorld may provide solutions and information on topics like how to simplify your IT, the importance of cloud, and what types of storage may satisfy your enterprise needs, who is going to tell you more about San Francisco? Here are some suggested sites and activities to experience after OpenWorld that aren’t too far from the Moscone Center. It is recommended to take a cab for the sake of time, but the 6 square miles that make up San Francisco will make for a quick trek to any of the following destinations:

    The Golden Gate Bridge: An image often associated with San Francisco, this bridge is one of the most impressive in the world. Take a walk across it, or view it from nearby Crissy Field; it is a sight that floors even the most veteran of San Franciscans.

    The Ferry Building: Located at the end of Market Street in the Embarcadero, the Ferry Building once served as a hub of water transport and trade. The building has a bay front view and an array of food choices and restaurants. It is easily accessible via the Muni, BART, trolley or by cab. It is a must-see in San Francisco, and not too far from the Moscone Center.

    Ride the Trolley to the Castro: For only $2, you can go back in history for a moment on the trolley. Take the F-line from the Embarcadero and ride it all the way to the Castro district. During the ride, you will get an overview of the landscape and cultures that are prevalent in San Francisco, but be wary that some areas may beg for an open mind more than others.

    Golden Gate Park: When you tire of the concrete jungle, the lucky part of being in San Francisco is that you can escape to a natural refuge, this park being one of the favorites. This park is known for its hiking trails, cultural attractions, monuments, lakes and gardens. It is one good reason to bring your sneakers to San Francisco, and is also a great place to picnic. Please be aware that it is easy to get lost, and it is advisable to bring a map (just in case) if you go.

    Haight Ashbury: For a complete change of scenery, Haight Ashbury is known as one of the places hippies used to live and the location of "The Summer of Love." It is now a more affluent neighborhood with boutique shops and the occasional drum circle. While it may be perceived as grungy in certain spots, it is one of the most photographed places in San Francisco and an integral part of San Franciscan history.

    Read the article

  • Defragment / Performance Monitor without Task Scheduler

    - by mjaggard
    My organisation has a policy of disabling Task Scheduler on all servers and workstations (don't ask, I tried once to wrestle the pig). I need to collect performance stats using Data Collector Sets in Windows 7 or Windows 2008, but the Performance Monitor interface requires Task Scheduler to be running. Is this possible, given that I'm not trying to schedule anything (except the collection of WMI information every 15 seconds, but I doubt it hands that task off to the Task Scheduler)? Is there any way to trick it into thinking Task Scheduler is running? If not, is there any way to temporarily override the group policy to allow Task Scheduler to run? I've found that most group policy can be overridden in this way by an Administrator by editing the registry. In the same vein, I want to defragment a hard disk on one of my workstations, but I can't get it to start because of the dependency on Task Scheduler - is it possible to overcome this?

    Read the article

  • Consumer Electronics Show (CES):CRM for High Technology Firms

    - by charles.knapp
    The Consumer Electronics Show, opening Thursday, showcases product innovations that stem from best practices in design, manufacturing, and distribution. Oracle and IBM invite you to learn best practices from peers, as well as why it matters to use CRM tailored for high technology firms -- offered only by Oracle. On Wednesday, January 5, 1-7 pm at the Bellagio Hotel Las Vegas, learn from peers at IBM, VTech, Plantronics, Cisco, Symantec, and Oracle about how to improve: channel sales, marketing, and operations management - maximize new product introductions (NPI), sales, forecasts, training, channel promotions, and settlement; winning the deal - determine the right price for the right deal for the "perfect quote," capture the order, and manage orders; collaborative and rapid supply chain planning - improve agility, inventory turns, and profits. Please join us for the Oracle/IBM CES High Technology Summit and make useful connections with your peers at the evening networking reception. Register now for this FREE event.

    Read the article

  • Back in Atlanta! Wed, Feb 9 2011

    - by KKline
    I always enjoy spending time with my friends from Atlanta, as well as meeting folks and making new friends. If you live in the Atlanta area, I hope you'll join me on the evening of Wednesday, February 9th, 2011. Details are at the Atlanta SQL Server user group website. It's common knowledge that I have a terrible memory for many things. However, one of the few things that my memory is usually really good at is remembering names & faces (and remembering stories, but that is another story as well)....(read more)

    Read the article

  • Join Our Call: Sun Storage 2500-M2 Announcement

    - by mseika
    Oracle's Sun Storage 2500-M2 array brings together the latest Fibre Channel (FC) and SAS2 technologies with Oracle's Sun Storage Common Array software to create a robust solution that’s equally adept in an entry-level storage area network (SAN) for the mid-size business and integrating into an existing storage network within the enterprise. The Sun Storage 2500-M2 replaces Sun's Storage 2500 array product line and is designed to give customers a quick qualification time for fast and easy deployment in traditional 2500 environments. Jun Jang, Oracle Principal Product Manager, will be hosting this 1-hour live call (a recording will be available). Please join us to find out more: 24 June 2011, 08:00 am PST/PDT / 4:00 pm UK time. Web Registration and Access | Access for Mobile Devices | International Participant Dial-In Number: 706-634-8508 | Additional International Dial-In Numbers Link | Dial-In Passcode: 6395

    Read the article

  • Boost your infrastructure with Coherence into the Cloud

    - by Nino Guarnacci
    Authors: Nino Guarnacci & Francesco Scarano. The original article can be found at this URL: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost.

    When thinking about the enterprise cloud, many possible configurations and new opportunities in enterprise environments come to mind. The customer needs that drive this trend are often very different, but they are almost always united by two main objectives: elasticity of the infrastructure (both hardware and software), and investments that grow progressively with the needs of the current infrastructure; in short, innovation and economy.

    A concrete use case that I worked on recently demanded the fulfillment of exactly these two basic requirements of economy and innovation. The client needed to manage a variety of data caches that can process complex queries and parallel computational operations, keeping the caches in a consistent state across the different server instances on which the application was installed. In addition, the customer was looking for a solution that would allow it to handle the likely load peaks during certain times of the year. For this reason, the customer required a replication site to which part of the requests could be routed during peak periods; the desire, however, was to avoid tying up investments in owned hardware and software architectures, so a solution based on the cloud technologies and architectures already offered by the market was requested.

    Coherence can already address the requirement for large caches spread across different nodes in the cluster, providing search and parallel computing technology that makes simultaneous use of all hardware infrastructure resources. Moreover, thanks to the "Push Replication" functionality, which can replicate and update the information contained in the cache even to a site hosted in the cloud, the need for a resilient infrastructure that can also rely on nodes temporarily housed in cloud architectures is satisfied.

    There are different types of configurations that can be realized using the "Push Replication" functionality of Coherence: Active-Passive (Hub and Spoke), Active-Active, Multi-Master, and Centralized Replication. Since the architecture of this particular project consists of two sites (Site 1 and Site Cloud), of which only Site 1 is enabled to write into the cache, an Active-Passive (Hub and Spoke) configuration was adopted. If the requirement should change over time, it will be particularly easy to change this into an Active-Active configuration.

    Although very simple, the small sample in this post, inspired by the specific project, is effective for better understanding the features and capabilities of Coherence and its configurations. Let's create two distinct Coherence clusters, located miles apart, on two different domain contexts, one of them "hosted" at home (on-premise) and the other hosted by any cloud provider on the network (or just the same laptop to test it :)). These two clusters, which we call Site 1 and Site Cloud, will contain the necessary information, so a simple client can insert data only into Site 1. On both sites a listener will be subscribed, listening for changes to specific objects within the various caches.
    To implement these features, you need 4 simple classes:

    - CachedResponse.java: the POJO class that will be inserted into the cache, which holds useful information about the hypothetical link navigation
    - ResponseSimulatorHelper.java: a link simulator, whose task is to randomly create objects of type CachedResponse that will be added into the caches
    - CacheCommands.java: the model of our example, responsible for receiving instructions from the controller and performing basic operations against the cache, such as insert, delete, update and listening to objects within the cache
    - Shell.java: our controller, which issues the commands to be executed within the caches of the two sites

    So, in summary, we execute the Java class "Shell", asking it to put 100 objects of type "CachedResponse" into the cache through the Java class "CacheCommands"; the simulator "ResponseSimulatorHelper" will randomly create new instances of "CachedResponse" objects. Finally, the Shell class will listen for events occurring within the cache on Site Cloud, while insertions and deletions are performed on Site 1.

    Now we create the configurations for the two respective sites/clusters, Site 1 and Site Cloud. For Site 1 we define a cache of type "distributed" with "read and write" features, using the cache store class for "push replication", a functionality offered by the Oracle Coherence "incubator" project. For "Site Cloud" we likewise define a "distributed" cache type, with the TCP proxy feature enabled so it can receive updates from Site 1.

    - Coherence cache config XML file for the "storage node" on "Site 1": site1-prod-cache-config.xml
    - Coherence cache config XML file for the "storage node" on "Site Cloud": site2-prod-cache-config.xml

    For the two "Shell" clients, which will connect respectively to the two clusters, we have provided two simple access configurations:

    - Coherence cache config XML file for the Shell on "Site 1": site1-shell-prod-cache-config.xml
    - Coherence cache config XML file for the Shell on "Site Cloud": site2-shell-prod-cache-config.xml

    Now, we just have to put everything together and run our tests.
    To start at least one "storage" node (which holds the data) for "Site Cloud", we can run the standard class provided OOTB by Oracle Coherence, com.tangosol.net.DefaultCacheServer, with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site2-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    To start at least one "storage" node (which holds the data) for "Site 1", we can again run the standard Coherence class com.tangosol.net.DefaultCacheServer with the following parameters and values:

        -Xmx128m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.cacheconfig=config/site1-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    Then, we start the first "Shell" client for "Site Cloud", launching the Java class it.javac.Shell with these parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site2-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9002
        -Dtangosol.coherence.site=SiteCloud

    Finally, we start the second "Shell" client for "Site 1", launching a new instance of the class it.javac.Shell with the following parameters and values:

        -Xmx64m -Xms64m
        -Dcom.sun.management.jmxremote
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dtangosol.coherence.distributed.localstorage=false
        -Dtangosol.coherence.cacheconfig=config/site1-shell-prod-cache-config.xml
        -Dtangosol.coherence.clusterport=9001
        -Dtangosol.coherence.site=Site1

    And now, let's execute some tests to validate and better understand our configuration.

    TEST 1
    The purpose of this test is to load objects into the "Site 1" cache and see how many objects are cached on "Site Cloud". Within the "Shell" launched with parameters to access "Site 1", write and run the command: load test/100. Within the "Shell" launched with parameters to access "Site Cloud", write and run the command: size passive-cache.

    Expected result: if all is OK, the first "Shell" has loaded 100 objects into a cache named "test"; consequently, the "push-replication" functionality has updated "Site Cloud" by sending the 100 objects to the second cluster, where they will have been placed into a corresponding cache, which we named "passive-cache".

    TEST 2
    The purpose of this test is to listen to delete and add events happening on "Site 1" that are replicated to the cache on "Site Cloud".
In the "Shell" launched with parameters to access the "Site Cloud" let’s write and run the command: listen passive-cache/name like '%' or a "cohql" query, with your preferred parameters In the "Shell" launched with parameters to access the "Site 1" let’s write and run the following commands: load test/10 load test2/20 delete test/50 Expected result If all is OK, the "Shell" to Site Cloud let us to listen to all the add and delete events within the cache "cache-passive", whose objects satisfy the query condition "name like '%' " (ie, every objects in the cache; you could change the tests and create different queries).Through the Shell to "Site 1" we launched the commands to add and to delete objects on different caches (test and test2). With the "Shell" running on "Site Cloud" we got the evidence (displayed or printed, or in a log file) that its cache has been filled with events and related objects generated by commands executed from the" Shell "on" Site 1 ", thanks to "push-replication" feature.  Other tests can be performed, such as, for example, the subscription to the events on the "Site 1" too, using different "cohql" queries, changing the cache configuration,  to effectively demonstrate both the potentiality and  the versatility produced by these different configurations, even in the cloud, as in our case. More information on how to configure Coherence "Push Replication" can be found in the Oracle Coherence Incubator project documentation at the following link: http://coherence.oracle.com/display/INC10/Home More information on Oracle Coherence "In Memory Data Grid" can be found at the following link: http://www.oracle.com/technetwork/middleware/coherence/overview/index.html To download and execute the whole sources and configurations of the example explained in the above post,  click here to download them; After download the last available version of the Push-Replication Pattern library implementation from the Oracle Coherence Incubator site, and download also the related and required version of Oracle Coherence. For simplicity the required .jarS to execute the example (that can be found into the Push-Replication-Pattern  download and Coherence Distribution download) are: activemq-core-5.3.1.jar activemq-protobuf-1.0.jar aopalliance-1.0.jar coherence-commandpattern-2.8.4.32329.jar coherence-common-2.2.0.32329.jar coherence-eventdistributionpattern-1.2.0.32329.jar coherence-functorpattern-1.5.4.32329.jar coherence-messagingpattern-2.8.4.32329.jar coherence-processingpattern-1.4.4.32329.jar coherence-pushreplicationpattern-4.0.4.32329.jar coherence-rest.jar coherence.jar commons-logging-1.1.jar commons-logging-api-1.1.jar commons-net-2.0.jar geronimo-j2ee-management_1.0_spec-1.0.jar geronimo-jms_1.1_spec-1.1.1.jar http.jar jackson-all-1.8.1.jar je.jar jersey-core-1.8.jar jersey-json-1.8.jar jersey-server-1.8.jar jl1.0.jar kahadb-5.3.1.jar miglayout-3.6.3.jar org.osgi.core-4.1.0.jar spring-beans-2.5.6.jar spring-context-2.5.6.jar spring-core-2.5.6.jar spring-osgi-core-1.2.1.jar spring-osgi-io-1.2.1.jar At this URL could be found the original article: http://blogs.oracle.com/slc/coherence_into_the_cloud_boost Authors: Nino Guarnacci & Francesco Scarano

    Read the article

  • Find a Faster DNS Server with Namebench

    - by Mysticgeek
    One way to speed up your Internet browsing experience is using a faster DNS server. Today we take a look at Namebench, which will compare your current DNS server against others out there, and help you find a faster one.

    Namebench: Download the file and run the executable (link below). Namebench starts up and will include the current DNS server you have configured on your system. In this example we’re behind a router and using the DNS server from the ISP. Include the global DNS providers and the best available regional DNS server, then start the benchmark. The test starts to run and you’ll see the queries it’s running through. The benchmark takes about 5-10 minutes to complete. After it’s complete you’ll get a report of the results. Based on its findings, it will show you what DNS server is fastest for your system. It also displays different types of graphs so you can get a better feel for the different results. You can export the results to a .csv file as well so you can present the results in Excel.

    Conclusion: This is a free project that is in continuing development, so results might not be perfect, and there may be more features added in the future. If you’re looking for a method to help find a faster DNS server for your system, Namebench is a cool free utility to help you out. If you’re looking for a public DNS server that is customizable and includes filters, you might want to check out our article on helping to protect your kids from questionable content using OpenDNS. You can also check out how to speed up your web browsing with Google Public DNS.

    Links: Download NameBench for Windows, Mac, and Linux from Google Code | Learn More About the Project on the Namebench Wiki Page

    Read the article

  • Running Multiple Queries in Oracle SQL Developer

    - by thatjeffsmith
    There are two methods for running queries in SQL Developer:

    - Run Statement: Shift+Enter, F9, or the Run Statement button
    - Run Script: no grids, just script (SQL*Plus-like) output is fine, thank you very much!

    What’s the Difference? There are some obvious differences between the two features, the most obvious being the format of the output delivered. But there are some other, more subtle differences here, primarily around fetching.

    What is Fetch? After you send your query to Oracle, it has to do 3 things: parse, execute, and fetch. Technically it has to do at least 2 things, and sometimes only 1. But, to get the data back to the user, the fetch must occur. If you have a 10 row query or a 1,000,000 row query, this can mean 1 or many fetches in groups of records. OK, before I went on the fetch tangent, I said there were two ways to run statements in SQL Developer:

    Run Statement: Run Statement brings your query results to a grid with a single fetch. The user sees 50, 100, 500, etc. rows come back, but SQL Developer and the database know that there are more rows waiting to be retrieved. The process on the server that was used to execute the query is still hanging around too. To alleviate this, increase your fetch size to 500. Every query run will come back with the first 500 rows, and rows will continue to be fetched in 500 row increments. You’ll then see most of your ad hoc queries complete with a single fetch. Scroll down, or hit Ctrl+End to force a full fetch and get all your rows back.

    Run Script: Run Script runs the contents of the worksheet (or what’s highlighted) as a ‘script.’ What does that mean exactly? Think of this as being equivalent to running this in SQL*Plus: @my_script.sql; Each statement is executed. Also, ALL rows are fetched. So once it’s finished executing, there are no open cursors left around. The more obvious difference here is that the output comes back formatted as plain old text. Run one or more commands plus SQL*Plus commands like SET and SPOOL.

    The Trick: Run Statement Works With Multiple Statements! It says ‘run statement,’ but if you select more than one with your mouse and hit the button, it will run each and throw the results to one grid for each statement. If you mouse hover over the Query Result panel tab, SQL Developer will tell you the query used to populate that grid. This will work regardless of what you have this preference set to: Database > Worksheet > Show query results in new tabs. Mind the fetch though! Close those cursors by bringing back all the records or closing the grids when you’re done with them.
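    For instance, highlighting both of these statements and using Run Statement produces one result grid per statement (a trivial illustration; any two queries will do):

        -- Highlight both lines, then Run Statement: each gets its own Query Result grid
        SELECT sysdate FROM dual;
        SELECT COUNT(*) AS table_count FROM user_tables;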

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant down side consequences to poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management.

    The Problem of Constituent Data Management
    The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again.

    Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent
    Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and, optionally, Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out of the box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse.

    I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric.

    Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization. Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations, including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Create and Track Your Own License Keys with PowerShell

    - by BuckWoody
    SQL Server used to have a cool little tool that would let you track your licenses. Microsoft didn’t use it to limit your system or anything; it was just a place on the server where you could record that this system used this license key. I miss those days – we don’t track that any more, and I want to make sure I’m up to date on my licensing, so I made my own. Now, there are a LOT of ways you could do this. You could add an extended property in SQL Server, add a table to a tracking database, use a text file, track it somewhere else, whatever. This is just the route I chose; if you want to use some other method, feel free. Just sharing here.

    Warning: Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk. And this is REALLY important. I include a disclaimer at the end of my scripts, but in this case you’re modifying your registry, and that could be EXTREMELY dangerous – only do this on a test server – and I’m just showing you how I did mine. It isn’t an endorsement or anything like that, and this is a “Buck Woody” thing, NOT a Microsoft thing. See this link first, and then you can read on.

    OK, here’s my script:

        # Track your own licenses
        # Write a new key to be the license location
        mkdir HKCU:\SOFTWARE\Buck

        # Write the variables - one sets the type, the other sets the number, and the last one holds the key
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseType" -value "Processor"
        # Notice the DWord value here - this one is a number so it needs that. Keep this on one line!
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseNumber" -propertytype DWord -value 4
        New-ItemProperty HKCU:\SOFTWARE\Buck -name "SQLServerLicenseKey" -value "ABCD1234"

        # Read them all
        $LicenseKey = Get-Item HKCU:\Software\Buck
        $Licenses = Get-ItemProperty $LicenseKey.PSPath
        foreach ($License in $LicenseKey.Property) { $License + "=" + $Licenses.$License }

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • PASS Summit 2012 PreCon - DBA-298-P Automate and Manage SQL Server with PowerShell

    - by AllenMWhite
    On Tuesday I presented an all-day pre-conference session on using PowerShell to automate and manage SQL Server. It was a very full day and we had a lot of great questions. One discussion in Module 6 was around scripting all the objects in a database, and I'd mentioned the script I wrote for the book The Red Gate Guide to SQL Server Team-based Development. When putting together the demos for the attendees to download, I realized I'd placed that script in the Module 6 folder, so you don't need to go...(read more)

    Read the article

  • The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad.

    - by mseika
    Announcing JD Edwards Tools: JD Edwards Latest and Greatest - Live Demo and Webcast of the New Applications User Interface & Tools on Apple iPad, Tuesday December 6, 2011 at 8:00 a.m. Pacific. Click here to register. Oracle’s JD Edwards Development Team just completed an exciting new EnterpriseOne User Interface and a massive number of feature innovations for users and system administrators. We are looking forward to demonstrating the new User Interface and Tools. We have a panel of experts lined up just for you, and we will be sure to answer all your questions: Lyle Ekdahl, Oracle Group Vice President; Gary Grieshaber, Oracle Strategy Senior Director; and Brian Stanz, Oracle Development Senior Director. The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad. Please join us for this important webcast and see why we are so excited about these cool tools that make your work more mobile and efficient. Click here to register for the live webcast on Tuesday, December 6th, 2011 at 8:00 a.m. Pacific time!

    Read the article
