Search Results

Search found 26256 results on 1051 pages for 'information science'.


  • New User of UPK?

    - by [email protected]
    The UPK Developer comes with a variety of manuals to help support your organization in the development and deployment of content. The Developer manuals can be found in the \Documentation\Language Code\Reference folder where the Developer has been installed. As of 3.5.x the documentation can also be accessed via the Start menu: Start\Programs\User Productivity Kit\Documentation\Reference.
    Content Deployment.pdf: This manual provides information on how to deploy content to your audience.
    Content Development.pdf: This manual provides information on how to create, maintain, and publish content using the Developer. The content of this manual also appears in the Developer help system.
    Content Player.pdf: This manual provides instructions on how to view content using the Player. The content of this manual also appears in the Player help system.
    In-Application Support Guide.pdf: This manual provides information on how to implement context-sensitive, in-application support for enterprise applications using Player content.
    Installation & Administration.pdf: This manual provides instructions for installing the Developer in a single-user or multi-user environment, as well as information on how to add and manage users and content in a multi-user installation. An Administration help system also appears in the Developer for authors configured as administrators. This manual also provides instructions for installing and configuring Usage Tracking.
    Upgrade.pdf: This manual provides information on how to upgrade from a previous version to the current version.
    Usage Tracking Administration & Reporting.pdf: This manual provides instructions on how to manage users and usage tracking reports.
    - Kathryn Lustenberger, Oracle UPK Outbound Product Management

    Read the article

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best-practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover key information such as:
    Pros and cons of the different approaches to installing and configuring SharePoint 2010
    Configuration best practices for SharePoint 2010 farms
    Services architecture: dependencies, licensing, and topologies
    Information architecture guidance for sizing, multilingual support, multi-tenancy, and more
    Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
    And most of all, avoiding common pitfalls for installation and deployment
    What is better than all of that? The even more exciting thing is that the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are, you should definitely check out their blogs and their contributions to the SharePoint community. To get more information and register click here: REGISTER. Other great links to information in this post: ShareSquared, Inc; Gary Lapointe's Blog; Paul Stork's Blog; SharePoint Composer. Check it out and get up to speed from some of the best in the industry. Michael

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and which areas of NuoDB are important to learn. As we have already installed NuoDB, we will quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works. In the next blog post we will learn how the Explorer section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen: On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen. On this screen you will find very important information about Domain and Database Settings. It is our habit not to read what is written on the screen and to keep clicking Continue without reading. While we are familiar with most wizards, we can easily miss a very important message on the screen. Please note the Domain Settings and Database Settings from the following screen before clicking on Create Database.
    Domain Settings: user = quickstart, password = quickstart
    Database Settings: user = dba, password = goalie, database = test, schema = HOCKEY
    Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. When the sample database has been created successfully it will show the following screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password. Alternatively, you can enter "quickstart" twice for username and password; that works as well. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:
    Create a view of the entire domain
    Add and remove databases
    Start and stop NuoDB Transaction Engines and Storage Managers
    Monitor transactions across all the NuoDB databases
    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see the various processes, CPU usage, and other information. In the Processes section you can see that there are two different types of processes. The first process (with the floppy drive icon) represents a running Storage Manager process, and the second represents a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out the various administrative activities in the database. I am personally impressed with their dashboards for various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • mappoint 2013 randomly crashes on import

    - by ErocM
    We are sending routes to MapPoint 2013 from our application using an Access database. It also seems to happen with MapPoint 2010 and 2011. It doesn't happen on all of our clients, and it happens randomly on those where it does happen. This is the message:
    Problem signature:
    Problem Event Name: BEX
    Application Name: MapPoint.exe
    Application Version: 19.0.18.1100
    Application Timestamp: 4fd664bb
    Fault Module Name: StackHash_94b0
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 00000000
    Exception Offset: 7f82c94f
    Exception Code: c0000005
    Exception Data: 00000008
    OS Version: 6.0.6002.2.2.0.18.10
    Locale ID: 1033
    Additional Information 1: 94b0
    Additional Information 2: 30950b6006304277980cdff17dfbd104
    Additional Information 3: 098a
    Additional Information 4: 31c80150ac0b74b2dcb7884aa8fa1dac
    Does anyone know where I'd find out more information on this or how to resolve it? If this is not the correct exchange, please point me to the right one and I'll delete and repost it. Thanks!

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: "In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian [...]. Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit." What database software, design, or model would best fit this? Are PHP and MySQL good technologies to handle it?
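    Most relational thesaurus designs boil down to a term table plus a typed, self-referencing relation table. The sketch below shows that shape against MySQL (since the question mentions it), driven from C# via Connector/NET; every table name, column name, relation-type label, and connection-string value is an illustrative assumption rather than a fixed recommendation.

    ```csharp
    // A minimal sketch: a thesaurus as one term table plus one typed,
    // self-referencing relation table (MySQL via Connector/NET).
    // All names and the connection string are illustrative assumptions.
    using System;
    using MySql.Data.MySqlClient;

    class ThesaurusSketch
    {
        const string ConnectionString =
            "Server=localhost;Database=thesaurus;Uid=app;Pwd=secret;";

        static void Main()
        {
            const string ddlTerm = @"
                CREATE TABLE IF NOT EXISTS term (
                    id         INT AUTO_INCREMENT PRIMARY KEY,
                    label      VARCHAR(255) NOT NULL UNIQUE,
                    scope_note TEXT NULL
                )";

            // Relation types follow the usual thesaurus vocabulary:
            // BT = broader term, NT = narrower term, RT = related term, UF = use for.
            const string ddlRelation = @"
                CREATE TABLE IF NOT EXISTS term_relation (
                    from_term INT NOT NULL,
                    to_term   INT NOT NULL,
                    rel_type  ENUM('BT','NT','RT','UF') NOT NULL,
                    PRIMARY KEY (from_term, to_term, rel_type),
                    FOREIGN KEY (from_term) REFERENCES term(id),
                    FOREIGN KEY (to_term)   REFERENCES term(id)
                )";

            // Every term directly linked to a given label, with the link type.
            const string query = @"
                SELECT r.rel_type, t2.label
                FROM term t1
                JOIN term_relation r ON r.from_term = t1.id
                JOIN term t2         ON t2.id = r.to_term
                WHERE t1.label = @label";

            using (var conn = new MySqlConnection(ConnectionString))
            {
                conn.Open();

                using (var cmd = new MySqlCommand(ddlTerm, conn)) cmd.ExecuteNonQuery();
                using (var cmd = new MySqlCommand(ddlRelation, conn)) cmd.ExecuteNonQuery();

                using (var cmd = new MySqlCommand(query, conn))
                {
                    cmd.Parameters.AddWithValue("@label", "information science");
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine($"{reader.GetString(0)}: {reader.GetString(1)}");
                }
            }
        }
    }
    ```

    The same two-table shape works in any relational database; the key design choice is storing broader/narrower/related links as typed edges between terms rather than embedding lists of related words in each term row.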

    Read the article

  • No search data in Google Analytics or Webmasters

    - by cjk
    I have a domain that has been registered in Google Webmasters and using Google Analytics for over 4 months. I get lots of analytics data, but am getting no information on Google searches in Webmasters, or Queries in Search Engine Optimisation in Analytics, even though I am getting keywords for traffic coming to my site from search engines. I have a test sub-domain with the same setup (except not HTTPS) that is getting some of this information through, even with much less data and visits. What could be wrong to stop me getting this information?

    Read the article

  • July, the 31 Days of SQL Server DMO’s - Intro

    - by Tamarick Hill
    DMOs burst onto the SQL Server scene in 2005, and when they did they unlocked a wealth of information. I've become a major fan of DMOs as they tend to simplify my troubleshooting as well as provide me with valuable information about what is going on within the SQL Server engine. I would recommend that those of you who are not familiar with DMOs take the time to really learn more about them. For those of you who may not be familiar with DMOs, for the month of July I will be writing about one DMO per day. Don't get me wrong, I'm no DMO expert or anything like that, but I've worked with them enough to feel that I can give you some good information about DMOs to help you get started with using them. During these blog sessions, I will not be providing you with any complicated queries to solve all of your SQL Server problems that you may or may not have. I will simply be introducing you to various DMOs and illustrating what type of information they provide. After you learn more about these individually, you will be able to join whatever DMOs you need to pull back the information you are seeking. I hope that you all benefit in some form or fashion from my next 31 DMO postings!!! Enjoy!
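    To give a feel for the kind of information a single DMO exposes, here is a minimal sketch that reads a few columns from sys.dm_exec_requests. It is written as a small C# console program only so it is runnable end to end; the connection string is an assumption, the interesting part is the T-SQL itself, and the query needs the VIEW SERVER STATE permission.

    ```csharp
    // A minimal sketch of reading one dynamic management object, sys.dm_exec_requests,
    // to see the kind of information DMOs expose. The connection string is an
    // illustrative assumption; any SQL Server instance you can connect to will do.
    using System;
    using System.Data.SqlClient;

    class DmoPeek
    {
        static void Main()
        {
            const string connectionString =
                "Server=localhost;Database=master;Integrated Security=true;";

            const string sql = @"
                SELECT session_id, status, command, wait_type, cpu_time, total_elapsed_time
                FROM sys.dm_exec_requests
                WHERE session_id <> @@SPID;   -- ignore this connection's own request";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(
                            "spid {0}: {1} {2}, wait={3}, cpu={4} ms, elapsed={5} ms",
                            reader["session_id"], reader["status"], reader["command"],
                            reader["wait_type"] is DBNull ? "none" : reader["wait_type"],
                            reader["cpu_time"], reader["total_elapsed_time"]);
                    }
                }
            }
        }
    }
    ```

    Run it while a workload is active and each row shows one executing request, together with its current wait type and accumulated CPU and elapsed time.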

    Read the article

  • Blog Posts from Prepping for Last Year's Summit

    - by RickHeiges
    Last year, I had a series of blog posts that matched up with a webcast I did targeting first-timers to the PASS Summit 2011. Here is a link to the final blog post, which is a summary of those posts and links to the main points in the series. A good deal of the information in those posts is still relevant. I am in the process of updating the webcast and will be presenting the information again this year on Oct 25, 2012 at 11am ET. There is a lot of great information out there for first timers that...(read more)

    Read the article

  • Visual Studio 2013 - Express for Web vs Professional [duplicate]

    - by TimS
    This question already has an answer here: Visual Studio 2012 - Express vs Professional. What are the main differences and limitations between Visual Studio 2013 Express and Visual Studio 2013 Professional? I'm specifically interested in information related to the Web edition. I need to be able to develop ASP.NET applications, Windows services, and console applications - not desktop or phone apps. Microsoft seems to hide this information well, and I can only seem to find information relating to 2012 products and earlier.

    Read the article

  • how to send trackback and pingback using c# script

    - by anirudha
    This is a very interesting topic because if you search for it, you find a lot of useless material, even if you use c# as a prefix. 1. How does a trackback work? Every blog that supports trackbacks includes, in each post, markup like <rdf:rdf></rdf:rdf>. Inside this tag, the "trackback:ping" attribute holds the URL where we can send the trackback. 2. You need some information about your blog to send the trackback: 1. the URL where you want to send the trackback; 2. your post title (which may be the page title); 3. your post URL (which may be the page URL); 4. an excerpt: the information you want to send; 5. your blog name (or site name if you use a site rather than a blog). Assemble this information like a querystring, just as we use in ASP.NET, for example: title=pingpost&url=pingurl&excerpt=it's me&blog=myblog. If you are unsure, you can HTML-encode the values you use as parameters. You need to be sure that your post contains the URL of the post you want to send the trackback to. Then make a request to the ping URL and set the following properties: request.Method = "POST" (because only POST is supported); request.ContentLength = the length of the parameter string we built for sending the ping; request.ContentType = "application/x-www-form-urlencoded" (required). When you send the request, the server responds with something about your request; check the status code to verify whether it worked: if (response.StatusCode < HttpStatusCode.OK || response.StatusCode >= HttpStatusCode.Ambiguous) throw new Exception(response.StatusCode.ToString()); Because the response is in XML format, you can parse it to see whether it contains an Error tag. I am posting the method here rather than complete code: other blogs have covered this topic in the last week, but they posted code rather than the method, and their code did not work. To be more declarative, I am posting the definition instead of the code; a minimal sketch based on it follows below.
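    A minimal C# sketch of the steps above might look like the following. The ping URL, post details, and blog name are placeholders, and the parameter names (title, url, excerpt, blog_name) follow the common TrackBack convention; adjust them to whatever the target blog expects.

    ```csharp
    // A minimal sketch of sending a TrackBack ping, following the steps described above.
    // The ping URL, post details, and blog name are placeholders; the parameter names
    // (title, url, excerpt, blog_name) follow the common TrackBack convention.
    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class TrackbackSender
    {
        static void Main()
        {
            // Taken from the trackback:ping attribute in the target post's <rdf:rdf> block.
            const string pingUrl = "http://example.com/trackback/123";

            // Build the form-urlencoded body from your own post's details.
            string body =
                "title="      + Uri.EscapeDataString("My post title") +
                "&url="       + Uri.EscapeDataString("http://myblog.example.com/my-post") +
                "&excerpt="   + Uri.EscapeDataString("A short summary of my post.") +
                "&blog_name=" + Uri.EscapeDataString("My Blog");
            byte[] bytes = Encoding.UTF8.GetBytes(body);

            var request = (HttpWebRequest)WebRequest.Create(pingUrl);
            request.Method = "POST";                                  // trackbacks accept POST only
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = bytes.Length;

            using (Stream stream = request.GetRequestStream())
                stream.Write(bytes, 0, bytes.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                if (response.StatusCode < HttpStatusCode.OK ||
                    response.StatusCode >= HttpStatusCode.Ambiguous)
                    throw new Exception("Trackback failed with status " + response.StatusCode);

                // The server answers with a small XML document; <error>0</error>
                // means the ping was accepted, anything else carries an error message.
                string responseXml = reader.ReadToEnd();
                Console.WriteLine(responseXml);
            }
        }
    }
    ```

    On .NET 4.5 and later the same ping can be sent with HttpClient; the body and the error handling stay the same.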

    Read the article

  • What is a Linux device name for RAID of sas drives?

    - by flashnik
    I have a RAID1 array on a Promise FastTrak TX2650 consisting of 2 SAS drives. What is the Linux device name for them, like sda is for the first SATA drive? I have a Windows server so I can't look it up directly, but I need this information for smartctl usage. UPDATE: I found how to access the RAID: smartctl -d scsi sdb (because I also have a SATA drive). But in this case I just get information about the RAID controller, though I want to get information about the drives themselves. Is that possible? Promise's control panel provides information only about their health status (boolean) and I want more. Mostly I now need information about temperature.

    Read the article

  • Discover the solution for exploring structured and unstructured data

    - by David lefranc
    Explore and discover information... We are offering a discovery workshop that lets you explore any type of data with the Oracle Endeca solution. When: December 7, 2012, from 9:30 am to 12:30 pm. Where: Oracle, 15 Boulevard Charles de gaulle, 92715 Colombes. To register: [email protected]. Designed for business users, this half-day workshop will let you discover Oracle Endeca Information Discovery in order to:
    Understand and explore all kinds of information coming from different sources (Big Data, social networks, forums, surveys, blogs, ...)
    Discover how and in what way OEID complements classic BI solutions
    See, through simple and fast navigation, how easy it is to find answers to unforeseen questions using OEID without prior training
    Use search and guided navigation to see how structured and unstructured information can be quickly brought together to reveal hidden value
    Explore all your data in any format and from any source, including social media, documents, files, ...
    Discover and explore your data without a repository, allowing users to be autonomous and analyze their own data quickly
    Develop a strategy to increase the value of enterprise data while reducing total cost of ownership
    Discover the incredible performance of Endeca on Oracle Exalytics, the in-memory machine
    Agenda: After an introduction to the Oracle Endeca Information Discovery solution, followed by a hands-on session, you will see how easy it is to:
    Use guided navigation and the search engine to explore structured and unstructured data
    Quickly integrate new data sources such as social media
    Build new user interfaces while discovering information
    Respond quickly to the changing needs of businesses and data environments

    Read the article

  • SSW Scrum Rule: Do you know to use clear task descriptions?

    - by Martin Hinshelwood
    When you create tasks in Scrum, you are doing this within a time box and you tend to add only the information you need to remember what the task is. And the entire Team was at the meeting and was involved in the discussions around the task, so why do you need more? Once you have accepted a task you should then add as much information as possible so that anyone can pick up that task; what if your numbers come up? Would you be in to work the next day? Figure: What if your numbers come up in the lottery? What if the Team runs a syndicate and all your numbers come up? The point is that anything can happen and you need to protect the integrity of the project, the company, and the Customer. Add as much information to the task as you think is necessary for anyone to work on the task. If you need to add rich text and images you can do this by attaching an email to the task. Figure: Bad example - there is not enough information for a non-team member to complete this task. Figure: Julie provided a lot more information and another team should be able to pick this up. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS. Technorati Tags: Scrum, SSW Rules, TFS 2010

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around, and I have found conflicting information on the web. I would have thought that each gmond process would either send info to gmetad at a regular interval, or gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand this is incorrect. It appears that you configure all gmond processes to send their info to a central gmond process, and gmetad will request info from that central gmond. Is that correct? In my case I have 6 servers sending their information to 1 central server. If I set gmetad to get its information from the central server then I get information from all 6 servers, all good. The weird thing is that if I point gmetad to one of the 6 servers then I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?

    Read the article

  • WebCenter Customer Spotlight: spectrumK Holding GmbH

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    spectrumK Holding GmbH was founded in 2007 by various German health insurance funds and national insurance associations and is a service provider for the healthcare market, covering patient care management, financial management, and information management, as well as payment services and legal counseling. spectrumK Holding GmbH's business objective was to implement innovative new Web-based services and solution systems for health insurance funds by integrating a multitude of isolated solutions from different organizations. Using Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio, the customer created a multiple-portal environment and deployed the first three applications, for patient receipts, a medication navigator, and disability information. spectrumK Holding GmbH accelerated time-to-market for new features by reducing development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.
    Company Overview
    spectrumK Holding GmbH was founded in 2007 by various company health insurance funds and national insurance associations. A service provider for the healthcare market, spectrumK consists of one holding company and four operative subsidiaries. Its broad product portfolio for compulsory health funds covers patient care management, financial management, and information management, as well as payment services and legal counseling.
    Business Challenges
    spectrumK Holding GmbH's business objectives were to implement innovative new Web-based services and solution systems for the health insurance funds by integrating a multitude of isolated solutions from different organizations. Specifically, spectrumK was looking to:
    Establish a portal-based environment to provide health coverage information services to the insured, with the option to integrate a multitude of isolated solutions from different organizations
    Implement innovative new Web-based spectrumK service products and solution systems for health insurance funds
    Lower costs while improving services for the health fund's clients
    Find an infrastructure that supports the small development team in efficient implementation and operation of the solution
    Reuse standard modules while enabling easy, inexpensive adaptations to customer-specific corporate requirements
    Solution Deployed
    spectrumK Holding GmbH created a multiple-portal environment, called "KundenCenter+", which is based on the integration of Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio. It initiated and launched the first three of the company's KundenCenter+ Oracle-based modules, for patient receipts, a medication navigator, and disability information, with numerous successful deployments and individual customer environment adaptations.
    Business Results
    spectrumK Holding GmbH accelerated time-to-market for new features by reducing development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.
    Additional Information
    spectrumK Holding GmbH Snapshot; Oracle WebCenter Suite; Oracle Customer Support; Oracle Consulting; Oracle WebCenter Content

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: constraining the amount of time that the Console stalls waiting for communication, and reducing and streamlining the amount of data required for an update. A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for its environment and network latency. In the next release of WebLogic Server, we've added an additional Console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain Console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support it. For example, if a user requests a Servers table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers, and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been rewritten to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is:
    1. For each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server
    2. Find a WorkManager
    3. For each server, create a Work instance to obtain runtime mbean attributes for the server, and schedule the Work instance in the WorkManager
    4. Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout
    5. For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data
    6. Display the collected data in the table
    In addition to these changes to constrain how long the Console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
    We decided supporting that edge case did not warrant the performance impact for all, and instead we only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:
    Using Work Management to obtain health information concurrently
    Applying the operation timeout configuration to constrain how long we will wait
    Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
    Eliminating health collection when this portlet is minimized
    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:
    The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
    The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
    The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)
    then we've got news for you: KM Article "OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1)" takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard. This diagnostic approach:
    will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository;
    should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle DB;
    will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)

    Read the article

  • Accenture Foundation Platform for Oracle

    - by Michelle Kimihira
    Accenture Foundation Platform for Oracle (AFPO) helps clients accelerate deployments on Oracle Fusion Middleware. Version 5 is the latest release and adds exciting new capabilities, including integration with Oracle Application Development Framework (ADF) Mobile and Oracle WebCenter. For more information, visit Accenture's website. Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • WoW lua: Getting quest attributes before the QUEST_DETAIL event

    - by Matt DiTrolio
    I'd like to determine the attributes of a quest (i.e., information provided by functions such as QuestIsDaily and IsQuestCompletable) before the player clicks on the quest detail. I'm trying to write an add-on that handles accepting and completing daily quests with a single click on the NPC, but I'm running into a problem whereby I can't find out anything about a given quest unless the quest text is currently being displayed, defeating the purpose of the add-on. Other add-ons of this nature seem to get around this limitation by hard-coding information about quests, an approach I don't much like as it requires constant maintenance. It seems to me that this information must be available somehow, as the game itself can properly figure out which icon to display over the head of the NPC without player interaction. The only question is, are add-on authors allowed access to this information? If so, how? EDIT: What I originally left out was that the situations I'm trying to address are when: (1) an NPC has multiple quests, or (2) the quest detail is not the first thing that shows up upon right-click. Otherwise, the situation is much simpler, as I have the information I need provided immediately.

    Read the article

  • Server Side Developer Prerequisites

    - by Jking
    I am new to server-side development and am currently learning node.js. What sort of networking information should I be familiar with to allow for a smooth learning curve with server-side development? Could anyone provide resources pertaining to the information required to get into server programming? To give you a better idea of my standpoint:
    I do not know how a server interacts with a database [Q: How does a NoSQL database, or a database in general, communicate with a server?]
    I am unsure of how a web stack works [Q: I have heard of LAMP but do not know how Apache, MySQL, and PHP interact. Hopefully this applies to other stacks as well. How do the components of a stack work together? Also, is a MEAN stack an alternative, or is it completely irrelevant to this?]
    I have trivial knowledge of internet protocols [however extremely inefficient] [Q: What resources are beneficial when learning about networking, and how much/what knowledge should I acquire to program on the server side?]
    I am unsure of what I am unsure of concerning the networking information necessary to start development
    Information on how the client-server model works would be greatly appreciated.

    Read the article

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week. Don't let your breadcrumbs go to waste like Hansel and Gretel did! Keep track of the code path, logs, and errors for each request as they flow through the system. Earlier this week an OTN forum post in the WLS – General category by Oracle Ace John Stegeman asked how to retrieve the Execution Context ID so that it could be used on an error page that a user could provide to a help desk, or use to check with application administrators so they could look up what went wrong. What is the Execution Context ID (ECID)? Fusion Middleware injects an ECID as a request enters the system, and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It's a way to uniquely identify a request across tiers. According to the documentation it's: The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value. The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format. The novel idea that I see John had was to extend this concept beyond the diagnostic information that is captured by Fusion Middleware. Why not also use this identifier in your logs and errors so you can correlate even more information together! Your logging might already identify the user, so why not identify the request too, so you can filter down even more. All you need to do inside of WebLogic Server to get hold of this information is invoke DiagnosticContextHelper: weblogic.diagnostics.context.DiagnosticContextHelper.getContextId() This class has other helpful methods to see other values tracked by the diagnostics framework too. This way I can see even more detail and get information across tiers. In performance profiling, this can be very handy to track down where time is being spent in code. I've blogged and made videos about this before. JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request, without installing any special agents and with the out-of-the-box JRockit and WLS settings! You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc. You can get timings around everything and even see the SQL that is used. http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are also able to visualize this data, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation. I wrote a little sample application that verified for myself that the ECID did actually cross JVM boundaries. I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM. Each call returned the same ECID. You need to turn on WLDF Instrumentation for this to work; otherwise the framework returns null. I'm glad John put me on to this API, as I have some interesting ideas on how to correlate some information together.

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    WARNING Retrying Bulk Insert for file: sqlldr due to Communication Error: 256. I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error. Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. To learn more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly quoted content from them within the specific engine log that had the issue. The sqlnet.log file may also have some information about it, and there may be a log/alert on the database server side regarding what happened; look at the alert.log. In general it could be that the database server or network was overloaded at the time and the connection was rejected/failed/aborted, either due to a specific setting on concurrent connections/sessions or inadvertently due to a glitch in the network/OS/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above. You can also track this using either SQL*Trace or java.util.logging.
    Globally enable logging by setting the oracle.jdbc.Trace system property: java -Doracle.jdbc.Trace=true
    Client Side Tracing: Your SQLNET.ORA file should contain the following lines to produce a client-side trace file:
    trace_level_client = 10
    trace_unique_client = on
    trace_file_client = sqlnet.trc
    trace_directory_client = <path_to_trace_dir>
    Server Side Tracing: To enable server-side tracing, use the following parameters:
    trace_level_server = 10
    trace_file_server = server.trc
    trace_directory_server = <path_to_trace_dir>
    Tracing Levels: The following values can be used for the TRACE_LEVEL* parameters:
    16 or SUPPORT - WorldWide Customer Support trace information
    10 or ADMIN - Administration trace information
    4 or USER - User trace information
    0 or OFF - no tracing (the default)
    Additional information is readily available via the web.

    Read the article

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Securing the Cloud for Public Sector. Click here to register for the live webcast. Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?" Join us December 12th for a webcast, part of the "Secure Government Training Series", to get answers to your pressing cloud security questions and learn how to best secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by:
    providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it
    mitigating threats across your databases and applications
    protecting applications and information - no matter where they are - at rest, in use, and in transit
    For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1. LIVE Webcast: Securing the Cloud for Public Sector. Date: Wednesday, December 12, 2012. Time: 2:00 p.m. ET. Visit the Secure Government Resource Center: click here for information on enterprise security solutions that help government safeguard information, resources, and networks. ACCESS NOW Copyright © 2012, Oracle. All rights reserved. Contact Us | Legal Notices | Privacy Statement

    Read the article
