Search Results

Search found 24080 results on 964 pages for 'oracle billing and revenue management'.


  • Oracle's crap

    - by hakim
    What are we supposed to make of Oracle, which still only offers Solaris 10 and OpenSolaris for download? Where is the new OpenSolaris 2010.02 or 2010.03 release that was promised, now that we are in May? Are they making fun of us? Did time stop at Oracle in June 2009 with the rather experimental 2009.06 release? As for Solaris, it is probably better not to go into the details: do Oracle's executives really think we can work with a system that provides drivers for neither most graphics cards nor network cards? Gentlemen, it is 2010; thankfully FreeBSD and the various Linux distributions actually let us work on our machines and get things done, instead of wasting our time with your wretched svcs, svcadm and the rest!

    Read the article

  • Modify Oracle SOA Suite 11g repository DB config

    - by Alfabravo
    Hello there! I don't know whether this question belongs here or on Super User. Anyhow, let's try. I have Oracle SOA Suite installed on a server. The repository database is installed on another server. Both are virtual. Sadly, we have neither snapshots nor a UPS, and the lights went off yesterday... the repository database is now a bunch of unformed bits and we need to recreate it. Is there any way to reconfigure Oracle SOA Suite to use a brand-new repository, or should I painfully reinstall the whole thing? Thanks in advance.
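    A common approach (not described in the post itself) is to recreate the repository schemas with the Repository Creation Utility (RCU) and then repoint the domain's SOA data sources at the new database. Below is a minimal WLST sketch of the repointing step; the data source name SOADataSource, the connection URL and the DEV_SOAINFRA schema user are placeholders that assume a default SOA Suite 11g domain, so adjust them for your installation.

        # WLST (online) -- repoint an existing SOA data source at a recreated repository.
        # Assumes the schemas were already rebuilt with RCU; all names below are examples only.
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        edit()
        startEdit()
        ds = 'SOADataSource'
        cd('/JDBCSystemResources/%s/JDBCResource/%s/JDBCDriverParams/%s' % (ds, ds, ds))
        cmo.setUrl('jdbc:oracle:thin:@newdbhost:1521:orcl')    # new repository database
        cmo.setPasswordEncrypted('newSchemaPassword')          # schema password chosen in RCU
        cd('Properties/%s/Properties/user' % ds)
        cmo.setValue('DEV_SOAINFRA')                           # RCU prefix + SOAINFRA schema
        save()
        activate()

    A real SOA domain has several repository-backed data sources (the MDS and EDN sources, for example); each of them would need the same treatment.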

    Read the article

  • Alternatives to phpSchedule?

    - by paulw1128
    Currently we have a lightly customised version of phpSchedule for booking our development/test machines (including a raft of VMs), which no one uses. It's slow, time-consuming to use, and out of date, and therefore pretty much useless. Getting a new system in place suddenly appeared on the management radar, as there have been complaints that people are blocked by not being able to get any machines to do development testing on (running our product on a desktop machine is not an option). Does anyone have any recommendations for alternative booking systems that are worth investigating?

    Read the article

  • [C] Texture management / pointer question

    - by ndg
    I'm working on a texture management and animation solution for a small side project of mine. Although the project uses Allegro for rendering and input, my question mostly revolves around C and memory management. I wanted to post it here to get thoughts and insight into the approach, as I'm terrible when it comes to pointers. Essentially what I'm trying to do is load all of my texture resources into a central manager (textureManager), which is essentially an array of structs containing ALLEGRO_BITMAP objects. The textures stored within the textureManager are mostly full sprite sheets. From there, I have an anim(ation) struct, which contains animation-specific information (along with a pointer to the corresponding texture within the textureManager). To give you an idea, here's how I set up and play the player's 'walk' animation:

        createAnimation(&player.animations[0], "media/characters/player/walk.png", player.w, player.h);
        playAnimation(&player.animations[0], 10);

    Rendering the animation's current frame is just a case of blitting a specific region of the sprite sheet stored in textureManager. For reference, here's the code for anim.h and anim.c. I'm sure what I'm doing here is probably a terrible approach for a number of reasons. I'd like to hear about them! Am I opening myself up to any pitfalls? Will this work as I'm hoping?

    anim.h:

        #ifndef ANIM_H
        #define ANIM_H

        #define ANIM_MAX_FRAMES 10
        #define MAX_TEXTURES 50

        struct texture {
            bool active;
            ALLEGRO_BITMAP *bmp;
        };

        struct texture textureManager[MAX_TEXTURES];

        typedef struct tAnim {
            ALLEGRO_BITMAP **sprite;
            int w, h;
            int curFrame, numFrames, frameCount;
            float delay;
        } anim;

        void setupTextureManager(void);
        int addTexture(char *filename);
        int createAnimation(anim *a, char *filename, int w, int h);
        void playAnimation(anim *a, float delay);
        void updateAnimation(anim *a);

        #endif

    anim.c:

        void setupTextureManager() {
            int i = 0;
            for(i = 0; i < MAX_TEXTURES; i++) {
                textureManager[i].active = false;
            }
        }

        int addTextureToManager(char *filename) {
            int i = 0;
            for(i = 0; i < MAX_TEXTURES; i++) {
                if(!textureManager[i].active) {
                    textureManager[i].bmp = al_load_bitmap(filename);
                    textureManager[i].active = true;
                    if(!textureManager[i].bmp) {
                        printf("Error loading texture: %s", filename);
                        return -1;
                    }
                    return i;
                }
            }
            return -1;
        }

        int createAnimation(anim *a, char *filename, int w, int h) {
            int textureId = addTextureToManager(filename);
            if(textureId > -1) {
                a->sprite = textureManager[textureId].bmp;
                a->w = w;
                a->h = h;
                a->numFrames = al_get_bitmap_width(a->sprite) / w;
                printf("Animation loaded with %i frames, given resource id: %i\n", a->numFrames, textureId);
            } else {
                printf("Texture manager full\n");
                return 1;
            }
            return 0;
        }

        void playAnimation(anim *a, float delay) {
            a->curFrame = 0;
            a->frameCount = 0;
            a->delay = delay;
        }

        void updateAnimation(anim *a) {
            a->frameCount++;
            if(a->frameCount >= a->delay) {
                a->frameCount = 0;
                a->curFrame++;
                if(a->curFrame >= a->numFrames) {
                    a->curFrame = 0;
                }
            }
        }

    Read the article

  • Need help with a memory management problem in my game model

    - by user309030
    Hi, I'm a beginner-level programmer trying to make a game app for the iPhone and I've encountered a possible issue with the memory management (EXC_BAD_ACCESS) of my program so far. I've searched and read dozens of articles regarding memory management (including Apple's docs) but I still can't figure out what exactly is wrong with my code. So I would really appreciate it if someone could help clear up the mess I made for myself.

        //in the .h file
        @property(nonatomic,retain) NSMutableArray *fencePoleArray;
        @property(nonatomic,retain) NSMutableArray *fencePoleImageArray;
        @property(nonatomic,retain) NSMutableArray *fenceImageArray;

        //in the .m file
        - (void)viewDidLoad {
            [super viewDidLoad];
            self.gameState = gameStatePaused;
            fencePoleArray = [[NSMutableArray alloc] init];
            fencePoleImageArray = [[NSMutableArray alloc] init];
            fenceImageArray = [[NSMutableArray alloc] init];
            mainField = CGRectMake(10, 35, 310, 340);
            ..........
            [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(gameLoop) userInfo:nil repeats:YES];
        }

    So basically, the player touches the screen to set up the fences/poles:

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            if(.......) {
                .......
            } else {
                UITouch *touch = [[event allTouches] anyObject];
                currentTapLoc = [touch locationInView:touch.view];
                NSLog(@"%i, %i", (int)currentTapLoc.x, (int)currentTapLoc.y);
                if(CGRectContainsPoint(mainField, currentTapLoc)) {
                    if([self checkFence]) {
                        onFencePole++;
                        // these 3 set functions add their respective objects into the 3 NSMutableArrays using addObject:
                        [self setFencePole];
                        [self setFenceImage];
                        [self setFencePoleImage];
                        .......
                    }
                } else {
                    .......
                }
            }
        }

    The setFencePole function (setFenceImage and setFencePoleImage are similar to this):

        -(void)setFencePole {
            Fence *fencePole;
            if (!elecFence) {
                fencePole = [[Fence alloc] initFence:onFencePole fenceType:1 fencePos:currentTapLoc];
            } else {
                fencePole = [[Fence alloc] initFence:onFencePole fenceType:2 fencePos:currentTapLoc];
            }
            [fencePoleArray addObject:fencePole];
            [fencePole release];
        }

    And whenever I press a button in the game, endOpenState is called to clear away all the extra images (fences/poles) on the screen and also to remove all existing objects in the 3 NSMutableArrays. The point is to remove all the objects in the NSMutableArrays but keep the arrays themselves so they can be reused later.

        -(void)endOpenState {
            ........
            int xMax = [fencePoleArray count];
            int yMax = [fenceImageArray count];
            for (int x = 0; x < xMax; x++) {
                [[fencePoleImageArray objectAtIndex:x] removeFromSuperview];
            }
            for (int y = 0; y < yMax; y++) {
                [[fenceImageArray objectAtIndex:y] removeFromSuperview];
            }
            [fencePoleArray removeAllObjects];
            [fencePoleImageArray removeAllObjects];
            [fenceImageArray removeAllObjects];
            ........
        }

    The crash happens in the checkFence function.

        -(BOOL)checkFence {
            if (onFencePole == 0) {
                return YES;
            } else if (onFencePole >= 1 && onFencePole < currentMaxFencePole - 1) {
                CGPoint tempPoint1 = currentTapLoc;
                CGPoint tempPoint2 = [[fencePoleArray objectAtIndex:onFencePole-1] returnPos]; // the crash happens at this line
                if ([self checkDistance:tempPoint1 point2:tempPoint2]) {
                    return YES;
                } else {
                    return NO;
                }
            } else if (onFencePole == currentMaxFencePole - 1) {
                ......
            } else {
                return NO;
            }
        }

    So the problem here is that everything works fine until checkFence is called the 2nd time after endOpenState is called.
    So it's like tap_screen - tap_screen - press_button_to_call_endOpenState - tap_screen - tap_screen - crash. What I think is that fencePoleArray got messed up when I called [fencePoleArray removeAllObjects], because it doesn't crash when I comment that line out. It would really be great if someone could explain to me what went wrong. Thanks in advance.

    Read the article

  • Management Reporter Installation – Lessons Learned Part II - Dynamics GP

    - by Ryan McBee
    After feeling pretty good about my deployment skills with Management Reporter for Dynamics GP a few weeks ago, I ran into two additional lessons learned that I wanted to share. First, on another new deployment, I got the error shown below, which says "An error occurred while creating the database. View the installation log for additional information." This problem initially pointed me to KB 2406948, which did not provide a resolution. After several hours of troubleshooting, I found there is an issue if the default database locations in SQL Server are set to the root of a drive. You will want to set the default to something like the following to get it installed: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA. My default database locations for the data and log files were indeed sitting on the H:\ and I:\ drives. To change this property in your SQL Server instance, open SQL Server Management Studio, right-click on the server, choose Properties, and then Database Settings. When I initially got the error, I briefly considered creating the ManagementReporter database by hand, but experience tells me that would have created more headaches down the road. The second problem I ran into with this particular deployment of Management Reporter happened when I started the FRx conversion utility. The error reads "The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine." I had a suspicion that this error was related to the fact that FRx uses outdated technology and I happened to be on a new install of Server 2008 R2. A knowledge base search quickly pointed me to KB 2102486. The resolution for this Management Reporter issue was to install the Microsoft Access Database Engine Redistributable from the site below. http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

    Read the article

  • agile as our first project management methodology [closed]

    - by Hasan Khan
    We are a small web development company that has until now been working on client projects. We employed little to no project management, and that has cost us a lot. We've used only the barest of tools (wireframing, prototyping, etc.) but no formal project management process has been put into place. We've learnt from our mistakes and want to prevent them from happening in the future. Also, we are looking to develop our own products and we understand that putting a proper project management paradigm in place will help. After a lot of research, we've sort of settled on agile for a few reasons:
    • Agile seems to scale well with team size. Our team is small right now and we hope to grow, and agile seems to be a process that we can put in place now and grow with.
    • Agile will help us with customers who just can't seem to make up their minds and keep changing requirements.
    We'd appreciate the community's thoughts on this. Is this a correct way to think? Will agile be a good system to put into place where there has been none till now? Are there any resources that may help us in our position? Pretty much all of the resources that we've found start by comparing agile to x (where x = any management methodology), explaining why it's better than x and how agile can be implemented in place of x. We're looking for resources that can help us out in our particular situation. Thanks for all your help!

    Read the article

  • SSIS Denali as part of “Enterprise Information Management”

    - by jorg
    When watching the SQL PASS session "What's Coming Next in SSIS?" by Steve Swartz, the Group Program Manager for the SSIS team, an interesting question came up: why is SSIS thought of as BI, when we use it so frequently for other sorts of data problems? Steve's answer was that he breaks the world of data work into three parts:
    • Processing of inputs
    • BI
    • Enterprise Information Management: all the work you have to do when you have a lot of data to make it useful and clean and get it to the right place. This covers master data management, data quality work, data integration, and lineage analysis to keep track of where the data came from.
    Next, Steve said Microsoft is developing SSIS as part of a large push in all of these areas in the next release of SQL Server. So SSIS will be, next to a BI tool, part of Enterprise Information Management in the next release of SQL Server. I'm interested in the different ways people use SSIS; I've basically used it for ETL, data migrations and processing inputs. In which ways did you use SSIS?

    Read the article

  • Oracle Traffic Director – download and check out new cool features in 11.1.1.7.0 by Frances Zhao

    - by JuergenKress
    As Oracle's strategic layer-7 software load balancer product, Oracle Traffic Director is fast, reliable, secure, easy to use and scalable; you can deploy it as the reliable entry point for all TCP, HTTP and HTTPS traffic to application servers and web servers in your network. The latest release, Oracle Traffic Director 11.1.1.7.0, is available for Exalogic and Oracle Database Appliance! For download and details please visit the Traffic Director OTN website. In this release, we have introduced some major new functionality and improvements.
    • Web application firewall. Oracle Traffic Director supports web application firewalls. A web application firewall (WAF) is a filter or server plugin that applies a set of rules, called rule sets, to an HTTP request. Using a web application firewall, users can inspect traffic and deny requests to protect back-end applications from CSRF vulnerabilities and common attacks such as cross-site scripting.
    • WebSocket connections. Oracle Traffic Director handles WebSocket connections by default. WebSocket connections are long-lived and allow support for live content, real-time games, video chatting, and so on.
    • Support for LDAP/T3 load balancing. Oracle Traffic Director now supports basic LDAP/T3 load balancing at layer 7, where requests are handled as generic TCP connections for traffic tunneling. It works in full-NAT mode.
    Please download and try it out. For more information, check out the data sheet and the documentation. For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: traffic director, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Whew.... what a week!

    - by [email protected]
    Last week was a busy week for the UPK and Tutor teams at Oracle. It started with the Collaborate Conference in Las Vegas and ended with our first UPK and Tutor Customer Advisory Board (CAB) meeting at Oracle HQ. The Collaborate Conference is a yearly event sponsored by three of the largest Oracle user groups:
    • Oracle Applications User Group (OAUG)
    • Independent Oracle User Group (IOUG)
    • Quest - International User Group
    The user groups are completely user-run organizations with Oracle participation. If you've never attended a conference, it's time to start planning for the 2011 event in Orlando! If that's out of your reach, there are many regional and industry user groups that meet on a regular basis. They offer a great way to get involved, network with other users, and increase your knowledge of the Oracle applications. For a list of groups near you, check out the Oracle User Group Center. I'll add that the biggest meeting of Oracle users is at the Oracle OpenWorld Conference in San Francisco in September, where we will have many UPK & Tutor focused development and customer sessions. More information on Oracle OpenWorld will be forthcoming over the next few months. We hope to see many of you there! The CAB was a first for the UPK and Tutor team. Although we speak with customers regularly, this gave us an opportunity to meet in a more formal setting to discuss industry trends, business issues, and the direction of the products. Members serve a 2-year term and are required to attend 2 meetings per year, one in person and one via phone. We have some tweaking to do to our meeting format (most members wanted it to be longer!), but the overwhelming consensus was that it was a great success. There were many experiences and ideas shared, and the wheels of the UPK and Tutor development teams have been turning ever since. I'm sure you will see some of these discussions result in new product features over time. What a great week!

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:
    • RDA: Remote Diagnostic Agent
    • DFW: Diagnostic Framework
    • Selective Tracing
    • DMS: Dynamic Monitoring Service
    • ODL: Oracle Diagnostic Logging
    • ADR: Automatic Diagnostics Repository
    • ADRCI: Automatic Diagnostics Repository Command Interpreter
    • WLDF: WebLogic Diagnostic Framework
    This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the sections below. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.

    RDA (Remote Diagnostic Agent)
    RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.
    When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration, or diagnosing an issue on your own.
    How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.
    Examples of what it currently collects (for details please see the links in the Resources section):
    • Diagnostic Logs (ODL)
    • Diagnostic Framework Incidents (DFW)
    • SOA MDS Deployment Descriptors
    • SOA Repository Summary Statistics
    • Thread Dumps
    • Complete Domain Configuration
    RDA Resources:
    • Webcast Recording: Using RDA with Oracle SOA Suite 11g
    • Blog Post: Diagnose SOA Suite 11g Issues Using RDA
    • Download RDA
    • How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
    • How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
    • Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

    DFW (Diagnostic Framework)
    DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW.
    • Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
    • Incident: A collection of Diagnostic Dumps associated with a particular problem.
    • Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
    • WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
    • WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
    • Rule: Defines a WLDF Watch or Log Condition with which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
    • Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
    • ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.
    Now let's walk through a simple flow:
    1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1.
    2. The DFW Log Condition for OWS-04086 evaluates to TRUE.
    3. DFW creates a new Incident in the ADR for managed server 1.
    4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.
    When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request.
    How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system-level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the resources below go into more detail.
    A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to do the following:
    • Create an Incident
    • Execute a single Diagnostic Dump
    • Describe a Diagnostic Dump
    • List the available Diagnostic Dumps
    The WLST option offers greater control over what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain system-level Diagnostic Dumps. These same system-level Diagnostic Dumps will be included in any application-scoped Incident as well. Starting in 11.1.1.6, SOA Suite includes its own set of application-scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, such as in the earlier example using the error code OWS-04086.
    • soa.config: MDS configuration files and deployed-composites.xml
    • soa.composite: All artifacts related to the deployed composite
    • soa.wsdl: Summary of endpoints configured for the composite
    • soa.edn: EDN configuration summary if applicable
    • soa.db: Summary DB information for the SOA repository
    • soa.env: Coherence cluster configuration summary
    • soa.composite.trail: Partial audit trail information for the running composite
    The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.
    DFW Resources:
    • Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
    • Diagnostic Framework Documentation
    • DFW WLST Command Reference
    • Documentation for SOA Diagnostic Dumps in 11.1.1.6

    Selective Tracing
    Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster, and for a preset period of time. In an environment where dozens of composites are deployed, this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.
    When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.
    How is it related to the other tools? Selective Tracing output is written to the server diagnostic log.
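    For those who prefer scripting, a minimal WLST sketch of a Selective Tracing session is shown below. The command names (listTracingLoggers, startTracing, listActiveTraces, stopTracing) are the Selective Tracing WLST commands referenced in the resources later in this section, but the parameter names and values here are illustrative assumptions, so verify them against the WLST reference for your patch level before relying on them.

        # WLST (online) -- sketch of a Selective Tracing session scoped to a single user
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        listTracingLoggers(pattern='oracle.soa.*')      # loggers that support tracing
        startTracing(user='weblogic', level='FINE',
                     desc='Debugging one composite')    # trace only this user's requests
        listActiveTraces()                              # note the trace id reported here
        # ... reproduce the issue, then stop the session ...
        stopTracing(all=1)                              # the trace output lands, tagged, in the server diagnostic log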
    This log can be collected by a system-level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available context criteria:
    • Application Name
    • Client Address
    • Client Host
    • Composite Name
    • User Name
    • Web Service Name
    • Web Service Port
    Selective Tracing Resources:
    • Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
    • How to Use Selective Tracing for SOA [ID 1367174.1]
    • Selective Tracing WLST Reference

    DMS (Dynamic Monitoring Service)
    DMS exposes runtime information for monitoring. This information can be monitored in two ways:
    • Through the DMS servlet
    • As exposed MBeans
    The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such, they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out-of-the-box DFW notification type to notify DFW to create an Incident.
    When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.
    How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.
    DMS Resources:
    • How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    • How to Reset a SOA 11g DMS Metric
    • DMS Documentation

    ODL (Oracle Diagnostic Logging)
    ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server, which uses its own log format and file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

        [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

    When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.
    How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA.
    Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events.
    ODL Resources:
    • ODL Documentation

    ADR (Automatic Diagnostics Repository)
    ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: you have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/. In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system-level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.
    When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.
    How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage the contents.

    ADRCI (Automatic Diagnostics Repository Command Interpreter)
    ADRCI is a command-line tool for packaging and managing Incidents.
    When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.
    How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System, or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.
    ADRCI Resources:
    • How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
    • ADRCI Documentation

    WLDF (WebLogic Diagnostic Framework)
    WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g, a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:
    1. There is a need to monitor the performance of your SOA Suite message processing.
    2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
    3. The out-of-the-box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
    4. The Watch is triggered when the threshold is exceeded and fires the Notification.
    5. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc.
    When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time.
    When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.
    How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.
    WLDF Resources:
    • How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    • How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
    • WLDF Documentation
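    As a wrap-up, here is a small WLST sketch that exercises the Diagnostic Framework commands mentioned in the DFW section above (listDumps, describeDump, executeDump, createIncident). The dump names come from the SOA dump list earlier in the post, but the connection details, output path and argument spellings are illustrative assumptions; check describeDump output and the DFW WLST command reference for your version before relying on them.

        # WLST (online) -- manually collecting DFW diagnostics for SOA Suite (sketch)
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        listDumps(appName='soa-infra')                    # application-scoped dumps, e.g. soa.config, soa.wsdl
        describeDump(name='soa.composite')                # shows what arguments the dump accepts
        # run a single dump to a file without creating an Incident
        executeDump(name='soa.config', appName='soa-infra', outputFile='/tmp/soa_config_dump.txt')
        # or create an Incident, which bundles the configured dumps into the server's ADR
        createIncident(appName='soa-infra', description='Manual collection for a SOA Suite issue')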

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT, in this case modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business, from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • OBIEE Version 11.1.1.7.140527 Now Released

    - by Lia Nowodworska - Oracle
    (in via Martin) The Oracle Business Intelligence Enterprise Edition (OBIEE) 11g 11.1.1.7.140527 Bundle Patch is now available to download via My Oracle Support | Patches & Updates. This is provided as a single bundle patch, Patch 18507268, and is comprised of the following:
    • Patch 16913445 - 1 of 8 Oracle BI Installer (BIINST)
    • Patch 18507640 - 2 of 8 Oracle BI Publisher (BIP)
    • Patch 18657616 - 3 of 8 EPM Components Installed from BI Installer 11.1.1.7.0 (BIFNDNEPM)
    • Patch 18507802 - 4 of 8 Oracle BI Server (BIS)
    • Patch 18507778 - 5 of 8 Oracle BI Presentation Services (BIPS)
    • Patch 17300045 - 6 of 8 Oracle Real-Time Decisions (RTD)
    • Patch 16997936 - 7 of 8 Oracle BI ADF Components (BIADFCOMPS)
    • Patch 18507823 - 8 of 8 Oracle BI Platform Client Installers and MapViewer
    NOTE: Also required to be downloaded: Patch 16569379 - Dynamic Monitoring Service patch. This patch set is available for all customers who are using Oracle Business Intelligence Enterprise Edition 11.1.1.7.0, 11.1.1.7.1, 11.1.1.7.131017, 11.1.1.7.140114, 11.1.1.7.140225 and 11.1.1.7.140415. NOTE: It is also available for Exalytics customers who have applied the Exalytics PS3 patch. For more information refer to OBIEE 11g 11.1.1.7.140527 Bundle Patch is Available for OBIEE (Doc ID 1676798.1). The OBIEE Suite bundle patches are cumulative: the content of the previous 11.1.1.7.x bundle patches is included in this latest bundle patch. Be sure to review the Readme documentation for further important patch information. This is available via the My Oracle Support | Patches & Updates screen when downloading. Keep up to date with the latest OBIEE patches and patch set updates by visiting OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1).

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted during the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success – shows that 72% of users hadn't used PFM and, worse, 58% of them were not keen on using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight-based decisions. So what can be done to resuscitate PFM?
    • Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions.
    • Personal financial guidance – Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best value of their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.
    • Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM onto devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that support pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint and not after.
    • Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit their financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness.
    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practices lie in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services. As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment, coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper, entitled Cost Effective Security Compliance with Oracle Database 12c, outlines a powerful story related to a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle's Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, through architecture and planning, implementation, and expert services, which in turn provide faster adoption and return on investment with Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled "The Future of Database Begins Soon". In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc, analytic queries on your organization's business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • What CI server and Configuration Management tools should I use?

    - by Bera
    Hi! What CI server and Configuration Management tools should I use together for truly sustainable development and deployment maintenance? There isn't a de facto sustainable Rails environment, is there? Some assumptions:
    • version control: OK - git (the de facto tool)
    • test framework: OK - whatever (RSpec is my choice)
    • code coverage and analysis: OK - whatever (metric-fu, for example)
    • server stack: OK - (Passenger, for example)
    • issue tracker: Redmine
    • etc. ...
    I want to play with the Integrity and Moonshine projects; for me that's a good place to begin, isn't it? What do you think about this? Thanks, Bruno

    Read the article

  • 3rd Party Document Management Service

    - by Element
    I am developing an ASP.NET application that requires users to upload and view various documents. Rather than reinvent the wheel, I was thinking about using a 3rd party service like Scribd to handle these documents and integrate it into my app via their API; I really like their iPaper viewer too. My concern is that some of these documents will contain sensitive data. Even though Scribd's FAQ says they are equipped to handle sensitive information, I am a little hesitant to trust an unpaid service that lacks an SLA. Has anyone used Scribd successfully for a similar task? Or can anyone recommend a better document management service?

    Read the article

  • SQL Server 2008 Management Studio doesn't recognize new Schema

    - by Lieven Cardoen
    I have created a new schema, called Contexts, in a database. Now when I want to write a query, Management Studio doesn't recognize the tables that belong to the new schema. It says: 'Invalid object name Contexts.ContextLibraries'... Transact-SQL:

        INSERT INTO [Contexts].[ContextLibraries] (ChannelId, [IsSystem])
        VALUES (@ChannelId, 1)

    When I try the same thing on my local database, it does work... Any ideas? I did try to change the default schema for the user from dbo to Contexts, but this doesn't work. I also checked Contexts in the schemas owned by this user, without success. Update: apparently the SQL query does work, but the editor gives an error saying the object is invalid.

    Read the article

  • Event-based project management

    - by andreas
    Hey fellas! Where I work we have a number of contractor programmers. We handle the requirements and the project management of our products. I have been trying various project timing and estimation techniques but can't get the hang of it yet. I have read Joel's evidence-based scheduling and I was wondering if there's anything out there that can help me apply that theory. I'm not looking for complex software, but perhaps something in Excel? Any help will be appreciated. Andreas

    Read the article

  • UIWebView memory management

    - by wolfrevo
    Hello, I have a problem with memory management. I am developing an application that makes heavy use of UIWebView. This app dynamically generates lots of UIWebViews while loading content from my server. Some of these UIWebViews are quite large and have a lot of pictures. If I use Instruments to detect leaks, I do not detect any. However, lots of objects stay allocated and I suspect that has to do with the UIWebViews. When the web views are released because they are no longer needed, it appears that not all memory is released. I mean, after a request to my server the app creates a UITableView and many web views (Instruments says about 8 MB). When the user taps back, all of them are released but memory usage only drops by about 2-3 MB, and after 5-10 minutes of using the app it crashes. Am I missing something? Does anyone know what could be happening? Thank you!

    Read the article

  • Looking for Light Time Management Software Suggestions (for Mac)

    - by tmo256
    I'm looking for a simple project management app that performs task scheduling, along the lines of Merlin or MS Project, but nowhere near as robust. I don't need to deal with other (human) resources, but I work on anything from 3 to 6 different projects at a time. What I'd like is to be able to input deadlines and tasks, and have a schedule suggested to complete them. I do technical work, but I don't think I need anything specifically for software development, especially considering I do plenty of other kinds of things, like graphic design and social media PR. I'd really like this to be dead simple, as simple as possible. Suggestions? OmniPlan, something web-based? Definitely cannot afford anything too extravagant; really looking for something under $200. Thanks for your input!

    Read the article

  • Group SQL tables in SQL Server Management Studio object explorer

    - by MainMa
    I have a database which has approximately sixty tables, and other tables are added constantly. Each table is part of a schema. Such a quantity of tables makes it difficult to use Microsoft SQL Server Management Studio 2008. For example, I must scroll up in Object Explorer to access database-related functions, or scroll down each time I need to access Views or Security features. Is it possible to group several tables so that they can be expanded or collapsed in Object Explorer? Maybe a folder could be displayed for each schema, letting me collapse the folders I don't need to use?

    Read the article

  • Objective-C Memory Management: When do I [release]?

    - by Sahat
    I am still new to this memory management stuff (the garbage collector took care of everything in Java), but as far as I understand, if you allocate memory for an object then you have to release that memory back to the computer as soon as you are finished with the object:

        myObject = [Object alloc];

    and

        [myObject release];

    Right now I just have 3 parts in my Objective-C .m file: @interface, @implementation and main. I released my object at the end of the program next to these guys:

        [pool drain];
        return 0;

    But what if this program were to be a lot more complicated? Would it be okay to release myObject at the end of the program? I guess a better question would be: when do I release an object's allocated memory? How do I know where to place [myObject release];?

    Read the article
