Search Results

Search found 46582 results on 1864 pages for 'ibm system x'.


  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing which tool is appropriate for a particular situation and how they are related. This blog post introduces the various tools and attempts to clarify what each is for and how they relate to one another. Let's first list the tools we'll be addressing:

    - RDA: Remote Diagnostic Agent
    - DFW: Diagnostic Framework
    - Selective Tracing
    - DMS: Dynamic Monitoring Service
    - ODL: Oracle Diagnostic Logging
    - ADR: Automatic Diagnostics Repository
    - ADRCI: Automatic Diagnostics Repository Command Interpreter
    - WLDF: WebLogic Diagnostic Framework

    This overview is not meant to be a comprehensive guide to using all of these tools; however, extensive reference materials are included that provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, 'Enterprise Manager' or 'EM' refers to Enterprise Manager Fusion Middleware Control.

    RDA (Remote Diagnostic Agent)

    RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.

    When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, or when capturing environment snapshots to baseline your configuration or to diagnose an issue on your own.

    How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.
    Examples of what it currently collects (for details please see the links in the Resources section):

    - Diagnostic Logs (ODL)
    - Diagnostic Framework Incidents (DFW)
    - SOA MDS Deployment Descriptors
    - SOA Repository Summary Statistics
    - Thread Dumps
    - Complete Domain Configuration

    RDA Resources:

    - Webcast Recording: Using RDA with Oracle SOA Suite 11g
    - Blog Post: Diagnose SOA Suite 11g Issues Using RDA
    - Download RDA
    - How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
    - How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
    - Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

    DFW (Diagnostic Framework)

    DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW:

    - Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
    - Incident: A collection of Diagnostic Dumps associated with a particular problem.
    - Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created.
    - WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
    - WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution.
    - Rule: Defines a WLDF Watch or Log Condition with which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
    - Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule.
    - ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

    Now let's walk through a simple flow:

    1. Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1.
    2. The DFW Log Condition for OWS-04086 evaluates to TRUE.
    3. DFW creates a new Incident in the ADR for managed server 1.
    4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

    When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to collect the information manually. In either case it can be readily uploaded to Oracle Support through the Service Request.

    How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated; the resources below go into more detail.
    A simpler approach to leveraging at least the Diagnostic Dumps is through WLST, where there are commands to do the following:

    - Create an Incident
    - Execute a single Diagnostic Dump
    - Describe a Diagnostic Dump
    - List the available Diagnostic Dumps

    The WLST option offers greater control over what is generated and when. It can be a great help when collecting information for Support (a short session sketch follows at the end of this section). There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA.

    There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions persist. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well.

    Starting in 11.1.1.6, SOA Suite includes its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, as in the earlier example using the error code OWS-04086:

    - soa.config: MDS configuration files and deployed-composites.xml
    - soa.composite: All artifacts related to the deployed composite
    - soa.wsdl: Summary of endpoints configured for the composite
    - soa.edn: EDN configuration summary if applicable
    - soa.db: Summary DB information for the SOA repository
    - soa.env: Coherence cluster configuration summary
    - soa.composite.trail: Partial audit trail information for the running composite

    The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.

    DFW Resources:

    - Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
    - Diagnostic Framework Documentation
    - DFW WLST Command Reference
    - Documentation for SOA Diagnostic Dumps in 11.1.1.6
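    As a concrete illustration, here is a minimal WLST session using the Diagnostic Framework commands listed above. This is a sketch only: the host, port, credentials, dump name and output path are placeholders, and the exact argument lists are in the DFW WLST Command Reference linked above.

      $ $MW_HOME/oracle_common/common/bin/wlst.sh

      connect('weblogic', 'welcome1', 't3://adminhost:7001')

      # list the Diagnostic Dumps available in this domain
      listDumps()

      # show what a particular dump collects and which arguments it takes
      describeDump(name='soa.env')

      # run a single dump and write its output to a file
      executeDump(name='soa.env', outputFile='/tmp/soa_env.dmp')

      # create an Incident manually, associated with the error code from the earlier example
      createIncident(messageId='OWS-04086', description='Manually created for SR')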
    Selective Tracing

    Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.

    When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

    How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

    Available Context Criteria:

    - Application Name
    - Client Address
    - Client Host
    - Composite Name
    - User Name
    - Web Service Name
    - Web Service Port

    Selective Tracing Resources:

    - Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
    - How to Use Selective Tracing for SOA [ID 1367174.1]
    - Selective Tracing WLST Reference
    DMS (Dynamic Monitoring Service)

    DMS exposes runtime information for monitoring. This information can be monitored in two ways:

    - Through the DMS servlet
    - As exposed MBeans

    The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident.

    When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

    How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

    DMS Resources:

    - How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    - How to Reset a SOA 11g DMS Metric
    - DMS Documentation

    ODL (Oracle Diagnostic Logging)

    ODL is the primary facility most Fusion Middleware applications use to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server, which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

    [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

    When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

    How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW Log Conditions are triggered by ODL log events.

    ODL Resources:

    - ODL Documentation

    ADR (Automatic Diagnostics Repository)

    ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill down a few levels: diag/ofm/myDomain/. In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

    When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.

    How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage their contents.

    ADRCI (Automatic Diagnostics Repository Command Interpreter)

    ADRCI is a command line tool for packaging and managing Incidents.

    When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

    How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

    ADRCI Resources:

    - How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
    - ADRCI Documentation
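    For orientation, an ADRCI packaging session might look roughly like the following. This is an untested sketch: the home path and incident id are placeholders, and the full command set is covered in the ADRCI Documentation above.

      $ adrci
      adrci> show homes                                  # list the ADR 'Home' locations
      adrci> set homepath diag/ofm/myDomain/soa-infra
      adrci> show incident                               # find the incident of interest
      adrci> ips pack incident 1234 in /tmp              # package it for upload to Support
      adrci> purge -age 43200                            # optionally purge incidents older than 30 days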
    WLDF (WebLogic Diagnostic Framework)

    WLDF is functionality available in WebLogic Server since version 9. Starting with FMw 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow:

    1. There is a need to monitor the performance of your SOA Suite message processing.
    2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
    3. The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX.
    4. The Watch is triggered when the threshold is exceeded and fires the Notification.
    5. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc.

    When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time.

    When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

    How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

    WLDF Resources:

    - How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    - How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
    - WLDF Documentation

    Read the article

  • AIX: iscsi volumes disappear after reboot

    - by Dan
    We have an IBM P505 AIX box with two internal disks and a defined iSCSI volume. The iSCSI volume is defined in its own volume group, and is connected to an IBM iSCSI DS3300 disk array via the secondary onboard ethernet port (i.e., we're not using a dedicated HBA; we're using the second onboard ethernet port for iSCSI exclusively). When we reboot the AIX box, the iSCSI volume doesn't get mounted (which is fine; I've figured out that it fails to mount because AIX tries mounting its volumes before starting the networking stack). The problem is, after the server has booted it fails to redetect the iSCSI target as a physical disk. This means the volume group (iscsivg) can't go online. If I run cfgmgr -v to redetect the iSCSI volume, it successfully detects the iSCSI target volume and creates a physical volume reference, but allocates it a different volume ID to what was defined before. E.g.: rootvg contains hdisk0 and hdisk1; iscsivg was originally defined with hdisk2 as the physical iSCSI volume; after reboot and running cfgmgr -v, AIX detects physical volumes hdisk0, hdisk1 and hdisk3. As there's no hdisk2, I can't varyon the iscsivg volume group. I can't see any existing hdisk2 definition in the ODM. I can't easily add or change the definition of the physical disk in the iscsivg volume group as it won't "varyon". Exporting the volume group deletes it completely; recreating the volume group by "importing" it from the reallocated disk makes it available again, but surely there's a better way? Can I force a specific hdisk drive designation for an iSCSI target? How do you bring iSCSI volumes online after a reboot? I assume this "just works" with a dedicated HBA instead of a generic ethernet adapter? By the way, the iSCSI volume works fine once it's mounted; we only have problems getting it working - and only with AIX. The iSCSI array works fine with our Linux and Windows servers; i.e. the volumes get detected and remounted after boot time without any problems, using generic ethernet adapters. Here's some of the config from the AIX box.

    Defined disks / devices:

      # lsdev
      hdisk0    Available 06-08-01-5,0  16 Bit LVD SCSI Disk Drive
      hdisk1    Available 06-08-01-8,0  16 Bit LVD SCSI Disk Drive
      hdisk3    Available               Other iSCSI Disk Drive
      iscsi0    Available               iSCSI Protocol Device
      scsi0     Available 06-08-00      PCI-X Dual Channel Ultra320 SCSI Adapter bus
      scsi1     Available 06-08-01      PCI-X Dual Channel Ultra320 SCSI Adapter bus
      ses0      Available 06-08-01-15,0 SCSI Enclosure Services Device
      sisscsia0 Available 06-08         PCI-X Dual Channel Ultra320 SCSI Adapter

    iSCSI target definition in /etc/iscsi/targets:

      # IBM DS3300 disk array
      # port 1 on second controller
      10.10.xx.xxx 3260 iqn.1992-01.com.lsi:1535.600a0b80005b0a7fxxxxxxxxxxxx

    Physical volumes (after reimporting the volume group):

      # lspv
      hdisk0 0003b08a0d4936b6 rootvg  active
      hdisk1 0003b08aaa5cb366 rootvg  active
      hdisk3 0003b08a032d04bb iscsivg active
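    For reference, the manual recovery sequence the poster describes (rescan, re-import, vary on) would look something like this sketch; the device and volume group names are the ones from the question, and the mount point is a placeholder:

      cfgmgr -l iscsi0        # rescan just the iSCSI protocol device once the network is up
      lspv                    # confirm which hdisk the target came back as
      importvg -y iscsivg hdisk3
      varyonvg iscsivg
      mount /iscsi_fs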

    Read the article

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it, and in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a configure: error: newly created file is older than distributed files! error when trying to compile Ruby Enterprise, like below, when I try to run the installer, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out the cause of this problem?

      [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer
      Welcome to the Ruby Enterprise Edition installer
      This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10.
      Don't worry, none of your system files will be touched if you don't want them
      to, so there is no risk that things will screw up.
      You can expect this from the installation process:
      1. Ruby Enterprise Edition will be compiled and optimized for speed for this system.
      2. Ruby on Rails will be installed for Ruby Enterprise Edition.
      3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby.
      Press Enter to continue, or Ctrl-C to abort.
      Checking for required software...
      * C compiler... found at /usr/bin/gcc
      * C++ compiler... found at /usr/bin/g++
      * The 'make' tool... found at /usr/bin/make
      * Zlib development headers... found
      * OpenSSL development headers... found
      * GNU Readline development headers... found
      --------------------------------------------
      Target directory
      Where would you like to install Ruby Enterprise Edition to?
      (All Ruby Enterprise Edition files will be put inside that directory.)
      [/opt/ruby-enterprise] :
      --------------------------------------------
      Compiling and optimizing the memory allocator for Ruby Enterprise Edition
      In the mean time, feel free to grab a cup of coffee.
      ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking
      checking build system type... i686-pc-linux-gnu
      checking host system type... i686-pc-linux-gnu
      checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... configure: error: newly created file is older than distributed files!
      Check your system clock

    This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical and up to date. I've also tried running this after updating the time with an ntp client, to no avail. I tried this after reading this post here of someone having a similar problem:

      [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date
      Tue Apr 27 08:09:05 BST 2010

    The other approach I've tried is to touch the top level files in the build folder as suggested here, but this hasn't worked either (and to be honest, I'm not sure why it would have worked):

      [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/*

    I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error: newly created file is older than distributed files!, at line 2214:

      { echo "$as_me:$LINENO: checking whether build environment is sane" >&5
      echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
      # Just in case
      sleep 1
      echo timestamp > conftest.file
      # Do `set' in a subshell so we don't clobber the current shell's
      # arguments.  Must try -L first in case configure is actually a
      # symlink; some systems play weird games with the mod time of symlinks
      # (eg FreeBSD returns the mod time of the symlink's containing
      # directory).
      if (
         set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
         if test "$*" = "X"; then
            # -L didn't work.
            set X `ls -t $srcdir/configure conftest.file`
         fi
         rm -f conftest.file
         if test "$*" != "X $srcdir/configure conftest.file" \
            && test "$*" != "X conftest.file $srcdir/configure"; then
            # If neither matched, then we have a broken ls.  This can happen
            # if, for instance, CONFIG_SHELL is bash and it inherits a
            # broken ls alias from the environment.  This has actually
            # happened.  Such a system could not be considered "sane".
            { { echo "$as_me:$LINENO: error: ls -t appears to fail.  Make sure there is not a broken alias in your environment" >&5
      echo "$as_me: error: ls -t appears to fail.  Make sure there is not a broken alias in your environment" >&2;}
         { (exit 1); exit 1; }; }
         fi

         ### PROBLEM LINE ###
         # this line is the problem line - sometimes this test returns true and
         # sometimes it doesn't, and I can't see a pattern that determines when
         # it will pass or not.
         test "$2" = conftest.file
         )
      then
         # Ok.
         :
      else
         { { echo "$as_me:$LINENO: error: newly created file is older than distributed files!
      Check your system clock" >&5
      echo "$as_me: error: newly created file is older than distributed files!
      Check your system clock" >&2;}
         { (exit 1); exit 1; }; }
      fi

    The thing that makes this really frustrating is that this script works sometimes - when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough to make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks
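    A diagnostic sketch for this class of failure: compare the clock against the newest file in the tree, and touch the whole tree rather than just the top level (the touch shown above only updates the files directly under the directory, not configure itself or anything nested). Paths mirror the question's session; untested:

      date                          # what the VM thinks "now" is
      ls -lt | head                 # the newest distributed files, for comparison
      # touch every file in the tree, then re-run the installer
      sudo find . -type f -exec touch {} +
      sudo ./installer

    Since the failure only happens right after boot and clears up once the VM has been running a while, it would also be consistent with the guest clock starting behind and being stepped forward later by a time-sync daemon - an inference worth ruling out, not something established in the question.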

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I can figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way of letting users from the local network administer the print server through the CUPS web interface?

    EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited default config file that comes with the edition of CUPS from the Debian wheezy repos):

      LogLevel warn
      MaxLogSize 0
      SystemGroup lpadmin
      Port 631
      # Listen localhost:631
      Listen /var/run/cups/cups.sock
      Browsing On
      BrowseOrder allow,deny
      BrowseAllow all
      BrowseLocalProtocols CUPS dnssd
      # DefaultAuthType Basic
      DefaultAuthType None
      WebInterface Yes

      <Location />
        Order allow,deny
        Allow @LOCAL
      </Location>

      <Location /admin>
        Order allow,deny
        Allow @LOCAL
      </Location>

      <Location /admin/conf>
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow @LOCAL
      </Location>

      # Set the default printer/job policies...
      <Policy default>
        # Job/subscription privacy...
        JobPrivateAccess default
        JobPrivateValues default
        SubscriptionPrivateAccess default
        SubscriptionPrivateValues default

        # Job-related operations must be done by the owner or an administrator...
        <Limit Create-Job Print-Job Print-URI Validate-Job>
          Order deny,allow
        </Limit>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>

      # Set the authenticated printer/job policies...
      <Policy authenticated>
        # Job/subscription privacy...
        JobPrivateAccess default
        JobPrivateValues default
        SubscriptionPrivateAccess default
        SubscriptionPrivateValues default

        # Job-related operations must be done by the owner or an administrator...
        <Limit Create-Job Print-Job Print-URI Validate-Job>
          AuthType Default
          Order deny,allow
        </Limit>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>
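    For comparison, a minimal sketch of the sections that usually have to line up for remote web-interface administration (untested; note Basic auth is restored rather than None, since CUPS still enforces authorization on the admin operations either way, and the /admin block gains an explicit auth requirement):

      DefaultAuthType Basic
      <Location /admin>
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow @LOCAL
      </Location>

    followed by a restart, e.g. service cups restart. Whether the password check itself succeeds also depends on the server's PAM/password backend, which the config alone doesn't show.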

    Read the article

  • What version of SCO OpenServer are you guys using out there, and on what hardware?

    - by Gath
    I have some old applications running on SCO OpenServer 5.0.5, and I would love to move them to SCO OpenServer 5.0.7 and onto modern hardware (servers). Currently I am running SCO on an old IBM PL 300 personal computer with 92MB of memory and one processor, and it has been serving the clients pretty well. Now I have new, modern IBM xSeries servers and I would love to migrate the same applications to those new servers. The problem is, SCO 5.0.5 is unable to detect some of the hardware components in the new servers. I read somewhere that SCO 5.0.7 is able to detect newer hardware, even the USB ports etc. Is there anyone running SCO OpenServer out there, and on what hardware architecture are they running it? Gath

    Read the article

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with Apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note - in the instructions, for the URL to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running ./configure I get the following:

      checking for OS
       + Linux 2.6.18-028stab094.3 x86_64
      checking for C compiler ... found
       + using GNU C compiler
       + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51)
      checking for gcc -pipe switch ... found
      checking for gcc builtin atomic operations ... found
      checking for C99 variadic macros ... found
      checking for gcc variadic macros ... found
      checking for unistd.h ... found
      checking for inttypes.h ... found
      checking for limits.h ... found
      checking for sys/filio.h ... not found
      checking for sys/param.h ... found
      checking for sys/mount.h ... found
      checking for sys/statvfs.h ... found
      checking for crypt.h ... found
      checking for Linux specific features
      checking for epoll ... found
      checking for sendfile() ... found
      checking for sendfile64() ... found
      checking for sys/prctl.h ... found
      checking for prctl(PR_SET_DUMPABLE) ... found
      checking for sched_setaffinity() ... found
      checking for crypt_r() ... found
      checking for sys/vfs.h ... found
      checking for nobody group ... found
      checking for poll() ... found
      checking for /dev/poll ... not found
      checking for kqueue ... not found
      checking for crypt() ... not found
      checking for crypt() in libcrypt ... found
      checking for F_READAHEAD ... not found
      checking for posix_fadvise() ... found
      checking for O_DIRECT ... found
      checking for F_NOCACHE ... not found
      checking for directio() ... not found
      checking for statfs() ... found
      checking for statvfs() ... found
      checking for dlopen() ... not found
      checking for dlopen() in libdl ... found
      checking for sched_yield() ... found
      checking for SO_SETFIB ... not found
      checking for SO_ACCEPTFILTER ... not found
      checking for TCP_DEFER_ACCEPT ... found
      checking for accept4() ... not found
      checking for int size ... 4 bytes
      checking for long size ... 8 bytes
      checking for long long size ... 8 bytes
      checking for void * size ... 8 bytes
      checking for uint64_t ... found
      checking for sig_atomic_t ... found
      checking for sig_atomic_t size ... 4 bytes
      checking for socklen_t ... found
      checking for in_addr_t ... found
      checking for in_port_t ... found
      checking for rlim_t ... found
      checking for uintptr_t ... uintptr_t found
      checking for system endianess ... little endianess
      checking for size_t size ... 8 bytes
      checking for off_t size ... 8 bytes
      checking for time_t size ... 8 bytes
      checking for setproctitle() ... not found
      checking for pread() ... found
      checking for pwrite() ... found
      checking for sys_nerr ... found
      checking for localtime_r() ... found
      checking for posix_memalign() ... found
      checking for memalign() ... found
      checking for mmap(MAP_ANON|MAP_SHARED) ... found
      checking for mmap("/dev/zero", MAP_SHARED) ... found
      checking for System V shared memory ... found
      checking for POSIX semaphores ... not found
      checking for POSIX semaphores in libpthread ... found
      checking for struct msghdr.msg_control ... found
      checking for ioctl(FIONBIO) ... found
      checking for struct tm.tm_gmtoff ... found
      checking for struct dirent.d_namlen ... not found
      checking for struct dirent.d_type ... found
      checking for PCRE library ... found
      checking for system md library ... not found
      checking for system md5 library ... not found
      checking for OpenSSL md5 crypto library ... found
      checking for sha1 in system md library ... not found
      checking for OpenSSL sha1 crypto library ... found
      checking for zlib library ... found
      creating objs/Makefile

      Configuration summary
        + using system PCRE library
        + OpenSSL library is not used
        + md5: using system crypto library
        + sha1: using system crypto library
        + using system zlib library

        nginx path prefix: "/usr/local/nginx"
        nginx binary file: "/usr/local/nginx/sbin/nginx"
        nginx configuration prefix: "/usr/local/nginx/conf"
        nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
        nginx pid file: "/usr/local/nginx/logs/nginx.pid"
        nginx error log file: "/usr/local/nginx/logs/error.log"
        nginx http access log file: "/usr/local/nginx/logs/access.log"
        nginx http client request body temporary files: "client_body_temp"
        nginx http proxy temporary files: "proxy_temp"
        nginx http fastcgi temporary files: "fastcgi_temp"
        nginx http uwsgi temporary files: "uwsgi_temp"
        nginx http scgi temporary files: "scgi_temp"

    It says if you get errors to stop and make sure packages are installed. I didn't get errors, but as you can see I got several "not found"s. Are those considered errors? If so, how do I resolve them? And as noted in the link, I cannot install through yum, because it won't work with Plesk then. Thanks!
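    Those "not found" lines come from configure probing for optional, platform-specific features (kqueue, for instance, is BSD-only), and the run above reaches creating objs/Makefile and a Configuration summary, which is what a successful configure looks like. For context, the kind of server block the linked guide ultimately sets up looks roughly like this sketch (the backend address/port is whatever Apache is listening on; placeholder values):

      server {
          listen 80;
          server_name example.com;

          location / {
              # hand the request to Apache and preserve the original host/client
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }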

    Read the article

  • Microsoft Office 2007

    - by nardone25
    Hello everyone at Server Fault. I am having a big problem at my job. I will let everyone know what I am using. I have two IBM x3690 servers with VMware ESXi on both. Our production server has 8 VMs on there. I have two LeftHand SANs from HP. I have a WatchGuard firewall. At our other site I have one server, an IBM x3b90 server, with one VM on there. I have a Cisco 1700 router and another WatchGuard firewall. I have a VPN tunnel from my WatchGuard firewall to my Cisco router. Site one works great. Site two is having problems saving Word documents and problems printing in Publisher 2007. Can someone please help?

    Read the article

  • LTO 3 tape drive needing repair

    - by DO it all Paul
    We have an IBM LTO 3 tape drive that needs repair, and with the £400 price tag I'm having to shop around for quotes. My question is: has anyone actually repaired one before, and how was it done? The first error LED was showing a 6; then I cleared the mangled tape, only for it to start flashing an alternating 'o' on the 7-segment display, similar to a half 8, flashing top to bottom, and it would just flash away like that coupled with a flashing amber light. I tried a reset, holding the eject button for it to show an 'r', then it went back to flashing again as before. I checked the IBM solutions for the codes but this flashing isn't documented at all. Would be great if anyone had any experience in this area. Thank you, Paul

    Read the article

  • Case insensitive bash auto-complete

    - by Vitaly Polonetsky
    Is there a way to make the file/dir auto-complete in bash case insensitive? For example I would like to write: /opt/ibm/whatever/test [TAB] And bash will auto-complete it to: /opt/IBM/Whatever/TESTfile Or at least only the last part of test to TESTfile. I know that filesystems are case-sensitive, I just don't want to remember which parts are UPPER-case, I want auto-complete to fix the path for me. And if I have both TESTfile and testfile, just show me both of them like bash does today with auto-complete conflicts.
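    A sketch of the usual readline approach (tab completion in bash is handled by readline, so this applies to file/dir completion; untested here):

      # per-user, permanent: add to ~/.inputrc, then start a new shell
      set completion-ignore-case on

      # or just for the current shell session
      bind 'set completion-ignore-case on'

    With this on, completing /opt/ib<TAB> can match /opt/IBM and insert the on-disk casing; exact behaviour on mixed-case conflicts varies by bash/readline version.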

    Read the article

  • What is the old AIX & RS/6000 console font?

    - by Xepoch
    I'm sick of the Lucida Console font in my PuTTY sessions. I used to love whatever font was employed on IBM RS/6000 consoles. Does anyone know the name of that font, and whether it is available (and usable) from anywhere? Sorry, I need to clarify: the "consoles" to which I refer were not the serial consoles but rather smaller stand-alone X Window systems. They could be set up without XDM and would start in a text screen. I'm searching like crazy to find the actual machines I remember (there were hundreds of them at Uni) but I don't have the IBM #s in front of me. It looked amazingly similar to the PS/2 font.

    Read the article

  • Why a maximum 1.0 Gbit Ethernet connection on an old notebook, but only 100 Mbit on a newer, faster computer?

    - by Sam
    A strange problem with Ethernet speed: recently we bought an i7 computer running Win7 64-bit with an onboard Gigabit Ethernet controller (Realtek PCIe GBit Ethernet Family controller). Connecting this new fast PC directly to our brand new ASUS Gigabit Ethernet router via CAT6 cable(!), the adapter status (see attached picture) shows only 100 Mbit, while the router is capable of 1000 Mbit. More facts: Connecting an 8-year-old IBM notebook with Gigabit Ethernet to the same cable end shows a 1.0 Gbit connection in its adapter status. Speedtest.net shows 35 Mbit/s down on the new computer. Speedtest.net shows 78 Mbit/s down on the old rusty IBM notebook. We have a 120 Mbit down internet connection, which we truly receive on another PC (also directly connected to the router). How do we get 1.0 Gbit going on the new PC?

    Read the article

  • Web application and remote storage of files

    - by Matt
    I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, the disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I use an NFS mount on the IBM server to a path on the storage server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network. Only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?
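    A minimal sketch of the NFS variant (export path, hostnames and mount point are placeholders; untested):

      # on the storage server: /etc/exports
      /export/uploads   webhost.example.com(rw,sync,no_subtree_check)

      # on the web host (the IBM x345): mount the export at the app's storage path
      mount -t nfs storage01:/export/uploads /var/www/app/storage

    For a low-to-moderate load on the same LAN this is the simpler design, since the application keeps ordinary filesystem semantics instead of adding a second upload step.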

    Read the article

  • IPv6 connections routed to IPv4 device

    - by Yvan JANSSENS
    I have an IBM 9406-250 with V5R1 and IPv4-only connectivity, and I want it to be reachable over IPv6. I cannot install an IPv6 stack on it, but I want it to be accessible via IPv6 so I can drop the requirement to VPN into my home network. I have an OpenWRT device running which takes care of the IPv6 routing on my network and the tunnel to SixXS, and I was wondering if it is possible to assign another IPv6 address to that device and relay it to the IPv4-only IBM computer. Which software do I need for this, and what is this technique called?
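    What's being described is usually called a TCP relay or application-level proxy. A sketch of doing it on the OpenWRT box with socat, assuming the AS/400 listens for telnet on an address like 192.168.1.50 (placeholder):

      # accept IPv6 connections on port 23 and relay them to the IPv4 host
      socat TCP6-LISTEN:23,fork,reuseaddr TCP4:192.168.1.50:23

    The 6tunnel utility does the same job with a similar one-liner. Note this relays individual TCP ports rather than routing a whole IPv6 address through to the machine, so it only loosely matches the "assign another IPv6 address to the router" idea.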

    Read the article

  • Installing Java 1.5 on Ubuntu?

    - by StackedCrooked
    I already have Java 1.6, but I need to test something with 1.5. I have downloaded the .bin file from http://java.sun.com/javase/downloads/index_jdk5.jsp using the Sun Download Manager. Now I want to create a deb file from this bin file:

      $ fakeroot make-jpkg java_ee_sdk-5_01-linux.bin
      Creating temporary directory: /tmp/make-jpkg.Zpm1Y7LbZ0
      Loading plugins: blackdown-j2re.sh blackdown-j2sdk.sh common.sh ibm-j2re.sh ibm-j2sdk.sh j2re.sh j2sdk-doc.sh j2sdk.sh j2se.sh sun-j2re.sh sun-j2sdk-doc.sh sun-j2sdk.sh
      Detected Debian build architecture: i386
      Detected Debian GNU type: i486-linux-gnu
      No matching plugin was found.
      Removing temporary directory: done

    How can I fix the "No matching plugin was found." error?

    Update: I downloaded jdk-1_5_0_22-linux-amd64.bin from the archive page and ran the Linux installer. It works fine.
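    A plausible reading of the error, given the plugin list in the output: make-jpkg only knows how to repackage the plain JRE/JDK installers (the sun-j2sdk.sh and similar plugins), and java_ee_sdk-5_01-linux.bin is the Java EE SDK bundle, which none of the plugins match. The path the update took, sketched (file name as in the update):

      chmod +x jdk-1_5_0_22-linux-amd64.bin
      ./jdk-1_5_0_22-linux-amd64.bin      # self-extracting JDK installer

    Note the i386 build architecture detected above versus the amd64 installer in the update; on a 32-bit system the matching i586 JDK binary would be the one to grab.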

    Read the article

  • Can I use the same machine as a client and server for SSH?

    - by achraf
    For development tests, I need to set up an SFTP server. So I want to know if it's possible to use the same machine as both the client and the server. I tried, and I keep getting this error:

      Permission denied (publickey).
      Connection closed

    and by running ssh -v agharroud@localhost I get:

      OpenSSH_3.8.1p1, OpenSSL 0.9.7d 17 Mar
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to localhost [127.0.0.1] port 22.
      debug1: Connection established.
      debug1: identity file /home/agharroud/.ssh/identity type -1
      debug1: identity file /home/agharroud/.ssh/id_rsa type 1
      debug1: identity file /home/agharroud/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_3.8.1p1
      debug1: match: OpenSSH_3.8.1p1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-cbc hmac-md5 none
      debug1: kex: client->server aes128-cbc hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'localhost' is known and matches the RSA host key.
      debug1: Found key in /home/agharroud/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received

      ****USAGE WARNING****

      This is a private computer system. This computer system, including all
      related equipment, networks, and network devices (specifically including
      Internet access) are provided only for authorized use. This computer
      system may be monitored for all lawful purposes, including to ensure that
      its use is authorized, for management of the system, to facilitate
      protection against unauthorized access, and to verify security procedures,
      survivability, and operational security. Monitoring includes active
      attacks by authorized entities to test or verify the security of this
      system. During monitoring, information may be examined, recorded, copied
      and used for authorized purposes. All information, including personal
      information, placed or sent over this system may be monitored.

      Use of this computer system, authorized or unauthorized, constitutes
      consent to monitoring of this system. Unauthorized use may subject you to
      criminal prosecution. Evidence of unauthorized use collected during
      monitoring may be used for administrative, criminal, or other adverse
      action. Use of this system constitutes consent to monitoring for these
      purposes.

      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/agharroud/.ssh/identity
      debug1: Offering public key: /home/agharroud/.ssh/id_rsa
      debug1: Authentications that can continue: publickey
      debug1: Trying private key: /home/agharroud/.ssh/id_dsa
      debug1: No more authentication methods to try.
      Permission denied (publickey).

    Any ideas about the problem? Thanks!
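    The debug output shows the server only accepts publickey authentication and that none of the offered keys is accepted. A sketch of the usual fix, run as the same user on the one machine (assumes OpenSSH defaults):

      ssh-keygen -t rsa                                  # accept the default path, e.g. ~/.ssh/id_rsa
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize the key for this same account
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      ssh agharroud@localhost

    There is no problem in principle with the client and server being the same host; sshd does not care that the connection comes from 127.0.0.1.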

    Read the article

  • Disk controller speed responsible for slow write speeds?

    - by vizvayu
    I have a question. I'm using ESXi 4.0 U1 on an IBM x3200 M2 with an integrated LSI 1064e RAID controller, without any kind of cache. I have 3 250GB hot-swap SATA HDs configured in RAID 1E (IME). ESXi works fine and read speeds are quite OK, but write speeds are incredibly slow, never more than 8MB/s, and this is the best-case scenario: benchmarking with iozone streaming writes, using a VMware Paravirtual controller, and with only this VM active, no swapping of any kind (total VM memory reserved). I already wrote to IBM, but I don't have any kind of paid support so they didn't even answer, so I'm just wondering... does anybody have experience with a similar setup? I just want to be sure this is hardware related and can't be fixed with some kind of config option, because I'm thinking of buying a new RAID controller (the Adaptec 2405 looks nice). Thanks again!
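    For reproducibility, a streaming-write iozone run of the kind described might look like this sketch (file path and sizes are placeholders; -i 0 selects the write/rewrite test and -e includes flush time in the result):

      iozone -i 0 -s 1g -r 256k -e -f /vmfs/volumes/datastore1/iozone.tmp

    On a cacheless controller, every RAID 1E write must hit both mirror copies synchronously, so single-digit MB/s is at least consistent with the missing write cache rather than a misconfiguration - an inference, not a confirmed diagnosis.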

    Read the article

  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    The Exchange 2010 remote administration tools are complaining with the following error:

      [txexmb02.ibm.com] Connecting to remote server failed with the following error message : The WS-Management service cannot process the request. The system load quota of 1000 requests per 2 seconds has been exceeded. Send future requests at a slower rate or raise the system quota. The next request from this user will not be approved for at least 558475776 milliseconds. For more information, see the about_Remote_Troubleshooting Help topic.
        + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
        + FullyQualifiedErrorId : PSSessionOpenFailed
      VERBOSE: Connecting to TXEXHC02.ibm.com

    The help document this error refers to says this is a WS-Man error. We're running SCOM 2007 R2, and I'm thinking that it is increasing the query count, but I need to prove it.
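    A starting point for proving it, sketched in PowerShell on the server named in the error (mapping this particular quota to a specific WSMan setting is an assumption here, not something the error text confirms):

      # inspect the WinRM service quotas currently in force
      Get-ChildItem WSMan:\localhost\Service

      # example of raising a per-user quota once the relevant one is identified
      Set-Item WSMan:\localhost\Service\MaxConcurrentOperationsPerUser 2000

    Correlating the WinRM event log (Microsoft-Windows-WinRM/Operational) with the SCOM agent's activity windows would show whether SCOM is the client generating the load.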

    Read the article

  • SharePoint 2010 - Access denied during ApplyWebConfigModifications()

    - by tcoalson
    I have SharePoint 2010 installed on a Windows Server 2008 R2 machine which is also hosting SQL Server 2008 R2. I am attempting to deploy a solution that includes web parts in the 2010 environment that is working fine in MOSS 2007. The web part feature has a feature receiver that updates the web.config. When I try to activate the feature through the Site Collection Features GUI, I receive an access denied message. I am logged on to the server and into SharePoint with the app pool account, which is also a member of the domain administrators group, the local administrators group and the SharePoint farm admin group. This account is also dbo on SQL Server. This same feature activates fine using the stsadm command. I have dug into this issue at length and here is what I have found:

    Looking at the Microsoft assemblies in Reflector, my error is coming from the SPWebApplication.ApplyWebConfigModifications() method. I can see the trace statements from SPWebConfigFileChanges.RemoveModificationsWebConfigXMLDocument and SPWebConfigFileChanges.ApplyModificationsWebConfigXMLDocument. The next line is a Save(str). Below is the output from the SharePoint logs that pertain to this error:

      Apply web config modifications to web app 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation General 8grn Medium WebConfigModification: Applying web config modifications to web app in server tw-s1-m4400-007 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 88gw Medium WebConfigModification: Applying web config modifications to file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpModules Node name add[@name='JivePageController'] 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpHandlers Node name add[@path='ScriptResource.axd'] 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpHandlers Node name - add[@path='ScriptResource.axd'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpModules Node name - add[@name='JivePageController'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.09 w3wp.exe (0x15C4) 0x1444 SharePoint Foundation Topology e5mb Medium WcfReceiveRequest: LocalAddress: 'http://tw-s1-m4400-007.jivedemo.local:32843/15702467ece1408f881abeabac3b5077/MetadataWebService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: xxx MessageId: 'urn:uuid:4e859532-ed7f-4937-8b88-68d3af43d589' 9f403ede-2c94-490b-a05c-e169cc5fe58d
      02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology f6kh High WebConfigModification: Save of web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config for applying modifications to web app SharePoint - 2008 failed. Error message - Access to the path 'C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config' is denied. 5a817a37-7bf6-4d26-be51-207369e38f5b
      02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8j2o High WebConfigModification: Changes not applied to web application SharePoint - 2008 with Url xxx 5a817a37-7bf6-4d26-be51-207369e38f5b

    Any help would be appreciated!
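    For comparison, the two activation paths mentioned above, sketched (the feature name and site URL are placeholders):

      # works for the poster: command-line activation
      stsadm -o activatefeature -name MyWebPartsFeature -url http://tw-s1-m4400-007:2008

      # SharePoint 2010 PowerShell equivalent
      Enable-SPFeature -Identity "MyWebPartsFeature" -Url http://tw-s1-m4400-007:2008

    Since stsadm runs as the logged-on administrator while the GUI path runs inside the w3wp.exe worker shown in the log, the access-denied on web.config points at the effective permissions of that worker process; that reading is an inference from the log, not a confirmed root cause.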

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception:

      [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory]

    I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours:

    Previous version (everything works just fine): None of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class).

    Upgraded version (ClassNotFound): There is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error.

    I now have two questions: What sort is that resource? Because inside our EAR there are two WARs and neither of them contains a 'services' folder in its META-INF directory. And where could that value come from otherwise? Because a file diff showed me no new or changed properties files. No need to say I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks!

    EDIT (according to comments, input/questions):

    Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception):

      javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory

    I think this has nothing to do with the current issue since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception).
    Here are the most important lines showing how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is simplified source of the mentioned javax.xml.bind.ContextFinder class:

    URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext");
    BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8"));
    String factoryClassName = r.readLine().trim();

    The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. Because this has become a super large question I will also add a bounty :)

    Read the article

  • Facing Null Pointer Exception while using BIRT

    - by srikanth
    Hi, I am using the BIRT APIs in a Java program. My code is:

    package com.tecnotree.mdx.product.utils;

    import java.util.HashMap;
    import java.util.logging.Level;

    import org.eclipse.birt.core.exception.BirtException;
    import org.eclipse.birt.core.framework.Platform;
    import org.eclipse.birt.report.engine.api.EngineConfig;
    import org.eclipse.birt.report.engine.api.EngineConstants;
    import org.eclipse.birt.report.engine.api.HTMLRenderOption;
    import org.eclipse.birt.report.engine.api.IReportEngine;
    import org.eclipse.birt.report.engine.api.IReportEngineFactory;
    import org.eclipse.birt.report.engine.api.IReportRunnable;
    import org.eclipse.birt.report.engine.api.IRunAndRenderTask;

    public class ExampleReport {

        public static void main(String[] args) {
            IReportRunnable design = null;
            IRunAndRenderTask task = null;
            HTMLRenderOption renderContext = null;
            HashMap contextMap = null;
            HTMLRenderOption options = null;

            // Configure and start the BIRT platform
            final EngineConfig conf = new EngineConfig();
            conf.setEngineHome("C:\\birt-runtime-2_5_2\\birt-runtime-2_5_2\\ReportEngine");
            System.out.println("conf " + conf.getBIRTHome());
            conf.setLogConfig(null, Level.FINE);
            try {
                Platform.startup(conf);
            } catch (BirtException e1) {
                e1.printStackTrace();
            }

            IReportEngineFactory factory = (IReportEngineFactory) Platform
                    .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
            System.out.println("Factory : " + factory.toString());
            System.out.println(conf.toString());

            IReportEngine engine = factory.createReportEngine(conf);
            System.out.println("Engine : " + engine);

            // Open the sample report design shipped with the runtime
            try {
                design = engine.openReportDesign("C:\\birt-runtime-2_5_2\\birt-runtime-2_5_2\\ReportEngine\\samples\\hello_world.rptdesign");
            } catch (Exception e) {
                System.err.println("An error occurred during the opening of the report file!");
                e.printStackTrace();
                System.exit(-1);
            }

            // Run the report and render it to HTML
            task = engine.createRunAndRenderTask(design);
            renderContext = new HTMLRenderOption();
            renderContext.setImageDirectory("image");
            contextMap = new HashMap();
            contextMap.put(EngineConstants.APPCONTEXT_HTML_RENDER_CONTEXT, renderContext);
            task.setAppContext(contextMap);

            options = new HTMLRenderOption();
            options.setOutputFileName("c:/temp/output.html");
            options.setOutputFormat("html");
            task.setRenderOption(options);

            try {
                task.run();
            } catch (Exception e) {
                System.err.println("An error occurred while running the report!");
                e.printStackTrace();
                System.exit(-1);
            }

            System.out.println("All went well. Closing program!");
            engine.destroy();
        }
    }

    But I am facing a NullPointerException while creating the report engine.

    STACKTRACE:

    Exception in thread "main" java.lang.NullPointerException
        at org.eclipse.birt.report.engine.api.impl.ReportEngine$EngineExtensionManager.<init>(ReportEngine.java:784)
        at org.eclipse.birt.report.engine.api.impl.ReportEngine.<init>(ReportEngine.java:109)
        at org.eclipse.birt.report.engine.api.impl.ReportEngineFactory$1.run(ReportEngineFactory.java:18)
        at org.eclipse.birt.report.engine.api.impl.ReportEngineFactory$1.run(ReportEngineFactory.java:1)
        at java.security.AccessController.doPrivileged(Native Method)
        at org.eclipse.birt.report.engine.api.impl.ReportEngineFactory.createReportEngine(ReportEngineFactory.java:14)
        at com.tecnotree.mdx.product.utils.ExampleReport.main(ExampleReport.java:47)

    Please help me with this; my project deadline has been reached. I appreciate your reply. Thanks in advance!

    Read the article

  • TemplateBinding with Converter - what is wrong?

    - by MartyIX
    I'm creating a game desk. I wanted to specify the field size (one field is a square) as an attached property and use that value to set the Viewport that draws a 2x2 matrix (tile mode would do the rest of the game desk). I'm quite at a loss as to what is wrong, because the binding doesn't work. This test line in XAML gives the behaviour I would like to have:

    <DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None">

    The game desk is based on this DrawingPaint sample: http://msdn.microsoft.com/en-us/library/aa970904.aspx

    XAML:

    <Window x:Class="Sokoban.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:Sokoban"
        Title="Window1" Height="559" Width="419">
        <Window.Resources>
            <local:FieldSizeToRectConverter x:Key="fieldSizeConverter" />
            <Style x:Key="GameDesk" TargetType="{x:Type Rectangle}">
                <Setter Property="local:GameDeskProperties.FieldSize" Value="50" />
                <Setter Property="Fill">
                    <Setter.Value>
                        <!--<DrawingBrush Viewport="0,0,100,100" ViewportUnits="Absolute" TileMode="None">-->
                        <DrawingBrush Viewport="{TemplateBinding local:GameDeskProperties.FieldSize, Converter={StaticResource fieldSizeConverter}}"
                                      ViewportUnits="Absolute" TileMode="None">
                            <DrawingBrush.Drawing>
                                <DrawingGroup>
                                    <GeometryDrawing Brush="CornflowerBlue">
                                        <GeometryDrawing.Geometry>
                                            <RectangleGeometry Rect="0,0,100,100" />
                                        </GeometryDrawing.Geometry>
                                    </GeometryDrawing>
                                    <GeometryDrawing Brush="Azure">
                                        <GeometryDrawing.Geometry>
                                            <GeometryGroup>
                                                <RectangleGeometry Rect="0,0,50,50" />
                                                <RectangleGeometry Rect="50,50,50,50" />
                                            </GeometryGroup>
                                        </GeometryDrawing.Geometry>
                                    </GeometryDrawing>
                                </DrawingGroup>
                            </DrawingBrush.Drawing>
                        </DrawingBrush>
                    </Setter.Value>
                </Setter>
            </Style>
        </Window.Resources>
        <StackPanel>
            <Rectangle Style="{StaticResource GameDesk}" Width="300" Height="150" />
        </StackPanel>
    </Window>

    Converter and property definition:

    using System;
    using System.Diagnostics;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;

    namespace Sokoban
    {
        public class GameDeskProperties : Panel
        {
            public static readonly DependencyProperty FieldSizeProperty;

            static GameDeskProperties()
            {
                PropertyChangedCallback fieldSizeChanged = new PropertyChangedCallback(OnFieldSizeChanged);
                PropertyMetadata fieldSizeMetadata = new PropertyMetadata(50, fieldSizeChanged);
                FieldSizeProperty = DependencyProperty.RegisterAttached("FieldSize",
                    typeof(int), typeof(GameDeskProperties), fieldSizeMetadata);
            }

            public static int GetFieldSize(DependencyObject target)
            {
                return (int)target.GetValue(FieldSizeProperty);
            }

            public static void SetFieldSize(DependencyObject target, int value)
            {
                target.SetValue(FieldSizeProperty, value);
            }

            static void OnFieldSizeChanged(DependencyObject target, DependencyPropertyChangedEventArgs e)
            {
                Debug.WriteLine("FieldSize just changed: " + e.NewValue);
            }
        }

        [ValueConversion(/* sourceType */ typeof(int), /* targetType */ typeof(Rect))]
        public class FieldSizeToRectConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                Debug.Assert(targetType == typeof(Rect)); // the conversion target is the Rect-typed Viewport
                int fieldSize = int.Parse(value.ToString());
                return new Rect(0, 0, 2 * fieldSize, 2 * fieldSize);
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                // should not be called in our example
                throw new NotImplementedException();
            }
        }
    }
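    TemplateBinding only resolves against a templated parent, so it works only inside a ControlTemplate; in a plain Style setter there is no template instance to bind to, which matches the silent failure described above. A regular Binding whose Source is the element carrying the attached property avoids that. Below is a minimal C# sketch of that idea; the helper class and method names are mine, not from the original post, and it assumes the DrawingBrush is created in code and not frozen:

    using System.Windows;
    using System.Windows.Data;
    using System.Windows.Media;
    using System.Windows.Shapes;

    namespace Sokoban
    {
        public static class ViewportBindingHelper
        {
            // Binds DrawingBrush.Viewport to the GameDeskProperties.FieldSize
            // attached property set on the rectangle, through the post's converter.
            public static void BindViewport(Rectangle rectangle, DrawingBrush brush, IValueConverter converter)
            {
                var binding = new Binding
                {
                    Source = rectangle, // the element that carries the attached property
                    Path = new PropertyPath(GameDeskProperties.FieldSizeProperty),
                    Converter = converter
                };
                BindingOperations.SetBinding(brush, DrawingBrush.ViewportProperty, binding);
            }
        }
    }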

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I'm wanting to get a WCF-over-TCP service working. I was having some problems modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config; when I run the service, the WCF Test Client can work with it fine:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <system.web>
        <compilation debug="true" />
      </system.web>
      <system.serviceModel>
        <services>
          <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
            <host>
              <baseAddresses>
                <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" />
              </baseAddresses>
            </host>
            <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1">
              <identity>
                <dns value="localhost"/>
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="WcfTcpTest.Service1Behavior">
              <serviceMetadata httpGetEnabled="True"/>
              <serviceDebug includeExceptionDetailInFaults="True" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
    </configuration>

    This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element:

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <system.web>
        <compilation debug="true" />
      </system.web>
      <system.serviceModel>
        <services>
          <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior">
            <host>
              <baseAddresses>
                <add baseAddress="net.tcp://localhost:1337/Service1/" />
              </baseAddresses>
            </host>
            <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1">
              <identity>
                <dns value="localhost"/>
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/>
          </service>
        </services>
        <behaviors>
          <serviceBehaviors>
            <behavior name="WcfTcpTest.Service1Behavior">
              <serviceDebug includeExceptionDetailInFaults="True" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
      </system.serviceModel>
    </configuration>

    But when I run this I get this error in the WCF Service Host:

    System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract.

    I get the feeling that you can't send metadata using TCP, but if that's the case, why is there a mexTcpBinding option?
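    The exception message itself points at the fix: removing httpGetEnabled is fine, but the <serviceMetadata /> behavior element must stay (it can be empty), because it is what backs the IMetadataExchange contract; metadata over TCP does work. As an illustration, here is a hedged C# sketch of the equivalent self-hosted setup — the type names follow the post's config, and this is a sketch rather than the VS service-host project itself:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class SelfHost
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(WcfTcpTest.Service1),
                new Uri("net.tcp://localhost:1337/Service1/"));

            // Without this behavior the mex endpoint fails with the
            // "contract name 'IMetadataExchange' could not be found" error.
            host.Description.Behaviors.Add(new ServiceMetadataBehavior());

            host.AddServiceEndpoint(typeof(WcfTcpTest.IService1), new NetTcpBinding(), "");
            host.AddServiceEndpoint(typeof(IMetadataExchange),
                MetadataExchangeBindings.CreateMexTcpBinding(), "mex");

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }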

    Read the article

  • ELMAH - Filtering 404 Errors

    - by Nathan Taylor
    I am attempting to configure ELMAH to filter 404 errors and I am running into difficulties with the XML-provided filter rules in my Web.config file. I followed the tutorials here and here and added an <is-type binding="BaseException" type="System.IO.FileNotFoundException" /> declaration under my <test><or>... declaration, but that completely failed.

    When I tested it locally, I stuck a breakpoint in protected void ErrorLog_Filtering() {} of the Global.asax and found that the System.Web.HttpException that gets fired by ASP.NET for a 404 doesn't have a base type of System.IO.FileNotFoundException; rather, it is simply a System.Web.HttpException.

    Next I decided to try <regex binding="BaseException.Message" pattern="The file '/[^']+' does not exist" /> in the hope that any exception matching the pattern "The file '/foo.ext' does not exist" would get filtered, but that too had no effect. As a last resort I tried <is-type binding="BaseException" type="System.Exception" />, and even that is entirely disregarded.

    I'm inclined to think there's a configuration error with ELMAH, but I fail to see any. Am I missing something blatantly obvious? Here's the relevant stuff from my web.config:

    <configuration>
      <configSections>
        <sectionGroup name="elmah">
          <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah"/>
          <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah"/>
          <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah"/>
          <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
        </sectionGroup>
      </configSections>
      <elmah>
        <errorFilter>
          <test>
            <or>
              <equal binding="HttpStatusCode" value="404" type="Int32" />
              <regex binding="BaseException.Message" pattern="The file '/[^']+' does not exist" />
            </or>
          </test>
        </errorFilter>
        <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data/logs/elmah" />
      </elmah>
      <system.web>
        <httpModules>
          <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
          <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah"/>
        </httpModules>
      </system.web>
      <system.webServer>
        <modules>
          <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah"/>
          <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
        </modules>
      </system.webServer>
    </configuration>
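    Two things are worth checking here. First, ELMAH's error-filtering documentation asks for ErrorFilterModule to be registered after the modules whose errors it filters, while the config above lists it before ErrorLogModule; swapping the order of the two <add> entries may be all that is needed. Second, the same rule can be expressed in code via the ErrorLog_Filtering hook already mentioned above — a minimal sketch for the Global.asax code-behind, assuming the ErrorFilterModule stays registered:

    using System;
    using System.Web;
    using Elmah;

    public class Global : HttpApplication
    {
        // Invoked by ELMAH's ErrorFilterModule (wired up by naming convention)
        // before an error is written to the log.
        protected void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs args)
        {
            var httpException = args.Exception.GetBaseException() as HttpException;
            if (httpException != null && httpException.GetHttpCode() == 404)
            {
                args.Dismiss(); // skip logging 404s
            }
        }
    }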

    Read the article

  • JpegBitmapEncoder.Save() throws exception when writing image with metadata to MemoryStream

    - by Dmitry
    I am trying to set up metadata on a JPG image that does not have it. You can't use the in-place writer (InPlaceBitmapMetadataWriter) in this case, because there is no place for metadata in the image. If I use a FileStream as output, everything works fine. But if I try to use a MemoryStream as output, JpegBitmapEncoder.Save() throws an exception (Exception from HRESULT: 0xC0000005). After some investigation I also found out that the encoder can save the image to a memory stream if I supply null instead of metadata.

    I've made a very simplified and short example that reproduces the problem:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Windows.Media.Imaging;

    namespace JpegSaveTest
    {
        class Program
        {
            public static JpegBitmapEncoder SetUpMetadataOnStream(Stream src, string title)
            {
                uint padding = 2048;
                BitmapDecoder original;
                BitmapFrame framecopy, newframe;
                BitmapMetadata metadata;
                JpegBitmapEncoder output = new JpegBitmapEncoder();

                src.Seek(0, SeekOrigin.Begin);
                original = JpegBitmapDecoder.Create(src, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);

                if (original.Frames[0] != null)
                {
                    framecopy = (BitmapFrame)original.Frames[0].Clone();
                    if (original.Frames[0].Metadata != null)
                        metadata = original.Frames[0].Metadata.Clone() as BitmapMetadata;
                    else
                        metadata = new BitmapMetadata("jpeg");

                    metadata.SetQuery("/app1/ifd/PaddingSchema:Padding", padding);
                    metadata.SetQuery("/app1/ifd/exif/PaddingSchema:Padding", padding);
                    metadata.SetQuery("/xmp/PaddingSchema:Padding", padding);
                    metadata.SetQuery("System.Title", title);

                    newframe = BitmapFrame.Create(framecopy, framecopy.Thumbnail, metadata, original.Frames[0].ColorContexts);
                    output.Frames.Add(newframe);
                }
                else
                {
                    Exception ex = new Exception("Image contains no frames.");
                    throw ex;
                }
                return output;
            }

            public static MemoryStream SetTagsInMemory(string sfname, string title)
            {
                Stream src, dst;
                JpegBitmapEncoder output;

                src = File.Open(sfname, FileMode.Open, FileAccess.Read, FileShare.Read);
                output = SetUpMetadataOnStream(src, title);
                dst = new MemoryStream();
                output.Save(dst);
                src.Close();
                return (MemoryStream)dst;
            }

            static void Main(string[] args)
            {
                string filename = "Z:\\dotnet\\gnom4.jpg";
                MemoryStream s;
                s = SetTagsInMemory(filename, "test title");
            }
        }
    }

    It is a simple console application. To run it, replace the filename variable content with a path to any .jpg file without metadata (or use mine). Of course I can just save the image to a temporary file first, close it, then open it and copy it to a MemoryStream, but that's a dirty and slow workaround. Any ideas about getting this working are welcome :)
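    For completeness, the temp-file fallback dismissed above as dirty and slow would look roughly like the sketch below. It reuses SetUpMetadataOnStream from the post; the class and method names here are illustrative, not from the original:

    using System.IO;
    using System.Windows.Media.Imaging;

    namespace JpegSaveTest
    {
        static class TempFileWorkaround
        {
            public static MemoryStream SetTagsViaTempFile(string sfname, string title)
            {
                string tempPath = Path.GetTempFileName();
                try
                {
                    using (Stream src = File.Open(sfname, FileMode.Open, FileAccess.Read, FileShare.Read))
                    using (Stream tmp = File.Open(tempPath, FileMode.Create, FileAccess.Write))
                    {
                        // The encoder tolerates a FileStream where a MemoryStream throws.
                        JpegBitmapEncoder output = Program.SetUpMetadataOnStream(src, title);
                        output.Save(tmp);
                    }
                    // Copy the finished file back into memory.
                    return new MemoryStream(File.ReadAllBytes(tempPath));
                }
                finally
                {
                    File.Delete(tempPath);
                }
            }
        }
    }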

    Read the article

< Previous Page | 303 304 305 306 307 308 309 310 311 312 313 314  | Next Page >