Search Results

Search found 2567 results on 103 pages for 'pg pool'.

  • IPv4 private address assignment

    - by helloworld922
    I'm working on a private network which uses static IPv4 addresses as well as DHCP addressing for the physical LAN. At a previous company, static addresses were assigned in the 10.*.*.* space and all DHCP/LAN addresses in the 192.168.*.* space. Both address spaces are defined as IPv4 private address space, and there were never any internal conflicts. From personal experience at home, school, work, and pretty much any other machine I've dealt with extensively (Windows and a few Linux distros), the DHCP server would always choose an address from the 192.168.*.* space by default. Now my question is: can I rely on this behavior? Do DHCP servers always assign by default from the 192.168.*.* pool (or any pool other than the 10.*.*.* pool), leaving the 10.*.*.* pool free for private static addressing? If not, under what conditions might a DHCP server choose an address in the 10.*.*.* address space?
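
    Editor's note: there is no standards-level default pool; a DHCP server hands out whatever its configuration declares. As an illustration (not from the question), in ISC dhcpd a single subnet declaration is enough to make the server assign 10.*.*.* addresses:

        # /etc/dhcp/dhcpd.conf (illustrative ISC dhcpd config)
        subnet 10.0.0.0 netmask 255.0.0.0 {
            range 10.0.1.1 10.0.1.254;   # a perfectly legal 10.* DHCP pool
            option routers 10.0.0.1;
        }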

  • Server 2003 answers ping, but won't serve HTTP, FTP, SMTP or POP3

    - by Manfred
    After a reboot, my server won't respond to any incoming request until it is rebooted again. Then, about 5-6 hours later, any website on it will return a ping, but it will not serve the page, nor will it serve FTP, POP3 or SMTP requests. The System log shows W3SVC errors 1014 and 1074, which relate to an application pool not replying. I have one phpAdmin app pool which I have stopped - it is showing a solitary website as the default app, but the server no longer serves php extensions, and I can't transfer the default website to another pool to kill the whole app pool. I would appreciate your help.

  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format:

        1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb
        1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb
        1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb
        1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb

    It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written to it, I'd like to also send that line to another program. I will have this program insert the line into a database so that I can crunch some statistics about how much bandwidth we're saving by operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries, as that would be computationally wasteful. Instead I'd prefer to have a daemon monitor the log and, when a change is detected, send each new line to my database-insertion script. Will swatch achieve this, or are there better options?
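
    Editor's aside: a plain tail -F pipeline is often all this needs. A minimal sketch, assuming a hypothetical insertion script /usr/local/bin/insert-into-db.sh and the stock apt-cacher-ng log path (both names are assumptions, not from the question):

        #!/bin/sh
        # -n 0 skips existing entries; -F keeps following across log rotation.
        # tail blocks between writes, so this costs nothing while the log is idle.
        tail -n 0 -F /var/log/apt-cacher-ng/apt-cacher.log | \
        while IFS= read -r line; do
            /usr/local/bin/insert-into-db.sh "$line"
        done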

  • Local or public NTP servers?

    - by BeeOnRope
    For a relatively large network (thousands of hosts), what are the arguments for and against running a locally managed pool of NTP servers (perhaps periodically set via some public NTP server) and having all other hosts on the network use that pool, versus having all hosts simply use public NTP servers directly, say via pool.ntp.org? Aside from the pros and cons, what is typical best practice today?
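
    For concreteness, the locally managed variant usually looks something like this (a sketch; the internal hostnames are invented):

        # /etc/ntp.conf on the two or three internal NTP servers
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        server 2.pool.ntp.org iburst

        # /etc/ntp.conf on every other host on the network
        server ntp1.example.com iburst
        server ntp2.example.com iburst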

  • NAT: understanding an interconnection

    - by PITCHY
    I have two routers, A and B, connected by a serial link, with the IPs 10.0.0.1/30 on A and 10.0.0.2/30 on B. On router A, NAT is activated with the pool 200.0.0.1 - 200.0.0.15/28. When traffic goes out through this router, it takes an IP from the pool, for example 200.0.0.10. Given that my new IP (200.0.0.10) is not on the same network as my destination interface (10.0.0.2), how does this work?

  • forward same port but for two different IPs (cisco)

    - by Colin
    Hi! I have a Cisco router running IOS 12.0(25) responding to two different IP addresses: IP_A and IP_B. Behind this router I also have two different servers: server_A and server_B. What I want is to forward port 22 to both servers, so:

        IP_A, port 22 -> server_A, port 22
        IP_B, port 22 -> server_B, port 22

    At the moment this only works for one of them (server_A). This is my config:

        interface Ethernet0/0
         description Internet
         ip address IP_A 255.255.255.0
         ip address IP_B 255.255.255.0 secondary
         no ip directed-broadcast
         ip nat outside
         no ip mroute-cache
         no cdp enable
        ip nat pool pool_A IP_A IP_A netmask 255.255.255.0
        ip nat pool pool_B IP_B IP_B netmask 255.255.255.0
        ip nat inside source list A pool pool_A overload
        ip nat inside source list B pool pool_B overload
        ip nat inside source static tcp server_B 22 IP_B 22 extendable
        ip nat inside source static tcp server_A 22 IP_A 22 extendable
        access-list A permit server_A
        access-list B permit server_B

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula version: 5.2.5. I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not the issue, as I have stored files larger than 2 GB on it.

        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

        Pool {
          Name = Full-Monthly
          Pool Type = Backup
          Recycle = yes
          Volume Retention = 5 months
          Volume Use Duration = 1 day
          Maximum Volumes = 5
          Maximum Volume Bytes = 12gb
        }

    bacula-sd.conf:

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /nfs/backup-pool
          LabelMedia = yes             # lets Bacula label unlabeled media
          Random Access = Yes
          RemovableMedia = no
          AlwaysOpen = no
          Label media = yes
          Maximum Volume Size = 12gb
        }

    In my original configuration, Maximum Volume Bytes and Maximum Volume Size were not set at all, and so should have defaulted to no maximum, but that did not work either.

  • The provider is not compatible with the version of Oracle Client

    - by jkrebsbach
    There was a recent upgrade of the Oracle client on a production web server housing several app pools, each containing multiple web sites. The upgrade consisted of adding the 11.2 client tools on the web server, while leaving the 9.2 and 10.2 versions alone. After learning the upgrade had occurred, I updated Oracle.DataAccess.dll on my site to the 11.2 DLL, and the web site ran smoothly. The next day I discovered the site was throwing an error:

        Oracle.DataAccess.Client.OracleException: The provider is not compatible with the version of Oracle Client

    The app pool will connect with an installed version of the Oracle client provider and coordinate database interactions. If a request comes in for a web site configured using an earlier version of the provider, that is the version the app pool will use for Oracle interactions. If a request is then made for a site configured for a different version of the provider, the app pool will see a conflict and throw the above error. This web site: http://splinter.com.au/blog/?p=156 describes the steps to grab the DLLs for a provider and bring them local (for Oracle 11.1), but even when I brought the DLLs for the Oracle provider into the local bin directory, the app pool would still detect the client-version conflict and continue to throw the exception. My only resolution for the above problem has been to use dedicated IIS app pools per version of Oracle client being leveraged.

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure GlassFish 3.1.2.2 for ADF 11g, and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do this, and documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example, I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the GlassFish command-line tool asadmin or through the admin console; I'm doing it through the admin console.

    1. Start GlassFish and connect to the admin console with the credentials you defined at installation: http://localhost:4848
    2. Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type and Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings, etc.
    3. Go to the Additional Properties tab and create username, password, and url properties with the respective values.
    4. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously.
    5. Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add -Doracle.jdbc.J2EE13Compliant=true, which is used to make sure the driver behaves in a JEE-compliant manner.
    6. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11 database driver is ojdbc6dms.jar.

    An upcoming entry will demonstrate configuring GlassFish for Oracle ADF applications.
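
    For reference, a rough asadmin equivalent of steps 2-4 (the pool name, JNDI name and property values here are examples, and colons inside property values must be escaped for asadmin):

        # Create the connection pool (names and values are examples)
        asadmin create-jdbc-connection-pool \
          --restype javax.sql.DataSource \
          --datasourceclassname oracle.jdbc.pool.OracleDataSource \
          --property user=hr:password=hr:url="jdbc\:oracle\:thin\:@localhost\:1521\:XE" \
          OraclePool

        # Expose the pool under a JNDI name
        asadmin create-jdbc-resource --connectionpoolid OraclePool jdbc/OracleDS

        # Verify the pool can reach the database
        asadmin ping-connection-pool OraclePool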

  • To encryption=on or encryption=off: a simple ZFS Crypto demo

    - by darrenm
    I've just been asked twice this week how I would demonstrate that ZFS encryption really is encrypting the data on disk. It needs to be really simple, and the target isn't forensics or cryptanalysis, just a quick demo to show the before and after. I usually do this small demo using a pool based on files so I can run strings(1) on the "disks" that make up the pool. The demo will work with real disks too, but it will take a lot longer (how much longer depends on the size of your disks). The file hamlet.txt is this one from gutenberg.org.

        # mkfile 64m /tmp/pool1_file
        # zpool create clear_pool /tmp/pool1_file
        # cp hamlet.txt /clear_pool
        # grep -i hamlet /clear_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears.

        # zpool export clear_pool
        # strings /tmp/pool1_file | grep -i hamlet | wc -l

    Note the number of times hamlet appears on disk - it is 2 more, because the file is called hamlet.txt, file names are in the clear as well, and we keep at least two copies of metadata. Now let's encrypt the file systems in the pool. Note you MUST use a new pool file; don't reuse the one from above.

        # mkfile 64m /tmp/pool2_file
        # zpool create -O encryption=on enc_pool /tmp/pool2_file
        Enter passphrase for 'enc_pool':
        Enter again:
        # cp hamlet.txt /enc_pool
        # grep -i hamlet /enc_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears is the same as before.

        # zpool export enc_pool
        # strings /tmp/pool2_file | grep -i hamlet | wc -l

    Note the word hamlet doesn't appear at all! As I said above, this isn't intended as "proof" that ZFS does encryption properly, just a quick demo.

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm creating (designing) an entity system and I ran into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor - it just contains the component name, field types and field names. An entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    Actual problem: some components depend on others (Model, Sprite, PhysicalBody and Animation depend on the Transform component), which creates a lot of problems when it comes to processing them. For example, let's define some entities using [S]prite, [P]hysicalBody and [H]ealth:

        Tank:   Transform, Sprite, PhysicalBody
        BgTree: Transform, Sprite
        House:  Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using indices. I spent 3 days working on this and I still don't have any ideas. In previous designs the TransformComponent was bound to the entity - but it wasn't a good idea. Can you give me some advice on how to process them? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools) - but I guess that would be a nightmare for CPU caches. Thanks.

  • ZFS Basics

    - by user12614620
    Stage 1 basics: creating a pool.

        # zpool create $NAME $REDUNDANCY $DISK1_0..N [$REDUNDANCY $DISK2_0..N]...

    $NAME = the name of the pool you're creating. This will also be the name of the first filesystem and, by default, it is placed at the mountpoint "/$NAME".
    $REDUNDANCY = either mirror or raidzN, where N can be 1, 2, or 3. If you leave N off, it defaults to 1.
    $DISK1_0..N = the disks assigned to the pool.

    Example 1:

        zpool create tank mirror c4t1d0 c4t2d0

    Name of pool: tank. Redundancy: mirroring. Disks being mirrored: c4t1d0 and c4t2d0. Capacity: the size of a single disk.

    Example 2:

        zpool create tank raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

    Here the redundancy is raidz, and there are five disks, in a 4+1 (4 data, 1 parity) config. This means that the capacity is 4 times the disk size. If the command used "raidz2" instead, then the config would be 3+2. Likewise, "raidz3" would be a 2+3 config.

    Example 3:

        zpool create tank mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0

    This is the same as the first mirror example, except there are two mirrors now. ZFS will stripe data across both mirrors, which means that writing data will go a bit faster. Note: you cannot create a mirror of two raidzs. You can create a raidz of mirrors, but to do that requires trickery.

  • Ubuntu NTP issues

    - by Anups
    I am trying to set up an NTP server on an Ubuntu machine and am breaking my head over this particular issue. I get the error

        ntpdate[5005]: no server suitable for synchronization found

    when running the ntpdate command. Can anyone please help me out?

    /etc/ntp.conf:

        server 0.ubuntu.pool.ntp.org
        server 1.ubuntu.pool.ntp.org
        server 2.ubuntu.pool.ntp.org
        server 3.ubuntu.pool.ntp.org

    Also, this is the output when I run netstat -anltp | grep "LISTEN":

        tcp   0  0 127.0.0.1:53      0.0.0.0:*  LISTEN  1816/dnsmasq
        tcp   0  0 0.0.0.0:22        0.0.0.0:*  LISTEN  939/sshd
        tcp   0  0 127.0.0.1:631     0.0.0.0:*  LISTEN  1013/cupsd
        tcp   0  0 127.0.0.1:39558   0.0.0.0:*  LISTEN  5529/rsession
        tcp   0  0 0.0.0.0:902       0.0.0.0:*  LISTEN  1275/vmware-authdla
        tcp   0  0 127.0.0.1:47304   0.0.0.0:*  LISTEN  5822/rsession
        tcp6  0  0 :::80             :::*       LISTEN  1400/apache2
        tcp6  0  0 :::22             :::*       LISTEN  939/sshd
        tcp6  0  0 ::1:631           :::*       LISTEN  1013/cupsd

    So what should I do so that it listens on port 123? If I run nmap -p 123 -sU -P0 192.168.36.198 and get the output

        PORT    STATE SERVICE
        123/udp open  ntp

    that means UDP is open, right? Then why doesn't it show up in the command that lists listening ports?
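
    A note on the diagnostics above: ntpd listens on UDP port 123, and the -t in netstat -anltp restricts the output to TCP sockets, so ntpd will never appear in that listing. Checking UDP sockets instead should show it:

        # UDP sockets have no LISTEN state, so list them directly
        sudo netstat -anup | grep :123
        # or, with ss:
        sudo ss -lun | grep :123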

  • Quick Fix for GlassFish/MySQL NoPasswordCredential Found

    - by MarkH
    Just the other day, I stood up a GlassFish 3.1.2 server in preparation for a new web app we've developed. Since we're using MySQL as the back-end database, I configured it for MySQL (driver) and created the requisite JDBC resource and supporting connection pool. Pinging the finished pool returned success, and all was well. Until we fired up the app, that is -- in this case, after a weekend. Funny how things seem to break when you leave them alone for a couple of days. :-) Strangely, the error indicated "No PasswordCredential found". Time to re-check that pool. All the usual properties and values were there (URL, driverClass, serverName, databaseName, portNumber, user, password) and were populated correctly. Yes, the password field, too. And it had pinged successfully. So why the problem?

    A bit of searching online produced enough relevant material to offer promise. I didn't take notes as I was investigating the cause (note to self), but here were the general steps I took to resolve the issue. First, per some guidance I had found, I tried resetting the password value to nothing (an empty value). Of course, this didn't fix anything; the database account requires a password. And when I tried to put the value back, GlassFish politely refused. Hmm. I'd seen that some folks created a new pool to replace the "broken" one, and while that did work for them, it seemed to simply side-step the issue. So I deleted the password property - which GlassFish allowed me to do - and restarted the domain. Once I was back in, I re-added the password property and its value, saved it, and pinged... success! But now to the app for the litmus test. The web app worked, and everything and everyone was now happy. Not bad for a Monday. :-D Hope this helps, Mark
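
    If you'd rather script the delete-and-re-add dance than click through the console, something along these lines should work (a sketch; "MyPool" is an example name, using asadmin's dotted-name get/set syntax):

        # Inspect the pool's current properties (pool name is an example)
        asadmin get "resources.jdbc-connection-pool.MyPool.property.*"

        # Re-set the password property, then verify with a ping
        asadmin set resources.jdbc-connection-pool.MyPool.property.password=secret
        asadmin ping-connection-pool MyPool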

  • WebLogic Server performance tuning (title garbled in source) | WebLogic Channel | Oracle

    - by ???02
    [Placeholder: the Japanese text of this WebLogic Channel article did not survive encoding; only fragments are recoverable.] Drawing on a session from Oracle DBA & Developers Days 2011, the article covers WebLogic Server performance tuning: how Listen threads and the Muxer handle requests, sizing the self-tuning thread pool, tuning JDBC data sources, and low-overhead profiling with JRockit Flight Recorder. The one intact technical detail is how to bound the self-tuning thread pool (whose maximum defaults to 400 threads, per domain.xsd), either with the JVM options

        -Dweblogic.SelfTuningThreadPoolSizeMin={thread-count}
        -Dweblogic.SelfTuningThreadPoolSizeMax={thread-count}

    or in config.xml:

        <server>
          <name>MANAGED_SRV</name>
          <self-tuning-thread-pool-size-min>{thread-count}</self-tuning-thread-pool-size-min>
          <self-tuning-thread-pool-size-max>{thread-count}</self-tuning-thread-pool-size-max>
          ...
        </server>

    The article closes with a pointer to the Oracle WebLogic Server 12c Forum (January 25, 2012, 13:30-17:00).

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA

    What does the Conflict Minerals regulation mean?

    Conflict Minerals has recently become a new buzz word in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals with questionable origin.

    Impact for existing products

    Several departments are involved in the process of collecting data and providing Conflict Minerals compliance information. For already-marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.

    Influence on product development

    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated, and thus costly changes at a later stage can be avoided.

    Integrated compliance management

    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation is fully covered end-to-end while being transparent to all stakeholders.

    Agile PLM Product Governance and Compliance (PG&C)

    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, etc. can be quickly and easily tailored to a customer's needs. Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.

    Conclusion

    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems, from runaway budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you've been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision.

    Why now?

    No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment. RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in medical devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Additional evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO).

    Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by the Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012, entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example: Dell has invested in research to develop new products designed to reduce its customers' emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell's customers over $1 billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value.

    How do you meet compliance requirements and also successfully invest in eco-friendly designs?

    No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, inability to quantify noncompliance, or not enough resources to justify investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge.

    Companies are beginning to adapt and even thrive in today's highly regulated and transparent environment by implementing systematic approaches to product compliance that are more than functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world's most innovative leaders and pioneers are leveraging Oracle's Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives.

    Options are critical, but so is ease of use. Anyone who's grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Set however many regulations as specifications you need to make sure a product can be sold in your target countries. Another option is to do as one of our leading consumer electronics customers did and define your own "catch-all" specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party's data. With Agile PG&C you are able to design compliance into your products earlier, to reduce cost and improve quality downstream when stakes are higher.

    Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don't wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article.

  • SharePoint 2010 Hosting :: HTTP Error 503. The Service is Unavailable.

    - by mbridge
    Error Message: HTTP Error 503. The service is unavailable.

    The application pool is shut down, and the Event Viewer indicates:

        Log Name:      Application
        Source:        Microsoft-Windows-User Profiles Service
        Date:          4/24/2010 4:58:28 PM
        Event ID:      1500
        Task Category: None
        Level:         Error
        Keywords:
        User:          GUERILLA\spapppool
        Computer:      SPS2010RTM.guerilla.local
        Description:   Windows cannot log you on because your profile cannot be loaded. Check that you are connected to the network, and that your network is functioning correctly. DETAIL - Unspecified error

    Solution:

    Option 1 - Log on locally with the service account once to create a local profile for it.
    Option 2 - Modify the application pool by going into IIS > Application Pools > right-click the offending app pool > Advanced Settings > set "Load User Profile" to False.

    Give it a try! Good luck!
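
    Option 2 can also be scripted with appcmd; a sketch, where the app pool name is an example rather than one taken from the event log above:

        %windir%\system32\inetsrv\appcmd.exe set apppool "SharePoint - 80" /processModel.loadUserProfile:false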

  • ORA-01033 Error in Java

    - by user1721921
    Today I am getting the error below in a J2EE application:

        java.sql.SQLException: ORA-01033: ORACLE initialization or shutdown in progress
            at java.lang.Throwable.<init>(Throwable.java)
            at java.lang.Throwable.<init>(Throwable.java)
            at java.sql.SQLException.<init>(SQLException.java:53)
            at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
            at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
            at oracle.jdbc.ttc7.O3log.receive1st(O3log.java:408)
            at oracle.jdbc.ttc7.TTC7Protocol.logon(TTC7Protocol.java:260)
            at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:365)
            at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:547)
            at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:347)
            at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:169)
            at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:149)
            at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:95)

    Can someone please let me know what this error means?
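
    For context: ORA-01033 means the instance is started but the database is not open; a startup or shutdown is in progress, or recovery is still running, so logons are refused. From the database server, a DBA can check the state and open the database, roughly like this (a sketch):

        -- connect with SYSDBA privileges, then inspect and open:
        sqlplus / as sysdba
        SQL> SELECT status FROM v$instance;
        SQL> ALTER DATABASE OPEN;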

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM e-server (1.5GB RAM, 2.4GHz CPU). ntpd is running (started at run level 2) and servers are defined:

        server 1.us.pool.ntp.org
        server 2.us.pool.ntp.org
        server 3.us.pool.ntp.org
        server time.nrc.ca
        server ntp1.cmc.ec.gc.ca
        server ntp2.cmc.ec.gc.ca
        server wuarchive.wustl.edu
        server clock.psu.edu

    Looking at the log file, it would seem that the NTP daemon is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below), it would seem the NTP daemon started OK and is running, so I am totally flummoxed right now :-( Here's a copy of my ntp.log file.
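
    An editor's aside for anyone debugging the same symptom: a quick way to check whether ntpd is actually synchronizing is to query its peer table:

        # Show peer status: a '*' in the first column marks the peer selected
        # for synchronization, and 'reach' should climb toward 377 as polls succeed.
        ntpq -pn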

  • IntelliTrace collector error Some or all identity references could not be translated

    - by Tarun Arora
    If you are running the IntelliTrace standalone collector to collect a trace against an application pool running under the identity ".\<username>", then you are likely to run into the following exception:

        Start-IntelliTraceCollection : Some or all identity references could not be translated.
        At line:1 char:29
        + Start-IntelliTraceCollection <<<< "FabrikamFiber.Web" C:\IntelliTraceCTP\collection_plan.ASP.NET.trace.xml C:\IntelliTraceLogs
            + CategoryInfo          : NotSpecified: (:) [Start-IntelliTraceCollection], IdentityNotMappedException
            + FullyQualifiedErrorId : System.Security.Principal.IdentityNotMappedException,Microsoft.VisualStudio.IntelliTrace.PowerShell.StartIntelliTraceCollectionCommand

    Steps to reproduce the issue: the application pool "FabrikamFiber.Web" is using the identity ".\Admin".

    Workaround: change the identity of the application pool to <MachineName|Domain>\<UserName>. In the example above, if I change the identity to "Production\Admin" then IntelliTrace does not throw the exception. This error has been reported to Microsoft and is expected to be fixed in a future release. Enjoy!

  • Rewrite subfolder URL so it appears differently

    - by pg
    I want to change the URL that appears when you go to:

        MYSUBDOMAIN.MYCOMPANY.COM/carbonated-milk/

    to:

        carbonated-milk.com/

    I'm trying to figure out what to put in my .htaccess file to do this. I've been reading through all kinds of doc files and through other people's questions on Stack Overflow, but I can't come up with an answer for this. Do any of you have any idea?
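
    An editor's sketch of the usual mod_rewrite pattern for this, assuming carbonated-milk.com is already pointed (via DNS and a virtual host) at the same document root; details vary by setup:

        RewriteEngine On
        # Serve requests for carbonated-milk.com out of the subfolder
        # without exposing the folder in the visible URL
        RewriteCond %{HTTP_HOST} ^(www\.)?carbonated-milk\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/carbonated-milk/
        RewriteRule ^(.*)$ /carbonated-milk/$1 [L]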

  • PostgreSQL Backup/Restore

    - by Ian
    What's the best way to back up a PostgreSQL database? I've tried using the documentation at www.postgresql.org, but I always get integrity errors when restoring. Right now I'm using this for backup:

        pg_dump -U myuser -d mydatabase db.pg.dump

    and this for restore:

        pg_restore -c -r -U myuser -d mydatabase db.pg.dump

    But I'm not getting the desired results.
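
    An editor's note on the commands above: pg_dump writes plain SQL by default, and pg_restore only reads the custom, directory, or tar archive formats, so the pair as written won't round-trip. A minimal working pair, keeping the question's user and database names, looks like:

        # Dump in the custom archive format (-Fc) that pg_restore understands
        pg_dump -U myuser -Fc -f db.pg.dump mydatabase

        # Restore, dropping (-c) existing objects before recreating them
        pg_restore -c -U myuser -d mydatabase db.pg.dump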

  • How do I re-enable the IPMI temperature sensors?

    - by NobleUplift
    I've never had a problem reading temperature sensors with ipmitool on my server, but recently the temperature readings started showing up as disabled:

        # ipmitool sdr list
        Temp             | disabled      | ns
        Temp             | disabled      | ns
        Ambient Temp     | 21 degrees C  | ok
        CMOS Battery     | 0x00          | ok
        VCORE            | 0x00          | ok
        VDDIO            | 0x00          | ok
        VDDA             | 0x00          | ok
        VTT              | 0x00          | ok
        VCORE            | 0x00          | ok
        VDDIO            | 0x00          | ok
        VDDA             | 0x00          | ok
        VTT              | 0x00          | ok
        VDD 1.2V PG      | 0x00          | ok
        Linear PG        | 0x00          | ok

    I am using OpenIPMI 2.0.19 and ipmitool 1.8.12. How can I re-enable my temperature sensors?

  • Mounting case-sensitive shared folder in VirtualBox

    - by rhettg
    I have an OS X host with an Ubuntu VM trying to mount a shared folder. I'm using the options:

        sudo mount -t vboxsf -o uid=1000,gid=100 pg ~/pg-host

    The folder mounts fine; however, it appears that the mounted directory is case-insensitive, even though my OS X drive is formatted case-sensitive. Are there any options to control this behavior?
