Search Results

Search found 23044 results on 922 pages for 'oracle solaris 11'.

  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which are typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The script relies on environment variables defined in a file: ~/<user>.env, which I source from .bashrc. This works fine when invoking the script from the command line but if I want to add the script as a crontab entry I run into the problem where .bashrc isn't read. My question: What is the best practice approach for solving this problem? I realise I could define a crontab entry such as: * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh' ... but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but this would become a nightmare to maintain. Any help appreciated.
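
    One low-maintenance pattern is a tiny wrapper that sources the env file once and then runs whatever cron asked for, so the crontab lines stay readable and the environment logic lives in one place. A sketch (the wrapper name and paths are made up for illustration):

        #!/usr/bin/bash
        # /home/myuser/scripts/cronwrap.sh -- hypothetical wrapper for cron jobs:
        # pull in the user's environment once, then exec the real command
        . /home/myuser/myuser.env
        exec "$@"

    The crontab entry then becomes, for example:

        * * * * 1-5 /home/myuser/scripts/cronwrap.sh /home/myuser/scripts/myscript.sh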

    Read the article

  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup but I firmly believe based on small tests that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being dedupified but the majority (95%+) of our data already exists on the filesystem. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of existing data? It doesn't have to be asynchronous or live. (And FYI there isn't enough room on the pool to copy the entire filesystem to another and then just switch the mounts.)
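
    One commonly suggested workaround, sketched below, is to rewrite existing data one dataset at a time with zfs send/receive so the blocks pass back through the dedup write path; this is not a built-in "rededup" command, the dataset names are placeholders, and it only needs temporary space for the dataset being copied rather than the whole pool:

        # a sketch -- tank/data is a placeholder dataset name
        zfs snapshot tank/data@rededup
        zfs send tank/data@rededup | zfs receive tank/data.new
        # verify the copy, then swap the datasets
        zfs rename tank/data tank/data.old
        zfs rename tank/data.new tank/data
        zfs destroy -r tank/data.old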

    Read the article

  • Why's SMC failing on startup?

    - by Brian Knoblauch
    Trying to remove a user from one of our servers, but I seem to be thwarted at every turn... SMC refuses to load the user list (failing with a NoClassDefFoundError in the listAll method of UserContent). vipw just returns with "vipw: /etc/passwd file busy". I'm the only user on the system at the moment (it's our backup SRSS box), and both of these fail even right after a reboot. I don't have console access at the moment either unfortunately (or I would try single user mode). Of course, even if init mode S worked and let me do this one task, it doesn't solve the root problem. Ideas?

    Read the article

  • Older raid controllers in raid 5 vs. Jbod and SW raid

    - by TEB
    Hi. I'm in the fortunate position of having 6 older Supermicro VOD servers with the following config:

        Supermicro 3U case, 3x PSU
        Dual Xeon 3 GHz P4-class CPUs (about 5 years old; haven't checked the exact type)
        4 GB RAM
        3ware 9500-8 SATA controller, 8 SATA slots and a lot of free drives
        2 GB flash boot drive

    What I'm curious about is the RAID5 performance on these old beasts in HW mode vs. SW RAID on Linux with the controller set to JBOD mode. I'm thinking of using CentOS 5.5 or Ubuntu, or ZFS RaidZ on OpenSolaris. Any tips or recommendations? Best regards, TEB
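
    For the software-RAID side of the comparison, the Linux setup would look roughly like this (a sketch; device names are examples and it assumes the 3ware card exposes the disks as plain JBOD units):

        # build a software RAID5 array across the 8 SATA disks (example device names)
        mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
        # watch the initial resync, then create a filesystem on the array
        cat /proc/mdstat
        mkfs.ext3 /dev/md0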

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site: Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute.

    Well.. it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached::getStats() during one of the slow periods:

        [pid] => 18079
        [uptime] => 8903
        [threads] => 4
        [time] => 1332795759
        [pointer_size] => 32
        [rusage_user_seconds] => 26
        [rusage_user_microseconds] => 503872
        [rusage_system_seconds] => 125
        [rusage_system_microseconds] => 477008
        [curr_items] => 42099
        [total_items] => 422500
        [limit_maxbytes] => 943718400
        [curr_connections] => 84
        [total_connections] => 4946
        [connection_structures] => 178
        [bytes] => 7259957
        [cmd_get] => 1679091
        [cmd_set] => 351809
        [get_hits] => 1662048
        [get_misses] => 17043
        [evictions] => 0
        [bytes_read] => 109388476
        [bytes_written] => 3187646458
        [version] => 1.4.13

    Things I have ruled out so far:
    - Hitting the max connections limit (curr_connections of 84 is well below the default max of 1024).
    - Swapping: the machine has 900M out of 1024M of memory dedicated to memcached on a dedicated machine, and it only appears to be holding about 7 MB of data as per the bytes stat.

    How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. I'm not sure how to check for network problems, but as this is a dedicated server on the same private network as the web box I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts. Edit: I also forgot to mention that I've tried both persistent and non-persistent connections with minimal-to-no impact.
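
    When chasing outliers like these it can help to sample round-trip latency from the web box itself, outside of PHP, so client-library stalls can be told apart from server or network stalls. A rough sketch (the hostname, port, and the availability of nc and GNU date with %N are assumptions about the environment):

        # time a trivial memcached request once per second
        while sleep 1; do
          start=$(date +%s%N)
          printf 'get probe_key\r\nquit\r\n' | nc memcached-host 11211 > /dev/null
          end=$(date +%s%N)
          echo "$(( (end - start) / 1000000 )) ms"
        done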

    Read the article

  • auspex LFS backups

    - by user1250465
    I have some backup tapes which existed on an AUSPEX file server. The backups were written to tape with the SunOS version of the CPIO command. Now that I need to restore them (of course there are no more Auspex servers in existence), the backups won't restore because the headers are not standard. I have dumped the tape images to disk. PAX, CPIO, and TAR cannot read the images. I've tried all of the CPIO format options. The errors I get are "name too long", "byte swapped in header", or just junk output. I can open up the images and read the contents of the files, but cannot restore the images. I have found that SunOS had a special header in CPIO V2.5 images. I have found the source for cpio; now I need the definition of the SunOS header used inside the CPIO images.
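
    Before patching cpio itself, it may be worth exhausting the byte-swapping and format options on a copy of one image (a sketch; the filename is a placeholder, and whether any of these succeeds depends on how the SunOS header actually differs):

        # "byte swapped in header" sometimes just means the whole archive is swapped
        dd if=tape1.img conv=swab | cpio -ivdm 2>&1 | head
        # try the explicit old binary / old character formats with GNU cpio
        cpio -ivdm -H bin < tape1.img
        cpio -ivdm -H odc < tape1.img
        # GNU cpio can also swap bytes and halfwords itself while extracting
        cpio -ivdm -b < tape1.img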

    Read the article

  • Is it possible to temporarily disable non-global zones?

    - by Gary
    I frequently need to install a package on the global zone for a quick test on a development box. When there are multiple prompts for one package I have to answer them for each zone. If the zone is not running then I need to wait for the zone to start up, answer the prompts, etc. This is particularly annoying when I'm getting packages from http://www.sunfreeware.com and using the pkg-get utility, which nicely pulls in dependencies for you. Can I disable the zones temporarily? I haven't found a way to do this. Thanks.
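
    There isn't a single "pause all zones" switch that I know of, but two approaches that are often suggested are installing only into the current (global) zone with pkgadd -G, or detaching a zone for the duration of the test (a sketch; the zone name and package file are placeholders):

        # install the package into the current (global) zone only
        pkgadd -G -d /tmp/SOMEpkg.pkg all
        # or take a zone out of the picture temporarily
        zoneadm -z devzone halt
        zoneadm -z devzone detach
        # ...install and test in the global zone, then bring it back
        zoneadm -z devzone attach    # may need "attach -u" if package levels drifted
        zoneadm -z devzone boot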

    Read the article

  • When does `cron.daily` run?

    - by warren
    When do entries in cron.daily (and .weekly and .hourly) run, and is it configurable? I haven't found a definitive answer to this, and am hoping there is one. I'm running RHEL5 and CentOS 4, but answers for other distros/platforms would be great, too.
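
    On RHEL/CentOS these directories are driven by run-parts entries in /etc/crontab, so both the answer and the way to change it live there. Typical defaults are shown below; the exact minutes can differ between installs, and other distros may drive the same directories through anacron instead:

        # /etc/crontab (typical RHEL/CentOS defaults)
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly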

    Read the article

  • How do I force a specific MTU for only certain TCP ports?

    - by Dave S.
    Background I have a set of embedded hardware deployed in the field. These remote machines connect back to my servers at AWS running Ubuntu and I use the iptables mangle chain to lower the MTU to 500 so these devices are happy. For reference, this is the iptables rule I am using: -A POSTROUTING -p tcp --sport 12345 --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 500 Current Problem I'm trying to spin up some servers on the Joyent Cloud using SmartOS, but I can't find any information on selectively changing the MTU like I can on Linux (e.g. all info I've found is on changing it globally, which is not what I want). How would I do it so that all connections on TCP port 12345 get the MTU I want?

    Read the article

  • can I consolidate a multi-disk zfs zpool to a single (larger) disk?

    - by rmeden
    I have this zpool:

        bash-3.2# zpool status dpool
          pool: dpool
         state: ONLINE
          scan: none requested
        config:

            NAME                                     STATE     READ WRITE CKSUM
            dpool                                    ONLINE       0     0     0
              c3t600601604F021A009E1F867A3E24E211d0  ONLINE       0     0     0
              c3t600601604F021A00141D843A3F24E211d0  ONLINE       0     0     0

    I would like to replace both of these disks with a single (larger) disk. Can it be done? zpool attach allows me to replace one physical disk, but it won't allow me to replace both at once.
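
    Because both disks are top-level (striped) vdevs, the pool can't be shrunk onto one device in place; the usual route is to build a new pool on the big disk and move the data with zfs send/receive. A sketch (the new device name and pool name are placeholders, and it assumes a window where writes can be quiesced):

        zpool create newpool c3tXXXXXXXXd0          # the new, larger disk
        zfs snapshot -r dpool@migrate
        zfs send -R dpool@migrate | zfs receive -d -F newpool
        # verify the copy, then retire the old pool and reuse its name
        zpool destroy dpool
        zpool export newpool
        zpool import newpool dpool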

    Read the article

  • ZFS recordsize for VirtualBox and other virtual disks

    - by JOTN
    Has anyone run across any good benchmarks or other research on tuning the ZFS recordsize when putting virtual disk files on it for a guest OS? I'm using VirtualBox at the moment. I have noticed significant performance improvement when working with a DBMS by setting the ZFS recordsize to the same as the DB blocksize, so I'm guessing matching the blocksize of the guest filesystem would also be a good idea.
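
    For anyone experimenting along the same lines: the property is per-dataset and only affects newly written blocks, so the test is cheap to set up. A sketch (the dataset name and the 16K value are placeholders to be matched to whatever the guest filesystem or DBMS uses):

        # match recordsize to the guest's block size on the dataset holding the disk images
        zfs create tank/vbox
        zfs set recordsize=16K tank/vbox
        zfs get recordsize tank/vbox
        # note: existing .vdi files keep their old block layout until they are rewritten or copied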

    Read the article

  • SSH Keys Authentication keeps asking for password

    - by Rhyuk
    I'm trying to set up access from ServerA (SunOS) to ServerB (a custom Linux with keyboard-interactive login) with SSH keys. As a proof of concept I was able to do it between 2 virtual machines. Now in my real-life scenario it isn't working. I created the keys on ServerA, copied them to ServerB, and chmod'd the .ssh folders to 700 on both ServerA and ServerB. Here is the log of what I get:

        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: Peer sent proposed langtags, ctos:
        debug1: Peer sent proposed langtags, stoc:
        debug1: We proposed langtags, ctos: en-US
        debug1: We proposed langtags, stoc: en-US
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: dh_gen_key: priv key bits set: 125/256
        debug1: bits set: 1039/2048
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXX.XXX.XXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /XXX/.ssh/known_hosts:1
        debug1: bits set: 1061/2048
        debug1: ssh_rsa_verify: signature correct
        debug1: newkeys: mode 1
        debug1: set_newkeys: setting new keys for 'out' mode
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: newkeys: mode 0
        debug1: set_newkeys: setting new keys for 'in' mode
        debug1: SSH2_MSG_NEWKEYS received
        debug1: done: ssh_kex2.
        debug1: send SSH2_MSG_SERVICE_REQUEST
        debug1: got SSH2_MSG_SERVICE_ACCEPT
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /XXXX/.ssh/identity
        debug1: Trying public key: /xxx/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /xxx/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        Password:

    ServerB has pretty limited options since it's a custom proprietary Linux. What could be happening? EDIT WITH ANSWER: The problem was that I didn't have those settings enabled in sshd_config (refer to the accepted answer) AND that while pasting the key from ServerA to ServerB it would interpret the key as 3 separate lines. What I did, in case you can't use ssh-copy-id like I couldn't: paste the first line of your key into ServerB's authorized_keys file WITHOUT the last 2 characters, then type the missing characters from line 1 and the first one from line 2 yourself; this prevents adding a "new line" between the first and second line of the key. Repeat with the 3rd line.
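
    For the key-wrapping part specifically, a way to sidestep manual pasting entirely is to append the key over ssh itself (a sketch; it assumes password/keyboard-interactive login to ServerB still works and that the public key is ~/.ssh/id_rsa.pub):

        # run on ServerA; the key travels as one unbroken line
        cat ~/.ssh/id_rsa.pub | \
          ssh user@serverB 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'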

    Read the article

  • Does ZFS replace the need for hardware/software RAID?

    - by user53744
    I want to provide protection against data loss on my servers. Typically, I'd use hardware RAID 1 or 5, but I've been reading up on ZFS. Is it correct that ZFS itself provides RAID 1 or 5 like data protection WITHOUT needing a RAID controller card? If so, I assume a single hard drive is not enough to provide data protection since if that drive fails, all data fails, so how many hard drives do I need to be running for ZFS to provide this protection?
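
    For reference, redundancy in ZFS is declared when the pool is created, with no RAID controller underneath; a mirror pairs two disks, raidz is single-parity (RAID5-like), and raidz2 is double-parity (RAID6-like). A minimal sketch (device names are placeholders):

        # RAID1-style: survives the loss of one of the two disks
        zpool create tank mirror c1t0d0 c1t1d0
        # RAID5-style: single parity across three disks
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
        # RAID6-style: double parity across four disks
        zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0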

    Read the article

  • ZFS Configuration advice

    - by rbarrette
    I need some advice on configuring ZFS. Here is what I have:

        Physical disks:
        4x 3 TB
        2x 2 TB
        2x 1 TB

    What is the best configuration for my vdevs and storage pool? I want to maximize space but still maintain redundancy. Should I just get 2 more 3 TB drives and create two 3x 3 TB raidz2 storage pools? Or create a single 4x 3 TB raidz2 vdev? Can I put redundancy at the pool level, create individual vdevs for each drive, and then add 2x 1TB+2TB striped vdevs to keep all vdevs the same size? Keep in mind I do need to migrate data from the smaller drives, and I am planning on adding more 3 TB drives later on. What do you think?

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5. All of them mount the directory for all the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance using ZFS. The problem is that every NFS client trying to write something to a file over NFS hits this, inside the apache2 process:

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    and the process never gets the lock and keeps waiting forever. The effect? All the apache2 processes get used up waiting, and my servers can't process any other requests because there are no more processes available. I don't know where to find a solution; to me it's on the NFS server side, but which configuration is wrong or missing? How can I find what is wrong? If you need more information about the configuration, just ask me what can help you more :)
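
    On reasonably recent Linux clients, flock() over NFSv3 is typically mapped to NLM byte-range locks handled by the separate lockd/statd side services, so a quick sanity check from one of the web servers is to confirm the Nexenta box is actually answering for them (a sketch; the hostname is a placeholder):

        # both nlockmgr and status should be registered on the filer
        rpcinfo -p nexenta-filer | egrep 'nlockmgr|status'
        # and the lock manager's null procedure should answer over UDP
        rpcinfo -u nexenta-filer nlockmgr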

    Read the article

  • LDAP replication breaking referrals

    - by MasterZ
    We have an issue that we believe is caused by LDAP replication changing the port of the referral from 686 (secure) to 389 (insecure). If we set up a new referral everything works, but then as soon as we change someone's password it changes on the master, and then the master replicates and the referral breaks. Any further attempts to modify someone's account give the error "PAM: Cannot connect to LDAP". We used snoop and monitored the firewall to see what was going on. The first password attempt (the one that works) goes over port 686 (as it is supposed to), but every subsequent attempt uses port 389 and therefore fails. We only have one referral configured on the client, on port 686.

    Read the article

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem; the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output:

        [ Mar 30 14:59:54 Enabled. ]
        [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ]
        Starting server...
        [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ]

    Running the server directly on the command line is fine, and as far as I can see there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
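
    One direction worth trying, since the log shows the start method being killed for not exiting, is to tell SMF that the start method itself is the long-running process rather than expecting it to daemonize. A sketch (the FMRI site/customserver is a made-up placeholder):

        # add the startd property group if the manifest doesn't define one yet
        svccfg -s site/customserver addpg startd framework
        # "child" means the service lives as long as the start method's own process
        svccfg -s site/customserver setprop startd/duration = astring: child
        svcadm refresh site/customserver
        svcadm clear site/customserver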

    Read the article

  • Clear/ Reset Result Table of Search page in OAF

    - by PRajkumar
    A problem developers often face after creating a Search page is how to clear/reset the result table when the search page is opened for the first time, or when the user is redirected back to the search page from another page (say a delete page or an update page).

    Add the following code to your Search page controller, where you have constructed your query region:

        import oracle.apps.fnd.framework.webui.beans.layout.OAQueryBean;
        ...
        public void processRequest(OAPageContext pageContext, OAWebBean webBean)
        {
          super.processRequest(pageContext, webBean);
          // Here "QueryRN" is your query region name
          OAQueryBean queryBean = (OAQueryBean)webBean.findChildRecursive("QueryRN");
          queryBean.clearSearchPersistenceCache(pageContext);
        }

    Note: after adding this code there is no need to worry about the state of the Application Module (AM). It will clean up the result table automatically every time the Search page is opened for the first time and whenever you are redirected back to it. Still, as a good coding standard, always keep the AM state set to FALSE when redirecting back to the search page.

    Read the article

  • How Social Is Your Contact Center?

    - by Charles Knapp
    More than 75% of consumers have complained on a social site after a poor customer experience. Yet, 70% of companies have little understanding of the social media conversations about their brand. To deliver upon your brand promise, retain customers, and increase their lifetime value, you must deliver great customer experiences across social, mobile, phone, and chat channels. Siloed channels produce poor customer experiences. Social channels must integrate with the people, processes, technology, and traditional channels used to satisfy customers. The more effective a company’s social marketing, the greater the demand for effective social service. However, service is not a job for social marketers. It is a job for service specialists, focused on KPIs such as response time, first contact resolution, satisfaction, churn, retention, and customer lifetime value. Most social-enabled contact centers are at the early adopter stage, attempting to “bolt on” social media as a side process. Many are experiencing inconsistent customer experiences, higher costs, and negligible return on investments. Service leaders should consider carefully how to integrate social channels with their current customer service and support people, processes, technology, and channels. Here is one company realizing success: the pre-integrated Oracle RightNow Social Experience “empowers our contact center operations by enabling our agents to join customer conversations that are happening on social sites like Twitter and Facebook and integrate those conversations into our overall multichannel customer engagement processes.” — Lisa Larson, Drugstore.com

    Read the article

  • Comments syntax for Idoc Script

    - by kyle.hatlestad
    Maybe this is widely known and I'm late to the party, but I just ran across the syntax for making comments in Idoc Script. It's been something I've been hoping to see for a long time. And it looks like it quietly snuck into the 10gR3 release. So for comments in Idoc Script, you simply [[% surround your comments in these symbols. %]] They can be on the same line or span multiple lines. If you look in the documentation, it still mentions making comments using the syntax. Well, that's certainly not an ideal approach. You're stuffing your comment into an actual variable, it's taking up memory, and you have to watch double-quotes in your comment. A perhaps better way in the old method is to start with my comments . Still not great, but now you're not assigning something to a variable and worrying about quotes. Unfortunately, this syntax only works in places that use the Idoc format. It can't be used in Idoc files that get indexed (.hcsp & .hcsf) and use the <!--$...--> format. For those, you'll need to continue using the older methods. While on the topic, I thought I would highlight a great plug-in to Notepad++ that Arnoud Koot here at Oracle wrote for Idoc Script. It does script highlighting as well as type-ahead/auto-completion for common variables, functions, and services. For some reason, I can never seem to remember if it's DOC_INFO_LATESTRELEASE or DOC_INFO_LATEST_RELEASE, so this certainly comes in handy. I've updated his plug-in to use this new comments syntax. You can download a copy of the plug-in here which includes installation instructions.

    Read the article

  • Sun Storage 2500-M2 Array and Sun Fire X4470 M2 Server

    - by nospam(at)example.com (Joerg Moellenkamp)
    There is some new hardware in the Oracle portfolio. The first is the Sun Fire X4470 M2 Server. There was a lot of talk about the system before because of benchmark results, but now it's finally announced. Two or four Intel Xeon E7-4800 processors. Up to 1 TB of memory, as the system provides 64 DIMM slots for 16 GB DDR DIMMs. The memory is placed on riser cards right behind the fans of the chassis. Up to 6 internal drives. All in a 3 RU package. The other announcement was the Sun Storage 2500 M2, announced yesterday: from 5 to 48 drives (the latter number with three expansion trays) for up to 28.8 TB of storage. The array is SAS-based internally. You can put 300 GB and 600 GB drives in it. The 2540-M2 provides 4 (8 optional) FC ports at up to 8 Gbit/s. The 2530-M2 has 4 SAS2 ports at up to 6 Gbit/s. It has 2 integrated controllers providing 2 GB of cache protected by a power backup for 72 hours. The controllers enable the arrays to deliver RAID levels 0, 1, 10, 3, 5, and 6 (P+Q).

    Read the article

  • New Fusion Community, Community Name Changes and Upcoming Webcasts

    - by cwarticki
    Check out the new MOS Customer Relationship Management (CRM) community. This community has been featured in marketing events and is one of the more active communities so far. Support has also renamed the Fusion HCM community (now Human Capital Management (HCM)) and the Technical – FA community (now Fusion Applications Technology) in order to standardize our naming convention. Finally, we have two upcoming webcasts: 18-OCT-2012: Fusion Apps Security - User & Role Management using Oracle Identity Manager, featured in our Fusion Applications Technology community; 01-NOV-2012: Fusion Apps Security – Troubleshoot Data Role Issues, featured in our Fusion Applications Technology community. Check out our new Community. Attend our upcoming webcasts. Participate. Engage. Contribute. ~Chris

    Read the article
