Search Results

Search found 2310 results on 93 pages for 'solaris containers'.

Page 30/93

  • sudo equivalent configuration on solaris10

    - by daedlus
    Hi, I am looking to configure Solaris 10 to achieve the following: user=jon group=jtu (jon is owner of /opt/app); user=ken group=jtu (ken is owner of /data). On Linux I added the line below so that jon is able to access /data/tmp and delete files:

        %jtu ALL= NOPASSWD: /bin/*, /usr/bin/*

    This doesn't work on Solaris 10, since there is no sudo by default. How do I configure Solaris 10 so that jon is able to delete files in /data/tmp? Thanks

    Read the article

  • RBAC configuration on solaris10

    - by scot
    Hi, I am looking for an RBAC configuration on Solaris 10 to achieve the following: user=jon group=jtu (jon is owner of /opt/app); user=ken group=jtu (ken is owner of /data). On Linux I added the line below so that jon is able to access /data/tmp and delete files:

        %jtu ALL= NOPASSWD: /bin/*, /usr/bin/*

    This doesn't work on Solaris 10, since there is no sudo by default. How do I configure RBAC in Solaris 10 so that jon is able to delete files in /data/tmp? Thanks
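
    The two near-identical questions above boil down to the same task. A minimal RBAC sketch: the profile name "Tmp Cleanup" is an illustrative choice, and running rm as ken (the owner of /data) via uid=ken is an assumption about what access is wanted. Note that granting all of /usr/bin/rm is as broad as the original sudoers line:

        # /etc/security/prof_attr: declare the profile
        Tmp Cleanup:::Remove files under /data/tmp:

        # /etc/security/exec_attr: let the profile run rm with ken's uid
        Tmp Cleanup:suser:cmd:::/usr/bin/rm:uid=ken

        # /etc/user_attr: grant the profile to jon
        jon::::profiles=Tmp Cleanup

        # jon then deletes files through pfexec(1)
        pfexec rm /data/tmp/somefile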

    Read the article

  • ZFS - zpool ARC cache plus L2ARC benchmarking

    - by jemmille
    I have been doing lots of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster I can get the read speed. I also have 24GB of RAM in the machine that acts as ARC. vol0 is 6.4TB and the cache disks are 60GB SSDs. The pool layout is as follows:

          pool: vol0
         state: ONLINE
         scrub: none requested
        config:

            NAME                        STATE   READ WRITE CKSUM
            vol0                        ONLINE     0     0     0
              c1t8d0                    ONLINE     0     0     0
            cache
              c3t5001517958D80533d0     ONLINE     0     0     0
              c3t5001517959092566d0     ONLINE     0     0     0

    The issue is that I'm not seeing any difference with the SSDs installed. I've tried bonnie++ benchmarks and some simple dd commands to write a file and then read it back, running the benchmarks before and after adding the SSDs. I've ensured the file sizes are at least double my RAM, so there is no way it can all be cached in memory. Am I missing something here? When am I going to see the benefit of all that cache? Simply not under these circumstances? Are the benchmark programs poor for testing the effect of cache because of the way (and what) they write and read?
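
    One way to check whether the L2ARC is participating at all is to watch its kstat counters between benchmark runs; the statistic names below come from the zfs arcstats kstat group:

        # non-zero l2_size means data is being fed to the cache devices;
        # l2_hits vs. l2_misses shows whether reads ever come back from them
        kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses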

    Read the article

  • Getting ZFS per dataset IO statistics (or NFS per export IO statistics)

    - by jkj
    Where do I find statistics about how IO is divided between ZFS datasets? (zpool iostat only tells me how much IO the pool as a whole is experiencing.) All the relevant datasets are used through NFS, so I'd be happy with per-export NFS IO statistics too. We're currently running OpenIndiana. [edit] It seems that operation and byte counters are available in kstat:

        kstat -p unix:*:vopstats_???????
        ...
        unix:0:vopstats_2d90002:nputpage        50
        unix:0:vopstats_2d90002:nread           12390785
        ...
        unix:0:vopstats_2d90002:read_bytes      22272845340
        unix:0:vopstats_2d90002:readdir_bytes   477996168
        ...

    ...but the strange hexadecimal ID numbers have to be resolved from /etc/mnttab (better ideas?):

        rpool/export/home/jkj /export/home/jkj zfs rw,...,dev=2d90002 1308471917

    Now writing a munin plugin to use the data...
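
    A sketch of the mnttab lookup the poster describes, walking every mounted zfs dataset and printing its byte counters. It assumes (as in the example above) that the dev= mount option carries the same hex ID used in the vopstats kstat names:

        #!/bin/sh
        # read /etc/mnttab: device, mountpoint, fstype, options, mount time
        while read dev mnt fstype opts mtime; do
            [ "$fstype" = "zfs" ] || continue
            # pull the hex ID out of the options string (dev=2d90002 -> 2d90002)
            id=`echo "$opts" | sed -n 's/.*dev=\([0-9a-f]*\).*/\1/p'`
            [ -n "$id" ] || continue
            echo "=== $dev ($mnt)"
            kstat -p "unix:0:vopstats_$id:read_bytes" \
                     "unix:0:vopstats_$id:write_bytes"
        done < /etc/mnttab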

    Read the article

  • Setting up vncserver on OpenSolaris zone

    - by k.park
    I am running OpenSolaris (2009.06, per /etc/release below) and set up a sparse zone (it inherits most of the bin directories from the global zone). I ended up copying many etc and var files from the global zone, and eventually most things (firefox, gvim, etc.) work through ssh via X11 forwarding. However, I am having problems setting up vncserver in the zone. This is what I get when I try to start it:

        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5911
        vncext: created VNC server for screen 0

        Fatal server error:
        could not open default font 'fixed'

        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        xsetroot: unable to open display '%zone%:11'
        (the same two _X11Trans lines repeat before each of the following)
        vncconfig: unable to open display "%zone%:11"
        twm: unable to open display "%zone%:11"
        xterm Xt error: Can't open display: %zone%:11

    I already chmoded /tmp/.X11-pipe to 777, and there is no pipe in the /tmp/.X11-pipe or /tmp/.X11-unix directory. Here is my /etc/release:

        OpenSolaris 2009.06 snv_111b X86
        Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
        Use is subject to license terms.
        Assembled 07 May 2009
        BRAND: ipkg
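
    The fatal error is the stock X failure for an empty or wrong font path; the later _X11Trans errors only mean clients can't reach a display that never came up. One thing worth trying, with the caveat that the font directories below are the usual OpenSolaris locations and should be verified inside the zone first:

        # point Xvnc at an explicit font path when starting the server
        vncserver :11 -fp /usr/X11/lib/X11/fonts/misc/,/usr/X11/lib/X11/fonts/75dpi/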

    Read the article

  • OpenSolaris: unable to contact repository, connected to internet via proxy

    - by John-ZFS
    OpenSolaris b134: unable to refresh the package catalog. This system is connected to the internet via a proxy; it works in the browser, so how do I make the console/terminal proxy-aware too?

        user1@opensolaris134:~# pkg set-authority -O http://pkg.opensolaris.org/dev opensolaris.org
        pkg set-publisher: Could not refresh the catalog for opensolaris.org
        user1@opensolaris134:~# pkg image-update
        pkg: 0/1 catalogs successfully updated:
        Unable to contact valid package server
        Encountered the following error(s):
        Unable to contact any configured publishers.
        This is likely a network configuration problem.
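
    pkg(1) picks its proxy up from the standard environment variables rather than from any browser setting; a sketch, with the proxy host and port as placeholders for your own:

        # make the shell session (and pkg) proxy-aware
        export http_proxy=http://proxy.example.com:8080
        export https_proxy=http://proxy.example.com:8080
        pkg refresh --full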

    Read the article

  • How to build a NAS?

    - by Walter White
    Hi all, I have quite a few photos I'd like to organize, and I want to get away from scattered DVDs and move to a NAS solution. Ideally this would give me some level of redundancy and make it easier to find what I'm looking for, and hard drives are relatively cheap. I would like to run ZFS on the drives, with the ability to add or remove drives for additional redundancy, or to change the configuration of the pool. Is there a NAS box that lets you run your OS of choice (FreeNAS), so that all I'd need to do is buy the hard drives and the box and replace its firmware/OS with FreeNAS? Walter

    Read the article

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file. Therein lies the problem: on start-up, the service prints some logging messages to stderr. SMF appears to see those messages, believe them to be errors, and restart the service, giving up after a number of tries and leaving the service offline. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure the service to restart only if the process actually exits?
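
    Two knobs are commonly suggested for this kind of flapping, sketched below with a placeholder service name. Whether they fit depends on what the restarter is actually reacting to: SMF restarts on contract events (exits, core dumps, fatal signals) rather than on log output, so the stderr chatter may be a symptom rather than the trigger:

        # tell the restarter to ignore core-dump and signal events for the service
        svccfg -s myservice setprop startd/ignore_error = astring: core,signal
        svcadm refresh myservice

        # and/or, in the start method, give the daemon its own stderr log
        /opt/myapp/bin/serverd 2>>/var/log/myapp.err &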

    Read the article

  • log in to v20z console using serial management

    - by conandor
    I have a Netra T1 (SPARC) whose LOM lets me log in to the OS console for local troubleshooting without plugging in a monitor and keyboard. I also have a Sun Fire V20z (x86), which has an SP instead, but I cannot find an equivalent command to reach the console. Bringing a monitor and keyboard to the data center for local troubleshooting is not ideal. For example, if I accidentally mess up the OS network configuration and can no longer log in to the OS over the network, on the Netra I can log in to the console through serial management/LOM to troubleshoot it; that approach doesn't seem to work on the x86 machine. Is there any way to do the same thing on an x86 box?
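
    The V20z SP does expose the host console over its own management interface. A sketch of the usual sequence (the host name and account are placeholders, the exact command set can vary by SP firmware, and serial console redirection must be enabled in the host BIOS for this to show anything useful):

        # reach the service processor over its management network port
        ssh admin@v20z-sp
        # then, from the SP shell, attach to the platform (OS) console
        platform console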

    Read the article

  • Download Sun Studio via CLI

    - by ramesh.mimit
    Can anybody please guide me on how to download Sun Studio from the CLI? I tried the wget and lynx programs, but neither worked. I only have SSH access to my server; downloading to my local machine and uploading to the server is a bad option for me, as the upload would take hours. The Sun Studio download requires registration and authentication. I have both, but I am not sure how to pass those credentials when downloading via the CLI.
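
    One workable pattern for registration-gated downloads: complete the login once in a desktop browser, export that session's cookies to a file, and hand the file to wget on the server. A sketch, with the cookie file name and download URL as placeholders for your own:

        # reuse an authenticated browser session from the command line
        wget --load-cookies=cookies.txt \
             "https://example.com/path/to/sunstudio-download"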

    Read the article

  • nocheck within admin file for pkgadd still asks questions

    - by romant
    I place the following into an admin file called noask:

        mail=
        instance=overwrite
        partial=nocheck
        runlevel=nocheck
        idepend=nocheck
        rdepend=nocheck
        space=nocheck
        setuid=nocheck
        conflict=nocheck
        action=nocheck
        basedir=default

    Then I run pkgadd -a noask -d sed-4.1.5-sol10-x86-local, yet I am still queried with 'Select package(s) you wish to process'. Is there a way around the questioning without piping in an "echo yes" at the front? Thank you
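
    That particular prompt doesn't come from any of the admin-file checks; pkgadd asks it whenever the command line names a datastream but no package instances. Naming the packages, or the keyword all, should silence it, with the admin file otherwise unchanged:

        # select every package in the datastream non-interactively
        pkgadd -a noask -d sed-4.1.5-sol10-x86-local all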

    Read the article

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices in it and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool, and I wanted to name the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot: too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices is 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking that since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL, but if not, that would be nice to know. Edit: more info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:

            testpool    ONLINE
              c5t8d0    ONLINE
              c5t9d0    ONLINE
              c5t10d0   ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
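
    That is exactly what zpool import's optional trailing argument does: import by the numeric ID and rename in one step (the new pool name here is an arbitrary example):

        # import the old pool by ID under a fresh name, leaving 'testpool' alone
        zpool import 14781458723915654709 testpool-old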

    Read the article

  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The scripts rely on environment variables defined in a file, ~/<user>.env, which I source from .bashrc. This works fine when invoking a script from the command line, but if I want to add the script as a crontab entry I run into the problem that .bashrc isn't read. My question: what is the best-practice approach to this problem? I realise I could define a crontab entry such as:

        * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh'

    ...but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but that would become a nightmare to maintain. Any help appreciated.
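
    One common middle ground is a tiny wrapper that loads the environment once and then executes whatever it is given; a sketch (the wrapper path is a made-up example, and the file needs chmod +x):

        #!/bin/sh
        # /home/myuser/scripts/with-env: source the user env, then run the command
        . /home/myuser/myuser.env
        exec "$@"

    The crontab entries then stay readable:

        * * * * 1-5 /home/myuser/scripts/with-env /home/myuser/scripts/myscript.sh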

    Read the article

  • Why's SMC failing on startup?

    - by Brian Knoblauch
    Trying to remove a user from one of our servers, but I seem to be thwarted at every turn. SMC refuses to load the user list (failing with a NoClassDefFoundError in the listAll method of UserContent), and vipw just returns "vipw: /etc/passwd file busy". I'm the only user on the system at the moment (it's our backup SRSS box), and both of these fail even right after a reboot. I don't have console access at the moment either, unfortunately (or I would try single-user mode). Of course, even if init S worked and let me finish this one task, it wouldn't solve the root problem. Ideas?
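
    The "file busy" message from vipw usually indicates a stale lock/temp file left by an interrupted edit rather than a live editing session, especially if it persists across reboots. Which file implements the lock varies by system, so treat the names below as candidates to inspect (and investigate before deleting anything):

        # look for leftover passwd lock/temp files from an aborted vipw or passwd run
        ls -l /etc/ptmp /etc/stmp /etc/.pwd.lock 2>/dev/null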

    Read the article

  • Trigger ZFS dedup one-off scan/rededup

    - by Jake Wharton
    I have a ZFS filesystem which has been running for some time, and I recently had the opportunity to upgrade it (finally!) to the latest ZFS version. Our data doesn't scream dedup, but I firmly believe, based on small tests, that we could gain anywhere from 5-10% of our space back for free by utilizing it. I have enabled dedup on the filesystem and new files are slowly being dedupified, but the majority (95%+) of our data already existed on the filesystem beforehand. Short of moving the data off-pool and then recopying it back, is there any way to trigger a dedup scan of the existing data? It doesn't have to be asynchronous or live. (And FYI there isn't enough room on the pool to copy the entire filesystem to another and then just switch the mounts.)
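
    Dedup in ZFS applies only at write time; there is no background rededup pass to trigger. The standard workaround is to rewrite the data so it flows through the dedup code, e.g. via send/receive within the pool, which of course needs the transient free space the poster says is unavailable (dataset names below are placeholders):

        # rewrite a dataset so its existing blocks gain dedup table entries
        zfs snapshot tank/data@rededup
        zfs send tank/data@rededup | zfs receive tank/data.new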

    Read the article

  • Older raid controllers in raid 5 vs. Jbod and SW raid

    - by TEB
    Hi. I'm in the fortunate position of having 6 older Supermicro VOD servers with the following config:

        Supermicro 3U case, 3x PSU
        Dual Xeon 3GHz P4-class CPUs (about 5 years old; haven't checked the exact type)
        4GB RAM
        3ware 9500-8 SATA controller
        8 SATA slots and a lot of free drives
        2GB flash boot drive

    What I'm curious about is the RAID5 performance of these old beasts in HW mode vs. SW RAID on Linux with the controller set to JBOD mode. I'm thinking of using CentOS 5.5 or Ubuntu, or ZFS raidz on OpenSolaris. Any tips or recommendations? Best regards, TEB

    Read the article

  • auspex LFS backups

    - by user1250465
    I have some backup tapes which were written on an Auspex file server with the SunOS version of the cpio command. Now that I need to restore them (and of course there are no more Auspex servers in existence), the backups won't restore because the headers are not standard. I have dumped the tape images to disk, but pax, cpio, and tar cannot read them; I've tried all of the cpio format options. The errors I get are "name too long", "byte swapped in header", or just junk output. I can open the images and read the contents of the files, but cannot restore them. I have found that SunOS had a special header in cpio v2.5 images, and I have found the source for cpio; now I need the definition of the SunOS header inside cpio.
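
    Before digging into the source, the "byte swapped in header" error is sometimes cured by swapping on read; two variants worth trying against a copy of an image (GNU cpio's -b flag and dd's conv=swab perform related but not identical swaps, so try both):

        # swap byte pairs in the stream before cpio sees it
        dd if=tape.img conv=swab | cpio -idmv

        # or let GNU cpio swap both halfwords and bytes itself
        cpio -idmvb < tape.img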

    Read the article

  • What would cause Memcached to Hang for 2+ seconds?

    - by Brad Dwyer
    I'm going nuts trying to scale memcached. From their site: "Memcached operations are almost all O(1). Connecting to it and issuing a get or stat command should never lag. If connecting lags, you may be hitting the max connections limit. See ServerMaint for details on stats to monitor. If issuing commands lags, you can have a number of tuning problems. Most common are hardware problems, not enough RAM (swapping), network problems (bandwidth, dropped packets, half-duplex connections). On rare occasion OS bugs or memcached bugs can contribute." Well, it is most certainly not performing like an O(1) operation for me. Under low to normal load on our site, memcached response times for get and set ops are about 0.001 seconds. Not bad. But if we triple the load, we get outliers that take 100x (or in rare cases 1000x!) that long. I even had one instance where it took 2.2442 seconds for memcached to store a value. Obviously this is killing our site. Here's the output of Memcached->getStats() during one of the slow periods:

        [pid] => 18079
        [uptime] => 8903
        [threads] => 4
        [time] => 1332795759
        [pointer_size] => 32
        [rusage_user_seconds] => 26
        [rusage_user_microseconds] => 503872
        [rusage_system_seconds] => 125
        [rusage_system_microseconds] => 477008
        [curr_items] => 42099
        [total_items] => 422500
        [limit_maxbytes] => 943718400
        [curr_connections] => 84
        [total_connections] => 4946
        [connection_structures] => 178
        [bytes] => 7259957
        [cmd_get] => 1679091
        [cmd_set] => 351809
        [get_hits] => 1662048
        [get_misses] => 17043
        [evictions] => 0
        [bytes_read] => 109388476
        [bytes_written] => 3187646458
        [version] => 1.4.13

    So the things I have ruled out so far are: hitting the max connections limit (curr_connections of 84 is well below the default max of 1024), and swapping (the machine has 900M of its 1024M of memory dedicated to memcached on a dedicated box, and it only appears to be holding about 7MB of data, per the bytes stat). How would I diagnose the other hardware problems? prstat doesn't really show a whole lot going on in terms of CPU or memory usage. I'm not sure how to rule out network problems, but as this is a dedicated server on the same private network as the web box, I don't think it's a connectivity issue (ping is less than a millisecond between the boxes). Is there something else I'm missing here? It's driving me nuts. Edit: I also forgot to mention that I've tried both persistent and non-persistent connections, with minimal to no impact.
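
    One more counter worth sampling before ruling out the connection limit: curr_connections is only an instantaneous value, while listen_disabled_num counts every time the daemon has ever stopped accepting because it was full. A quick check from the web box (the host name is a placeholder):

        # a growing listen_disabled_num means the connection ceiling is being
        # hit in bursts even if curr_connections looks low when sampled
        echo stats | nc memcached-host 11211 | grep listen_disabled_num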

    Read the article

  • Is it possible to temporarily disable non-global zones?

    - by Gary
    I frequently need to install a package in the global zone for a quick test on a development box. When there are multiple prompts for one package, I have to answer them for each zone; if a zone is not running, I also need to wait for it to boot, answer the prompts there, and so on. This is particularly annoying when I'm getting packages from http://www.sunfreeware.com and using the pkg-get utility, which nicely pulls in dependencies for you. Can I disable the zones temporarily? I haven't found a way to do this. Thanks.
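
    If the real goal is just to keep the install out of the non-global zones, pkgadd's -G flag scopes the operation to the current (global) zone, with no zone shutdown required (the datastream name is a placeholder):

        # install into the global zone only, skipping the non-global zones
        pkgadd -G -d package-datastream all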

    Read the article

  • When does `cron.daily` run?

    - by warren
    When do entries in cron.daily (and .weekly and .hourly) run, and is it configurable? I haven't found a definitive answer to this and am hoping there is one. I'm running RHEL5 and CentOS 4, but answers for other distros/platforms would be great too.
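
    On RHEL/CentOS of that vintage the schedule is plain text in /etc/crontab, where run-parts sweeps each directory; the stock entries look like this, and editing the times there is the supported way to change them:

        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly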

    Read the article

  • SamFS performance problem on file creation

    - by Gregor Longariva
    I have two SamFS filesystems (samfs1 and samfs2), both on the same 6130, both with the same config/watermarks/timeouts etc. Creating a file on samfs2 works as it should; on samfs1 it does not. A simple script shows that every once in a while the file creation takes between 11 and 28 seconds:

        stan 12:32 [scratch]# while ( 1 )
        while? echo -
        while? time echo test > file
        while? time mv file file2
        while? echo +
        while? sleep 1
        while? end
        0.00u 0.00s 0:00.01 0.0%
        0.00u 0.00s 0:00.00 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.03 0.0%
        +
        0.00u 0.00s 0:23.71 0.0%
        0.00u 0.00s 0:00.14 0.0%
        +
        0.00u 0.00s 0:00.18 0.0%
        0.00u 0.00s 0:00.13 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.06 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.04 0.0%
        +
        0.00u 0.00s 0:00.04 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.01 0.0%
        +
        0.00u 0.00s 0:26.05 0.0%
        0.00u 0.00s 0:00.50 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.06 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.12 0.0%
        +

    Any idea where the problem could be?

    Read the article

  • How do I force a specific MTU for only certain TCP ports?

    - by Dave S.
    Background: I have a set of embedded hardware deployed in the field. These remote machines connect back to my servers at AWS running Ubuntu, and I use the iptables mangle chain to clamp the TCP MSS (effectively lowering the MTU) to 500 so these devices are happy. For reference, this is the iptables rule I am using:

        -A POSTROUTING -p tcp --sport 12345 --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 500

    Current problem: I'm trying to spin up some servers on the Joyent Cloud using SmartOS, but I can't find any information on selectively changing the MTU the way I can on Linux (all the information I've found is about changing it globally, which is not what I want). How would I do it so that all connections on TCP port 12345 get the MTU I want?

    Read the article

  • can I consolidate a multi-disk zfs zpool to a single (larger) disk?

    - by rmeden
    I have this zpool:

        bash-3.2# zpool status dpool
          pool: dpool
         state: ONLINE
          scan: none requested
        config:

            NAME                                       STATE   READ WRITE CKSUM
            dpool                                      ONLINE     0     0     0
              c3t600601604F021A009E1F867A3E24E211d0    ONLINE     0     0     0
              c3t600601604F021A00141D843A3F24E211d0    ONLINE     0     0     0

    I would like to replace both of these disks with a single (larger) disk. Can it be done? zpool replace (or attach/detach) lets me swap one physical disk at a time, but it won't let me collapse both top-level disks into one.
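
    On that ZFS generation a striped pool can't lose a top-level vdev, so the consolidation has to go through a second pool; a sketch, with the new disk and pool names as placeholders:

        # build the destination pool on the single larger disk
        zpool create newpool c3tNEWDISKd0
        # snapshot everything and replicate it across
        zfs snapshot -r dpool@move
        zfs send -R dpool@move | zfs receive -F -d newpool
        # once verified, retire the old pool
        zpool destroy dpool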

    Read the article
