Search Results

Search found 3295 results on 132 pages for 'solaris cluster'.

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (Mountain Lion). The only missing piece is that Mac OS X can't read my external ZFS-based USB drive, where I store all my data. So I decided to look for a solution.

    Possible solution: use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps.

    Installing a Solaris 11 VM:
    1. Install VirtualBox on your Mac OS X and add the extension pack (needed for USB).
    2. Plug your ZFS-based USB drive into your Mac; ignore it when asked to initialize it.
    3. Create a VM for Solaris (bridged network) and, before installing it, create a USB filter: in the settings of your VirtualBox VM, go to Ports, then USB, then add a new USB filter from the attached device (the grey USB-connector logo with the green plus sign).
    4. Install the Solaris 11 VM, boot it, and install the Guest Additions.
    5. Check the IP address of your Solaris VM with "ifconfig -a".

    Creating a path to your ZFS USB drive:
    1. In Mac OS X, use Disk Utility to unmount the attached USB drive, then unplug the USB device.
    2. Switch back to VirtualBox and select the window where your Solaris 11 VM is running.
    3. Plug in your ZFS USB drive, and select "Ignore" if Mac OS X invites you to initialize the disk.
    4. In the VirtualBox VM menu, go to "Devices", then "USB Devices", and select your USB device from the drop-down menu.

    Connecting your Solaris VM to the USB drive:
    1. Inside Solaris, check that your device is accessible by using the "format" CLI command. If it isn't, repeat the previous steps.
    2. With root privileges, force a "zpool import -f myusbdevicepoolname", because this pool was created on another system.
    3. Check that you see your new pool with "zpool status".
    4. Share your pool over NFS: "share -F nfs /myusbdevicepoolname".

    Accessing the USB ZFS drive from Mac OS X: this is the easiest step, accessing an NFS share from Mac OS X.
    1. Create a "ZFSdrive" folder on your Mac OS X desktop.
    2. From a terminal under Mac OS X: "mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive".

    Et voila! You can now access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can play with the share rights to adjust read/write rights as needed, and you can activate compression, encryption, etc. inside the Solaris 11 VM.
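    For reference, here is the command-line half of the recipe gathered into one place, as a minimal sketch; the pool name and VM address are the placeholders used above, and the legacy share syntax may vary slightly across Solaris 11 updates:

        # Inside the Solaris 11 VM, as root: import the foreign pool and share it
        zpool import -f myusbdevicepoolname    # -f because the pool was created on another system
        zpool status myusbdevicepoolname       # verify the pool imported cleanly
        share -F nfs /myusbdevicepoolname      # legacy share syntax; newer builds also offer zfs share properties

        # On the Mac OS X side: mount the share onto a desktop folder
        mkdir -p ~/Desktop/ZFSdrive
        sudo mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname ~/Desktop/ZFSdrive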

    Read the article

  • exported variable not persisted after script execution

    - by Daniele
    I'm facing a weird issue. I have a VM with Solaris 11 and I'm trying to write some bash scripts. If, on the shell, I type:

        export TEST=aaa

    and subsequently run "set", I correctly see a new environment variable named TEST whose value is aaa. If, however, I do basically the same thing in a script, then when the script terminates I do not see the variable set. To give a concrete example, if in a file test.sh I have:

        #!/usr/bin/bash
        echo 1: $TEST    # variable not defined yet, expect to print only "1:"
        echo 2: $USER
        TEST=sss
        echo 3: $TEST
        export TEST
        echo 4: $TEST

    it prints:

        1:
        2: daniele
        3: sss
        4: sss

    and after its execution, TEST is not set in the shell. Am I missing something? I tried both "export TEST=sss" and the separate variable set/export, with no difference.
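    For what it's worth, this is standard Unix behaviour rather than a Solaris quirk: the script runs in a child process, and environment variables only propagate from parent to child, never back up. A small sketch of the difference, assuming the test.sh above:

        ./test.sh        # runs in a child shell; TEST dies with the script
        echo $TEST       # prints an empty line

        . ./test.sh      # "sourcing" runs the script in the current shell
        echo $TEST       # prints sss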

    Read the article

  • Sun T1000 Fan Noise - Big Issue

    - by Ed Austin
    Long and short: the T1000 makes more noise than seven HP servers combined. The firmware is updated to the latest version, PICL is running fine under Solaris, and diagnostics are all OK. It is driving us mad. We have a server room in our small office and we can hear this baby through thick concrete. I cannot believe Sun doesn't mention how loud they are; I thought this was an environmentally friendly server. Help really appreciated before it drives us all mad.

    Read the article

  • How to synchronize users, passwords, hosts, etc without NIS

    - by joshxdr
    I am administering a very small Solaris 2.6 network with 4 boxes total. Is it possible to use scp or similar to replace NIS for synchronizing users, groups, hosts, etc.? This network is only a small part of my job and I don't want to spend too much time on it, and I am worried the setup and maintenance of NIS will not pay off. I need it to behave like a proper multi-user system: when a user logs into any machine, the users, passwords, hosts, etc. are always the same. Is there an easy way to do this with scp? Right now I copy /etc/passwd from one box to another with scp, but sometimes I make mistakes or forget a step, and scp inside shell scripts doesn't seem to work so well, since it requires password authentication. Any recommendations would be welcome.
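    A minimal sketch of the scripted approach, assuming key-based ssh so the copies can run unattended from cron (the host and file lists are illustrative, and /etc/shadow deserves the same care as /etc/passwd):

        # One-time setup on the master box: create a passphrase-less key and
        # append the public half to each client's authorized_keys
        ssh-keygen -t rsa
        cat ~/.ssh/id_rsa.pub | ssh box2 'cat >> ~/.ssh/authorized_keys'

        # Cron-able push of the shared files to every other box
        for host in box2 box3 box4; do
            for f in /etc/passwd /etc/shadow /etc/group /etc/hosts; do
                scp -p "$f" "$host:$f"
            done
        done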

    Read the article

  • Extremely slow startup of Tomcat

    - by Henrik
    I have a Tomcat 7 installation on a Solaris 10 server. My problem is that starting the server (or deploying a new war) is extremely slow; it usually takes 30-60 minutes. The war is a medium-sized Grails application, so there are quite a lot of files. The server is running other server applications as well, but from my basic skills I don't see this as a problem. Can anyone give me some tips on how to analyse this? Settings in Tomcat, Java, the server, disc access, or something else? I use these parameters for Tomcat:

        CATALINA_OPTS="-Dcom.sun.management.jmxremote=true -Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+DisableExplicitGC"

    And I use a 32-bit Java 1.6.
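    One hedged way to see where the time goes is to take repeated thread dumps while the deploy is grinding and look at what the startup thread is doing (the PID grep below is illustrative). If it sits in SecureRandom or entropy reads, that is a known slow-startup culprit on some Unix systems; if it sits in jar or file scanning, the sheer file count is the issue:

        # Thread-dump the Tomcat JVM every 30 seconds during startup
        pid=$(ps -ef | grep '[c]atalina' | awk '{print $2}')
        while :; do
            jstack "$pid" >> /tmp/tomcat-startup-dumps.txt   # or: kill -QUIT "$pid" (dump lands in catalina.out)
            sleep 30
        done

        # On Solaris, truss shows whether the time is spent on disc or elsewhere
        truss -c -p "$pid"    # per-syscall counts; Ctrl-C prints the summary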

    Read the article

  • Fastest OS for Java applications

    - by user33408
    Hello, I have a multithreaded Java program and I am dealing with certain performance problems. I have improved everything, hardware and software, and now I think it's time to move to a more suitable operating system. I was wondering: which OS is the fastest for the Java virtual machine? I am using Sun Java 6. Do you think Sun Solaris would be the best choice for Java applications? Or FreeBSD? Or CentOS (which I am using currently)? Thanks

    Read the article

  • NFS compound failed for server foosrv: error 7 (RPC: Authentication error)

    - by automatthias
    I'm setting up an Ubuntu NFS server with a Solaris 10 client. The basic configuration looks okay to me, and it was also working for some time. I'm getting an "RPC: Authentication error" message on the client.

    On the server, /etc/exports:

        /export/opencsw-future 192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)
        /export/opencsw-current 192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)

        $ ls -ld /export/opencsw-current
        drwxr-xr-x 7 maciej maciej 4096 2012-02-05 14:55 /export/opencsw-current

    On the client:

        $ grep opencsw /etc/vfstab
        foosrv:/opencsw-current - /export/opencsw-current nfs - yes -

        $ sudo mount /export/opencsw-current
        NFS compound failed for server foosrv: error 7 (RPC: Authentication error)
        (...repeated...)
        nfs mount: mount: /export/opencsw-current: Permission denied

    My server host name resolves to both IPv4 and IPv6 addresses.
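    A couple of hedged diagnostics from the client side (standard ONC RPC/NFS tools; the vers=3 mount is a test, not a fix, and the address below is a placeholder). It may also be worth double-checking that the path in vfstab, /opencsw-current, matches the exported /export/opencsw-current:

        # Can the client reach the server's RPC services at all?
        rpcinfo -p foosrv

        # What does the server think it exports?
        showmount -e foosrv       # or the native Solaris spelling: dfshares foosrv

        # Rule out NFSv4 security negotiation and the IPv4/IPv6 dual resolution
        # by pinning the version and the address
        mount -o vers=3 192.168.3.1:/export/opencsw-current /mnt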

    Read the article

  • Top causes of slow ssh logins

    - by Peter Lyons
    I'd love for one of you smart and helpful folks to post a list of common causes of delays during an ssh login. Specifically, there are two spots where I see a range from instantaneous to multi-second delays:

    1. between issuing the ssh command and getting a login prompt, and
    2. between entering the passphrase and having the shell load.

    Now, specifically, I'm looking at ssh details only here. Obviously network latency, speed of the hardware and OSes involved, complex login scripts, etc. can cause delays. For context, I ssh to a vast multitude of Linux distributions and some Solaris hosts, using mostly Ubuntu, CentOS, and Mac OS X as my client systems. Almost all of the time, the ssh server configuration is unchanged from the OS's default settings. Which ssh server configuration settings should I be interested in? Are there OS/kernel parameters that can be tuned? Login shell tricks? Etc.
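    Two of the most common culprits, for what it's worth, are reverse-DNS lookups of the client address and GSSAPI/Kerberos negotiation against an unreachable KDC. A hedged sketch of checking and disabling both (config paths vary by OS):

        # From the client, watch where the handshake stalls
        ssh -vvv user@host

        # In /etc/ssh/sshd_config on the server, then restart sshd:
        UseDNS no                  # skip the reverse lookup of the connecting address
        GSSAPIAuthentication no    # skip Kerberos negotiation attempts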

    Read the article

  • Understanding the code: how the heap is written during process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does, as it is part of my project and I am stuck here. The code is from libckpt on Solaris.

        /**********************************
         * function: write_heap
         * args: map_fd  -- file descriptor for map file
         *       data_fd -- file descriptor for data file
         * returns: no. of chunks written on success, -1 on failure
         * side effects: writes all included segments of the heap to ckpt files
         * misc.: If we are forking and copyonwrite is set, we will write the
         *        heap from bottom to top, moving the brk pointer up each time
         *        so that we don't get a page copied if the
         * called from: take_ckpt()
         ***********************************/
        static int write_heap(int map_fd, int data_fd)
        {
            Dlist curptr, endptr;
            int no_chunks = 0, pn;
            long size;
            caddr_t stop, addr;

            if (ckptflags.incremental) {    /*-- incremental checkpointing on? --*/
                endptr = ckptglobals.inc_list->main->flink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->blink->blink;
                     curptr != endptr; curptr = curptr->blink) {

                    /*-- write out the last page in the included chunk --*/
                    stop = curptr->addr;
                    pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
                    if (isdirty(pn)) {
                        addr = (caddr_t)max((long)curptr->addr,
                                            (long)((pn * PAGESIZE) + sys.DATASTART));
                        size = (long)curptr->stop - (long)addr;
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n",
                              addr, addr + size, pn);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }

                    /*-- write out all the whole pages in the middle of the chunk --*/
                    for (pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--) {
                        if (isdirty(pn)) {
                            addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                            debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n",
                                  addr, addr + PAGESIZE, pn);
                            if (write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1) {
                                return -1;
                            }
                            if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                                brk(addr);
                            }
                            no_chunks++;
                        }
                    }

                    /*-- write out the first page in the included chunk --*/
                    addr = curptr->addr;
                    size = ((pn + 1) * PAGESIZE + sys.DATASTART) - addr;
                    if (size > 0 && isdirty(pn)) {
                        debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n",
                              addr, addr + size);
                        if (write_chunk(addr, size, map_fd, data_fd) == -1) {
                            return -1;
                        }
                        if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                            brk(addr);
                        }
                        no_chunks++;
                    }
                }
            } else {                        /*-- incremental checkpointing off! --*/
                endptr = ckptglobals.inc_list->main->blink;

                /*-- for each included chunk of the heap --*/
                for (curptr = ckptglobals.inc_list->main->flink->flink;
                     curptr != endptr; curptr = curptr->flink) {
                    debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n",
                          curptr->addr, curptr->addr + curptr->size);
                    if (write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1) {
                        return -1;
                    }
                    if ((int)addr > (int)(&end) && ckptflags.enhanced_fork) {
                        brk(addr);
                    }
                    no_chunks++;
                }
            }
            return no_chunks;
        }

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc.).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    • Announcing the arrival or departure of a member
    • Updating partition assignment maps across the cluster
    • Creating or destroying a NamedCache
    • Invalidating a cache entry from a large number of client-side near caches
    • Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    • Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
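    As a back-of-the-envelope sketch of the arithmetic above (the 70,000 packets per second figure is the assumption taken from the text):

        # Each cluster-wide message costs (members - 1) unicast packets under WKA
        pkts_per_sec=70000                               # assumed 1 GbE send budget
        members=500
        echo $(( pkts_per_sec / (members - 1) ))         # ~140 cluster-wide msgs/s per server
        echo $(( pkts_per_sec / (members - 1) / 10 ))    # ~14 with 10 members per machine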

    Read the article

  • Which process is using my NAS?

    - by sethu
    I have a NAS connected to my cluster; the NAS holds all our home directories. When I did a set of experiments last week, saving a 1 GB file to the NAS took around 30 seconds, while doing the same to a local disk takes 18 seconds. But when I tried the same process today, it took 150 seconds. I am unsure what the problem is. Can someone help me point out the issue? Is it possible to find out which process is accessing the NAS, or how much NAS bandwidth is being used? Thanks for your help. -Sethu
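    Assuming the NAS is NFS-mounted on Linux clients, a few hedged starting points (iftop and nethogs are third-party tools and may need installing; the mount point and interface names are placeholders):

        # Which processes have files open under the mount point?
        fuser -vm /home            # or: lsof +D /home  (can be slow on large trees)

        # Client-side NFS operation counters; run twice and compare the deltas
        nfsstat -c

        # Live bandwidth toward the NAS
        iftop -i eth0              # per-connection; nethogs eth0 shows per-process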

    Read the article

  • Coordinating a script to run on only one of several identical load-balanced servers

    - by Amos Shapira
    I have two identically configured CentOS 5 servers (possibly more in the future). I need to run a cron job on any one of them, and it must run on only one of them. I know about Red Hat Cluster Suite (we use it on other servers), but it's too big a gun for this task, plus it doesn't really behave well with fewer than three nodes. Is there anything lightweight I can use for this? The servers can communicate with each other directly. I suppose I could develop something over ssh or NRPE (two services which are already installed on these servers), but I was wondering whether there is something already available.
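    One lightweight pattern, sketched under the assumption that the hosts can ssh to each other without a password: install the same cron entry on both machines, designate one as primary, and let the secondary step in only when the primary is unreachable (hostnames and the job path are placeholders, and the check trades a small window of double or missed runs for simplicity):

        #!/bin/sh
        # run-on-one.sh -- cron wrapper installed on both servers
        PRIMARY=web01
        JOB=/usr/local/bin/the-real-job

        if [ "$(hostname)" = "$PRIMARY" ]; then
            exec "$JOB"
        fi

        # Secondary: run the job only if the primary does not answer over ssh
        if ! ssh -o ConnectTimeout=5 "$PRIMARY" true 2>/dev/null; then
            exec "$JOB"
        fi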

    Read the article

  • diskpart on RDMs...

    - by karnash
    Hi, we have an ESXi cluster which is attached to a CLARiiON CX4, with Windows 2008 R2 as the guest OS. Attached to this VM are 2 x 1.95 TB RDMs. I select disk 1 and run:

        create partition primary size=1

    (1 MB), then "list partition" shows:

          Partition ###  Type     Size     Offset
          -------------  -------  -------  -------
        * Partition 1    Primary  1024 KB  1024 KB

    Then I do the same for the other disk, and the offset is 1024 KB. I need to present a 4 TB disk to this VM, so I right-click on disk 1, convert to simple volume, then extend it by adding the second disk. Now when I do "list partition", I see the offset is set to 31 KB. Can anyone please guide me? Thanks
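    If the aim is to keep the 1024 KB alignment on both disks before the spanned volume is created, diskpart can be given an explicit alignment; a sketch, not verified against how 2008 R2 re-lays out dynamic disks when spanning:

        rem align.txt -- run with: diskpart /s align.txt
        select disk 1
        create partition primary align=1024
        list partition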

    Read the article

  • Linux HA - Best Heartbeat hardware solution

    - by Martino Dino
    Hi all, I would like to ask what the best layer-2 medium for Heartbeat in Linux is, and how it is best configured. More precisely, I've been thinking about a dedicated NIC for that purpose, but then I thought that if a switch breaks, I would lose the heartbeat connection for most of the cluster, and STONITH 'BUM'!!! I will probably lose my job after that :) Distributing the heartbeat onto the main NICs of every node through a VIF sounds reasonable, but I'm not sure if this is the best option (at least the switches are redundant to some extent). Is it possible to use heartbeat over a bonded interface, and does that sound reasonable? Do you have any other tips/solutions for this issue?
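    For what it's worth, Heartbeat accepts several communication paths in ha.cf and uses them all, so one common belt-and-braces setup is a path per NIC plus a serial crossover that no switch can take down; a sketch with illustrative interface names:

        # /etc/ha.d/ha.cf -- every listed path carries heartbeats
        bcast eth1              # dedicated heartbeat NIC (back-to-back or own switch)
        ucast eth0 10.0.0.2     # second path over the main (possibly bonded) interface
        serial /dev/ttyS0       # switch-independent serial link
        deadtime 30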

    Read the article

  • Upgrading drives on an MD3000

    - by Anonymouse
    Hello, our MD3000 array is getting full as our databases grow, and we need more space. Currently we use an MD3000 with a two-server Windows 2003 cluster and 15 x 73 GB SAS drives. Disk groups are configured as RAID 1 pairs of two drives. The approach we are currently investigating is simply swapping the existing SAS drives for bigger ones (300 GB instead of 73 GB), one at a time, letting each RAID 1 array rebuild in between. Is this a good approach? Will we be able to resize the array afterwards? Will we be able to resize the partitions afterwards? Can the Dell MD3000 management software do it, or will we have to bring the server offline and use some partitioning software? Thanks in advance.

    Read the article

  • How to make a DHCP server on a virtual machine serve other virtual machines (on different physical machines)?

    - by Tony
    I'm building a virtual cluster with VirtualBox and openSUSE. I have 10 physical machines and need several VMs on each. The virtual machines are supposed to be in a "private" network, but still have internet access. I was asked to set up a virtual head node working as a DHCP server. I installed a DHCP server on the virtual head node and it seems to work. In VirtualBox I gave the head node two network adapters: one bridged adapter and one internal network adapter. A VM on the same physical machine has its NIC set to the internal network; that VM can get an IP address (so DHCP works) but can't access the internet. What should I do? Specifically, which network adapter types should I choose for the head node and the work nodes in VirtualBox, and what should I configure inside the virtual machines?
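    Two hedged observations: a VirtualBox "internal network" never spans physical hosts, so for the work-node VMs on the other machines the private network will have to ride on bridged adapters (for instance over a dedicated LAN or VLAN); and for internet access the head node has to actually route and NAT for the private subnet, with the DHCP server handing it out as the gateway. A sketch of the head node side, with all addresses as placeholders:

        # On the head node VM (Linux): forward and masquerade the private subnet
        # out through the bridged interface (eth0 here)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth0 -s 192.168.100.0/24 -j MASQUERADE

        # /etc/dhcpd.conf -- advertise the head node as default gateway and DNS
        subnet 192.168.100.0 netmask 255.255.255.0 {
            range 192.168.100.50 192.168.100.200;
            option routers 192.168.100.1;
            option domain-name-servers 192.168.100.1;
        }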

    Read the article

  • Decent 1 Gbit switch (16-24 port) for rack...

    - by TomTom
    Hello, for a rack containing a smaller number of servers (5 at the moment, and it's going to stay in this area), I'm looking to replace the currently aging 100 Mbit switch with a 1 Gbit switch. This is for the backend between the servers. I expect some iSCSI traffic there, so a 10 Gbit option would be nice (preferably two ports, as extension modules). I don't need management; this is a pure backend of an internal cluster. I do VLANs, but there is no sensible management the switch can do there. I would like:

    • 1U only, obviously
    • preferably limited moving parts
    • low price ;)
    • enough power to run at least half the ports at full speed at the same time

    Anyone have any recommendations?

    Read the article

  • How to broadcast a command on Windows

    - by Xiao Jia
    I am going to frequently deploy different versions of a program on a cluster of Windows machines (mostly Windows XP), so I am looking for a command-line broadcasting tool (either built-in or third-party) to (1) download a file from some URL, and (2) execute the same command, on all the machines. I googled for a very long time but got nothing related to my goal (only pages about broadcasting a message, broadcasting pings, programmatically broadcasting via TCP/IP, etc.). Are there any tools for this purpose? Or is it possible to do it programmatically, without installing extra client programs on those machines?
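    One commonly used option is PsExec from the Sysinternals suite: it runs a command on many machines over the built-in admin shares, so nothing has to be installed on the targets. A hedged sketch (the machine list, credentials, URL and paths are placeholders, and bitsadmin is only present on XP if the Support Tools are installed):

        rem machines.txt holds one \\hostname per line
        psexec @machines.txt -u DOMAIN\admin -p secret cmd /c "bitsadmin /transfer job http://server/app.exe C:\deploy\app.exe && C:\deploy\app.exe /install"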

    Read the article

  • Configuring MPI on 2 nodes

    - by Wysek
    I'm trying to create a really simple "cluster" from 2 multicore computers using Open MPI. My problem is that I can't find any tutorials on this matter. I don't want to use Torque because it's not necessary in my case; nevertheless, all the tutorials give configuration details about either Torque or mpd (which doesn't exist in the Open MPI implementation). Could you give me some tips or links to appropriate manuals? Steps I've already completed:

    • Open MPI installation
    • network configuration (the computers see each other)
    • ssh password-less login to the second computer

    I tried using machinefiles without further configuration, with just the 2 IPs in them, but jobs don't seem to start at all after the initialization part. (MPI itself seems to work, because I'm able to scatter jobs onto multiple cores of both computers without communication between them.)
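    For plain Open MPI, a hostfile plus mpirun is normally all that's needed; a minimal sketch with placeholder addresses and slot counts (if jobs still hang after initialization, a firewall blocking the dynamic TCP ports Open MPI opens between the nodes is a common suspect):

        # hostfile: one line per node, slots = processes to place on that node
        192.168.1.10 slots=4
        192.168.1.11 slots=4

        # launch 8 ranks across both machines, using the password-less ssh
        # login you already set up to start the remote processes
        mpirun -np 8 --hostfile hostfile ./my_mpi_program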

    Read the article

  • Java application failing on special characters.

    - by Scottm
    An application I am working on reads information from files to populate a database. Some of the characters in the files are non-English, for example accented French characters. The application works fine on Windows, but on our Solaris machine it fails to recognise the special characters and throws an exception. For example, when it encounters the accented e in "Gérer", it says:

        Encountered: "\u0161" (353), after : "\'G\u00c3\u00a9rer les mod\u00c3"

    (an exception which is thrown from our application). I suspect that in order to stop this from happening I need to change the file.encoding property of the JVM. I tried to do this via System.setProperty(), but it has not stopped the error from occurring. Are there any suggestions for what I could do? I was thinking about setting the basic locale of the Solaris platform in /etc/default/init to UTF-8. Does anyone think this might help? Any thoughts are much appreciated.
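    One detail that may explain the failed attempt: the JVM captures file.encoding at startup, so calling System.setProperty("file.encoding", ...) from inside the application comes too late to change the default charset. Two hedged alternatives, assuming the input files really are UTF-8:

        # set the encoding at JVM startup instead of at runtime
        java -Dfile.encoding=UTF-8 -jar yourapp.jar

        # or export a UTF-8 locale before launching, which is the per-shell
        # version of the /etc/default/init change mentioned above
        LANG=en_US.UTF-8; export LANG

    The more robust fix is to stop depending on the platform default entirely and open the files with an explicit charset, e.g. new InputStreamReader(new FileInputStream(f), "UTF-8").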

    Read the article

  • How can I find out the original username a process was started with?

    - by szabgab
    There is a Perl script that needs to run as root, but we must make sure the user who runs the script did not log in originally as user foo, as that user will be removed during the script. So how can I find out whether the user, who might have su-ed several times since she logged in, has impersonated 'foo' at any point in that chain? I found an interesting Perl script that was calling the following two shell pipelines, but I think that would only work on Solaris:

        my $shell_paren = `ps -ef | grep -v grep | awk \'{print \$2\" \"\$3}\' | egrep \"^@_\" | awk \'{print \$2}'`;
        my $parent_owner = `ps -ef | grep -v grep | awk \'{print \$1\" \"\$2}\' | grep @_ | awk \'{print \$1}\'`;

    This needs to work on both Linux and Solaris, and I'd rather eliminate the repeated calls to the shell and keep the whole thing in Perl.
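    A minimal sketch of the parent-walk idea using only POSIX ps options, which both the Linux and Solaris /usr/bin/ps should understand; the same loop is straightforward to redo in Perl (one backtick call per hop) if shelling out repeatedly is the concern:

        pid=$$
        while [ "$pid" -gt 1 ]; do
            owner=$(ps -o user= -p "$pid" | awk '{print $1}')
            if [ "$owner" = "foo" ]; then
                echo "refusing to run: 'foo' appears in the login chain" >&2
                exit 1
            fi
            pid=$(ps -o ppid= -p "$pid" | awk '{print $1}')
        done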

    Read the article

  • haproxy and tomcat intermittent hangs

    - by Lorin
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, the request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly shows no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client, and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app only shows activity on one thread, the one serving up the manager app. Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1"
                   enableLookups="false"
                   maxKeepAliveRequests="1"
                   connectionLinger="10" />

    haproxy config:

        global
                log loghost local0
                chroot /var/haproxy
        listen http_proxy :81
                mode http
                log global
                option httplog
                option httpclose
                clitimeout 150000
                srvtimeout 30000
                contimeout 3000
                balance roundrobin
                cookie SERVERID insert
                server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000

    Read the article

  • Write access from a Windows client via a ZFS SMB share to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client: it is able to make new files, but cannot modify files created by the shell in OpenIndiana. When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (as nobody also), they end up with quite different permissions:

        # ls -V
        -rw-r--r--   1 nobody   nobody         0 Dec  2 12:24 local file
                        owner@:rw-p--aARWcCos:-------:allow
                        group@:r-----a-R-c--s:-------:allow
                     everyone@:r-----a-R-c--s:-------:allow
        -rwx------+  1 nobody   nobody         0 Dec  2 12:24 smb file
                   user:nobody:rwxpdDaARWcCos:-------:allow
               group:2147483648:rwxpdDaARWcCos:-------:allow

    In bash, I'm able to write to "smb file", but vice versa, the Windows client is not able to write to "local file". This is confusing to me, because it appears that it should allow the SMB client to write to "local file": nobody is the owner, and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something to set here similar to a umask:

        sharesmb name=shared,guestok=true

    How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the file system to allow all files to be created in a similar manner?
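    One hedged avenue: ZFS has per-dataset aclinherit/aclmode properties and inheritable ACEs, which together can force every new file under the share to the same effective ACL no matter whether it was created over SMB or from the shell. A sketch, with the dataset name as a placeholder (note that the aclmode property is absent from some builds around oi_148 and was restored in later Illumos releases):

        # let inherited ACEs pass through rather than being chmod-trimmed
        zfs set aclinherit=passthrough tank/shared
        zfs set aclmode=passthrough tank/shared    # if the property exists on your build

        # put an inheritable full-control ACE for nobody on the share root
        /usr/bin/chmod A+user:nobody:full_set:file_inherit/dir_inherit:allow /tank/shared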

    Read the article
