Search Results

Search found 311 results on 13 pages for 'chown'.

Page 8/13

  • Setting Linux UID on NFS volume from EMC NX4

    - by ethrbunny
    I have an EMC NX4 from which there are several CIFS shares with corresponding NFS mount points. The CIFS user IDs seem fine, but when viewed from Linux they are all 327xx numbers and can't be set from the file system (i.e. chown doesn't work: permission denied). On our other (older) EMC devices we used an MMC app to set the Linux UID for each user. I don't seem to have such an app on the 'Applications and Tools' CD for this new device. Is there some other method for setting these? Did I set up the system incorrectly?
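    A quick sanity check from the Linux side (nothing EMC-specific, just standard tools; the mount path and account name are placeholders) can confirm it really is a UID-mapping problem rather than a mount issue:

      ls -ln /mnt/nfs_from_nx4     # numeric UIDs/GIDs the NFS client actually sees
      id -u someuser               # UID the Linux account expects to have

    If the numeric IDs don't match, the fix has to happen in the array's user-mapping configuration rather than on the Linux client.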

    Read the article

  • Cannot run ifconfig-like commands via browser

    - by savruk
    The problem is that I cannot run "ifconfig" or similar commands via the browser. Environment: programming language: Python; server: lighttpd (CGI), running on BusyBox. The machine is really small, so I am quite restricted. Techniques tried: chown every script to root, but it makes no difference. Why? Because lighttpd runs under another user, i.e. not under root. Since it is not root, when I try to run the script from the browser it always calls the Python file with its own UID, so it is impossible to run commands like "ifconfig eth0 192.168.2.123" via the web browser. I get an "ifconfig: SIOCSIFADDR: Permission denied" error. What can I do? I do not have any sudoers file, so I cannot modify sudo; in fact I don't even have a "sudo" command :) Thanks for your help
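    One workaround when there is no sudo at all is a small root-owned helper, started at boot, that reads requests from a FIFO and applies them, so the CGI never needs privileges itself. A rough sketch only - the FIFO path, group name and validation pattern are placeholders:

      #!/bin/sh
      # /usr/local/sbin/netcfg-helper.sh - run as root from an init/rc script
      PIPE=/var/run/netcfg.fifo
      [ -p "$PIPE" ] || mkfifo -m 620 "$PIPE"   # group-writable only
      chgrp www "$PIPE"                         # hypothetical lighttpd group
      while read -r addr < "$PIPE"; do
          # basic sanity check; tighten as needed before touching the interface
          case "$addr" in
              [0-9]*.[0-9]*.[0-9]*.[0-9]*) ifconfig eth0 "$addr" ;;
              *) echo "rejected: $addr" ;;
          esac
      done

    The Python CGI then only writes the new address into the FIFO as an unprivileged user.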

    Read the article

  • How to have files created by a CMS get the same ownership as the SSH user

    - by Cam
    I am having difficulty on our Ubuntu server: I have an SSH user, and when I create files with this user the ownership is web_user:www-data. The problem is when a file is uploaded or created by a content management system like Joomla. When files are uploaded through Joomla - such as components or modules - the ownership is set to www-data:www-data. This means that I then need to chown all new files to web_user:www-data so we can edit them. Is there a way to set, for a directory and its sub-directories, that all new files created get the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.
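    A sketch of the usual setgid-directory approach (the path is just an example). Plain permissions can force the group of new files, not the owner, but group write usually achieves the same goal; a default ACL can additionally grant web_user access to anything created later:

      # make the tree group-owned by www-data and group-writable
      sudo chgrp -R www-data /var/www/site
      sudo chmod -R g+rwX /var/www/site
      # setgid on directories: new files/dirs inherit the www-data group
      sudo find /var/www/site -type d -exec chmod g+s {} +
      # optional: default ACLs so web_user also gets rw on future uploads
      sudo find /var/www/site -type d -exec setfacl -d -m u:web_user:rwX {} +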

    Read the article

  • installing SQLite3 gem on remote FreeBSD server using RVM - root permissions needed?

    - by atmosx
    I am trying to install the Ruby sqlite3 gem on a remote FreeBSD server. I'm using RVM, which in theory does not need root permission to compile gems, but I get a root-related error here:
      [user ~]$ gem install sqlite3 -- --with-sqlite3-dir=/home/www/atma/opt/
      [...]
      make install
      /usr/bin/install -c -o root -g wheel -m 0755 sqlite3_native.so /home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3
      install: /home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3/sqlite3_native.so: chown/chgrp: Operation not permitted
      make: *** [/home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3/sqlite3_native.so] Error 71
      Gem files will remain installed in /home/www/atma/.gems/gems/sqlite3-1.3.6 for inspection.
      Results logged to /home/www/atma/.gems/gems/sqlite3-1.3.6/ext/sqlite3/gem_make.out
    Any ideas how to approach this? Maybe re-installing RVM? best regards, PA
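    The failing step is the gem's generated Makefile running install -o root -g wheel, which a non-root user cannot do. One thing worth trying - hedged, since it depends on where the -o root flags are coming from - is building a private SQLite under the home directory and pointing the gem's extconf at it explicitly:

      # assuming a sqlite-autoconf source tarball already unpacked in ~/build
      cd ~/build/sqlite-autoconf-*/
      ./configure --prefix=$HOME/opt && make && make install
      # point the gem at the private copy (standard mkmf flags)
      gem install sqlite3 -- \
          --with-sqlite3-include=$HOME/opt/include \
          --with-sqlite3-lib=$HOME/opt/lib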

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode.

    - by George
    Hello. I have an Apache 2.17 server running on Fedora 13. I want to be able to create a file in a directory, but I cannot. Whenever I try to open a file with PHP for writing, fopen(..., 'w'), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/: it says user apache, group apache. So I changed ownership of my whole /www directory to apache:apache (chown -R apache:apache .*). I also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it actually still gives me the same error, even though I now even allow public write!
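    On Fedora, the usual culprit when ownership and mode look right but Apache still can't write is SELinux, which is enforcing by default. A hedged check and fix (the directory path is an example, and the context type name can differ on older policies):

      getenforce                          # "Enforcing" means SELinux is active
      sudo tail /var/log/audit/audit.log  # look for AVC denials mentioning httpd
      ls -Z /var/www/html/uploads         # show the current SELinux context
      # relabel the directory so httpd may write to it
      sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/uploads
      # (quick test of the theory: sudo setenforce 0, then retry the fopen)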

    Read the article

  • Homedir inside homedir restricted access

    - by blid
    On my VPS I've installed Debian, Apache and PHP. I have two users: foo and bar. Apache is configured to execute PHP files from /home/foo/htdocs. I created the directory /home/foo/htdocs/bar/ and made it the home directory for user bar. However, I need to make a restriction: bar can't read, write or execute any files outside his own directory, but Apache has to be able to execute all PHP files from /htdocs. I tried to chown the bar directory to user bar only, and also experimented a lot with chmod, but without a result so far. If there's any better way to satisfy my needs, don't hesitate to write about it. Thanks in advance
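    One sketch that gets close with plain permissions (usernames and paths taken from the question; how tight it is depends on how strictly "can't read anything outside" must hold):

      # web root: others may traverse (x) but not list or read it
      sudo chown foo:www-data /home/foo/htdocs
      sudo chmod 751 /home/foo/htdocs
      # foo's content: readable by Apache via the www-data group, not by "other"
      sudo chown -R foo:www-data /home/foo/htdocs/*
      sudo chmod -R o-rwx /home/foo/htdocs/*
      # bar's home: bar owns it, Apache still reads the PHP files through the group
      sudo chown -R bar:www-data /home/foo/htdocs/bar
      sudo chmod 750 /home/foo/htdocs/bar
      sudo chmod -R g+rX /home/foo/htdocs/bar

    bar can reach his directory only because the web root stays world-traversable; a hard jail would need an sshd Match block with ChrootDirectory, which requires the chroot target to be root-owned and complicates the layout.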

    Read the article

  • not able to upload files into mediawiki -- weird one

    - by Michael
    Completely frustrating me. When I try to upload a small JPEG file I get the following error:
      Warning: wfMkdirParents: failed to mkdir "/usr/local/mediawiki-1.20.5/images/5/5d" mode 0777 in /usr/local/mediawiki-1.20.5/includes/GlobalFunctions.php on line 2546
    CentOS 6.4, MediaWiki 1.20.5, PHP 5.5.0RC1 (apache2handler), MySQL 5.5.31
    php.ini:
      safe_mode = off
      file_uploads = On
      max_file_uploads = 20
    LocalSettings.php:
      $wgEnableUploads = true;
      $wgUseImageMagick = true;
      $wgImageMagickConvertCommand = "/usr/bin/convert";
    images folder:
      chown apache:apache images/
      chmod 755 -R images/ (threw error)
      chmod 777 -R images/ (threw error)
    I've restarted Apache and still cannot upload. I'm stumped. Any ideas?
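    With CentOS 6.4, the first suspect when mkdir fails despite 777 is SELinux rather than classic permissions. A hedged check using the paths from the question:

      getenforce                                        # Enforcing?
      sudo grep httpd /var/log/audit/audit.log | tail   # any AVC denials?
      # allow Apache to write under the wiki's images directory
      sudo chcon -R -t httpd_sys_rw_content_t /usr/local/mediawiki-1.20.5/images
      # make the label survive a relabel (needs policycoreutils-python)
      sudo semanage fcontext -a -t httpd_sys_rw_content_t '/usr/local/mediawiki-1.20.5/images(/.*)?'
      sudo restorecon -R /usr/local/mediawiki-1.20.5/images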

    Read the article

  • rsync windows to linux permission denied

    - by user64908
    Using the command
      rsync -avzP --delete --omit-dir-times ../../ [email protected]:/var/www/mysite/
    I'm getting
      rsync: mkstemp "/var/www/mysite/.." failed: Permission denied (13)
    If ext is in the www-data group, should I still set all the files to be owned by user www-data? I am trying to publish the files with rsync and then set the permissions using
      sudo chown -R www-data doc
      sudo chgrp -R www-data doc
    but I can't even rsync because of the permission denied. The SSH works fine, and the rsync does too, except when it tries to write over or update some of the files in /var/www.
    Client:
      * Windows 7
      * Cygwin 1.7.16 (GNU bash, version 4.1.10(4)-release (i686-pc-cygwin))
      * rsync version 3.0.9, protocol version 30
    Server:
      * Ubuntu 12.04
      * Apache2
      * Root accounts [ubuntu, ext]
      * Groups [www-data]
      * sudo vigr has www-data:x:33:ubuntu,ext
    I have already configured this: http://stackoverflow.com/questions/2124169/cwrsync-ignores-nontsec-on-windows-7
    This article has also managed to confuse me: http://unix.stackexchange.com/questions/41687/how-should-i-rsync-files-in-var-www-if-i-want-them-to-be-owned-by-www-data
    What is the right procedure?
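    Being in the www-data group only helps if the files and directories actually grant group write. A sketch that makes /var/www/mysite group-writable, with new directories keeping the group via setgid, after which rsync as ext should be able to create its temporary files:

      sudo chgrp -R www-data /var/www/mysite
      sudo chmod -R g+rwX /var/www/mysite
      # new files/dirs created by rsync keep the www-data group
      sudo find /var/www/mysite -type d -exec chmod g+s {} +

    The separate chown/chgrp step after every run then becomes unnecessary. An alternative is simply making ext the owner of the tree and leaving www-data with read access, since Apache only needs to read it.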

    Read the article

  • How to mount encrypted volume at login (Ubuntu 12.04, pam_mount)

    - by Nick Lothian
    I'm trying to get pam_mount working on Ubuntu 12.04. I have /dev/sda1 (encrypted partition) with /dev/dm-1 (ext4 formatted) inside it. Should ~/.pam_mount.conf.xml be trying to mount /dev/sda1 or /dev/dm-1? If I use the line:
      <volume fstype="ext4" path="/dev/dm-1" mountpoint="~/slowstore" options="rw" />
    then it nearly works. It prompts for the password (OK, I'd like pam_mount to do that for me, but still..), then I get:
      pam_mount(rdconf2.c:126): checking sanity of luserconf volume record (/dev/dm-1)
      pam_mount(rdconf2.c:132): user-defined volume (/dev/dm-1), volume not owned by user
    If I do:
      sudo chown nick:disk /dev/dm-1
    then re-login, the encrypted partition mounts correctly (ignoring the fact that I have to re-enter the password). However, if I log out completely, the ownership on /dev/dm-1 gets reset to root:disk. What am I doing wrong?
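    Two hedged notes: "volume not owned by user" is pam_mount's sanity check for per-user (luserconf) volumes, which a system-wide entry in /etc/security/pam_mount.conf.xml does not go through; and pam_mount can open the LUKS container itself so the login password unlocks it. The attribute names below are from memory and depend on the pam_mount version, so treat this only as a sketch:

      <!-- system-wide /etc/security/pam_mount.conf.xml, not ~/.pam_mount.conf.xml -->
      <volume user="nick" fstype="crypt" path="/dev/sda1" mountpoint="~/slowstore" />

    The extra password prompt only disappears if the LUKS passphrase matches the login password.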

    Read the article

  • which user is the website host

    - by Kossel
    I'm learning about servers, and I'm configuring nginx, MySQL, PHP and WordPress. The server distro is Debian 6. I created a new user, and I want each user to own their site folder, /var/www/site.one, so I ran chown -R kossel:kossel site.one. My problem is that my WordPress only works if I chmod 644 wp-config.php, which everyone can read; the WordPress site suggests that file should be 640. My question is: when someone opens mydomain.com, WordPress has to access the wp-config.php file, but which user is it actually using to "read" that file? root? User kossel? Anyone else? How can I properly give it permission or ownership?
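    The file is read by whatever user the PHP worker runs as (with nginx that is usually the PHP-FPM pool user, commonly www-data on Debian), not by root and not by kossel. A sketch for checking that and then keeping wp-config.php at 640 (the pool config path varies by how PHP-FPM was installed):

      # see which user actually executes PHP
      ps -eo user,comm | egrep 'php|nginx'
      grep -E '^(user|group)' /etc/php5/fpm/pool.d/www.conf
      # owner keeps write access, the PHP user's group gets read-only
      sudo chown kossel:www-data /var/www/site.one/wp-config.php
      sudo chmod 640 /var/www/site.one/wp-config.php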

    Read the article

  • Unable to synchronize local and remote directories ("set times: Operation not permitted")

    - by Tom Auger
    I'm running into FTP errors using software like NetBeans or WinSCP: whenever I attempt to perform a synchronization or update of files from local to server, I get errors on the client saying "set times: Operation not permitted". This is clearly an issue with the way I've configured my Fedora installation. The user that I'm logging in with cannot touch -t any of these files, though he IS part of a group that has r/w access on them. I do have root / sudo access to this server. What I would like to know is: a) is it likely that this problem would be solved by allowing my FTP user to "touch -t" these files, and b) how do I enable a certain user to set timestamps on files without giving them ownership of the files (certain of these files need to be owned by Apache, for instance, so I don't want to chown them). Thanks in advance.

    Read the article

  • SMB/CIFS connection: attempting to change the permissions within RHEL 5 to comply with the client's needs

    - by Skreemer
    I can get the mount to work, as written in /etc/fstab:
      //pcsprdvhost.prod.tsh.mis.mckesson.com/sftphome /sftphome2 cifs username=myuser,workgroup=domain,password=mypassword,noserverinfo,uid=tmadmin,gid=tibco,nounix,file_mode=0777,dir_mode=0777 0 2
    This means that every directory under /sftphome2 looks like:
      drwxrwxrwx 1 tmadmin tibco 0 Jul 6 2010 D0000001
    When I issue:
      chown -R D0000001:D0000001_admin D0000001
    nothing happens. When I pull the uid and gid specifications out, I get the system owner/group of root:sys. What I need to be able to do is change the sub-directories under /sftphome2 to whatever owner and group (and permissions) I desire versus the ones that are getting specified. How do I do this?
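    Because the share is mounted with nounix, the client never asks the server for real per-file ownership; everything is synthesized from the uid=/gid=/file_mode=/dir_mode= options, which is why chown appears to do nothing. Two hedged ways around it (the owner name in the example is illustrative):

      # option 1: one mount per sub-tree, each with its own synthetic owner/group
      sudo mount -t cifs //pcsprdvhost.prod.tsh.mis.mckesson.com/sftphome/D0000001 \
           /sftphome2/D0000001 \
           -o username=myuser,uid=someowner,gid=D0000001_admin,file_mode=0770,dir_mode=0770
      # option 2: if the server is Samba, drop nounix and enable UNIX extensions
      #           on the server so chown/chmod on the client become real operations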

    Read the article

  • Ubuntu 10.04 Apache Configuration for Websites

    - by completenoob
    Looking at a basic Ubuntu 10.04 server setup, Apache points to /var/www as the place where it looks for files to serve up. The default Apache user is www. I'm just trying to set up a plain old WordPress blog. Should I just dump the files into /var/www/ as root or as www? User www seems inconvenient since I won't log in as that user, but I guess I can chown the files in /var/www to www. Not that I would log in as root either, but what is the recommended user who should own the /var/www files? Thanks for the help.
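    For what it's worth, on Ubuntu 10.04 the Apache user is www-data rather than www. A common arrangement - a sketch, not the only valid one, with youruser as a placeholder for your own login - is to let your account own the files and give Apache write access only where WordPress needs it:

      sudo chown -R youruser:www-data /var/www
      sudo find /var/www -type d -exec chmod 755 {} +
      sudo find /var/www -type f -exec chmod 644 {} +
      # the only place WordPress itself needs to write (uploads, upgrades)
      sudo chown -R www-data /var/www/wp-content/uploads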

    Read the article

  • Turn "log slow queries" ON.

    - by CodedK
    Hello, I'm trying to log MySQL slow queries, but I can't turn it on. I will explain all my steps:
    1. Open and edit my.cnf and add the following lines:
      long_query_time = 5
      slow_query_log_file = /myfolder/slowq.log
      log_slow_queries = 1
    (I have MySQL 5.0.7)
    2. Give the mysql user permission to write to the file: chown -R mysql:mysql /var/lib/mysql
    3. Create the file: touch /myfolder/slowq.log
    4. chmod this file to 777.
    5. service mysqld restart
    From the MySQL admin panel I can see that the "log_slow_queries" variable is OFF! Also no logs are created. Thanks in advance! Best Regards, Panos.
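    One likely snag, hedged because the exact server version matters: slow_query_log and slow_query_log_file only exist from MySQL 5.1 onward, while 5.0 uses log-slow-queries with an optional file name. A sketch of the my.cnf lines for each case (appended via a heredoc purely for illustration; they belong under an existing [mysqld] section):

      sudo tee -a /etc/my.cnf <<'EOF'
      [mysqld]
      long_query_time = 5
      # MySQL 5.0.x:
      log-slow-queries = /myfolder/slowq.log
      # MySQL 5.1+ would instead use:
      # slow_query_log = 1
      # slow_query_log_file = /myfolder/slowq.log
      EOF
      sudo chown mysql:mysql /myfolder/slowq.log
      sudo service mysqld restart
      mysqladmin variables | grep -i slow   # confirm the setting took effect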

    Read the article

  • sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0

    - by 7UR7L3
    Whenever I try to do anything at all that requires my password it returns this:
      u7ur7l3@ubuntu:~$ sudo
      sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
      sudo: fatal error, unable to load plugins
      u7ur7l3@ubuntu:~$
    So I can't install anything from the Software Center / package manager or run any commands in the terminal that require my password. I can log in, but that's pretty much it. I accidentally changed the permissions of some files, then changed some more trying to fix it :/. Now I'm completely lost as to what to do. This is what happened when I tried to get sudo working again using pkexec:
      u7ur7l3@ubuntu:~$ pkexec chown root /usr/lib/sudo/sudoers.so
      Error getting authority: Error initializing authority: Error calling StartServiceByName for org.freedesktop.PolicyKit1: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ExecFailed: Failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper: Success
      u7ur7l3@ubuntu:~$ sudo ls
      sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
      sudo: fatal error, unable to load plugins
    And to change permissions I was using Root Actions as a Dolphin service/plugin thing, so history doesn't show me the permission changes. I just realized that sounds don't work at all anymore. When I go into Phonon my default settings and playback devices aren't even there. Also I don't have the option to shut down; I can only log out or leave.
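    Since sudo and pkexec are both broken, the usual way back in is a root shell from recovery mode (or a live CD) to restore the ownership and modes sudo insists on. A hedged sketch; exact paths can vary by release:

      # from the recovery-mode root shell:
      mount -o remount,rw /
      chown root:root /usr/lib/sudo/sudoers.so
      chmod 644 /usr/lib/sudo/sudoers.so
      chown root:root /usr/bin/sudo
      chmod 4755 /usr/bin/sudo        # sudo must be setuid root
      chown root:root /etc/sudoers
      chmod 440 /etc/sudoers
      reboot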

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop cluster – whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label-based access control – and much, much more. They want similar capabilities for their Hadoop cluster. Unfortunately, Hadoop wasn't developed with security in mind. By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database. Some critical security features have been implemented – but even those capabilities are arduous to set up and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the "AAA of security": authentication, authorization and auditing.

    Security Starts at Authentication

    A successful security strategy is predicated on strong authentication – for both users and software services. Consider the default configuration for a newly installed Oracle Database; it's been a long time since you had a legitimate chance at accessing the database using the credentials "system/manager" or "scott/tiger". The default Oracle Database policy is to lock accounts, thereby restricting access; administrators must consciously grant access to users.

    Default Authentication in Hadoop

    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we're using the Hadoop command line utilities for accessing the data:

      $ hadoop fs -ls /user/hrdata
      Found 1 items
      -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
      $ hadoop fs -cat /user/hrdata/salaries.txt
      Tom Brady,11000000
      Tom Hanks,5000000
      Bob Smith,250000
      Oprah,300000000

    User DrEvil has access to the cluster – and can see that there is an interesting folder called "hrdata".

      $ hadoop fs -ls /user
      Found 1 items
      drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

    However, DrEvil cannot view the contents of the folder due to lack of access privileges:

      $ hadoop fs -ls /user/hrdata
      ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder's ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

      $ sudo useradd hr
      $ sudo passwd
      $ su hr
      $ hadoop fs -cat /user/hrdata/salaries.txt
      Tom Brady,11000000
      Tom Hanks,5000000
      Bob Smith,250000
      Oprah,300000000

    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS.

    Big Data Appliance Provides Secure Authentication

    The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this through Kerberos integration.

    Figure 1: Kerberos integration.

    The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA's NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication not just for the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment.

    Authorize Access to Sensitive Data

    Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs, with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer-grained access control is required. Access to databases, tables, columns, etc. must be controlled. And you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine-grained access control; both Cloudera and Oracle are the project's founding members. Sentry satisfies the following three authorization requirements:

    Secure Authorization: the ability to control access to data and/or privileges on data for authenticated users.
    Fine-Grained Authorization: the ability to give users access to a subset of the data (e.g. a column) in a database.
    Role-Based Authorization: the ability to create/apply template-based privileges based on functional roles.

    With Sentry, "all", "select" or "insert" privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system.

    Figure 2: Object hierarchy – a privilege granted on the database object is inherited by its tables and views.

    Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future.

    Audit Hadoop Cluster Activity

    Auditing is a critical component of a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster.

    Figure 3: Monitored Hadoop services.

    At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include:

    MapReduce: correlate the MapReduce job that accessed the file
    Oozie: describes who ran what as part of a workflow
    Hive: captures changes made to the Hive metadata

    The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to the activity from the BDA.

    Figure 4: Consolidated audit data across the enterprise.

    Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger on violations of audit policies.

    Conclusion

    Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • Reinstalled Ubuntu 12.04 and now I cannot change preferences like theme, wallpaper, and nautilus preferences

    - by krishnab
    So I just down-rev'd Ubuntu from 13.04 back to 12.04 LTS desktop 64 (Precise). I am using Unity. I just reformatted the Ubuntu partition but kept my home directory intact, and everything seemed to reconnect just fine. No data was lost. However, I found that I cannot change my preferences. I cannot change my desktop background, no matter how many ways I try: Ubuntu Tweak, GNOME Tweak, System Settings. I also cannot change the system GTK+ theme, though apparently I am able to change the window border theme. Further, I cannot change my Nautilus preferences, so I cannot make the default view a list view, and I cannot make the "single-click" behavior the default. I even went into the org.gnome.nautilus settings to change things manually, but no luck. I thought it was a permissions issue, so I did a chown on the home folder and on the .gvfs folder. Still no luck. So somewhere there seems to be a permission that I am not catching. Does anyone have any suggestions? Thanks.
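    Settings like wallpaper, GTK theme and Nautilus behaviour live in the user's dconf database under ~/.config, so a frequent cause after reinstalling while keeping /home is stale or root-owned files there. A hedged sketch of things to check (the username is from the question):

      # anything in the home directory not owned by the login user?
      find ~ ! -user "$USER" -ls
      sudo chown -R krishnab:krishnab ~/.config ~/.cache ~/.gconf ~/.local
      # if ownership was fine, move the dconf database aside and log in again
      # (this resets those settings to defaults)
      mv ~/.config/dconf ~/.config/dconf.bak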

    Read the article

  • Missing /dev/xconsole causes rsyslog to stop as well as all other services

    - by George Van Tuyl
    We are running Ubuntu 10.04.4 LTS in Hyper-V environments. We found that services such as ssh and http stopped because the rsyslog daemon had died with a message that it was unable to find the /dev/xconsole file. I fixed it temporarily with the following:
      FILE=/dev/xconsole
      if [ -e $FILE ]; then
          echo "$FILE exists Carry on!"
      else
          mknod -m 640 /dev/xconsole c 1 3
          chown syslog:adm /dev/xconsole
          echo "Created $FILE."
      fi
    The problem is that I cannot get the rsyslog daemon to process these 8 lines when I restart the daemon. Also, restarting the daemon removes the /dev/xconsole file and we are back to all services stopped. In addressing this problem I have inserted the if--fi lines after the start and restart conditions in the rsyslog init script. The problem is I do not get an echo to stdout. Does someone have an idea on how to make the rsyslog script report to stdout when it creates the /dev/xconsole device? Thanks, George Van Tuyl
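    An alternative to recreating the device is to stop rsyslog from needing it at all: on stock Ubuntu the only consumer of /dev/xconsole is the block at the end of /etc/rsyslog.d/50-default.conf (the daemon.*;mail.*; ... |/dev/xconsole rules). A hedged sketch; check the file name on your install:

      # edit /etc/rsyslog.d/50-default.conf and comment out the trailing block
      # that pipes to "|/dev/xconsole", then:
      sudo service rsyslog restart
      # for the "echo to stdout" question: init-script output is usually discarded,
      # so logging the message is more reliable
      logger -t mkxconsole "Created /dev/xconsole"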

    Read the article

  • How to run an Application as another user?

    - by takpar
    I use krusader for file management stuff. The problem is that Apache's DocumentRoot is owned by www-data (chown www-data:www-data /path/to/www), so using krusader (which runs under my account) I have no write access to /path/to/www, even though I really need it. I don't know how other developers manage with such a restriction! I wondered if I could run krusader as www-data; then I would be able to easily work with the files. But using su - www-data asked me for www-data's password!! So, how can I run an application (like krusader) as another user (like www-data) in GNOME? Or is there any other solution for my case? (Though I'm really curious to know the answer!) Keep in mind that I know I can run it as root! But this will cause some permission problems when using cp and mkdir, you know. PS: sudo and gksudo did not help:
      $ gksudo -u -www-data krusader
      No protocol specified
      krusader: cannot connect to X server :0.0
    Final note: according to the best answer, I did chmod u+w /path/to/www and my problem was solved, but I still have not succeeded in opening krusader as another user!
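    Two hedged options. The simpler one is usually to keep running krusader as yourself and get write access through the www-data group; the second actually runs a GUI program as www-data by combining sudo with an X access grant (a new login is needed after the group change):

      # option 1: work as yourself, via the www-data group
      sudo adduser takpar www-data
      sudo chmod -R g+rwX /path/to/www
      # option 2: really run the app as www-data
      xhost +SI:localuser:www-data    # allow www-data to talk to your X server
      sudo -u www-data -H krusader
      xhost -SI:localuser:www-data    # revoke the grant afterwards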

    Read the article

  • Making files generally available on Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments. Well, only 260 fps, but that still creates many individual files since each frame is saved as one PNG file. The stationary PC is not used by anyone other than me, which is why the default security model posed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made, console-based grabbing utility run under sudo. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when having to make files generally available to otb:
      sudo chown otb -R *
      sudo chgrp otb -R *
      sudo chmod a=rwx -R *
    This takes some time since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without using sudo?
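    Two small notes, sketched below: chmod a=rwx also marks every data file executable, whereas a+rwX only sets execute on directories (and files that were already executable). And when the drive moves, access follows the numeric UID/GID stored on disk, so either make the otb UID match on both machines or keep the everything-writable scheme:

      # friendlier blanket permissions for /usr3
      sudo chown -R otb:otb /usr3
      sudo chmod -R a+rwX /usr3
      # check that 'otb' maps to the same numeric UID on the other machine
      id -u otb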

    Read the article

  • Samba new file ownership, permissions configuration

    - by Martin Melka
    I have recently installed Samba on my server. Now I have a question about permissions and how to set things up. Currently I mount the Samba shared drive on my laptop with this line in /etc/fstab:
      //<host>/share /mnt/melka-server-data/ cifs username=<usrname> password=<passwd> _netdev 0 0
    This works, as I can read the files and create them (as root). The problem is when I want to create files as a regular user: I always get a Permission Denied error. These are the ll outputs of the mounted folder:
      magicmaster@magicmaster-kubuntu:/mnt$ ll
      total 8
      drwxr-xr-x  3 root root 4096 lis 11 14:15 ./
      drwxr-xr-x 26 root root 4096 ríj 26 11:01 ../
      drwxrwxrwx  8 magicmaster magicmaster 0 lis 12 22:12 melka-server-data/
    and the inside:
      magicmaster@magicmaster-kubuntu:/mnt/melka-server-data$ ll
      total 4
      drwxrwxrwx  8 magicmaster magicmaster 0 lis 12 22:12 ./
      drwxr-xr-x  3 root root 4096 lis 11 14:15 ../
      drwxrwxrwx  5 magicmaster magicmaster 0 lis 12 09:35 downloads/
      drwxrwxrwx  2 magicmaster magicmaster 0 ríj 28 12:57 lost+found/
      drwxrwxrwx 15 magicmaster magicmaster 0 lis 12 09:45 movies/
      drwxrwxrwx  2 magicmaster magicmaster 0 lis 1 21:15 newest/
      drwxrwxrwx  3 magicmaster magicmaster 0 lis 2 23:14 photos/
      drwxrwxrwx  2 magicmaster magicmaster 0 ríj 30 12:44 software/
      -rw-r--r--  1 nobody nogroup 0 lis 12 22:12 zdar
    I called sudo chown -R magicmaster:magicmaster melka-server-data/ to try and change all the files to belong to me. Then the file zdar was created by magicmaster just by calling touch. I got the Permission Denied, but it was still created, though it belongs to nobody and I can't write into it. When I create a file as root, it still belongs to nobody, but at least I can write into it. What am I missing? I didn't notice anything in the Samba config that would be related to this, and I don't like the idea of having to log on as root in order to copy files. Thanks
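    With CIFS, the ownership shown on the client comes from the mount options rather than from chown (unless the server grants UNIX extensions), so the chown -R could not stick. A hedged adjustment of the fstab line that makes the whole mount appear owned by the desktop user (mode values are examples):

      //<host>/share /mnt/melka-server-data/ cifs username=<usrname>,password=<passwd>,uid=magicmaster,gid=magicmaster,file_mode=0664,dir_mode=0775,_netdev 0 0

    If the server is Samba on Linux, enabling UNIX extensions on both ends is the alternative that gives real per-file ownership instead of a synthetic one.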

    Read the article

  • hdmi audio works only with aplay -D alsa test wavs; open source radeon drivers; kernel 3.5 vgaswitcheroo

    - by user108754
    I've trolled the internets to make HDMI work on my system:
      Ubuntu 12.04 (Software Center), kernel 3.5
      uname: Linux ubuntu 3.5.0-18-generic #29~precise1-Ubuntu SMP ... x86_64 x86_64 x86_64 GNU/Linux
      open source radeon drivers
      vgaswitcheroo (hybrid Intel/Radeon GPU): I boot with Intel, not Radeon, running. (Recall that with kernel 3.5, vgaswitcheroo now gives info on a third item, "DIS-Audio"; it indicates pwr on my system.)
      /etc/rc.local:
        chown user:user /sys/kernel/debug/   # change "username" with your user name
        echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
    grub indeed now has "radeon.audio=1". For testing audio, I did aplay -l, which gave me the card and device, which made me try
      aplay -D plughw:1,3 /usr/share/sounds/alsa/Front_Center.wav
    and lo! I get crystal clear sound on my HDTV. If I play an MP3 file as the argument to that command, I get noise as, I guess, aplay interprets the MP3 data as a WAV. If I play a .wav that is not in the /usr/share/sounds/alsa/ directory, I get nothing. Internet Flash video in the browser plays no sound over HDMI. Both the system sound control and pavucontrol have HDMI Cedar selected. Alas, I can not get sound for any GUI test (left, right). Why would only aplay, and only when directed with "-D plughw", yield sound over HDMI? I've also tried using only one sound program at a time, in case it was a limitation of ALSA, so I tried aplay with the web browser and even the sound control GUI closed. I tried each of the last two running alone. No improvement. alsamixer only shows HDA Intel, and I think it's only the Intel audio, not the HDMI.
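    A hedged observation: aplay -D plughw:1,3 bypasses PulseAudio and talks straight to the HDMI device, which would explain why it works while desktop sounds and Flash (which go through Pulse) stay silent. A sketch for pointing Pulse at the HDMI sink - the sink name/index below is a placeholder, use whatever the list shows:

      pactl list short sinks                       # find the HDMI sink
      pacmd set-default-sink <name-or-index>       # make it the default output
      pactl list short sink-inputs                 # move an already-playing stream
      pactl move-sink-input <input-index> <sink-index>
      paplay /usr/share/sounds/alsa/Front_Center.wav   # test through Pulse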

    Read the article

  • Disable discrete AMD GPU

    - by Smajl
    My notebook has two graphics cards and it suffers from severe overheating after installing Ubuntu (no problem with Windows 7 on the same machine). I figured out that the problem may be the graphics card, and I would like to disable the discrete one. I followed some tutorials on this topic (for example http://planetoss.com/articles/how-to-disable-the-discrete-amd-graphics-card-in-linux/). But the problem is that after executing the commands nothing really happens and both GPUs are still running. Here is what I have done:
      smajl@smajl-mini:~$ sudo chown smajl /sys/kernel/debug/vgaswitcheroo/switch
      smajl@smajl-mini:~$ echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
      smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
      0:IGD:+:DynPwr:0000:01:05.0
      1:DIS-Audio: :Pwr:0000:02:00.1
      2:DIS: :DynPwr:0000:02:00.0
      smajl@smajl-mini:~$ echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
      smajl@smajl-mini:~$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch
      0:IGD:+:DynPwr:0000:01:05.0
      1:DIS-Audio: :Pwr:0000:02:00.1
      2:DIS: :DynPwr:0000:02:00.0
    What am I missing here? Also, more on the overheating topic:
      1) Installed TLP
      2) Updated the system
      3) Set the power setting mode to "power save"
    ...and nothing helps. I tried the same thing with Linux Mint without success. Is there anything else to try if I manage to disable the second GPU and the problem persists? Otherwise I would have to go back to Windows in order not to melt my laptop.. :-/
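    The echo lines above run the redirection as the regular user (only cat was under sudo), so the writes to the switch file may be failing silently regardless of the chown trick. A hedged retry that performs the write as root and checks the result:

      echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch
      sudo cat /sys/kernel/debug/vgaswitcheroo/switch   # DIS should now read "Off"
      dmesg | tail   # often explains a refused power-off (e.g. the card is in use)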

    Read the article

  • Setting up a shared media drive

    - by Sam Brightman
    I want to have a shared media drive be transparently usable to all users, whilst also sticking to FHS and Ubuntu standards. The former takes priority if necessary. I currently mount it at /media/Stuff, but /media is supposed to be for external media, I believe. The main issue is setting permissions so that access to read and write to the drive can be granted to multiple users working within the same directories. InstallingANewHardDrive seems both slightly confused and not what I want. It claims that this sets ownership for the top-level directory (despite the recursion flag):
      sudo chown -R USERNAME:USERNAME /media/mynewdrive
    And that this will let multiple users create files and sub-directories but only delete their own:
      sudo chgrp plugdev /media/mynewdrive
      sudo chmod g+w /media/mynewdrive
      sudo chmod +t /media/mynewdrive
    However, the group-writable bit does not seem to get inherited, which is troublesome for keeping things organised (it prevents creation inside sub-folders originally made by another user). The sticky bit is probably also unwanted for the same reason, although currently it seems that userA (perhaps the owner of the mount point?) can delete userB's files, but not vice versa. This is fine, as long as userB can create files inside the directories of userA. So: what is the correct mount point? Is plugdev the correct group? Most importantly, how to set up permissions to maintain an organised media drive? I do not want to be running cron jobs to set permissions regularly!
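    A sketch of one setup that keeps sub-directories group-writable without cron jobs: a dedicated group (plugdev is not required; any group works), setgid directories so the group is inherited, and default ACLs so the group-write bit is inherited too. Group name and mount point are examples, and the filesystem needs ACL support:

      sudo groupadd media
      sudo usermod -aG media userA
      sudo usermod -aG media userB
      sudo chgrp -R media /media/Stuff
      sudo chmod -R g+rwX /media/Stuff
      sudo find /media/Stuff -type d -exec chmod g+s {} +       # inherit the group
      sudo find /media/Stuff -type d -exec setfacl -d -m g:media:rwX {} +

    Leaving the sticky bit off lets everyone tidy up everyone else's files, which is usually what a shared media tree wants.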

    Read the article
