Search Results

Search found 20186 results on 808 pages for 'bin folder'.

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to take over the responsibility of creating user home folders from our 2nd-level support. The help desk global group is already an Account Operator, so in Active Directory they can edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares. If they open AD Users and Computers, open the properties for a user, enter \home\users\%username% in the Profile tab, and then click OK, they get the following error:

        The \home\users\username home folder was not created because you do not have create access on the server. The user account has been updated with the new home folder value but you must create the directory manually after obtaining the required access right.

    Right now I have given the Helpdesk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder show only Administrators with Full Control, and no permissions for the configured user account. It sure sounds like I'd have to make the helpdesk local admins on the file servers, which is what I'd like to avoid - especially since the file servers are a large cluster hosting much more than just the entire org's home share structure.
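
    A hedged sketch of one common approach (not from the question): grant the help desk group create-and-list rights on the root folder itself, plus inheritable Full Control on subfolders only, so the ADUC dialog can both create the folder and stamp the user's permissions onto it. The share root and group name below are hypothetical:

        :: assumes the home root is D:\Home\Users and the group is DOM\HelpDesk
        :: "this folder only": list, read attributes, create subfolders
        icacls D:\Home\Users /grant "DOM\HelpDesk:(RD,RA,AD)"
        :: "subfolders only": Full Control, so the new folder's ACL can be set
        icacls D:\Home\Users /grant "DOM\HelpDesk:(CI)(IO)F"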

  • sharing a folder between linux and windows over the internet

    - by valya
    Currently my job is to make websites with Django. I use many things like virtualenv, PIL, etc. The problem is, I can't stand Linux on my desktop. I like it on servers - it's great to use over SSH. But for the desktop? No way. Yet for development, Linux is quite essential. Of course almost everything is ported to Windows, but it's not as simple to use as on Linux. For example, the Windows shell is awful in comparison with Linux's. So I've tried Cygwin, but it's too damn slow: every time the Django dev server reloads, it takes almost 20-30 seconds, whereas with "native" Python on Windows or Linux it reloads instantly. Even worse, Cygwin makes my whole system very slow. I've been thinking about it and have thought up a way to go: I can share a folder containing my application with some Linux box. The dev server and everything else will run on that box, while I happily edit files and run the browser on my Windows 7 - an SSH shell is much quicker and handier than Cygwin. Currently there are no Linux boxes in my home network (except for my Android phone :) but I have several VDS boxes with Debian. So, how do I share a Windows folder with a VDS box? I can't rely on my desktop's IP, but I can rely on the VDS's. I need the sharing to be as quick as possible (well, 2-3 seconds of ping is OK) and "native" for both systems, so I can use the folder like a normal folder on both Windows and Linux.
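
    One hedged way around the unreliable desktop IP (a sketch under several assumptions - an SSH server on the Windows side, a share named "project", and an arbitrary port, none of which come from the question): open a reverse SSH tunnel from the desktop to the VDS, then mount the Windows share over that tunnel with CIFS.

        # on the Windows desktop (e.g. plink or Cygwin ssh): forward the
        # VDS's local port 14445 back to the desktop's SMB port 445
        ssh -N -R 14445:localhost:445 valya@vds.example.com

        # on the Debian VDS: mount the tunnelled share
        sudo apt-get install cifs-utils
        sudo mount -t cifs //127.0.0.1/project /mnt/project \
            -o port=14445,username=valya,iocharset=utf8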

  • My files disappeared from the UbuntuOne synced folder

    - by Junji
    I set up an Ubuntu One account on PC1 (Ubuntu 10.10) and the same account on PC2 (Ubuntu 10.04). I did the following:

    1. Created a file named maverick.txt in PC1's ~/Ubuntu One/log
    2. Created a file named venus.txt in PC2's ~/Ubuntu One/log
    3. Both files appeared on one.ubuntu.com

    A few hours later, those two files had disappeared from PC1's Ubuntu One/log, PC2's Ubuntu One/log, and one.ubuntu.com. So my files are gone forever. Why did this happen? Is there any way to recover those files?

  • Permanent redirect to different domain followed by temporary redirect to folder

    - by Ricardo Amaral
    I have old-domain.com, which I want to migrate to new-domain.com. However, the content on the old domain is, well, old, and I'm currently in the process of redesigning my whole site. My idea is to do a permanent (301) redirect from old-domain.com to new-domain.com so that search engines learn about the new domain and forget about the old one. But since the content is old, I was thinking of doing a temporary (302) redirect from new-domain.com to new-domain.com/old/ until the new content/site is ready to be published. Is this, for some reason, a bad idea, or is there nothing wrong with it? One last thing: if I go with this, what should I do when the new content is ready? Should I just remove the 302 redirect and that's it, or should I do something else to notify search engines that the temporary redirect is over?
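
    For reference, a hedged sketch of the two rules with Apache mod_alias (the server type is an assumption). RedirectMatch is used on the new domain so that /old/ itself is not matched and looped:

        # on old-domain.com: permanent move of everything to the new domain
        Redirect 301 / http://new-domain.com/

        # on new-domain.com: temporary redirect of the front page to the old content
        RedirectMatch 302 ^/$ http://new-domain.com/old/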

  • Files inside Alias folder not accessible

    - by John Isaacks
    In my apache2.conf I have an alias set up like this:

        Alias /cake/ /var/www-cake/repo
        <Directory /var/www-cake/repo>
            Order allow,deny
            Allow from all
            AllowOverride All
            Options +Indexes
        </Directory>

    Inside the /var/www-cake/repo directory I have just one file, index.php. When I go to http://linux-server/cake/ I get a directory listing that shows the index.php file. When I click on the file it takes me to http://linux-server/cake/index.php, where I get a 404 Page Not Found error. What do I need to do to make the files accessible?
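
    A hedged guess at the cause: Apache concatenates Alias arguments literally, so with a trailing slash on the URL prefix but none on the filesystem path, /cake/index.php maps to /var/www-cake/repoindex.php. Making the slashes consistent may be the whole fix:

        Alias /cake/ /var/www-cake/repo/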

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new portal framework in the Oracle stack - and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of task flows provided by WebCenter, you can add file management, creation, and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system - we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. Once you define a connection to your content repository, you'll be able to add a bunch of pre-built WebCenter task flows into your page. And suddenly you can upload, download, create, and view documents directly from your application. Check it out:

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my Wordpress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put directly there. After a while it was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders):

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each to \wp-content\uploads\YEAR\MONTH). Now, my questions:

    1. Where do I write and execute a script to do the move (I don't have full access to the server, just a cPanel and the Wordpress admin page)?
    2. What must be updated so that the links in posts that reference the unstructured files point to the new locations of those files?
    3. Not fully related to the previous description: is it all right to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
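
    A hedged sketch of the move itself, runnable from a cPanel cron job (or a terminal, if the host allows SSH); the paths and the reliance on file modification time are assumptions. The post links would still need a database search-and-replace afterwards - e.g. a REPLACE() over wp_posts.post_content - which cPanel's phpMyAdmin can run:

        # sort loose uploads into YEAR/MONTH folders by their mtime
        cd ~/public_html/wp-content/uploads
        for f in *; do
            [ -f "$f" ] || continue          # skip the existing YEAR folders
            d=$(date -r "$f" +%Y/%m)         # e.g. 2010/02
            mkdir -p "$d"
            mv -n "$f" "$d/"
        done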

  • ~/.xinput.d folder is ignored in Ubuntu 13.04

    - by CaptSaltyJack
    It used to be that you could make a file ~/.xinput.d/en_US and put xinput commands in there, such as enabling drag lock. Now, for some reason, in 13.04 this does not work. Anyone know why this changed, and how to set these? I suppose I could just put the xinput commands in a script file and have it execute upon login. I'm just wondering why the old method stopped working.

    EDIT: Current file /etc/X11/xinit/xinput.d/en_US:

        xinput set-prop 17 316 1
        xinput set-prop 17 317 350

    But I've realized that, for some reason, the touchpad ID changes; right now it's 15. Also, the numbers of the actual properties, such as "Drag Lock", can change. So this method doesn't work.
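
    A hedged workaround for the shifting IDs, as a sketch for a login script: the device and property names below are assumptions based on the synaptics driver and should be checked against xinput list and xinput list-props:

        # resolve the touchpad by name instead of by numeric id
        id=$(xinput list --id-only "SynPS/2 Synaptics TouchPad")
        # property names also work in place of numeric property codes
        xinput set-prop "$id" "Synaptics Locked Drags" 1
        xinput set-prop "$id" "Synaptics Locked Drags Timeout" 350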

  • Access denied error while mounting a shared folder?

    - by SSH
    I am a Linux newbie and I have a very basic question. I have three machines:

        machineA 10.108.24.132
        machineB 10.108.24.133
        machineC 10.108.24.134

    All of them have Ubuntu 12.04 installed and I have root access to all three. Now I am supposed to do the following on these machines:

    1. Create the mount point /opt/exhibitor/conf
    2. Mount the directory on all servers:

        sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

    I have already created the /opt/exhibitor/conf directory on all three machines as mentioned above. Now I am trying to create a mount point on all three machines, so I followed this process.

    Install the NFS support files and NFS kernel server on all three machines:

        $ sudo apt-get install nfs-common nfs-kernel-server

    Create the shared directory on all three machines:

        $ mkdir /opt/exhibitor/conf/

    Edit /etc/exports and add an entry like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes  hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.*(rw)

    Run exportfs on all three machines:

        root@machineA:/# exportfs -rv
        exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.108.24.*:/opt/exhibitor/conf/".
          Assuming default behaviour ('no_subtree_check').
          NOTE: this default has changed since nfs-utils version 1.0.x
        exporting 10.108.24.*:/opt/exhibitor/conf

    Then I ran showmount on machineA:

        root@machineA:/# showmount -e 10.108.24.132
        Export list for 10.108.24.132:
        /opt/exhibitor/conf 10.108.24.*

    I have also started the NFS server on all three machines:

        sudo /etc/init.d/nfs-kernel-server start

    And now when I try the mount, I get an error:

        root@machineA:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        mount.nfs: access denied by server while mounting 10.108.24.132:/opt/exhibitor/conf

    I have also tried the same thing from machineB and machineC, and I still get the same error:

        root@machineB:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        root@machineC:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/

    Does my /etc/exports file look good? I have the same content on all three machines. Also, are there any logs related to NFS which I can check for clues? Any idea what I am doing wrong here?

    UPDATE: So my /etc/exports file would be like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes  hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.132(rw)
        /opt/exhibitor/conf/ 10.108.24.133(rw)
        /opt/exhibitor/conf/ 10.108.24.134(rw)

    Just a quick check - the IP address that I am taking for each machine, as mentioned above, comes from ifconfig:

        root@machineB:/# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:50:56:ad:5b:a7
                  inet addr:10.108.24.133  Bcast:10.108.27.255  Mask:255.255.252.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5696812 errors:0 dropped:12462 overruns:0 frame:0
                  TX packets:5083427 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7904369145 (7.9 GB)  TX bytes:601844910 (601.8 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:187144 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:187144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:24012302 (24.0 MB)  TX bytes:24012302 (24.0 MB)

    Here the IP address that I am taking for machineB is 10.108.24.133.
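
    A hedged guess at the root cause, worth checking against exports(5): wildcard patterns such as 10.108.24.* are matched against client hostnames, not raw IP addresses, so clients without matching reverse DNS can get "access denied". An address/netmask (CIDR) entry avoids hostname matching entirely; given the Mask:255.255.252.0 shown above, the subnet would be a /22. Also, only the machine acting as the NFS server needs nfs-kernel-server and the export - the clients only need nfs-common to mount:

        # /etc/exports on the server (machineA)
        /opt/exhibitor/conf 10.108.24.0/22(rw,sync,no_subtree_check)

        # re-export and retry the mount
        sudo exportfs -ra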

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

        root /var/www;
        include php-fpm.conf;
        location ~ /\. {
            access_log off;
            log_not_found off;
            deny all;
        }

    php-fpm.conf:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            include fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_index index.php;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME   $document_root$fastcgi_script_name;
        #fastcgi_param SCRIPT_FILENAME  $request_filename;
        fastcgi_param SCRIPT_NAME       $fastcgi_script_name;
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT     $document_root;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE   nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;
        fastcgi_param SERVER_ADDR       $server_addr;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_NAME       $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS   200;
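
    For comparison, a minimal known-good layout (a sketch, not a diagnosis of the snippets above). One thing worth ruling out is another location block elsewhere in the full server config that swallows root-level requests before the ~ \.php$ regex is consulted:

        server {
            root /var/www;
            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.socket;
            }
        }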

  • Webmaster tools showing 404 for non existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. That URL doesn't exist, but domain.com/xyz/index.html does. The "linked from" pages all show proper links to /xyz/index.html. The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to have Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
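
    If the host is Apache (an assumption), one hedged fix is to make those URLs stop 404ing instead of fighting the crawler, since Google tends to probe the bare folder as a variant of /xyz/index.html regardless of links:

        # serve the index file for bare folder URLs
        DirectoryIndex index.html
        # or explicitly redirect the bare folder to the index file
        RedirectMatch 301 ^/(.+)/$ /$1/index.html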

  • Copying VBoxAdditions to usr/share/virtualbox folder

    - by Joe
    Since for some reason VBox does not find the Additions on the internet, I was trying to install them into the Ubuntu directory where VBox looks for them, which is /usr/share/virtualbox, but I am denied permission to do so. Any way around it? I am relatively new to Ubuntu (I know how to use the GUI, but am still learning how to talk to the machine proper, so many things will be new to me; I used to be a power user/analyst for MS Windows, 98-Vista, so I'm not a PC newbie, but I'd still say a Linux newbie). Any suggestion is more than welcome! Thanks, Joe
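
    A hedged sketch of the usual way around the permission error: /usr/share is root-owned, so the copy has to be elevated with sudo. The downloaded ISO's name and location are assumptions:

        # VirtualBox looks for /usr/share/virtualbox/VBoxGuestAdditions.iso
        sudo cp ~/Downloads/VBoxGuestAdditions_4.1.8.iso \
                /usr/share/virtualbox/VBoxGuestAdditions.iso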

  • xgettext output to specific folder

    - by John
    I am new to using the xgettext command, so I don't know what I am doing wrong. I put the command

        xgettext -n *.php -o --output='/home/public/sample'

    in my script, but I get an error:

        xgettext: cannot create output file "--output=/home/public/sample": No such file or directory

    But when I run xgettext -n *.php, a messages.po file gets created in my current directory. Is there a way to specify the location where the messages.po file is created?
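
    A hedged reading of the error: -o and --output are the same option, so -o consumed the literal string --output='/home/public/sample' as its file-name argument. The option also expects a file, not a directory - the directory belongs in --output-dir (-p):

        # write messages.po under /home/public/sample
        xgettext -n *.php -p /home/public/sample -o messages.po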

  • How to make a launcher that will first navigate to a folder and then execute a command that resides in normal /usr/bin/

    - by Nirmik
    Okay, this question is basically about using Grive, the Linux client for Google Drive. Details on how to set it up are here. The thing is, every time I want the folder to sync, I have to navigate to the Google Drive folder and then execute the grive command. I want to make it simple: I want to make a launcher (I know how to make a *.desktop file). But in a .desktop file you always give the path to an executable file (generally a .sh). Here, there is no script in the Grive folder; the app is, as usual, /usr/bin/grive. Now how do I make the launcher first navigate to the grive folder and then execute the grive command? Thanx :)
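
    A hedged sketch of the launcher: the Desktop Entry spec has a Path key that sets the working directory before Exec runs, so no wrapper script is needed. The folder location below is an assumption:

        [Desktop Entry]
        Type=Application
        Name=Grive Sync
        # working directory to change into before running Exec
        Path=/home/nirmik/Google Drive
        Exec=/usr/bin/grive
        Terminal=true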

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another. So far, this is a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; using such backup programs results in the hard links being treated as multiple files and copied as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Anyone have any idea how to do this?
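
    One hedged take on the server/client idea: run rsync as a daemon on the destination machine (e.g. via cwRsync or a Cygwin build) so both ends operate on a real filesystem rather than an SMB share; -H can then re-create the hard links on the far side. The host and module names are hypothetical:

        # on the source machine, push to the destination's rsync daemon
        rsync -aH --delete /cygdrive/d/data/ rsync://backup-host/data/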

  • .htaccess ReWrite wildcard folder paths from host

    - by JHuangweb
    My desired result is to rewrite a URL containing any number of leading folders down to a file at the root /. For example:

        www.host.com/a/b/c/e/f/g/images/1.jpg  ->  www.host.com/images/1.jpg

    where a~g are not always given. This is what I have so far, mapping www.host.com/a/images to www.host.com/images:

        RewriteRule ^a\/images/$ images/$1 [L]

    What I need is a wildcard in front of /images/, like this:

        RewriteRule ^*/images/$ images/$1 [L]

    How can I do this correctly in .htaccess?
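
    A hedged sketch of the rule: the pattern needs a capture group to feed $1, and requiring at least one leading folder keeps already-rewritten /images/... URLs from matching again:

        RewriteRule ^.+/images/(.+)$ /images/$1 [L]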

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Win XP I never touched the permissions of a file/folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders - and I am not talking about system folders like 'Program Files' but normal folders like 'C:\my data\my own folder\program folder'. I see that folders created under Win XP have some user groups that do not exist on 'normal' folders (folders created by me recently under Windows 7). For example, for a Win XP folder I have:

        Creator owner
        System
        Account unknown (S-1-5-21 blablabla...
        Admins
        Users

    For Win7 folders I have:

        Authenticated users
        System
        Admins
        Users

    How should I proceed? Should I give the "Users" account the right to write to XP folders? Should I make the old (XP) folders have the same groups of users as the normal (Win7) ones by adding the "Authenticated users" account to those folders? Should I delete the "Account unknown" account from my system? (In this case, how?) Many thanks.

  • Samba user does not have folder read permission

    - by user289455
    I have set up a special user for read-only Samba shares. I set him up in Samba and as a system user. I shared a couple of folders, but that user cannot access them. I know Samba is working because I also shared them with the main user of the system, which is an admin account, and it works fine. How can I allow this user to have read permission on all the directories I want to share, without changing anything for any other users of the system? For example, I don't want to give him ownership of any of the files/directories - just ongoing, recursive read access. Ongoing and recursive are important: if someone adds a file or directory, I still want him to automatically be able to read it.
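
    A hedged sketch using POSIX ACLs, which grant one user read access without touching ownership or any other user's permissions; the default (-d) ACL covers files and folders created later, which is the "ongoing" part. The share path and user name are assumptions, and the filesystem must be mounted with ACL support:

        # existing files and directories (X = traverse directories only)
        sudo setfacl -R -m u:backupuser:rX /srv/share
        # default ACL so future files inherit the same read access
        sudo setfacl -R -d -m u:backupuser:rX /srv/share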

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for our PHP TestCoverage tool for use on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default uses a small PHP example source to show the user how our GUI and text report display the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user, to automate the TestCoverage display of the user's own PHP code. The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" read-only access. So in order to add our own folder as /var/www/SDTestCoverage we must change /var/www to "drwxrwxrwx" read-write access. So it seems our script (at least on Ubuntu) will need to:

    1. acquire and save the /var/www permissions, then
    2. sudo chmod 777 /var/www (to make it writable)
    3. mkdir -p /var/www/SDTestCoverage (create our folder under the document root)
    4. sudo chmod 777 /var/www/SDTestCoverage (make our subfolder writable)
    5. and finally restore the /var/www permissions

    Thanks, and our questions are:

    1. Is this the standard way (on Ubuntu) one adds a writable folder under the PHP document root?
    2. Is this the most general-purpose way one adds a writable folder under the PHP document root on other versions of Linux?
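
    For comparison, a hedged alternative that avoids opening /var/www itself: create the subfolder as root, then loosen only that subfolder, so the document root's permissions never need to be saved and restored. The www-data user is Debian/Ubuntu-specific - an assumption for other distributions:

        sudo mkdir -p /var/www/SDTestCoverage
        sudo chown "$USER": /var/www/SDTestCoverage    # or www-data:www-data
        chmod 775 /var/www/SDTestCoverage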

  • /build folder used by PEAR

    - by Paul
    I've just noticed a root directory (/build) which I can't seem to find any information on. It looks like it's some sort of staging ground for PEAR (PHP). There are only two folders in it, for different PHP versions, and each of those holds a few tar files from PEAR packages I've installed (via the PEAR command line). I'm really only asking this question because I find it strange that PEAR (and only PEAR) would create its own root directory to store files. Is this normal? Does Ubuntu provide a /build directory for applications to use?
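
    A hedged way to check whether PEAR itself is configured to build or stage packages under that path (the grep just narrows the output to directory settings):

        pear config-show | grep -i dir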

  • SEO - folder or file [closed]

    - by ErmSo
    Possible Duplicate: Should I use a file extension or not?

    I'm creating a website with a number of pricing options. Each price plan has its own page, and there is also a comparison page. As far as SEO is concerned, which of the following is better, or does it not make a difference?

    Option one - folders:

        /pricing/plans
        /pricing/plans/free

    Option two - files:

        /pricing/plans.php
        /pricing/free-plan.php

  • See number of SVN Checkins per folder

    - by Farseeker
    I have a very large SVN repository (a working copy of several GB) that has just reached its 20,000th checkin. As a bit of an interesting statistic for our team (and to partly celebrate our 20,000th checkin), I'd like to make a graph showing which folders in the repository have had the most checkins. Is there any way to do this? We mostly use the integrated SVN clients in our IDEs plus TortoiseSVN, but I'm willing to get other tools for this one-off thing.
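
    A hedged one-off sketch using the command-line client, run against the repository URL rather than the working copy; the two-component path cut and the quirks of the verbose log format make this approximate, but likely good enough for a celebration graph:

        # count changed paths per first two path components, most active first
        svn log -v -q http://svn.example.com/repo \
          | grep -E '^   [AMDR] ' \
          | awk '{print $2}' \
          | cut -d/ -f1-3 \
          | sort | uniq -c | sort -rn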

  • Restrict user to folder (not root) on VSFTPD in Ubuntu

    - by omega1
    I am a new Linux (Ubuntu) user and have a VPS where I am setting up a backup FTP service. I have followed this guide, which I managed to complete correctly, and it works. I have two users (user1 and user2) with the same directory, /home/users/test; user1 can read/write and user2 can only read. This works OK. When the users log in, they go straight into the correct directory, /home/users/test, but they can then navigate back up to the home directory, which I do not want to happen. I cannot seem to find out how to disallow this and keep them from navigating back to the /home/ or /home/test/ directories.
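
    The usual vsftpd answer, as a hedged sketch for /etc/vsftpd.conf: chroot the local users so the shared directory becomes the top of the filesystem they can see. Note that recent vsftpd versions refuse a chroot directory that is writable by the user, which suits a read-only backup share but may mean adjusting the layout for user1's write access:

        # jail local users into their login directory
        chroot_local_user=YES
        # start (and stay) in the shared folder
        local_root=/home/users/test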
