Search Results

Search found 16768 results on 671 pages for 'folder compare'.

Page 33/671 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Files inside Alias folder not accessible

    - by John Isaacks
    In my apache2.conf I have an alias set up like this:

        Alias /cake/ /var/www-cake/repo
        <Directory /var/www-cake/repo>
            Order allow,deny
            Allow from all
            AllowOverride All
            Options +Indexes
        </Directory>

    Inside the /var/www-cake/repo directory I have just one file, index.php. When I go to http://linux-server/cake/ I get a directory listing that shows the index.php file. When I click on the file it takes me to http://linux-server/cake/index.php, where I get a 404 "page not found" error. What do I need to do to make the files accessible?
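
    A likely cause, offered as a hedged sketch rather than a confirmed fix: with Alias, a trailing slash on the URL path must be matched by a trailing slash on the filesystem path, otherwise /cake/index.php gets mapped to /var/www-cake/repoindex.php, which does not exist. Adding the slash (or dropping both) should let the file resolve:

        # Either keep both trailing slashes...
        Alias /cake/ /var/www-cake/repo/
        # ...or drop both
        Alias /cake /var/www-cake/repo

    Reload Apache after the change (e.g. sudo /etc/init.d/apache2 reload) and retest http://linux-server/cake/index.php.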

    Read the article

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1,2,3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new Portal framework in the Oracle stack - and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system - we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. You'll define a connection to your content repository, and then you'll be able to add a bunch of pre-built WebCenter taskflows into your page. And suddenly you can upload/download/create and view documents directly from your application. Check it out:

    Read the article

  • Set user group and permissions of ftp folder back to default

    - by OrangeTux
    Argh, I tried to create a new FTP user via the command line, but I did something wrong: I can still connect to the server via FTP, but I can't see any files, no matter which user I log in with. ls -la shows:

        drwxr-xr-x 13 root ftp 4096 2012-03-30 09:47 .
        drwxr-xr-x  7 web6 ftp 4096 2012-03-26 09:28 ..
        drwxr-xr-x  4 web6 ftp 4096 2012-03-26 13:31 actions
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 11:46 bin
        -rwxr-xr-x  1 web6 ftp 1520 2012-03-24 23:32 changelog.txt
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 13:30 css
        drwxr-xr-x  8 web6 ftp 4096 2012-03-24 22:43 external
        -rwxr-xr-x  1 web6 ftp  333 2012-03-26 15:12 .htaccess
        drwxr-xr-x  3 web6 ftp 4096 2012-02-27 15:07 images
        -rwxr-xr-x  1 web6 ftp 1606 2012-03-26 21:25 index.php
        drwxr-xr-x  2 web6 ftp 4096 2012-02-18 13:20 js
        drwxr-xr-x  2 web6 ftp 4096 2012-02-03 00:34 layout
        drwxr-xr-x  2 web6 ftp 4096 2012-03-29 23:35 library
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 09:47 log
        -rwxr-xr-x  1 web6 ftp  396 2012-03-24 15:04 menu.php
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 12:01 python
        drwxr-xr-x  2 web6 ftp 4096 2012-03-23 10:51 todo

    I can't see any directories or files, presumably because I changed the group owner, or the rights of the group owner, of the FTP directory. How can I set the ownership of the files back to default so I can access them via FTP again? Many thanks in advance.
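
    A hedged sketch of one way to put ownership back, assuming the tree should belong to the web6 user and the ftp group like the entries listed above (note the top-level "." is currently owned by root, which is probably what changed):

        # run as root on the server; the path is an assumption - use your real FTP root
        chown -R web6:ftp /var/www/ftp-root
        # if permissions were also changed, 755 for directories and 644 for plain files
        # is a common default for web content:
        find /var/www/ftp-root -type d -exec chmod 755 {} \;
        find /var/www/ftp-root -type f -exec chmod 644 {} \;

    If the FTP daemon chroots users, it is also worth checking its own configuration; some servers (e.g. vsftpd) refuse to enter a chroot directory whose ownership looks wrong.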

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my Wordpress upload folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight in there. After a while it was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (entries starting with + are folders), like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each one to \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the move (I don't have full access to the server, just cPanel and the Wordpress admin page)? What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
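
    A minimal sketch of one way to do the move, assuming you can upload a one-off PHP script into wp-content/uploads via cPanel's file manager and run it once by browsing to it (all names here are hypothetical, and the wp_posts update assumes the default wp_ table prefix):

        <?php
        // move-loose-uploads.php - run once from inside wp-content/uploads, then delete it
        $uploads = __DIR__;
        foreach (glob($uploads . '/*') as $path) {
            if (is_dir($path)) continue;                  // leave the existing YEAR folders alone
            $sub  = date('Y/m', filemtime($path));        // e.g. 2010/02, from the file's mtime
            $dest = $uploads . '/' . $sub;
            if (!is_dir($dest)) mkdir($dest, 0755, true);
            rename($path, $dest . '/' . basename($path));
            echo basename($path) . " -> $sub<br>\n";      // keep this list for the SQL step below
        }

    For the post links, a search-and-replace in the database (phpMyAdmin in cPanel), one pair per moved file, should repoint them:

        UPDATE wp_posts
        SET post_content = REPLACE(post_content,
            '/wp-content/uploads/Unstructured-file-1',
            '/wp-content/uploads/2010/02/Unstructured-file-1');

    Moving the whole uploads folder elsewhere is possible, but old posts keep absolute URLs and the upload path is stored in the settings, so the same kind of URL rewrite would be needed again.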

    Read the article

  • ~/.xinput.d folder is ignored in Ubuntu 13.04

    - by CaptSaltyJack
    It used to be that you could make a file ~/.xinput.d/en_US and put xinput commands in there, such as enabling drag lock. Now, for some reason, in 13.04 this does not work. Anyone know why this changed, and how to set these? I suppose I could just put the xinput commands in a script file and have it execute upon login. I'm just wondering why the old method stopped working. EDIT: Current file /etc/X11/xinit/xinput.d/en_US: xinput set-prop 17 316 1 xinput set-prop 17 317 350 But I've realized that for some reason, the touchpad ID changes. Right now it's 15. Also, the actual properties such as "Drag Lock" can change. So this method doesn't work.
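
    A hedged workaround sketch for the "IDs keep changing" part: xinput also accepts device and property names, which stay stable across reboots, so a login script can avoid the numeric IDs entirely (the device name below is an assumption - check yours with "xinput list" and "xinput list-props <device>"):

        #!/bin/sh
        # e.g. saved as ~/bin/touchpad-draglock.sh and added as a Startup Application
        xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Locked Drags" 1
        xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Locked Drags Timeout" 350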

    Read the article

  • Compare 2 sets of data in Excel and return a value when multiple columns match

    - by Susan C
    I have a data set for employees that contains name and 3 attributes (job function, job grade and location). I then have a data set for open positions that contains the requisition number and the same 3 attributes (job function, job grade and job location). For every employee, I would like the three attributes associated with them compared to the three attributes of the open positions, and have the corresponding requisition numbers displayed for each employee where there is a match.
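
    One hedged way to do this with plain worksheet formulas, assuming employees are on Sheet1 (name in A, the three attributes in B:D) and open positions are on Sheet2 (requisition number in A, attributes in B:D) - the sheet and column layout is an assumption, adjust to your workbook:

        Sheet2, helper column E2 (copied down):  =B2&"|"&C2&"|"&D2
        Sheet1, helper column E2 (copied down):  =B2&"|"&C2&"|"&D2
        Sheet1, result column F2 (copied down):  =IFERROR(INDEX(Sheet2!$A:$A, MATCH(E2, Sheet2!$E:$E, 0)), "no match")

    The helper columns turn the three attributes into a single key, and INDEX/MATCH returns the requisition number of the first position whose key matches; if several requisitions match one employee, this only shows the first one.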

    Read the article

  • How does BTRFS compare to ZFS?

    - by Zubair
    I am considering which OS and filesystem to use on some new servers I have, and am considering either FreeBSD with ZFS, or Linux with BTRFS. The programs I have will run on both systems, so the only issues are the reliability and performance of the filesystems.

    Read the article

  • Access denied error while mounting a shared folder?

    - by SSH
    I am a Linux newbie and I have a very basic question. I have three machines:

        machineA 10.108.24.132
        machineB 10.108.24.133
        machineC 10.108.24.134

    All of them have Ubuntu 12.04 installed and I have root access to all three. I am supposed to do the following on these machines: create the mount point /opt/exhibitor/conf and mount the directory on all servers with

        sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

    I have already created the /opt/exhibitor/conf directory on all three machines as mentioned above. Now I am trying to create the mount point, so I followed this process. Install the NFS support files and NFS kernel server on all three machines:

        $ sudo apt-get install nfs-common nfs-kernel-server

    Create the shared directory on all three machines:

        $ mkdir /opt/exhibitor/conf/

    Edit /etc/exports and add an entry like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        # to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.*(rw)

    Run exportfs on all three machines:

        root@machineA:/# exportfs -rv
        exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.108.24.*:/opt/exhibitor/conf/".
          Assuming default behaviour ('no_subtree_check').
          NOTE: this default has changed since nfs-utils version 1.0.x
        exporting 10.108.24.*:/opt/exhibitor/conf

    Then I ran showmount on machineA:

        root@machineA:/# showmount -e 10.108.24.132
        Export list for 10.108.24.132:
        /opt/exhibitor/conf 10.108.24.*

    I have also started the NFS server like this on all three machines:

        sudo /etc/init.d/nfs-kernel-server start

    And now, when I try the mount, I get an error:

        root@machineA:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        mount.nfs: access denied by server while mounting 10.108.24.132:/opt/exhibitor/conf

    I have also tried the same thing from machineB and machineC and I still get the same error:

        root@machineB:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        root@machineC:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/

    Does my /etc/exports file look good? I have the same content on all three machines. Are there any NFS-related logs I can check for clues? Any idea what I am doing wrong here?

    UPDATE: So my /etc/exports file would be like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        # to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.132(rw)
        /opt/exhibitor/conf/ 10.108.24.133(rw)
        /opt/exhibitor/conf/ 10.108.24.134(rw)

    Just a quick check: the IP address I am using for each machine is taken like this:

        root@machineB:/# ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:50:56:ad:5b:a7
                  inet addr:10.108.24.133  Bcast:10.108.27.255  Mask:255.255.252.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:5696812 errors:0 dropped:12462 overruns:0 frame:0
                  TX packets:5083427 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7904369145 (7.9 GB)  TX bytes:601844910 (601.8 MB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:187144 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:187144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:24012302 (24.0 MB)  TX bytes:24012302 (24.0 MB)

    Here the IP address I am using for machineB is 10.108.24.133.
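
    A hedged guess at the cause, offered as a sketch rather than a confirmed answer: exports(5) wildcards are intended for host names, and a pattern like 10.108.24.* is not reliably matched against client IP addresses, which produces exactly this "access denied by server" error. Using subnet notation (and spelling out sync/no_subtree_check to silence the warnings) usually works; the /22 mask below is taken from the ifconfig output (Mask:255.255.252.0) and is an assumption about your network:

        # /etc/exports on the machine that actually exports the directory
        /opt/exhibitor/conf 10.108.24.0/22(rw,sync,no_subtree_check)

        # then re-export and retry the mount
        sudo exportfs -rav
        sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/

    If it still fails, /var/log/syslog on the server usually records the mountd refusal and names the client it rejected.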

    Read the article

  • Active Directory problems while trying to perform compare operation

    - by Alex
    I have CentOS 5.5 with Apache 2.2 and SVN installed. I also have Windows 2003 R2 with Active Directory. I'm trying to authorize users via AD so that each user has access to a repo if he is a member of the corresponding group in AD. Here is my Apache config:

        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so
        LDAPVerifyServerCert off
        ServerName svn.mydomain.com
        DocumentRoot /var/www/svn.mydomain.com/htdocs
        RewriteEngine On
        <Location />
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative on
            AuthLDAPURL ldaps://comp1.mydomain.com:636/DC=mydomain,DC=com?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN [email protected]
            AuthLDAPBindPassword binduserpassword
        </Location>
        <Location /repos/test>
            DAV svn
            SVNPath /var/svn/repos/test
            AuthName "SVN repository for test"
            Require ldap-group CN=test,CN=ProjectGroups,DC=mydomain,DC=com
        </Location>

    When I'm using "Require valid-user" everything goes fine, and "Require ldap-user" also works. But as soon as I use "Require ldap-group", authorization fails. There are no errors in the Apache logs, but Active Directory shows the following events:

        Event Type: Information
        Event Source: NTDS LDAP
        Event Category: LDAP Interface
        Event ID: 1138
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal event: Function ldap_compare entered.

        Event Type: Error
        Event Source: NTDS General
        Event Category: Internal Processing
        Event ID: 1481
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal error: The operation on the object failed.
        Additional Data
        Error value: 2 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'DC=mydomain,DC=com'

    I'm confused by this problem. What am I doing wrong?
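
    A hedged line of attack, not a confirmed fix: the NO_OBJECT result with "best match of: 'DC=mydomain,DC=com'" suggests that the DN in the Require ldap-group line does not actually exist, so the ldap_compare has nothing to compare against (for instance, ProjectGroups might be an OU rather than a CN container). A quick way to confirm the group's real DN from the CentOS box, using the same bind account (shown as MYDOMAIN\binduser in the event log; names below are assumptions):

        ldapsearch -x -H ldaps://comp1.mydomain.com:636 \
          -D "binduser@mydomain.com" -w binduserpassword \
          -b "DC=mydomain,DC=com" "(sAMAccountName=test)" dn

    Whatever DN comes back is what the directive needs verbatim, e.g.:

        Require ldap-group CN=test,OU=ProjectGroups,DC=mydomain,DC=com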

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

        root /var/www;
        include php-fpm.conf;
        location ~ /\. {
            access_log off;
            log_not_found off;
            deny all;
        }

    php-fpm.conf:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            include fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_index index.php;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
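
    A hedged sketch for comparison, not the poster's confirmed fix: a .php file being sent as a download means no "location ~ \.php$" block matched in the server block that actually answered the request. A minimal known-good layout looks like this; if the php-fpm.conf include only appears in some other vhost (or only inside a nested location), requests for /test.php in this one fall through to the static file handler:

        server {
            listen 443 ssl;               # ssl_certificate/_key directives omitted in this sketch
            server_name example.net;
            root /var/www;
            index index.php index.html;

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.socket;
            }

            location ~ /\. {
                deny all;
            }
        }

    Checking every server block that could match example.net (nginx -T on newer builds, or the sites-enabled files by hand on older ones) is usually the quickest way to find which vhost is serving the download.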

    Read the article

  • Webmaster tools showing 404 for non-existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. However, that doesn't exist, but domain.com/xyz/index.html does exist. The "linked from" pages all show proper links to "/xyz/index.html". The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to have Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
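
    A hedged option, assuming the site runs on Apache and you would rather serve the pages than suppress the reports: crawlers sometimes probe the parent directory of URLs they know about, so making domain.com/xyz/ answer with the index instead of a 404 makes the errors go away on their own. One line in the site config or .htaccess does it:

        # serve index.html whenever a directory URL like /xyz/ is requested
        DirectoryIndex index.html

    Alternatively, a 301 redirect from /xyz/ to /xyz/index.html has the same effect; either way the crawl-error entries age out once the URLs stop returning 404.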

    Read the article

  • Sharing a Samba folder with root access

    - by Industrial
    Hi everyone, I have a staging server in my network running Ubuntu Server 10.10, which is my main development area. As I need to access the files in the Apache root from other computers in the network, I have set up Samba with the following settings:

        [www]
        comment = Apache root www
        path = /var/www
        writable = yes
        force user = root
        force group = root

    On the host computer, running Ubuntu 10.10 desktop, I am trying to mount the share with a bash file looking like this:

        #!/bin/bash
        sudo mount -t cifs //192.168.1.5/www /media/www/ -o username=myusername,password=mypassword,rw,iocharset=utf8,file_mode=0777,dir_mode=0777

    What happens is that I get

        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    thrown in my face whilst trying to execute the mount. I've done exactly the same, with exactly the same smb.conf and mount script, on another computer in my network, but this just won't work. What am I doing wrong? I am running out of ideas.
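
    A hedged debugging sketch rather than a definite answer: error(13) from mount.cifs means the server refused the credentials or the share, so it helps to separate the two. First check that the account exists as a Samba user on the server (a Unix account alone is not enough), then test the share without mount options in the way:

        # on the server: make sure the user has a Samba password entry
        sudo smbpasswd -a myusername

        # from the client: list shares and try a plain smbclient session
        smbclient -L //192.168.1.5 -U myusername
        smbclient //192.168.1.5/www -U myusername

    If smbclient gets in but mount.cifs still fails, adding a sec= option (for example sec=ntlm) to the -o list is a common next step; if smbclient is refused too, the problem is in smb.conf or the Samba user database rather than in the mount command.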

    Read the article

  • Fuji camera "mounts" but folder not in Dolphin after Kubuntu 13.10 upgrade

    - by user207207
    The Fuji camera mount is reported in attached devices but is not visible in Dolphin after the Kubuntu 13.10 upgrade. I have reinstalled the driver and tried a few other suggestions made for other cameras whose mounts failed on previous Ubuntu upgrades. I have already spent a couple of hours trying to get my photos off the camera, which is very annoying. It worked perfectly in 11.04, 11.10, 12.04, 12.10 and 13.04.

    dmesg | tail; lsusb; lsb_release -a:

        [ 6181.858786] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90009400000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396236] CPUM: APIC 00 at 00000000fee00000 (mapped at ffffc90000c6a000) - ver 0x80050010, lint0=0x10700 lint1=0x00400 pc=0x00400 thmr=0x10000
        [17261.396239] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90000c72000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396241] CPUM: APIC 02 at 00000000fee00000 (mapped at ffffc90000c70000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396255] CPUM: APIC 01 at 00000000fee00000 (mapped at ffffc90000c6e000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [32456.884907] usb 2-5: new high-speed USB device number 2 using ehci-pci
        [32457.654046] usb 2-5: New USB device found, idVendor=04cb, idProduct=01e8
        [32457.654050] usb 2-5: New USB device strings: Mfr=0, Product=2, SerialNumber=3
        [32457.654052] usb 2-5: Product: Digital Camera
        [32457.654053] usb 2-5: SerialNumber: 4C3230302020091117CAA59WP18548
        Bus 002 Device 002: ID 04cb:01e8 Fuji Photo Film Co., Ltd
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 003: ID 2013:024f PCTV Systems nanoStick T2 290e
        Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 13.10
        Release:        13.10
        Codename:       saucy

    After sudo apt-get install gvfs-bin:

        gvfs-mount gphoto2://[usb:002,002]
        Error mounting location: Error initializing camera: -108: No such file or directory

    I have reported a bug in Dolphin, which has been transferred to Solid. Further information: I ran solid-hardware list details:

        udi = '/org/kde/solid/udev/sys/devices/pci0000:00/0000:00:04.1/usb2/2-5'
          parent = '/org/kde/solid/udev'  (string)
          vendor = '04cb'  (string)
          product = 'Digital Camera'  (string)
          description = 'Camera'  (string)
          Block.major = 189  (0xbd)  (int)
          Block.minor = 137  (0x89)  (int)
          Block.device = '/dev/bus/usb/002/010'  (string)
          Camera.supportedProtocols = {'ptp'}  (string list)
          Camera.supportedDrivers = {'gphoto'}  (string list)

    I still can't get my photos off, though I can see the folders using the GIMP menu. If anyone has any ideas, I'm willing to try them.
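
    A hedged fallback to at least recover the photos, assuming the camera speaks PTP (which the solid-hardware output suggests): the gphoto2 command-line client talks to the camera directly, bypassing Dolphin/gvfs. The desktop's own gvfs monitor sometimes keeps the device busy, so releasing it first can help (the unmount step is an assumption, skip it if it errors):

        sudo apt-get install gphoto2
        gvfs-mount -s gphoto2          # ask gvfs to let go of the camera
        gphoto2 --auto-detect          # should list the Fuji on a usb: port
        mkdir ~/camera-dump && cd ~/camera-dump
        gphoto2 --get-all-files        # copies every image into the current directory

    If --auto-detect sees nothing, re-plugging the camera and checking that it is set to PTP mode (not mass storage) in its own menu is worth a try.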

    Read the article

  • Copying VBoxAdditions to /usr/share/virtualbox folder

    - by Joe
    Since for some reason VBox does not find the Additions on the internet, I was trying to install them in the Ubuntu directory VBox looks for them in, which is /usr/share/virtualbox, but I am denied permission to do so. Any way around it? I am relatively new to Ubuntu (I know how to use the GUI, but am still learning how to talk to the machine properly, so many things will be new to me; I used to be a power user/analyst for MS Windows, 98-Vista, so not a PC newbie, but still I'd say a Linux newbie). Any suggestion is more than welcome! Thanks, Joe
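
    Two hedged ways around the permission error, assuming VirtualBox was installed from the Ubuntu repositories: /usr/share/virtualbox is owned by root, so a normal copy is refused, but sudo (or installing the package that ships the ISO) gets the file there:

        # Option 1: let the package manager place the Guest Additions ISO for you
        sudo apt-get install virtualbox-guest-additions-iso
        # it installs /usr/share/virtualbox/VBoxGuestAdditions.iso

        # Option 2: copy a downloaded ISO there yourself with root rights
        sudo cp ~/Downloads/VBoxGuestAdditions.iso /usr/share/virtualbox/

    The ~/Downloads path in option 2 is just an example; use wherever the ISO actually is.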

    Read the article

  • xgettext output to specific folder

    - by John
    I am new to the xgettext command, so I don't know what I am doing wrong. I put the command

        xgettext -n *.php -o --output='/home/public/sample'

    in my script, but I get an error:

        xgettext: cannot create output file "--output=/home/public/sample": No such file or directory

    But when I run xgettext -n *.php, a messages.po file gets created in my current directory! Is there a way to specify the location where the messages.po file is created?
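
    A hedged reading of the error: -o and --output are the same option, so xgettext treats the literal string "--output=/home/public/sample" as the file name to create, which fails. Passing the option once, with a full file name (and with the directory already existing), should do what was intended:

        # either form, not both; /home/public/sample is assumed to be a directory
        xgettext -n *.php -o /home/public/sample/messages.po
        xgettext -n *.php --output=/home/public/sample/messages.po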

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro. I want to synchronize/back up a folder from one machine to another. So far it's a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links, and using such backup programs results in the hard links being treated as multiple files and copied as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links over a share will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
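
    One hedged sketch of the server/client idea: rsync itself can play both roles if it runs on both machines (on XP typically via a cwRsync/Cygwin build), so hard-link detection happens locally on each side instead of through the share. Roughly, an rsync daemon on the destination exports a module, and the source machine pushes to it with -H; all names and paths below are placeholders:

        # rsyncd.conf on the destination (run "rsync --daemon" as a service)
        [backup]
            path = /cygdrive/d/backup
            read only = false

        # on the source machine
        rsync -aH --delete /cygdrive/c/data/ rsync://destination-host/backup/

    The -a keeps attributes and -H recreates the hard links on the destination; whether NTFS hard links survive exactly depends on the Cygwin build, so a small test folder first is worth it.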

    Read the article

  • .htaccess ReWrite wildcard folder paths from host

    - by JHuangweb
    My desired result is to rewrite a file path to the root / from any number (N) of nested paths. For example: www.host.com/a/b/c/e/f/g/images/1.jpg, where a~g is not always given. Result: www.host.com/images/1.jpg. This is what I have so far, mapping www.host.com/a/images to www.host.com/images:

        RewriteRule ^a\/images/$ images/$1 [L]

    What I need is a wildcard in front of /images/, something like this:

        RewriteRule ^*/images/$ images/$1 [L]

    How can I do this correctly in .htaccess?
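
    A hedged sketch of a rule that should do it (untested against this exact host setup): capture everything after the last /images/ and require at least one character before it, so the rewritten URL itself no longer matches and cannot loop:

        RewriteEngine On
        # /anything/.../images/<file>  ->  /images/<file>
        RewriteRule ^.+/images/(.*)$ /images/$1 [L]

    In the original attempt the pattern ends in /$ (so it only matches URLs ending in a slash) and $1 is used without any capturing group, which is why nothing useful comes out of it.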

    Read the article

  • Subversion: How to compare differences between incoming changes?

    - by misbehavens
    I would like to see the changes that my co-workers have made before I accept the incoming changes. So I start by getting the status:

        svn st -u

    ...which tells me that I've got incoming changes:

                *   9803   incomingChanges.html
        M           9803   localChanges.html
        M       *   9803   localAndIncoming.html

    I can see what I've changed with:

        svn diff localChanges.html

    ...but how can I diff incomingChanges.html and/or localAndIncoming.html to show what has been changed, and how it differs from my working copy?
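
    A hedged sketch of the usual approach: the incoming change is the difference between your working copy's BASE revision and HEAD in the repository, so svn can show it without updating first:

        # what would arrive for a single file
        svn diff -r BASE:HEAD incomingChanges.html
        svn diff -r BASE:HEAD localAndIncoming.html

        # or the whole incoming changeset at once
        svn diff -r BASE:HEAD .

    For localAndIncoming.html this shows repository changes only; your own uncommitted edits still show up separately via plain "svn diff".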

    Read the article

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Win XP I never touched the permissions of a file/folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders - and I am not talking about system folders like 'Program Files', but normal folders like 'C:\my data\my own folder\program folder'. I see that folders created under Win XP have some user groups that do not exist on 'normal' folders (folders created by me recently under Windows 7). For example, for a Win XP folder I have:

        Creator owner
        System
        Account unknown (S-1-5-21 blablabla...
        Admins
        Users

    For Win7 folders I have:

        Authenticated users
        System
        Admins
        Users

    How should I proceed? Should I give the "Users" group the right to write to the XP folders? Should I make the old (XP) folders have the same groups as the normal (Win7) ones by adding the "Authenticated users" group to them? Should I delete the "Account unknown" entry from my system? (In that case, how?) Many thanks.
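
    A hedged, commonly suggested cleanup rather than a definitive answer: the "Account unknown (S-1-5-21...)" entries are SIDs from the old XP install that no longer resolve, and the simplest fix is usually to take ownership of the old data folders under the Windows 7 account and let them inherit fresh ACLs, instead of editing group by group. From an elevated command prompt (the path is an example):

        takeown /F "C:\my data" /R /D Y
        icacls "C:\my data" /reset /T

    takeown makes the current (admin) user the owner of the tree, and icacls /reset replaces the per-folder ACLs with the inherited defaults, which brings back the same "Authenticated users/System/Admins/Users" set the new Win7 folders have.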

    Read the article

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script for our PHP TestCoverage Tool for use on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default uses a small PHP example source to show the user how our GUI and text report show the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user to automate the TestCoverage display of the user's own PHP code. The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" read-only access. So in order to add our own folder as /var/www/SDTestCoverage we must change /var/www to "drwxrwxrwx" read-write access. So it seems our script (at least on Ubuntu) will need to:

        1. acquire and save the /var/www permissions, then
        2. sudo chmod 777 /var/www                  (to make it writable)
        3. mkdir -p /var/www/SDTestCoverage         (create our folder under the document root)
        4. sudo chmod 777 /var/www/SDTestCoverage   (make our subfolder writable)
        5. and finally restore the /var/www permissions

    Thanks, and our questions are:

        1. Is this the standard way (on Ubuntu) one adds a writable folder under the PHP document root?
        2. Is this the most general-purpose way one adds a writable folder under the PHP document root on other versions of Linux?
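
    A hedged alternative sketch that avoids opening /var/www to the world: create the subfolder as root and hand it to whichever account needs to write into it, leaving the document root itself untouched. On Debian/Ubuntu the Apache/PHP user is www-data; other distributions use apache or httpd, so the script would have to detect or parameterise that:

        #!/bin/sh
        DOCROOT=/var/www
        TARGET="$DOCROOT/SDTestCoverage"
        WEBUSER=www-data                 # assumption: Debian/Ubuntu; use apache/httpd elsewhere

        sudo mkdir -p "$TARGET"          # only the new folder is created, /var/www stays 755
        sudo chown "$WEBUSER":"$WEBUSER" "$TARGET"
        sudo chmod 775 "$TARGET"         # writable by owner and group, readable by others

    This keeps the chmod 777 window on /var/www out of the picture entirely, which also avoids a race where something else writes into the document root while it is world-writable.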

    Read the article

  • Samba user does not have folder read permission

    - by user289455
    I have set up a special user for read-only Samba shares. I set him up in Samba and as a system user. I shared a couple of folders, but that user cannot access them. I know Samba is working because I also shared them with the main user of the system, which is an admin account, and that works fine. How can I allow this user to have read permission on all the directories I want to share, without changing anything for any other users of the system? For example, I don't want to give him ownership of any of the files/directories - just ongoing, recursive read access. Ongoing and recursive is important: if someone adds a file or directory, I still want him to automatically be able to read it.
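
    A hedged way to get exactly that, assuming the filesystem supports POSIX ACLs (ext4 does, though the acl mount option may need enabling): grant the user a read-only ACL on the tree, plus a matching default ACL so anything created later inherits it, and ownership never changes. The share path and username are placeholders:

        sudo apt-get install acl                            # if setfacl is not present (Debian/Ubuntu package name)
        sudo setfacl -R    -m u:sambareader:rX /srv/share   # read + traverse on what exists now
        sudo setfacl -R -d -m u:sambareader:rX /srv/share   # default ACL: future files inherit it

    The capital X grants execute only on directories (so they can be entered) and not on plain files. On the Samba side, "read only = yes" in the share definition then keeps the account from writing even if an ACL ever slips.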

    Read the article

  • Compare Error logs across service fleet, find unique errors

    - by neuroelectronic
    I'm looking for a tool where I can list the servers to check and the location of the log file, and it would return a list of the most common errors across those servers (say 2 or 3 servers for report brevity), giving a report something like this:

        Server.A      Server.B      Server.C
        --------      --------      --------
        42 error.X    39 error.X    61 error.X
        21 error.Y     7 error.Y     5 error.A
        17 error.B     6 error.A     4 error.Y
         4 error.A     2 error.R     3 error.S
         3 error.R     1 error.S     1 error.R

    Of course, excluding timestamps and other error details and just grepping out the common sub-strings and listing them like so. I'd be able to look at the table, see that error.B is unique to Server.A, and conclude that there is something up with Server.A. Does something like this already exist? Is this something I'll have to code myself? I'm not necessarily looking for this specific report, just the functionality to find unique errors across a set of error logs.
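
    If it does come down to coding it, a hedged starting point is a small shell loop that counts the error tokens per server; the host names, log path and the regex for what counts as "the error" are all assumptions to adapt:

        #!/bin/sh
        LOG=/var/log/app/error.log
        for h in server-a server-b server-c; do
            echo "== $h =="
            ssh "$h" "grep -oE 'error\.[A-Z]' '$LOG'" | sort | uniq -c | sort -rn
        done

    Comparing the per-server lists (e.g. with comm on the sorted unique error names) then highlights the errors that only one server produces.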

    Read the article

  • Record Matching Software to Compare two tables and match on a percentage basis

    - by Crazyd
    So I have a table with Name, Address, and Zip with no other record data attached, and I have a table which has all the same columns but more information, and I need a way to merge the tables when they don't match 100%. How do I match them up if they aren't identical? I'm a newb at SQL, but I know they won't match up exactly for the most part, and I can't be the only one with this issue. However, finding software which will do this has proven to be difficult, and writing the software myself would be even worse than doing the matching in the first place. I know I can kind of do this in Excel, but with the number of records I have (over a million) it's proving to be difficult.
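
    A hedged SQL-only sketch of the usual first pass, assuming a SQL Server-style database (table and column names are placeholders): block on the column most likely to be exact (Zip), then keep candidate pairs whose Name and Address sound alike. DIFFERENCE() returns 0-4, with 4 meaning the SOUNDEX codes match fully:

        SELECT s.Name, s.Address, s.Zip,
               b.*                      -- the extra detail from the bigger table
        FROM   SmallTable s
        JOIN   BigTable   b ON b.Zip = s.Zip
        WHERE  DIFFERENCE(s.Name,    b.Name)    >= 3
          AND  DIFFERENCE(s.Address, b.Address) >= 3;

    This is crude (SOUNDEX mostly reflects the leading sound of a string), so true percentage-style matching usually means a Levenshtein/Jaro-Winkler function or a dedicated de-duplication tool, but blocking on Zip first keeps even a million-row comparison manageable.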

    Read the article

  • /build folder used by PEAR

    - by Paul
    I've just noticed a root directory (/build) which I can't seem to find any information for. It looks like it's some sort of staging ground for PEAR (PHP). There are only two folders, for different PHP versions, in it, and each of those has a few PEAR tar files I've installed (via the PEAR command line). I'm really only asking this question because I find it strange that PEAR (and only PEAR) would create its own root directory to store files. Is this normal? Does Ubuntu provide a /build directory for applications to use?
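
    A hedged way to check whether PEAR itself is responsible: PEAR records its working directories in its configuration, so listing them shows whether any of them point at /build:

        pear config-show | grep -i dir      # look for temp_dir / download_dir style entries

    If one of them does, pear config-set can move it somewhere more conventional (for example a path under /tmp). Ubuntu itself does not reserve a /build directory in the filesystem standard, so if the PEAR configuration does not reference it either, some other build step (e.g. pecl compiling an extension) most likely created it.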

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >