Search Results

Search found 17202 results on 689 pages for 'folder permissions'.

Page 71/689 | < Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new portal framework in the Oracle stack - and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation, and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system - we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. Once you define a connection to your content repository, you'll be able to add a bunch of pre-built WebCenter taskflows into your page. And suddenly you can upload, download, create, and view documents directly from your application. Check it out:

    Read the article

  • Set user group and permissions of FTP folder back to default

    - by OrangeTux
    Argh, I tried to create a new FTP user via the command line, but I did something wrong, and now I can connect to the server via FTP but I can't see any files, no matter which user I log in as. ls -la shows:

        drwxr-xr-x 13 root ftp 4096 2012-03-30 09:47 .
        drwxr-xr-x  7 web6 ftp 4096 2012-03-26 09:28 ..
        drwxr-xr-x  4 web6 ftp 4096 2012-03-26 13:31 actions
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 11:46 bin
        -rwxr-xr-x  1 web6 ftp 1520 2012-03-24 23:32 changelog.txt
        drwxr-xr-x  2 web6 ftp 4096 2012-03-26 13:30 css
        drwxr-xr-x  8 web6 ftp 4096 2012-03-24 22:43 external
        -rwxr-xr-x  1 web6 ftp  333 2012-03-26 15:12 .htaccess
        drwxr-xr-x  3 web6 ftp 4096 2012-02-27 15:07 images
        -rwxr-xr-x  1 web6 ftp 1606 2012-03-26 21:25 index.php
        drwxr-xr-x  2 web6 ftp 4096 2012-02-18 13:20 js
        drwxr-xr-x  2 web6 ftp 4096 2012-02-03 00:34 layout
        drwxr-xr-x  2 web6 ftp 4096 2012-03-29 23:35 library
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 09:47 log
        -rwxr-xr-x  1 web6 ftp  396 2012-03-24 15:04 menu.php
        drwxr-xr-x  2 web6 ftp 4096 2012-03-30 12:01 python
        drwxr-xr-x  2 web6 ftp 4096 2012-03-23 10:51 todo

    I can't see any directories or files because I changed the group owner, or the group owner's rights, on the FTP directory. How can I set the ownership of the files back to default so I can access them via FTP again? Many thanks in advance.
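
    A plausible fix (a sketch, assuming web6 is the account that should own the tree - it already group-owns everything in the listing above - and that the path shown is a placeholder for the real web root): hand ownership back to that user and reset directories and files to the usual defaults.

        sudo chown -R web6:ftp /var/www/web6              # path and owner are assumptions; adjust to the real web root
        find /var/www/web6 -type d -exec chmod 755 {} +   # directories back to rwxr-xr-x
        find /var/www/web6 -type f -exec chmod 644 {} +   # files back to rw-r--r--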

    Read the article

  • On AWS EC2, unable to run sudo commands after modifying permissions on the /usr folder

    - by Kayote
    All, we have searched quite a bit, and a few of Eliah Kagan's posts are great for getting access back to sudo. However, our server is on AWS EC2 and I am a complete newbie to this. We are trying to set up cron jobs for backing up our server data. What we did: using PuTTY, we created a script file, /usr/share/site-db-backup/backupToS3.php; however, Ubuntu was not saving the changes we made, reporting that we did not have permission as user 'ubuntu'. Error details are: "Upload of file backupToS3.php was successful but error occurred while setting the permission &/ or timestamp. If the problem persists, turn on 'ignore permission errors' option. Permission denied. Error code: 3 Request code 9". So we ran "sudo chmod -R a+rwx /usr" to grant permission to the /usr folder. However, now whenever any sudo command is run, we get the error: "/usr/lib/sudo/sudoers.so must be only be writable by owner. fatal error, unable to load plugins." We are complete newbies to Ubuntu and EC2, so we need step-by-step guidance on how to get sudo back and successfully write to the cron script sitting in the /usr folder.
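
    For reference, the usual way out on EC2 when sudo is broken is to repair the filesystem from outside the instance: stop it, attach its root volume to a second (rescue) instance, fix the permissions, and reattach. A rough sketch, assuming the volume appears as /dev/xvdf1 on the rescue box; note that a blanket chmod -R a+rwx /usr also strips setuid bits elsewhere (passwd, pkexec, and friends), so a full repair may mean reinstalling affected packages - this only patches what sudo itself checks.

        # on the rescue instance, after attaching the broken instance's root volume
        sudo mkdir -p /mnt/rescue
        sudo mount /dev/xvdf1 /mnt/rescue
        sudo chmod 755 /mnt/rescue/usr                       # /usr itself back to rwxr-xr-x
        sudo chmod 644 /mnt/rescue/usr/lib/sudo/sudoers.so   # plugin writable only by owner
        sudo chmod 4755 /mnt/rescue/usr/bin/sudo             # restore the setuid bit sudo needs
        sudo umount /mnt/rescue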

    Read the article

  • Move the uploads folder in WordPress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located at \wp-content\uploads. Initially there was no structure, so all files were put directly there. After a while the setting was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files and folders (names starting with + are folders):

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, move each to \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the move (I don't have full access to the server, just to cPanel and to the WordPress admin page)? What must be updated so that the links in posts that reference the unstructured files point to the new location of those files? Not fully related to the previous description: is it all right to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)
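
    On the database side, one common approach (a hedged sketch; the wp_ table prefix, database name, credentials, and file names are all assumptions) is a REPLACE over post content and attachment GUIDs for each relocated file - runnable from a shell, or the statements can be pasted into cPanel's phpMyAdmin:

        mysql -u dbuser -p wp_database -e "
          UPDATE wp_posts SET post_content = REPLACE(post_content,
              'wp-content/uploads/Unstructured-file-1',
              'wp-content/uploads/2010/02/Unstructured-file-1');
          UPDATE wp_posts SET guid = REPLACE(guid,
              'wp-content/uploads/Unstructured-file-1',
              'wp-content/uploads/2010/02/Unstructured-file-1');"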

    Read the article

  • ~/.xinput.d folder is ignored in Ubuntu 13.04

    - by CaptSaltyJack
    It used to be that you could create a file, ~/.xinput.d/en_US, and put xinput commands in it, such as enabling drag lock. Now, for some reason, in 13.04 this does not work. Anyone know why this changed, and how to set these now? I suppose I could just put the xinput commands in a script file and have it execute upon login; I'm just wondering why the old method stopped working. EDIT: Current file /etc/X11/xinit/xinput.d/en_US:

        xinput set-prop 17 316 1
        xinput set-prop 17 317 350

    But I've realized that, for some reason, the touchpad ID changes; right now it's 15. Also, the numeric IDs of properties such as "Drag Lock" can change. So this method doesn't work.
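
    One workaround for the shifting IDs, as a login-script sketch: address the touchpad and its properties by name instead of number. The device and property names below are assumptions - check xinput list and xinput list-props for the real ones on your hardware.

        #!/bin/sh
        # resolve the current numeric id from the device name at each login
        ID=$(xinput list --id-only 'SynPS/2 Synaptics TouchPad')
        xinput set-prop "$ID" 'Synaptics Locked Drags' 1          # enable drag lock
        xinput set-prop "$ID" 'Synaptics Locked Drags Timeout' 350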

    Read the article

  • Do PHP-FPM (and other PHP handlers) need execute permissions on the PHP files they're serving?

    - by Andrew Cheong
    I read in a post at Server Fault that PHP-FPM needs execute permissions. However, the answer in "When creating a website, what permissions and directory structure?" only grants read and write permissions to PHP-FPM. Maybe I don't quite understand how PHP handlers (or CGI in general) work, but the two claims seem contradictory to me. As I understand it, when Apache / Nginx gets a request for foobar.php, it "passes" the file to an appropriate handler. That is, I imagine it's as if www-root (or apache, or whoever the web server is running as) were to run some command, /usr/sbin/php-fpm foobar.php. Actually, no, that's naive, I just realized: PHP-FPM must be a running instance (if it's to be performant, cache, etc.), so probably PHP-FPM is just being told, "Hey, quick, process this file for me!" In either case, I don't see why execute permissions are necessary. It's not as if the web server needs to literally execute the file, i.e. ./foobar.php. Is the Server Fault answer simply mistaken?
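
    One way to settle it empirically (a sketch, assuming the pool runs as www-data): drop the execute bit and request the page. Under PHP-FPM the handler only reads the script - it never exec()s it - so read permission for the pool user is enough, and the execute bit matters only on the directories along the path.

        sudo chown root:www-data /var/www/foobar.php
        sudo chmod 640 /var/www/foobar.php    # rw-r----- : readable by the pool user, no execute bit
        curl -I http://localhost/foobar.php   # still served; PHP-FPM reads and compiles the file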

    Read the article

  • Access denied error while mounting a shared folder?

    - by SSH
    I am a Linux newbie and I have a very basic question. I have three machines -

        machineA 10.108.24.132
        machineB 10.108.24.133
        machineC 10.108.24.134

    All three machines have Ubuntu 12.04 installed, and I have root access to all of them. Now I am supposed to do the following: create the mount point /opt/exhibitor/conf, then mount the directory on all servers with

        sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

    I have already created the /opt/exhibitor/conf directory on all three machines as mentioned above. Now I am trying to create a mount point on all three machines, so I followed the process below.

    Install the NFS support files and NFS kernel server on all three machines:

        $ sudo apt-get install nfs-common nfs-kernel-server

    Create the shared directory on all three machines:

        $ mkdir /opt/exhibitor/conf/

    Edit /etc/exports and add an entry like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        #
        # Example for NFSv2 and NFSv3:
        # /srv/homes  hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
        #
        # Example for NFSv4:
        # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
        # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
        #
        /opt/exhibitor/conf/ 10.108.24.*(rw)

    Run exportfs on all three machines:

        root@machineA:/# exportfs -rv
        exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "10.108.24.*:/opt/exhibitor/conf/".
          Assuming default behaviour ('no_subtree_check').
          NOTE: this default has changed since nfs-utils version 1.0.x
        exporting 10.108.24.*:/opt/exhibitor/conf

    Then I ran showmount on machineA:

        root@machineA:/# showmount -e 10.108.24.132
        Export list for 10.108.24.132:
        /opt/exhibitor/conf 10.108.24.*

    And I have also started the NFS server like this on all three machines:

        sudo /etc/init.d/nfs-kernel-server start

    And now when I try the mount, I get an error:

        root@machineA:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        mount.nfs: access denied by server while mounting 10.108.24.132:/opt/exhibitor/conf

    I have also tried the same thing from machineB and machineC, and I still get the same error:

        root@machineB:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/
        root@machineC:/# sudo mount -t nfs 10.108.24.132:/opt/exhibitor/conf /opt/exhibitor/conf/

    Does my /etc/exports file look good? I have the same content on all three machines. Also, are there any logs related to NFS that I can check for clues? Any idea what I am doing wrong here?

    UPDATE: So my /etc/exports file would now be like this on all three machines:

        # /etc/exports: the access control list for filesystems which may be exported
        #               to NFS clients. See exports(5).
        /opt/exhibitor/conf/ 10.108.24.132(rw)
        /opt/exhibitor/conf/ 10.108.24.133(rw)
        /opt/exhibitor/conf/ 10.108.24.134(rw)

    Just a quick check - the IP address I am taking for each machine comes from ifconfig:

        root@machineB:/# ifconfig
        eth0  Link encap:Ethernet  HWaddr 00:50:56:ad:5b:a7
              inet addr:10.108.24.133  Bcast:10.108.27.255  Mask:255.255.252.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:5696812 errors:0 dropped:12462 overruns:0 frame:0
              TX packets:5083427 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:7904369145 (7.9 GB)  TX bytes:601844910 (601.8 MB)

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:187144 errors:0 dropped:0 overruns:0 frame:0
              TX packets:187144 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:24012302 (24.0 MB)  TX bytes:24012302 (24.0 MB)

    So the IP address I am taking for machineB is 10.108.24.133.
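
    Worth noting as a likely culprit (hedged, but it matches the symptom): per exports(5), wildcard patterns like 10.108.24.* match hostnames, not IP addresses, which produces exactly this "access denied by server" error. A sketch using a CIDR range instead - 10.108.24.0/22 matches the 255.255.252.0 netmask shown above:

        echo '/opt/exhibitor/conf 10.108.24.0/22(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra            # re-export everything
        showmount -e 10.108.24.132   # confirm the new export is visible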

    Read the article

  • Root folder PHP scripts not running in nginx

    - by Thermionix
    Nginx with PHP-FPM on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

        root /var/www;
        include php-fpm.conf;
        location ~ /\. {
            access_log off;
            log_not_found off;
            deny all;
        }

    php-fpm.conf:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            include fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param QUERY_STRING      $query_string;
        fastcgi_param REQUEST_METHOD    $request_method;
        fastcgi_param CONTENT_TYPE      $content_type;
        fastcgi_param CONTENT_LENGTH    $content_length;
        fastcgi_index index.php;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME   $document_root$fastcgi_script_name;
        #fastcgi_param SCRIPT_FILENAME  $request_filename;
        fastcgi_param SCRIPT_NAME       $fastcgi_script_name;
        fastcgi_param REQUEST_URI       $request_uri;
        fastcgi_param DOCUMENT_URI      $document_uri;
        fastcgi_param DOCUMENT_ROOT     $document_root;
        fastcgi_param SERVER_PROTOCOL   $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE   nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR       $remote_addr;
        fastcgi_param REMOTE_PORT       $remote_port;
        fastcgi_param SERVER_ADDR       $server_addr;
        fastcgi_param SERVER_PORT       $server_port;
        fastcgi_param SERVER_NAME       $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
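
    For comparison, a minimal known-good layout (a sketch, not a diagnosis; the listen directive and socket path are assumptions and must match the existing vhost and PHP-FPM pool) keeps the php location directly inside the server block so it applies at every depth, including the root:

        server {
            listen 443 ssl;              # assumed; match your existing vhost
            server_name example.net;
            root /var/www;
            index index.php;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.socket;
                include fastcgi_params;  # must set SCRIPT_FILENAME, as above
            }
        }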

    Read the article

  • Webmaster Tools showing 404 for non-existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. That URL doesn't exist, but domain.com/xyz/index.html does. The "linked from" pages all show proper links to "/xyz/index.html". The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to have Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
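
    If the site is on Apache (an assumption; adjust for other servers), one way to quiet the report is to make the guessed folder URLs resolve instead of 404, e.g. with a 301 in .htaccess pointing each real directory at its index page:

        RewriteEngine On
        # send /xyz/ (any real directory requested without a file) to its index page
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.+?)/?$ /$1/index.html [R=301,L]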

    Read the article

  • Sharing a Samba folder with root access

    - by Industrial
    Hi everyone, I have a staging server on my network running Ubuntu Server 10.10, which is my main development area. As I need to access the files in the Apache root from other computers on the network, I have set up Samba with the following settings:

        [www]
        comment = Apache root www
        path = /var/www
        writable = yes
        force user = root
        force group = root

    On the host computer, running Ubuntu 10.10 desktop, I am trying to mount the share with a bash file that looks like this:

        #!/bin/bash
        sudo mount -t cifs //192.168.1.5/www /media/www/ -o username=myusername,password=mypassword,rw,iocharset=utf8,file_mode=0777,dir_mode=0777

    What happens is that I get

        mount error(13): Permission denied
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    thrown in my face whilst trying to execute the mount. I've done exactly the same, with exactly the same smb.conf and mount bash file, on another computer in my network, but this one just won't work. What am I doing wrong? I am running out of ideas.
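
    One thing to rule out (a hedged guess): the credentials passed to mount.cifs must correspond to a Samba account on the server, which is separate from the Unix account. A sketch for the server side, plus a retry on the client:

        # on the server: register the user with Samba (the matching Unix account must already exist)
        sudo smbpasswd -a myusername
        sudo service samba restart   # on newer releases the job is smbd

        # on the client: retry the mount with the freshly registered credentials
        sudo mount -t cifs //192.168.1.5/www /media/www/ -o username=myusername,password=mypassword,rw,iocharset=utf8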

    Read the article

  • Fuji camera "mounts" but folder not visible in Dolphin after Kubuntu 13.10 upgrade

    - by user207207
    The Fuji camera mount is reported in attached devices but is not visible in Dolphin after the Kubuntu 13.10 upgrade. I have reinstalled the driver and tried a few other suggestions given for other camera mounts failing on previous Ubuntu upgrades. I have already spent a couple of hours trying to get my photos off the camera - very annoying. It worked perfectly in 11.04, 11.10, 12.04, 12.10 and 13.04.

        $ dmesg | tail; lsusb; lsb_release -a
        [ 6181.858786] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90009400000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396236] CPUM: APIC 00 at 00000000fee00000 (mapped at ffffc90000c6a000) - ver 0x80050010, lint0=0x10700 lint1=0x00400 pc=0x00400 thmr=0x10000
        [17261.396239] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90000c72000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396241] CPUM: APIC 02 at 00000000fee00000 (mapped at ffffc90000c70000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [17261.396255] CPUM: APIC 01 at 00000000fee00000 (mapped at ffffc90000c6e000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
        [32456.884907] usb 2-5: new high-speed USB device number 2 using ehci-pci
        [32457.654046] usb 2-5: New USB device found, idVendor=04cb, idProduct=01e8
        [32457.654050] usb 2-5: New USB device strings: Mfr=0, Product=2, SerialNumber=3
        [32457.654052] usb 2-5: Product: Digital Camera
        [32457.654053] usb 2-5: SerialNumber: 4C3230302020091117CAA59WP18548
        Bus 002 Device 002: ID 04cb:01e8 Fuji Photo Film Co., Ltd
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 003: ID 2013:024f PCTV Systems nanoStick T2 290e
        Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 13.10
        Release:        13.10
        Codename:       saucy

        $ sudo apt-get install gvfs-bin
        $ gvfs-mount gphoto2://[usb:002,002]
        Error mounting location: Error initializing camera: -108: No such file or directory

    I have reported a bug against Dolphin, which has been transferred to Solid. Further information: I ran solid-hardware list details:

        udi = '/org/kde/solid/udev/sys/devices/pci0000:00/0000:00:04.1/usb2/2-5'
          parent = '/org/kde/solid/udev'  (string)
          vendor = '04cb'  (string)
          product = 'Digital Camera'  (string)
          description = 'Camera'  (string)
          Block.major = 189  (0xbd)  (int)
          Block.minor = 137  (0x89)  (int)
          Block.device = '/dev/bus/usb/002/010'  (string)
          Camera.supportedProtocols = {'ptp'}  (string list)
          Camera.supportedDrivers = {'gphoto'}  (string list)

    I still can't get my photos off, though I can see the folders using the Gimp menu. If anyone has any ideas, I'm willing to try them.
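
    A possible fallback while the Dolphin/Solid bug stands (a sketch; it assumes the gphoto2 command-line tool can still talk to the camera even though the GVFS mount fails, which the working Gimp import suggests):

        sudo apt-get install gphoto2
        gphoto2 --auto-detect      # should list the Fuji on usb:002,002
        gphoto2 --get-all-files    # downloads every file into the current directory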

    Read the article

  • Copying VBoxAdditions to the /usr/share/virtualbox folder

    - by Joe
    Since for some reason VBox does not find the Additions on the internet, I was trying to install them in the Ubuntu directory where VBox looks for them, which is /usr/share/virtualbox, but I am denied permission to do so. Any way around it? I am relatively new to Ubuntu (I know how to use the GUI, but am still learning how to talk to the machine properly, so many things will be new to me; I used to be a power user/analyst for MS Windows, 98-Vista, so I'm not a PC newbie, but I'd still call myself a Linux newbie). Any suggestion is more than welcome! Thanks, Joe
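
    Since /usr/share/virtualbox is root-owned, the copy just needs sudo from a terminal; a sketch (the ISO file name, version, and download location are assumptions - adjust to wherever the Guest Additions ISO was saved):

        sudo cp ~/Downloads/VBoxGuestAdditions_4.1.12.iso /usr/share/virtualbox/VBoxGuestAdditions.iso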

    Read the article

  • xgettext output to specific folder

    - by John
    I am new to using the xgettext command, so I don't know what I am doing wrong. I put the command

        xgettext -n *.php -o --output='/home/public/sample'

    in my script, but I get an error:

        xgettext: cannot create output file "--output=/home/public/sample": No such file or directory

    But when I run xgettext -n *.php, a messages.po file gets created in my current directory! Is there a way to specify the location where the messages.po file is created?
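
    The likely culprit: -o already introduces the output file, so xgettext took the literal string --output=/home/public/sample as the file name, and no such path exists. Pass one option or the other, pointing at a file inside an existing directory:

        xgettext -n *.php -o /home/public/sample/messages.po
        # or, equivalently:
        xgettext -n *.php --output=/home/public/sample/messages.po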

    Read the article

  • Synchronize folder on network, preserving hard links

    - by Waleed Hamra
    I have a few computers using Windows XP Pro, and I want to synchronize/back up a folder from one machine to another. So far it's been a simple problem, and I've used FreeFileSync for such operations with very satisfactory results. But this all changes when hard links come into play. Today's folder contains lots of hard links; such backup programs treat hard links as multiple independent files and copy them as such, greatly increasing the folder size on the destination and defeating the purpose of using all these hard links in the first place. It gets more complicated when we take into consideration the fact that network shares on Windows DON'T expose hard-linking facilities, meaning that running a hard-link-aware tool like rsync with --hard-links against a share will be of no use. So my question: how can I back up my folder to the other computer while preserving hard links? I don't mind installing 3rd-party tools to do it, as obviously the standard Windows shares approach won't work... I am guessing there might be some tool that can be installed on both machines and works in a server/client mode? Does anyone have any idea how to do this?
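
    One approach that fits the server/client guess (a sketch; cwRsync or a Cygwin rsync on both machines, plus an SSH service such as copSSH on the destination, are assumptions): when a single rsync process sees both ends of the transfer, -H / --hard-links can re-create the links on the destination - something that never works through a mounted network share.

        # run from the source machine's cwRsync/Cygwin shell
        rsync -aH /cygdrive/c/data/ user@backup-host:/cygdrive/d/backup/data/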

    Read the article

  • .htaccess ReWrite wildcard folder paths from host

    - by JHuangweb
    My desired result is to rewrite requests for a file down to the root /, from any number N of path levels. For example: www.host.com/a/b/c/e/f/g/images/1.jpg, where a~g are not always present. Result: www.host.com/images/1.jpg. This is what I have so far, which maps www.host.com/a/images to www.host.com/images:

        RewriteRule ^a\/images/$ images/$1 [L]

    What I need is a wildcard in front of /images/, something like:

        RewriteRule ^*/images/$ images/$1 [L]

    How can I do this correctly in .htaccess?
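
    The pattern needs a real quantifier and a capture group ($1 in the rules above has nothing to refer to). A sketch; requiring at least one leading folder also keeps /images/... itself from being rewritten in a loop:

        RewriteEngine On
        # /a/b/c/images/1.jpg -> /images/1.jpg, at any depth
        RewriteRule ^.+/images/(.+)$ /images/$1 [L]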

    Read the article

  • How to add a writable folder to the PHP document root on Linux

    - by Ron Whites
    We are building an example bash script for using our PHP test-coverage tool on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default uses a small PHP example source to show the user how our GUI and text report display the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user, to automate the test-coverage display of their own PHP code. The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" read-only access. So in order to add our own folder as /var/www/SDTestCoverage, we must change /var/www to "drwxrwxrwx" read-write access. So it seems our script (at least on Ubuntu) will need to:

        1. acquire and save the /var/www permissions, then
        2. sudo chmod 777 /var/www                  (make the document root writable)
        3. mkdir -p /var/www/SDTestCoverage         (create our folder under the document root)
        4. sudo chmod 777 /var/www/SDTestCoverage   (make our subfolder writable)
        5. and finally restore the /var/www permissions.

    Thanks, and our questions are: 1. Is this the standard way (on Ubuntu) to add a writable folder under the PHP document root? 2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
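
    A narrower alternative (a sketch assuming Apache runs as www-data, the Ubuntu default; other distributions use apache or httpd) avoids ever opening /var/www itself to everyone, and removes the need to save and restore the parent directory's permissions:

        sudo mkdir -p /var/www/SDTestCoverage             # root can create it without loosening /var/www
        sudo chown www-data:www-data /var/www/SDTestCoverage
        sudo chmod 775 /var/www/SDTestCoverage            # /var/www itself stays rwxr-xr-x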

    Read the article

  • /build folder used by PEAR

    - by Paul
    I've just noticed a root directory (/build) which I can't seem to find any information about. It looks like it's some sort of staging ground for PEAR (PHP). There are only two folders in it, for different PHP versions, and each of those contains a few PEAR tar files I've installed (via the PEAR command line). I'm really only asking this question because I find it strange that PEAR (and only PEAR) would create its own root directory to store files. Is this normal? Does Ubuntu provide a /build directory for applications to use?

    Read the article

  • SEO - folder or file [closed]

    - by ErmSo
    Possible duplicate: Should I use a file extension or not? I'm creating a website with a number of pricing options. Each price plan has its own page, and there is also a comparison page. As far as SEO is concerned, which of the following is better, or does it not make a difference?

        Option one - folders:
        /pricing/plans
        /pricing/plans/free

        Option two - files:
        /pricing/plans.php
        /pricing/free-plan.php

    Read the article

  • See number of SVN Checkins per folder

    - by Farseeker
    I have a very large SVN repository (a working copy of several GB) that has just reached its 20,000th checkin. As a bit of an interesting statistic for our team (and to partly celebrate our 20,000th checkin), I'd like to make a graph showing which folders in the repository have had the most checkins. Is there any way to do this? We mostly use SVN clients integrated into our IDEs, plus TortoiseSVN, but I'm willing to get other tools for this one-off thing.
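
    A rough command-line sketch (the repository URL is a placeholder): svn log -qv prints every changed path for every revision, so tallying the top-level component of each path gives a per-folder checkin count that's ready for graphing.

        svn log -v -q http://svn.example.com/repo \
          | grep '^   [AMDR] /' \
          | awk -F/ '{print $2}' \
          | sort | uniq -c | sort -rn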

    Read the article

  • Export images from a SQL Server Table to a Folder with SSIS

    Can I export images from SQL Server to a file in Windows? What SQL Server options are available to do so? Check out this tip to learn more.

    Read the article

  • Recover deleted folder from bookmarks bar?

    - by OverTheRainbow
    I googled for this but didn't find an answer. I removed a folder from Google Chrome's bookmarks bar. Chrome says nothing when doing this, and I assumed it wouldn't actually delete the data from the Bookmarks manager, just the folder in the bookmarks bar. Turns out I was wrong, and now I've lost hundreds of URLs. I closed and restarted Chrome since then, so the data is apparently no longer on disk. Since Google Sync is on by default, it says I have "536 bookmarks"; I installed Chrome on another computer and logged on to Google... but the folder is still gone. I can't believe Chrome doesn't prompt the user with an obvious message for something this important. Is there somehow a way to recover a folder removed from the bookmarks bar? Thank you. Edit: Amazingly, Chrome 1) doesn't provide a way to remove an item from the bookmarks bar without also deleting it from the bookmarks list, and 2) doesn't even warn the user of the consequences when doing so! The only way to recover the data is: if you haven't closed the browser yet, make a backup of the Bookmarks file, close the browser, replace the now-leaner Bookmarks file with the previous version, and restart Chrome; if you have closed it, recover the file from your backup. You did back up that file, right? ;-)
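
    One more avenue worth checking before giving up (a sketch for Linux; on Windows the profile lives under %LOCALAPPDATA%\Google\Chrome\User Data\Default): Chrome keeps a one-generation backup of the bookmarks file next to the live one, which may still predate the deletion if Chrome hasn't rewritten it since.

        # with Chrome fully closed:
        cd ~/.config/google-chrome/Default
        cp Bookmarks Bookmarks.broken    # keep the current state, just in case
        cp Bookmarks.bak Bookmarks       # roll back to the last automatic backup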

    Read the article

  • Finding files with bash and copying to another location while reducing folder depth

    - by Kevin F
    I'm trying to recover a mate's hard drive. There is no structure whatsoever, so music and images are everywhere, but in named folders, sometimes five folders deep. I've managed to write a one-liner that finds the files and copies them to a mounted drive, but it preserves the file structure completely. What I'm after is a bit of code that searches the drive, copies to another location, and keeps just the parent folder containing the mp3/jpg files rather than the complete path. The other issue I have is that the music is laid out as /folder/folder/folder/Artist/1.mp3..2.mp3..10.mp3 etc., so I have to preserve the folder 'Artist' to give him any hope of finding his tracks again. What I have working currently:

        find /media/HP/ -name *.mp3 -fprintf /media/HP/MUSIC/Script.sh 'mkdir -p "/media/HP/MUSIC/%h" \n cp "%h/%f" "/media/HP/MUSIC/%h/"\n'

    I then run Script.sh and it does all the copying. Many thanks.
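
    A sketch of the flattened copy (untested; paths follow the one-liner above): use the immediate parent of each mp3 as the destination folder, so .../Artist/1.mp3 lands in MUSIC/Artist/1.mp3 no matter how deep it started.

        find /media/HP/ -name '*.mp3' -not -path '/media/HP/MUSIC/*' -print0 |
        while IFS= read -r -d '' f; do
            parent=$(basename "$(dirname "$f")")      # e.g. "Artist"
            mkdir -p "/media/HP/MUSIC/$parent"
            cp -n "$f" "/media/HP/MUSIC/$parent/"     # -n: don't clobber duplicate track names
        done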

    Read the article

  • Two Finder windows next to each other, like the Ubuntu F3 shortcut

    - by RussellHarrower
    OK, so at work I use Ubuntu, and it's great for copying files from one folder to another without having two file manager windows open. What I would like is to have that function on my Mac, and I am wondering if anyone knows how to do this. I understand Ubuntu is Linux, and that the Mac, being Unix-based, shares many of the same features, but it is still a Mac - so the feature may not be there, but if it is, it would help me a lot, as I am always moving files around servers and systems.
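
    One option worth trying (a sketch; it assumes Homebrew is installed, and Midnight Commander is a third-party dual-pane file manager rather than a Finder feature):

        brew install mc   # Midnight Commander, a dual-pane file manager
        mc                # F5 copies from the active pane to the other, much like the Ubuntu F3 split view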

    Read the article
