Search Results

Search found 45804 results on 1833 pages for 'large files'.

  • Inheriting file ownership on linux

    - by John Hunt
    We have an ongoing problem here at work. We have a lot of websites set up on shared hosts; our CMS writes many files to these sites and allows users of the sites to upload files, etc. The problem is that when a user uploads a file on the site, the owner of that file becomes the webserver, which prevents us from being able to change permissions etc. via FTP. There are a few workarounds, but really what we need is a way to set a sticky owner, if that's possible, on new files and directories that are created on the server. E.g., rather than PHP writing the file as user apache, it takes on the owner of the parent directory. I'm not sure if this is possible (I've never seen it done). Any ideas? We're obviously not going to get a login for apache to the server, and I doubt we could get into the apache group either. Perhaps we need a way of allowing apache to set at least the group of a file; that way we could set the group to our FTP user in PHP and set 664 and 775 for any files that are written? Cheers, John.
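
    For what it's worth, the workaround proposed at the end (have the web process hand each new file to a group the FTP user belongs to, then loosen the mode to 664/775) boils down to two calls. A minimal sketch, in Python purely for illustration since the CMS language isn't the point; the group name "ftpusers" is a made-up placeholder, and the web server user has to actually be a member of that group (or be root) for the group change to be allowed:

        import grp
        import os

        def share_with_ftp_user(path, group_name="ftpusers"):
            # Look up the numeric gid of the group the FTP account belongs to.
            gid = grp.getgrnam(group_name).gr_gid
            # Change only the group; -1 leaves the owning user untouched.
            os.chown(path, -1, gid)
            # 775 for directories, 664 for regular files, as proposed in the question.
            os.chmod(path, 0o775 if os.path.isdir(path) else 0o664)

    The same two calls exist in PHP as chgrp() and chmod().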

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php, being served from this directory: public_html/cms/index.php. In public_html I have this .htaccess rule:

        RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now if I reference the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 is redirected to http://www.mysite.com/?q=node/1. How can I make this happen?

    EDIT: Also, in the .htaccess file supplied with Drupal (the CMS), this is written. I've tried enabling it, but it doesn't seem to have any effect.

        # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
        # VirtualDocumentRoot and the rewrite rules are not working properly.
        # For example if your site is at http://example.com/drupal uncomment and
        # modify the following line:
        # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant.

        # Block access to "hidden" directories whose names begin with a period.
        RewriteRule "(^|/)\." - [F]

        # Strip cms folder from url
        RewriteRule (.*) cms/$1

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
            # Serve correct encoding type.
            Header append Content-Encoding gzip
            # Force proxies to cache gzipped & non-gzipped css/js files separately.
            Header append Vary Accept-Encoding
          </FilesMatch>

  • Linux NFS create mask and force user equivalent

    - by Mike
    I have two Linux servers:

        fileserver: Debian 5.0.3 (2.6.26-2-686), Samba version 3.4.2
        apache: Ubuntu 10.04 LTS (2.6.32-23-generic), Apache 2.2.14

    I have a number of Samba shares on fileserver so that I can access files from Windows PCs. I am also exporting /data/www-data to the apache server, where I have it mounted as /var/www. The setup is okay, except for when I come to create files on the NFS mount: I end up with files that cannot be read by Apache, or which cannot be modified by other users on my system. With Samba, I can specify force user, force group, create mask and directory mask, and this ensures that all files are created with suitable permissions for my Apache web server. I can't find a way to do this with NFS. Is there a way to force permissions and ownership with NFS, or am I missing something obvious? Although I've spent quite a bit of time with Linux, and am weaning myself off Windows, I still haven't quite got to grips with Linux permissions... If this is not the right way to do things, I am open to alternative suggestions.

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each of them has a directory under /tmp where he is free to do whatever he wants: store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full in a week. There can be plenty of files; I've dealt with it with paths like /tmp/[a-e]*/* and so on and lived happily for a while, but as they try new cool stuff on the machine the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files hit the glob. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it's going to take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. Has anyone heard of a logrotate alternative which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
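
    A rough sketch of the kind of well-commented find | xargs replacement being described, written in Python for readability. The root path, the one-week age threshold and the compress-then-truncate policy are all assumptions, and anything that is not a regular file (sockets, FIFOs, symlinks) is skipped so it cannot trip over the things logrotate chokes on:

        import gzip
        import os
        import shutil
        import stat
        import time

        ROOT = "/tmp"                      # the developers' playground
        MAX_AGE = 7 * 86400                # assumed policy: rotate files untouched for a week
        SUFFIX = time.strftime("-%Y%m%d.gz")

        def rotatable(path):
            st = os.lstat(path)
            # Only plain files: sockets, FIFOs and symlinks are ignored entirely.
            return stat.S_ISREG(st.st_mode) and (time.time() - st.st_mtime) > MAX_AGE

        for dirpath, _dirs, files in os.walk(ROOT):
            for name in files:
                path = os.path.join(dirpath, name)
                if name.endswith(".gz") or not rotatable(path):
                    continue
                # Keep a compressed copy next to the original...
                with open(path, "rb") as src, gzip.open(path + SUFFIX, "wb") as dst:
                    shutil.copyfileobj(src, dst)
                # ...then truncate the original in place, so processes that still
                # have the log open keep a valid file handle.
                with open(path, "w"):
                    pass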

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags are in a random order within their respective parent tag. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value in name="". Some parent tags only appear once and don't have a name="" parameter; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google. Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip Update: Follow-up question here.
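
    Even though the question explicitly prefers a ready-made tool over development, the recursive sort being described is small enough to sketch for reference. A rough version in Python using only the standard library; the file names are hypothetical, and children are ordered by their name="" attribute when present, otherwise by tag name, as described above:

        import xml.etree.ElementTree as ET

        def sort_key(elem):
            # Sort by the name="" attribute when it exists, otherwise by the tag itself.
            return (elem.get("name") is None, elem.get("name") or elem.tag)

        def sort_recursively(elem):
            # Sort the children of every element, depth first.
            for child in elem:
                sort_recursively(child)
            elem[:] = sorted(elem, key=sort_key)

        tree = ET.parse("config.xml")            # hypothetical input file
        sort_recursively(tree.getroot())
        tree.write("config.sorted.xml")          # hypothetical output file

    Note that this loads the whole document into memory, which is fine for a file of 125000 lines.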

  • PhpMyAdmin 500 Internal Server Error on Nginx/php5-fpm/Debian

    - by ThrownAway
    I downloaded phpMyAdmin a while ago and am having a hard time getting it to work. Requesting localhost/phpmyadmin gives a 500 Internal Server Error response, but there's nothing in the error log. These are the steps I took:

        1. Downloaded the newest phpMyAdmin and unzipped all the files to /var/vhosts/phpmyadmin/www/
        2. Created a new php5-fpm pool and a server block in nginx
        3. Changed the owner of all the files inside phpmyadmin/
        4. Tried requesting localhost/phpmyadmin and localhost/phpmyadmin/setup

    phpMyAdmin is running inside a chroot, and all the files are owned by www-data, so it shouldn't be a permission error. I made a new PHP file in the same directory to produce an error and it logs just fine, so it has to be just phpMyAdmin. Here's my php5-fpm pool:

        [phpmyadmin]
        listen = /var/vhosts/phpmyadmin/tmp/.php.sock;
        user = www-data
        group = www-data
        chroot = /var/vhosts/phpmyadmin/
        chdir = /
        php_admin_value[error_reporting] = E_ALL
        php_admin_value[error_log] = error.log
        php_admin_flag[log_errors] = on
        php_admin_flag[display_errors] = on
        php_value[session.save_handler] = files
        php_value[session.save_path] = /tmp

    And the nginx server block:

        server {
            listen 80;
            root /var/vhosts/phpmyadmin/www;
            server_name pma.domain;

            location / {
                try_files $uri $uri/ /index.html;
                autoindex on;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_pass unix:/var/vhosts/phpmyadmin/tmp/.php.sock;
                fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param DOCUMENT_ROOT /www;
            }

            index index.html index.htm index.php;
            try_files $uri $uri/ =404;
        }

    Any ideas what could be wrong? Why is it not producing any errors even though I've forced them to be on?

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users, including root, on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are doing.
        It's much better to create a custom.sh shell script in /etc/profile.d/ to make
        custom changes to your environment, as this will prevent the need for merging
        in future updates.

    So I went ahead and created the file /etc/profile.d/myapp.sh with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache WSGI application, or files created with sudo, still default to 644 permissions:

        $ touch newfile (as root):                     Result = 664 (Works)
        $ sudo touch newfile:                          Result = 644 (Doesn't work)
        Files created by Apache WSGI app:              Result = 644 (Doesn't work)
        Files created by Python's RotatingFileHandler: Result = 644 (Doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file?

    UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
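
    As a reminder of the arithmetic involved (general umask behaviour, not specific to this box): a program asks for a mode when it creates a file, typically 0666, and the kernel clears whatever bits are set in the process's umask. The 644 results above are exactly what the default umask of 022 produces, which suggests that sudo, Apache and the WSGI process simply never source /etc/profile.d/. A tiny illustration in Python:

        def creation_mode(requested=0o666, umask=0o022):
            # The kernel clears the umask bits from the mode the program requests.
            return requested & ~umask

        print(oct(creation_mode(0o666, 0o002)))   # 0o664 -- what the profile.d umask gives a login shell
        print(oct(creation_mode(0o666, 0o022)))   # 0o644 -- the default umask, seen under sudo/Apache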

  • Productive Writing Software

    - by Nick Retallack
    What do you guys use to write a personal journal, notes, and reference information that you want to group and search through later? I'm not one of those crazy people who likes to share their journal with the world. Some things I like to keep to myself. It's quite nice to have a reference. Preferences for cross-platform stuff (windows, mac, linux) Some Background For Windows, there's a program called Yeah Write, which really changed my expectations about how easy it should be to start writing something. You don't open or close files -- just click an empty slot and start writing, or click a filled slot to work on another file. And you can organize things into categories by creating tabs. Now that I carry a Macbook, I've just been using TextEdit. I like it because I can't lose my work when my computer crashes: it auto-saves everything and restores it when I launch TextEdit again. But I make a mess, leave thirty open files, and save everything to one directory with no tags for easy grouping. Saving files and selecting the directory they should be in is too clunky to do in the middle of a meeting. And organizing files into folders in the Finder is a pain, since there's no tree view like on Windows. Sure, I'm lazy, but I miss Yeah Write. I'm not going to get a windows laptop just to use it though. Since the laptop I take my notes on is a mac, I'm gonna be biased toward mac solutions.

  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder that is mapped on server Alpha as "X:". On server Alpha there is a .vbs script that runs, takes some files from the local drive and copies them to the X: drive. My issue is that if no user logs on to server Alpha for a long time, it seems like the TCP connection underneath the mapped drive times out, and the .vbs script fails to copy the files. As soon as I log on to the Alpha server with Remote Desktop, the .vbs copies the files successfully. I have made many tests, using log files to check what was happening, and I found no way to refresh the connection and let the .vbs copy the files unattended. I always have to log on to the Alpha server with Remote Desktop to refresh the connection and let the .vbs copy the files without issues. What can I do to avoid having to log in every time? The .vbs script runs 3 times a day and this is very annoying. I do not have control over server Beta, so I cannot change anything there, and I am very limited in the changes I can make on server Alpha (I cannot change the registry and that sort of thing).

  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro has made one of my USB storage devices inoperable. The data on that USB device is lost, but other devices, such as another USB drive and a USB keyboard, are unaffected. My friend usually triggers this problem by having at least two devices plugged in, typically thumb drives/USB flash drives; once a second flash drive is plugged in, it becomes unrecognized. I have only two USB ports, and at first I thought a port was loose when I connected two USB devices. Later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS. Before the USB device became unrecognized I had deleted these files, and my friend had done the same. Now I don't want to test whether the next USB device will become unrecognized, and I won't delete any hidden system files inside the flash drives again. But I really want to know why this happens. Can I delete these hidden files when the drive is connected only to a virtual machine (Vista)? I used to delete all the useless hidden files from my USB flash drives. Any suggestions on how to prevent this, or alternative suggestions to fix the problem without data loss, would be much appreciated.

  • Exchange 2010 UR3 - customizing OWA logon page

    - by STGdb
    I have an Exchange 2010 UR3 deployment that I need to customize the OWA logon page for. I've created a new LGNTOPL.GIF file to replace the existing one in the folder: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.158.1\themes\resources” When I bring up OWA, I still get the original “Outlook Web App” logo. I’ve searched and found a couple of other instances of LGNTOPL.GIF in the directories: “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.123.3\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.146.0\themes\resources” “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\Current\themes\resources” I’ve replaced the LGNTOPL.GIF file in each of the above directories but got the same results. I’ve tried clearing my browser cache and even using multiple browsers from multiple PC’s but the same results. I’ve even tried making my GIF file the same pixel size as the original LGNTOPL.GIF logo but still the same results. I’ve tried restarting IIS on the CAS server and restarting the server but same results. Has something changed with Exchange 2010 UR3 when trying to customize OWA? I don't see anything documented about any change to OWA customization. Thanks

  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a dropbox folder to my C drive by just running a portable file? Extra background information because I know you guys hate it when you don't get the entire situation: I go back to University in fall and I need a new storage solution. I decided to use DropBox to sync my tiny University files (< 5 MB). I need to access these files from 4 machines: Windows 7 Home machine Windows 7 University A machine Windows 7 University B machine Android tablet 1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3 but those machines are not mine. There is an easy fix. Run a portable version of the DropBox syncer on a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable DropBox syncer off the internet. But where will it to store the files? A temporary directory on the C drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable DropBox syncer will notice that the temporary directory is empty on each new PC I use and instead of downloading my DropBox folder to the machine, it syncs the other way around i.e. it deletes my entire DropBox. The solution is to mirror my DropBox onto the temporary directory before running the DropBox syncer.
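
    The "mirror my Dropbox onto the temporary directory before running the syncer" step described above is just a one-way copy of newer files. A minimal sketch of that step in Python, with entirely hypothetical paths for where the copy of the Dropbox folder lives and where the portable syncer expects its folder:

        import os
        import shutil

        def mirror(src, dst):
            """One-way mirror: make dst contain everything in src; never touch src."""
            for dirpath, _dirs, filenames in os.walk(src):
                rel = os.path.relpath(dirpath, src)
                target_dir = os.path.normpath(os.path.join(dst, rel))
                os.makedirs(target_dir, exist_ok=True)
                for name in filenames:
                    s = os.path.join(dirpath, name)
                    d = os.path.join(target_dir, name)
                    # Copy only new or updated files so repeated runs stay cheap.
                    if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                        shutil.copy2(s, d)

        # Hypothetical paths: a copy of the Dropbox folder (e.g. downloaded beforehand)
        # and the temporary directory the portable syncer is pointed at.
        mirror(r"D:\DropboxCopy", r"C:\Temp\Dropbox")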

  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there which was only on there. I realise how idiotic this makes me. (So, formatting is not an option.) Things I've tried/information I've gathered: Windows Explorer will recognise the drive itself. However, it will not access most directories therein (and will sometimes crash when exploring). I can access all of the directories through the command line, but the dir command will often report that it can't read any files in most of the directories. The situation was similar when I hooked it up to an Ubuntu machine: the file explorer crashed, but I could access directories - but not files in those directories - via terminal commands. Several files I tried to copy out either resulted in an I/O error being reported or resulted in the command line crashing. The Disk Management utility on Windows reports a healthy disk formatted as NTFS and not RAW. It also indicates the correct amount of space used up and its capacity (so it seems that the files are not deleted). I've tried to run chkdsk, but that hangs on Step 2 (checking indexes) at 74%. Step 1 reported no bad sectors. I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour). I should also note that the disk doesn't seem to be spinning smoothly; it seems to be chopping back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated. Update: It would seem the problem has taken a turn for the worse. The external hard drive now shows up on my computer as a local disk and is not mountable by Linux.

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off of network drives; but 2 of our 3 Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows becomes stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one the bug hits. To remedy this we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time it will take to copy. These aren't big files, mostly Excel sheets less than 500 KB; these operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.

  • Deleting old system folders from a drive that is no longer the windows installation drive

    - by grenade
    I dropped my laptop and was no longer able to boot. There were error messages about a corrupt boot record. Replacing the hard drive and reinstalling Win 7 was how I dealt with it. The old drive still appears to be good, and I can read and write to it when I connect it as a second drive and mount it as D:. However, if I try to recover the space used by the Windows, ProgramData, Program Files and Program Files (x86) folders by deleting them, I get error messages about needing permission from TrustedInstaller. If I set myself as the owner of the folders and retry the delete, I get error messages about needing permission from myself! Since I'm pretty sure that I have permission from myself to delete the folders, I can only assume that the OS or file system has gotten its panties twisted. I have tried Shift, right-click, Delete from Explorer, and if I run "del /f /s /q D:\Windows" from an admin command prompt I get a succession of "Access is denied" messages as well. How do I delete D:\Windows, D:\ProgramData, D:\Program Files and D:\Program Files (x86) from a drive that is not the Windows installation drive?

  • Searching SharePoint site with Windows Explorer

    - by alexsome
    Every week, I manually backup recent versions of the files on my group's SharePoint site. I open the library in Windows Explorer, search for all files modified in the past week, then copy and paste them to a network location. We need this process because our SharePoint site has a quota that we would easily meet if we had unlimited versions, so we keep a history of older versions on the network. Recently I got an upgrade to my work computer and I am unable to search the site using Windows Explorer. When I run the search for files modified in the last week no results are returned. If I run a search with no criteria on the file library, all the files are found but the "modified on" field is blank. So the search results only have the file and type fields. The new computer has Windows XP, just like the old one did. I hope this makes sense. Does anyone have any clue what the problem could be? I'd be happy to provide more info if necessary. It's bugging me to no end and I'm not even sure where to begin looking - it's either a trivial issue or a very obscure one. Thanks a lot.
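
    As an aside, the weekly routine described above (find everything modified in the past week, copy it to a network location) is easy to script as a stopgap while Explorer search is broken. A rough sketch in Python; the library path (a mapped or WebDAV drive letter) and the destination share are placeholders:

        import os
        import shutil
        import time

        SRC = r"S:\TeamSite\Documents"            # placeholder: the library as opened in Explorer
        DST = r"\\fileserver\backup\sharepoint"   # placeholder: the network location
        CUTOFF = time.time() - 7 * 86400          # "modified in the past week"

        for dirpath, _dirs, filenames in os.walk(SRC):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if os.path.getmtime(src) < CUTOFF:
                    continue
                rel = os.path.relpath(src, SRC)
                dst = os.path.join(DST, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)            # copy2 preserves the modified timestamp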

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to only run if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated, but other files haven't, and the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which when run will tell the load balancer to remove the box from the pool. Then puppet can do the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box. One a staging copy, and when puppet updates that, use a notify action to trigger the removal script, and then copy from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much expect from NFS?
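
    To make the described setup concrete, here is a minimal sketch of the two processes in Python; the paths and the fixed-width record are made up, and the files are assumed to already exist. One detail worth a comment: flush() only hands data to the local OS, and it is os.fsync() that actually forces it out to the NFS server, which is part of why B can see A's writes batched or delayed:

        import os
        import time

        F1, F2 = "/mnt/shared/F1", "/mnt/shared/F2"   # hypothetical paths on the NFS mount

        def writer():
            """Host A: alternately rewrite the head of F1 and F2."""
            n = 0
            with open(F1, "r+b") as f1, open(F2, "r+b") as f2:
                while True:
                    f = f1 if n % 2 == 0 else f2
                    f.seek(0)
                    f.write(b"%010d" % n)     # fixed-width record at the head of the file
                    f.flush()                 # only reaches the local OS buffer...
                    os.fsync(f.fileno())      # ...fsync is what pushes it to the NFS server
                    n += 1
                    time.sleep(0.5)

        def reader():
            """Host B: poll both files faster than A writes and report which one changed."""
            last = {F1: None, F2: None}
            while True:
                for path in (F1, F2):
                    with open(path, "rb") as f:
                        head = f.read(10)
                    if head != last[path]:
                        print(path, "changed:", head)
                        last[path] = head
                time.sleep(0.1)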

  • lamp -- edit PHP file but doesn't change web output -- including die()

    - by Reid W
    The server is a standard Linux server on Amazon Web Services: CentOS 5/Apache/PHP 5.3, no APC. It has worked fine for over a year, but now when I edit some, but not all, PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. svn updating the file in question doesn't help either. The files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate: this has worked fine for a long time. Restarting Apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files, but not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie but am at a complete loss with this, and couldn't find anything on Google either. Any hints would be much appreciated!

  • Grub Installation Failed: Fatal Error ... now what I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner at Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive), and it was running Linux Mint:

        /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
        /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
        /dev/sdb1 - SSD partition 1 - 100 GB (OS)
        /dev/sdb2 - SSD partition 2 - 20 GB (swap)

    The operating system was Linux Mint and was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB partition as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of Elementary OS. Everything went fine with the install until it stated that the GRUB installation failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is. My biggest pain is the files on /dev/sda2; I want to save these before I try something drastic like wiping off /dev/sda completely. So I have the following questions:

        1. Can I use a live CD/USB to save these files? I can see /dev/sda2 but was unable to access the files from the live CD.
        2. Last but not least, how do I fix the main issue here: why could the OS not install GRUB?
        2b. Why is my SSD /dev/sdb and not /dev/sda? Does that have something to do with the fact that my master boot record sits on the HDD (/dev/sda) and not on /dev/sdb?

  • Understanding Netbook Partitions & UNR Installation

    - by Wesley
    Hi all, I have a Samsung N120 netbook (with upgraded 2GB RAM). I'm just looking at the Disk Management right now (in Windows XP) and I'm trying to understand what partition holds what. There is "Local Disk (C:)" which is 40GB, "RECOVERY" (no drive letter) which is 6GB and then "TEMP_PART01 (D:)" which is 103.05GB. XP is installed on Local Disk (C:) and I've only used this hard drive for all my files, etc. Recovery is recovery... probably not removable anyways. Now, what bugs me is the TEMP_PART01 (D:) partition, which contains quite a bit of random junk, such as EULA text documents, an "external installer", UI Wrapper Resource DLLs, a "VC_RED" Windows Installer Package and a few more files. I have no clue what any of it means, but I'm assuming that this was probably stuff that could have been on the Local Disk (C:), along with the WINDOWS, Program Files, and Docs and Settings folder. So, how should I go about this? Should I have kept all my data on D: and left all OS related files/folders on C:? Now, I want to install Ubuntu Netbook Remix. Question is, will this install within Windows, if I want to dual boot it? If not, would I partition D: into two small chunks, one on which I would install UNR? There are basically two questions in here, but it'd be great to get answers for both! Thanks in advance.

  • How to delete a residual Ubuntu directory from Windows?

    - by memo1288
    I'm using Windows 7. After installing (and uninstalling) Ubuntu on my laptop, I found that it left a folder called ".Trash-1000" on my H drive. I cannot remove it: if I try to delete it from Explorer, it says:

        The file name you specified is not valid or too long. Specify a different file name.

    If I try to remove it from the command line, this is what happens:

        H:\>rmdir .Trash-1000 /S /Q
        .Trash-1000\files\Screenshot from 2013-09-24 11:57:32.png - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\files\Screenshot from 2013-09-24 12:03:45.2.png - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\info\Screenshot from 2013-09-24 11:57:32.png.trashinfo - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\info\Screenshot from 2013-09-24 12:03:45.2.png.trashinfo - The filename, directory name, or volume label syntax is incorrect.

    The files mentioned there are the contents of that folder. Using quotes around the folder name yields the same result. Trying to delete any of the sub-folders results in the same error, and trying to remove any of the files inside results in "No such file or directory". As I said before, I no longer have Ubuntu installed. How can I remove this folder?

  • Looking for a powershell script that can pull a file from a set of PC's and FTP

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the file. The naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will all be named the same, the files must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) that shows which PCs were offline. Everything is in one domain and the script will be run from a server with admin creds. Thank you!
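
    The question asks for PowerShell specifically; purely to illustrate the flow being described (ping each PC, move and rename into a staging directory, FTP the batch, log the machines that did not respond), here is a rough sketch in Python. Every concrete name in it, the PC list, the remote path, the FTP host and the credentials, is a placeholder:

        import ftplib
        import os
        import shutil
        import subprocess
        import time

        PCS = ["PC001", "PC002", "PC003"]                 # placeholder computer names
        REMOTE_FILE = r"\\{host}\c$\Reports\daily.dat"    # placeholder path on each PC
        STAGING = r"C:\Staging"                           # the local "source directory"
        FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"   # placeholders

        os.makedirs(STAGING, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        offline = []

        for pc in PCS:
            # 1. Make sure the PC is up (a single ping, as suggested in the question).
            if subprocess.call(["ping", "-n", "1", pc], stdout=subprocess.DEVNULL) != 0:
                offline.append(pc)
                continue
            # 2. Move (not copy) the file into staging, renaming it with a timestamp.
            src = REMOTE_FILE.format(host=pc)
            shutil.move(src, os.path.join(STAGING, f"{pc}-{stamp}.dat"))

        # 3. FTP the whole batch from the staging directory.
        with ftplib.FTP(FTP_HOST, FTP_USER, FTP_PASS) as ftp:
            for name in os.listdir(STAGING):
                with open(os.path.join(STAGING, name), "rb") as fh:
                    ftp.storbinary(f"STOR {name}", fh)

        # 4. Record which PCs did not respond so those files can be fetched manually.
        with open("offline.txt", "w") as fh:
            fh.write("\n".join(offline))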

  • How can I recover my data from a damaged hard drive?

    - by krk
    A few days ago, while I was working in Windows, my laptop was knocked on the side where the hard drive is located. As a result the drive was damaged and I couldn't access the Windows partition; I had to boot the Linux one, which is working without any trouble. I have two partitions formatted with NTFS: the one with Windows on it, and another one intended to store data. I mounted the Windows partition from Ubuntu and I could see all my files, but when I tried to mount the data partition it was impossible; it threw an error saying it couldn't recognize the NTFS partition. I tried to copy the damaged disk onto an external hard drive using the command:

        dd if=/dev/sda of=/dev/sdb conv=noerror,sync

    The progress stopped at 60%, and I was still unable to mount the data partition. Now I'm trying to back up my files using a utility called PhotoRec. The problem is that it recovers my files in a disorderly way; it is all mixed up, and I need my original directory structure, or it will become an endless task to organize the files as they were before. Is there any way I can get my partition back?
