Search Results

Search found 6431 results on 258 pages for 'magic folders'.


  • How To Replace Notepad in Windows 7

    - by Trevor Bekolay
    It used to be that Notepad was a necessary evil because it started up quickly and let us catch a quick glimpse of plain text files. Now, there are a bevy of capable Notepad replacements that are just as fast, but also have great feature sets. Before following the rest of this how-to, ensure that you're logged into an account with Administrator access. Note: The following instructions involve modifying some Windows system folders. Don't mess anything up while you're in there! If you follow our instructions closely, you'll be fine.

    Choose your replacement

    There are a ton of great Notepad replacements, including Notepad2, Metapad, and Notepad++. The best one for you will depend on what types of text files you open and what you do with them. We're going to use Notepad++ in this how-to. The first step is to find the executable file that you'll replace Notepad with. Usually this will be the only file with the .exe file extension in the folder where you installed your text editor. Copy the executable file to your desktop and try to open it, to make sure that it works when opened from a different folder. In the Notepad++ case, a special little .exe file is available for the explicit purpose of replacing Notepad. If we run it from the desktop, it opens up Notepad++ in all its glory.

    Back up Notepad

    You will probably never go back once you switch, but you never know. You can back up Notepad to a special location if you'd like, but we find it's easiest to just keep a backed-up copy of Notepad in the folders where it was originally located. In Windows 7, Notepad resides in:

    C:\Windows
    C:\Windows\System32
    C:\Windows\SysWOW64 (in 64-bit versions only)

    Navigate to each of those directories and copy Notepad. Paste it into the same folder. If prompted, choose to Copy, but keep both files. You can keep your backup as "notepad (2).exe", but we prefer to rename it to "notepad.exe.bak". Do this for all of the folders that have Notepad (2 total for 32-bit Windows 7, 3 total for 64-bit).

    Take control of Notepad and delete it

    Even if you're on an administrator account, you can't just delete Notepad – Microsoft has made some security gains in this respect. Fortunately for us, it's still possible to take control of a file and delete it without resorting to nasty hacks like disabling UAC. Navigate to one of the directories that contain Notepad. Right-click on it and select Properties. Switch to the Security tab, then click on the Advanced button. Note that the owner of the file is a user called "TrustedInstaller". You can't do much with files owned by TrustedInstaller, so let's take control of it. Click the Edit… button. Select the desired owner (you could choose your own account, but we're going to give any Administrator control) and click OK. You'll get a message that you need to close and reopen the Properties window to edit permissions. Before doing that, confirm that the owner has changed to what you selected. Click OK, then OK again to close the Properties window. Right-click on Notepad and click on Properties again. Switch to the Security tab. Click on Edit…. Select the appropriate group or user name in the list at the top, then add a checkmark in the checkbox beside Full control in the Allow column. Click OK, then Yes to the dialog box that pops up. Click OK again to close the Properties window. Now you can delete Notepad, by either selecting it and pressing Delete on the keyboard, or right-clicking on it and clicking Delete. You're now free from Notepad's foul clutches!
    Repeat this procedure for the remaining folders (or folder, on 32-bit Windows 7).

    Drop in your replacement

    Copy your Notepad replacement's executable, which should still be on your desktop. Browse to the two or three folders listed above and copy your .exe to those locations. If prompted for Administrator permission, click Continue. If your executable file was named something other than "notepad.exe", rename it to "notepad.exe". Don't be alarmed if the thumbnail still shows the old Notepad icon. Double-click on Notepad and your replacement should open. To make doubly sure that it works, press Win+R to bring up the Run dialog box and enter "notepad" into the text field. Press Enter or click OK. Now you can allow Windows to open files with Notepad by default with little to no shame! All without restarting or having to disable UAC!
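    For those who prefer the command line, the ownership and permission steps above can be scripted with the built-in takeown and icacls tools. A sketch from an elevated Command Prompt (repeat for System32 and, on 64-bit, SysWOW64; the desktop path is a placeholder):

        :: Take ownership, grant Administrators full control, back up, then swap in
        :: the replacement executable:
        takeown /f C:\Windows\notepad.exe
        icacls C:\Windows\notepad.exe /grant Administrators:F
        ren C:\Windows\notepad.exe notepad.exe.bak
        copy "%USERPROFILE%\Desktop\notepad.exe" C:\Windows\notepad.exe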

    Read the article

  • Backup Your Windows Home Server Off-Site with Asus Webstorage

    - by Mysticgeek
    Windows Home Server lets you back up machines on your network easily. But what about backing up the server data? Today we take a look at ASUS WebStorage for Windows Home Server, which provides you with secure off-site backup for WHS. To use the ASUS WebStorage service you'll need to sign up for a free account. It offers 1GB of free storage, then you can purchase an unlimited backup package for $39.99 for a year subscription. Note: They also offer online storage for individual PCs as well.

    Install ASUS WebStorage for WHS

    Browse to your shared folders on the server, open the Add-Ins folder, and copy over the WHSConnectorSetup2.2.4.088.msi file (link below), then close out of the folder. Now launch the Windows Home Server Console from one of the computers on your network, click Settings, then Add-ins. Under Available Add-ins click the Available tab and you'll see the ASUS WebStorage installer file we just copied over. Click the Install button. Installation kicks off and when it's complete, you'll need to close out of the console and reconnect.

    Using the ASUS WebStorage WHS Connector

    When you reconnect to the WHS Console, scroll over to the ASUS WebStorage icon and click on Settings. Now log into your ASUS account, then select the folders you want to back up to the WebStorage service. Select the radio button next to Enable to initialize the backup process. The backup process begins. You can change which folders are backed up simply by disabling the backup process, unchecking the folder(s), then enabling the backup again.

    ASUS WebStorage Site

    After you have files backed up to the ASUS site, log into your account, and you're presented with an overview of the amount of storage you're using. It also shows what types of files are taking up certain amounts of space. You can browse through your backed-up files and folders. It allows you to share and sync backed-up data as well. Navigate to the file you want and you can easily download it by clicking on it, or share it out by clicking the share link below it. If you choose to share it, you're provided with a link to the file to send out to other users.

    Conclusion

    Users of Windows Home Server have been looking for an inexpensive cloud backup solution for quite some time. There are services such as JungleDisk, KeepVault, Wuala, etc. These services probably do a better job, but can start getting expensive once you start uploading GBs of data. Another disappointment of ASUS WebStorage is that you can only back up your WHS shares (from what we've been able to determine); it's an "all or nothing" type of thing. You cannot go in and select individual files and folders. The initial upload speeds can be a bit slow as well, although that might have something to do with limited upload speeds on the DSL connection we used to test it. Retrieving your data from the ASUS site is a breeze though, and all the data files are organized quite well. The WHS add-in is very easy to install and use. If you're looking for an off-site solution to back up your WHS data, you can test out ASUS WebStorage for free with a 1GB limit. This is good for testing the service and it might be exactly what you're looking for. Other users may want a more advanced solution like KeepVault or CloudBerry, which is a front end for Amazon S3 storage.
    Download ASUS WebStorage WHS Addin. Other WHS off-site backup solutions: CloudBerry, JungleDisk, KeepVault, Wuala.

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
    - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course: nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
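    The layout is easy to check by hand on any SVR4-style archive, since everything but the symbol table is ASCII. A quick sketch (libfoo.a stands in for any archive you have handy; 60 bytes is the size of ar_hdr):

        head -c 8 libfoo.a                               # the magic: "!<arch>" plus a newline
        dd if=libfoo.a bs=1 skip=8 count=60 2>/dev/null  # the ar_hdr of the first member, all ASCII
        ar tv libfoo.a                                   # list members; the '/' special members
                                                         # are hidden from the listing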
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    - The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
      As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.
    - Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.
    - The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems, though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64 bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon, however, to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
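    A sketch of exercising the new path without building a multi-gigabyte archive, using the -S option described above (a.o and b.o are hypothetical objects):

        ar -S -r libtest.a a.o b.o   # force the 64-bit symbol table regardless of size
        ar t libtest.a               # reading works as before; only the hidden
                                     # symbol table member uses the new format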

    Read the article

  • What are ping packets made of?

    - by Mr. Man
    What exactly is in the packets that are sent via the ping command? I was reading a Wikipedia article about magic numbers and saw this: DHCP packets use a "magic cookie" value of '63 82 53 63' at the start of the options section of the packet. This value is included in all DHCP packet types. So what else is in the packets?
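    For reference: an ICMP echo request carries a type (8 for echo request), a code (0), a checksum, an identifier and a sequence number, and then a padding payload; Linux's ping typically stamps a timestamp at the front of that payload. A sketch for inspecting the packets yourself on a Linux box:

        # In one terminal, dump ICMP traffic with hex and ASCII payload:
        sudo tcpdump -i eth0 -X icmp
        # In another, send a single ping with a recognizable pad byte:
        ping -c 1 -p ff 192.0.2.1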

    Read the article

  • Re-partitioning a harddrive without wiping the OS

    - by Johnny W
    Hello. I have a friend who's put himself in that age-old position: his OS partition has turned out to be too small for his needs. He'd really like to be able to repartition his hard drive without formatting it. In the past, Partition Magic would have leapt to mind, but apparently Symantec bought it in 2003 and never updated it (and then officially discontinued it). Is there a "modern day" Partition Magic that everyone uses for desperate situations like this, and that also works under Windows 7? Thanks

    Read the article

  • CHMOD To Prevent Deletion Of File Directory

    - by Sohnee
    I have some hosting on a Linux server and I have a few folders that I don't ever want to delete. There are sub folders within these that I do want to delete. How do I set the CHMOD permissions on the folders I don't want to delete? Of course, when I say "I don't ever want to delete" - what I mean is that the end customer shouldn't delete them by accident, via FTP or in a PHP script etc. As an example of directory structure... MainFolder/SubFolder MainFolder/Another I don't want "MainFolder" to be accidentally deleted, but I'm happy for "SubFolder" and "Another" to be removed!
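    One point worth knowing before reaching for chmod: whether something can be deleted is governed by the write bit on its parent directory, not by the permissions on the item itself. A sketch along those lines (paths follow the example above; "customer" and the parent path are placeholders):

        # The parent of MainFolder is owned by an admin account and is not
        # writable by the customer, so MainFolder can't be deleted or renamed:
        chmod 755 /path/to/parent
        # MainFolder itself stays writable by the customer, so SubFolder
        # and Another can still be created and deleted inside it:
        chown customer /path/to/parent/MainFolder
        chmod 755 /path/to/parent/MainFolder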

    Read the article

  • DFS replication initial step problem

    - by vn
    Heya, I just set up DFS on my network and it's working fine. Now I'm trying to set up DFS-R on a test folder, but at the end of the procedure (all went fine; I selected my 2 folders, the primary folder, the replication topology and such) I get this error message (roughly translated from French): "Unable to define security on the replicated folder. The shared administration folder doesn't exist." I'm also wondering if there's any required security on the folders to replicate so that DFS-R can access them. I was trying to add SYSTEM in the security, but it won't find it/allow me. The folder has many, many files and folders on the primary DFS pointer, but none on the 2nd; I just created it with much the same rights. Note that the primary DFS pointer is on a 2008 server, and the DFS service and the secondary DFS pointer are on a 2008 R2. Any help is very appreciated, thanks.
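    Going by the wording of the error, one thing worth ruling out (a guess, not a confirmed diagnosis): the hidden administrative share for the drive hosting the replicated folder may be missing on one of the members. A quick check from an elevated prompt on each server:

        :: The volume that hosts the replicated folder should expose its
        :: administrative share (C$, D$, ...) in this list:
        net share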

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with git. I recently installed git on my computer(s) and set it up on a NAS server. However, I have several client folders with several project folders per client folder. Each one of these client folders is a giant repo, encompassing every project inside it. What I'm wondering is: is there a way to break this apart? So, for instance: the NAS is my 'origin', and has gitolite installed. On computer1 I have every project folder in a client folder ever created (a clean branch). On computer2 I do not have a checkout of the client branch (because all the projects in that branch are completed and I don't need a working copy of it), but I do have a brand new project folder for that client, "newproject". Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?
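    One common answer is to give each project its own repository on the gitolite server, so a machine only clones what it needs. A sketch, assuming a gitolite remote reachable as git@nas with a repo slot client1/newproject already defined in gitolite.conf (all names are hypothetical):

        cd ~/clients/client1/newproject     # the brand new project folder on computer2
        git init
        git add .
        git commit -m "Initial import of newproject"
        git remote add origin git@nas:client1/newproject.git
        git push -u origin master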

    Read the article

  • Percona MySQL 5.5 fails to start

    - by keymone
    Trying to set up a new server here, but I keep getting this in the error log:

        mysqld_safe Starting mysqld daemon with databases from /data/mysql/myisam
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Note] Flashcache bypass: disabled
        [Note] Flashcache setup error is : setmntent failed
        /usr/sbin/mysqld: File '/var/mysql/bin/bin-log.index' not found (Errcode: 13)
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete
        mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    Everything under /data/mysql (its ibdata and myisam folders) is owned by mysql:mysql and has proper permissions. The same goes for the folders with bin and relay logs under /var/mysql. AppArmor is purged from the server. Any ideas?

    PS: It seems like something else, apart from AppArmor, is affecting permissions to access MySQL files. After I changed the data directory to a more default one (/var/lib/mysql), the "Can't create test file" error is gone, but "'/var/mysql/bin/bin-log.index' not found (Errcode: 13)" is still there.

    PPS: So I installed AppArmor back and added all the folders to mysqld's profile, and the errors mentioned above are now gone (or mysql doesn't even get to that point now). What I have now is this:

        /usr/sbin/mysqld: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Banging my head against the wall.
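    Errcode: 13 is a plain permission failure, so the usual triage is to decode the error, walk every component of the failing path, and check whether a MAC layer is getting in the way. A sketch of those checks, using the paths from the log:

        perror 13                                 # ships with MySQL: decodes to "Permission denied"
        ls -ld /var/mysql /var/mysql/bin          # every directory on the path must be
        ls -l  /var/mysql/bin/bin-log.index       # traversable/readable by mysql:mysql
        aa-status | grep mysqld                   # any lingering AppArmor profile?
        ldd /usr/sbin/mysqld | grep libpthread    # for the final shared-library error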

    Read the article

  • quick folder access on Linux (akin to Launchy)

    - by Eli Bendersky
    Launchy is a great piece of software; I use it on Windows mainly for quickly accessing folders. I love its auto-indexing in the background, and hardly ever browse through folders manually these days; it saves me lots of time. On Linux (Ubuntu 9.10), however, I usually "live" in the terminal. Therefore, Launchy on Linux (or Gnome Do, or its other replacements) is not what I need, as it opens the file manager, and I don't need the file manager. What I do need is something that indexes my folders and lets me cd into them quickly in the terminal. For example: mycd python_c will cd to ~/dev/scripts/python_code. I hope my intention is understood :-) Are you familiar with such tools?
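    Absent a ready-made tool, a few lines in ~/.bashrc get close. A minimal sketch of a mycd (the name comes from the question; find is slow compared to a real background indexer, so tools in the autojump family are worth a look too):

        # Jump to the first directory under $HOME whose name contains $1:
        mycd() {
            local dir
            dir=$(find "$HOME" -type d -name "*$1*" 2>/dev/null | head -n 1)
            if [ -n "$dir" ]; then
                cd "$dir"
            else
                echo "mycd: no directory matching '$1'" >&2
                return 1
            fi
        }
        # Usage: mycd python_c   ->   cd ~/dev/scripts/python_code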

    Read the article

  • Folder permissions when using /etc/skel and pam

    - by rothgar
    I have a Red Hat 5.8 server that is bound to active directory and users are authenticated via active directory when they log in via sftp. User home folders are created during login using /etc/pam.d/system-auth. The specific line that creates the home folder is session optional pam_mkhomedir.so skel=/etc/skel/ umask=0066 This correctly gives home folders 711 permissions so no one else can read their directories. The problem is, the pam_mkhomedir.so also modifies permissions on all folders/files inside the /etc/skel folder which I don't want. There is a public_html folder (for apache) which needs to have 755 permissions so users can create web pages. Is there a way for me to either a) stop pam_mkhomedir.so from recursively changing all the file permissions or b) create a script that creates the public_html folder after skel is copied and to set the correct permissions?
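    Option (b) can be hooked into the same PAM stack: pam_exec.so runs a program during session setup and exports the logging-in user as PAM_USER, so it can run right after pam_mkhomedir. A sketch; the script path and name are made up:

        # In /etc/pam.d/system-auth, after the pam_mkhomedir line:
        #   session optional pam_exec.so /usr/local/sbin/fix-public-html.sh

        #!/bin/sh
        # /usr/local/sbin/fix-public-html.sh
        home=$(getent passwd "$PAM_USER" | cut -d: -f6)
        mkdir -p "$home/public_html"
        chown "$PAM_USER" "$home/public_html"
        chmod 755 "$home/public_html"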

    Read the article

  • MinGW Doesn't Generate an Object File When Compiling

    - by Nathan Campos
    I've just bought a new laptop for my travels, and in my free time I've started to test MinGW on it by trying to compile my own OS, which is written in C++. I've created all the files needed, including kernel.cpp:

        extern "C" void _main(struct multiboot_data* mbd, unsigned int magic);

        void _main(struct multiboot_data* mbd, unsigned int magic)
        {
            char* boot_loader_name = (char*) ((long*) mbd)[16];

            /* Print a letter to screen to see everything is working: */
            unsigned char* videoram = (unsigned char*) 0xb8000;
            videoram[0] = 65;   /* character 'A' */
            videoram[1] = 0x07; /* foreground, background color. */
        }

    And tried to compile it with g++:

        G: g++ -o C:\kernel.o -c kernel.cpp -Wall -Wextra -Werror -nostdlib -nostartfiles -nodefaultlibs
        kernel.cpp: In function `void _main(multiboot_data*, unsigned int)':
        kernel.cpp:8: warning: unused variable 'boot_loader_name'
        kernel.cpp: At global scope:
        kernel.cpp:4: warning: unused parameter 'magic'
        G:

    But it doesn't create any binary file at C:\. What can I do?
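    One likely culprit, offered as a guess rather than a confirmed diagnosis: -Werror promotes both of those warnings to hard errors, so the compile aborts before kernel.o is written. Fixing or silencing the warnings (or dropping -Werror) should let the object file appear:

        # Silence the warnings (e.g. cast the unused names to void), then:
        g++ -c kernel.cpp -o kernel.o -Wall -Wextra -nostdlib -nostartfiles -nodefaultlibs
        # Note that -c only produces an object file; producing a bootable kernel
        # image is a separate link step (e.g. ld with a linker script).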

    Read the article

  • Add shared contacts to Outlook 2007 address book

    - by PHLiGHT
    Hello, we just upgraded from Exchange 2003 to 2010. 2010 seems to be pushing people to stop using public folders. Public folders had the nice feature that you could see the contacts of the public folder in your address book. I haven't found a way to add shared contacts to the Outlook address book. How do you do it? If I am unable to find the solution, I will likely have to go through the hassle of migrating the public folders over. I was having a problem with that, so I went the route of shared contacts.

    Read the article

  • Mapping drives on MacOSX (Leopard/Snow Leopard) permanently inside LAN

    - by Shyam
    Hi, how do I "map" shared folders on a Mac, permanently? By map, I do not mean 'connect', but permanently add them to the system so they exist after a reboot. Since workstations tend to shut down, I also wonder about the symptoms and cures in case that happens. In Linux, this can be done using the fstab file, but I noticed that volumes are mounted in a different structure than Linux. I need this to back up some workstations, running a recursive job over a single directory that should contain the shared folders. I use Terminal to access the main system, so by preference the solution would be one that works within a bash shell rather than the GUI. I can access all folders in Finder. Thanks!
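    Mac OS X (Leopard and later) ships the autofs automounter, which is the closest analogue to fstab for shares: entries mount on demand and come back after a reboot, and an unreachable share fails that one access instead of hanging the boot. A sketch for an SMB share //nas/backup (server, share, and credentials are placeholders):

        # 1. Add a map reference to /etc/auto_master:
        #      /mnt/auto    auto_smb
        # 2. Create /etc/auto_smb containing:
        #      backup  -fstype=smbfs  ://user:pass@nas/backup
        # 3. Reload the automounter and test from bash:
        sudo automount -vc
        ls /mnt/auto/backup    # triggers the mount on first access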

    Read the article

  • QNAP ftp: hide Mac OSX hidden files .DS_Store and others

    - by tobia.zanarella
    I am using a QNAP as an FTP server. Users access this NAS from their Mac OS X Lion clients and put files and folders onto the QNAP shares, which are later accessed via FTP from external locations. The problem is that inside the folders uploaded to the QNAP, Mac OS leaves its .DS_Store and .* hidden files. I know that for a Samba share, for example, I need to add two lines to the /etc/samba/smb.conf file:

        veto files = /._*/.DS_Store/
        delete veto files = yes

    But what can I do for the QNAP FTP service? Thank you.
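    It may be easier to stop the files being written than to hide them afterwards. Mac OS X has a documented preference that stops Finder writing .DS_Store files to network volumes; a sketch, run once on each Mac client (it takes effect for new sessions, and does not cover the ._* AppleDouble files, which are a separate resource-fork mechanism):

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true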

    Read the article

  • What useful things can one add to one's .bashrc ?

    - by gyaresu
    Is there anything that you can't live without and will make my life SO much easier? Here are some that I use ('diskspace' & 'folders' are particularly handy).

        # some more ls aliases
        alias ll='ls -alh'
        alias la='ls -A'
        alias l='ls -CFlh'
        alias woo='fortune'
        alias lsd="ls -alF | grep /$"

        # This is GOLD for finding out what is taking so much space on your drives!
        alias diskspace="du -S | sort -n -r |more"

        # Command line mplayer movie watching for the win.
        alias mp="mplayer -fs"

        # Show me the size (sorted) of only the folders in this directory
        alias folders="find . -maxdepth 1 -type d -print | xargs du -sk | sort -rn"

        # This will keep you sane when you're about to smash the keyboard again.
        alias frak="fortune"

        # This is where you put your hand rolled scripts (remember to chmod them)
        PATH="$HOME/bin:$PATH"
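    In the same spirit, two more small entries (sketches offered as suggestions, not from the original list):

        # Make a directory and cd into it in one step
        mkcd() { mkdir -p "$1" && cd "$1"; }

        # Recursive, line-numbered grep for quick code spelunking
        alias rgrep='grep -rn --color=auto'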

    Read the article

  • IMAP and Folder Subscriptions

    - by tom
    Hey guys, I'm trying to figure out how folder subscriptions work with IMAP clients and, in my case, in an Exchange 2007 environment. I have managed to deduce, I think correctly, that the folder subscription settings for individuals are stored centrally on the mailbox (at least, having configured 2 profiles in Thunderbird on 2 different machines, they subscribe to the exact same folders and don't display the exact same ones). The point of this being that I have had 2 or 3 different people in the past day report to me that they are having problems subscribing to certain folders, and having certain folder subscriptions remembered (particularly sub-folders). This is across both Thunderbird 2 and 3. Can anyone suggest anything? Tom

    Read the article


  • How can I share a Windows 7 library with anonymous access without using a Homegroup?

    - by Triynko
    The "homegroup" feature is useless, because it requires a password, and therefore doesn't support anonymous, no-hassle access to shares from devices such as my Sony Bravia TV and non-Windows7 machines. So I turned off homegroup and reverted to the standard shared folders protocol. I'd like to share my Music "Library", so I can play files from my TV through the Surround Sound System, but there seems to be no option to share a library folder other than through a homegroup. I don't want to have to individually share the folders that belong to the library, because that would defeat the purposes of the library, which is to manage which folders are included in the library while also providing an easily accessible view of them all at once. Does anyone know how to share a Windows 7 library without that useless homegroup feature?

    Read the article

  • Moving case sensitive Linux files via Windows

    - by sunwukung
    Hi, the company I work for is currently trying to move a Magento installation from one server to another. However, the product images are saved in alphabetically indexed folders, with an added twist: some of the letters are the same but have a different case, i.e. a, A, b, C, D, e, E, f, F, G, h, I. That being the case, when we try to drag those files down from FTP in order to move them, Windows does not honour the case-sensitive distinction and we are losing several image folders. Is there a simple workaround for this issue? Any help is greatly appreciated. Regards, SWK
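    A workaround that keeps the files off any case-insensitive filesystem: pack the image folders into a single tar archive on the source server, move that one file, and unpack it on the destination. A sketch (the Magento media path is a guess; adjust to the real layout):

        # On the old server:
        tar czf images.tar.gz media/catalog/product
        # Copy server to server, bypassing the Windows machine entirely:
        scp images.tar.gz user@newserver:/tmp/
        # On the new server:
        tar xzf /tmp/images.tar.gz -C /var/www/magento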

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys... I have never worked on Linux and don't plan on working on it either; the only command I probably know is "ls" :) I am hosting my website on Eapps and use their cPanel to set up everything, so I've never worked with Linux directly. Now I have this one-time case where I need to provide access to a contractor to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

        /home/webadmin/example.com/html/images
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/login.php
        /home/webadmin/example.com/html/facebook.php
        /home/webadmin/example.com/application/library
        /home/webadmin/example.com/application/views
        /home/webadmin/example.com/application/models
        /home/webadmin/example.com/application/controllers
        /home/webadmin/example.com/application/config
        /home/webadmin/example.com/application/bootstrap.php
        /home/webadmin/example.com/cgi-bin

    I want the new user to be able to have access to only these folders:

        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views

    He should not be able to view even the content of other folders, including files like bootstrap.php or login.php etc. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
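    One way a sysadmin might set this up, sketched under the assumption of root access and a vsftpd-style chrooted FTP daemon: create an FTP-only user and bind-mount just the three folders into that user's home, so nothing else is even visible.

        useradd -m -d /home/cssguy -s /sbin/nologin cssguy   # 'cssguy' is a made-up name
        mkdir -p /home/cssguy/{js,css,views}
        mount --bind /home/webadmin/example.com/html/js           /home/cssguy/js
        mount --bind /home/webadmin/example.com/html/css          /home/cssguy/css
        mount --bind /home/webadmin/example.com/application/views /home/cssguy/views
        # In /etc/vsftpd/vsftpd.conf: chroot_local_user=YES
        # (add /sbin/nologin to /etc/shells if PAM rejects the login, and list the
        # bind mounts in /etc/fstab so they survive reboots)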

    Read the article


  • Apache 2.2 Windows XP Uninstall Doesn't Remove All Files

    - by AJ
    Hello, I am trying to uninstall Apache 2.2 on Windows XP. The original installation was from the binary msi distribution. The uninstall function of the msi ran successfully, but it failed to remove a couple of folders: C:\Apache2.2\conf C:\Apache2.2\logs I am unable to remove these folders manually because they contain files for which I do not have ownership. And this is the source of my confusion: why if I installed the program do I not have permission to remove it? To be clear, I do not have local admin rights (nor can I request them), yet the files in these two remaining folders are owned by Administrator. How is it that Administrator created these files (and how can I possibly remove them)? Thanks, -aj

    Read the article

  • Photo Management Software for OS X That Supports Multiple Library Locations

    - by Lance Rushing
    I'm looking at the possibility of changing our photo management software, thinking about Aperture or Lightroom, and want to know if either supports:

    - Having folders/libraries on separate hard drives/volumes
    - Tolerating network folders that are only occasionally connected
    - A snappy interface
    - A solid enough piece of software that it doesn't crash or behave weirdly [so the wife doesn't get stuck and have to call me ;)]

    Background: My wife is a photographer and I'm a computer programmer. Our current setup is Picasa, with a "Recent/Working" folder on the local iMac hard drive, and "Archive" folders on an NFS-mounted Linux server with RAID 5, for redundant and extra storage capacity (the Linux server also syncs with Amazon's EC2). Picasa is doing OK, but I get annoyed when it doesn't behave properly, usually around issues when the Linux disk isn't mounted. And overall, I wish Picasa seemed a little more polished and snappier. Thanks, Lance

    Read the article
