Search Results

Search found 45466 results on 1819 pages for 'config files'.

  • Turn on PC power remotely through the Internet?

    - by W.N.
    I use SVN for my work at home and at the office, but I usually forget to commit my changes before shutdown, so I wish I could turn on my home/office PC from the other location. I already have TeamViewer installed on both PCs, so I am fine as soon as the power is on. I have read many articles about this and found that both my home and office computers support Wake-on-LAN, but I don't know much about the rest of the configuration, and I need to turn the computers on over the Internet, not just the LAN. My office connection has a static IP. My home connection has a dynamic IP that changes whenever I reset the modem, but that is not a big problem, since I rarely turn the modem off. I don't have privileges to configure the office Internet connection, but I do have administrator privileges on both PCs. Please give me detailed steps to turn on my office PC from home and my home PC from the office.
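
    If it helps, here is a minimal sketch of the usual approach, assuming Wake-on-LAN is enabled in each BIOS/NIC, the router at the target site forwards a UDP port (e.g. 9) to the LAN broadcast address, and the 'wakeonlan' tool is installed; the IP and MAC address are placeholders:

        # send a magic packet across the Internet to the office's static IP;
        # the office router must forward UDP 9 to the LAN broadcast address
        wakeonlan -i 203.0.113.10 -p 9 AA:BB:CC:DD:EE:FF

    For the home PC behind a dynamic IP, a free dynamic DNS name can stand in for the address.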

    Read the article

  • Can I compile mutt under cygwin?

    - by openist
    I've been trying to compile mutt under cygwin for a few days. The version included with cygwin is outdated and lacks things I need, like header caching. Anyway, I always get the message: "configure: error: no curses library found". I have all the curses and devel packages installed, plus termcap, which I heard might be related. I've tried re-installing, and I've tried specifying the location on the configure command line, but I'm not sure I'm doing it right: "--with-curses=/usr/lib/libncurses.a --with-curses=/usr/lib/libncurses.dll.a --with-curses=/usr/include/ncurses" Here's my config.log: http://floatsolutions.net/docs/config.log Any ideas? EDIT: Context
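
    One thing worth trying, sketched here on the assumption that the Cygwin ncurses-devel package installs under /usr: configure's --with-curses usually expects an installation prefix, not the path to a .a or .dll file:

        # point configure at the ncurses prefix, not the library file itself,
        # and enable the header caching the stock build lacks
        ./configure --with-curses=/usr --enable-hcache
        make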

    Read the article

  • Store and encrypt data over the Internet

    - by sotsec
    I am trying to build a system where I can access my files remotely. I want to set up an external hard drive or a NAS that I will access over the Internet, and I want every file stored on that system to be encrypted. Could you please suggest the best way of doing that? In other words, what is the best way to access your files remotely with maximum safety, while the space where the files live is also protected against theft (encryption)? Thank you.
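
    One possible design, sketched with LUKS on Linux (all paths are placeholders): keep the data in an encrypted container on the drive, unlock it only while in use, and reach it over SSH so the disk itself only ever holds ciphertext:

        dd if=/dev/zero of=/mnt/nas/vault.img bs=1M count=10240   # 10 GB container file
        cryptsetup luksFormat /mnt/nas/vault.img                  # one-time encryption setup
        cryptsetup luksOpen /mnt/nas/vault.img vault              # unlock for use
        mkfs.ext4 /dev/mapper/vault                               # one-time filesystem creation
        mount /dev/mapper/vault /mnt/secure

    Remote access can then go through SFTP or SSHFS, and a stolen drive exposes nothing without the passphrase.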

    Read the article

  • Searching Multiple Terms

    - by nevets1219
    I know that grep -E 'termA|termB' files allows me to search multiple files for termA OR termB. What I would like instead is to search for termA AND termB. They do not have to be on the same line, as long as the two terms exist within the same file. Essentially a "search within results" feature. I know I can pipe the results of one grep into another, but that seems slow when going over many files: grep -l "termA" * | xargs grep -l "termB" | xargs grep -E -H -n --color "termA|termB" Hopefully the above isn't the only way to do this. It would be extra nice if this could work on Windows (I have cygwin) and Linux. I don't mind installing a tool to perform this task.
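
    For what it's worth, two sketches of the same idea: the first leans on -l (stop at the first hit per file) and NUL-separated names, the second makes a single pass per file with gawk:

        # pipeline variant: -lZ / -0 keep filenames with spaces intact
        grep -lZ termA * | xargs -0 grep -lZ termB | xargs -0 grep -EHn --color 'termA|termB'

        # single-pass variant (gawk): print each file containing both terms
        awk 'FNR==1{a=b=0} /termA/{a=1} /termB/{b=1} a&&b{print FILENAME; nextfile}' *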

    Read the article

  • Folder sync application which can sync over Internet (the other machine specified by an IP)?

    - by Adal
    I need to sync some folders between two Win 7 machines. While they are connected to the same LAN, they can't see each other over Windows networking, since sharing is disabled on both of them (for security reasons). Do you know any sync app that can work over plain IP? The folder I need to sync has 500,000 files in it (80 GB in total), so the sync app should be pretty efficient. At the moment I copy the files from one machine to the other over FTP, but it takes forever, since a separate connection is opened for each file. Or maybe you know an app which can efficiently transfer a large number of files between two machines over the Internet?
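
    A sketch of one common answer, assuming an SSH server can be run on one of the machines (Cygwin provides both rsync and sshd on Windows 7; the address and paths are placeholders): rsync reuses a single connection and sends only changed files, which suits a 500,000-file tree far better than per-file FTP:

        rsync -az /cygdrive/d/data/ user@192.168.1.20:/cygdrive/d/data/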

    Read the article

  • Hard disk failure. Can I recover my "move"d folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11 GB of data from my old laptop to an external hard drive, and upon moving it on to the new laptop, the external drive is giving a CRC error (Data Error: Cyclic Redundancy Check). Now I am looking for a way to recover the files I moved off my old laptop (not the external drive); I understand that they are just marked as deleted, with their space freed for potential overwriting. I was getting ready to test GetDataBack, but it says to install it on a healthy Windows system and attach the drive that needs recovery as an external. However, I don't want to turn off my computer without first getting the okay, since it is in this "moved" state. I haven't touched the computer since the move. Please help! What can I use to recover the moved files?
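
    For the external drive with the CRC error, a common first step (sketched with GNU ddrescue from a live Linux CD; the device and output paths are placeholders) is to image it read-only before pointing any recovery tool at it:

        # copy what is readable, retrying bad sectors 3 times, and keep a map
        # of unread areas so the run can be resumed later
        ddrescue -d -r3 /dev/sdX /mnt/spare/external.img /mnt/spare/external.map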

    Read the article

  • Blurred refurbished TFT

    - by PeterMmm
    I got 3 refurbished PCs with 19" TFTs. All the TFTs show a very blurry image. I estimate the TFTs are 2-3 years old. The PCs are running Windows 7 Pro, with the resolution set to the TFTs' native values. It's possible that all the TFTs are broken, but I have similar models up to 5 years old without any issue. I still think it could be a config issue, but can a TFT break in a way that shows up as a very blurry image? Update: TFT HP 2035, graphics Intel Q35/GMA 3100, analog D-SUB connector, manufacture date Sep 2005. Configured resolution 1600x1200, DPI 150%. Without ClearType it is worse. Desktop icon titles seem good and clear, but in Notepad, for example, there is a pink shadow to the right of the characters.

    Read the article

  • batch file to disable network share on Windows XP

    - by Robb
    Loosely related to the question Network Share causing Cygwin to run slowly after 'ls', I'd like to write a little batch file that I can execute to disconnect the host from any network shares, and subsequently another batch file to reconnect. Ideally, this would be something I can execute from a PuTTY terminal, SSHed into the box running cygwin. I'm pretty sure the batch files can be written easily, but I don't know about executing them from a PuTTY terminal. Regardless, I'd still like the batch files anyway. For the sake of simplicity my process would be: log into the server via PuTTY; run a batch file to disconnect the shares; do what I need to do; run a batch file to reconnect the shares; exit the session, closing PuTTY.
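
    A sketch that should work directly from a Cygwin shell reached over SSH, since net.exe can be called from bash (the drive letter and UNC path are placeholders):

        net use Z: /delete /y              # disconnect the mapped share
        # ... do the work that the share slows down ...
        net use Z: '\\server\share'        # reconnect it afterwards

    The same two net use commands, dropped into disconnect.bat and reconnect.bat, cover the plain cmd.exe case.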

    Read the article

  • Data inaccessible from WHS drive

    - by Eakraly
    Hi all, I had a Windows Home Server machine that crashed. Before doing anything else, I decided to get the data off it, so I took the hard drive and connected it to a Windows 7 PC. The result is that I cannot access almost any file! I can see the directory structure and open small files like .txt and .ini, but bigger files like .iso and video are a no-go. The same goes for Ubuntu and OS X: I can see the files and even copy them, but the copies are corrupted. Any ideas what the problem is?

    Read the article

  • Persistently retrying and resuming downloads with curl

    - by Svish
    I'm on a Mac and have a list of files I would like to download from an FTP server. The connection is a bit buggy, so I want it to retry and resume if the connection drops. I know I can do this with wget, but unfortunately Mac OS X doesn't come with wget, and to install it (unless I have missed something) I would need to install Xcode and MacPorts first, which I would like to avoid. Curl is available, it seems, but I don't really know how it works or how to use it. If I have a list of files in a text file (one full path per line, like ftp://user:pass@server/dir/file1), how can I use curl to download all those files? And can I get curl to never give up? Like, retry infinitely and resume downloads where it left off?
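
    A sketch of a curl-only loop, assuming the list lives in files.txt: -C - resumes a partial download, --retry handles transient failures, and the outer loop keeps trying until curl finally exits successfully:

        while read -r url; do
            until curl -O -C - --retry 10 --retry-delay 5 "$url"; do
                echo "retrying $url" >&2
                sleep 5
            done
        done < files.txt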

    Read the article

  • How to auto detect text file encoding?

    - by ???
    There are many plain text files encoded in various charsets. I want to convert them all to UTF-8, but before running iconv, I need to know each file's original encoding. Most browsers have an Auto Detect option for encodings; however, I can't check those text files one by one, because there are too many. Once I know the original encoding, I can convert the text with iconv -f DETECTED_CHARSET -t utf-8. Is there any utility to detect the encoding of plain text files? It doesn't have to be 100% correct; it just needs to recognize most of them.
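
    A sketch using uchardet (file -b --mime-encoding is a rougher fallback), feeding the guess straight into iconv; the file pattern and output naming are just examples:

        for f in *.txt; do
            enc=$(uchardet "$f")                              # guess the charset
            iconv -f "$enc" -t UTF-8 "$f" > "${f%.txt}.utf8.txt"
        done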

    Read the article

  • Our company has 100,000s+ photos, how to store and browse/find these efficiently?

    - by tobefound
    We currently store our photos in a structure like this: folder\1\10000-19999.JPG|ORF|TIF (10,000 files), folder\2\20000-29999.JPG|ORF|TIF (10,000 files), etc. They are stored on 4 different 2 TB D-Link NASes, attached and shared on our office network (\\nas1, \\nas2, and so on). Problems: 1) When a client (Windows only, Vista and 7) browses, say, the \\nas1\folder\1\ folder, performance is quite poor: the listing takes a long time to generate in the Explorer window, even with icons turned off. 2) Initial access to the NAS itself is sometimes slow. SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100,000+ files in one single folder shouldn't be a problem, but we don't dare go there now that we're seeing problems at the 10K level. All input greatly appreciated, /T
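
    Purely as an illustration of one mitigation (assuming the numeric file names from the layout above): splitting each 10,000-file folder into 1,000-file buckets keeps any single directory small, which Explorer tends to list much faster over SMB:

        for f in /share/folder/1/*.*; do
            n=$(basename "$f")
            bucket=$(( ${n%%.*} / 1000 ))                     # 10000-10999 -> 10, etc.
            mkdir -p "/share/folder/1/$bucket" && mv "$f" "/share/folder/1/$bucket/"
        done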

    Read the article

  • close ssh sessions

    - by egor7
    I'm using ~/.ssh/config to log in to the internal.local corporate server:

        Host internal.local
            ProxyCommand ssh -e none corporate.proxy nc %h %p

    But after closing the session (typing exit), my sshd session on the server stays active (I can see it through a different connection). How do I close the session, or change my config appropriately, to eliminate hung sessions? First check, from a second root session:

        ps -fu user_name
        user_name   861   855   0 16:58:16 pts/3   0:00 -bash
        user_name   855   854   0 16:58:13 ?       0:00 /usr/lib/ssh/sshd

    After logging out:

        user_name   855   854   0 16:58:13 ?       0:00 /usr/lib/ssh/sshd

    A new scp session also hangs on the server just after copying files to/from internal.local.
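
    One possible fix, sketched for OpenSSH 5.4 or later (host names as in the question): the -W option replaces the external nc, so no leftover helper process holds the server-side session open after exit:

        Host internal.local
            ProxyCommand ssh -W %h:%p corporate.proxy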

    Read the article

  • Copying large file from SD to HDD via USB failing on Ubuntu

    - by Kent Boogaart
    Hi, I'm attempting to copy some large files from my camera (Canon EOS 500D) to my laptop, which is running 64-bit Ubuntu 9.04, using USB to connect the two devices. For most files it is simply a matter of Ctrl-C and Ctrl-V, and I have done this successfully many times with both photos and small movies (e.g. 180 MB). However, when I attempt this with very large files (e.g. 3 GB), the copy starts with a lot of activity on both the camera and the laptop, but after 10 minutes or so the camera is automatically unmounted and the copy fails to complete. I have read that this might be because the device doesn't mount as a mass storage device, but I cannot see any obvious way to change this behavior. Can anyone offer any direction here? I'll get a USB card reader if necessary, but I'd prefer to be able to just plug my camera in. Thanks
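
    One workaround worth sketching, since it bypasses the mass-storage mount entirely: gphoto2 (packaged for Ubuntu) pulls the files over PTP instead:

        sudo apt-get install gphoto2
        gphoto2 --auto-detect        # confirm the camera is recognized
        gphoto2 --get-all-files      # download into the current directory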

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server, as the original file server is no longer able to handle the write load. To make this quick, I made symlinks on the target filesystem pointing to the original filesystem. Initially: /company/release (mount point of the original filesystem). After migration: /company/release.old (points to the original filesystem after an automount map update) and /company/release (points to the new fileserver/filesystem after the automount map update). In /company/release there are symlinks like the following: /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz and /company/release/product-1.0 -> /company/release.old/product-1.0 (a tree of files). Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files, rsync doesn't see any difference, so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?
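
    Rather than rsync flags, one hedged approach is to clear the placeholder links first and then copy for real; -lname (GNU find) matches only the symlinks that point back into the old tree, at the cost of a brief window where those names are absent:

        find /company/release -maxdepth 1 -type l -lname '/company/release.old/*' -delete
        rsync -aH /company/release.old/ /company/release/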

    Read the article

  • linux system problem

    - by snakec
    Very first, thanks to you all for your support. I'm quite new to Linux. I know how to install software, but I don't know: 1) how to install libraries (.a or .so files); 2) how to install tar.gz archives. I use the ./configure, make, make install method, but most of the time I get the message "nothing to make". Lots of tar.gz archives contain no installation document, no makefile, no configure script, which leaves me quite confused about how to install or run them. Now I have the CUDA sample source code, which I got in tar.gz form. When I extract it, I find folders within folders: c, doc, shared, etc., and inside each of those, folders like src, doc, common and lib, containing source code files, header files, library files and makefiles. I don't know how to build this kind of project: can it be installed on the system, and how do I run it? There is no .run file or script and no configure file. Can anyone explain how to compile, run and install such projects?
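
    For trees that ship only makefiles, as the CUDA samples do, the usual pattern is sketched below; the exact directory names vary by SDK version and are placeholders here:

        tar xzf cuda_sdk_samples.tar.gz     # unpack the archive
        cd cuda_sdk_samples/C               # enter the tree with the top-level makefile
        make                                # no configure step: just make
        ./bin/linux/release/deviceQuery     # built binaries land under bin/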

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basic pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid-state drive (seek times massively lower than hard disks') may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher price per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer technology? (Assume an average scattering of file sizes: program files, OS files, data, MP3s, text files, etc.)

    Read the article

  • initctl respawn does not reload configuration

    - by DELUXEnized
    My upstart service is running with the respawn option. I was hoping that if I deploy a new service config, it would be loaded when the service respawns, but neither the initctl reload-configuration command nor the restart command forces a reload; only an explicit stop and start reloads the configuration. The problem is that I cannot stop and start the service at deploy time; the service itself schedules its restart by simply shutting down. Is this behavior by design, or am I missing something? Would it change anything if I handled the respawn with a second watchdog service that explicitly starts my service when it stops? Why is there a difference between an explicit stop/start and the restart command or respawn option? Thanks.
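
    A sketch of the distinction as documented for upstart ('myjob' is a placeholder): reload-configuration only re-parses the job files, and restart deliberately keeps the old job definition, so picking up a new config takes an explicit stop/start:

        initctl reload-configuration
        stop myjob
        start myjob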

    Read the article

  • How can I exclude a file in a folder from basic auth (regex help)?

    - by simon180
    Hi, I have a folder on my site which contains admin files, and I've added basic auth following a little unwanted attention. This works fine; however, a couple of the admin functions won't work through basic auth, as they handle file uploads, so I want to exclude these files from the auth. It shouldn't have any security implications, as a rogue user still wouldn't be able to access the pages that could create a session to use these functions. I am using the following basic code to exclude a file:

        <FilesMatch "(index.php\/myadminfolder\/myurl\/myaction/someotherstuff?)$">
            Satisfy Any
            Order allow,deny
            Allow from all
            Deny from none
        </FilesMatch>

    The URL exclusion is not working. The URL to exclude is of the form index.php/directory/subdirectory/action/uniqueid/blah. What is the correct string to use to exclude any files that start with the pattern index.php/directory/subdirectory/action, regardless of what comes after action? Thanks Simon
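
    A hedged sketch of one likely fix (Apache 2.2 syntax, using the question's example path): FilesMatch only ever sees the file name, index.php, never the path-info after it, so the match has to be done on the request URI with LocationMatch in the main or vhost config:

        <LocationMatch "^/index\.php/directory/subdirectory/action">
            Satisfy Any
            Allow from all
        </LocationMatch>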

    Read the article

  • Solaris NFS: user permissions

    - by cjavapro
    I am very new to NFS. I would like to make sure I am clear: if the NFS server shares a directory rw, and all the files in the directory have permissions 700 with user/group root/root, then on the client you would have to log in as root to see them. Is this correct? I am aware that a non-root user on the client could make a direct connection to override this (as in: don't use the mount, just use an NFS client hack). It really seems like anyone who has access to the client machine should have access to the files, and that the client machine should be ignoring permissions; only the server should handle permissions. Am I correct in my understanding? Is it normal to have this type of layout? Is there a way to ignore the permissions on the client side?
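
    For completeness, a sketch in Solaris share syntax ('client1' is an example host): the server always enforces permissions, but the root= option lets named clients keep root's identity instead of having it mapped to nobody, which makes root-owned mode-700 files readable on the mount:

        share -F nfs -o rw,root=client1 /export/files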

    Read the article

  • vsftpd per group configuration

    - by roqs
    I want to configure vsftpd per group instead of per user. Is that possible? Suppose I have two groups, groupA and groupB, and my goal is: users in groupA have permission (rwx) to all files in directory dir1; users in groupB have permission (rwx) to all files in directory dir2; all users of the system have permission (rwx) to all files in directory dir3. For example:

        ftp@test:/home/ftp# ls -l
        drwxrwxr-x 16 root groupA 4096 Jun  3 10:45 dir1
        drwxrwxr-x  2 root groupB 4096 Jun  3 10:56 dir2
        drwxrwxr-x  8 root users  4096 Jun  3 11:01 dir3

    How can I do that with vsftpd?
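
    vsftpd has per-user config (user_config_dir) but nothing per group, so the usual answer is plain filesystem group permissions, sketched here with the names from the question:

        chgrp groupA /home/ftp/dir1 && chmod 2775 /home/ftp/dir1   # setgid keeps the group
        chgrp groupB /home/ftp/dir2 && chmod 2775 /home/ftp/dir2
        chgrp users  /home/ftp/dir3 && chmod 2775 /home/ftp/dir3
        echo 'local_umask=002' >> /etc/vsftpd.conf                 # uploads stay group-writable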

    Read the article

  • Unable to read DVD on laptop

    - by usabilitest
    I have a DVD that I created on an HP laptop; I believe it was set up to be accessed like a USB drive. I no longer have that laptop, and I would still like to access the files on that DVD from my current Dell laptop, but it cannot open the disc. My guess is that the session was not closed on the disc, and for some reason the new laptop cannot read it. My question is: is there any way I can access the files using the new laptop, or do I need to find a similar HP machine to access the files and close the session? Are there any utilities out there that could help me? Thanks.
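
    On Windows, ISOBuster is the tool usually mentioned for discs with open sessions. If a Linux machine is handy, one thing worth a try (sketched; the device and mount point are placeholders) is naming the session explicitly when mounting:

        mount -t iso9660 -o ro,session=0 /dev/sr0 /mnt/dvd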

    Read the article

  • Restarting Nagios Using PHP

    - by X-Ware
    I am making a tool that interacts with Nagios; it adds some config files, so a restart is needed. What I need to know is how to restart Nagios from PHP code, since this tool is written in PHP. When I try to do it with: shell_exec("service nagios restart"); the changes do not take place, but when I run the same command manually from the console, all the changes I made with the PHP script are applied. After two minutes of research I found that I was asking Linux to execute this command while logged in as the apache user, so I changed the command to: shell_exec('echo "mypass" | sudo -S service nagios restart'); but I still have the same problem: the new config files are not read until I restart manually. Any suggestions will be appreciated :)
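
    A sketch of the usual sudoers fix (edit with visudo; 'apache' is the web-server account and the service path may differ): grant exactly this one command without a password, and drop the tty requirement that silently blocks sudo when run from PHP:

        Defaults:apache !requiretty
        apache ALL=(root) NOPASSWD: /sbin/service nagios restart

    The PHP side then becomes shell_exec('sudo /sbin/service nagios restart 2>&1'); echoing the output shows any remaining error. Piping a password to sudo -S is best avoided, since it leaves the password in the script.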

    Read the article

  • Smart Auto-completion in SVN (and other programs!)

    - by Jimmy
    When I type "svn add path/to/somefile" and hit tab to autocomplete, the system should ONLY complete files/directories that are NOT currently under SVN control. Likewise, when I commit, remove or resolve files, tab completion should only complete files relevant to what I'm doing. This is especially important with SVN, where I can waste thousands of keystrokes typing long path and file names, but it also applies to other programs. I know bash has a bash_completion facility that can programmatically alter this behaviour, but I've not found a decent example for SVN which actually completes file names rather than SVN command names. My question is: does anyone have such a setup? Does anyone use a different shell or tool that does something similar? Has anyone given this any thought?
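
    A sketch of the bash-completion route: when the subcommand is add, offer only the paths svn reports as unversioned (the '?' lines in svn status); file names containing spaces would need extra care:

        _svn_smart() {
            local cur=${COMP_WORDS[COMP_CWORD]}
            if [ "${COMP_WORDS[1]}" = "add" ]; then
                COMPREPLY=( $(compgen -W "$(svn status 2>/dev/null | awk '/^\?/{print $2}')" -- "$cur") )
            else
                COMPREPLY=( $(compgen -f -- "$cur") )      # fall back to file names
            fi
        }
        complete -F _svn_smart svn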

    Read the article
