Search Results

Search found 39577 results on 1584 pages for 'temp files'.

  • Modifying value of "Rating" column within Explorer for arbitrary file types

    - by Fake Name
    Basically, I have a large body of assorted media (text, images, flash files, archives, folders, etc.) and I'm attempting to organize it. Windows Explorer has a Rating column, but there seems to be no way to modify the rating of a file short of opening it in its type-specific software (e.g. Media Player, or Photo Viewer). That does not work when the file is of an unsupported type (.rar, .swf, ...) or is a directory.

    I'd be more than willing to consider a file-manager replacement (I've already looked at quite a few: Directory Opus, Total Commander, etc.), or even a solution that stores the rating metadata in a hidden file in each folder, or in a separate database. The one critical requirement is the ability to sort by rating while being filetype-agnostic.

    Basically, is there any way to categorize a large collection of assorted files by rating that will work with any file type, including directories? Ideally, there would be an easy way to add arbitrary columns to Windows Explorer and edit them directly, but there seems to be no way to do that; the Rating column is the next best thing.
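
    As a rough illustration of the "hidden file in each folder / separate database" approach mentioned above, here is a minimal sketch in Python. The sidecar file name (.ratings.json) and the 1-5 scale are made-up conventions, not anything Explorer understands; it only shows that a filetype-agnostic, sortable rating store is easy to keep outside the files themselves.

        import json, os
        from pathlib import Path

        RATING_FILE = ".ratings.json"   # hypothetical per-folder sidecar file

        def load_ratings(folder):
            """Return the folder's rating map ({entry name: 1-5}), or an empty one."""
            path = Path(folder) / RATING_FILE
            return json.loads(path.read_text()) if path.exists() else {}

        def set_rating(folder, name, stars):
            """Rate any entry in the folder - file or subfolder alike."""
            ratings = load_ratings(folder)
            ratings[name] = stars
            (Path(folder) / RATING_FILE).write_text(json.dumps(ratings, indent=2))

        def list_by_rating(folder):
            """List the folder's entries sorted by rating, best first (unrated last)."""
            ratings = load_ratings(folder)
            entries = [e for e in os.listdir(folder) if e != RATING_FILE]
            return sorted(entries, key=lambda e: ratings.get(e, 0), reverse=True)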

    Read the article

  • Is my "Generic" USB Flash Drive broken?

    - by Jesse J.
    So here is the situation. I consider myself technologically knowledgeable about many things (I love to code, whether it's websites, C#, C++ and so on). However: my 2 toddlers (my wife, actually) bought me a "Generic" 128 GB USB storage device (USB flash drive) for Father's Day. I thought "awesome" at first..... WRONG! Nothing but problems with it: 3-4 MB/s MAX transfer speed. I could bear with that. BUT! When I went to reformat my computer, I transferred the save files from my games over to the stick, and then the USB stick managed to become corrupted. A simple format wouldn't fix it either. It's screwed.

    I ran "chkdsk X: /X /F /R" with administrator rights (while troubleshooting I manually changed the USB drive letter to X). After a long session I got it to finish with no errors (I had to delete the log) and I finally recovered the files. However, when I now go to use it (transfer to or from it), it copies a couple of kB and then freezes. Windows 7 shows:

        Name:
        From: Folder (X:\File\Location)
        To: Folder (C:\Users\Username\Desktop)
        Items Remaining: 0 (0 bytes)
        Speed: 0 bytes/second

    It does this forever... and ever... and ever... It transferred at least 3 files, and then stopped. This is a new USB stick bought from a "high" reputation company on eBay. Is the USB stick screwed?

    Read the article

  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction.

    Here is the result of the "top" command while uploading 4 files (18 MB, 38 MB, 60 MB, 33 MB):

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1904 apache    20   0 33504 5740 1952 R 28.3  0.2   0:02.19 httpd
         1905 apache    20   0 33504 5740 1952 R 28.3  0.2   0:01.99 httpd
         1903 apache    20   0 33232 6968 3060 R 28.0  0.2   0:01.98 httpd
         1910 apache    20   0 33240 6020 2248 S 11.5  0.2   0:02.85 httpd
         2133 root      20   0  2656 1124  896 R  1.6  0.0   0:00.71 top
            1 root      20   0  2864 1404 1188 S  0.0  0.0   0:03.99 init

    Below is the chunking code. Even though I don't use this code for the plain upload (just a simple file upload), it still causes that high CPU usage:

        function sendRequest() {
            // clean the screen
            // bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunk size
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    if (blob.webkitSlice) { // for Google Chrome
                        var chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) { // for Mozilla Firefox
                        var chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200 GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - the VMs then run in differencing disks off them. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets its read-only protection set and keeps it until it is retired. Obviously, I need as much uptime as possible.

    SO FAR we have run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move to a scenario where the storage is more centralized.

    Server 2012 supports always-on shares, but it seems that this only works with a clustered disk behind them. Is there any way around this for read-only file stores? All documentation points to something like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separate here, vertically: 2 servers, both with SSDs only for this, both with their own separate 2000 W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is obviously an edge case, as the files are read-only once in use.

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now and for the most part it has been working great.

    The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. Basically the problem appears to be twofold. First, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached client side for many people, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client side because of the cached minified assets.

    The simple solution is to disable mod_pagespeed, but I would rather not do that as it has made a fairly large impact on performance. I feel there must be a better way to deal with the inconsistencies in cache between the client and server, so that people don't have to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses:

    http://wellplayed.org
    http://wellplayed.org/tv

    Thanks in advance!

    Read the article

  • Can I install Windows on an SSD and access data from my old Windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that last time I upgraded with the same Windows version).

    When the computer attempts to boot into Windows it briefly flashes a blue screen and then restarts. System recovery gives a message something like "BadDriver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?), and I've been as yet unsuccessful in getting it to boot into my old Windows partition.

    SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows onto that, and then... be able to access just the contents of both of my other hard drives? I don't intend on running any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it, no OS or applications, so I'd also expect to still be able to access it. Is this possible? Can I install the SSD, format/install Windows onto only it, and still access the contents of my two transferred-over drives?

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt consist mostly of password lists and SSH keys for servers. Unfortunately it is impossible to keep track of the ever-changing passwords for my servers (and SSH keys), so I need to keep a list of them. Obviously this list needs to be secure, and also portable (I work from multiple locations).

    At the moment, I use a 10 MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access it from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system.

    So, what other solutions exist, and which ones would you consider to be the most elegant? Thanks.
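
    A portable variation on the tar + gpg idea, sketched in Python purely as an illustration: derive an AES-256 key from a passphrase with scrypt and seal one archive file with AES-GCM. It assumes the third-party "cryptography" package is available on each machine; the file names are placeholders.

        import os, getpass
        from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_file(src, dst, passphrase):
            """Encrypt src (e.g. a tar archive of the secrets) into dst."""
            salt, nonce = os.urandom(16), os.urandom(12)
            key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)
            sealed = AESGCM(key).encrypt(nonce, open(src, "rb").read(), None)
            with open(dst, "wb") as out:
                out.write(salt + nonce + sealed)   # salt and nonce need not be secret

        def decrypt_file(src, passphrase):
            """Return the decrypted bytes of a file produced by encrypt_file."""
            blob = open(src, "rb").read()
            salt, nonce, sealed = blob[:16], blob[16:28], blob[28:]
            key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase)
            return AESGCM(key).decrypt(nonce, sealed, None)

        if __name__ == "__main__":
            pw = getpass.getpass("Passphrase: ").encode()
            encrypt_file("secrets.tar", "secrets.tar.enc", pw)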

    Read the article

  • Revamping an old and unstable IT-solution for a customer?

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with Flash games. To say the least, this isn't working for them.

    Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the future AD domain controller also running a VPN server, because of stability issues if something goes to hell with either of them. There will be no redundancy though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN.

    I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather into their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • How is it possible to list all folders that a particular user/group has permissions on?

    - by Lord Torgamus
    Is it possible to list all folders/files that a given group has explicit permissions on, for a machine running Windows Server 2003? If so, how? It would be nice to see inherited permissions as well, but I could do with just explicit permissions.

    A little background: I'm trying to update groups/permissions on a test server. One of the groups, Devs, wasn't implemented correctly when it was created, and my goal is to remove it from the system. It has been replaced by LeadDevelopers, which has permissions on many (but naturally not all) of the same folders. I want to make sure that I don't accidentally orphan any folders or cause any other issues when I remove Devs. It did have some admin-level permissions.

    EDIT: The answers so far (at least *cacls and AccessEnum) provide a way to find out which groups/users have permissions on known directories/files. I actually want the reverse of this behavior: I know the group, and I'm looking for the directories/files for which the group has permissions. Also, as I noted in a comment, the Devs group is not itself a member of any other group.
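
    For what it's worth, a brute-force sketch of that reverse lookup in Python: walk the tree and shell out to cacls, printing any folder whose ACL mentions the group. The group name and starting path are placeholders, and this does not distinguish explicit from inherited entries - it is only meant to show the shape of the scan.

        import os, subprocess

        GROUP = "Devs"          # group to look for (placeholder)
        ROOT = r"D:\Shares"     # tree to scan (placeholder, UNC paths work too)

        for dirpath, dirnames, filenames in os.walk(ROOT):
            try:
                # cacls prints one ACE per line, e.g. "DOMAIN\Devs:(OI)(CI)F"
                acl = subprocess.run(["cacls", dirpath], capture_output=True,
                                     text=True, errors="replace").stdout
            except OSError:
                continue
            if GROUP.lower() in acl.lower():
                print(dirpath)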

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files:

    - static content like CSS, JS, and some images (most images are on an external CDN)
    - the main PHP/MySQL database-driven website, which essentially acts like a static site
    - a dynamic PHP/MySQL forum

    It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_cache one;
            fastcgi_cache_key $scheme$host$request_uri;
            fastcgi_cache_valid 200 302 304 30m;
            fastcgi_cache_valid 301 1h;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name;
            fastcgi_param HTTPS off;
        }

    However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - I haven't checked whether the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle:

        if ($request_uri ~* "^/forum/") {
            fastcgi_cache_bypass 1;
        }

        # or possibly this, if I'm able to cache pages for anonymous visitors
        if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") {
            fastcgi_cache_bypass 1;
        }

    Will that work fine, or is there a better way to achieve this?

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong suit.

    Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important. There should be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run since it's being used for development of one of the web apps.
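
    As a rough illustration of the grep-the-output idea, here is a minimal Python sketch that cron could run. It relies on git status --porcelain, which prints one line per changed or untracked file and nothing at all when the working tree is clean; the repository paths are placeholders. Since cron mails any output it sees (when mail is set up), printing only on problems is usually enough.

        import subprocess, sys

        # Repositories that should never have local changes (placeholder paths)
        REPOS = ["/path/gitrepo", "/path/other-repo"]

        dirty = []
        for repo in REPOS:
            out = subprocess.run(["git", "status", "--porcelain"],
                                 cwd=repo, capture_output=True, text=True).stdout
            if out.strip():
                dirty.append((repo, out))

        for repo, out in dirty:
            print("Uncommitted changes in %s:\n%s" % (repo, out))

        # A non-zero exit also makes it easy to chain an alert of your choice
        sys.exit(1 if dirty else 0)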

    Read the article

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over a slow and expensive modem connection. In today's world, file size is no longer a concern, at least not regarding JPG, where it seems silly to save 45 kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default 85? Why should I not crank it up to 100 for all files?

    Update based on comments: For web work, I might use PNG instead. But every smartphone and camera produces JPG files, and the question arises when I save my edits to those. The audience is my own hard disk. We're talking photos, 2-5 MB apiece. Chroma, subsampling, DCT - sorry, never heard of them. I'm a home user, not a Photoshop guru. For the record, I use Paint Shop Pro on Windows and GIMP on Linux.
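
    One quick way to put numbers on the trade-off for your own photos, sketched with the third-party Pillow library (purely illustrative; the file name is a placeholder): re-save the same photo at quality 85 and at 100 and compare the resulting sizes on disk.

        from PIL import Image   # Pillow
        import os

        src = "photo.jpg"       # placeholder input file
        img = Image.open(src)

        for quality in (85, 100):
            out = "photo_q%d.jpg" % quality
            img.save(out, "JPEG", quality=quality)
            print("quality %3d: %7.0f kB" % (quality, os.path.getsize(out) / 1024))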

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation.

    Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binary files, each using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts, or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. libE.so uses a singleton object and exports a handler class for getting and using the singleton. Compression only happens in the two cases above. The user/loader executable can specify only the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behavior: they all have a class and declare an object of the handler, which gets the instance of the singleton in libE.so and uses it for further purposes. All these libraries are linked into the two executable binary files. If only one of the two executables runs, everything is fine. But if both executables run one after the other, the file of the first started executable gets compressed.

    Debug info:

    - The constructor and destructor of the singleton object are called twice (for each executable).
    - The singleton object is a static object and is never deleted.
    - The executable is not able to exit/return and gives: glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***
    - Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object.

    Thanks

    Read the article

  • Urgent: how to deny read access to an ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code's been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do.

    EDIT (and apology): Well, this is sort of embarrassing. Here's what happened: we had some Python scripts available to the web at /aux/file.py, which were, not surprisingly, at /var/www/http/aux. Separately, we were running an app server, and Apache proxies through to it at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files, including the Python files (which are in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web server would ask the app server for it and the app server would just supply the file.

    It was the latter URL that I happened to type in by accident this morning and lo, the source appeared. Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so I realized that it had nothing to do with the CGI (and that serving files that were also executable would be not only foolish but also impossible), and I was able to address the real problems.

    So -- sorry, everybody. Let the scorn-fest commence.

    Read the article

  • Change the logical name of a SQL Server Express 2005 database file?

    - by oob
    In Microsoft SQL Server Management Studio Express for SQL Server Express 2005, I needed to copy a database for testing and keep it on the same server as the old database. I did the following:

    1. Right-clicked on Databases
    2. Created a new database
    3. Detached the database I wanted to copy
    4. "Restored" my new database from the backup file of my old database. I did this by checking the 'Overwrite the existing database' box on the Options pane, and I changed the paths in the 'Restore As' options so that they pointed to my new .mdf and .ldf files.

    Everything is working like I want. The problem is, when I go to right-click - Properties - Files on my new database, the logical name of the .mdf file is the same as the logical name of the old .mdf file. They are actually different files - they just share the same logical name. I guess this isn't a short-term problem, but I can see it confusing somebody down the road. Is there any way to change the logical name of the .mdf file?

    UPDATE/EDIT - Apparently you can just change the logical name through the GUI by, get this, clicking on it and typing a new name. I could swear that was not possible when I posted this, but maybe it was and I somehow missed it! Either way - the solution below should still work, but doing it through the GUI is also an option.

    Read the article

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
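
    For reference, the registry hack that kind of article usually describes targets the ProgramFilesDir value (an assumption on my part - check the article itself before changing anything). A read-only peek at the current defaults, sketched in Python, might look like this:

        import winreg

        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Microsoft\Windows\CurrentVersion")
        # "ProgramFilesDir (x86)" only exists on 64-bit Windows
        for name in ("ProgramFilesDir", "ProgramFilesDir (x86)"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(name, "=", value)     # e.g. C:\Program Files
            except FileNotFoundError:
                pass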

    Read the article

  • FTP Scripting Capabilities

    - by lmg
    I am looking for an FTP client that will allow me to do the following:

    - Include a GUI for setting up a number of FTP connections
    - Support FTPS
    - Run unattended on Windows Server 2008
    - Retry failed transactions
    - Support email
    - Support custom scripts

    I need to pull files from a few different servers, and there are certain calculations that need to be done depending on which server the files come from. I've looked at SmartFTP and it looks like pretty much what I need, except I can't get it to run as a Windows Scheduled Task (I currently have some support threads open in their forum). I've also looked at a few other FTP clients (FileZilla, RoboFTP, and AutoFTP (you can find the Windows 7 BSOD in this one!)) that haven't had the capabilities I'm looking for.

    Right now, I'm looking at WS_FTP and its scripting capabilities. It appears I can create a script to run as a scheduled task, but I can't add the script to a file transfer task. Does anyone have suggestions on how I can do post-transfer processing on the files, or better yet how to integrate scripting into the file transfer task? I'm also open to other suggestions for FTP clients if you have them! If I can't find a suitable FTP client, custom scripting will just have to do the trick (see the sketch below).
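
    If it does come down to custom scripting, here is a minimal sketch of the pull-then-process idea using Python's standard ftplib (FTP_TLS speaks FTPS). Host names, credentials, paths and the per-server processing hook are all placeholders; retries, email alerts and scheduling (e.g. via Task Scheduler) would still need to be added around it.

        from ftplib import FTP_TLS
        import os

        # Placeholder server list; "process" names the per-server calculation to run
        SERVERS = [
            {"host": "ftp.example-a.com", "user": "u", "password": "p",
             "remote_dir": "/outgoing", "local_dir": r"C:\inbox\a", "process": "a"},
        ]

        def post_process(path, kind):
            # The server-specific calculations mentioned above would go here
            print("processed", path, "as", kind)

        def pull_and_process(cfg):
            ftps = FTP_TLS(cfg["host"])
            ftps.login(cfg["user"], cfg["password"])
            ftps.prot_p()                    # protect the data channel with TLS
            ftps.cwd(cfg["remote_dir"])
            for name in ftps.nlst():         # note: may also list subdirectories
                local = os.path.join(cfg["local_dir"], name)
                with open(local, "wb") as fh:
                    ftps.retrbinary("RETR " + name, fh.write)
                post_process(local, cfg["process"])
            ftps.quit()

        for cfg in SERVERS:
            pull_and_process(cfg)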

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"

        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too well versed in networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem.

    TortoiseGit version: 1.7.13.0. git version: 1.7.10.msysgit.1.

    Read the article

  • Windows 7 just deleted 4 days of work

    - by Mat
    Hey! I'm just about to freak out. I just finished a project and rebooted my computer. It didn't want to boot anymore, so I had to use the Windows 7 system repair option. It ran for a minute and then booted up. Now most of my source code from the last 4 days of work is gone!

    Background: sometimes (most often after installing new software) my notebook won't boot up anymore. It will just show the little Windows 7 flag, but not read from the hard disk anymore. If I hard-abort and reboot, it asks me whether to start Windows normally (which won't work) or to run "Windows startup repair". If I run it, it does some stuff for about two or three minutes and then I can boot Windows again. Usually after this, .exe files I added to the computer during the previous days are gone - but other files were so far not touched. But now, after this happened, a whole bunch of ".as" (ActionScript source) files from my project are gone!

    Does anyone know where they went and whether there's a way to recover them?

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150 TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or whether there even is such a distinction in ZFS).

    For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned, should the pool recover? Or not? And if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks!

    Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48 TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • Unwanted blank lines when checking out from SVN

    - by Alon_A
    I'm using CentOS Linux 5.8 as a web server and TortoiseSVN for synchronizing versions of our code. We write the code on Windows 7 Professional 64-bit with NetBeans and Notepad++. I'm checking the code files (.php) out on the Linux shell with this command:

        svn co svnFolder serverFolder --username **** --password ****

    The problem is, after checking out the files, when I open them directly from the server (for debugging) in Notepad++ (I'm doing View/Edit from FileZilla), I get extra blank lines. Code that looks like this on localhost (in Notepad++):

        private $producer;
        private $account;
        private $admin;
        private $producerEvents;
        private $accountProducers;
        private $adminAccounts;

    looks like this after checking it out on the server (again, in Notepad++):

        private $producer;

        private $account;

        private $admin;

        private $producerEvents;

        private $accountProducers;

        private $adminAccounts;

    If I upload the files by FTP, no blank lines are added. How can I solve it? Thanks.
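
    A common cause of doubled lines in this situation is line-ending conversion somewhere between the Windows editors, svn, and the FTP/SFTP view - that is an assumption, not a diagnosis. A quick way to check is to count the line endings each copy of a file actually contains; a small Python sketch (the path is a placeholder):

        # Count line endings to spot CRLF / stray-CR conversion problems
        path = "somefile.php"   # placeholder: run against both copies of the file
        data = open(path, "rb").read()

        crlf = data.count(b"\r\n")
        bare_cr = data.count(b"\r") - crlf   # CR not followed by LF
        bare_lf = data.count(b"\n") - crlf   # LF not preceded by CR
        print("CRLF:", crlf, " bare CR:", bare_cr, " bare LF:", bare_lf)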

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path. Then configuring a new system compatible with the old one is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is to look at the corresponding configuration files and try to guess what you changed to achieve the desired effect.

    Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and relies on human judgment.

    Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process will require a lot of effort and will be fragile.

    Are there some better methods or tools that I'm missing?
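
    As one small illustration of the scripted approach (not a full solution), here is a sketch that records the installed-package list next to the /etc history in the same git repository, so package changes become trackable too. It assumes a Debian-style system with dpkg and a repository already initialized in /etc; the paths are placeholders, and it would typically run from cron or at the end of an admin session.

        import subprocess

        REPO = "/etc"                            # assumes /etc is already a git repo
        PKG_LIST = "/etc/installed-packages.txt" # placeholder location for the list

        # Dump the package selection so it is versioned along with the config files
        pkgs = subprocess.run(["dpkg", "--get-selections"],
                              capture_output=True, text=True, check=True).stdout
        with open(PKG_LIST, "w") as fh:
            fh.write(pkgs)

        # Commit whatever changed since the last snapshot
        subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
        # git commit exits non-zero when there is nothing to commit, so no check=True
        subprocess.run(["git", "commit", "-m", "config snapshot"], cwd=REPO)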

    Read the article

  • Allow public access to a subfolder of a protected folder on Apache

    - by UnnamedMook
    I have password-protected the root folder of my website while I do maintenance, but I want to display a custom 401 error page to let people know the site is under construction. Unfortunately, my web host doesn't allow me write access to anything outside the root folder of my website, so this custom error page must be stored in the root folder or one of its subfolders. Instead of my custom error page I get the Apache default error page, and it also says "Additionally, a 401 Authorization Required error was encountered while trying to use an ErrorDocument to handle the request."

    I searched for ways to make a subfolder of a protected directory public, and all I could find was to use the "Satisfy any" directive, but this doesn't work for me. It doesn't work on a file-only basis either, as with the .htaccess file below.

        #Authorization Restriction
        AuthType Basic
        AuthName "Access to root"
        AuthUserFile *********************************
        Require user ***********
        Order Allow,Deny
        Satisfy any

        #Error Documents
        ErrorDocument 401 Error-401.html

        #Allow access to error documents
        <Files Error-*,html>
        Order Deny,Allow
        Allow from all
        Satisfy any
        </Files>

    I can only use .htaccess files; I don't have access to httpd.conf.

    Read the article

  • Huge discrepancy in Inkscape file size

    - by Keyran
    When using Inkscape to create many pictures with common elements across them, I tend to copy the first SVG file I have created as many times as I need pictures, and then edit the copies. If I reuse files across projects, a file can end up being copied and modified tens to hundreds of times. I have recently realized that the latest copies have a size between 29 and 60 MB, slowing my computer down significantly. My pictures are very simple, nothing that would normally go over 1 MB in size.

    As an experiment, I copied the entire content of one of the latest files into a new Inkscape file. I am certain that I copied the content of the file entirely (I have only one layer and I used the "Select All" option). The new file has a size of 102.2 KB. This would indicate that about 30 MB of data per file is irrelevant to me.

    What could be the cause of this size difference? Is there a way to reduce the size of a file without having to copy the content into a new file? I am using Inkscape 0.48.4 on Debian Unstable. Thanks for any input you might be able to provide!
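
    A rough way to see where the extra megabytes actually live, assuming the bloat sits in particular top-level elements (unused <defs>, editor metadata, and the like - an assumption, not a diagnosis): serialize each top-level element of the SVG and report its size. A small Python sketch, with the file name as a placeholder:

        import xml.etree.ElementTree as ET

        path = "drawing.svg"    # placeholder
        root = ET.parse(path).getroot()

        # Report how many bytes each top-level element contributes when serialized
        for child in root:
            size = len(ET.tostring(child))
            tag = child.tag.split("}")[-1]   # strip the XML namespace prefix
            print("%10d bytes  <%s>" % (size, tag))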

    Read the article
