Search Results

Search found 14231 results on 570 pages for 'folder redirection'.


  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 Mbit/s line. For this I did dd if=/local/path of=/remote/path/in/local/network/backup.img which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller so that the amount to transfer is less. So I did dd if=/local/path | gzip > /remote/path/in/local/network/backup.img.gz But now I get something like 1 MB/s network transfer speed, a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip also increase the transfer rate by a large factor instead of only reducing the byte length of the stream by a large factor? I'd expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but instead I get a win on both counts. Not that I'm not happy, just wondering. ;)
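
    A plausible factor worth testing: dd defaults to 512-byte blocks, so over a network mount every tiny block can become its own synchronous round trip, while the pipe hands the data to gzip through a fast local buffer. A rough comparison sketch (paths are placeholders):

        # same copy with a sane block size, no compression
        dd if=/local/path bs=1M of=/remote/path/in/local/network/backup.img
        # and the compressed variant with an explicit block size
        dd if=/local/path bs=1M | gzip -1 > /remote/path/in/local/network/backup.img.gz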

    Read the article

  • Hybrid Exchange Online setup with on-premises public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on-premises. The hybrid configuration for the most part is working; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I am accessing Outlook on public Wi-Fi I am able to bring up the cloud mailboxes, and on-premises public folders show up in Outlook. When I am accessing email via Outlook as a cloud user on the same LAN as the on-premises Exchange, the cloud user makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premises public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange it is using a different certificate vs. when Outlook is launched on a Wi-Fi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for the public folder connection. Our on-premises Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question then is: what do I need to do to make sure that the cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party cert vs. the self-signed certificate, which it looks like it may be using during the connection attempt)?
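
    A hedged place to look on the Exchange 2010 side: Outlook takes its expected certificate principal name from the Outlook Provider settings, so pinning EXPR (the Outlook Anywhere provider) to the wildcard certificate is one commonly suggested fix (contoso.com is a placeholder for the real domain):

        Get-OutlookProvider
        Set-OutlookProvider EXPR -CertPrincipalName "msstd:*.contoso.com"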

    Read the article

  • mod_rewrite changes case even if not matching RewriteCond?

    - by kirdie
    I have a really strange problem with my MediaWiki, which I want to have articles of the form mywiki.org/MyArticle. Now I got most of it to work using the following code, but it mysteriously cannot display the logo anymore. RewriteEngine On # don't rewrite valid requests to files and directories RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d # mywiki.org/MyArticle gets rewritten to mywiki.org/index.php/MyArticle RewriteRule ^/(.*)$ /index.php/$1 [L,QSA] Now when I type mywiki.org/img/logo.jpg into my browser, the address changes to http://wiki.geoknow.eu/Img/logo.jpg (capital I) and I get the empty article page, but the image is definitely there (in my document root under the img folder): /var/www/mywiki.org$ ls img logo.jpg So far so bad. But now it gets really crazy: when I add RewriteCond %{REQUEST_URI} !^/.*\.jpg my address still gets rewritten and my access log says - - [05/Dec/2012:16:30:21 +0100] "GET /Img/geoknow_logo.jpg HTTP/1.1" 404 509 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0" Where does that capital I in Img come from? The rule is not even executed, because at least one condition is definitely not met now, and I also don't have any to-lowercase transformation defined anywhere. What is happening here and how can I repair it? P.S.: Now all of a sudden the problem went away (the image is displayed as it should be and there is no capital replacement anymore). What can cause this, and why does it spontaneously appear and disappear?
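
    The case change looks less like mod_rewrite and more like mod_speling, which silently "corrects" the case of URLs when enabled and keeps doing so regardless of any RewriteCond; browser caching of its redirects could also explain the on-and-off behaviour. A hedged check, assuming a Debian/Ubuntu-style Apache:

        apache2ctl -M | grep -i speling   # is mod_speling loaded?
        # if so, in the vhost or .htaccess:
        CheckSpelling off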

    Read the article

  • In need of assistance recovering a lost partition

    - by Tek
    The program that has worked for the most part is Active@ Partition Recovery. I'm so close, but yet so far, from recovering my data. Okay, so here's what happens. In the following screenshot (I blanked out a folder and filename with profanity in case some of you guys are at work :P), it detects the partition I accidentally deleted, with ALL, 100%, of my data listed. Of course, I didn't write ANY data to that drive after I did this. But when I click "recover" and it finishes the recovery process, in Windows I click on the partition that was just recovered... it's EMPTY. The program seems to be able to see my lost files, but when I recover the partition Windows doesn't seem to think the same =( Things I tried: I tried running chkdsk /r /f after I recovered the partition; apparently it couldn't find any errors. I tried using other software like TestDisk to recover the partition, but they (all) act similarly to Windows in that they detect the (missing) partition, but when I browse it there are no files. The partition is there, along with all the file and data information. The sector information is also in the screenshot; is there any way I can use this to my advantage in recovering my data? Other information: Dual boot: Win8 / Ubuntu 12.10 x64, 1 TB internal desktop drive, GPT layout, NTFS-formatted drive, 64K allocation size.
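
    Since the machine dual-boots Ubuntu, a hedged cross-check from the Linux side may help confirm whether the recovered GPT entry and the NTFS boot sector agree (/dev/sdX is a placeholder for the 1 TB disk):

        sudo gdisk -l /dev/sdX    # compare primary and backup GPT tables
        sudo testdisk /dev/sdX    # Advanced -> Boot: check the NTFS boot sector against its backup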

    Read the article

  • Relative path incorrect in the view layer when hosting a Rails 3 app in a subdirectory using Passenger and Apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory: var/www/html/ /test_app (symbolic link to the public folder of test_app) and set Apache up like so: LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12 PassengerRuby /usr/local/bin/ruby <VirtualHost *:80> ServerName test.com DocumentRoot /var/www/html <Directory /var/www/html> Options Indexes FollowSymLinks -MultiViews </Directory> RailsBaseURI /test_app </VirtualHost> The links in the app itself work just fine; all the links acknowledge the test_app/ directory and work. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png; it goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand this is an Apache setting problem rather than something to be done in Rails; if it's possible I would prefer to have it contained in the Apache conf file rather than having to alter the code.
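
    One commonly suggested approach (a sketch, not verified against this exact setup): tell the app itself that it lives under the sub-URI, since RailsBaseURI does not rewrite asset paths that bypass the Rails helpers, or alias the uploaded-files directory back in at the Apache level:

        # Rails 3 side (e.g. in config/application.rb or an initializer):
        #   config.action_controller.relative_url_root = "/test_app"

        # Apache-only alternative: map the bare /system path onto the app's public/system
        Alias /system /var/www/html/test_app/system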

    Read the article

  • VPS with multiple domains, can EXIM send mail from a different domain?

    - by Mike L.
    I'm building a site for a client on a VPS running CentOS 5.5 with cPanel/WHM 11.28.60. The original domain is XXXXXinvestmenttrust.com. He has about a dozen domains on this server. The site I am building will send confirmation emails as well as allow users to anonymize their email addresses (like craigslist). I set up email piping to forward emails, but they are all being trapped in the spam folder. A close look at the headers shows the emails appear to be coming from [email protected] rather than the actual domain. The IP has a rating of Neutral on www.senderbase.com. I believe it is the conflicting information in the header (the fields set by me specify the actual domain, whereas the fields put in place by EXIM specify the name of the server). Somewhere I read that SPF & MX entries can fix this, but I have been unable to figure out how. Also, all of the domains use the same IP, and the other websites do not send emails, so I could possibly make the domain in question the primary (where all emails are sent from that domain by default). Is that possible?
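
    As a hedged sketch: an SPF record is a DNS TXT entry on the sending domain that tells receivers the VPS's IP is allowed to send for it; it will not change the EXIM-generated headers, but it can help the spam-folder problem (example.com and 203.0.113.10 are placeholders for the real domain and IP):

        example.com.  IN TXT "v=spf1 a mx ip4:203.0.113.10 ~all"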

    Read the article

  • How to find which files / directories are not yet copied?

    - by user8676
    Hi all, I found the following 'nice' situation: An archive of a few disks (actually three disks) which has a bunch of photos, (more or less) organized. Well, this is good. A big disk shared on a network which has a bunch of photos with a different folder structure (even if it is somewhat recognizable for a human being) than the archive described above; some of the files on this big network share are the same as the files from the archive. Well, this is bad. What we need is to move the different (new) files from the network share into the archive (perhaps we'll use a new disk added to the archive for this). The program that we need is different from a regular file duplicate finder, because a duplicate finder finds the duplicates from all sources by comparing each file with the others. We want to find the differences between the two sources. It is fine for us to have a report generated as a text file which we'll then use to do our move. A Windows solution would be preferred. Any ideas? TIA
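
    Since the two trees have different folder structures, a name/path comparison will miss matches; a hedged content-based sketch in PowerShell (version 4 or later for Get-FileHash; paths are placeholders) that writes the wanted report:

        $archive = Get-ChildItem -File -Recurse 'D:\archive'      | Get-FileHash -Algorithm MD5
        $share   = Get-ChildItem -File -Recurse '\\server\photos' | Get-FileHash -Algorithm MD5
        # files on the share whose content does not exist anywhere in the archive
        $share | Where-Object { $_.Hash -notin $archive.Hash } |
            Select-Object -ExpandProperty Path | Out-File new-files.txt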

    Read the article

  • Windows XP cannot access admin share

    - by barlop
    I have 3 systems, A, B and Compx, all on XP, but computers A and B have an issue with Compx. Compx has network shares I can access. I can do \\compx and see some of them. But I cannot access the admin share c$. \\compx\c$ gives a login prompt, and I can't get any user/pass to work. I looked at permissions but don't see an issue. Nevertheless, I will describe what I see in the permissions. In the security tab of C, I have Administrators, CREATOR OWNER, Everyone, bob, SYSTEM, Users (6 entries). "CREATOR OWNER" has nothing ticked, and I can't seem to change that. If I tick so they all get ticked and click Apply, after 2.5 minutes it has completed its operation and they all untick. Though this isn't the root of the problem, since I get the same in the share I can access. In Advanced, I see those 6 entries, all with "Full control", and all are "This folder, subfolders and files", except CREATOR OWNER, which is "Subfolders and files only". I look at the properties of the share I can access; it looks the same, except in Security > Advanced, double-clicking any of them shows the boxes all ticked but greyed out. That's not the problem though, since I can access that share. So, I don't know what the problem is.
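
    One hedged XP-specific check: with Simple File Sharing enabled, XP maps all network logons to Guest, which breaks administrative shares like c$ no matter what credentials are supplied. ForceGuest = 0 restores the classic model:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v forceguest
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v forceguest /t REG_DWORD /d 0 /f
        net use \\compx\c$ /user:compx\Administrator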

    Read the article

  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've been unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows Installer will just hang or give a weird error. I noticed there will be many duplicate isbew64.exe processes (like 25) that launch and then just sit there, or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install. One piece of software specifically is the Logitech Harmony software. It gives me an *is_string_not_defined* error, saying c:\program files (x86)\:\ the filename, directory name, or volume label syntax is incorrect. The other thing I was trying to install was Battlefield: Bad Company 2, and that just hangs as well, and then leaves all the Windows Installer processes running in the background after I quit the install process. Very odd. I've checked thoroughly and googled these issues; it doesn't appear to be any sort of malware issue. I feel like it's related to some kind of corrupted installer application. I've rebooted, and deleted the InstallShield folder in Program Files/Common Files as some places online suggested, but to no avail. I have no idea what to do; any ideas?
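
    A hedged first step, since both installer engines misbehave: re-register the Windows Installer service and check for damaged system files before digging deeper (all from an elevated prompt):

        msiexec /unregister
        msiexec /regserver
        sfc /scannow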

    Read the article

  • Viewing local websites on my iOS device over Wi-Fi

    - by John
    Trying to view some local html/css/js files in a mobile browser on my iOS device. Thought maybe file sharing would be an option, and it is, but I'm not completely satisfied with it. Any time I try to do the following, an error occurs. Web sharing is on and available at http://192.168.1.101/~user but I have to manually copy the files in. If I try to symlink a folder in, so that the address could be viewed at ~user/some_dir, by issuing $ ln -s /Users/user/dev/some_dir ~/Sites/ then I get a 403 Forbidden error. I've tried to remedy this by modifying a user.conf file in /private/etc/apache2/ and using the following syntax: <Directory "/Users/user/Sites/"> Options Indexes MultiViews SymLinks AllowOverride None Order allow,deny Allow from all </Directory> but nope, still doesn't work; I get a 403 error. If I try to symlink each individual file in instead of using a directory as a sub-directory, same error. Any help would be greatly appreciated! I'd just like to symlink directories into the ~/Sites one and browse them on my iOS device over Wi-Fi. I'm on OS X 10.7 Lion trying to connect with iOS 5.
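
    Two hedged things to check: "SymLinks" is not a valid Options keyword (the directive is FollowSymLinks), and the Apache user also needs execute permission on every directory along the link target's path:

        <Directory "/Users/user/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
        # and, from a terminal (paths as in the question):
        # chmod o+x /Users/user /Users/user/dev /Users/user/dev/some_dir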

    Read the article

  • PHP potential issues with compiling 5.3.8 extensions against RHEL 6 / CentOS 6 PHP 5.3.3 package

    - by user101203
    I'm working on getting a Red Hat 6 LAMP server going, and while the PHP that comes with it has many extensions we use, it doesn't have all of them. To solve this, I was thinking about either: (1) compiling the PHP extensions which come in the ext folder of the downloadable source code of PHP 5.3.3 from php.net; (2) the same as #1, but using the extensions from the latest PHP version (currently 5.3.8); or (3) doing #1 but manually deciding which updates to backport from the latest version of the PHP extensions into the older version, then compiling the backported result. A drawback to #1 is that security and bug fixes come out which we wouldn't be able to take advantage of. A drawback to #3 is that it might be a lot of work. Does anyone know what the drawbacks to #2 are? I don't want to go down that route if it might result in some unexpected negative outcomes. Also, are there any other drawbacks to the other options, or a better way to go altogether? I want to use the PHP 5.3.3 which comes with the Linux distro because I don't want us to get to a place again where we are forced to upgrade to a new version of PHP to stay on top of security updates, like from PHP 5.2.x to 5.3.x, with backwards-incompatible changes (this is the situation we're in now, with PHP 5.2.x no longer being supported).
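
    For options 1 and 2, the usual per-extension build goes through phpize rather than rebuilding PHP; a hedged sketch against the distro PHP (needs the php-devel package; "someext" is a placeholder):

        cd php-5.3.3/ext/someext
        phpize                 # generates a configure script matching the installed PHP's API
        ./configure
        make && sudo make install
        echo "extension=someext.so" | sudo tee /etc/php.d/someext.ini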

    Read the article

  • Apache VirtualHost entry with Windows hostname

    - by gshauger
    I have a Windows Domain Controller and we use it for DNS on our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc. to the hostname Endymion it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in the folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc. to the hostname projectx it takes me to the same Ubuntu box as the hostname Endymion - this is what I would expect. I now have two hostnames for the same box. I then create a VirtualHost entry in httpd.conf that looks like the following: <VirtualHost *:80> DocumentRoot /var/www/projectx ServerName projectx ServerAlias projectx </VirtualHost> Sure enough, when I go to a browser and type in http://projectx/ it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/ and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)
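
    What the behaviour suggests: with name-based hosting, the first <VirtualHost> also acts as the catch-all for any name without its own block, so endymion falls into the projectx vhost. A hedged sketch for Apache 2.2:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName endymion
            DocumentRoot /var/www
        </VirtualHost>
        <VirtualHost *:80>
            ServerName projectx
            DocumentRoot /var/www/projectx
        </VirtualHost>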

    Read the article

  • Getting the EFS private key out of a system image

    - by thaimin
    I recently had to re-install Windows 7 and I lost my exported private key for EFS. I do, however, have the entirety of my old user directory, and I figure the key must be in there SOMEWHERE. The only question is how to get it out. I did find the PUBLIC keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates If I import them using certmgr.msc it says I do have the private key in the information, but if I try to export them it says I do not have the private key. Also, decryption of files doesn't work. There is also a "Keys" folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys. After importing the certificates I copied those over into my new installation, but it had no effect. I am starting to believe they are either in AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\ but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the old user registry hive and most of the old system files (including the old system registry hives). I do keep seeing references to a "Key Recovery Agent" but have not found anything about using one, just that it can be used. Thanks!
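
    A hedged long shot, since the files under Crypto\RSA are the DPAPI-protected private keys: after importing the .cer, certutil can sometimes re-link a certificate to a key already in the user's store; note that DPAPI protection is tied to the old SID and password, so a changed SID may well block this ("SerialNumber" is a placeholder from the -store output):

        certutil -user -store My
        certutil -user -repairstore My SerialNumber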

    Read the article

  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off of network drives; but 2 of our 3 Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives. Windows will become stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one the bug hits. To remedy this we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" the time it will take to copy. These aren't big files, mostly Excel sheets less than 500 KB - these operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.
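
    A hedged set of workarounds often suggested for exactly this Windows 7 + new-switch symptom: turn off the TCP offload features that the new switch may be negotiating differently (each is reversible by setting it back to normal or enabled):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled
        netsh interface tcp set global chimney=disabled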

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned with finding files whose names match a search term, not ALL files containing the search term. Is there a way to search file names only? When I use the search it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the file itself. Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the search box it returns matches as if I were using a wildcard, file1*.txt. I miss XP!!!!
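
    Windows 7's search box does understand Advanced Query Syntax, which covers both asks; hedged examples typed straight into the search field:

        name:myjavascript.js      (match file names only, not contents)
        filename:="file1.txt"     (exact match, no implicit wildcards)
        ext:.js                   (restrict results to one file type)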

    Read the article

  • How to start MSSQL Server with a corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine MSSQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the Templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single-user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect, even in single-user mode, to run them. Specific error messages: Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927) The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814. We need the server up and running again ASAP.
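
    A hedged sketch of one recovery path: trace flag 3608 makes SQL Server recover only master at startup (so the broken model and missing tempdb don't block it), after which the damaged model can be detached and replaced from the Templates folder. Paths are placeholders for your instance's Binn\Templates and DATA folders:

        net stop MSSQLSERVER
        net start MSSQLSERVER /f /T3608
        sqlcmd -E -Q "EXEC sp_detach_db 'model'"
        copy "...\MSSQL\Binn\Templates\model.mdf" "...\MSSQL\DATA\"
        copy "...\MSSQL\Binn\Templates\modellog.ldf" "...\MSSQL\DATA\"
        sqlcmd -E -Q "CREATE DATABASE model ON (FILENAME='...\MSSQL\DATA\model.mdf'),(FILENAME='...\MSSQL\DATA\modellog.ldf') FOR ATTACH"
        net stop MSSQLSERVER
        net start MSSQLSERVER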

    Read the article

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch - I don't have the Win2k Server Std install media any more, as it's been lost. I purchased "EaseUS Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and PERC RAID card are good because I installed Ubuntu on the logical drive (4 x 72 GB disks, RAID 5) with no problems. I've booted from the EaseUS Todo Backup CD (which is WinPE-based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive). The problem is that when I boot the server, I can get the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity. Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add new hardware' wizard, forcing with the 'Have disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time) and the boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1) Any ideas would be appreciated.
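
    A hedged sketch of the offline repair usually needed after a storage-controller swap: besides copying mraid35x.sys into system32\drivers, the recovered SYSTEM hive needs a boot-start service entry and a CriticalDeviceDatabase mapping for the PERC's PCI ID (the ven/dev ID below is a placeholder; read the real one from the working XP machine's registry, and load the recovered hive under a temporary name such as recovered_SYSTEM):

        [HKEY_LOCAL_MACHINE\recovered_SYSTEM\ControlSet001\Services\mraid35x]
        "Type"=dword:00000001
        "Start"=dword:00000000
        "ErrorControl"=dword:00000001
        "ImagePath"="system32\\drivers\\mraid35x.sys"

        [HKEY_LOCAL_MACHINE\recovered_SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1028&dev_0013]
        "Service"="mraid35x"
        "ClassGUID"="{4D36E97B-E325-11CE-BFC1-08002BE10318}"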

    Read the article

  • Apache: Setting up local test server with subdomains

    - by RC
    Hi everyone, I have XAMPP running on my desktop machine, and I do all my work on it with no issue. http://localhost ---> points to public_html http://site1.localhost ---> points to site 1 http://site2.localhost ---> points to site 2 http://site3.localhost ---> points to site 3 Entering the above URLs in my web browser on the machine running Apache works great, and I can work on multiple sites within distinct subdomains. But what I want to do now is to transfer Apache and all the files to another Windows 7 machine within the LAN, but still be able to view the subdomains from my main development machine. With a vanilla XAMPP installation on the new hosting machine, entering the IP address of that machine (e.g. 192.168.1.10) on my development computer sends me to the main public_html folder. But how do I set up subdomains such that I can access them from another machine? For example, http://site1.devmachine Thanks for any help.
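
    A hedged sketch: point the names at the hosting machine in the development machine's hosts file, then declare name-based vhosts on the host (Apache 2.2 style; names and paths are placeholders):

        # C:\Windows\System32\drivers\etc\hosts on the development machine
        192.168.1.10  devmachine site1.devmachine site2.devmachine

        # httpd-vhosts.conf on the hosting machine
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName site1.devmachine
            DocumentRoot "C:/xampp/htdocs/site1"
        </VirtualHost>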

    Read the article

  • MySQL start fails with Operating System error 13

    - by curious
    I have XAMPP on my Ubuntu Lucid system and everything worked fine. But there seems to be some problem now and MySQL won't start. I had tried to recover a few Drupal databases and hence copied the raw files into the /opt/lampp/var/mysql folder alongside all the other database folders. I guess that could have caused the problem. I am pasting the last few lines of the error log. Someone please help me out. 100814 15:17:47 mysqld_safe Starting mysqld daemon with databases from /opt/lampp/var/mysql 100814 15:17:47 [Note] Plugin 'FEDERATED' is disabled. 100814 15:17:47 [ERROR] Can't open shared library 'libpbxt.so' (errno: 0 API version for STORAGE ENGINE plugin is too different) 100814 15:17:47 [Warning] Couldn't load plugin named 'PBXT' with soname 'libpbxt.so'. 100814 15:17:48 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File name /opt/lampp/var/mysql/ibdata1 InnoDB: File operation call: 'open'. InnoDB: Cannot continue operation.
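
    Error 13 is EACCES (permission denied), which fits database files copied in as root; a hedged sketch handing them back to the daemon's user:

        ps aux | grep mysqld      # confirm which user mysqld runs as (often 'mysql' or 'nobody')
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp stopmysql && sudo /opt/lampp/lampp startmysql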

    Read the article

  • Removing expired self-signed certificate in IE9 (created with IIS7.5)

    - by Itison
    Over 1 year ago, I created a self-signed certificate in IIS 7.5 and exported it. I then installed it for IE9 (it may have been IE8 at the time), which worked fine until a year later when the certificate expired. I have put this off, but today I created a new self-signed certificate in IIS, exported it, and attempted to install it in IE9. The problem is that, for whatever reason, IE cannot seem to forget about the old, expired certificate. Here's what I tried initially: Accessed my ASP.NET application and saw the certificate error. Clicked "View certificates". Clicked "Install Certificate" and then Next/Next/Finish. At this point, it says the import is successful, but it still only shows the expired certificate. I've tried simply double-clicking on the exported certificate on my desktop. Initially I chose to automatically select the certificate store, but then I tried it again and manually selected "Trusted Root Certification Authorities". I've also tried dragging/dropping the certificate over an IE window and clicking "Open". The process is then exactly the same as if I had double-clicked on the certificate, but I had hoped that this would somehow specifically tell IE to use this certificate. I opened MMC and, with the Certificates snap-in, confirmed that the new certificate was added under "Trusted Root Certification Authorities". It was also under my "Personal" certificates (I guess this is where it goes by default). Nothing worked, so I went through every folder in MMC and deleted the expired certificate. I also deleted the expired certificate in IIS. Nothing has worked. Any ideas? I see no clear resolution and I can't seem to find any posts related to this issue.
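
    A hedged cleanup sketch: certutil can list and delete by common name from both the user and machine Root stores, and IE keeps a cached copy until the SSL state is cleared (Internet Options > Content > Clear SSL state). "Old Cert CN" is a placeholder for the expired certificate's common name:

        certutil -user -store Root | findstr /i "Old Cert CN"
        certutil -user -delstore Root "Old Cert CN"
        certutil -delstore Root "Old Cert CN"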

    Read the article

  • Retain last used path for saving files in Windows 7

    - by Mark Miller
    I am using Microsoft Office 2010 and Windows 7 on a Dell PC. I am opening a bunch of MS Word files one at a time, copying data tables therein, pasting the data into Excel and saving the Excel files as comma-delimited text files. I am creating a separate Excel file for each Word file. The path to the folder containing the saved comma-delimited files is quite long, something like this: c:\users\me\aa\bb\cc\dd\ee\ Every time I open Excel and save a new comma-delimited file I have to re-navigate the entire path (c:\users\me\aa\bb\cc\dd\ee). In the past Windows seemed to remember the last used path, saving a lot of tedious keystrokes. In fact, I think Windows did this for me as recently as last week, albeit on a different computer. Can I apply a setting in Windows somewhere asking it to offer the last used path as a default when saving files, so I do not have to re-navigate the entire directory structure to save each new comma-delimited file? If so, where is the option for specifying that setting? Thank you for any help.
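
    One hedged, registry-level equivalent of File > Options > Save > "Default file location" in Excel 2010, which at least removes the re-navigation even if it doesn't restore last-used-path behaviour (the path is the one from the question):

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Excel\Options]
        "DefaultPath"="c:\\users\\me\\aa\\bb\\cc\\dd\\ee"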

    Read the article

  • Modifying value of "Rating" column within Explorer for arbitrary file types

    - by Fake Name
    Basically, I have a large body of assorted media (text, images, flash files, archives, folders, etc...) and I'm attempting to organize it. Windows Explorer has a Rating column, but there seems to be no way to modify the rating of the files short of opening them in their type-specific software (e.g. Media Player, or Photo Viewer). However, this does not work when the file is of an unsupported type (.rar, .swf ...), or a directory. I'd be more than willing to consider a file-manager replacement (I've already looked at quite a few: Directory Opus, Total Commander, etc...), or even a solution that stores the rating metadata in a hidden file in each folder, or in a separate database. The one really critical requirement is the ability to sort by rating while being filetype-agnostic. Basically, is there any way to categorize a large collection of assorted files by rating that will work with any file type, including directories? Ideally, there would be an easy way to add arbitrary columns to Windows Explorer and edit them directly. However, there seems to be no way to do this. The Rating column is the next best thing.
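
    For reading (not setting) the column, Explorer's values come from the shell property system, so a hedged PowerShell sketch can at least dump ratings for any file type; writing a rating still requires a property handler for that type ('C:\media' is a placeholder):

        $shell  = New-Object -ComObject Shell.Application
        $folder = $shell.Namespace('C:\media')
        # locate the column actually named 'Rating' (its index varies by Windows version)
        $idx = 0..400 | Where-Object { $folder.GetDetailsOf($null, $_) -eq 'Rating' } |
               Select-Object -First 1
        foreach ($item in $folder.Items()) {
            '{0,-12} {1}' -f $folder.GetDetailsOf($item, $idx), $item.Name
        }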

    Read the article

  • Nginx and WordPress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root: domain.com/site1 - a WordPress network single site, which renders as expected. domain.com/site2 - ditto etc. Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally. domain.com/hello.html - rendered normally domain.com/hello.php - rendered normally, including php processing domain.com/static/hello.php - rendered normally (as long as "static" isn't a WP single site name) What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static, put static sites in there domain.com/static/site3 domain.com/static/site4 and have Nginx check each request that comes in at the web root. A request comes in for: domain.com/site3 and before handing off to WordPress, Nginx checks to see if it exists in the /static folder. It checks: domain.com/static/site3 - static content exists there - and if so, serves that content while maintaining the root URI. It serves: domain.com/site3 (with content from domain.com/static/site3). If not, it lets WordPress check if /site3 is a WordPress single network site, as it does now, and the process continues normally. In nginx.conf, in the server section, I start with this try_files rule: location / { try_files $uri $uri/ /index.php?q=$uri&$args; } I then include a bunch of WordPress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) and managed to bypass WordPress if the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
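
    A hedged variant of the existing rule that checks /static first and then falls through to WordPress; because each try_files candidate is a fresh filesystem lookup rather than a rewrite of $uri, it should avoid the /static/static looping:

        location / {
            try_files /static$uri /static$uri/ $uri $uri/ /index.php?q=$uri&$args;
        }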

    Read the article

  • Does Image Block (Firefox addon) save internet bandwidth usage?

    - by dkjain
    Does Image Block save internet bandwidth usage? I have a data-capped plan from my ISP (5 GB at 2 Mbps and thereafter 256 kbps, per month). I doubt whether this addon or other similar addons actually save bandwidth. Here is my point of view; please correct it if it is wrong. When a request is sent to the server, the server sends out whatever page it's requested to serve with all its text and images etc. So essentially my ISP has made his pipe available for the data to reach me, and thus he would count those bytes under my data plan. When the data arrives it's all first stored in my browser cache (folder) area, which means all the data has actually been received by my computer using my ISP's pipe. The browser then fetches the data from the cache and displays it. By hitting the stop button or blocking images via the addon I am just choosing not to display the data, which would remain in the cache or eventually be discarded if still on the network pipe after a timeout limit. The point is that the data requests have been completed by the ISP, and so the data would be metered; thus using addons such as Image Block, or hitting the stop button while a page is loading, does not in any way save internet bandwidth. Your comments please. Regards, dk.

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which had a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all these is 177 pages of 8 images each, with links to zip files of JPGs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. archive_bookname/index.1.htm - archive_bookname/index.177.htm Each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so doing it with a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
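
    wget can do this, with session cookies captured first; a hedged sketch (the login URL, form field names, and host are placeholders that need to match the real login form):

        # 1. log in once, saving the session cookie
        wget --save-cookies cookies.txt --keep-session-cookies \
             --post-data 'user=USER&pass=PASS' http://example.com/login
        # 2. walk all 177 index pages, following links one level deep
        #    and keeping only the zip files
        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt -nd -r -l 1 -A '*.jpg.zip' \
                 "http://example.com/archive_bookname/index.$i.htm"
        done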

    Read the article
