Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.


  • How come Win+R prompt can open Python when it's not in my path?

    - by houbysoft
    When I use the Run prompt in Windows XP Professional (Win+R) and type python.exe or python, it works and greets me with the Python prompt. However, when I start a cmd window and type python.exe or python, it doesn't find it. This is what I expect, as the Python directory (for me, I:\Python31\) is not in my PATH. How come, then, that typing python.exe in the Win+R prompt works? Edit: here is a partial output of SET; I removed most irrelevant entries. I'm not sure why it would be useful, apart from the PATH variable, which as I already said doesn't include the Python directory. If you need a particular variable other than these, please ask.

        CLIENTNAME=Console
        CommonProgramFiles=I:\Program Files\Common Files
        ComSpec=I:\WINDOWS\system32\cmd.exe
        FP_NO_HOST_CHECK=NO
        OS=Windows_NT
        Path=I:\WINDOWS\system32;I:\WINDOWS;I:\WINDOWS\system32\WBEM;I:\WINDOWS\system32\WindowsPowerShell\v1.0;I:\Qt\2010.05\mingw\bin;I:\Program Files\CMake 2.8\bin
        PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PSC1
        ProgramFiles=I:\Program Files
        PROMPT=$P$G
        SESSIONNAME=Console
        SystemDrive=I:
        SystemRoot=I:\WINDOWS
        VBOX_INSTALL_PATH=I:\Program Files\Oracle\VirtualBox\
        windir=I:\WINDOWS
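
    A plausible explanation, worth verifying rather than taking as given: the Run dialog resolves program names through ShellExecute, which consults the App Paths registry key in addition to PATH, while cmd.exe searches only the current directory and PATH. If the Python installer registered itself there, that would produce exactly this behaviour. A quick check from a cmd window:

        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\python.exe"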

    Read the article

  • Gzip compress offline?

    - by shoosh
    I've configured my site to serve compressed content by putting this line in .htaccess:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/javascript application/json

    This works perfectly for almost all files, except a few large JSON files that are above 200 KB; for some reason they are not being compressed. I can see that they aren't using the Net tab in Firebug and the Network section in Chrome. So as a workaround, I thought I could compress these files offline and have Apache serve them pre-compressed. What tool should I use? Is the Linux gzip the one? Any special flags or options I should use? And what should I put in .htaccess so that the server knows to serve these files with Content-Encoding: gzip?
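
    One common recipe for this (a sketch, not from the original post; it assumes mod_rewrite and mod_headers are enabled): compress each file with plain gzip, keep the .gz copy next to the original, and let Apache hand the compressed copy to clients that accept it:

        gzip -9 -c large.json > large.json.gz

        # .htaccess: serve foo.json.gz in place of foo.json when the client accepts gzip
        RewriteEngine On
        RewriteCond %{HTTP:Accept-Encoding} gzip
        RewriteCond %{REQUEST_FILENAME}.gz -f
        RewriteRule ^(.*)$ $1.gz [L]
        <FilesMatch "\.json\.gz$">
            ForceType application/json
            Header set Content-Encoding gzip
        </FilesMatch>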

    Read the article

  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync, and a Rebuild is necessary to make the build succeed. In practice, you rarely need to Clean.

    Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right-click the desired project name in the Solution Explorer tab and select Set as StartUp project; the project name then appears in bold.

    Compile just compiles the source file currently being edited. It is useful for quickly checking for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile.

    All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to do those steps explicitly every time. This allows for quick "trial and error" debugging.

    Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and the VB.NET ones (I am personally a big fan of using keyboard shortcuts :) ):

        Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f
        Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en
        VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en
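
    The same Build/Rebuild/Clean distinction exists on the command line; a minimal sketch using MSBuild (the solution name is hypothetical):

        REM incremental: compile and link only what changed since the last build
        msbuild MySolution.sln /t:Build
        REM clean first, then do a full compile of every project
        msbuild MySolution.sln /t:Rebuild
        REM remove build outputs only
        msbuild MySolution.sln /t:Clean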

    Read the article

  • Optimal Compression for Speech

    - by ashes999
    I'm designing a game that depends heavily on audio; I will have some 300+ speech files (most of them just a word or two long), which can very quickly escalate the size of my final game. What's the optimal way to encode/compress speech files to keep the size minimal without getting audio artifacts? Please address both per-file compression/encoding and zipping/compressing the set of all speech files together in your answer, because I'm not sure which factor (or combination of both) will give me the best results. Edit: I need this to run in Silverlight and Android, so I'm presumably stuck with MP3 as my only option (other than uncompressed WAV files).
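
    For the per-file side, speech is typically downmixed to mono at a reduced sample rate and a low bitrate before MP3 encoding; a sketch with ffmpeg (the flags are standard, but the exact rate and bitrate values are assumptions to tune by ear):

        ffmpeg -i word.wav -ac 1 -ar 22050 -b:a 48k word.mp3

    Since MP3 frames are already entropy-coded, zipping the encoded files afterwards usually saves very little; an archive mostly helps by bundling many small files into one download.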

    Read the article

  • Manually updating HTML5 local storage?

    - by hustlerinc
    I'm just starting out in HTML5 game development (and game dev in general), and watching all the videos and tutorials available, something has crossed my mind. Everyone keeps saying I should set the cookies (or cached files) to expire after a certain amount of time, so that when that time is reached the browser automatically downloads all assets again, even if they are the same assets. Wouldn't it be possible to define the version of the game manually? For example, the user has downloaded all the files for version 1.01 of the game; when updating, I change a simple variable to 1.02. When the user logs in, it would compare his version to the current one, and only if they are not equal download the files. This could even be improved to download only specific files, depending on what needs to be updated. Would this be possible, or am I just dreaming? What are the possible downsides of this approach?
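
    This kind of manual versioning is easy to sketch in browser JavaScript; everything here (the GAME_VERSION constant, the reloadAssets() loader) is a hypothetical illustration rather than an established pattern from the question:

        // version shipped with the current page load
        var GAME_VERSION = '1.02';

        // compare against the version recorded on the last visit
        if (localStorage.getItem('gameVersion') !== GAME_VERSION) {
            reloadAssets();  // hypothetical function that re-downloads the assets
            localStorage.setItem('gameVersion', GAME_VERSION);
        }

    The main downsides are the ones the question hints at: the variable has to be bumped with every release, and getting the "only changed files" refinement requires a per-file manifest (file name mapped to a version or hash) rather than a single version number.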

    Read the article

  • Notepad++ shortcuts not getting copied to the second computer where I want to replicate my settings

    - by Dragos Toader
    In Notepad++, there's a way to assign your custom shortcuts by going to Run > Modify Shortcut/Delete Command..., which brings up the Shortcut Mapper.

    1. I set up my custom shortcuts on Computer 1.
    2. I installed Notepad++ with the same install settings and plugins on Computer 2.
    3. I created a zip archive of my Notepad++ folder in Program Files on Computer 1.
    4. I overwrote the Notepad++ folder in Program Files on Computer 2 with this archive.

    My custom shortcuts did not come across. I thought that the shortcuts were saved in C:\Program Files\Notepad++\shortcuts.xml, but I compared that file from Computer 1 with the same file on Computer 2 and the two files are identical. Why then are the shortcuts not coming across to Computer 2? Computer 1 is Windows XP; Computer 2 is Windows Server 2008 R2.
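
    One thing worth checking (an assumption, not something the question confirms): unless Notepad++ is installed in "local configuration" mode, it reads its live settings from the user profile rather than from Program Files, so a per-user shortcuts.xml would silently shadow the copy that was carried over. From a cmd window on each computer:

        dir "%APPDATA%\Notepad++\shortcuts.xml"

    If that file exists, it is the one the Shortcut Mapper actually writes to, and it is the one to copy to Computer 2.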

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default SQL Server places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by the issuance of sp_helpfile; hence the restored files for the database CHTL_L2_DB are in the same directory.

    Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows:

    1. Record the location of the database files and the transaction log files
    2. Note the future destination of the transaction log file
    3. Get exclusive access to the database
    4. Detach from the database
    5. Move the log file to the new location
    6. Attach to the database
    7. Verify the new location of the transaction log

    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:

        use chtl_l2_db
        go

        sp_helpfile
        go

    Note the future destination of the transaction log file. The future destination of the transaction log file will be K:\MSSQLLog.

    Get exclusive access to the database. To get exclusive access to the database, alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a pane connected to the database when it is placed into single_user mode, you will be presented with a reconnection dialog box.

        alter database chtl_l2_db
        set single_user with rollback immediate
        go

    Detach from the database. Now detach from the database so that we can use Windows Explorer to move the transaction log file:

        use master
        go

        sp_detach_db 'chtl_l2_db'
        go

    After copying the transaction log file, re-attach to the database:

        use master
        go

        sp_attach_db 'chtl_l2_db',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
        'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
        GO
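
    The step list above ends with a verification step that the post itself does not show; a sketch of how it might look, including (assuming the database comes back in single_user mode, since that setting is stored in the database itself) restoring multi-user access:

        use chtl_l2_db
        go

        sp_helpfile    -- confirm the .LDF entry now points at K:\MSSQLLog
        go

        alter database chtl_l2_db
        set multi_user
        go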

    Read the article

  • Moving multiple folders all at once in Outlook

    - by Luke
    Constantly at my shop, we are moving Outlook (or other email program) files between computers or Windows installations, and sometimes people have HUNDREDS of folders. Is there a quick way to move ALL the folders from multiple data files (*.PST) into one single file, without dragging each and every folder? No, I don't want to move the Inbox folder into the other Inbox folder for the quick move; I want something simple like selecting all folders and moving them that way. Does such a method exist in any version of Outlook?
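
    There is no built-in multi-select move across data files, but the Outlook object model can loop over the top-level folders of an open PST; a rough VBA sketch (the store display names are hypothetical; run it from Outlook's VBA editor with both data files open). Special folders such as a default store's Inbox may refuse to move, in which case their contents have to be moved instead:

        Sub MoveAllTopLevelFolders()
            Dim ns As Outlook.NameSpace
            Dim src As Outlook.MAPIFolder, dst As Outlook.MAPIFolder
            Dim i As Long
            Set ns = Application.GetNamespace("MAPI")
            Set src = ns.Folders("Old Data File")  ' names as shown in the folder pane
            Set dst = ns.Folders("New Data File")
            ' walk backwards: each MoveTo shrinks the source collection
            For i = src.Folders.Count To 1 Step -1
                src.Folders(i).MoveTo dst
            Next i
        End Sub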

    Read the article

  • Nginx Rewrite Convert Querystring to Path

    - by YardenST
    I would like this simple rewrite rule:

        /somefolder/mypage.aspx?myid=4343&tab=overview

    to be redirected to:

        /folder/4343/overview/

    I looked for some solutions and none actually worked. I tried:

        rewrite ^/somefolder/mypage.aspx?myid=(.*)&tab=overview$ /folder/$1/overview permanent;

    and

        rewrite ^/somefolder/mypage\.aspx\?myid=(.*)&tab=overview$ /folder/$1/overview permanent;

    What am I doing wrong? I'm getting 404 (simpler rules work just fine). Thanks
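
    The usual explanation, worth checking against the nginx docs: rewrite matches only the normalized URI, which never includes the query string, so neither pattern can ever match. A sketch that tests the arguments instead, using nginx's built-in $arg_* variables:

        location = /somefolder/mypage.aspx {
            # $arg_myid / $arg_tab expose the query-string parameters
            if ($arg_tab = overview) {
                # 'return' does not re-append the original query string
                return 301 /folder/$arg_myid/overview/;
            }
        }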

    Read the article

  • Any way I can trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up; I just want the external drive to be extra disk space. From their FAQ:

        The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.

    Read the article

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500 GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that, I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year.

    So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted; it just sat there. Even worse, while the system was trying in vain to delete those files, I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem.

    I have a spreadsheet of all the different folders on there (exactly 818), and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I thought that the drive might be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file (I/O error). I've looked, and it's not a permissions issue: the drive and all folders in it belong to me, and they are all read/write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?
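
    Since Disk Utility reported the filesystem as not clean but offered no repair, the usual next step (a sketch; /dev/sdX1 is a placeholder for the real partition, which must be unmounted first) is to run the ext4 checker by hand:

        sudo umount /dev/sdX1
        sudo fsck.ext4 -f -y /dev/sdX1    # -f: force a full check, -y: fix everything found

    The trash failures fit the same picture: the desktop has to create a .Trash-<uid> directory on the drive itself, which fails once the filesystem is corrupted or has dropped to read-only.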

    Read the article

  • Windows command line ZIP extraction with checksum or similar?

    - by Alan B
    What I need to do, at the command line, is:

    1. Extract the contents of a ZIP archive.
    2. Change an arbitrary number of the extracted files.
    3. Repeat step 1, but because it is a huge archive, only extract the archived copies of the files changed in step 2, which is much faster.

    Ideally the extraction in step 3 would do something like a checksum on the files on disk and only extract those where the file in the archive has a different checksum, or maybe compare the date-changed stamp on the disk file. At the minute I use pkzipc.exe, which is the command-line version of PKZIP, but I can't see a way to do it with this. You can extract files from the archive that are newer than the disk files, but what I want is the opposite of that, in a sense.
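
    One way to get the checksum behaviour without special archiver support is a small script; a sketch in Python that compares each entry's stored CRC-32 against the file on disk (the archive name is a placeholder, and it assumes extraction into the current directory):

        import os
        import zipfile
        import zlib

        with zipfile.ZipFile("archive.zip") as zf:
            for info in zf.infolist():
                if info.filename.endswith("/"):
                    continue  # directory entry, nothing to compare
                if os.path.exists(info.filename):
                    with open(info.filename, "rb") as f:
                        disk_crc = zlib.crc32(f.read()) & 0xFFFFFFFF
                    if disk_crc == info.CRC:
                        continue  # disk copy already matches the archive
                zf.extract(info)  # restore only entries that differ or are missing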

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of PDF files in a directory in batch. I came across PDFtk, and it looks like it might do what I need, but I've reviewed the command-line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of PDF files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work...

        #!/bin/bash
        # Created by Dave, 2012-02-23
        # This script uses PDFtk to password protect every PDF file
        # in the directory specified. The script creates a directory named
        # "protected_[DATE]" to hold the password protected version of the files.
        #
        # I'm using the "user_pw" parameter,
        # which means no one will be able to open or view the file without
        # the password.
        #
        # PDFtk must be installed for this script to work.
        #
        # Usage: ./protect_with_pdftk.bsh [FILE(S)]
        # [FILE(S)] can use wildcard expansion (e.g., *.pdf)

        # This part isn't working.... ignore. The goal is to avoid errors if the
        # directory to be created already exists by only attempting to create
        # it if it doesn't exist
        #
        #TARGET_DIR="protected_$(date +%F)"
        #if [ -d "$TARGET_DIR" ]
        #then
        #echo
        # echo "$TARGET_DIR directory exists!"
        #else
        #echo
        # echo "$TARGET_DIR directory does not exist!"
        #fi

        mkdir protected_$(date +%F)
        for i in *pdf ; do
            pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD]
        done
        echo "Complete. Output is in the directory: ./protected_$(date +%F)"
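
    On the commented-out block: as written it only prints a message and never guards the mkdir call. A sketch of how it could be folded in, using mkdir -p, which creates the directory only when needed and is silent when it already exists (the [PASSWORD] placeholder is kept from the original):

        TARGET_DIR="protected_$(date +%F)"
        mkdir -p "$TARGET_DIR"    # -p: no error if the directory already exists
        for i in *.pdf ; do
            pdftk "$i" output "$TARGET_DIR/$i" user_pw [PASSWORD]
        done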

    Read the article

  • WebCenter Customer Spotlight: Alberta Agriculture and Rural Development

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada. The primary business challenge faced by the Alberta Ministry of Agriculture was that of managing the rapid growth of their information. They needed to incorporate a system that would work across 22 different divisions within the ministry and deliver an improved and more efficient experience for desktop, Web, and mobile users, while addressing their regulatory compliance needs as part of the Canadian government. The customer implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content and developed a strong and repeatable information life cycle management methodology across all their 22 divisions and agencies. With the implemented solution, Alberta Agriculture and Rural Development centrally manages over 20 million documents for 22 divisions and agencies, and has improved the time required to find records, the reliability of information, the speed and accuracy of reporting, and data security.

    Company Overview
    Alberta Agriculture and Rural Development is a government ministry that works with producers and consumers to create a strong, competitive, and sustainable agriculture and food industry in the province of Alberta, Canada.

    Business Challenges
    The business users were overwhelmed by growth in documents (over 20 million files across 22 divisions and agencies), and it was difficult to find and manage documents and versions. There was a strong need for a personalized, easy-to-use, secure, and dependable method of managing and consuming content via desktop, Web, and mobile, while improving efficiency and maintaining regulatory compliance by removing the risk of non-uniform approaches to retention and disposition.

    Solution Deployed
    As a first step, Alberta Agriculture and Rural Development developed a business case with clearly defined business drivers:
        - Reduce time required to find records
        - Locate "lost" records
        - Capture knowledge lost through attrition
        - Increase the ease of retrieval
        - Reduce personal copies
        - Increase reliability of information
        - Improve speed and accuracy of reporting
        - Improve data security
    The customer implemented a centralized Enterprise Content Management solution based on Oracle WebCenter Content. They used an incremental implementation approach aligned with their divisional and agency structure, which allowed continuous process improvement. This led to a very strong and repeatable information life cycle management methodology across all their 22 divisions and agencies.

    Business Results
    Alberta Agriculture and Rural Development achieved impressive business results:
        - Centrally managing over 20 million files for 22 divisions and agencies
        - Federated model to manage documents in SharePoint and other applications
        - Records management for both paper and electronic records
        - Reduced time required to find records
        - Increased ease of retrieval
        - Increased reliability of information
        - Improved speed and accuracy of reporting
        - Improved data security

    Additional Information
        - Oracle Open World 2012 Presentation
        - Oracle WebCenter Content

    Read the article

  • No space left on disk

    - by Ned
    Folks, I'm trying to copy/move files to an external 1 TB hard drive with about 50 GB of remaining space, and I receive a "no space left on disk" message when I try. I've moved files off and retried, but still get the same message. Disk Usage Analyzer, Properties, and the freeware TreeSize all report available hard drive space of about 50 GB. I've tried df -i (50 GB available) and df -k, with the latter reporting only 1% of inode usage. I've been able to save files from Firefox to the drive also, yet I can't even rename files without getting the message. Yesterday, in the midst of trying to figure this out, I tried to move 4 files to the drive and got the message; today, I found them on the drive. What's up with that? (That's the only time that has happened to my knowledge.) Is this an Ubuntu problem, or is my hard drive just about to fail because of something like a controller problem? Any thoughts would be appreciated.
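
    A few quick checks that usually narrow this down (a sketch; /media/external is a placeholder for the drive's actual mount point):

        df -h /media/external     # block (space) usage
        df -i /media/external     # inode usage
        dmesg | tail -n 20        # the kernel's actual error message

    If dmesg shows I/O or controller errors, the symptoms point at the drive or its enclosure rather than at Ubuntu.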

    Read the article

  • Sharing Windows Folders on a Network... other PCs see but can't access

    - by John
    I'm so tired of network setup issues. All I want to do is share a folder and all its sub-folders so other PCs on my network can view and change this remote location. Why is it that setting a dir to "shared" doesn't actually make it usable in any way? The other PC can see the folder but is unable to actually open it and look inside. It seems every time I want to do this, I go through some semi-random process of right-clicking the folder and enabling sharing, then looking in the folder properties to add permissions and other sharing... and then I end up with some folders working but others randomly blocking permission on certain files or sub-dirs. I have 5 PCs in my local testing network and I cannot believe it should be this complicated... where is the simple "make this folder work on the network" option?! I have a mixture of XP, Vista & W7 machines, but this seems common to all of them.
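
    The usual culprit (general Windows behaviour, not a diagnosis specific to this question): access is gated by both the share permissions and the NTFS permissions, and the effective right is the more restrictive of the two, so a folder can be visibly shared yet still deny access. A command-line sketch that sets both sides (share name, path, and the use of Everyone are illustrative):

        REM create the share and open it fully at the share level
        net share MyShare=C:\Data /grant:Everyone,FULL
        REM then open the NTFS side (icacls on Vista/7; XP uses cacls instead)
        icacls C:\Data /grant Everyone:(OI)(CI)M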

    Read the article

  • Setting Up Local Repository with TortoiseSVN in Windows

    - by Teno
    I'm trying to set up a local repository so that all commits are copied to the local destination, not a remote server. I followed this tutorial. What I did:

    1. Created a folder named "SVN_Repo" under C:\Documents and Settings\[user-name]\My Documents\
    2. Right-clicked on the folder and chose TortoiseSVN -> Create repository here
    3. Clicked OK in the pop-up dialog asking whether to create a directory structure
    4. Created a folder named Repos for the local destination, under E:\
    5. Right-clicked on the SVN_Repo folder and chose SVN Checkout...
    6. Typed file:///E:\repos in the URL of repository field and clicked the OK button

    What I got:

        Checkout from file:///E:/repos, revision HEAD, Fully recursive, Externals included
        Unable to connect to a repository at URL 'file:///E:/repos'
        Unable to open an ra_local session to URL
        Unable to open repository 'file:///E:/repos'

    I must be doing something wrong. Could somebody point it out? Thanks.
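
    What looks off here, going by standard Subversion semantics: step 2 turned SVN_Repo into the repository, but the checkout in step 6 points at E:\repos, which is just an empty folder with no repository to open; file:// URLs also use forward slashes throughout. A sketch with the command-line client under that reading (the bracketed user name is kept as a placeholder):

        svn checkout "file:///C:/Documents%20and%20Settings/[user-name]/My%20Documents/SVN_Repo" E:\Repos

    In TortoiseSVN's checkout dialog, the same file:/// URL goes in the repository field, and E:\Repos becomes the checkout (working copy) directory.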

    Read the article

  • Protect individual sites on Ubuntu/Apache server

    - by Christoffer
    Hi, I need to set up an Apache server configuration for some client sites that run under the same Ubuntu 9.10 machine. All sites are allowed to run PHP, Python, and Ruby on Rails. I do not control the source code of these sites, so I need to set up a filter in order to prevent one user from reaching files on another user's account. If I run a script to list files in "/" from one account, I can browse some files and directories in the actual server root. I want to set the root for each account to /var/usersite.com/www/ instead, so that listing files in "/" shows the files in the client's root. How is this most easily configured? Cheers! /Christoffer
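
    For the PHP side this is commonly handled per virtual host with open_basedir; a sketch assuming mod_php, with paths mirroring the question. Note that Python and Rails processes are not confined by PHP settings, so they would additionally need separate OS users plus something like suEXEC or a chroot:

        <VirtualHost *:80>
            ServerName usersite.com
            DocumentRoot /var/usersite.com/www
            # confine PHP file access to this site's tree (plus a tmp dir)
            php_admin_value open_basedir /var/usersite.com/www:/tmp
        </VirtualHost>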

    Read the article

  • How to manage a growing team?

    - by Andra
    I'm the admin assistant of the CTO and our organization has recently experienced a lot of growth. Within six months, we have merged with another organization and our Dev team has grown from 8 to 16, with another 8 people in QA. What we're dealing with now is a highly technical individual, with little patience, managing a much larger team than he's accustomed to, 40% of which is junior as well as an increase in the number of projects. Needless to say, my boss is being pulled in too many directions at once. How can I help him manage his workload and his team so that the team feels they're getting enough help and support and remain effective? Also, where can I find additional resources on managing a growing team?

    Read the article
