Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.

Page 92/1877

  • Users removing Administrator from files/folders permissions

    - by user64204
    We're running Windows Server 2003 R2 with Active Directory and are having an issue with network shares whereby users, in an attempt to secure their documents, remove everybody (including the Administrator account) from their files/folders permissions. Since the Administrator no longer has read permission to them, we can't even back up the files manually, as we get permission errors. One solution we've found is to change the owner of the files and directories to the Administrator account; we can then change the permissions as we wish. The problem is that this has to be done manually, so it can't really be applied to an entire share. Another solution we've tried is to use cacls as follows: cacls d:\path\to\share /C /T /E /G Administrator:F The problem with this is that we're still getting an ACCESS DENIED error on files/folders from which Administrator was removed.

    Q1: Is there a way to restore at least read access to all files/folders for the Administrator account in a recursive fashion? That would be for the short term. For the long term we're looking for a solution that prevents users from removing Administrator from files/folders permissions. Since we're going to migrate to Windows Server 2008 R2 soon, we could wait until we've migrated to implement such a solution if need be.

    Q2: Is there a way to prevent users from removing Administrator from files/folders permissions on Windows Server 2003/2008?
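
    One hedged way to handle the short-term recovery (a sketch, assuming the built-in takeown and icacls tools are available on the server; the path and group name are placeholders): take ownership of the tree recursively, then re-grant Administrators full control.

        REM run from an elevated prompt on the file server
        takeown /F "D:\path\to\share" /R /D Y
        icacls "D:\path\to\share" /grant Administrators:F /T /C

    For the long-term question, one commonly used approach is to grant users Modify rather than Full Control on the share's NTFS permissions, so they can work with their documents but cannot edit ACLs or remove Administrators; whether that fits your delegation model is worth checking before the 2008 R2 migration.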

    Read the article

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to specific IPs only, and I've got the following problem: when I access www.example.com the deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the php file is downloaded directly in the browser without being processed. I do want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;    ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
                expires 30d;                   ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler { ## Common front handler
                rewrite / /index.php;
            }

            location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*.php)/ $1 last;
            }

            location ~ .php$ { ## Execute PHP scripts
                if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
                expires off; ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params; ## See /etc/nginx/fastcgi_params
            }
        }
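
    The allow/deny pair only exists inside location /, and a request for /index.php is matched by the location ~ .php$ block instead, where no access rules apply. A hedged sketch of one fix (the IP is a placeholder): declare the rules at the server level so every location inherits them, or repeat them inside the PHP location.

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            ## inherited by every location that does not define its own access rules
            allow my.public.ip;
            deny all;

            # ... the existing location blocks stay as they are ...
        }

    With the restriction applied to the PHP location as well, blocked clients get a 403 before the request ever reaches the fastcgi_pass step.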

    Read the article

  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over ssh. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this: rsync -e ssh -avzn usr@server1:/remote/path /local/path

    I did the first transfer using tar, but I didn't think of piping it through ssh and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders; I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2. I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It picked up some missing stuff and now it says there's nothing more to do (DRY RUN). But there are at least two files missing inside a subfolder. I ran that same rsync on that folder, but it's still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
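
    Note that the -n in the quoted command is rsync's --dry-run switch, so that invocation only reports what it would transfer and never copies anything. A hedged sketch of a real transfer plus a thorough verification pass (same paths as above):

        # drop -n so files are actually copied; -i (--itemize-changes) shows what changed and why
        rsync -e ssh -avzi usr@server1:/remote/path /local/path

        # follow-up pass comparing full checksums instead of size/mtime (slow, but catches silent differences)
        rsync -e ssh -avzci usr@server1:/remote/path /local/path

    If the second pass reports nothing left to do, the copy matches the source for the transferred tree.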

    Read the article

  • "Synchronizing" files between local and remote server using Git

    - by ConcreteVitamin
    My intended goal: I maintain some files on my local computer, and I also share them with others by putting them on my website. In the past I did this by manually uploading all the files using FTP every time I made some modifications. Now I am wondering if I can use Git to help me achieve this (by "pushing" the local files to my website server). My server is hosted by Dreamhost.

    First attempt: I followed this tutorial. I pushed my local files to my Github repo, then ssh'd into my Dreamhost server and did a clone --bare from the Github repo. But I found that git did not transfer my files, so I abandoned the tutorial.

    Second attempt: I ssh'd into my Dreamhost server and cloned directly from Github. My files were all transferred to the server. Then, on my local computer, I ran git remote add dreamhost ssh://[email protected]/~/my-project. Then I added some files, committed, and ran git push dreamhost master, and a bunch of errors appeared: http://geotakucovi.com/gitError.jpg As a newbie Git user, I must have missed something. Please help!
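
    A common pattern for this kind of deployment is a bare repository on the server plus a post-receive hook that checks the pushed branch out into the web root. This is a hedged sketch (repository name, site path, and remote URL are illustrative, not taken from the question): a bare clone has no working tree, which is why the first attempt showed no files, and pushing into a non-bare repository's checked-out branch is refused by default, which is a frequent source of errors in the second setup.

        # on the Dreamhost server
        git init --bare ~/my-project.git
        cat > ~/my-project.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        GIT_WORK_TREE=$HOME/example.com git checkout -f master
        EOF
        chmod +x ~/my-project.git/hooks/post-receive

        # on the local machine
        git remote add dreamhost ssh://user@server.dreamhost.com/~/my-project.git
        git push dreamhost master

    Every push then updates the files under ~/example.com, replacing the manual FTP step.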

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote:, nor use my ssh-agent to supply the pass phrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so copying root-owned files between them is not a big security concern for me.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar: sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C / But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formally formulate the question: it all began with the question of how to rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under the conditions that: the root account is locked on both systems, i.e. there are no root passwords, and the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo); I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either; the normal unprivileged user has entered their ssh pass phrase into the ssh agent. Thanks
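
    One widely used recipe (a hedged sketch, not from the question; user and host names are placeholders) is to let rsync invoke sudo on the remote side and grant only that one command passwordless rights in sudoers, so the rest of sudo stays password-protected:

        # /etc/sudoers.d/rsync on the receiving host (edit with visudo)
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # run as the unprivileged user; ssh authentication still comes from the agent
        rsync -av --rsync-path="sudo rsync" /some/local/dir/ me@remote:/root/dir/

    For root-owned files on the local end, the same idea applies in reverse (or the local rsync can be run under sudo while SSH_AUTH_SOCK is preserved so the agent is still used).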

    Read the article

  • Restore deleted default folders

    - by Helena T.
    I was tinkering with my new laptop and, on purpose, deleted those default folders in my "home" directory: "My Music", "Links", "Favorites". I did this because I decided I wanted all my data on another partition, leaving C: only for applications and config files. But now some of Explorer's functionality is gone: I cannot use the Favorites tree in the left-side pane, and I also discovered that "My Documents" stores some PowerShell config files. I feel like I misunderstood these folders' purpose and, by deleting them, caused some Explorer instability. Is there any way to restore them? I cannot seem to find one. Thank you for taking the time to read this.
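
    These special folders are ordinary directories whose locations are recorded in the shell's per-user "User Shell Folders" registry key, so restoring one usually means recreating the directory and, if its registry value was repointed or lost, setting it back to the default. A hedged sketch for Favorites (the value name differs per folder, and the data should stay the literal expandable string; log off or restart Explorer afterwards):

        :: run from a batch file; %% keeps %USERPROFILE% literal in the stored value
        mkdir "%USERPROFILE%\Favorites"
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Favorites /t REG_EXPAND_SZ /d "%%USERPROFILE%%\Favorites" /f

    Simply recreating the folders in the profile is often enough on its own, since deleting them does not normally change the registry entries.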

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse most NuGet packages for client side frameworks and scripts, dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page. Alternatives I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you find some of these useful as well or at least get some ideas what can be changed to provide better project flow. The first thing I’ve been doing is add a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, more with client side code so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style application, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open it’s open that’s typically the only server code I work with regularily. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating seems like an easy win. Nesting Page specific content In a lot of my existing applications that are pure server side MVC application perhaps with some JavaScript associated with them , I tend to have page level javascript and css files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have a .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that view retrieval is blocked:<system.webServer> <handlers> <remove name="BlockViewHandler"/> <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" /> </handlers> </system.webServer> With this in place, from inside of your Views you can then reference those same resources like this:<link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" /> and<script src="~/Views/Admin/QuizPrognosisItems.js"></script> which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a short cut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work however as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downside to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive then constantly having to search for files in your project. Client Side Folders The particular project shown above in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make the SPA applications end up below the App folder. Here’s what that looks like for me – here this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single SPA page application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong  – this is just what has worked for me and why! Skipping Project Navigation altogether with Resharper I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, by passing the solution explorer altogether. As long as you remember to use (which I sometimes don’t) and you know what you’re looking for it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE. Work how you like to work Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As so many things with ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET MVC

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
    A well known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension” otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems. 
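
    For reference, the part of install.rdf being edited usually looks something like this (a hedged sketch; the version numbers are illustrative and the em:id shown is Firefox's application GUID). Only the em:maxVersion value needs to change:

        <em:targetApplication>
          <Description>
            <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
            <em:minVersion>3.0</em:minVersion>
            <em:maxVersion>3.7.*</em:maxVersion>
          </Description>
        </em:targetApplication>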
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible, this method should get those extensions working nicely for you again.

    Read the article

  • How To View PowerPoint 2010 Files Without Having MS Office 2010

    - by Gopinath
    For those who want to view PowerPoint 2010 files without installing Microsoft Office 2010, here is a free app: PowerPoint Viewer 2010 from Microsoft. PowerPoint Viewer 2010 is an upgrade of the PowerPoint Viewer 2007 application, with support for viewing all types of PowerPoint files created using MS Office 2010. As the public release of MS Office 2010 is just a few weeks away, PowerPoint Viewer 2010 is a handy app to install, as one of your managers/colleagues/friends may send you a PPT created using Office 2010. Another Office viewer app that would be useful for most of us is a Word 2010 Viewer. I Googled to find links to download it, but it seems Microsoft hasn't released one (beware of the many fake downloads in the guise of a Word 2010 viewer). If any of you find links to download an official Word 2010 viewer, let us know. Download PowerPoint 2010 Viewer [via DI]

    Read the article

  • Dash music lens doesn't play music files

    - by scofandr
    After installing 12.10, the music lens in the Dash only worked for a short while. After a little time the "Open in directory" and "Play" buttons (or whatever was there) just disappeared. The only thing I can think of is that I immediately removed Rhythmbox (as I usually do) and installed Banshee from quantal main. Besides that, track length is no longer displayed correctly; something like 4838:32 is shown. The situation is the same for both single tracks and albums. But music files in the Dash's main menu (for example, recent files) have all features working: "E-mail", "Open" and "Play". Is there any way to solve all this? I hope there's no need to reinstall Rhythmbox...

    Read the article

  • Editing Project files, Resource Editors in VS 2010

    - by rajbk
    Editing Project Files Visual Studio 2010 gives you the ability to easily edit the project file associated with your project (.csproj or .vbproj). You might do this to change settings related to how the project is compiled since proj files are MSBuild files. One would normally close Visual Studio and edit the proj file using a text editor.  The better way is to first unload the project in Visual Studio by right clicking on the project in the solution explorer and selecting “Unload Project”   The project gets unloaded and is marked “unavailable” The project file can now be edited by right clicking on the unloaded project.    After editing the file, the project can be reloaded. Resource editors in VS 2010 Visual studio also comes with a number of resource editors (see list here). For example, you could open a file using the Binary editor like so. Go to File > Open > File.. Select a File and choose the “Open With..” option in the bottom right.   We are given the option to choose an editor.   Note that clicking on the “Add..” in the dialog above allows you to include your favorite editor.   Choosing the “Binary editor” above allows us to edit the file in hex format. In addition, we can also search for hex bytes or ASCII strings using the Find command.   The “Open With..” option is also available from within the solution explorer as shown below: Enjoy!   Mr. Incredible: No matter how many times you save the world, it always manages to get back in jeopardy again. Sometimes I just want it to stay saved! You know, for a little bit? I feel like the maid; I just cleaned up this mess! Can we keep it clean for... for ten minutes!
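
    As a concrete illustration of the kind of change you might make while the project is unloaded (a hedged sketch; the properties shown are ordinary MSBuild settings chosen as examples, not taken from the post):

        <!-- inside the unloaded .csproj, e.g. in the Release PropertyGroup -->
        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
          <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
          <DocumentationFile>bin\Release\MyProject.xml</DocumentationFile>
        </PropertyGroup>

    After saving, reload the project and the new settings take effect on the next build.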

    Read the article

  • Extracting files from merge module

    - by Mystagogue
    All I want is a command-line tool that can extract files from a merge module (.msm) onto disk. I'm trying msidb.exe and orca.exe. The documentation for Orca states: "Many merge module options can be specified from the command line... Extracting Files from a Merge Module: Orca supports three different methods for extracting files contained in a merge module. Orca can extract the individual CAB file, extract the files into a module tree and extract the files into a source image once it has been merged into a target database... Extracting Files: To extract the individual files from a merge module, use the -x <path> option on the command line, where <path> is the desired path to the new directory tree. The specified path is used as the root path for the extracted files. All files are extracted from the CAB file embedded in the module and placed in the specified path. The directory layout for the extracted files is based on the directory tree of the merge module."

    It mostly sounds like exactly what I need. But when I try it, Orca simply opens up an editor (with info on the msm I specified) and then does nothing. I've tried a variety of command lines, usually starting with this: orca -x theDirectory theModule.msm I use "theDirectory" as whatever empty folder I want. Like I said - it didn't do anything.

    Then I tried msidb, where a couple of attempts I've made look like this: msidb -d theModule.msm -w {storage} and msidb -d theModule.msm -x {stream} In both cases, I don't know what to insert for {storage} or {stream} to make it happy - I don't know what those represent. Can someone explain what I'm doing wrong with the command line options? Is there any other tool that can do this?
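
    For the msidb route, a hedged sketch (assuming the module keeps its cabinet in the conventional MergeModule.CABinet stream, which is the name the Windows Installer SDK uses for merge modules): -x exports a named binary stream to a file, and the exported cabinet can then be expanded.

        msidb.exe -d theModule.msm -x MergeModule.CABinet
        md extracted
        expand MergeModule.CABinet -F:* extracted

    The extracted files are named by their File-table keys rather than their final file names, so some renaming against the module's File table may still be needed.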

    Read the article

  • Copyright notices/disclaimers in source files

    - by mojuba
    It's a common practice to place copyright notices, various legal disclaimers and sometimes even full license agreements in each source file of an open-source project. Is this really necessary for (1) an open-source project and (2) a closed-source project? What are you trying to achieve or prevent by putting these notices in source files? I understand it's a legal question and I doubt we can get a fully competent answer here at programmers.SO (it's for programmers, isn't it?). What would also be interesting to hear is: when you put legal stuff in your source files, is it because "everyone does it" or because you got legal advice? What was the reasoning?

    Read the article

  • How do I share files with someone who does not have a ubuntu one account

    - by Bob
    I tried to search for an answer, except that, uhhh there is no search. I want to share a Ubuntu One folder with people who do not have a Ubuntu One account. Is that possible? I found this answer in the faq, "Yes! Since Ubuntu 10.04 LTS, Ubuntu One supports sharing files publicly. Right click on a file already synchronized by Ubuntu One and select "Publish via Ubuntu One". Right-click on the file again and select "Copy Ubuntu One public URL". You can now paste the URL and share it with whoever you want." That doesn't seem to help me with 11.04. There is no 'Publish via Ubuntu One' button in 11.04. Does anyone know if there is a way to share files with non-ubuntu users? If so, how? Thanks In Advance, Bob

    Read the article

  • How to Share Files Online with Ubuntu One

    - by Chris Hoffman
    Ubuntu One, Ubuntu’s built-in cloud file storage service, allows you to make files publicly available online or share them privately with others. You can share files over the Internet right from Ubuntu’s file browser. Ubuntu One has two file-sharing methods: Publish, which makes a file publicly available on the web to anyone who knows its address, and Share, which shares a folder with other Ubuntu One users.

    Read the article

  • File Folder copy

    - by Dario Dias
    Below is the VBScript code. If the file/s or folder exist I get scripting error, "File already exists". How to fix that? How to create folder only if it does not exist and copy files only that are new or do not exist in source path? How to insert the username (Point 1) after "Welcome" and at (Poin 3) instead of user cancelled? Can the buttons be changed to Copy,Update,Cancel instead of Yes,No,Cancel? (Point 2) The code: Set objFSO = CreateObject("Scripting.FileSystemObject") Set wshShell = WScript.CreateObject( "WScript.Shell" ) strUserName = wshShell.ExpandEnvironmentStrings( "%USERNAME%" ) Message = " Welcome to the AVG Update Module" & vbCR '1* Message = Message & " *****************************" & vbCR & vbCR Message = Message & " Click Yes to Copy Definition Files" & vbCR & vbCR Message = Message & " OR " & vbCR & vbCR Message = Message & " Click No to Update Definition Files." & vbCR & vbCR Message = Message & " Click Cancel (ESC) to Exit." & vbCR & vbCR X = MsgBox(Message, vbYesNoCancel, "AVG Update Module") '2* 'Yes Selected Script If X = 6 then objFSO.FolderExists("E:\Updates") if TRUE then objFSO.CreateFolder ("E:\Updates") objFSO.CopyFile "c:\Docume~1\alluse~1\applic~1\avg8\update\download\*.*", "E:\Updates\" , OverwriteFiles MsgBox "Files Copied Succesfully.", vbInformation, "Copy Success" End If 'No Selected Script If X = 7 then objFSO.FolderExists("Updates") if TRUE then objFSO.CreateFolder("Updates") objFSO.CopyFile "E:\Updates\*.*", "Updates", OverwriteFiles Message = "Files Updated Successfully." & vbCR & vbCR Message = Message & "Click OK to Launch AVG GUI." & vbCR & vbCR Message = Message & "Click Cancel (ESC) to Exit." & vbCR & vbCR Y = MsgBox(Message, vbOKCancel, "Update Success") If Y = 1 then Set WshShell = CreateObject("WScript.Shell") WshShell.Run chr(34) & "C:\Progra~1\avg\avg8\avgui.exe" & Chr(34), 0 Set WshShell = Nothing End if If Y = 3 then WScript.Quit End IF 'Cancel Selection Script If X = 2 then MsgBox "No Files have been Copied/Updated.", vbExclamation, "User Cancelled" '3* End if
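
    A hedged sketch of the pieces that usually address these symptoms (the folder and file paths are copied from the script above; everything else is illustrative). FolderExists returns a value that has to be tested rather than called on its own line, CopyFile's third argument controls overwriting, and the username can be concatenated straight into the welcome message. Note that VBScript's MsgBox cannot relabel its buttons, so Copy/Update/Cancel (Point 2) would need an HTA or a custom form instead.

        ' Point 1: greet the user by name
        Message = " Welcome " & strUserName & " to the AVG Update Module" & vbCR

        ' create the destination only when it is missing
        If Not objFSO.FolderExists("E:\Updates") Then
            objFSO.CreateFolder "E:\Updates"
        End If

        ' third argument True = overwrite existing files, avoiding "File already exists"
        objFSO.CopyFile "c:\Docume~1\alluse~1\applic~1\avg8\update\download\*.*", "E:\Updates\", True

    Copying only new or changed files is beyond what CopyFile offers by itself; that normally means comparing DateLastModified per file or shelling out to xcopy /D or robocopy.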

    Read the article

  • Neo4j Windows Plugin Installation desktop V2.M06

    - by user2904850
    I have downloaded and installed the latest Window V2 Community M06 build of Neo4j on a windows 7 64 bit machine (I have tried the 32bit and 64 bit installs). Installation proceeds without and problems and the system runs normally but there is no /plugins directory (on both versions):- \Program Files (x86)\Neo4j Community\ \Program Files (x86)\Neo4j Community\.install4j \Program Files (x86)\Neo4j Community\bin I am trying to install the Spatial plugin .. so I tried creating the \plugins directory. I extracted the zip file and left the zip file in the directory but the plugins are not found:- C:\Users\WFN44217>curl localhost:7474/db/data/ { "extensions" : { }, "node" : "http://localhost:7474/db/data/node", "reference_node" : "http://localhost:7474/db/data/node/0", "node_index" : "http://localhost:7474/db/data/index/node", "relationship_index" : "http://localhost:7474/db/data/index/relationship", "extensions_info" : "http://localhost:7474/db/data/ext", "relationship_types" : "http://localhost:7474/db/data/relationship/types", "batch" : "http://localhost:7474/db/data/batch", "cypher" : "http://localhost:7474/db/data/cypher", "transaction" : "http://localhost:7474/db/data/transaction", "neo4j_version" : "2.0.0-M06" } I have tried some other plugins, but these are also not found. Any idea what might be missing? Extract from the log files: 2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: VM Arguments: [-Dexe4j.semaphoreName=Local\c:_program_files_neo4j_community_bin_neo4j-community.exe, -Dexe4j.isInstall4j=true, -Dexe4j.moduleName=C:\Program Files\Neo4j Community\bin\neo4j-community.exe, -Dexe4j.processCommFile=C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp, -Dexe4j.tempDir=, -Dexe4j.unextractedPosition=0, -Djava.library.path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin\;C:\Program Files\Java\jdk1.7.0_40\bin\;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin;c:\ikvm-7.2.4630.5\bin;C:\Program Files\nodejs\;C:\Users\WFN44217\AppData\Roaming\npm;c:\program files\neo4j community\jre\bin, -Dexe4j.consoleCodepage=cp0, -Dinstall4j.launcherId=24, -Dinstall4j.swt=false] 2013-10-17 11:19:31.881+0000 INFO [o.n.k.i.DiagnosticsManager]: Java classpath: 2013-10-17 11:19:31.883+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/neo4j-desktop-2.0.0-M06.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\charsets.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\.install4j\i4jruntime.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/jaccess.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/zipfs.jar 2013-10-17 11:19:31.884+0000 INFO 
[o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\resources.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jfr.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jsse.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\bin\neo4j-desktop-2.0.0-M06.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunec.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\classes 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\rt.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/ 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunmscapi.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dns_sd.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dnsns.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunjce_provider.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/localedata.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/access-bridge-64.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/.install4j/i4jruntime.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\sunrsasign.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jce.jar 2013-10-17 11:19:31.884+0000 INFO [o.n.k.i.DiagnosticsManager]: Library path: 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\wbem 2013-10-17 11:19:31.885+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\WindowsPowerShell\v1.0 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft\Web Platform Installer 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit 2013-10-17 11:19:31.886+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\Tools\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\DTS\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn 2013-10-17 11:19:31.887+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL 
Server\110\Tools\Binn\ManagementStudio 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Java\jdk1.7.0_40\bin 2013-10-17 11:19:31.888+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\cmd 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\bin 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\ikvm-7.2.4630.5\bin 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\nodejs 2013-10-17 11:19:31.889+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Users\WFN44217\AppData\Roaming\npm 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: C:\Program Files\Neo4j Community\jre\bin 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: System.properties: 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.moduleName = C:\Program Files\Neo4j Community\bin\neo4j-community.exe 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.processCommFile = C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.semaphoreName = Local\c:_program_files_neo4j_community_bin_neo4j-community.exe 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.boot.library.path = c:\program files\neo4j community\jre\bin 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: exe4j.consoleCodepage = cp0 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: path.separator = ; 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: file.encoding.pkg = sun.io 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.country = GB 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: user.script = 2013-10-17 11:19:31.890+0000 INFO [o.n.k.i.DiagnosticsManager]: sun.os.patch.level = Service Pack 1 2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: install4j.exeDir = C:\Program Files\Neo4j Community\bin\ 2013-10-17 11:19:31.891+0000 INFO [o.n.k.i.DiagnosticsManager]: user.dir = C:\Program Files\Neo4j Community\bin

    Read the article

  • Good approach for hundreds of comsumers and big files

    - by ????? ???????
    I have several files (nearly 1GB each) with data; the data is lines of text. I need to process each of these files with several hundred consumers. Each consumer does some processing that differs from the others. The consumers do not write anywhere concurrently; they only need the input string, and after processing they update their local buffers. Consumers can easily be executed in parallel.

    Important: within one specific file each consumer has to process all lines (without skipping) in the correct order (as they appear in the file). The order in which different files are processed doesn't matter. Processing a single line in one consumer is comparatively fast; I expect less than 50 microseconds on a Core i5.

    So now I'm looking for a good approach to this problem. This is going to be part of a .NET project, so please let's stick with .NET only (C# is preferable). I know about TPL and Dataflow, and I guess the most relevant block would be BroadcastBlock. But I think the problem there is that with each line I'd have to wait for all consumers to finish in order to post the next one, which I guess would not be very efficient.

    I think the ideal situation would be something like this: one thread reads from the file and writes to a buffer. Each consumer, when it is ready, reads a line from the buffer concurrently and processes it. An entry shouldn't be deleted from the buffer as one consumer reads it; it can be deleted only when all consumers have processed it. TPL schedules the consumer threads itself. If one consumer outperforms the others, it shouldn't wait and can read more recent entries from the buffer. Am I right with this kind of approach? Either way, how can I implement a good solution? A bit was already discussed on StackOverflow: link
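
    A hedged sketch of one way to get "every consumer sees every line, in order, with back-pressure" in TPL Dataflow (class and delegate names are illustrative): BroadcastBlock only guarantees that a consumer sees the latest item, so slow consumers can miss lines; giving each consumer its own bounded ActionBlock and awaiting SendAsync on all of them keeps every line, while a fast consumer can run up to its queue capacity ahead of a slow one.

        using System;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Threading.Tasks.Dataflow;

        class LineFanOut
        {
            // One bounded queue per consumer: per-consumer ordering plus back-pressure.
            public static async Task ProcessFileAsync(string path, Action<string>[] consumers)
            {
                var blocks = consumers
                    .Select(c => new ActionBlock<string>(c,
                        new ExecutionDataflowBlockOptions { BoundedCapacity = 10000 }))
                    .ToArray();

                foreach (var line in File.ReadLines(path))
                {
                    // SendAsync waits while a consumer's queue is full instead of dropping the line.
                    await Task.WhenAll(blocks.Select(b => b.SendAsync(line)));
                }

                foreach (var b in blocks) b.Complete();
                await Task.WhenAll(blocks.Select(b => b.Completion));
            }
        }

    The reader only stalls when the slowest consumer's queue is actually full, so the 10000-line BoundedCapacity (an assumption, tune to taste) is what bounds memory rather than the 1GB file.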

    Read the article

  • Thumbnailers of text and ogg files

    - by David López
    I use Ubuntu 12.04 and in Nautilus I can see thumbnails for ogg files (with embedded artwork) and text files, just like in the figure. It's a nice feature. I have a slow machine, so I've installed Arch with LXDE and pcmanfm. I would like the same thumbnails, but I can only see a few of them, as in the second figure. I've installed nautilus, thunar, spacefm... and lots of different thumbnailers on my Arch machine, but I haven't been able to get thumbnails for text and ogg files. I think that maybe Ubuntu uses a patched Nautilus version with extended capabilities or something like that. Any idea? Thanks.

    Read the article

  • Share files - Ubuntu 12.4 and Windows 7 - one network - password not accepted

    - by gotqn
    I asked this question on Super User but no one helped me; I hope to get more attention here. I have three computers connected to one network by a modem, and I want to share files on this network in the easiest way possible (I have read about solutions using Samba). So, I have three machines: one with Windows 7, one with Windows XP, and one with Ubuntu 12.04, and I have the following situation: the Windows PCs can share files between each other; the Windows PCs can see that the Ubuntu one is on the network; the PC with Ubuntu can see only the PC with Windows 7, but when I click on a folder it asks me to enter the network password and does not accept it (I am 100% sure it's the correct one). Is there a way to fix this situation - at least to enable file sharing between the Ubuntu and Windows 7 PCs - or should I choose a different approach? (Please advise.)
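
    One hedged way to narrow down whether the fault is the file manager or the credentials themselves (host, share, user and workgroup names are placeholders) is to probe the Windows 7 machine from an Ubuntu terminal:

        # list the shares, authenticating as the Windows 7 machine's local user
        smbclient -L //WIN7-PC -U 'WORKGROUP\windowsuser'

        # or mount one share directly
        sudo mount -t cifs //WIN7-PC/Shared /mnt/share -o username=windowsuser,domain=WORKGROUP

    If these accept the password, the stored credentials in the Ubuntu file manager are the likely problem; if they are rejected too, the Windows 7 side (a local account for the Ubuntu user, or turning off password-protected sharing) is the place to look.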

    Read the article

  • Nautilus tags/labels/marks/columns for folders/files

    - by madox2
    Is there any way to mark folders or files with tags (or labels, new columns, or whatever) in Nautilus? It would be nice to sort marked folders or files by these tags. In particular, my first idea was to mark folders in my Movie directory with tags like seen, not seen, must see, and so on. Then I realized it would be useful in any other workspace with any custom tags... Is there any Nautilus extension for this? Or any other file manager which can do this? It might look like this:

    Read the article

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project's objective is to migrate some attachments to another system. These attachments will be located in a parent folder, let's say "Folder 0" (see this question's diagram for a better understanding), and they will be zipped/compressed. I want my batch script to be called like so: BatchScript.bat "c:\temp\usd\Folder 0" I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is iterate through "Folder 0"'s subfolders and extract all of the ZIP files they contain into their respective folders. It is obligatory that the extracted files end up in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1" and so forth. I have read about the FOR...DO command in the Windows XP Professional Product Documentation - Using Batch Files. Here's my script: @ECHO OFF FOR /D %folder IN (%%rootFolderCmdLnParam) DO FOR %zippedFile IN (*.zip) DO 7za.exe e %zippedFile I guess that I would also need to change the current directory before calling 7za.exe e %zippedFile for the extraction, but I can't figure out how to do that in this batch file (though I know how on the command line, and I know it is the same "cd" instruction). Anyone's help is gratefully appreciated.
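
    A hedged sketch of one way to do this (assuming 7za.exe is on the PATH; -o names the output directory with no space before the path): for /r walks every subfolder of the folder passed as the first argument, and %%~dpZ expands to each archive's own directory, so the contents land next to their ZIP without having to cd around.

        @echo off
        rem usage: BatchScript.bat "c:\temp\usd\Folder 0"
        for /r "%~1" %%Z in (*.zip) do (
            7za.exe x "%%Z" -o"%%~dpZ" -y
        )

    7za's x keeps any folder structure stored inside the archive; e would flatten it into the same directory instead.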

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. Directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: rm -rf (dir)I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact. unlink(2) on the directory: Definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; done ) This is actually the shortened version; the real one I'm running, which just adds some progress-reporting and a clean stop when we run out of files to delete, is: export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Now, for the questions: As mentioned above, is the per-directory entry limit tunable? Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by "ls -U", and it took perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed? Is there a better way to do this sort of thing? Not store millions of files in a directory; I know that's silly, and it wouldn't have happened on my watch. 
Googling the problem and looking through SF and SO offers a lot of variations on "find" that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours.
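
    On the "is there a better way" question, two approaches that are commonly suggested for this situation (hedged sketches, not from the post; the path is a placeholder): find's -delete removes entries without building huge argument lists or spawning an rm per batch, and rsync can empty a directory by mirroring an empty one over it.

        # unlink every regular file in the directory in a single traversal, no exec/xargs overhead
        find /path/to/dir -type f -delete

        # or mirror an empty directory over it, which some find faster on huge directories
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /path/to/dir/

    Neither changes how many entries the ext3 directory index can hold, of course; they only make the cleanup faster.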

    Read the article

  • Source control products that support linked/shared files?

    - by Ian Boyd
    We're interested in moving away from a source control system that supports the concept of shared or linked files. A shared file means: a file modified in one project is automatically updated in every other project that uses that same file. It does this without a developer having to request it, reverse-integrate it, ask for it, or even want it. We're trying to see if any other commonly used source-control systems can meet our needs and include linked or shared files. My limited research shows that: Team Foundation Server doesn't support sharing files; Subversion doesn't support sharing files (including Externals); CVS doesn't support sharing files (including Modules). Anything else? (Besides our current source control product, obviously.) References: Subversion and shared files across repositories/projects? How to share files between CVS projects? Will TFS ever support shared files for projects under source control?

    Read the article
