Search Results

Search found 38248 results on 1530 pages for 'project folder'.

  • Which topics do I need to research to enable me to complete my self-assigned "Learning Project"?

    - by Anonymous -
    I want to continue learning C#. I've read parts of a few books recommended on here, and the language is feeling more familiar by the day. I'd like to tackle a mid-sized personal project to take my expertise to the next level. What I'd like to do is create an application that manages expenses and runs on multiple machines on a LAN. So, for example, say we have person1 and person2 on separate machines running the application: when person1 enters an expense, it should appear on person2's (pretty UI) view of the expenses database, and vice versa. What topics do I need to research to make this possible? I plan on learning WPF for the UI, though the steep learning curve (or so I'm told) has me a little anxious at this stage. With regards to the database, which one would you recommend? I don't want a 'server' for the database to run on, so do I need an embedded database, where each client machine runs its own copy and the copies update each other (upon startup, upon entering an expense on any machine, etc.)? Which topics under networking should I be looking at? I haven't studied networking before in any language, so do I need to learn about sockets, or something else?
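    For the networking half of the question, here is a minimal sketch of one possible approach: each client broadcasts new expenses over UDP and every peer applies them to its local database copy. Everything in it (the port number, the pipe-delimited payload, the type and method names) is an illustrative assumption, not a recommended design.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class ExpenseSync
        {
            const int Port = 5005; // assumed: any free port shared by all clients

            // Broadcast a new expense to every machine on the local subnet.
            public static void Publish(string person, decimal amount)
            {
                using (var udp = new UdpClient { EnableBroadcast = true })
                {
                    byte[] payload = Encoding.UTF8.GetBytes(person + "|" + amount);
                    udp.Send(payload, payload.Length,
                             new IPEndPoint(IPAddress.Broadcast, Port));
                }
            }

            // Each client runs this loop (e.g. on a background thread) and
            // applies incoming expenses to its local embedded database.
            public static void Listen()
            {
                var udp = new UdpClient(Port);
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = udp.Receive(ref remote);
                    string[] parts = Encoding.UTF8.GetString(data).Split('|');
                    Console.WriteLine("Expense from " + parts[0] + ": " + parts[1]);
                    // TODO: insert into the local expenses database here.
                }
            }
        }

    On the database side, embedded options such as SQL Server Compact or SQLite are the usual fit for a no-server setup; the hard part a sketch like this glosses over is reconciling the copies when a machine has been offline.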

    Read the article

  • How to persuade C fanatics to work on my C++ open source project?

    - by paperjam
    I am launching an open-source project into a space where a lot of the development is still done Linux-kernel-style, i.e. in C with a low-level mindset. There are multiple benefits to C++ in our space, but I fear that those used to working in C will be scared off. How can I make the case for the benefits of C++? Specifically, the following C++ features are very valuable:
    - objects and reference-counted pointers, so we really don't have to malloc(sizeof(X)) or memcpy() structs
    - templates, for specialising whole bodies of code with specific performance optimizations and for avoiding duplication of code
    - template metaprogramming, related to the above
    - the syntactic sweetness available (e.g. operator overloading, to be used in very small doses)
    - the STL
    - the Boost libraries
    Many of the knee-jerk negative reactions to C++ are ill-founded. Performance does not suffer: modern compilers can flatten dozens of call-stack levels and avoid bloat through wide use of template specializations. Granted, when using metaprogramming and building multiple specializations of a large call tree, compile time is slower, but there are ways to mitigate this. How can I sell C++?

    Read the article

  • What's wrong with this OpenGL ES 2.0 shader?

    - by Project Dumbo Dev
    I just can't understand this. The code works perfectly on the emulator (which is supposed to give more problems than phones…), but when I try it on an LG-E610 it doesn't compile the vertex shader. This is my log error (which contains the shader code as well). EDITED. Shader:

        uniform mat4 u_Matrix;
        uniform int u_XSpritePos;
        uniform int u_YSpritePos;
        uniform float u_XDisplacement;
        uniform float u_YDisplacement;
        attribute vec4 a_Position;
        attribute vec2 a_TextureCoordinates;
        varying vec2 v_TextureCoordinates;

        void main(){
            v_TextureCoordinates.x = (a_TextureCoordinates.x + u_XSpritePos) * u_XDisplacement;
            v_TextureCoordinates.y = (a_TextureCoordinates.y + u_YSpritePos) * u_YDisplacement;
            gl_Position = u_Matrix * a_Position;
        }

    The log reports this before loading/compiling the shader:

        11-05 18:46:25.579: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x51984000 size:5570560 offset:4956160 fd:46
        11-05 18:46:25.629: D/memalloc(1649): /dev/pmem: Mapped buffer base:0x5218d000 size:5836800 offset:5570560 fd:49

    Maybe it has something to do with that memalloc? The phone also gives a constant error while plugged in: ERROR FBIOGET_ESDCHECKLOOP fail, from msm7627a.gralloc. Edited: "InfoLog:" refers to glGetShaderInfoLog, and it's returning nothing. Since I removed the log in a previous edit, I will just say I'm looking for feedback on compiling shaders. Solution + more questions: OK, the problem seems to be that either ints are not working (generally speaking) or that you can't mix floats with ints. That brings me to the question: why on earth is glGetShaderInfoLog returning nothing? Shouldn't it tell me something is wrong on those lines? It surely does when I misspell something. I solved it by turning everything into floats, but if someone can shed some light on this, it would be appreciated. Thanks.

    Read the article

  • How can I create a folder node on companyhome in Alfresco? [migrated]

    - by bluestella
    I am having a hard time finding an answer to my question. I am trying to create a folder/space in Alfresco, but I don't have any idea how to do it. Can someone help me with this? I'm using a Java webscript. All I have so far is this:

        package org.alfresco.module.demoscripts;

        import java.io.IOException;

        import org.alfresco.web.scripts.AbstractWebScript;
        import org.alfresco.web.scripts.WebScriptException;
        import org.alfresco.web.scripts.WebScriptRequest;
        import org.alfresco.web.scripts.WebScriptResponse;
        import org.json.JSONException;
        import org.json.JSONObject;

        public class SimpleWebScript extends AbstractWebScript {
            public void execute(WebScriptRequest req, WebScriptResponse res) throws IOException {
                try {
                    // build a json object
                    JSONObject obj = new JSONObject();

                    // put some data on it
                    obj.put("field1", "data1");

                    // build a JSON string and send it back
                    String jsonString = obj.toString();
                    res.getWriter().write(jsonString);
                } catch (JSONException e) {
                    throw new WebScriptException("Unable to serialize JSON");
                }
            }
        }

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts introduced in TFS 2010 compared to TFS 2008. In TFS 2010 you first run the installation and then configure the installed feature from the available features. This is a bit similar to a SharePoint installation, where you first install and then configure the SharePoint farms.

    1) Installation features available in TFS 2010:

    a) Basic: the most compact TFS installation possible. It installs and configures source control, work item tracking and build services only (SharePoint and Reporting integration are not possible).

    b) Standard Single Server: suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.

    c) Advanced: suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.

    d) Application Tier Only: for configuring high availability for Team Foundation Server in a load-balanced environment (NLB), for moving Team Foundation Server from one server to another, or for restoring TFS.

    e) Upgrade: for upgrading from a prior version of TFS.

    Note: one more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas the previous versions of TFS (2008 and 2005) could not be installed on a client OS.

    2) Team Project Collections:

    In TFS 2008, the TFS server contains a set of team projects; each project may or may not be independent of the others, every check-in gets an ever-increasing changeset ID irrespective of the team project it is checked into, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:

    a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you must restore the Team Foundation Server database from a backup taken as per your maintenance plans, and because of this the other team projects may lose work that was not backed up.

    b) Your company merges with another company and now you have two TFS servers:
    one TFS server which you are working on, and another which the other company was working on, and after the merge you want to integrate the team projects from the two TFS servers into one. This is almost impossible to achieve in TFS 2008: though you can manually create the team projects you want to integrate from the other TFS server in one server (in source control), you will lose the history of changesets, work items and other data, which is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008. To resolve issues in these kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture it will help you understand the TFS 2010 architecture easily.

    In the Connect to TFS dialog box in TFS 2010 there are two team project collections; each collection can contain any number of team projects, and on the right side the dialog shows the two team projects in the team project collection (DefaultCollection) I have chosen. Note: you can connect to only one team project collection at a time using an instance of TFS Team Explorer.

    How does it work? To introduce team project collections, the TFS databases were reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for version control, work item tracking, build, integration, project management...). The new TFS 2010 database architecture is:

    TFS_Config: the root database, containing centralized TFS configuration data, including the list of all team project collections that exist on the TFS server.

    TFS_Warehouse: the data warehouse, containing all the reporting data served by this server (farm).

    TFS_*: one database per team project collection, containing all the operational data of that collection regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server.

    3) TFS Farms: as TFS 2010 can be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm if you are going for a multi-server installation of TFS.

    NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. If any application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users that there is a problem.

    SQL data tiers: with 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load-balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation and grow it incrementally by adding ATs and SQL Servers as needed.
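    To make the collection concept concrete, here is a small sketch using the TFS 2010 client object model; the server URL and collection name are made-up placeholders, not values from this article. Note how the connection is always scoped to one collection, matching the Team Explorer behavior described above:

        // Hedged sketch: connecting to a single team project collection with the
        // TFS 2010 client API (Microsoft.TeamFoundation.Client and
        // Microsoft.TeamFoundation.VersionControl.Client assemblies).
        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ListProjects
        {
            static void Main()
            {
                // Each collection has its own URL segment after /tfs/ (placeholder values).
                var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection");
                using (var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(collectionUri))
                {
                    tpc.EnsureAuthenticated();

                    // Services (version control, work items, build) are scoped to this collection.
                    var vcs = tpc.GetService<VersionControlServer>();
                    foreach (TeamProject project in vcs.GetAllTeamProjects(false))
                        Console.WriteLine(project.Name);
                }
            }
        }

    The point is only that every connection is scoped to a single collection URL, which is what lets each collection be backed up, moved, or restored independently.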

    Read the article

  • Documenting a C# Library using GhostDoc and Sandcastle

    - by sreejukg
    Documentation is an essential part of any IT project, especially when you are creating reusable components that will be used by other developers (such as class libraries). Without documentation, re-using a class library is almost impossible. Just think of coding .NET applications without the MSDN documentation (oops, I can't think of it). Normally developers, who already know the bits and pieces of their classes, see writing the details out again to generate documentation as boring work. The amount of work needed to create documentation and keep it in sync with changes also makes manual creation tedious, if not impossible. So what is the effective solution? Let me divide this into two steps:

    1. Generate comments for your code while you are writing the code.
    2. Create a documentation file using these comments.

    Now I am going to examine these processes.

    Step 1: Generate XML comments automatically

    Most developers write comments for their code, and the best thing is that the comments are entered during the development process. Comments also give a good reference to the code and make it more manageable and readable. Later these comments can be converted into documentation, along with your source code, by identifying properties and methods. I found an add-in for Visual Studio, GhostDoc, that automatically generates XML documentation comments for C#. The add-in is available in the Visual Studio Gallery on MSDN; you can download it from http://visualstudiogallery.msdn.microsoft.com/en-us/46A20578-F0D5-4B1E-B55D-F001A6345748. I downloaded the free version, which suits my requirements; there is also a professional version (you need to pay some $ for this) that gives you some more features. The installation process is straightforward: a couple of clicks will do the work for you. The best thing about GhostDoc is that it supports multiple versions of Visual Studio: 2005, 2008 and 2010.

    After installing GhostDoc, when you start Visual Studio the GhostDoc configuration dialog will appear. The first screen asks you to assign a hotkey; pressing this hotkey will enter a comment into your code file with the structure required by GhostDoc. Click Assign to go to the next step, where you configure the rules for generating the documentation from the code file. Click Create to start creating the rules, and click the Finish button to close the wizard. You have now performed the necessary configuration required by GhostDoc, and in the Visual Studio Tools menu you will find a GhostDoc entry with some options.

    Now let us examine how GhostDoc generates comments for a method. I have written the code below in my code-behind file:

        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    Now I need to generate the comments for this function. Select the function and press the hotkey assigned during configuration. GhostDoc will generate the comments as follows:

        /// <summary>
        /// Gets the char.
        /// </summary>
        /// <param name="str">The STR.</param>
        /// <param name="pos">The pos.</param>
        /// <returns></returns>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    So this is a very handy tool that helps developers write comments easily. You can have the XML documentation file generated while compiling the project; this is done by the C# compiler. Enable the XML documentation option (a checkbox) under Project properties -> Build tab.
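    GhostDoc's generated text ("The STR.", "The pos.") is only a starting point and usually needs a human pass. A hedged illustration of what the comment might look like after editing, including an <example> tag that Sandcastle can render in the final help file (the wording is mine, not GhostDoc output):

        /// <summary>
        /// Returns the character at the given position in a string.
        /// </summary>
        /// <param name="str">The string to read from.</param>
        /// <param name="pos">The zero-based position of the character.</param>
        /// <returns>The character found at <paramref name="pos"/>.</returns>
        /// <example>
        /// <code>
        /// char c = GetChar("hello", 1); // 'e'
        /// </code>
        /// </example>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    Sandcastle picks all of this up from the compiler-generated XML file, so the richer the comments, the richer the help file.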
    Now, when you compile, the XML file is created under the bin folder.

    Step 2: Generate the documentation from the XML file

    Sandcastle is the tool from Microsoft that generates MSDN-style documentation from the compiler-produced XML file. The project is available on CodePlex: http://sandcastle.codeplex.com/. Download and install Sandcastle on your computer. Sandcastle is a command-line tool without a rich GUI, and if you want to automate documentation generation you will definitely be using the command-line tools. Since I wanted to generate documentation from the XML file produced in the previous step, I was hoping for a GUI where I could see the options, and there is one: Sandcastle Help File Builder (see the project on CodePlex: http://www.codeplex.com/wikipage?ProjectName=SHFB). You need to install Sandcastle first and then the Sandcastle Help File Builder. From here I assume you have installed both successfully.

    Once you have installed the Help File Builder, it will be available in your All Programs list. Clicking Sandcastle Help File Builder GUI launches the application. First you need to create a project: click File -> New Project, and in the New Project dialog choose a folder to store your project file and give your documentation project a name, then click Save. Now you will see your project properties. Next, in the Project Explorer, right-click Documentation Sources and click Add Documentation Source. A documentation source is a file, such as an assembly or a Visual Studio solution or project, from which information will be extracted to produce API documentation. In the Add Documentation Source dialog I selected the XML file generated by my project; once you add the XML file, you will see the corresponding dll file added automatically by the Help File Builder.

    Now click the Build button and the application will generate the help file. The Build window reports the result of each step. Once the process has completed successfully, navigate to your help project folder (I selected My Documents\Documentation); inside the Help folder you will find the .chm file, and opening it gives you MSDN-like documentation.

    Documentation is an important part of the development life cycle. Sandcastle together with GhostDoc makes the process easy enough that developers can implement documentation in their projects with a few simple steps.

    Read the article

  • Physical Directories vs. MVC View Paths

    - by Rick Strahl
    This post falls into the bucket of operator error on my part, but I want to share it anyway because it describes an issue that has bitten me a few times now, and writing it down might keep it a little stronger in my mind. I've been working on an MVC project for the last few days, and at the end of a long day I accidentally moved one of my View folders from the MVC /Views folder to the project root. It must have happened at the very end of the day before shutting down, because tests and manual site navigation worked fine just before I quit for the night. I checked in my changes and called it a night.

    The next day I came back, started running the app and had a lot of breaks in certain views. Oddly, custom routes to these controllers/views worked, but stock /{controller}/{action} routes would not. After a bit of spelunking I realized "hey, one of my View folders is missing", which made some sense given the error messages I got. I looked in the recycle bin - nothing there - so rather than try to figure out what the hell happened, I just restored from my last SVN check-in. At this point the folders are back… but… view access still ends up breaking for this set of views. Specifically I'm getting the Yellow Screen of Death with: CS0103: The name 'model' does not exist in the current context. Here's the full error:

        Server Error in '/ClassifiedsWeb' Application.
        Compilation Error
        Description: An error occurred during the compilation of a resource required to service this
                     request. Please review the following specific error details and modify your source
                     code appropriately.
        Compiler Error Message: CS0103: The name 'model' does not exist in the current context
        Source Error:
        Line 1: @model ClassifiedsWeb.EntryViewModel
        Line 2: @{
        Line 3:     ViewBag.Title = Model.Entry.Title + " - " + ClassifiedsBusiness.App.Configuration.ApplicationName;
        Source File: c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml    Line: 1
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.272

    Here's what's really odd about this error: the views now do exist in the /Views/Classifieds folder of the project, but it appears as if MVC is trying to execute the views directly. This is getting pretty weird, man! So I hooked up some breakpoints in my controllers to see if my controller actions were getting fired - and sure enough it turns out they were not, but only for those views that were previously 'lost' and then restored from SVN. WTF? At this point I was thinking I must have messed up one of the config files, but after some more spelunking, and realizing that all the other controller views worked, I gave up on that idea. The config's gotta be OK if other controllers and views are working.

    Root Folders and MVC Views don't mix

    As I mentioned, the problem was that I had inadvertently managed to drag my View folder to the root folder of the project. In my FUBAR'd project structure, after I copied the /Views/Classifieds folder back from SVN, there was the actual folder in /Views and the accidental copy sitting at the root. I of course did not notice the /Classifieds folder at the root because it was excluded and didn't show up in the project. Now, before you call me a complete idiot, remember that this happened by accident - an accidental drag probably just before shutting down for the night. :-) So why does this break?
    MVC should be happy with views in the /Views/Classifieds folder, right? While MVC might be happy, IIS is not. The fact that there is a physical folder on disk takes precedence over MVC's routing: if a URL matches a physical path, the physical path is accessed first. What happens here is that essentially IIS tries to execute the .cshtml pages directly, without ever routing to the controller methods. In the error page I showed above, my clue should have been that the view was served as c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Classifieds\Show.cshtml rather than c:\Projects2010\Clients\GorgeNet\Classifieds\ClassifiedsWeb\Views\Classifieds\Show.cshtml. But of course I didn't notice that right away, just skimming to the end and looking at the file name. The reason that /classifieds/list actually fires that file is that the ASP.NET Web Pages engine looks for physical files on disk that match a path. IOW, when calling Web Pages you drop the .cshtml off the Razor page and IIS will serve it just fine. So /classifieds/list looks for /classifieds/list.cshtml and executes that script. And that is exactly what's happening: Web Pages tries to execute the .cshtml file and fails, because Web Pages knows nothing about the @model tag, which is an MVC-specific template extension. This is why my breakpoints in the controller methods didn't fire, and it also explains why the error says the @model keyword is invalid (@model is an MVC-provided template enhancement to the Razor engine). The solution of course is super simple: delete the accidentally created root folder and the problem is solved.

    Routing and Physical Paths

    I've run into problems with this before, actually. In the past I've had a number of applications with a physical /Admin folder which would conflict with an MVC Admin controller. More than once I ended up wondering why the index route (/Admin/) was not working properly. If a physical /Admin folder exists, /Admin will not route to the Index action (or whatever default action you have set up), but will instead try to list the directory or show the default document in the folder. The only way to force the index page through MVC is to explicitly use /Admin/Index. Makes perfect sense once you realize the physical folder is there, but that's easy to forget in an MVC application. As you might imagine, after a few times of running into this I gave up on the Admin folder and moved everything into MVC views to handle those operations. Still, it's one of those things that can easily bite you, because the behavior and error messages seem to point at completely different problems.
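    To illustrate the precedence issue concretely, here is a hedged sketch: this is just the stock MVC default route registration, not code from this project. With a physical /Admin folder present, a request for /Admin never reaches the mapping below, because IIS matches the directory first.

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            protected void Application_Start()
            {
                // A physical /Admin folder on disk shadows this route for "/Admin":
                // IIS serves the directory's default document instead of calling
                // AdminController.Index(). Only "/Admin/Index" forces the route.
                RouteTable.Routes.MapRoute(
                    "Default",                              // route name
                    "{controller}/{action}/{id}",           // URL pattern
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );

                // Setting RouteTable.Routes.RouteExistingFiles = true forces routing even
                // when a matching file or folder exists, but it affects static content
                // too, so it is not a drop-in fix.
            }
        }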
    Moral of the story: if you see routing problems where routes are not reaching obvious controller methods, always check to make sure there isn't a physical path being mapped by IIS instead. That way you won't feel stupid like I did after trying a million things for about an hour before discovering my sloppy mousing behavior :-)

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in MVC, IIS7

    Read the article

  • Real World NuGet

    - by JoshReuben
    Why NuGet? A higher level of granularity for managing references: when you have solutions of many projects that depend on solutions of many projects, etc., you escape from Solution Hell.

    Links:
    · Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a NuGet package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure

    POC Setup Steps

    · Install Package Explorer.
    · Source: create a source solution and configure the output directory for its projects (Project > Properties > Build > Output Path).
    · Package: add assemblies to the package from the output directory (drag and drop) - add the net folder. Then File > Export - save the .nuspec file and lib contents:

        <?xml version="1.0" encoding="utf-16"?>
        <package>
          <metadata>
            <id>MyPackage</id>
            <version>1.0.0.3</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <summary />
          </metadata>
        </package>

      File > Save then saves the .nupkg file.
    · Create the target solution. In Tools > Options, configure the package source, add the package and select projects. Output from the package manager (PowerShell console):

        ------- Installing...MyPackage 1.0.0 -------
        Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
        Successfully installed 'MyPackage 1.0.0'.
        Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
        Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
        Added file 'packages.config'.
        Added file 'packages.config' to project 'AssemblyX'
        Added file 'repositories.config'.
        Successfully added 'MyPackage 1.0.0' to AssemblyX.
    The Packages folder is created at the solution level, and a packages.config file is generated in each project:

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="MyPackage" version="1.0.0" targetFramework="net40" />
        </packages>

    A local Packages folder is created for each package version installed; each folder contains the downloaded .nupkg file and its unpacked contents, e.g. the dlls that the project references. Note: this folder is not checked in.

    Update procedure

    Configure the Package Manager to automatically check for updates, then browse packages: it automatically picks up the updates. In detail: modify the source, change the source version in the assembly info, and build the source. Open the last package in Package Explorer, increment the package version number and re-add the assemblies, then save the package with the new version number and export its definition. In the target solution, go to Tools > Manage NuGet Packages, click All to trigger a refresh, then click Recent Packages to see the updates. If this is problematic, delete the packages folder.

    Versioning

        uninstall-package mypackage
        install-package mypackage –version 1.0.0.3
        uninstall-package mypackage
        install-package mypackage –version 1.0.0.4

    Dependencies

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>MyDependentPackage</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework4.0">
                <dependency id="MyPackage" version="1.0.0.4" />
              </group>
            </dependencies>
          </metadata>
        </package>

    Using NuGet without committing packages to source control (http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages): right-click the Solution node in Solution Explorer and select Enable NuGet Package Restore - recall that the packages folder is not part of the solution. If you get a "downloading package 'Nuget.build' failed" error, configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org. To test connectivity: get-package –listavailable. To test NuGet Package Restore, delete the packages folder and open Visual Studio as admin. In the NuGet MSBuild file:

        <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFS Build integration

    Modify the Nuget.targets file:

        <RestorePackages Condition=" '$(RestorePackages)' == '' ">
          True
        </RestorePackages>
        …
        <PackageSource Include="\\IL-CV-004-W7D\Packages" />

    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.

    NuGetter TFS build integration

    Add the NuGetter build process templates to TFS source control, and for the build controller specify the location of the custom assemblies. Generate a .nuspec file from Package Explorer (File > Export) and edit the file elements - remove the path info from the src and target attributes:

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework3.5" />
            </dependencies>
          </metadata>
          <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
            …

    Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec. Then add a build process definition based on the NuGetter build process template and configure the build process, specifying:
    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path

    Copy DLLs to a binary folder

    1) Set Copy Local to false for an assembly reference.
    2) Use the MSBuild Copy task - modify the .csproj file (http://msdn.microsoft.com/en-us/library/3e54c37h.aspx):

        <ItemGroup>
          <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
        </ItemGroup>
        <Target Name="BeforeBuild">
          <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
        </Target>

    3) Set the probing assembly search path from app.config (http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx):

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="SourceAssemblies"/>
            </assemblyBinding>
          </runtime>
        </configuration>

    Forcing 'Copy Local = false'

    The following generic PowerShell script was added to the package's install.ps1:

        param($installPath, $toolsPath, $package, $project)
        if ($project.Object.Project.Name -ne "CopyPackages")
        {
            $asms = $package.AssemblyReferences | %{ $_.Name }
            foreach ($reference in $project.Object.References)
            {
                if ($asms -contains $reference.Name + ".dll")
                {
                    $reference.CopyLocal = $false;
                }
            }
        }

    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.
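    As a side note, the "change the source version in the assembly info" step in the update procedure above refers to the attributes in Properties/AssemblyInfo.cs. A hedged illustration, using the package version from this walkthrough:

        // Properties/AssemblyInfo.cs - bump these before rebuilding and repacking.
        // The values mirror the 1.0.0.4 package version used above; they are
        // illustrative, not prescribed by NuGet.
        using System.Reflection;

        [assembly: AssemblyVersion("1.0.0.4")]
        [assembly: AssemblyFileVersion("1.0.0.4")]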

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT (Oracle), by Monica Mehta.

    Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed not only a transformation in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. "Fifteen years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive, you have direct and complete visibility into what's going on in the organization, at a level of detail at which you can actually consume that information." Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

    Profit: What do you feel are the market dynamics that are changing project management today?

    Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies - the very competitive companies - from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

    Profit: What is the role of project management in this new landscape?

    Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment, finding the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
    But it can only be obtained through proper assessment of existing projects, and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks - even for a few hours - that don't line up with corporate goals will, in many ways, become a competitive differentiator.

    Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

    Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise, and also making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

    Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

    Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so: you have to get people to want to participate in the system. This can't be mandated down from the top; it simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces such as iOS applications and smartphone interfaces for the iPhone and Android platforms.

    Profit: How does this translate into the development of Oracle's Primavera solutions?

    Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via email. This functionality leverages the fact that users are in their email system throughout the day, and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
    Do it in such a way that it's non-intrusive, and in such a way that it's easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • Projected Results: Sound project management practices, combined with a complete technology platform, have an immediate and lasting impact on an organization’s bottom line.

    - by Melissa Centurio Lopes
    Article by Alan Joch, a business and technology writer who specializes in enterprise applications, cloud computing, mobile computing, and the Web.

    It's no secret that complex, large-scale projects need close management controls to ensure that they're delivered on time and on budget. But now there's growing evidence that failing to meet these goals can have far-reaching consequences, not only for the reputations and value of individual organizations but also for the tenure of their top executives. Government watchdogs forced one large contractor to suspend a multibillion-dollar defense program - and delay payment receipts - until a better management system was launched to more accurately track spending, project milestones, and other fundamental metrics. Significant delays in the opening of the £4.3 billion Terminal 5 at Heathrow Airport impaired an airline's operations and contributed to a drop in its share price. These real-world examples are noteworthy because of the huge financial risks they created, and they're far from isolated cases. Research by the Economist Intelligence Unit found that only 11 percent of companies claimed they delivered the expected ROI on major capital projects 90 percent of the time or more. In addition, 12 percent of respondents said they achieved planned ROI less than half the time. According to Phil Thornton, lead consultant at the analyst firm Clarity Economics, the numbers demonstrate obvious challenges related to managing risks, accurately predicting ROI, and consistently delivering bottom-line growth for major capital investments. "Portfolio management is a path to improve your organization's competitive advantage. It helps make sure your organization is investing in the right things and not spending its time on things that are not delivering the intended results for the firm."

    Read the full article here.

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding; I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests, to get them to say what I really mean, than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, that at this point the level of frustration mixed with the disappointing lack of payoff is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files, which is not much more useful, and maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit-testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • How to manage a Closed Source High-Risk Project?

    - by abel
    I am currently planning to develop a J2EE website and wish to bring in one developer and one web designer to assist me. The project is a financial app with a niche market. I plan to keep the source closed. However, I fear that my would-be employees could easily copy the codebase and use it or sell it to a third party, especially when they switch jobs. The app development will take 4-6 months, perhaps more, and I may have to bring in people after the app goes live. How do I keep the source to myself? Are there techniques companies use to guard their source? I foresee disabling pen drives and DVD writers on my development machines, but uploading data or attaching the code to an email would still be possible. My question is incomplete, but programmers who have been in my situation, please advise: how should I go about this? Building a team, maintaining code secrecy, etc. I am also looking to sign a secrecy contract with the employees if needed. (Please add relevant tags.)

    Update: Thank you for all the answers. I certainly won't be disabling all USB ports and DVD writers now, but I think I should be logging activity (how exactly should I do that?). I am wary of scalpers who would join and then run off with the existing code. I haven't met any, but I have been advised to be wary of them. I would include a secrecy clause, but given that this is a startup with almost no funding, in a highly competitive business niche with bigger players in the field, I doubt I would be able to detect or pursue any scalpers. How do I hire people I trust when I don't know them personally? Their resumes will be helpful, but otherwise trust will develop only with time. Finally, even if they do run away with the code, it is service that matters after the sale is made, so I am not really worried for the long term.

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So we've recently finished the first phase of our project. We used agile with fortnightly sprints, and while the application turned out well, we're now turning our eyes to some of the maintenance tasks. One such task is that all of our documentation exists in the form of specs. Each spec describes one or more stories and is generally a body of work that a few devs could knock over in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do. From a documentation point of view, however, this has become a mess. The problem with writing specs focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles: one could be describing a standard function, another parts of a workflow, another a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document covering a particular business rule or function is quite hard, because you don't know which document holds the information, and making a change request is equally hard, because once again we are unsure about which spec to change. So we have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, and in this time I'd like to revisit our processes. I think the way we have worked so far, delivering fortnightly specs, works well, but we also need a way to manage our documentation so that the business rules for a given function/workflow are easy to locate and change. I have two ideas. One is that we compile all of our specs into a series of master specs, broken up by a few broad functional areas: the specs describe the sprint, the master specs describe the system. The only problems I can see are: 1) our existing 120 specs are not all neatly defined into broad functional areas; some will require breaking up, merging, etc., which will take a lot of time; 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work, and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big a mess and manage that mess going forward: we go through each spec, assign keywords to it, and then when we want to find a function, we search for the keyword. The problem I can see is that the business rules are still scattered everywhere; keywords just make them easier to find. Anyway, if anyone has any decent ideas, or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Why aren't the scripts in my autoload folder being executed in Vim?

    - by Codemonkey
    I'm trying to use Pathogen to manage my Vim extensions. My bundle folder looks like this:

        bundle/
        +-- vim-pathogen
        |   +-- autoload
        |       +-- pathogen.vim
        +-- vim-smoothscroll
            +-- autoload
                +-- smooth_scroll.vim

    And my vimrc file includes this:

        let s:root = fnamemodify(resolve(expand("<sfile>:p")), ":h")

        " Initiate pathogen.
        exec "source " . s:root . "/vimfiles/bundle/vim-pathogen/autoload/pathogen.vim"
        exec pathogen#infect()

    My vimrc file is a symlink located in ~ but pointing to a folder inside my Dropbox folder. This appears to work when I start Vim; Pathogen has added vim-smoothscroll to my runtimepath:

        :set runtimepath?
        runtimepath=~/Dropbox/Personal/config_sync/vim/vimfiles,~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-pathogen,~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-smoothscroll,~/.vim,~/vim/share/vim/vimfiles,~/vim/share/vim/vim74,~/vim/share/vim/vimfiles/after,~/.vim/after

    The problem is that the script smooth_scroll.vim hasn't been loaded:

        1: ~/.vimrc
        2: ~/Dropbox/Personal/config_sync/vim/vimfiles/bundle/vim-pathogen/autoload/pathogen.vim
        3: ~/vim/share/vim/vim74/syntax/syntax.vim
        4: ~/vim/share/vim/vim74/syntax/synload.vim
        5: ~/vim/share/vim/vim74/syntax/syncolor.vim
        6: ~/vim/share/vim/vim74/filetype.vim
        7: ~/vim/share/vim/vim74/menu.vim
        8: ~/vim/share/vim/vim74/autoload/paste.vim
        9: ~/Dropbox/Personal/config_sync/vim/vimfiles/colors/codeschool.vim
        10: ~/Dropbox/Personal/config_sync/vim/_vimrc_gui
        11: ~/Dropbox/Personal/config_sync/vim/_vimrc_keybinds
        12: ~/vim/share/vim/vim74/plugin/getscriptPlugin.vim
        13: ~/vim/share/vim/vim74/plugin/gzip.vim
        14: ~/vim/share/vim/vim74/plugin/matchparen.vim
        15: ~/vim/share/vim/vim74/plugin/netrwPlugin.vim
        16: ~/vim/share/vim/vim74/plugin/rrhelper.vim
        17: ~/vim/share/vim/vim74/plugin/spellfile.vim
        18: ~/vim/share/vim/vim74/plugin/tarPlugin.vim
        19: ~/vim/share/vim/vim74/plugin/tohtml.vim
        20: ~/vim/share/vim/vim74/plugin/vimballPlugin.vim
        21: ~/vim/share/vim/vim74/plugin/zipPlugin.vim
        22: ~/vim/share/vim/vim74/syntax/ruby.vim
        23: ~/vim/share/vim/vim74/syntax/vim.vim
        24: ~/vim/share/vim/vim74/syntax/python.vim

    Why is that? Loading the script manually works fine.

    Read the article

  • How can I access a webDAV folder as a UNC share?

    - by Amar
    First of all, I am just getting to know about webDAV and appreciate your patience. I have a virtual directory on IIS 6 (Windows 2003) that is based on a network share on a file server different from the web server: something like www.mysite.com/myreports, where myreports is based on \\myfileserver\reports. I have been asked by the network folks not to use the UNC path like that, for security reasons, and to explore using webDAV instead. What I have been told is that webDAV can give me a UNC path without requiring the ports that a file server UNC path needs to be opened. Then I can use the new UNC path to map my virtual directory, and my asp.net code will not require any change. Help: being new to webDAV, I did some research on the web. I can now create a web folder on the file server; I put IIS on the file server as well. I can browse the content as http://fileserver/mywebDAVreports, which is based on the reports folder. But I don't know how to get a UNC path for this web folder so I can map my virtual directory on the web server. I appreciate any help on this. Regards, Amar

    Read the article

  • Moving symlinks into a folder based on id3 tags.

    - by Reti
    I'm trying to get my music folder into something sensible. Right now, I have all my music stored in /home/foo, so I have all of the albums soft-linked to ~/music. I want the structure to be ~/music/<artist>/<album>. I've got all of the symlinks into ~/music right now, so I just need to get them into the proper structure. I'm trying to do this by delving into each symlinked album and getting the artist name with id3info. I can do this, but I can't seem to get it to work correctly:

        for i in $( find -L $i -name "*.mp3" -printf "%h\n")
        do
            echo "$i" # testing purposes
            # find its artist
            # the stuff after "read file" just cuts up id3info to get just the artist name
            #$artist = find -L $i -name "*.mp3" | read file; id3info $file | grep TPE | sed "s|.*: \(.*\)|\1|"|head -n1
            # move it to correct artist folder
            #mv "$i" "$artist"
        done

    Now, it does find the correct folder, but every time there is a space in the dir name it turns it into a newline. Here's a sample of what I'm trying to do:

        $ ls
        DJ Exortius/
        The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@

    I'm trying to mv 'The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]' into the real directory 'DJ Exortius'. 'DJ Exortius' already exists, so it's just a matter of moving the symlink into the correct directory based on the id3 tag of the mp3 inside. Thanks! PS: I've tried easytag, but when I restructure the album, it moves it out of /home/foo, which is not what I want.

    Read the article

  • RHEL 5.3 Kickstart - How do I specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO. And I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk; Client and Workstation. The packages install fine for the ones that are physically located under the Client folder. It cannot find those under the Workstation folder such as as doxygen and subversion complaining that packages do not exist. Is there a way to specify the individual package location? # ----------------------------------------------------------------------------- # P A C K A G E S # ----------------------------------------------------------------------------- %packages @gnome-desktop @core @base @base-x @printing @development-tools emacs kexec-tools fipscheck xorg-x11-server-Xnest xorg-x11-server-Xvfb #Packages Located in Workstation Folder *** Install can not find any of these ?? bison doxygen gcc-c++ subversion zlib-devel freetype-devel libxml2-devel Thanks in advance, -Ed

    Read the article

  • How can I recover files when a folder shows empty but the files are not deleted?

    - by Borror0
    Yesterday, my laptop caught a virus which caused massive damage. Since then, I have been trying to recover important files before reformatting my computer, a task the virus has not made easy. Restore points predating the attack have been deleted. Most of my folders show empty. My Start menu is essentially empty, with the exception of Trillian and Mirror's Edge. The same goes for my Desktop, which only has programs that were installed after the attack. Searching for files through my computer is pretty much useless, as it only rarely brings anything up. I suspect most of my files have not been deleted. While my folders show empty, uTorrent still displays them, and I can open files from there. Unfortunately, when I select Open Containing Folder, the folder still shows as completely empty, even if I'm currently watching a video from that very folder. Further adding evidence to the not-deleted, just-missing theory: the data recovery software I'm using (Restoration) can only find a handful of the missing files. If they were deleted, I could do a forensic recovery to get them back, but since they're probably still somewhere on my computer, just out of my reach, I can't find them. Under those circumstances, is there a way I can recover those files?
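    A hedged guess, based on how a lot of malware from that era behaved (it marks files and folders hidden and system rather than deleting them; the path below is a placeholder for your own profile, and it's worth confirming with dir /a before changing anything): clearing those attributes recursively from a command prompt may bring everything back into view.

        rem list everything, including hidden/system entries, to confirm the theory
        dir /a /s "C:\Documents and Settings\YourName"
        rem then clear the hidden and system attributes on files and folders
        attrib -h -s "C:\Documents and Settings\YourName\*" /s /d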

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my ftp client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect to the server over the ftp protocol and download the directory in question. I have searched around on the internet and attempted to write the command, but it keeps failing. So, assuming the following:

    ftp username: [email protected]
    ftp host: ftp.s12345.gridserver.com
    ftp password: somepassword

    I have tried to write the command in the following ways:

    wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
    wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    The first way gives this error: Bad port number. The second way gets a little further but ends with:

    Resolving s12345.gridserver.com... 71.46.226.79
    Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
    Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
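    A hedged reading of those errors (worth verifying against your account details): the username itself contains an @, so in the first command wget splits the URL at the first @ and tries to parse :somepassword as a port number, and in the second the truncated username serveradmin is simply rejected. Percent-encoding the @ in the username as %40, or passing the credentials through wget's flags so the URL stays clean, may get past both:

        wget -r "ftp://serveradmin%40s12345.gridserver.com:[email protected]/path/to/desired/folder/"
        # or, equivalently, keep the credentials out of the URL:
        wget -r --ftp-user='[email protected]' --ftp-password='somepassword' \
             "ftp://ftp.s12345.gridserver.com/path/to/desired/folder/"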

    Read the article

  • Made a .dmg for a project; user can't open it - "no mountable file systems"

    - by dragonridingsorceress
    Hello, we don't know a great deal about Macs. We had to make an installer and were told to try a .dmg, so we put together version 1, and it seemed to work. We had one application file, which had our icon, and one folder. The user was instructed to drag these into the Applications folder, to which the dmg contained a Mac-style shortcut (an alias). Then we were told we needed to update files, and were assured that we could do so via drag-and-drop. So we did; we dragged them into the folder in the dmg. We tested it (on the computer we were using to edit the dmg) and it seemed to work, so we burnt it onto a disk (along with a Windows installer that actually works!). I've just gotten an email from the recipient, who has a Mac laptop. She inserted the disk, double-clicked on it, double-clicked on the .dmg, and got a warning: no mountable file systems. Screenshot: http://www.flickr.com/photos/97292258@N00/5101670174/ I have the dmg (not on a disk) and am able to open it with no difficulty. How can we get it to work for our recipient?
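    A hedged suggestion, not a diagnosis (the filenames below are placeholders): "no mountable file systems" usually means the image the recipient got is corrupt or in a variant her system can't mount, and since your local copy opens fine, the burn is the prime suspect. Verifying your master and re-exporting it as a read-only compressed image, which is the most widely compatible dmg format, before burning again may help:

        hdiutil verify installer.dmg
        hdiutil convert installer.dmg -format UDZO -o installer-final.dmg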

    Read the article

  • How to store Movies on a separate volume from the iTunes media folder?

    - by Manca Weeks
    I have a rather enormous Music collection. The music itself is approaching the 1TB mark. I am storing that on an external drive already. My iTunes library files are in their default location (/Users/me/Music/iTunes). My iTunes media folder is on an external drive /Volumes/iTunes/iTunes Music This has been working as expected. Now I would like to store just the contents of the Movies folder in the iTunes media folder on a separate drive. Apparently, iTunes doesn't like aliases or symlinks. I saw somewhere that one could mount a volume in a different directory than the default /Volumes. I would like to permanently mount my new Movies volume in the directory /Volumes/iTunes/iTunes Music/Movies. I know there is a command to do this, but how does one configure Mac OS 10.6.4 to always automatically mount that volume in this directory? I hope someone can enlighten me... If I find a solution, I can finally import all my movies into iTunes and be able to search them and stuff - it would be a dream. Thanks, M
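    A hedged sketch of one way to do this on 10.6 (the UUID below is a placeholder; read the real one with diskutil, and note the Movies volume can only mount there if the iTunes volume is already mounted): OS X honors /etc/fstab entries edited with vifs, and spaces in a mount point are written as \040.

        sudo mkdir -p "/Volumes/iTunes/iTunes Music/Movies"
        diskutil info /Volumes/Movies | grep "Volume UUID"
        sudo vifs
        # then add a line such as:
        # UUID=1234ABCD-56EF-7890-ABCD-1234567890AB /Volumes/iTunes/iTunes\040Music/Movies hfs rw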

    Read the article

  • How do I use a list of filenames to find the folder on my hard drive that contains the most matches of these filenames?

    - by Web Master
    I need a program that will use a list of file names to find the folder on my hard drive that contains the most of those filenames. Long story short, I made a giant map. This map was live and got ruined. New map data files have been generated, and previous map data files have been altered. What does this mean? It means file sizes have changed, and there will be new files that were never in the backup folder. Some map files could also have been generated by other projects, so there could be filenames on my computer not associated with this map at all, because of the way the files are named when created. So if I take an individual file, for example r.-1.-1.mca, that filename could show up on my hard drive 10 times. Anyway, the goal is to take 100 map filenames, turn them into a list, and then search the hard drive for the folder that has the highest count of matching names. Can anyone figure out a way to do this? Otherwise I am thinking about manually searching for every single file.
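    A hedged sketch (assuming a Unix-style shell with GNU find is available, e.g. via Cygwin, and that filenames.txt holds one map filename per line; the search root is a placeholder): print the directory of every match, then tally matches per directory and show the best-scoring folders first.

        # one find pass per filename; %h prints the directory of each hit
        while read -r name; do
            find /cygdrive/c -type f -name "$name" -printf '%h\n' 2>/dev/null
        done < filenames.txt | sort | uniq -c | sort -rn | head
        # output: match count, then folder path, best candidate on top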

    Read the article

  • Folder doesn't show up in Explorer, cmd, or Python, even though I can access it; how can I fix this?

    - by Miebster
    I am accessing another computer on the network using a mapped network drive. The path looks like \\192.168.0.100\d$, which is mapped to my computer's M: drive. I can access, view, create, delete, and move folders on this drive. However, some folders don't show up in Windows Explorer, even though I can access them. For example, say that M:\stuff\more_stuff is a directory.

    What I can't do:
    When Windows Explorer is pointed at M:\stuff, I can't see more_stuff.
    In a cmd prompt at M:\stuff, dir doesn't find more_stuff.
    In a cmd prompt at M:\stuff, dir /a doesn't find more_stuff either.
    In Python, os.listdir at M:\stuff doesn't find more_stuff.

    What I can do:
    Typing M:\stuff\more_stuff into the address bar lets me access the folder like normal.

    Because there is no indication that this folder even exists, there could be more like it; I have no way of knowing how many folders are magically hidden on this mapped drive. What steps can I take to figure out why this folder is hidden, with the end goal of making it no longer hidden?

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >