Search Results

Search found 4474 results on 179 pages for 'git workflow'.

Page 119/179 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • Log ports opened by an application

    - by Simon A. Eugster
    I'm searching for something like:

        tcpdump -p PID        # but tcpdump does not know about PIDs
        lsof -i --continuous  # but lsof just runs once and exits, no «live logging»

    i.e. a way to log which connections an application opens. In my case, I want to find out which port git connects to when committing. This happens in a fraction of a second, so I cannot catch it with a one-shot lsof. If there is a lot of traffic, filtering by PID or process name would be useful.
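
    A hedged sketch of two alternatives, assuming a Linux box with strace and a reasonably recent lsof installed (the git remote and branch below are placeholders):

        # trace the network system calls of the whole git invocation, children included
        strace -f -e trace=network git push origin master 2>&1 | grep connect
        # or poll once per second with lsof's repeat mode (coarser; may miss
        # connections shorter than the interval)
        lsof -i -a -c git -r 1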

    Read the article

  • SSH dns issue giving break-in error

    - by psion
    Address ..*.* maps to ec2---*-*.compute-1.amazonaws.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT! I keep getting this when I try to log in to my remote server. I have it set up for key authentication, and when this error comes through I still get prompted for the password. I want to use this for automated Git pulls, and I can't have this kind of error message. Anybody know what is going on here and how to fix it?
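
    For what it's worth, that warning usually comes from sshd's reverse-DNS consistency check, which EC2's generic PTR records fail by design; the password prompt despite key authentication is a separate issue (often ~/.ssh permissions). A hedged sketch of the common workaround, assuming a Debian/Ubuntu-style server:

        # on the server: skip the forward/reverse DNS cross-check
        echo "UseDNS no" | sudo tee -a /etc/ssh/sshd_config
        sudo service ssh restart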

    Read the article

  • Cannot find the "variables.vcl" file in Varnish Security

    - by Vladimir
    Varnish Security's main.vcl contains

        # clear all internal variables
        include "/etc/varnish/security/build/variables.vcl";

    and

        # fallthrough: clear all internal variables on security.vcl_recv exit
        include "/etc/varnish/security/build/variables.vcl";

    but /etc/varnish/security/build/variables.vcl is not included in the git repository. I commented the includes out and it works fine, but where can I get that file?
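
    If the generator that produces the file isn't available, one stopgap (an assumption on my part, equivalent in effect to commenting the includes out) is to satisfy them with an empty file:

        # hypothetical stopgap: an empty VCL file satisfies the include
        sudo mkdir -p /etc/varnish/security/build
        sudo touch /etc/varnish/security/build/variables.vcl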

    Read the article

  • Mercurial (hg) commit only certain files

    - by bresc
    Hi, I'm trying to commit only certain files with hg. Because hg seems to auto-add, whenever I try to commit a change it wants to commit all files. But I don't want that, because certain files are not "ready" yet. There is hg commit -I thefile.foo, but this covers only one file. Better for me would be to turn off auto-add, as in git. Is this possible? thx
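
    For the record, Mercurial does not auto-add untracked files; it commits every modified tracked file by default. A minimal sketch with hypothetical file names: -I can be repeated, and hg commit also accepts explicit file arguments.

        # commit only the named files; other modified files stay uncommitted
        hg commit -m "partial commit" ready1.py ready2.py
        # equivalent, using repeated include patterns
        hg commit -m "partial commit" -I ready1.py -I ready2.py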

    Read the article

  • Computer "Server"

    - by user328379
    At home we had the idea that, instead of buying 3 different PCs, we would somehow create a "server" for the computers, where a cable would run to each of our screens, keyboards and mice while the actual PC sat somewhere else in the house with all the others. Does such a thing exist? And is it possible to use such a setup for high-performance work (compiling, high-end games, just as if it were a separate PC)?

    Read the article

  • What are optimal strategies for using mapreduce and other applications on the same server?

    - by user45532
    I have two applications that I need to run continuously to process data:
    1.) An app that processes and aggregates information from sources
    2.) A MapReduce workflow that processes the above info
    I've thought about either getting VPS hosting or getting my own inexpensive server and using Xen to split its resources. Getting a quad-core box with 2 GB of RAM seems a lot cheaper than the grid options I've seen at Slicehost, Rackspace and others...

    Read the article

  • Save as PDF to Folder Script for Apple Mail

    - by patte
    I want to add a script to a rule to auto-print messages as PDF to a specific folder. What I found was this: http://is.gd/acmsE but it doesn't work for me on OS X 10.6. I already know the Save as PDF command and that you can create simple aliases in the "PDF Services" folder, but I need something automated for this task. I searched the web the whole morning and also tried to write my own Automator workflow, but couldn't manage it. Any help is highly appreciated.
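
    One possible building block (an assumption on my part, not taken from the linked script): if a rule's script can save the message out as text, the cupsfilter tool that ships with OS X can convert it to PDF on the command line. A sketch with hypothetical paths:

        # hypothetical: convert an already-saved message to PDF via CUPS
        cupsfilter ~/Mail/saved-message.txt > ~/PDFs/saved-message.pdf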

    Read the article

  • Using www-data through SSH

    - by Fluidbyte
    For development purposes I'm using www-data (on an Ubuntu 11.10 server) to ssh in and fire git commands and basic stuff against the webroot. I don't have things like command history, coloring, etc. like I do when I ssh in as any other user, so I'm curious how to get this working. I'm assuming I need a .bashrc file, but I'm not sure what to include or (more importantly, since I could just copy the one from another user) where it goes.
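
    A hedged sketch, assuming Ubuntu's defaults (www-data's home directory is /var/www, skeleton dotfiles in /etc/skel): per-user shell setup such as .bashrc simply goes in the account's home directory.

        # give www-data the stock interactive-shell dotfiles
        sudo cp /etc/skel/.bashrc /etc/skel/.profile /var/www/
        sudo chown www-data:www-data /var/www/.bashrc /var/www/.profile
        # history and colors also assume a bash login shell
        sudo chsh -s /bin/bash www-data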

    Read the article

  • VMWare/Ubuntu development stack with symlink to Windows 7

    - by wdhilliard
    I would like to use a VM Ubuntu installation as my testing environment, but to ease workflow, I have symlinked /var/www to a Windows share. Everything looks good when browsing files, and the owner and group both show up as www-data, but I cannot seem to get Apache to respond with anything other than permission denied. Obviously there are still some permission issues between Windows 7 and Ubuntu, but I don't know where to go from here.
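
    Two things worth checking, as a hedged sketch with a hypothetical mount point: whether www-data can traverse the share at all (VMware hgfs mounts often come up owned by root with restrictive modes), and whether Apache is allowed to follow the symlink.

        # can the web server's user even read the share?
        sudo -u www-data ls -l /mnt/hgfs/myshare
        # Apache must also permit the link in the relevant <Directory> block:
        #     Options +FollowSymLinks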

    Read the article

  • [mac] Running an Automator task at night

    - by Roberto Aloi
    I've just created an Automator workflow and I'm using crontab to execute it at night. The problem is that, after a while, the machine goes to sleep and a password is required (this is the intended behaviour). Unfortunately, Automator seems to be unable to perform the task when the password is required. It works like a charm if the "password required" dialog is not displayed (before the sleep). Any suggestions on how to solve this?
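
    One workaround worth trying, assuming the Mac is on AC power: schedule it to wake (or power on) shortly before the cron job fires, using pmset's repeating schedule. Whether Automator can then drive the GUI still depends on the screen being unlocked.

        # wake or power on every day at 02:55, ahead of a 03:00 cron entry
        sudo pmset repeat wakeorpoweron MTWRFSU 02:55:00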

    Read the article

  • Switching efficiently between windows, not apps, in OS X

    - by Vultan
    Previous questions have asked "how can I efficiently switch between windows, not applications, in OS X?" (Switching windows on OS X, Switch between windows on Mac OS X? and others). The most recommended suggestions seem to be:

    - Use some combo of cmd-tab and cmd-~.
    - Use Expose, and possibly Spaces.
    - Use Witch.

    I spent the money on Witch, and have been using it for a few weeks; it's ok, but it is sometimes slow to respond, sometimes buggy on window order, crashes my system if I disable and re-enable it too many times, and doesn't work properly with X11 apps. The built-in cmd-tab and cmd-~ are ok, but still bring an entire application to the forefront.

    I find a very common workflow I use is to bounce back and forth between two windows (for example, a browser window and a Thunderbird email in progress), when both apps (the browser and email software) have multiple windows open. I can use Cmd-Tab to get back and forth between apps, but whenever I switch to an app, ALL windows from that app pop up. That suddenly fills my screen with irrelevant data and windows, and often drops those other windows in front of the single window from the other app that I was using and would conveniently like to keep viewing even though it isn't in focus.

    Expose seems to be the preferred "OS X natural way," but I can't seem to get myself to use it efficiently. I hit F9, and see 10 windows; I then need to squint, try to find the window I want, then use the mouse or the cursor keys to navigate to the one I want. Given the number of power users who say they use Expose, I must be missing the boat here.

    My goal is not to make this a repeat of previous questions. I'm not asking "what are my alternatives?" (unless I've missed one above!) Rather, I'm asking: what are you, OS X power users, actually doing to handle the use case I described above? Another common use case for me is having multiple Excel spreadsheets open and multiple browser windows open, and I'm rapidly switching back and forth between one spreadsheet in particular and one browser window. Every time I Cmd-Tab, all spreadsheets or all browser windows appear: I don't want to see the ones I'm not working with, and they tend to hide the windows from the alternative app that I don't have in focus but I'd like to at least eyeball.

    Can you describe what your workflow is like, and how you rapidly and thoughtlessly switch between windows from apps that have multiple windows open?

    Read the article

  • Remove item older than 2 weeks

    - by Simon
    I use Emacs org-mode to manage work items. Every week, I manually remove all DONE items older than 2 weeks. Is there an easy way to perform this automatically? EDIT: I am currently trying to add a new custom command like this:

        (setq org-agenda-custom-commands
              '(("P" "Show old entries" todo "DONE"
                 ((org-agenda-files '("c:/git/org/tickets.org"))
                  (tags "CLOSED<=\"-2w\"")))))

    The filter on the CLOSED timestamp is not working correctly.

    Read the article

  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
    When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples here are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items, etc. The build server is often heavily occupied as it is, so you don't want to have it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build, there was no easy way to do this. But in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

    I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed in their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

    I won't go into details on these parameters; the issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template into one of the projects and queuing a new release build, I got this error:

        TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

    <Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings. The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything that was related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml...). In TFS 2008, many of these settings were moved into the database. Still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary that is the underlying object where these settings are stored.
    Here is an example:

        <Dictionary x:TypeArguments="x:String, x:Object"
                    xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib"
                    xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
            <mtbwa:BuildSettings.PlatformConfigurations>
              <mtbwa:PlatformConfigurationList Capacity="4">
                <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
              </mtbwa:PlatformConfigurationList>
            </mtbwa:BuildSettings.PlatformConfigurations>
          </mtbwa:BuildSettings>
          <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
          <x:Boolean x:Key="DisableTests">True</x:Boolean>
          <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
          <x:Int32 x:Key="Major">2</x:Int32>
          <x:Int32 x:Key="Minor">3</x:Int32>
        </Dictionary>

    Here we can see that it is really only the non-default values that are persisted into the database. So, the problem in my case was that I removed one of the parameters from the build process template, but the parameter and its value still existed in the build definition database. The solution to the problem is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them. After refreshing the build definition and saving it, the build was running successfully again.

    Read the article

  • Linux.com: Q&A on Oracle's Unbreakable Enterprise Kernel

    - by monica.kumar
    Linux.com recently published a Q&A on Oracle's Unbreakable Enterprise Kernel for Linux. The interview highlights the key benefits of Oracle's new offering and also offers insight into our long and ongoing commitment to advancing Linux. Here are some excerpts from the Q&A:
    - All enhancements made in the Unbreakable Enterprise Kernel are open source and have been made available to the Linux community.
    - Oracle Linux, including both kernels, is free to download, use and distribute.
    - You can download the Unbreakable Enterprise Kernel at http://public-yum.oracle.com. Source code is available, including a public git repository with a full changelog and individual patches and check-ins for convenience.
    Read the entire interview. Visit the Oracle Linux Homepage.

    Read the article

  • SQLAuthority News – List of Master Data Services White Paper

    - by pinaldave
    Since my TechEd India 2010 presentation, I have been very excited about SQL Server MDS. I just came across a very interesting set of white papers on the Microsoft site related to this subject. Here is the list, along with where you can download each of them. They are all written by top experts at Microsoft.

    - Master Data Management from a Business Perspective - Download a PDF version or an XPS version
    - Master Data Management from a Technical Perspective - Download a PDF version or an XPS version
    - Bringing Master Data Management to the Stakeholders - Download a PDF version or an XPS version
    - Implementing a Phased Approach to Master Data Management - Download a PDF version or an XPS version
    - SharePoint Workflow Integration with Master Data Services - Read it here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL

    Read the article

  • svn connection timeout

    - by Tom celic
    I have Ubuntu 12.04 running in VirtualBox inside Windows 7. I have the network adapter set to NAT and everything networking-wise seems to be running smoothly (internet, git etc.). However, for some reason, svn always times out, e.g.:

        michael@michael-VirtualBox:~/Documents/deleteme$ svn co svn://svn.openwrt.org/openwrt/trunk/
        svn: Can't connect to host 'svn.openwrt.org': Connection timed out

    Somebody suggested to me that I might need to change what ports svn uses. Does anybody have any idea how to diagnose/solve the problem? Thanks!
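
    svnserve's svn:// protocol uses TCP port 3690, so a quick reachability test from inside the VM (a sketch, assuming netcat is installed) would separate a blocked port from a general networking fault:

        # port 3690 is svn://; port 80 should answer if HTTP traffic works
        nc -vz svn.openwrt.org 3690
        nc -vz svn.openwrt.org 80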

    Read the article

  • ASP.NET MVC 3: Razor’s @: and <text> syntax

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features:

    - New @model keyword in Razor (Oct 19th)
    - Layouts with Razor (Oct 22nd)
    - Server-Side Comments with Razor (Nov 12th)
    - Razor’s @: and <text> syntax (today)

    In today’s post I’m going to discuss two useful syntactical features of the new Razor view-engine – the @: and <text> syntax support.

    Fluid Coding with Razor

    ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a list of products: When run, it generates output like:

    One of the techniques that Razor uses to implicitly identify when a code block ends is to look for tag/element content to denote the beginning of a content region.  For example, in the code snippet above Razor automatically treated the inner <li></li> block within our foreach loop as an HTML content block because it saw the opening <li> tag sequence and knew that it couldn’t be valid C#.  This particular technique – using tags to identify content blocks within code – is one of the key ingredients that makes Razor so clean and productive with scenarios involving HTML creation.

    Using @: to explicitly indicate the start of content

    Not all content container blocks start with a tag element, though, and there are scenarios where the Razor parser can’t implicitly detect a content block. Razor addresses this by enabling you to explicitly indicate the beginning of a line of content by using the @: character sequence within a code block.  The @: sequence indicates that the line of content that follows should be treated as a content block: As a more practical example, the below snippet demonstrates how we could output a “(Out of Stock!)” message next to our product name if the product is out of stock: Because I am not wrapping the (Out of Stock!) message in an HTML tag element, Razor can’t implicitly determine that the content within the @if block is the start of a content block.  We are using the @: character sequence to explicitly indicate that this line within our code block should be treated as content.

    Using Code Nuggets within @: content blocks

    In addition to outputting static content, you can also have code nuggets embedded within a content block that is initiated using a @: character sequence.  For example, we have two @: sequences in the code snippet below: Notice how within the second @: sequence we are emitting the number of units left within the content block (e.g. “(Only 3 left!)”). We are doing this by embedding a @p.UnitsInStock code nugget within the line of content.

    Multiple Lines of Content

    Razor makes it easy to have multiple lines of content wrapped in an HTML element.
    For example, below the inner content of our @if container is wrapped in an HTML <p> element – which will cause Razor to treat it as content: For scenarios where the multiple lines of content are not wrapped by an outer HTML element, you can use multiple @: sequences: Alternatively, Razor also allows you to use a <text> element to explicitly identify content: The <text> tag is an element that is treated specially by Razor. It causes Razor to interpret the inner contents of the <text> block as content, and to not render the containing <text> tag element (meaning only the inner contents of the <text> element will be rendered – the tag itself will not).  This makes it convenient when you want to render multi-line content blocks that are not wrapped by an HTML element.  The <text> element can also optionally be used to denote single lines of content, if you prefer it to the more concise @: sequence: The above code will render the same output as the @: version we looked at earlier.  Razor will automatically omit the <text> wrapping element from the output and just render the content within it.

    Summary

    Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s smart detection of <tag> elements to identify the beginning of content regions is one of the reasons that the Razor approach works so well with HTML generation scenarios, and it enables you to avoid having to explicitly mark the beginning/ending of content regions in about 95% of if/else and foreach scenarios.  Razor’s @: and <text> syntax can then be used for scenarios where you want to avoid using an HTML element within a code container block, and need to more explicitly denote a content region.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Download Free PowerShell Quick Reference Guides from Microsoft

    - by Akemi Iwaya
    Are you just getting started with learning PowerShell or tired of looking up less frequently used commands? Then this terrific set of PowerShell quick reference guides from Microsoft is just what you need! The first guide focuses on commonly-used Windows PowerShell commands and is available in a single .doc format document. The other guides are available as a set (six files) in .pdf format and focus on: tips, shortcuts, and common operations in Windows PowerShell 3.0, Windows PowerShell Workflow, Windows PowerShell ISE, Windows PowerShell Web Access, Server Manager for Windows Server 2012, WinRM, WMI, and WS-Man. Keep in mind that you can select all the guides or just the ones you need to download for the PowerShell 3.0 set. Windows PowerShell Quick Reference [Microsoft] Windows PowerShell 3.0 and Server Manager Quick Reference Guides [Microsoft] [via The Windows Club here and here]     

    Read the article

  • The Breakpoint with Paul Irish and Addy Osmani—Episode 2

    Ask and vote for questions at: goo.gl. Addy Osmani and the (real) Paul Irish return for the second live episode of The Breakpoint, a new show focusing on developer tooling and workflow. This week they'll be showing us brand new SASS, feature inspection and console features in the Chrome Developer Tools. If you want to stay on the bleeding edge of tooling, you won't want to miss it.

    Read the article

  • Error in destroying object in Box2D/LibGDX

    - by Crypted
    I'm trying to delete an object when a collision happens. I have put the following code in the render method of the object so it would be outside of the physics calculations:

        public void render(SpriteBatch spriteBatch) {
            // some other code...
            body.setActive(false);
            body.getWorld().destroyBody(body);

    But I'm getting a run-time error which crashes the JVM and shows:

        AL lib: alc_cleanup: 1 device not closed
        Assertion failed!
        Program: C:\Program Files\Java\jre6\bin\javaw.exe
        File: /var/lib/hudson/jobs/libgdx-git/workspace/gdx/jni/Box2D/Dynamics/b2World.cpp, Line 133
        Expression: m_bodyCount > 0

    Can anyone help me here?

    Read the article

  • What are the advantages of version control systems that version each file separately?

    - by Mike Daniels
    Over the past few years I have worked with several different version control systems. For me, one of the fundamental differences between them has been whether they version files individually (each file has its own separate version numbering and history) or the repository as a whole (a "commit" or version represents a snapshot of the whole repository).

    Some "per-file" version control systems: CVS, ClearCase, Visual SourceSafe
    Some "whole-repository" version control systems: SVN, Git, Mercurial

    In my experience, the per-file version control systems have only led to problems, and require much more configuration and maintenance to use correctly (for example, "config specs" in ClearCase). I've had many instances of a co-worker changing an unrelated file and breaking what would ideally be an isolated line of development. What are the advantages of these per-file version control systems? What problems do "whole-repository" version control systems have that per-file version control systems do not?
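
    A small sketch of the distinction, using a hypothetical commit id and tag: in a whole-repository system one identifier reproduces the entire tree, while a per-file system needs a label spanning every file's independent revision.

        # whole-repository (git): one id names a snapshot of every file
        git checkout 3f9a2c1
        # per-file (CVS): foo.c r1.42 and bar.c r1.17 version independently,
        # so reproducing a consistent tree requires a shared tag
        cvs checkout -r RELEASE_1_2 myproject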

    Read the article

  • .NETTER Code Starter Pack v1.0.beta Released

    - by Mohammad Ashraful Alam
    .NETTER Code Starter Pack contains a gallery of Visual Studio 2010 solutions leveraging the latest technologies released by Microsoft. Each Visual Studio solution included here is focused on providing a very simple starting point for cutting-edge development technologies and frameworks, using the well-known Northwind database. The current release of this project includes starter samples for the following technologies:

    - ASP.NET Dynamic Data QuickStart (TBD)
    - Azure Services Platform: Windows Azure Hello World, Windows Azure Storage Simple CRUD, Database Scripts
    - Entity Framework 4.0 (TBD)
    - SharePoint 2010: Visual Web Part
    - Linq QuickStart
    - Silverlight: Business App Hello World, WCF RIA Services QuickStart
    - Utility Framework: MEF, Moq QuickStart, T-4 QuickStart, Unity QuickStart
    - WCF: WCF Data Services QuickStart, WCF Hello World
    - Workflow Foundation
    - Web API: Facebook Toolkit QuickStart

    Download link: http://codebox.codeplex.com/releases/view/57382 Technorati Tags: release,new release,asp.net,mef,unity,sharepoint,wcf

    Read the article
