Search Results

Search found 6144 results on 246 pages for 'ignore arguments'.


  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies, so every solution that uses the common library must reference all the assemblies it needs. If we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise they won't compile.

To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:

    "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

This merges three assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1, using the /attr switch. For more info on the ILMerge command line tool, see the above link.

This approach works, but requires a little bit too much knowledge from the developers creating builds. Therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

    /// <summary>
    /// Activity for merging a list of assemblies into one, using ILMerge
    /// </summary>
    public sealed class ILMergeActivity : BaseCodeActivity
    {
        /// <summary>
        /// A list of file paths to the assemblies that should be merged
        /// </summary>
        [RequiredArgument]
        public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

        /// <summary>
        /// Full path to the generated assembly
        /// </summary>
        [RequiredArgument]
        public InArgument<string> OutputFile { get; set; }

        /// <summary>
        /// Which input assembly the attributes for the generated assembly should be copied from.
        /// Optional. If not specified, the first input assembly will be used.
        /// </summary>
        public InArgument<string> AttributeFile { get; set; }

        /// <summary>
        /// Kind of assembly to generate, dll or exe
        /// </summary>
        public InArgument<TargetKindEnum> TargetKind { get; set; }

        // If your activity returns a value, derive from CodeActivity<TResult>
        // and return the value from the Execute method.
        protected override void Execute(CodeActivityContext context)
        {
            string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
            TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

            ILMerge m = new ILMerge();
            m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
            m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
            m.OutputFile = OutputFile.Get(context);
            m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                ? AttributeFile.Get(context)
                : InputAssemblies.Get(context).First();
            m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
            m.Merge();

            TrackMessage(context, "Generated " + m.OutputFile);
        }
    }

    [Browsable(true)]
    public enum TargetKindEnum
    {
        Dll,
        Exe
    }

NB: The activity inherits from a BaseCodeActivity class, an internal helper class which contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard; a minimal stand-in is sketched at the end of this post).

The custom activity has the following input arguments:

InputAssemblies - A list with the (full) paths to the assemblies to merge
OutputFile - The name of the resulting merged assembly
AttributeFile - Which assembly to use as the template for the attributes of the merged assembly. This argument is optional; if left blank, the first assembly in the input list is used
TargetKind - Decides what type of assembly to create; can be either a dll or an exe

Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll).

The Assemblies To Merge custom process parameter is passed into a FindMatchingFiles activity to locate all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation is located at the end of the Try, Compile, Test and Associate… sequence: it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one, and the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies.
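If you don't have such a base class handy, a minimal stand-in could look like the following (my sketch, not the actual internal class; it assumes a reference to Microsoft.TeamFoundation.Build.Workflow, which provides the TrackBuildMessage extension method):

    using System.Activities;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;

    /// <summary>
    /// Minimal base class exposing a TrackMessage helper for custom activities.
    /// </summary>
    public abstract class BaseCodeActivity : CodeActivity
    {
        /// <summary>
        /// Writes a message to the Team Build log.
        /// </summary>
        protected void TrackMessage(CodeActivityContext context, string message)
        {
            // TrackBuildMessage is an extension method on CodeActivityContext
            // from the Microsoft.TeamFoundation.Build.Workflow.Activities namespace.
            context.TrackBuildMessage(message, BuildMessageImportance.High);
        }
    }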

    Read the article

  • Can't run Eclipse after installing ADT Plugin

    - by user89439
So, I installed the ADT Plugin, ran a HelloWorld, restarted my computer, and after that Eclipse can't run. A message appears: "An error has occurred. See the log file: /home/todi (...)". Here is the log file:

!SESSION 2011-07-26 22:51:59.381 -----------------------------------------------
eclipse.buildId=I20110613-1736
java.version=1.6.0_26
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=pt_BR
Framework arguments: -product org.eclipse.epp.package.java.product
Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product

!ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.135
!MESSAGE Could not rename configuration temp file

!ENTRY org.eclipse.update.configurator 4 0 2011-07-26 22:57:34.157
!MESSAGE Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp"
!STACK 0
java.io.IOException: Unable to save configuration file "C:\Program Files\eclipse\configuration\org.eclipse.update\platform.xml.tmp" at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:690) at org.eclipse.update.internal.configurator.PlatformConfiguration.save(PlatformConfiguration.java:574) at org.eclipse.update.internal.configurator.PlatformConfiguration.startup(PlatformConfiguration.java:714) at org.eclipse.update.internal.configurator.ConfigurationActivator.getPlatformConfiguration(ConfigurationActivator.java:404) at org.eclipse.update.internal.configurator.ConfigurationActivator.initialize(ConfigurationActivator.java:136) at org.eclipse.update.internal.configurator.ConfigurationActivator.start(ConfigurationActivator.java:69) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:299) at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:440) at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:268) at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107) at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:462) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216) at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:400) at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:476) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:429) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:417) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(Unknown Source) at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:345) at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229) at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1207) at
org.eclipse.equinox.internal.ds.model.ServiceComponent.createInstance(ServiceComponent.java:480) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.createInstance(ServiceComponentProp.java:271) at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:332) at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:588) at org.eclipse.equinox.internal.ds.ServiceReg.getService(ServiceReg.java:53) at org.eclipse.osgi.internal.serviceregistry.ServiceUse$1.run(ServiceUse.java:138) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.osgi.internal.serviceregistry.ServiceUse.getService(ServiceUse.java:136) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.getService(ServiceRegistrationImpl.java:468) at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.getService(ServiceRegistry.java:467) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.getService(BundleContextImpl.java:594) at org.osgi.util.tracker.ServiceTracker.addingService(ServiceTracker.java:450) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:980) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:1) at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:262) at org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:185) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:348) at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:283) at org.eclipse.core.internal.runtime.InternalPlatform.getBundleGroupProviders(InternalPlatform.java:225) at org.eclipse.core.runtime.Platform.getBundleGroupProviders(Platform.java:1261) at org.eclipse.ui.internal.ide.IDEWorkbenchPlugin.getFeatureInfos(IDEWorkbenchPlugin.java:291) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeFeatureDependentActions(WorkbenchActionBuilder.java:1217) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.makeActions(WorkbenchActionBuilder.java:1026) at org.eclipse.ui.application.ActionBarAdvisor.fillActionBars(ActionBarAdvisor.java:147) at org.eclipse.ui.internal.ide.WorkbenchActionBuilder.fillActionBars(WorkbenchActionBuilder.java:341) at org.eclipse.ui.internal.WorkbenchWindow.fillActionBars(WorkbenchWindow.java:3564) at org.eclipse.ui.internal.WorkbenchWindow.(WorkbenchWindow.java:419) at org.eclipse.ui.internal.tweaklets.Workbench3xImplementation.createWorkbenchWindow(Workbench3xImplementation.java:31) at org.eclipse.ui.internal.Workbench.newWorkbenchWindow(Workbench.java:1920) at org.eclipse.ui.internal.Workbench.access$14(Workbench.java:1918) at org.eclipse.ui.internal.Workbench$68.runWithException(Workbench.java:3658) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.application.WorkbenchAdvisor.openWindows(WorkbenchAdvisor.java:803) at org.eclipse.ui.internal.Workbench$33.runWithException(Workbench.java:1595) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at 
org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4140) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3757) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2604) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2494) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:674) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:667) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:123) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:344) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:622) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:577) at org.eclipse.equinox.launcher.Main.run(Main.java:1410)

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.049
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.049
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.049
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:15:28.644
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:15:28.644
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:15:28.644
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.152
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.158
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.159
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 00:27:35.215
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 00:27:35.216
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 00:27:35.216
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.
!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:17.988
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:18.006
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:18.006
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

!ENTRY org.eclipse.equinox.p2.operations 4 0 2011-07-27 01:07:19.847
!MESSAGE Operation details
!SUBENTRY 1 org.eclipse.equinox.p2.director 4 1 2011-07-27 01:07:19.848
!MESSAGE Cannot complete the install because some dependencies are not satisfiable
!SUBENTRY 2 org.eclipse.equinox.p2.director 4 0 2011-07-27 01:07:19.848
!MESSAGE org.eclipse.linuxtools.callgraph.feature.group [0.0.2.201106060936] cannot be installed in this environment because its filter is not applicable.

I don't understand how the Windows-like paths have appeared... if anyone knows how to solve this, I'll appreciate it! Thank you for all your answers! Best regards, Alexandre Ferreira.

    Read the article

  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need, as it was thought that having too many default parameters could introduce code safety issues. To some extent I can understand that move, as I've been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater in that move, and I'm glad to see C# now has them.

This post briefly discusses the pros and pitfalls of using default parameters. I'm avoiding saying cons, because I really don't believe using default parameters is a negative thing; I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues.

Pro: Default Parameters Can Simplify Code

Let's start out with positives. Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters. For example, we could have a Message class defined which allows for all possible initializations of a Message:

    public class Message
    {
        // can either cascade these like this or duplicate the defaults (which can introduce risk)
        public Message()
            : this(string.Empty)
        {
        }

        public Message(string text)
            : this(text, null)
        {
        }

        public Message(string text, IDictionary<string, string> properties)
            : this(text, properties, -1)
        {
        }

        public Message(string text, IDictionary<string, string> properties, long timeToLive)
        {
            // ...
        }
    }

Now consider the same code with default parameters:

    public class Message
    {
        public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1)
        {
            // ...
        }
    }

Much cleaner and more concise, and no repetitive coding! In addition, in the past if you wanted to cleanly supply timeToLive and accept the defaults on text and properties above, you would need to either create another overload, or pass in the defaults explicitly. With named parameters, though, we can do this easily:

    var msg = new Message(timeToLive: 100);

Pro: Named Parameters Can Improve Readability

I must say one of my favorite things with the default parameters addition in C# is the named parameters. It lets code be a lot easier to understand visually with no comments. Think how many times you've run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds. A novice running through your code may wonder what it is. Named arguments can help resolve the visual ambiguity:

    // is this days/hours/minutes/seconds (yes) or hours/minutes/seconds/milliseconds (no)?
    var ts = new TimeSpan(1, 2, 3, 4);

    // this however is visually very explicit
    var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);

Or think of the times you've run across something passing a Boolean literal and wondered what it was:

    // what is false here?
    var sub = CreateSubscriber(hostname, port, false);

    // aha! Much more visibly clear
    var sub = CreateSubscriber(hostname, port, isBuffered: false);

Pitfall: Don't Insert New Default Parameters In Between Existing Defaults

Now let's consider two potential pitfalls. The first is really an abuse. It's not really a fault of the default parameters themselves, but a fault in the use of them. Let's consider that Message constructor again with defaults. Let's say you want to add a messagePriority to the message, and you think this is more important than a timeToLive value, so you decide to put messagePriority before it in the defaults. This gives you:

    public class Message
    {
        public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1)
        {
            // ...
        }
    }

Oh boy, have we set ourselves up for failure! Why? Think of all the code out there that could already be using the library and that already specified the timeToLive, such as this possible call:

    var msg = new Message("An error occurred", myProperties, 1000);

Before, this specified a message with a TTL of 1000; now it specifies a message with a priority of 1000 and a time to live of -1 (infinite). All of this with NO compiler errors or warnings. So the rule to take away is: if you are adding new default parameters to a method that's currently in use, make sure you add them to the end of the list or create a brand new method or overload.

Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation

Now, the second potential pitfall has to do with inheritance and interface implementation. I'll illustrate with a puzzle:

    public interface ITag
    {
        void WriteTag(string tagName = "ITag");
    }

    public class BaseTag : ITag
    {
        public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); }
    }

    public class SubTag : BaseTag
    {
        public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); }
    }

    public static class Program
    {
        public static void Main()
        {
            SubTag subTag = new SubTag();
            BaseTag subByBaseTag = subTag;
            ITag subByInterfaceTag = subTag;

            // what happens here?
            subTag.WriteTag();
            subByBaseTag.WriteTag();
            subByInterfaceTag.WriteTag();
        }
    }

What happens? Well, even though the object in each case is a SubTag whose tag is "SubTag", you will get:

    SubTag
    BaseTag
    ITag

Why? Because default parameters are resolved at compile time, not runtime! This means that the default does not belong to the object being called, but to the reference type it's being called through. Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag.

So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies. I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces you most likely expect your callers to be calling from. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or a JmsMessage, it only makes sense to put the defaults at the IMessage level, since chances are your user will be using the interface only.
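To make that last recommendation concrete, here is a minimal sketch of mine (the IMessage/MsmqMessage names echo the hypothetical example above) of declaring the defaults only at the interface level:

    public interface IMessage
    {
        // Defaults live here and only here, since callers are expected
        // to work through the IMessage interface.
        void Send(string text = "", long timeToLive = -1);
    }

    public class MsmqMessage : IMessage
    {
        // No defaults repeated here; a call through an MsmqMessage reference
        // must supply the arguments explicitly, so the two can never disagree.
        public void Send(string text, long timeToLive)
        {
            // ...
        }
    }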
So let's sum up. In general, I really love default and named parameters in C# 4.0. I think they're a great tool to help make your code easier to read and maintain when used correctly.

On the plus side, default parameters:
- Reduce redundant overloading for the sake of providing optional calling structures.
- Improve readability by being able to name an ambiguous argument.

But remember to make sure you:
- Do not insert new default parameters in the middle of an existing set of default parameters; this may cause unpredictable behavior that won't necessarily throw a syntax error - add to the end of the list or create a new method.
- Are extremely careful how you use default parameters in inheritance hierarchies and interfaces - choose the most appropriate level to add the defaults based on expected usage.

Technorati Tags: C#,.NET,Software,Default Parameters

    Read the article

  • How I do VCS

    - by Wes McClure
After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

Always use some form of version control for all aspects of software development.
- Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
- Reverting / reapplying changes is absolutely critical for efficient development.
- The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

Always tag a commit (changeset) with comments.
- This is a quick way to describe to someone else (or your future self) what the changeset entails.
- Be brief but courteous. One or two sentences about the task, not the actual changes.
- Use precommit hooks or set up the central repository to reject changes without comments.

Link changesets to documentation.
- If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
- It's best to have a precommit hook or system that requires this information, otherwise it's easy to forget.

Ability to work offline is required, including commits and history.
- Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

Never lock resources (files) in a central repository… Rude!
- We have merge tools for a reason: merging sucked a long time ago, it doesn't anymore… stop locking files! This is unproductive, rude and annoying to other team members.

Always review everything in your commit.
- Never ever commit a set of files without reviewing the changes in each.
- Never add a file without asking yourself, deep down inside, does this belong?
- If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
- This is another reason why you want to avoid large, infrequent commits.

Requirements for tools:
- Quickly show pending changes for the entire repository.
- Default action for a resource with pending changes is a diff.
- Pluggable diff & merge tool.
- Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

The central repository is not your own personal dump yard.
- Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times.
- If you turn on Visual Studio's commit-on-closing-studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

Commit (integrate) to the central repository / branch frequently.
- I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.

Never commit commented out code.
- If it isn't needed anymore, delete it! If you aren't sure if it might be useful in the future, delete it! This is why we have history.
- If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

Don't commit build artifacts, user preferences and temporary files.
- Build artifacts do not belong in VCS; everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe)
- User preferences are your settings; stop overriding my preferences files! (ie: *.suo and *.user files)
- Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository! (A sample ignore file appears at the end of this post.)

Be polite when merging unresolved conflicts.
- Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
- Following the other rules, especially committing frequently, will reduce the likelihood of this.
- Suck it up, we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3 as its merging capabilities rock.
- Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

Become intimate with your version control system and the tools you use with it.
- Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials etc. Create test repositories and walk through common scenarios.
- Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and the hg CLI to get the job done efficiently.

Always tag releases.
- Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
- Create release branches to patch bugs and then merge the changes back to other development branch(es).

If using feature branches, strive for periodic integrations.
- Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
- Feature branches are best when they are mutually exclusive of active development in other branches.

Use and abuse local commits, at least one per task in a story.
- This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

Never commit a broken build or failing tests to the central repository.
- It's ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).

If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
- Exception: maintaining code bases that require shotgun surgery; in that case, it's a design smell :)

Don't version sensitive information.
- Especially usernames / passwords.

There is one area where I haven't found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

-Wes
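P.S. As a concrete example of the ignore-file setup mentioned above (my sketch for a typical .NET project; adjust the patterns to your own tools), an .hgignore versioned at the repository root might look like:

    # .hgignore - version this as a first step when creating a new repository
    syntax: glob

    # build artifacts (everything in them is reproducible from the code)
    bin/*
    obj/*
    *.dll
    *.exe

    # user preferences and temporary files
    *.suo
    *.user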

    Read the article

  • Get Application Title from Windows Phone

    - by psheriff
In a Windows Phone application that I am currently developing, I needed to be able to retrieve the Application Title of the phone application. You can set the Deployment Title in the Properties of your Windows Phone Application; however, getting to this value programmatically can be a little tricky. This article assumes that you have Visual Studio 2010 and the Windows Phone tools installed along with it. The Windows Phone tools must be downloaded separately and installed with Visual Studio 2010. You may also download the free Visual Studio 2010 Express for Windows Phone developer environment.

The WMAppManifest.xml File

First off, you need to understand that when you set the Deployment Title in the Properties window of your Windows Phone application, this title actually gets stored in an XML file located under the \Properties folder of your application. This XML file is named WMAppManifest.xml. A portion of this file is shown in the following listing.

    <?xml version="1.0" encoding="utf-8"?>
    <Deployment xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment"
                AppPlatformVersion="7.0">
      <App xmlns=""
           ProductID="{71d20842-9acc-4f2f-b0e0-8ef79842ea53}"
           Title="Mobile Time Track"
           RuntimeType="Silverlight"
           Version="1.0.0.0"
           Genre="apps.normal"
           Author="PDSA, Inc."
           Description="Mobile Time Track"
           Publisher="PDSA, Inc.">
        ...
      </App>
    </Deployment>

Notice the "Title" attribute in the <App> element in the above XML document. This is the value that gets set when you modify the Deployment Title in the Properties window of your phone project. The only value you can set from the Properties window is the Title; all of the other attributes you see here must be set by going into the XML file and modifying them directly. Note that this information duplicates some of the information that you can also set from the Assembly Information… button in the Properties window. Why Microsoft did not just use that information, I don't know.

Reading Attributes from WMAppManifest

I searched all over the namespaces and classes within the Windows Phone DLLs and could not find a way to read the attributes within the <App> element. Thus, I had to resort to good old fashioned XML processing. First off, I created a WinPhoneCommon class and added two static methods, as shown in the snippet below:

    public class WinPhoneCommon
    {
        /// <summary>
        /// Returns the Application Title
        /// from the WMAppManifest.xml file
        /// </summary>
        /// <returns>The application title</returns>
        public static string GetApplicationTitle()
        {
            return GetWinPhoneAttribute("Title");
        }

        /// <summary>
        /// Returns the Application Description
        /// from the WMAppManifest.xml file
        /// </summary>
        /// <returns>The application description</returns>
        public static string GetApplicationDescription()
        {
            return GetWinPhoneAttribute("Description");
        }

        // ... GetWinPhoneAttribute method here ...
    }

In your Windows Phone application you can now simply call WinPhoneCommon.GetApplicationTitle() or WinPhoneCommon.GetApplicationDescription() to retrieve the Title or Description from the WMAppManifest.xml file respectively.
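For example, a page could pull the title into its UI when it loads (a quick sketch of mine; PageTitle is assumed to be a TextBlock defined in the page's XAML):

    // In MainPage.xaml.cs, for example:
    private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
    {
        // PageTitle is a TextBlock declared in the page's XAML
        PageTitle.Text = WinPhoneCommon.GetApplicationTitle();
    }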
You will notice that each of these methods makes a call to the GetWinPhoneAttribute method, which is shown in the following code snippet:

    /// <summary>
    /// Gets an attribute from the Windows Phone WMAppManifest.xml file.
    /// To use this method, add a reference to the System.Xml.Linq DLL.
    /// </summary>
    /// <param name="attributeName">The attribute to read</param>
    /// <returns>The attribute's value</returns>
    private static string GetWinPhoneAttribute(string attributeName)
    {
        string ret = string.Empty;

        try
        {
            XElement xe = XElement.Load("WMAppManifest.xml");
            var attr = (from manifest in xe.Descendants("App")
                        select manifest).SingleOrDefault();
            if (attr != null)
                ret = attr.Attribute(attributeName).Value;
        }
        catch
        {
            // Ignore errors in case this method is called
            // from design time in VS.NET
        }

        return ret;
    }

I love using the new LINQ to XML classes contained in System.Xml.Linq.dll. When I did a Bing search, the only samples I found for reading attribute information from WMAppManifest.xml used either an XmlReader or XmlReaderSettings object. These are fine and work, but involve a little extra code. Instead of using these, I added a reference to System.Xml.Linq.dll, then added two using statements to the top of the WinPhoneCommon class:

    using System.Linq;
    using System.Xml.Linq;

Now, with just a few lines of LINQ to XML code, you can read down to the App element and extract the appropriate attribute that you pass into the GetWinPhoneAttribute method. Notice that I added a little bit of exception handling code in this method. I ignore the exception in case you call this method from the Loaded event of a user control: at design time you cannot access the WMAppManifest file, and thus an exception would be thrown.

Summary

In this article you learned how to retrieve the attributes from the WMAppManifest.xml file. I use this technique to grab information that I would otherwise have to hard-code in my application. Getting the Title or Description for your Windows Phone application is easy with just a little bit of LINQ to XML code.

NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get Application Title from Windows Phone" from the drop-down.

Good Luck with your Coding,
Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I'm on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I'm going to begin from the beginning; this is my first foray into running something on Azure, so it's been a bit of a learning curve. But luckily the Neo4j guys have got us started, so let's download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip

OK, the other thing we'll need is the VS2012 Azure SDK, so let's get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let's open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln

One-way upgrade? Yer! Ignore the migration report - we don't care! Let's build that sucker… Ahhh, 14 errors… "WindowsAzure does not exist in the namespace 'Microsoft'". Not a problem, right? We've installed the SDK, we just need to update the references. We can ignore the Test projects (they don't use Azure); we're interested in the other projects, so what we'll do is remove the broken references and add the correct ones. Expand the references bit of each project, hunt out those yellow exclamation marks, and delete them! You'll need to add the right ones back in (listed below); when you go to the 'Add Reference' dialog, make sure you have 'Assemblies' and 'Framework' selected before you search (and search for 'microsoft.win' to narrow it down).

The references you need for each project are:

CollectDiagnosticsData:
- Microsoft.WindowsAzure.Diagnostics
- Microsoft.WindowsAzure.StorageClient

Diversify.WindowsAzure.ServiceRuntime:
- Microsoft.WindowsAzure.CloudDrive
- Microsoft.WindowsAzure.ServiceRuntime
- Microsoft.WindowsAzure.StorageClient

Right, so let's build again… Sweet! No errors.

Now we need to set up our Blobs. I'm assuming you are using the most up-to-date Java you happened to have downloaded :) - in my case that's JRE7, and that is located in: C:\Program Files (x86)\Java\jre7

So, zip up that folder into whatever you want to call it - I went with jre7.zip - and stuck it in a temp folder. In that same temp folder I also copied the Neo4j zip I was using: neo4j-community-1.7.2-windows.zip

OK, now we need to get these into our Blob storage. This is where a lot of stuff becomes unstuck - I didn't find any applications that helped me use the blob storage; one would crash (because my internet speed is so slow) and the other just didn't work - sure, it looked like it had worked, but when push came to shove it didn't. So this is how I got my files into Blob (local first):

1. Run the 'Storage Emulator' (just search for that in the start menu).
2. That takes a little while to start up, so fire up another instance of Visual Studio in the meantime, and create a new Console Application.
3. Manage Nuget Packages for that solution and add 'Windows Azure Storage'.

Now you're set up to add the code:

    public static void Main()
    {
        CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
        client.Timeout = TimeSpan.FromMinutes(30);
        CloudBlobContainer container = client.GetContainerReference("neo4j"); // This will create it as well

        UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
        UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");
    }

    private static void UploadBlob(CloudBlobContainer container, string blobName, string filename)
    {
        CloudBlob blob = container.GetBlobReference(blobName);

        using (FileStream fileStream = File.OpenRead(filename))
            blob.UploadFromStream(fileStream);
    }

This will upload the files to your local storage account (to switch to an Azure one, you'll need to create a storage account and use those credentials when you make your CloudStorageAccount above). To test you've got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded (see the listing sketch at the end of this post for another quick check). Now that those files are there, we are ready for some final configuration…

Right click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project and click on the 'Settings' tab; we'll need to make some changes there. By default, the 1.7.2 edition of Neo4j unzips to: neo4j-community-1.7.2. So we need to update all the 'neo4j-1.3.M02' directories to be 'neo4j-community-1.7.2', and we also need to update the Java runtime location. I also changed the Endpoints settings to be HTTP (from TCP) and to have a port of 7410 (mainly because that's straight down on the numpad).

The last 'gotcha' is some hard coded consts, which had me looking for ages. They are in the 'ConfigSettings' class of the 'Neo4jServerHost' project, and the ones we're interested in are: Neo4jFileName and JavaZipFileName. Change both of those to what they should be.

OK, nearly there (I promise)! Run the 'Compute Emulator' (same deal with the Start menu). In your system tray you should have an Azure icon; when the compute emulator is up and running, right click on the icon and select 'Show Compute Emulator UI'.

The last steps! Make sure the 'Neo4j.Azure.Server' cloud project is set up as the start project and let's hit F5. Tension mounts, the build takes place (you need to accept the UAC warning) and VS does its stuff. If you look at the Compute Emulator UI you'll see some log stuff (which you'll need if this goes awry - but it won't, don't worry!). In a bit, the console and a Java window will pop up. Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we've been assigned (in my case 7411). (If you can't see it, don't worry: press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for 'port' - you'll soon see it.)

Go to your favourite browser, and head to: http://localhost:YOURPORT/ and you should see the WebAdmin!

See you on the cloud side hopefully!

Chris

PS Other gotchas! OK, I've been caught out a couple of times. First, I had an instance of Neo4j running as a service on my machine; the Azure instance wanted to run the HTTPS version of the server on the same port the service was running on, and so Java would complain that the port was already in use. Second, the first time I converted the project, it didn't update the version of the Azure library to load in the App.Config of the Neo4jServerHost project, and VS would throw an exception saying it couldn't find the Azure dll version 1.0.0.0.
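PPS For an extra sanity check on the uploads (a quick sketch of mine using the same v1.x StorageClient API as the console application above), you can list what actually landed in the container:

    private static void ListBlobs(CloudBlobContainer container)
    {
        // Enumerate everything in the container so you can eyeball
        // that jre7.zip and the neo4j zip really made it up there.
        foreach (IListBlobItem item in container.ListBlobs())
        {
            Console.WriteLine(item.Uri);
        }
    }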

    Read the article

  • Super constructor must be a first statement in Java constructor [closed]

    - by Val
I know the answer: "we need rules to prevent shooting yourself in the foot". OK, I make millions of programming mistakes every day. To prevent them all, we need one simple rule: prohibit the whole JLS and do not use Java. If we explain everything by "not shooting your foot", that sounds reasonable. But there is not much reason in such a reason.

When I programmed in Delphi, I always wanted the compiler to check whether I was reading an uninitialized variable. I discovered for myself that it is stupid to read an uncertain variable, because it leads to unpredictable results and is obviously erroneous. Just by looking at the code I could see there was an error; I wished the compiler could do this job. It is also a reliable signal of a programming error if a function does not return any value. But I never wanted it to enforce the super constructor coming first. Why?

You say that constructors just initialize fields. Super fields are derived; extra fields are introduced. From the goal point of view, it does not matter in which order you initialize the variables. I have studied parallel architectures and can say that all the fields could even be assigned in parallel... What? Do you want to use the uninitialized fields? Stupid people always want to take away our freedoms and break the JLS rules that God gave to us! Please, policeman, take that person away! Where do I say that? I am only talking about initializing/assigning the fields, not using them. The Java compiler already defends me from the mistake of accessing the uninitialized. Some cases sneak through, and this example shows how this stupid rule does not save us from read-accessing an incompletely initialized object during construction:

    public class BadSuper {
        String field;

        public String toString() {
            return "field = " + field;
        }

        public BadSuper(String val) {
            field = val;
            // yea, super-first does not protect from accessing
            // unconstructed subclass fields. The subclass constructor
            // must be called before super()!
            System.err.println(this);
        }
    }

    public class BadPost extends BadSuper {
        Object o;

        public BadPost(Object o) {
            super("str");
            this.o = o;
        }

        public String toString() {
            // the superconstructor will boom here, because o is not initialized!
            return super.toString() + ", obj = " + o.toString();
        }

        public static void main(String[] args) {
            new BadSuper("test 1");
            new BadPost(new Object());
        }
    }

It shows that, actually, the subclass fields would have to be initialized before the superclass! Meanwhile, the Java requirement "saves" us from specializing a class simply by specializing what the super constructor argument is:

    public class MyKryo extends Kryo {
        class MyClassResolver extends DefaultClassResolver {
            public Registration register(Registration registration) {
                System.out.println(MyKryo.this.getDepth());
                return super.register(registration);
            }
        }

        MyKryo() {
            // cannot instantiate MyClassResolver in super
            super(new MyClassResolver(), new MapReferenceResolver());
        }
    }

Try to make it compile. It is always a pain, especially when you cannot assign the argument later (the usual workaround is sketched at the end of this post). Initialization order is not important for initialization in general. I could understand that you should not use super methods before initializing super. But the requirement for super to be the first statement is different: it only "saves" you from code that does useful things simply. I do not see how this adds safety. Actually, safety is degraded, because we need to use ugly workarounds. Doing post-initialization outside the constructors also degrades safety (otherwise, why do we need constructors?) and defeats Java's final-field safety enforcement.

To conclude:
- Reading the uninitialized is a bug.
- Initialization order is not important from the computer science point of view. Doing initialization or computations in a different order is not a bug.
- Guarding read access to the uninitialized is good, but compilers fail to detect all such bugs.
- Making super the first statement does not solve the problem; it "prevents" shooting at the right things but not at your foot.
- It requires inventing workarounds where, because of the complexity of analysis, it is easier to shoot yourself in the foot.
- Doing post-initialization outside the constructors degrades safety (otherwise, why do we need constructors?) by defeating the final access modifier.

When the Java forum was alive, Java bigots attacked me for these thoughts. In particular, they disliked the idea that fields could be initialized in parallel, saying that "natural development" ensures correctness. When I replied that you could use advanced engineering to create a human right away, without "developing" any ape first, and it would still be an ape, they stopped listening to me, because modern technology cannot afford it. OK, take something simpler. How do you produce a Renault? Should you construct an Automobile first? No, you start by producing a Renault and, once it is completed, you will see that it is an automobile. So the requirement to produce fields in "natural order" is unnatural. In the case of an alarm clock or an armchair, which are still a clock and a chair, you may need to first develop the base (clock and chair) and then add the extras. So I can show examples where super fields must be initialized first and, oppositely, examples where they need to be initialized later. The order does not exist in advance, so the compiler cannot know the proper order; only the programmer/constructor knows it. The compiler should not take more responsibility and enforce the wrong order onto the programmer. Saying that I cannot initialize some fields because I have not initialized the others is like saying "you cannot initialize the thing because it is not initialized". That is the kind of argument we have.

So, to conclude once more: a feature that "protects" me from doing things in a simple and right way, in order to enforce something that does not add noticeably to bug elimination, is a strongly negative thing, and it pisses me off, together with all the arguments to support it I have seen so far. This is "a conceptual question about software development": should there be a requirement to call super() first or not? I do not know. If you do, or have an idea, you have a place to answer. I think that I have provided enough arguments against this feature; let's hear from the ones who benefit from it. Let it be something more than a simple, abstract "write your own language" or "protection" kind of argument. Why do we need it in the language that I am going to develop?
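(For reference, the usual workaround for the MyKryo case above, sketched with the names from that example: Java does allow static method calls inside the super(...) argument list, so the custom resolver can be built by a static factory, provided it is not an inner class.)

    public class MyKryo extends Kryo {
        MyKryo() {
            // super() must still come first, but its arguments may be
            // computed by a static method - the standard workaround.
            super(createResolver(), new MapReferenceResolver());
        }

        private static ClassResolver createResolver() {
            // Cannot be an inner (non-static) class here: inner classes
            // need 'this', which does not exist before super() returns.
            return new DefaultClassResolver();
        }
    }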

    Read the article

  • Failed to resolve artifact. Missing: ---------- 1) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT

    - by karim
    i want to use the addon vaadin Timeline, so i have to make "gwt-maven-plugin 3.1" as i know ,my pom.xml is the following : <?xml version="1.0"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>life</groupId> <artifactId>life</artifactId> <packaging>war</packaging> <name>life Portlet</name> <version>0.0.1-SNAPSHOT</version> <url>http://maven.apache.org</url> <properties> <vaadin-widgets-dir>src/main/webapp/VAADIN/widgetsets</vaadin-widgets-dir> </properties> <build> <plugins> <plugin> <groupId>com.liferay.maven.plugins</groupId> <artifactId>liferay-maven-plugin</artifactId> <version>6.1.0</version> <configuration> <autoDeployDir>${liferay.auto.deploy.dir}</autoDeployDir> <liferayVersion>6.1.0</liferayVersion> <pluginType>portlet</pluginType> </configuration> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <encoding>UTF-8</encoding> <source>1.5</source> <target>1.5</target> </configuration> </plugin> <plugin> <groupId>com.vaadin</groupId> <artifactId>vaadin-maven-plugin</artifactId> <version>1.0.1</version> </plugin> <!-- Compiles your custom GWT components with the GWT compiler --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <version>2.1.0-1</version> <configuration> <!-- if you don't specify any modules, the plugin will find them --> <!--modules> .. </modules --> <webappDirectory>${project.build.directory}/${project.build.finalName}/VAADIN/widgetsets</webappDirectory> <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs> <runTarget>clean</runTarget> <hostedWebapp>${project.build.directory}/${project.build.finalName}</hostedWebapp> <noServer>true</noServer> <port>8080</port> <soyc>false</soyc> </configuration> <executions> <execution> <goals> <goal>resources</goal> <goal>compile</goal> </goals> </execution> </executions> </plugin> <!-- Updates Vaadin 6.2+ widgetset definitions based on project dependencies --> <plugin> <groupId>com.vaadin</groupId> <artifactId>vaadin-maven-plugin</artifactId> <version>1.0.1</version> <executions> <execution> <configuration> <!-- if you don't specify any modules, the plugin will find them --> <!-- <modules> <module>${package}.gwt.MyWidgetSet</module> </modules> --> </configuration> <goals> <goal>update-widgetset</goal> </goals> </execution> </executions> </plugin> </plugins> <pluginManagement> <plugins> <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself. 
--> <plugin> <groupId>org.eclipse.m2e</groupId> <artifactId>lifecycle-mapping</artifactId> <version>1.0.0</version> <configuration> <lifecycleMappingMetadata> <pluginExecutions> <pluginExecution> <pluginExecutionFilter> <groupId> org.codehaus.mojo </groupId> <artifactId> gwt-maven-plugin </artifactId> <versionRange> [2.1.0-1,) </versionRange> <goals> <goal>resources</goal> </goals> </pluginExecutionFilter> <action> <ignore></ignore> </action> </pluginExecution> <pluginExecution> <pluginExecutionFilter> <groupId>com.vaadin</groupId> <artifactId> vaadin-maven-plugin </artifactId> <versionRange> [1.0.1,) </versionRange> <goals> <goal> update-widgetset </goal> </goals> </pluginExecutionFilter> <action> <ignore></ignore> </action> </pluginExecution> </pluginExecutions> </lifecycleMappingMetadata> </configuration> </plugin> </plugins> </pluginManagement> </build> <dependencies> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>portal-service</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-bridges</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.vaadin.addons</groupId> <artifactId>vaadin-timeline-agpl-3.0</artifactId> <version>1.2.4</version> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-taglib</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>com.liferay.portal</groupId> <artifactId>util-java</artifactId> <version>6.1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.portlet</groupId> <artifactId>portlet-api</artifactId> <version>2.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.4</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet.jsp</groupId> <artifactId>jsp-api</artifactId> <version>2.0</version> <scope>provided</scope> </dependency> <!-- sqx --> <dependency> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> <version>1.1.1</version> <scope>provided</scope> </dependency> <dependency> <groupId>antlr</groupId> <artifactId>antlr</artifactId> <version>2.7.6</version> <scope>provided</scope> </dependency> <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>asm</groupId> <artifactId>asm</artifactId> <version>1.5.3</version> <scope>provided</scope> </dependency> <dependency> <groupId>asm</groupId> <artifactId>asm-attrs</artifactId> <version>1.5.3</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>1.6.8</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjweaver</artifactId> <version>1.6.8</version> <scope>provided</scope> </dependency> <dependency> <groupId>bsh</groupId> <artifactId>bsh</artifactId> <version>1.3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.1_3</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.1</version> <scope>provided</scope> </dependency> <dependency> <groupId>commons-dbcp</groupId> <artifactId>commons-dbcp</artifactId> 
<version>1.3</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>commons-pool</groupId> <artifactId>commons-pool</artifactId> <version>1.5.3</version> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> <version>1.6.1</version> </dependency> <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> <version>1.2.3</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>3.3.1.GA</version> </dependency> <dependency> <groupId>hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>1.8.0.10</version> </dependency> <!-- <dependency> <groupId>jboss</groupId> <artifactId>jboss-backport-concurrent</artifactId> <version>2.1.0.GA</version> </dependency> --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-parent</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>javax.jcr</groupId> <artifactId>jcr</artifactId> <version>1.0</version> </dependency> <!-- <dependency> <groupId>javax.sql</groupId> <artifactId>jdbc-stdext</artifactId> <version>2.0</version> </dependency> --> <dependency> <groupId>jdom</groupId> <artifactId>jdom</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>javax.transaction</groupId> <artifactId>jta</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.14</version> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.3</version> </dependency> <dependency> <groupId>com.sun.portal.portletcontainer</groupId> <artifactId>container</artifactId> <version>1.1-m4</version> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>8.4-702.jdbc3</version> </dependency> <!-- sl4j-api-1.5.0 missing --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-parent</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.0</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> <version>3.0.3.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jms</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> <version>1.1</version> <scope>compile</scope> </dependency> <!-- <dependency> <groupId>org.springmodules</groupId> <artifactId>spring-modules-jbpm31</artifactId> <version>0.9</version> <scope>provided</scope> 
</dependency> --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework.ws</groupId> <artifactId>spring-oxm</artifactId> <version>1.5.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-core</artifactId> <version>2.0.4</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc-portlet</artifactId> <version>2.5</version> </dependency> <dependency> <groupId>com.atomikos</groupId> <artifactId>transactions-hibernate3</artifactId> <version>3.6.4</version> </dependency> <dependency> <groupId>com.atomikos</groupId> <artifactId>transactions-osgi</artifactId> <version>3.7.0</version> </dependency> <dependency> <groupId>com.vaadin</groupId> <artifactId>vaadin</artifactId> <version>6.7.0</version> </dependency> <dependency> <groupId>com.thoughtworks.xstream</groupId> <artifactId>xstream</artifactId> <version>1.3.1</version> </dependency> <!-- this is the dependency to the "jar"-subproject --> <dependency> <groupId>org.codehaus.plexus</groupId> <artifactId>plexus-utils</artifactId> <version>1.5.9</version> </dependency> <dependency> <groupId>com.google.gwt</groupId> <artifactId>gwt-user</artifactId> <version>2.1.1</version> <scope>provided</scope> </dependency> </dependencies> <!-- Define our plugin repositories --> <pluginRepositories> <pluginRepository> <id>Codehaus</id> <name>Codehaus Maven Plugin Repository</name> <url>http://repository.codehaus.org/org/codehaus/mojo</url> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>codehaus-snapshots</id> <url>[http://nexus.codehaus.org/snapshots]</url> <snapshots> <enabled>true</enabled> </snapshots> <releases> <enabled>false</enabled> </releases> </pluginRepository> </pluginRepositories> <repositories> <repository> <id>vaadin-addons</id> <url>http://maven.vaadin.com/vaadin-addons</url> </repository> <repository> <id>demoiselle.sourceforge.net</id> <name>Demoiselle Maven Repository</name> <url>http://demoiselle.sourceforge.net/repository/release</url> </repository> </repositories> And when I run "clean install" to build with Maven, the console shows me the following: [INFO] Unable to find resource 'org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT' in repository demoiselle.sourceforge.net (http://demoiselle.sourceforge.net/repository/release) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. Missing: ---------- 1) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT Try downloading the file manually from the project website. 
Then, install it using the command: mvn install:install-file -DgroupId=org.codehaus.mojo -DartifactId=gwt-maven-plugin -Dversion=1.3-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file Alternatively, if you host your own repository you can deploy the file there: mvn deploy:deploy-file -DgroupId=org.codehaus.mojo -DartifactId=gwt-maven-plugin -Dversion=1.3-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] Path to dependency: 1) com.vaadin:vaadin-maven-plugin:maven-plugin:1.0.1 2) org.codehaus.mojo:gwt-maven-plugin:jar:1.3-SNAPSHOT ---------- 1 required artifact is missing. for artifact: com.vaadin:vaadin-maven-plugin:maven-plugin:1.0.1 from the specified remote repositories: demoiselle.sourceforge.net (http://demoiselle.sourceforge.net/repository/release), central (http://repo1.maven.org/maven2), Codehaus (http://repository.codehaus.org/org/codehaus/mojo), codehaus-snapshots ([http://nexus.codehaus.org/snapshots]), vaadin-snapshots (http://oss.sonatype.org/content/repositories/vaadin-snapshots/), vaadin-releases (http://oss.sonatype.org/content/repositories/vaadin-releases/), vaadin-addons (http://maven.vaadin.com/vaadin-addons) Your help would be much appreciated. Thank you!
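For reference, a minimal sketch of the workaround Maven itself suggests above: install a gwt-maven-plugin jar into the local repository under the exact coordinates the vaadin-maven-plugin asks for. The jar location /tmp/gwt-maven-plugin-1.3-SNAPSHOT.jar is a placeholder; you would first obtain a suitable jar yourself.

    mvn install:install-file \
        -DgroupId=org.codehaus.mojo \
        -DartifactId=gwt-maven-plugin \
        -Dversion=1.3-SNAPSHOT \
        -Dpackaging=jar \
        -Dfile=/tmp/gwt-maven-plugin-1.3-SNAPSHOT.jar

After this, Maven resolves the artifact from ~/.m2/repository instead of retrying the remote repositories.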

    Read the article

  • Why do I get error "1337 The security ID structure is invalid" when using subinacl?

    - by ralbatross
    I have a standard Win 7 account 'popuser' to which I'd like to grant start and stop permissions for the OpenVPNService. I've used the following command successfully on other machines, but for some reason on a new Acer Aspire 5830T that I'm setting up this doesn't do the trick for me: subinacl /service OpenVPNService /grant=popuser=TO I keep getting the following error message: LookupAccountName : OpenVPNService:popuser 1337 The security ID structure is invalid. Current object OpenVPNService will not be processed Elapsed Time: 00 00:00:00 Done: 0, Modified 0, Failed 0, Syntax errors 1 Last Syntax Error:WARNING : /grant=popuser=to : Error when checking arguments - OpenVPNService I've tried adding the machine name to the username and the service name to no avail. I'm running command prompt as an administrator. Anyone have any ideas what's going on?
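As a hedged aside, when subinacl misbehaves the same grant can often be expressed with the built-in sc tool by editing the service's security descriptor directly. The SID below is a placeholder for popuser's real SID (wmic useraccount where name='popuser' get sid prints it); in SDDL, RP = start, WP = stop, DT = pause/continue, LO = query status.

    sc sdshow OpenVPNService
    rem sdset replaces the whole descriptor, so take the string sdshow printed
    rem and insert one extra ACE for popuser before the S: part, for example:
    rem (A;;RPWPDTLO;;;S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-1001)
    sc sdset OpenVPNService "D:(A;;RPWPDTLO;;;S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-1001)(...existing ACEs from sdshow...)"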

    Read the article

  • mysql: Cannot load from mysql.proc. The table is probably corrupted

    - by Alex
MySQL was started with: /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 but when I try to do anything that touches stored routines, I get: ERROR 1548 (HY000) at line 1: Cannot load from mysql.proc. The table is probably corrupted How do I fix it? $ mysql -V mysql Ver 14.14 Distrib 5.1.58, for debian-linux-gnu (x86_64) using readline 6.2 $ lsb_release -a Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric $ sudo mysql_upgrade -uroot -p<password> --force Looking for 'mysql' as: mysql Looking for 'mysqlcheck' as: mysqlcheck Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' mysql.columns_priv OK mysql.db OK mysql.event OK mysql.func OK mysql.general_log Error : You can't use locks with log tables. status : OK mysql.help_category OK mysql.help_keyword OK mysql.help_relation OK mysql.help_topic OK mysql.host OK mysql.ndb_binlog_index OK mysql.plugin OK mysql.proc OK mysql.procs_priv OK mysql.servers OK mysql.slow_log Error : You can't use locks with log tables. status : OK mysql.tables_priv OK mysql.time_zone OK mysql.time_zone_leap_second OK mysql.time_zone_name OK mysql.time_zone_transition OK mysql.time_zone_transition_type OK mysql.user OK Running 'mysql_fix_privilege_tables'... OK $ mysqlcheck --port=3700 --socket=/srv/mysql/sockets/mysql-my-env.sock -A -udata_owner -pdata_owner <all tables> OK UPD1: For example, when I try to drop a procedure: mysql> DROP PROCEDURE IF EXISTS mysql.myproc; ERROR 1548 (HY000): Cannot load from mysql.proc. The table is probably corrupted mysql> UPD2: mysql> REPAIR TABLE mysql.proc; +------------+--------+----------+-----------------------------------------------------------------------------------------+ | Table | Op | Msg_type | Msg_text | +------------+--------+----------+-----------------------------------------------------------------------------------------+ | mysql.proc | repair | error | 1 when fixing table | | mysql.proc | repair | Error | Can't change permissions of the file '/srv/mysql/myDB/mysql/proc.MYD' (Errcode: 1) | | mysql.proc | repair | status | Operation failed | +------------+--------+----------+-----------------------------------------------------------------------------------------+ 3 rows in set (0.04 sec) This is strange, because: $ ls -l /srv/mysql/myDB/mysql/proc.MYD -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 /srv/mysql/myDB/mysql/proc.MYD UPD3: $ ls -la /srv/mysql/myDB/mysql total 8930 drwxrwxrwx 2 mysql root 2480 2012-02-21 13:13 . drwxrwxrwx 13 mysql root 504 2012-02-21 19:01 .. 
-rwxrwxrwx 1 mysql root 8820 2012-02-20 15:50 columns_priv.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 columns_priv.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 columns_priv.MYI -rwxrwxrwx 1 mysql root 9582 2012-02-20 15:50 db.frm -rwxrwxrwx 1 mysql root 8360 2011-12-08 02:14 db.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 db.MYI -rwxrwxrwx 1 mysql root 54 2011-11-12 15:42 db.opt -rwxrwxrwx 1 mysql root 10223 2012-02-20 15:50 event.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 event.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 event.MYI -rwxrwxrwx 1 mysql root 8665 2012-02-20 15:50 func.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 func.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 func.MYI -rwxrwxrwx 1 mysql root 8700 2012-02-20 15:50 help_category.frm -rwxrwxrwx 1 mysql root 21497 2011-11-12 15:42 help_category.MYD -rwxrwxrwx 1 mysql root 3072 2012-02-20 15:50 help_category.MYI -rwxrwxrwx 1 mysql root 8612 2012-02-20 15:50 help_keyword.frm -rwxrwxrwx 1 mysql root 88650 2011-11-12 15:42 help_keyword.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_keyword.MYI -rwxrwxrwx 1 mysql root 8630 2012-02-20 15:50 help_relation.frm -rwxrwxrwx 1 mysql root 8874 2011-11-12 15:42 help_relation.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_relation.MYI -rwxrwxrwx 1 mysql root 8770 2012-02-20 15:50 help_topic.frm -rwxrwxrwx 1 mysql root 414320 2011-11-12 15:42 help_topic.MYD -rwxrwxrwx 1 mysql root 20480 2012-02-20 15:50 help_topic.MYI -rwxrwxrwx 1 mysql root 9510 2012-02-20 15:50 host.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 host.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 host.MYI -rwxrwxrwx 1 mysql root 8554 2011-11-12 15:42 innodb_monitor.frm -rwxrwxrwx 1 mysql root 98304 2011-11-12 15:55 innodb_monitor.ibd -rwxrwxrwx 1 mysql root 8592 2012-02-20 15:50 inventory.frm -rwxrwxrwx 1 mysql root 76 2011-11-12 15:42 inventory.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 inventory.MYI -rwxrwxrwx 1 mysql root 8778 2012-02-20 15:50 ndb_binlog_index.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 ndb_binlog_index.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 ndb_binlog_index.MYI -rwxrwxrwx 1 mysql root 8586 2012-02-20 15:50 plugin.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 plugin.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 plugin.MYI -rwxrwxrwx 1 mysql root 9996 2012-02-20 15:50 proc.frm -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 proc.MYD -rwxrwxrwx 1 mysql root 36864 2012-02-21 13:23 proc.MYI -rwxrwxrwx 1 mysql root 8875 2012-02-20 15:50 procs_priv.frm -rwxrwxrwx 1 mysql root 1700 2011-11-12 15:42 procs_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 procs_priv.MYI -rwxrwxrwx 1 mysql root 3977704 2012-02-21 13:23 proc.TMD -rwxrwxrwx 1 mysql root 8800 2012-02-20 15:50 proxies_priv.frm -rwxrwxrwx 1 mysql root 693 2011-11-12 15:42 proxies_priv.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 proxies_priv.MYI -rwxrwxrwx 1 mysql root 8838 2012-02-20 15:50 servers.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 servers.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 servers.MYI -rwxrwxrwx 1 mysql root 8955 2012-02-20 15:50 tables_priv.frm -rwxrwxrwx 1 mysql root 5957 2011-11-12 15:42 tables_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 tables_priv.MYI -rwxrwxrwx 1 mysql root 8636 2012-02-20 15:50 time_zone.frm -rwxrwxrwx 1 mysql root 8624 2012-02-20 15:50 time_zone_leap_second.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_leap_second.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_leap_second.MYI 
-rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone.MYI -rwxrwxrwx 1 mysql root 8606 2012-02-20 15:50 time_zone_name.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_name.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_name.MYI -rwxrwxrwx 1 mysql root 8686 2012-02-20 15:50 time_zone_transition.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition.MYI -rwxrwxrwx 1 mysql root 8748 2012-02-20 15:50 time_zone_transition_type.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition_type.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition_type.MYI -rwxrwxrwx 1 mysql root 10630 2012-02-20 15:50 user.frm -rwxrwxrwx 1 mysql root 5456 2011-11-12 21:01 user.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 user.MYI
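One reading of the output above: REPAIR fails with Errcode: 1 (permission denied) while writing the replacement file, and the stale proc.TMD visible in the listing is the debris of that failed repair. A sketch of an offline repair, assuming the paths from the question and that the mysql:root ownership is the culprit:

    sudo kill $(cat /srv/mysql/pids/mysqld-myDB.pid)     # stop this mysqld instance
    sudo rm /srv/mysql/myDB/mysql/proc.TMD               # remove the leftover temp file
    sudo chown -R mysql:mysql /srv/mysql/myDB/mysql      # the files above are owned mysql:root
    sudo myisamchk --recover /srv/mysql/myDB/mysql/proc  # offline repair of the MyISAM table
    # then restart with the same mysqld_safe command as before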

    Read the article

  • nginx unknown directive ssl_protocols

    - by ghostrifle
I've compiled NGINX 1.4.1 with SSL support and wanted to secure my configuration with these lines: ssl_prefer_server_ciphers on; ssl_protocols        SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers          AES128-GCM-SHA256:AES256-GCM-SHA384:RC4-SHA:AES128-SHA256:AES256-SHA256; ssl_session_cache       shared:TLSSL:16m; ssl_session_timeout     10m; This is the error I get, and I don't understand why it comes up: nginx: [emerg] unknown directive "ssl_protocols        SSLv3" My nginx build configuration: nginx version: nginx/1.4.1 built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) TLS SNI support enabled configure arguments: --with-http_dav_module --with-http_gzip_static_module --with-http_stub_status_module --prefix=/opt/nginx --with-http_perl_module --with-http_ssl_module --with-perl=/usr/bin/perl --with-http_geoip_module --with-http_realip_module Does anyone know what I'm doing wrong?
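The giveaway is that nginx reports the directive name with the whitespace embedded in it ("ssl_protocols        SSLv3"), which usually means those are not real spaces but non-breaking spaces pasted from a web page. A sketch for finding and fixing them, assuming a UTF-8 config file at /opt/nginx/conf/nginx.conf (the path is inferred from the --prefix shown above):

    grep -nP '\xC2\xA0' /opt/nginx/conf/nginx.conf       # locate UTF-8 non-breaking spaces
    sed -i 's/\xc2\xa0/ /g' /opt/nginx/conf/nginx.conf   # replace them with plain spaces
    /opt/nginx/sbin/nginx -t                             # re-check the configuration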

    Read the article

  • Nginx compiled --with-http_spdy_module yet raise errors complains ngx_http_spdy_module

    - by c19
[emerg] 21101#0: the "spdy" parameter requires ngx_http_spdy_module in /etc/nginx/conf.d/cc.conf Isn't it the same module? It also causes a multiple-redirection error. I have no idea what is going on. Full configure arguments: nginx version: nginx/1.4.2 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-pcre --with-http_ssl_module `--with-http_spdy_module` --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --with-openssl=/usr/local/src/openssl-1.0.1e
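Worth noting: in the configure arguments above, --with-http_spdy_module appears wrapped in backquotes, which suggests the flag was pasted with literal backticks and the shell never passed it to configure, so the running binary may genuinely lack the module. A rebuild sketch (the other original options are elided for brevity):

    ./configure --prefix=/etc/nginx --with-http_ssl_module --with-http_spdy_module \
        --with-openssl=/usr/local/src/openssl-1.0.1e   # ...plus the remaining options above
    make && sudo make install
    nginx -V 2>&1 | grep -c http_spdy_module           # prints 1 if the module made it in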

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
What are the bcdedit commands necessary to set up dual boot between different installations of Windows?5 Background I recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 is installed, I want to dual-boot back to Windows 7. I have my two2 hard drives: So you can see that I have my two disks, with the partitions containing Windows: Windows 7: \\PhysicalDisk0 (partition 03) Windows 8: \\PhysicalDisk2 (partition 1) What I'm trying to figure out is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows the current configuration: C:\WINDOWS\system32>bcdedit Windows Boot Manager -------------------- identifier {bootmgr} device partition=\Device\HarddiskVolume2 description Windows Boot Manager locale en-US inherit {globalsettings} integrityservices Enable default {current} resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} displayorder {current} toolsdisplayorder {memdiag} timeout 30 Windows Boot Loader ------------------- identifier {current} device partition=C: path \WINDOWS\system32\winload.exe description Windows 8 locale en-US inherit {bootloadersettings} recoverysequence {ce153eb9-3786-11e2-87c0-e740e123299f} integrityservices Enable recoveryenabled Yes allowedinmemorysettings 0x15000075 osdevice partition=C: systemroot \WINDOWS resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} nx OptIn bootmenupolicy Standard hypervisorlaunchtype Auto I cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader. Documentation There is some documentation on Bcdedit: Technet: Command Line Reference - Bcdedit Technet: Windows Automated Installation Kit - BCDEdit Command Line Options Whitepaper - BCDEdit Commands for Boot Environment (Word Document) But they don't explain how to edit the binary boot configuration data. If I had to guess, I would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves Windows Boot Loader to be a particular boot choice, one that represents a particular installation of Windows. If that is the case, I would need to create a new Windows Boot Loader entry. This means I might want to use the /create parameter: /create Creates a new boot entry: bcdedit [/store filename] /create [id] /d description [/application apptype | /inherit [apptype] | /inherit DEVICE | /device] So I assume a syntax of: >bcdedit /create /d "The old Windows 7" /application osloader Where application can be one of the following types: Apptype Description BOOTSECTOR The boot sector application OSLOADER The Windows boot loader RESUME A resume application Unfortunately, the only documentation about osloader is "The Windows boot loader". I don't see how that can differentiate between Windows 8 on one hard drive and Windows 7 on another. The other possible parameter when using /create for a boot loader is: >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox" Unfortunately the documentation is missing for /device: /device Optional. If id is not set to a well-known identifier, the option that is used to specify the new boot entry as an additional device options entry. Since I did not set id to a well-known identifier, I must set /device to "the option that is used to specify the new boot entry as an additional device options entry". I know all those words; they're all English. But I have no idea what it is saying; those words in that order seem nonsensical. 
So I'm somewhat stymied. I don't want to be like Dan Stolts from Microsoft: I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here.... I mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do I have? Bonus Reading BCDEdit Command-Line Options Bcdedit Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit Visual BCD Editor4 Windows 7 and Windows 8 RTM Dual Boot Setup Footnotes 1 Since the Windows 8 installer would have damaged my Windows 7 install, I decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason I'm asking the question is important. 2 Really there are three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question. I only mention it in case someone wants to know why the screenshot has 3 hard drives when I only mention two. 3 I arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. I only mention partitions because I don't see how any boot loader could do its job without knowing which partition, and which folder, an installation of Windows is located in. 4 I'm asking about BCDEdit. I tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say, it's a GUI, but it still uses the same terminology as BCDEdit and requires the same knowledge that BCD doesn't document. 5 For simplicity's sake we'll assume that all installations of Windows I want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am I asking about dual booting to Linux, or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine. Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when I put the note saying that they can ignore everything after the Background out of spite. Some people are just grumpy.
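For what it's worth, the usual recipe is not /create at all but /copy: clone the existing loader entry, then repoint the clone's device and osdevice at the other partition. A sketch, assuming the Windows 7 partition shows up as D: from inside Windows 8 (the GUID is whatever /copy prints):

    bcdedit /copy {current} /d "Windows 7"
    rem /copy prints the new entry's GUID; substitute it for {guid} below
    bcdedit /set {guid} device partition=D:
    bcdedit /set {guid} osdevice partition=D:
    bcdedit /set {guid} path \Windows\system32\winload.exe
    bcdedit /set {guid} systemroot \Windows
    bcdedit /displayorder {guid} /addlast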

    Read the article

  • How do I mount an EBS root volume to a windows instance in Amazon EC2

    - by Kyle
So basically, I created a large Windows server for development, and then I created a micro Windows server for production. I set up everything how I wanted it on my development server, then I unmounted the drives and mounted them to my micro server. Now I'm trying to get back into my large Windows development server, and I'm getting the error: Invalid value 'i-4896ce28' for instanceId. Instance does not have a volume attached at root (/dev/sda1) This error pops up when I try to start my large Windows server. I've remounted the drives to the large development server, and I still get this message. I'm not really sure what to do. I've read other posts where everyone gives what look like command-line arguments and talks about other tools, and I really have no clue what any of that means, or where I even have an option to enter any commands without being logged into a specific instance.
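A sketch of the usual repair, assuming you know the volume ID of the original root volume (vol-xxxxxxxx below is a placeholder) and have the modern AWS CLI configured: attach it back at the exact device name the error mentions, then start the instance.

    aws ec2 attach-volume --volume-id vol-xxxxxxxx \
        --instance-id i-4896ce28 --device /dev/sda1
    aws ec2 start-instances --instance-ids i-4896ce28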

    Read the article

  • Vim: Use different ~/.vim/plugin/ directories for different versions of vim?

    - by Stefan Lasiewski
Like many of you, my custom Vim configuration is stored in my ~/.vimrc, with the plugins, colors, etc. stored under ~/.vim/plugins, ~/.vim/colors, etc. I want to share a single Vim configuration among many servers. Some of these servers run Vim 7, some run the older Vim 6. Most Vim plugins are intended for Vim 7, but older versions still exist for those of us on older systems. See DirDiff for an example. If I am on a system which runs Vim 6, how can I configure Vim to use only Vim 6-compatible plugins? I was thinking about storing older plugins in a subdirectory like ~/.vim/plugins6/ and keeping the Vim 7 plugins in ~/.vim/plugins, but then how can I tell Vim 6 to ignore ~/.vim/plugins and use ~/.vim/plugins6 instead?
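One way to do this, as a sketch: keep version-specific trees and add the right one to 'runtimepath' from ~/.vimrc, since Vim searches every runtimepath entry for a plugin/ subdirectory. The directory names here are illustrative:

    " in ~/.vimrc
    if v:version >= 700
      set runtimepath+=~/.vim7    " holds plugin/, colors/, ... for Vim 7
    else
      set runtimepath+=~/.vim6    " holds Vim 6-compatible plugin copies
    endif

Note this does not stop either version loading ~/.vim/plugin itself, so the version-sensitive plugins would need to live only in the versioned trees.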

    Read the article

  • How do I resolve the message "Validating WSFC quorum vote configuration - Action Required."

    - by Rob Boek
I have a 3-node AlwaysOn Availability Group on a 3-node WSFC using node majority. Two nodes are set up as synchronous with automatic failover; the third is set up as asynchronous with manual failover. When I try to fail over using the GUI, I get a warning as shown in the screenshot. There is no warning or error if I fail over with T-SQL. Adding a file share to the quorum doesn't help. The only way I can resolve the warning is to remove the asynchronous SQL instance from the 3rd node (it remains part of the WSFC). Either way, the AlwaysOn dashboard says quorum is OK. Am I missing something? Is this a bug in the GUI that I should just ignore? Clicking "Action Required" gives the following error:
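One commonly suggested remedy, sketched in PowerShell below, is to strip the quorum vote from the asynchronous DR node so only the two synchronous nodes (plus a witness, if you add one) vote. The node name is a placeholder, and on Server 2008 R2 this needs the node-weight hotfix (KB2494036):

    Import-Module FailoverClusters
    (Get-ClusterNode -Name "DRNode").NodeWeight = 0    # the DR node no longer votes
    Get-ClusterNode | Format-Table Name, NodeWeight    # verify the remaining votes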

    Read the article

  • How to create multiboot flash drive

    - by Nrew
    I've found a guide here: http://www.pendrivelinux.com/boot-multiple-iso-from-usb-multiboot-usb/ And found this menu.lst in my flash drive, which seems to be the one that I'm seeing when I boot using my flash drive: # This Menu Created by Lance http://www.pendrivelinux.com # Ongoing Suggested Menu Entries and the Suggestor are noted! default 0 timeout 30 color NORMAL HIGHLIGHT HELPTEXT HEADING splashimage=(hd0,0)/splash.xpm.gz foreground=FFFFFF background=0066FF title Memtest86+ find --set-root /memtest86+-4.00.iso map --mem /memtest86+-4.00.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by madprofessor title Boot Clonezilla root (hd0,0) kernel /clonezilla/live/vmlinuz live-media-path=clonezilla/live bootfrom=/dev/sd boot=live union=aufs noprompt ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=791 ip=frommedia initrd /clonezilla/live/initrd.img title Parted Magic 4.9 (Partition Tools) find --set-root /pmagic-4.9.iso map /pmagic-4.9.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Deb title Partition Wizard 4.2 (Partition Tools) find --set-root /pwhe42.iso map /pwhe42.iso (hd32) map --hook root (hd32) chainloader (hd32) title Balder DOS image (FreeDOS) map --unsafe-boot /balder10.img (fd0) map --hook chainloader --force (fd0)+1 rootnoverify (fd0) # Suggested by Szymon Silski title Linux Mint 8 find --set-root /LinuxMint-8.iso map /LinuxMint-8.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/mint.seed boot=casper persistent iso-scan/filename=/LinuxMint-8.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 find --set-root /ubuntu-10.04-desktop-i386.iso map /ubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 10.04 (XFCE Desktop) find --set-root /xubuntu-10.04-desktop-i386.iso map /xubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 10.04 (KDE Desktop) find --set-root /kubuntu-10.04-desktop-i386.iso map /kubuntu-10.04-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-10.04-desktop-i386.iso splash initrd /casper/initrd.lz # Suggested by Ambriel title Lubuntu 10.04 (LXDE Lightweight Desktop) find --set-root /lubuntu-10.04.iso map /lubuntu-10.04.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/lubuntu.seed boot=casper persistent iso-scan/filename=/lubuntu-10.04.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Netbook Remix (NetBook Distro) find --set-root /ubuntu-10.04-netbook-i386.iso map /ubuntu-10.04-netbook-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-10.04-netbook-i386.iso splash initrd /casper/initrd.lz title Ubuntu 10.04 Server Edition Installer (32 bit Installer Only) find --set-root /ubuntu-10.04-server-i386.iso map /ubuntu-10.04-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-10.04-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 find 
--set-root /ubuntu-9.10-desktop-i386.iso map /ubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Xubuntu 9.10 find --set-root /xubuntu-9.10-desktop-i386.iso map /xubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/xubuntu.seed boot=casper persistent iso-scan/filename=/xubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz title Kubuntu 9.10 find --set-root /kubuntu-9.10-desktop-i386.iso map /kubuntu-9.10-desktop-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/kubuntu.seed boot=casper persistent iso-scan/filename=/kubuntu-9.10-desktop-i386.iso splash initrd /casper/initrd.lz # Ubuntu Server and Netbook Remix suggested by Wojciech Holek title Ubuntu 9.10 Server Edition Installer (Installer Only) find --set-root /ubuntu-9.10-server-i386.iso map /ubuntu-9.10-server-i386.iso (0xff) map --hook root (0xff) kernel /install/vmlinuz file=/cdrom/preseed/ubuntu-server.seed boot=install iso-scan/filename=/ubuntu-9.10-server-i386.iso splash initrd /install/initrd.gz title Ubuntu 9.10 Netbook Remix (NetBook Distro) find --set-root /ubuntu-9.10-netbook-remix-i386.iso map /ubuntu-9.10-netbook-remix-i386.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/netbook-remix.seed boot=casper persistent iso-scan/filename=/ubuntu-9.10-netbook-remix-i386.iso splash initrd /casper/initrd.lz title Ubuntu 9.10 Rescue Remix (Recovery Tools) find --set-root /ubuntu-rescue-remix-9-10-revision1.iso map /ubuntu-rescue-remix-9-10-revision1.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper iso-scan/filename=/ubuntu-rescue-remix-9-10-revision1.iso splash initrd /casper/initrd.lz title DSL 4.4.10 find --set-root /dsl-4.4.10-initrd.iso map --mem /dsl-4.4.10-initrd.iso (hd32) map --hook root (hd32) chainloader (hd32) title AVG Rescue CD (Anti-Virus + Anti-Spyware) find --set-root /avg_arl_en_90_100114.iso map /avg_arl_en_90_100114.iso (hd32) map --hook chainloader (hd32) title Ultimate Boot CD 4.11 find --set-root /ubcd411.iso map /ubcd411.iso (hd32) map --hook chainloader (hd32) title OphCrack XP 2.3.1 (XP Password Cracker) find --set-root /ophcrack-xp-livecd-2.3.1.iso map /ophcrack-xp-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz title OphCrack Vista 2.3.1 (Vista Password Cracker) find --set-root /ophcrack-vista-livecd-2.3.1.iso map /ophcrack-vista-livecd-2.3.1.iso (0xff) map --hook root (0xff) kernel /boot/bzImage rw root=/dev/null vga=normal lang=C kmap=us screen=1024x768x16 autologin initrd /boot/rootfs.gz # Suggested by Greg Steer title Offline NT Password & Registy Editor find --set-root /cd080802.iso map /cd080802.iso (hd32) map --hook chainloader (hd32) title SliTaz 2.0 find --set-root /slitaz-2.0.iso map --mem /slitaz-2.0.iso (hd32) map --hook chainloader (hd32) title Riplinux 9.3 find --set-root /RIPLinuX-9.3.iso map --heads=0 --sectors-per-track=0 /RIPLinuX-9.3.iso (0xff) || map --heads=0 --sectors-per-track=0 --mem /RIPLinuX-9.3.iso (0xff) map --hook chainloader (0xff) # Suggested by Sunny title YlmF (Windows Like OS) find --set-root /YlmF_OS_EN_v1.0.iso map /YlmF_OS_EN_v1.0.iso (0xff) map --hook root (0xff) kernel /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed 
boot=casper persistent iso-scan/filename=/YlmF_OS_EN_v1.0.iso splash initrd /casper/initrd.lz # Suggested by Martin Andersson title DBAN 1.0.7 (Drive Nuker) find --set-root /dban-1.0.7_i386.iso map --mem /dban-1.0.7_i386.iso (hd32) map --hook root (hd32) chainloader (hd32) # Suggested by Robin McGough title xPUD 0.9.2 (NetBook Distro) find --set-root --ignore-floppies --ignore-cd /xpud-0.9.2.iso map --heads=0 --sectors-per-track=0 /xpud-0.9.2.iso (hd32) map --hook chainloader (hd32) title Puppy 4.3.1 find --set-root /puppy/pup-431.sfs kernel /puppy/vmlinuz initrd /puppy/initrd.gz # Suggested by Relst title Run a Linux OS from the Internet kernel /gpxe.lkrn I also put some .iso files for OS installers (Windows XP SP2 and Ubuntu 10.04) on the drive, but they didn't show up in the list when I booted. Do I need to extract the .iso files and put them in their respective folders? Do I need to add entries for the new OSes to the menu.lst? How do I add an ISO image (OS) to the menu.lst? Before adding the .iso files I first made a folder named "Windows xp sp2" and placed the .iso files in there. Please help: I think I need to add the folder name or the file name to the menu.lst, but I don't know how.
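Each bootable image needs its own title block; following the pattern already in this menu.lst, a minimal grub4dos entry for a live/utility ISO placed in the root of the flash drive would look like the sketch below (the filename is a placeholder). Note that plain ISO mapping like this generally works for Linux live CDs but usually not for a Windows XP installer ISO, which needs a different mechanism.

    title My Extra ISO
    find --set-root /my-extra.iso
    map /my-extra.iso (hd32)
    map --hook
    root (hd32)
    chainloader (hd32)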

    Read the article

  • Can't login to Manager App in Tomcat 6.0.18

    - by Rafael Almeida
Folks, I can't log in to the manager app (localhost:8080/manager/html) in my Tomcat. More specifically, it asks for my username and password, and the ones that are supposed to be correct aren't accepted. Here's what I already checked: I tried editing my conf/tomcat-users.xml to add my user/role. Here's the current content of this file: <?xml version='1.0' encoding='utf-8'?> <tomcat-users> <role rolename="manager"/> <user username="tomcat" password="s3cret" roles="manager"/> </tomcat-users> I thought that maybe it wasn't looking at this XML, but somewhere else. So, I came to know about Realms. The Realm part of my configuration is now: <Realm className="org.apache.catalina.realm.MemoryRealm" /> What am I missing?
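Two things worth checking against the snippet above: Tomcat only reads the realm's user file at startup, so a restart is needed after editing tomcat-users.xml, and the stock Tomcat 6 server.xml authenticates through a UserDatabaseRealm wired to that same file rather than a MemoryRealm. The default realm element looks roughly like this:

    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           resourceName="UserDatabase"/>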

    Read the article

  • Force Capistrano to ask for password

    - by Moshe Katz
I am deploying with Capistrano to a new server and having the following issue. Currently, I cannot add an SSH key to the server to log in with, so I must use password authentication. However, I do have a key for another server saved in my local user account's .ssh directory. Here is the error I get when I try to log in: C:\Web\CampMaRabu>cap deploy:setup * executing `deploy:setup' * executing "mkdir -p /home2/webapp1 /home2/webapp1/releases /home2/webapp1/shared /home2/webapp1/shared/system /home2/webapp1/shared/log /home2/webapp1/shared/pids" servers: ["myserver.example.com"] connection failed for: myserver.example.com (OpenSSL::PKey::PKeyError: not a public key "C:/Users/MyAccount/.ssh/id_rsa.pub") How can I get Capistrano to ignore the existence of the key I have and let me log in with a password instead? I tried adding set :password, "myp@ssw0rd" to deploy.rb and it didn't help.
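A sketch of forcing password authentication in Capistrano 2 (which uses Net::SSH underneath): restrict the auth methods and stop it from offering the unrelated key. The values are placeholders.

    # config/deploy.rb
    set :user, "deployuser"
    set :password, "myp@ssw0rd"
    ssh_options[:auth_methods] = %w(password keyboard-interactive)
    ssh_options[:keys] = []    # don't offer ~/.ssh/id_rsa at all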

    Read the article

  • Redirect as a backup trick? w/o modifying DNS?

    - by acidzombie24
I specifically looked up how to do something like this ( Can you set a backup ip for your server in DNS? ) and the answer basically was that you can't. If I, say, specify 2 IP addresses, could I somehow use an HTTP response header to ignore one temporarily (say 5 minutes) and go to the other IP address? Or maybe I can play dead, though I'm unsure how to play dead using nginx. I'd then like the backup box, once it notices the main box is down, to become available as some kind of read-only server. I'm sure something like this has been implemented; I am just wondering how I might implement it with 2 boxes. I'm sure it isn't very difficult? How might I redirect traffic from a backup box to my main server without modifying the DNS?
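For context on the "two IP addresses" idea: publishing two A records only gives round-robin, not failover; resolvers hand out both addresses regardless of which box is up, and there is no standard HTTP header that tells a client to retry the other record. A zone-file sketch using documentation addresses:

    www  60  IN  A  203.0.113.10  ; primary
    www  60  IN  A  203.0.113.20  ; backup, but clients may pick it even when the primary is up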

    Read the article

  • tmux combine multiple commands to one vi-copy command or tmux command to yank a line

    - by MIkhail
In tmux, I know we can chain multiple commands to a key by using \; (see here). But in vi copy mode, I want one single key press to go to the beginning of the current line, begin-selection, go to end-of-line, and copy-selection. If I put the following in tmux.conf: bind-key -t vi-copy 's' start-of-line \; begin-selection \; end-of-line \; copy-selection \; it gives me this error: 69: usage: bind-key [-cnr] [-t key-table] key command [arguments] Alternatively, is there any other way to yank the current line with a single key?
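Background on the error: in the vi-copy key table each key maps to exactly one copy-mode command, so the chained form is rejected. Newer tmux (2.4 and later) replaced that table with copy-mode-vi and send-keys -X, which can be chained; a sketch, assuming tmux >= 2.4:

    # tmux >= 2.4 only; the vi-copy table no longer exists there
    bind-key -T copy-mode-vi Y send-keys -X select-line \; send-keys -X copy-selection-and-cancel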

    Read the article

  • OpenVPN make redirect-gateway optional

    - by Tuinslak
    Hi there, I'm currently running an OpenVPN server for multiple clients. All traffic is directed through the VPN (it's set up as gateway; push "redirect-gateway def1"). So far, all is working fine. However, I'd like to connect a couple of servers to this virtual private network, without these servers using the OVPN daemon as gateway. These servers have to be accessible from both their WAN as well as their LAN IP address. Certain services will be accessible only from the LAN side. Is there any way, for a client, to ignore the push redirect-gateway option? Kind regards, Tuinslak
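For the server clients specifically, OpenVPN's client-side route-nopull option declines the pushed routes (including redirect-gateway) while leaving the tunnel itself up; a sketch of the relevant client config lines, with a placeholder VPN subnet:

    # client.conf on the servers that must keep their own default gateway
    route-nopull
    route 10.8.0.0 255.255.255.0   # re-add only the VPN subnet so VPN peers stay reachable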

    Read the article

  • PAM_LDAP error trying to bind ?

    - by billyduc
I get this error when I SSH to my LDAP client using a login name from the LDAP server. My LDAP client is running Ubuntu 9.10 Karmic; my LDAP server is Fedora Core 4, running Fedora Directory Server. ssh [email protected] cat /var/log/auth.log //on the client Dec 18 10:24:17 ubuntu-ltsp sshd[4527]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=billyhost.local user=billyduc Dec 18 10:24:17 ubuntu-ltsp sshd[4527]: pam_ldap: error trying to bind as user "uid=billyduc,dc=mydomain,dc=com" (Invalid credentials) Dec 18 10:24:18 ubuntu-ltsp sshd[4527]: Failed password for billyduc from 192.168.5.121 port 51449 ssh2 Here's my /etc/pam.d/sshd: cat /etc/pam.d/sshd auth [success=1 default=ignore] pam_unix.so auth required pam_ldap.so use_first_pass auth required pam_permit.so account sufficient pam_permit.so I also edited /etc/ssh/sshd_config on both client and server: PasswordAuthentication yes So I think something goes wrong with the password when the SSH server does the check.
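It can help to test the exact bind DN and password outside PAM first; a sketch using OpenLDAP's client tools, where the hostname is a placeholder. If this also fails with Invalid credentials (error 49), the stored password or DN is the problem rather than the PAM stack.

    ldapsearch -x -H ldap://ldap.mydomain.com \
        -D "uid=billyduc,dc=mydomain,dc=com" -W \
        -b "dc=mydomain,dc=com" "(uid=billyduc)"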

    Read the article

  • SSH - using keys works, but not in a script

    - by Garfonzo
I'm kind of confused. I have set up public keys between two servers and it works great, sort of. It only works if I ssh manually from a terminal. When I put the ssh command into a Python script, it asks me for a password to log in. The script uses rsync to sync a directory from one server to the other. The manual ssh command that works (no password prompt, automatic login): ssh -p 1234 [email protected] In the Python script: rsync --ignore-existing --delete --stats --progress -rp -e "ssh -p 1234" [email protected]:/directory/ /other/directory/ What gives? (Obviously, the ssh details are fake.)
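A common cause is the script running as a different user (cron, sudo, a service account), so ssh consults the wrong ~/.ssh. A sketch that pins the identity file and disables prompting, with placeholder user, host, and key path, so a key failure surfaces as an error instead of a password prompt:

    rsync --ignore-existing --delete --stats --progress -rp \
        -e "ssh -p 1234 -i /home/garfonzo/.ssh/id_rsa -o BatchMode=yes" \
        user@remote:/directory/ /other/directory/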

    Read the article

  • Glibc importance of error ...

    - by Oz123
Hi everyone, I am following LFS 6.7, and I have reached the point where I compile glibc-2.12.1. I mounted the LFS partition with the atime option; here is confirmation of that, I think: /dev/sdb1 on /mnt/lfs type ext4 (rw) I get the following errors when running the tests, and I have no clue whether I should try to resolve them or just ignore them and go on: rpc/types.h sunrpc/rpc/svc_auth.h sunrpc/rpcsvc/bootparam.h sysvipc/sys/ipc.h \ sysvipc/sys/msg.h sysvipc/sys/sem.h sysvipc/sys/shm.h termios/termios.h \ termios/sys/termios.h termios/sys/ttychars.h time/time.h time/sys/time.h \ time/sys/timeb.h wcsmbs/wchar.h wctype/wctype.h > \ /sources/glibc-build/begin-end-check.out make[1]: Target `check' not remade because of errors. make[1]: Leaving directory `/sources/glibc-2.12.1' make: *** [check] Error 2 root:/sources/glibc-build# grep Error glibc-check-log make[2]: *** [/sources/glibc-build/math/test-float.out] Error 1 make[2]: *** [/sources/glibc-build/math/test-ifloat.out] Error 1 make[1]: *** [math/tests] Error 2 make[2]: [/sources/glibc-build/posix/annexc.out] Error 1 (ignored) make: *** [check] Error 2 Thanks in advance, Oz

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >