Search Results

Search found 2823 results on 113 pages for 'perforce branch spec'.


  • How to get the earliest checkout-able revision info from subversion?

    - by zhongshu
    I want to check an SVN URL, get the earliest revision, and check it out. I don't want to use HEAD, because I will compare the earliest revision to others. So I use "svn info" to get the "Last Changed Rev" for the URL, like this:

        D:\Project>svn info svn://.../branches/.../path
        Path: ...
        URL: svn://.../branches/.../path
        Repository Root: svn://yt-file-srv/
        Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
        Revision: 2400
        Node Kind: directory
        Last Changed Author: xxx
        Last Changed Rev: 2396
        Last Changed Date: 2010-03-12 09:31:52 +0800

    But I found that revision 2396 is not checkout-able for the branch: the path is in a branch copied from trunk, and 2396 is a revision modified in the trunk. So when I use svn checkout -r 2396, I get a working copy of the path in the trunk, and then I cannot check in to the branch.

        D:\Project>svn checkout svn://.../branches/.../path -r 2396 workcopy
        .....
        .....
        D:\Project>svn info workcopy
        Path: workcopy
        URL: svn://.../trunk/.../path
        Repository Root: svn://yt-file-srv/
        Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
        Revision: 2396
        Node Kind: directory
        Schedule: normal
        Last Changed Author: xxx
        Last Changed Rev: 2396
        Last Changed Date: 2010-03-12 09:31:52 +0800

    So my question is how to get a checkout-able revision for the branch path; for this example, I want to get 2397 (the revision at which the copy occurred). I know "svn log" can get this info, but "svn log" output can be very long and is harder to parse than "svn info". I just want to know the earliest checkout-able revision for the path.
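
    One possible approach (a sketch, not part of the original question): "svn log --stop-on-copy" stops at the revision where the branch was created, so the last entry it prints is the earliest revision that exists on the branch path:

        # -q prints only revision header lines (newest first); the final one is
        # the oldest revision on the branch, i.e. the copy revision (2397 here).
        svn log -q --stop-on-copy svn://.../branches/.../path | grep '^r' | tail -n 1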

  • How to use git to manage one codebase but have different environments

    - by emostar
    I'm using git for a personal project at the moment and have run into the problem of having one codebase for two different environments; I was wondering what the cleanest way to use git here would be.

    Main Desktop: I use this machine for most of my development. I have a git repository here that I cloned off of an empty repository on my internal server. I do most of my work here and push back to the internal server, so I can use that as the master of truth and to ease making backups.

    Laptop: I sometimes want to code on the road, so I did a clone from the internal server and created a new branch called "laptop-branch". Unfortunately, some directories and the MSVC++ version are different from the Main Desktop environment. I just modified the files in "laptop-branch" and committed them there.

    Now I did a lot of changes while on vacation with my laptop and want to push them to origin, but I don't want the changes I made that were related to directories and compiler versions to be pushed back to origin. What would be the best way to get this done?
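
    One hedged way to handle this (a sketch; ENV_COMMIT is a hypothetical commit id marking the last laptop-only change): keep the environment tweaks at the bottom of the branch and replay only the real work onto master with rebase --onto:

        git checkout laptop-branch
        git branch vacation-work                       # label the current tip
        # replay only the commits after ENV_COMMIT onto master,
        # leaving the environment-specific commits behind
        git rebase --onto master ENV_COMMIT vacation-work
        git push origin vacation-work:master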

  • Manipulating gl_LightSource[gl_MaxLights]

    - by pion
    The OpenGL® Shading Language spec, version 1.20 (http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf), shows the following:

        struct gl_LightSourceParameters {
            vec4 ambient;               // Acli
            vec4 diffuse;               // Dcli
            vec4 specular;              // Scli
            vec4 position;              // Ppli
            vec4 halfVector;            // Derived: Hi
            vec3 spotDirection;         // Sdli
            float spotExponent;         // Srli
            float spotCutoff;           // Crli  (range: [0.0,90.0], 180.0)
            float spotCosCutoff;        // Derived: cos(Crli)  (range: [1.0,0.0], -1.0)
            float constantAttenuation;  // K0
            float linearAttenuation;    // K1
            float quadraticAttenuation; // K2
        };
        uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

    Also, http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir1 shows the following code snippet:

        lightDir = normalize(vec3(gl_LightSource[0].position));

    https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.10 shows many uniform* functions, but nothing seems to deal with an array of uniform variables like gl_LightSource[0]. How do we set the gl_LightSource[0] fields in WebGL using JavaScript? For example, gl_LightSource[0].position. Thanks in advance for your help. Note: cross-posted at http://www.khronos.org/message_boards/viewtopic.php?f=43&t=3373
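
    For what it's worth, a hedged sketch (the uLights name and light count are made up, not from any spec): WebGL's GLSL ES has no built-in gl_LightSource array, so you declare your own uniform struct array in the shader and address each field by its full name from JavaScript:

        // GLSL (in your shader):
        //   struct LightSource { vec4 position; vec4 diffuse; };
        //   uniform LightSource uLights[2];
        // JavaScript: look up each struct member of each array element by name.
        var posLoc = gl.getUniformLocation(program, "uLights[0].position");
        gl.uniform4fv(posLoc, [0.0, 10.0, 5.0, 1.0]);
        var difLoc = gl.getUniformLocation(program, "uLights[0].diffuse");
        gl.uniform4fv(difLoc, [1.0, 1.0, 1.0, 1.0]);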

  • AuthnRequest Settings in OIF / SP

    - by Damien Carru
    In this article, I will list the various OIF/SP settings that affect how an AuthnRequest message is created by OIF in a Federation SSO flow. The AuthnRequest message is used by an SP to start a Federation SSO operation and to indicate to the IdP how the operation should be executed:

      • How the user should be challenged at the IdP
      • Whether or not the user should be challenged at the IdP, even if a session already exists at the IdP for this user
      • Which NameID format should be requested in the SAML Assertion
      • Which binding (Artifact or HTTP-POST) should be requested from the IdP to send the Assertion
      • Which profile should be used by OIF/SP to send the AuthnRequest message

    Enjoy the reading!

    Protocols

    The SAML 2.0, SAML 1.1 and OpenID 2.0 protocols define different message elements and rules that allow an administrator to influence the Federation SSO flows in different manners when the SP triggers an SSO operation:

      • SAML 2.0 allows extensive customization via the AuthnRequest message
      • SAML 1.1 does not allow any customization, since the specifications do not define an authentication request message
      • OpenID 2.0 allows for some customization, mainly via OpenID 2.0 extensions such as PAPE or UI

    SAML 2.0

    OIF/SP allows customization of the following elements of the SAML 2.0 AuthnRequest message:

      • ForceAuthn: Boolean indicating whether or not the IdP should force the user to re-authenticate, even if the user still has a valid session. By default set to false.
      • IsPassive: Boolean indicating whether or not the IdP is allowed to interact with the user as part of the Federation SSO operation. If true, the Federation SSO operation might result in a failure with the NoPassive error code, because the IdP will not have been able to identify the user. By default set to false.
      • RequestedAuthnContext: Element indicating how the user should be challenged at the IdP. If the SP requests a Federation Authentication Method unknown to the IdP, or one for which the IdP is not configured, the Federation SSO flow will result in a failure with the NoAuthnContext error code. By default missing.
      • NameIDPolicy: Element indicating which NameID format the IdP should include in the SAML Assertion. If the SP requests a NameID format unknown to the IdP, or one for which the IdP is not configured, the Federation SSO flow will result in a failure with the InvalidNameIDPolicy error code. If missing, the IdP will generally use the default NameID format configured for this SP partner at the IdP. By default missing.
      • ProtocolBinding: Element indicating which SAML binding should be used by the IdP to redirect the user to the SP with the SAML Assertion. Set to Artifact or HTTP-POST. By default set to HTTP-POST.

    OIF/SP also allows the administrator to configure the server to:

      • Set which binding should be used by OIF/SP to redirect the user to the IdP with the SAML 2.0 AuthnRequest message: Redirect or HTTP-POST. By default set to Redirect.
      • Set which binding should be used by OIF/SP to redirect the user to the IdP during logout with SAML 2.0 Logout messages: Redirect or HTTP-POST. By default set to Redirect.

    SAML 1.1

    The SAML 1.1 specifications do not define a message for the SP to send to the IdP when a Federation SSO operation is started. As such, there is no way to configure OIF/SP to affect the start of the Federation SSO flow.
    OpenID 2.0

    OpenID 2.0 defines several extensions that can be used by the SP/RP to affect how the Federation SSO operation will take place:

      • OpenID request mode: String indicating whether the IdP/OP can visually interact with the user. checkid_immediate does not allow the IdP/OP to interact with the user; checkid_setup allows user interaction. By default set to checkid_setup.
      • PAPE extension:
          • max_auth_age: Integer indicating, in seconds, the maximum acceptable time since the user last authenticated at the IdP. If the time since the user last authenticated at the IdP exceeds max_auth_age, the user must be re-challenged. OIF/SP will set this attribute to 0 if the administrator configured ForceAuthn to true; otherwise this attribute won't be set. By default missing.
          • preferred_auth_policies: Contains a Federation Authentication Method element indicating how the user should be challenged at the IdP. By default missing. Only specified in the OpenID request if the IdP/OP declares support for PAPE in its XRDS document (when OpenID discovery is used).
      • UI extension:
          • Popup mode: Boolean indicating that popup mode is enabled for the Federation SSO. By default missing.
          • Language preference: String containing the preferred language, set based on the browser's language preferences. By default missing.
          • Icon: Boolean indicating whether the icon feature is enabled; in that case, the IdP/OP would look at the SP/RP XRDS document to determine how to retrieve the icon. By default missing.
          • These are only specified in the OpenID request if the IdP/OP declares support for the UI extension in its XRDS document (when OpenID discovery is used).

    ForceAuthn and IsPassive

    WLST Command

    OIF/SP provides the WLST configureIdPAuthnRequest() command to set:

      • ForceAuthn, as a boolean: in a SAML 2.0 AuthnRequest, the ForceAuthn field will be set to true or false; in an OpenID 2.0 request, if ForceAuthn in the configuration was set to true, the max_auth_age field of the PAPE request will be set to 0, otherwise max_auth_age won't be set.
      • IsPassive, as a boolean: in a SAML 2.0 AuthnRequest, the IsPassive field will be set to true or false; in an OpenID 2.0 request, if IsPassive in the configuration was set to true, the mode field of the OpenID request will be set to checkid_immediate, otherwise to checkid_setup.

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the out-of-the-box (OOTB) configuration.
    Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    Let's configure OIF/SP for that IdP Partner so that the SP will require the IdP to re-challenge the user, even if the user is already authenticated:

    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the configureIdPAuthnRequest() command: configureIdPAuthnRequest(partner="AcmeIdP", forceAuthn="true")
    5. Exit the WLST environment: exit()

    After the changes, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ForceAuthn="true" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    To display or delete the ForceAuthn/IsPassive settings, perform the following operations:

    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the configureIdPAuthnRequest() command:
       • To display the ForceAuthn/IsPassive settings on the partner: configureIdPAuthnRequest(partner="AcmeIdP", displayOnly="true")
       • To delete the ForceAuthn/IsPassive settings from the partner: configureIdPAuthnRequest(partner="AcmeIdP", delete="true")
    5. Exit the WLST environment: exit()

    Requested Fed Authn Method

    In my earlier "Fed Authentication Method Requests in OIF / SP" article, I discussed how OIF/SP can be configured to request a specific Federation Authentication Method from the IdP when starting a Federation SSO operation, by setting elements in the SSO request message.
    WLST Command

    The OIF WLST commands that can be used are:

      • setIdPPartnerProfileRequestAuthnMethod(), which configures the requested Federation Authentication Method in a specific IdP Partner Profile and accepts the following parameters:
          • partnerProfile: name of the IdP Partner Profile
          • authnMethod: the Federation Authentication Method to request
          • displayOnly: an optional parameter indicating whether the method should display the current requested Federation Authentication Method instead of setting it
          • delete: an optional parameter indicating whether the method should delete the current requested Federation Authentication Method instead of setting it
      • setIdPPartnerRequestAuthnMethod(), which configures the specified IdP Partner entry with the requested Federation Authentication Method and accepts the following parameters:
          • partner: name of the IdP Partner
          • authnMethod: the Federation Authentication Method to request
          • displayOnly: an optional parameter indicating whether the method should display the current requested Federation Authentication Method instead of setting it
          • delete: an optional parameter indicating whether the method should delete the current requested Federation Authentication Method instead of setting it

    This applies to the SAML 2.0 and OpenID 2.0 protocols. See the "Fed Authentication Method Requests in OIF / SP" article for more information.

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    Let's configure OIF/SP for that IdP Partner so that the SP will request that the IdP use a mechanism mapped to the urn:oasis:names:tc:SAML:2.0:ac:classes:X509 Federation Authentication Method to authenticate the user:

    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setIdPPartnerRequestAuthnMethod() command: setIdPPartnerRequestAuthnMethod("AcmeIdP", "urn:oasis:names:tc:SAML:2.0:ac:classes:X509")
    5. Exit the WLST environment: exit()

    After the changes, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
          <samlp:RequestedAuthnContext Comparison="minimum">
            <saml:AuthnContextClassRef xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
              urn:oasis:names:tc:SAML:2.0:ac:classes:X509
            </saml:AuthnContextClassRef>
          </samlp:RequestedAuthnContext>
        </samlp:AuthnRequest>

    NameID Format

    The SAML 2.0 protocol allows the SP to request that the IdP use a specific NameID format when the Assertion is issued.
    Note: SAML 1.1 and OpenID 2.0 do not provide such a mechanism.

    Configuring OIF

    The administrator can configure OIF/SP to request a NameID format in the SAML 2.0 AuthnRequest via:

      • The OAM Administration Console, in the IdP Partner entry
      • The OIF WLST setIdPPartnerNameIDFormat() command, which modifies the IdP Partner configuration

    OAM Administration Console

    To configure the requested NameID format via the OAM Administration Console, perform the following steps:

    1. Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole
    2. Navigate to Identity Federation -> Service Provider Administration
    3. Open the IdP Partner you wish to modify
    4. In the Authentication Request NameID Format dropdown box, select one of the following values:
       • None: the NameID format will be set to Default
       • Email Address: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
       • X.509 Subject: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
       • Windows Name Qualifier: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName
       • Kerberos: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
       • Transient: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:transient
       • Unspecified: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
       • Custom: a field will appear allowing the administrator to indicate the custom NameID format to use; the NameID format will be set to the specified format
       • Persistent: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
       I selected Email Address in this example.
    5. Save

    WLST Command

    To configure the requested NameID format via the OIF WLST setIdPPartnerNameIDFormat() command, perform the following steps:

    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the setIdPPartnerNameIDFormat() command: setIdPPartnerNameIDFormat("PARTNER", "FORMAT", customFormat="CUSTOM")
       • Replace PARTNER with the IdP Partner name
       • Replace FORMAT with one of the following:
           • orafed-none: the NameID format will be set to Default
           • orafed-emailaddress: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
           • orafed-x509: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
           • orafed-windowsnamequalifier: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:WindowsDomainQualifiedName
           • orafed-kerberos: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
           • orafed-transient: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:transient
           • orafed-unspecified: the NameID format will be set to urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
           • orafed-custom: the NameID format will be set to the custom format specified via customFormat
           • orafed-persistent: the NameID format will be set to urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
       • customFormat will need to be set if FORMAT is set to orafed-custom
       An example would be: setIdPPartnerNameIDFormat("AcmeIdP", "orafed-emailaddress")
    5. Exit the WLST environment: exit()

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration.
    Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    After the changes performed either via the OAM Administration Console or via the OIF WLST setIdPPartnerNameIDFormat() command, where Email Address would be requested as the NameID format, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ForceAuthn="false" IsPassive="false" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress" AllowCreate="true"/>
        </samlp:AuthnRequest>

    Protocol Binding

    The SAML 2.0 specifications define a way for the SP to request which binding should be used by the IdP to redirect the user to the SP with the SAML 2.0 Assertion: the ProtocolBinding attribute indicates the binding the IdP should use. It is set to:

      • Either urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST for HTTP-POST
      • Or urn:oasis:names:tc:SAML:2.0:bindings:Artifact for Artifact

    The SAML 2.0 specifications also define different ways to redirect the user from the SP to the IdP with the SAML 2.0 AuthnRequest message, as the SP can send the message:

      • Either via HTTP Redirect
      • Or via HTTP POST

    (Other bindings can theoretically be used, such as Artifact, but these are not used in practice.)

    Configuring OIF

    OIF can be configured:

      • Via the OAM Administration Console or the OIF WLST configureSAMLBinding() command, to set the Assertion Response binding to be used
      • Via the OIF WLST configureSAMLBinding() command, to indicate how the SAML AuthnRequest message should be sent

    Note: the binding for sending the SAML 2.0 AuthnRequest message will also be used to send the SAML 2.0 LogoutRequest and LogoutResponse messages.
    OAM Administration Console

    To configure the SSO Response/Assertion binding via the OAM Administration Console, perform the following steps:

    1. Go to the OAM Administration Console: http(s)://oam-admin-host:oam-admin-port/oamconsole
    2. Navigate to Identity Federation -> Service Provider Administration
    3. Open the IdP Partner you wish to modify
    4. Check the "HTTP POST SSO Response Binding" box to request that the IdP return the SSO Response via HTTP POST; otherwise uncheck it to request Artifact
    5. Save

    WLST Command

    To configure the SSO Response/Assertion binding as well as the AuthnRequest binding via the OIF WLST configureSAMLBinding() command, perform the following steps:

    1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
    2. Connect to the WLS Admin server: connect()
    3. Navigate to the Domain Runtime branch: domainRuntime()
    4. Execute the configureSAMLBinding() command: configureSAMLBinding("PARTNER", "PARTNER_TYPE", binding, ssoResponseBinding="httppost")
       • Replace PARTNER with the Partner name
       • Replace PARTNER_TYPE with the Partner type (idp or sp)
       • Replace binding with the binding to be used to send the AuthnRequest and LogoutRequest/LogoutResponse messages (should be httpredirect in most cases; the default): httppost for the HTTP-POST binding, httpredirect for the HTTP-Redirect binding
       • Optionally specify ssoResponseBinding to indicate how the SSO Assertion should be sent back: httppost for the HTTP-POST binding, artifact for the Artifact binding
       An example would be: configureSAMLBinding("AcmeIdP", "idp", "httpredirect", ssoResponseBinding="httppost")
    5. Exit the WLST environment: exit()

    Test

    In this test, OIF/SP is integrated with a remote SAML 2.0 IdP Partner, with the OOTB configuration, which requests HTTP-POST from the IdP to send the SSO Assertion. Based on this setup, when OIF/SP starts a Federation SSO flow, the following SAML 2.0 AuthnRequest would be generated:

        <samlp:AuthnRequest ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" ID="id-E4BOT7lwbYK56lO57dBaqGUFq01WJSjAHiSR60Q4" Version="2.0" IssueInstant="2014-04-01T21:39:14Z" Destination="https://acme.com/saml20/sso">
          <saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://sp.com/oam/fed</saml:Issuer>
          <samlp:NameIDPolicy AllowCreate="true"/>
        </samlp:AuthnRequest>

    In the next article, I will cover the various crypto configuration properties in OIF that are used to affect the Federation SSO exchanges.

    Cheers,
    Damien Carru

  • Installing an exe with Powershell DSC Package resource gets return code 1619

    - by Jay Spang
    I'm trying to use PowerShell DSC's Package resource to install an exe, Perforce's P4V to be specific. Here's my code:

        Configuration PerforceMachine
        {
            Node "SERVERNAME"
            {
                Package P4V
                {
                    Ensure    = "Present"
                    Name      = "Perforce Visual Components"
                    Path      = "\\nas\share\p4vinst64.exe"
                    ProductId = ''
                    Arguments = "/S /V/qn" # args for silent mode
                    LogPath   = "$env:ProgramData\p4v_install.log"
                }
            }
        }

    When running this, this is the error PowerShell gives me:

        PowerShell provider MSFT_PackageResource failed to execute Set-TargetResource functionality with error message: The return code 1619 was not expected. Configuration is likely not correct
            + CategoryInfo          : InvalidOperation: (:) [], CimException
            + FullyQualifiedErrorId : ProviderOperationExecutionFailure
            + PSComputerName        : SERVERNAME

    According to the documentation, return code 1619 means the MSI package couldn't be opened. However, when I manually log in to the machine and run "\\nas\share\p4vinst64.exe /S /V/qn", the install works flawlessly. Does anyone know why this is failing? Alternately, can anyone tell me how to troubleshoot this? I pasted all the error information I got from the terminal, my log file (p4v_install.log) is a 0-byte file, and there are no events in the event viewer. I don't know how to troubleshoot it any further!

    EDIT: I should note that I also tried using the File resource to copy the file locally and then install it from there. Sadly, that met with the same result.
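
    One possible cause worth checking (an assumption, not from the original post): DSC runs as LocalSystem, which typically has no rights to a network share like \\nas\share, and an unreadable package surfaces as 1619. A hedged sketch using the Package resource's Credential property ($ShareCred is a hypothetical PSCredential with read access to the share):

        Package P4V
        {
            Ensure     = "Present"
            Name       = "Perforce Visual Components"
            Path       = "\\nas\share\p4vinst64.exe"
            ProductId  = ''
            Arguments  = "/S /V/qn"
            # Credential is used by the resource to access the remote source
            Credential = $ShareCred
        }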

  • Git + SoA, one repo or many?

    - by parsenome
    Normally, when I start up a new application, I'd create a new git repository for it. That's well accepted and plays nicely with GitHub when I want to share my code. At work, I'm working in a service-oriented architecture. One very common pattern is to add some code to two different applications at the same time, perhaps adding a model with a RESTful interface to one and a web frontend for managing it to another. Using separate git repositories has some warts in this case. Here is what I see as the downsides of separate repositories:

      • I have to commit twice
      • I can't correlate related commits very well
      • There is no single place to go back and trace history; I'd love to be able to bring up all my commits for the day in a single place
      • Forgetting to pull one repo or another is a gotcha

    On the other hand, I've used Perforce a lot, and its one-giant-repository model has lots of warts too. Perforce has features designed to help you with those; git doesn't. Has anyone else run into this situation? How did you handle it? What worked well, and what didn't?

  • fatal: git-http-push-failed (return code 22)

    - by Mariusz
    Hello, that's me again. After having problems establishing a connection to github.com, I now have a problem with the next step: pushing. I should mention that I am a novice with GIT and this whole world of distributed version control systems. I have done git init, then git add *.h and git add *.cpp, but currently git status does not print anything in the "# On branch master" section. Previously it correctly printed the whole list of added files; now this list is gone. Next, I executed:

        git remote add origin https://github.com/mgeeky/disasm.git

    and an error occurred after:

        git push origin master
        Username:
        Password:
        error: Cannot access URL https://github.com/mgeeky/disasm.git/, return code 22
        fatal: git-http-push failed

    What should I do now? I've tried:

        git push origin
        Username:
        Password:
        No refs in common and none specified; doing nothing.
        Perhaps you should specify a branch such as 'master'.
        Everything up-to-date

    But it seems to be okay.
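
    A possible way forward (a sketch, not from the original thread): "git-http-push failed" comes from the old dumb-HTTP push path, so switching the remote to GitHub's SSH endpoint often sidesteps it entirely (this assumes an SSH key is already registered on the GitHub account):

        # repoint origin at the SSH URL and push the local master branch
        git remote set-url origin git@github.com:mgeeky/disasm.git
        git push origin master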

  • Building php-devel package from source (php 5.3)

    - by BajaBob
    I am building PHP 5.3 RPM packages for our custom CentOS 5 yum repo. I am fairly new to building RPMs, to be honest, but I have had moderate success downloading the SRPMS for a given package and repackaging them using the "rpmbuild --rebuild" command. One thing that is throwing me off, though, is how to satisfy the php-devel package. I obviously have the PHP 5.3 source files, as I was able to build my php-common and other packages with them. But I am not sure how to actually build the devel package! From what I understand, I already have most of what I need: the latest PHP 5.3.5 source tarball. However, I am not sure how to build the correct .spec file to satisfy what I need. If you are knowledgeable in this area, would you mind helping a fellow sysadmin out, sharing a spec file or at least giving me some pointers on how to approach it? Thanks much, serverfault community! -BajaBob
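
    For orientation, a hedged sketch of how a -devel subpackage is conventionally declared inside the main spec file (the file paths are illustrative, not taken from the real php.spec): rpmbuild emits one RPM per %package stanza, all built from the same source tarball.

        %package devel
        Summary: Files needed for building PHP extensions
        Group: Development/Libraries
        Requires: %{name} = %{version}-%{release}

        %description devel
        Header files, phpize and php-config, needed to compile PHP extensions.

        %files devel
        /usr/bin/phpize
        /usr/bin/php-config
        /usr/include/php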

  • Git pull auto complete OSX

    - by vodkhang
    Following the instructions on this site, http://denis.tumblr.com/post/71390665/adding-bash-completion-for-git-on-mac-os-x-leopard, I can get git auto-completion working on Mac OS. However, when I type git pull origin ma (for master) and then Tab, it takes a long time for git to auto-complete it to git pull origin master. I think it connects to the server to get the branches, but I am not sure; is there any way to make it faster and only use the branches on the local machine?

        cd /tmp
        git clone git://git.kernel.org/pub/scm/git/git.git
        cd git
        git checkout v`git --version | awk '{print $3}'`
        cp contrib/completion/git-completion.bash ~/.git-completion.bash
        cd ~
        rm -rf /tmp/git
        echo -e "source ~/.git-completion.bash" >> .profile

  • Cant logon to domain over site-to-site vpn

    - by 3molo
    I tied together a branch office and the main office with two Cisco ASAs. The internal networks on either side can communicate with the other: I can ping, use the DC's DNS service, and even join a domain on a new client. I can't, however, log on; I get the "domain controller is not available" error message on the client. I find nothing peculiar in the DC's event logs. Since it's site-to-site (with ping), it's always up, so it should work. No firewall rules (except allow any any) between the two networks (on either side).

        Main site internal net: 10.10.10.0/24
        Branch office net: 10.180.3.0/24

    Am I overlooking something here? Where should I start investigating this?
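
    One hedged thing to check (an assumption, not from the post): domain logons push much larger packets (Kerberos, SMB) than ping does, and site-to-site tunnels often drop full-size frames. From a branch client, test whether unfragmented full-size packets reach the DC:

        ping -f -l 1472 10.10.10.10

    The address 10.10.10.10 is hypothetical; use your DC's IP. On Windows, -f sets the Don't Fragment bit and -l sets the payload size; if this fails while plain ping works, look at tunnel MTU and fragmentation handling on the ASAs.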

  • foswiki: use genPDF extension with topic templates?

    - by Mica
    I have a foswiki installation for keeping ISO and other documents. I would like to create a PDF from each page. How can I create topic templates with different headers and footers for each template? More info: when a user creates a new topic, they can choose a template. I've made several templates for Functional and Programming specs. The functional spec and the programming spec require different document numbers. I would like the software engineers to be able to create a new topic, choose the template, and then generate a PDF from the wiki page, pulling the appropriate document number and some other text into the headers and footers. I am not very familiar with this, and haven't been able to find any examples of doing it. Any help would be appreciated!

  • Agent admitted failure to sign using the key.

    - by Delirium tremens
    The .ssh dir is chmodded 700, id_rsa.pub 600, id_rsa 400. I ran ssh-keygen -t rsa, imported the key to Launchpad, and ran bzr branch lp:unity, but got this error message:

        Agent admitted failure to sign using the key.
        Permission denied (publickey).
        bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.

    auth.log:

        Nov 28 20:23:13 ubuntu sudo: deltrem : TTY=pts/0 ; PWD=/home/deltrem/Documentos/repositories ; USER=root ; COMMAND=/usr/bin/bzr branch lp:unity
        Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session opened for user root by (uid=0)
        Nov 28 20:39:01 ubuntu CRON[2959]: pam_unix(cron:session): session closed for user root
        Nov 28 20:41:04 ubuntu gnome-screensaver-dialog: gkr-pam: unlocked login keyring
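
    "Agent admitted failure to sign" usually means the running SSH agent does not actually hold the key. A hedged first step (an assumption, not from the post) is to load it explicitly:

        ssh-add ~/.ssh/id_rsa   # load the key into the agent
        ssh-add -l              # confirm the key is now listed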

  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently I have the following spec file to create an RPM. The spec file is generated by a Maven plugin to produce an RPM from it. The question is: will I find the files mentioned in the spec file, after the RPM generation, inside the BUILDROOT/SPECS/SOURCES/SRPMS structure?

        %define _unpackaged_files_terminate_build 0
        Name: rpm-1
        Version: 1.0
        Release: 1
        Summary: rpm-1
        License: 2009 my org
        Distribution: My App
        Vendor: my org
        URL: www.my.org
        Group: Application/Collectors
        Packager: my org
        Provides: project
        Requires: /bin/sh
        Requires: jre >= 1.5
        Requires: BASE_PACKAGE
        PreReq: dependency
        Obsoletes: project
        autoprov: yes
        autoreq: yes
        BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot

        %description

        %install
        if [ -e $RPM_BUILD_ROOT ]; then
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT
        else
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT
        fi
        ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2
        ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2
        ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh

        %files
        %defattr(-,myuser,mygroup,-)
        %dir "/usr/myusr/app"
        "/usr/myusr/app2"
        "/tmp/myapp/somefile"
        "/tmp/myapp/somefile2"
        "/usr/myusr/app/lib"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh"
        %dir "/usr/myusr/app/conf"
        %config "/usr/myusr/app/conf/log4j.xml"
        "/usr/myusr/app/conf/log4j.xml.deliver"

        %prep
        echo "hello from prepare"

        %pre -p /bin/sh
        #!/bin/sh
        if [ -s "/etc/init.d/myapp" ]
        then
          /etc/init.d/myapp stop
          rm /etc/init.d/myapp
        fi

        %post
        #!/bin/sh
        #create soft link script to services directory
        ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp
        chmod 555 /etc/init.d/myapp

        %preun
        #!/bin/sh
        #the argument being passed in indicates how many versions will exist
        #during an upgrade, this value will be 1, in which case we do not want to stop
        #the service since the new version will be running once this script is called
        #during an uninstall, the value will be 0, in which case we do want to stop
        #the service and remove the /etc/init.d script.
        if [ "$1" = "0" ]
        then
          if [ -s "/etc/init.d/myapp" ]
          then
            /etc/init.d/myapp stop
            rm /etc/init.d/myapp
          fi
        fi;

        %triggerin -- dependency, dependency1
        echo "hello from install"

        %changelog
        * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk
        - update BuildPreReq to include rep-gtk and rep-gtkgnome
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk
        - 0.27.2
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk
        - added BuildPreReq
        - change name from Sawmill to Sawfish

    The problem I found is that the files (filter.txt in particular) exist after the generation process on an Ubuntu system but not on a SuSE system, which might be caused by different rpm versions? Currently we have an integration test which fails based on the file not existing (filter.txt under a buildroot folder?).

  • How can I find a laptop if it has a different name all over the world?

    - by Mike
    CONUNDRUM: A laptop review in the UK talks about how brilliant the "ASUS ABCDE 55" is, but in America, France, etc there is no such laptop name. In fact it's called "ASUS 12345 AB" - AAARRGH! QUESTION: Is there a way of finding out all the diverse names for the same laptop all over the world? Example: if Samsung create a R2D2500, then what is that spec laptop called in all the other countries (if they release it of course). Or if it's not released, what is their similar spec laptop called in the other countries? I understand that specs may be different, but if I read a review on my trusted UK website, but live in say Australia, I want to be able to find the name of the same laptop in Australia and then check out local places to buy it. So if anyone knows if there is a technique, specific website, or even how to use a company website to find out these annoying name changes I'd really appreciate it.

  • How to consolidate servers with the not-very-strong infrastructure

    - by Sim
    All,

    Situation:

      • We are in the retail industry with about 10 distributors and use Solomon as the standard ERP for all our systems.
      • Each distributor has 1 HQ and 5-10 branches; each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system).
      • Every day, branches have to extract data and send it (via email/Skype) to HQ for data consolidation purposes.
      • When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough. That's why we went with the de-centralized model (each branch got its own server).
      • Now the infrastructure is mature, and we need to consolidate data more quickly (not from branches to HQ to our company, but something like HQ to our company only).

    Goal:

      • We just have Solomon servers in each distributor HQ. All the transactions in branches (retrieved from POS) will be synchronized with the HQ server directly.
      • There is a backup plan just in case the Internet goes down or the HQ server goes down.

    Question:

    With the above, could you guys suggest a model for us? Should we use Terminal Services, or any other solutions? Any watchouts/suggestions? Any good articles to read about this? Thanks a lot.

  • Fault tolerance with a pair of tightly coupled services

    - by cogitor
    I have two tightly coupled services that can run on completely different nodes (e.g. ServiceA and ServiceB). If I start up another replicated copy of both these services for backup purposes (ServiceA-2 and ServiceB-2), what would be the best way of setting up a fault-tolerant distributed system such that, on a fault in either of the tightly coupled services ServiceA or ServiceB, the whole communication goes through the backups ServiceA-2 and ServiceB-2? Overall, all the communication should go either through both services or through their backup replicas.

        |---- Service A ---- Service B
        |
        |---- Service A-2 ---- Service B-2   (backup branch - used only on fault in Service A or B)

    Note that in case Service A goes down, data from Service B would be incorrect (and vice versa). Load balancing between the primary and backup branches is also not feasible.

  • Puppet yum repo - Pull down 2.7.x vs 3.0.x

    - by Mike Purcell
    So a few weeks ago I started on the path to using puppet to automate all the configs/services. At the time I was using the EPEL repo, which installed version 2.6.x. After some reading I was trying to gain access to the flatten method available via the puppet stdlib, and thought it was available by default in the newer 2.7.x version. So I added a puppet repo with the following settings: [puppetlabs] name=Puppet Labs Packages baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs The problem with this, is it installed v3.0.x instead of 2.7.x. And apparently 3.0.x is a major upgrade which was released only a few weeks ago. Obviously I would prefer to use the 2.7.x for the next few months while PuppetLabs fix any defects which will inevitably arise after a major version. So my question is, what setting can I add to the puppet repo config to pull down only the 2.7.x branch and not the 3.0.x branch?
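
    One hedged way to stay on 2.7.x (an assumption based on standard yum behavior, not from the post): keep the repo but mask the 3.x packages with an exclude glob, so yum resolves to the newest 2.7.x instead:

        [puppetlabs]
        name=Puppet Labs Packages
        baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
        # keep the repo, but never offer the 3.x branch
        exclude=puppet-3* puppet-server-3*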

  • Is it still "wrong" to require TLS on incoming SMTP messages

    - by jackweirdy
    According to the STARTTLS Spec Section 5: A publicly-referenced SMTP server MUST NOT require use of the STARTTLS extension in order to deliver mail locally. This rule prevents the STARTTLS extension from damaging the interoperability of the Internet's SMTP infrastructure. A publicly-referenced SMTP server is an SMTP server which runs on port 25 of an Internet host listed in the MX record (or A record if an MX record is not present) for the domain name on the right hand side of an Internet mail address. However, this spec was written in 1999, and considering it's 2014, I'd expect most SMTP clients, servers, and relays to have some kind of implementation of STARTTLS. How much email can I expect to lose if I require TLS for incoming messages?

  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN. Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.) Up to now, branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use GMail for corporate mail.) The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation, as this will inconvenience users and, more importantly, move files off the backed-up, well-maintained (UPS, RAID etc.) network share at head office. Our budget is only in the £100's. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.

  • How Do I Delete Misnamed Tagged Directory Already Committed?

    - by Teno
    I'm new to using SVN and having a hard time figuring out how to delete files uploaded by mistake. What I've done:

    1. Committed the trunk folder by right-clicking and choosing "SVN Commit"
    2. Right-clicked and chose "TortoiseSVN" -> "Branch/Tag"
    3. In the "To path:" section of the "Branch/Tag - Tortoise" window, typed /mydirectory/tags/1.0.11, where 1.0.11 was supposed to be 1.0.1.1
    4. After realizing 1.0.11 was a mistake, to remove the directory, right-clicked on the 1.0.11 folder in Windows and selected "TortoiseSVN" -> "Delete"

    It deleted the folder in Windows but did not delete the folder on the remote server. According to this page, http://stackoverflow.com/questions/2092344/how-do-i-delete-a-wrongly-tagged-directory-in-svn, a command can be used, and I tried typing svn in the command prompt window, but it gives "svn is not recognized as an internal or external command". This should be a very basic question, but I could not find relevant pages. Some pages suggest using revert, but I've already committed 1.0.1.1, so I'm afraid revert would cause the newest one to be deleted. Thanks for your information.
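
    For reference, the command-line equivalent (a sketch; it assumes a command-line Subversion client is installed and on PATH, and the repository URL shown is hypothetical) deletes the mistagged directory directly in the repository as a new revision, with no working copy needed:

        svn delete -m "remove mistagged 1.0.11" http://svnserver/repo/mydirectory/tags/1.0.11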

  • How best to integrate PPA into Debian?

    - by eicos
    I'm working with ZFS on Linux, on my Debian squeeze server. I've found a useful package in an Ubuntu PPA, apparently by one of the ZoL developers, and I would like to integrate it into my package system. However, I am really having a terrible time doing this. It seems like it would be possible if I upgraded my system to the testing branch, but I'd prefer not to do this for obvious reasons. So, what is the One True Way to do this? Or, what is a passable way to do this, i.e. one that does not involve an ice nine-like assimilation of my entire system to testing branch? Edit: Silly question. I clicked the little green "technical information about this package" on launchpad and all was revealed.

  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
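
    For concreteness, a minimal sketch of the $TMOUT enforcement the question mentions (assuming a Bourne-style login shell; the file name is illustrative):

        # /etc/profile.d/tmout.sh
        # Idle shells exit after 15 minutes; readonly stops users unsetting it.
        TMOUT=900
        readonly TMOUT
        export TMOUT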

  • Using rspec to check creation of template

    - by Brian
    I am trying to use rspec with puppet to check the generation of a configuration file from an .erb file. However, I get the error:

        1) customizations should generate valid logstash.conf
           Failure/Error: content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
           ArgumentError:
             wrong number of arguments (0 for 1)
           # ./spec/classes/logstash_spec.rb:29:in `catalogue'
           # ./spec/classes/logstash_spec.rb:29

    And the logstash_spec.rb:

        describe "customizations" do
          let(:params) { {:template => "profiles/logstash/output_broker.erb", :options => {'opt_a' => 'value_a' } } }

          it 'should generate valid logstash.conf' do
            content = catalogue.resource('file', 'logstash.conf').send(:parameters)[:content]
            content.should match('logstash')
          end
        end
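
    A hedged alternative (assuming this is rspec-puppet): its built-in matchers avoid reaching into the catalogue object at all, which sidesteps the arity problem:

        describe "customizations" do
          let(:params) do
            { :template => "profiles/logstash/output_broker.erb",
              :options  => { 'opt_a' => 'value_a' } }
          end

          it 'should generate valid logstash.conf' do
            # contain_file(...).with_content(...) is the rspec-puppet matcher
            should contain_file('logstash.conf').with_content(/logstash/)
          end
        end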
