Search Results

Search found 23897 results on 956 pages for 'installation configuration'.

  • NHibernate 2.1 and MySQL 5 - InvalidCastException on Setup

    - by Nash
    Hello there, I am trying to use NHibernate with Spring.Net and MySQL 5. However, when setting up the connection and creating the SessionFactory object, I get this InvalidCastException: NHibernate seems to cast MySql.Data.MySqlClient.MySqlConnection to System.Data.Common.DbConnection, which causes the exception. (The exception text below is translated from German.)

      System.InvalidCastException was unhandled.
      Message="Unable to cast object of type 'MySql.Data.MySqlClient.MySqlConnection' to type 'System.Data.Common.DbConnection'."
      Source="NHibernate"
      StackTrace:
        at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SuppliedConnectionProviderConnectionHelper.cs:line 25
        at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 43
        at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 17
        at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) in c:\CSharp\NH\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:line 169
        at NHibernate.Cfg.Configuration.BuildSessionFactory() in c:\CSharp\NH\nhibernate\src\NHibernate\Cfg\Configuration.cs:line 1090
        at OrmTest.Program.Main(String[] args) in C:\Users\Max\Documents\Visual Studio 2008\Projects\OrmTest\OrmTest\Program.cs:line 24
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
      InnerException:

    I am using the programmatic setup approach in order to get a quick NHibernate setup. Here is the setup code:

      Configuration config = new Configuration();
      Dictionary<string, string> props = new Dictionary<string, string>();
      props.Add("dialect", "NHibernate.Dialect.MySQL5Dialect");
      props.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
      props.Add("connection.driver_class", "NHibernate.Driver.MySqlDataDriver");
      props.Add("connection.connection_string", "Server=localhost;Database=orm_test;User ID=root;Password=password");
      props.Add("proxyfactory.factory_class", "NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring");
      config.AddProperties(props);
      config.AddFile("Person.hbm.xml");

      ISessionFactory factory = config.BuildSessionFactory();
      ISession session = factory.OpenSession();

    Is something missing? I downloaded the current MySQL Connector from the MySQL website.
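
    As a first check, it may help to confirm in isolation that the Connector/NET build that actually gets loaded derives its connection type from DbConnection - only ADO.NET 2.0-era builds do, and the cast NHibernate performs fails otherwise. A minimal sketch (not from the original post):

      using System;
      using System.Data.Common;
      using MySql.Data.MySqlClient;

      class ConnectorCheck
      {
          static void Main()
          {
              // NHibernate's SchemaMetadataUpdater casts the driver's connection
              // to DbConnection; that only works if the loaded MySql.Data build
              // targets ADO.NET 2.0 or later.
              Type connType = typeof(MySqlConnection);
              Console.WriteLine("Loaded assembly: " + connType.Assembly.FullName);
              Console.WriteLine("Derives from DbConnection: " +
                  typeof(DbConnection).IsAssignableFrom(connType));
          }
      }

    If this prints False, the runtime is picking up an old Connector/NET assembly, regardless of which version was downloaded.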

  • MSBuild Community Tasks can't see msbuild in cmd

    - by phenevo
    Hi, I have a WinForms project. app.config:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="MyClient.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
          </sectionGroup>
        </configSections>
        <applicationSettings>
          <MyClient.Properties.Settings>
            <setting name="MyClient_MyService_MyService" serializeAs="String">
              <value>SomeUniqueKeyWithAGoodName/server/myService.asmx</value>
            </setting>
          </MyClient.Properties.Settings>
        </applicationSettings>
      </configuration>

    customized.targets:

      <Project ToolsVersion="3.5" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <BuildEnvironment>DEV</BuildEnvironment>
        </PropertyGroup>
        <Choose>
          <When Condition=" '$(BuildEnvironment)' == 'DEV' ">
            <PropertyGroup>
              <BaseUrlWebServices>http://tools.productionServer.pl</BaseUrlWebServices>
              <PublishDir>C:\Documents and Settings\myName\Desktop\Project\TestMsBuild\</PublishDir>
            </PropertyGroup>
          </When>
          <When Condition=" '$(BuildEnvironment)' == 'QA' ">
            <PropertyGroup>
              <BaseUrlWebServices>http://tools.testServer.pl</BaseUrlWebServices>
              <PublishDir>C:\Documents and Settings\myName\Desktop\Project\TestMsBuild2\</PublishDir>
            </PropertyGroup>
          </When>
        </Choose>
      </Project>

    and publishQA.bat (this file is in the directory of the project):

      @ECHO OFF
      msbuild /t:Publish /p:Configuration=Release /p:BuildEnvironment=QA /p:ApplicationVersion=1.2.3.5
      pause

    When I run this bat file I get an error in cmd: "@@echo is not recognised...". When I start the project everything is fine, but when I try to use any method from the web service I get an error about a wrong URI. The correct URI for QA is: http://tools.testServer.pl/server/myService.asmx Any ideas?
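
    One way to verify which URL the deployed client actually ended up with (and therefore whether the QA BaseUrlWebServices value was ever substituted into the published config) is to dump the setting at runtime. A hedged sketch, reusing the section and setting names from the app.config above; the exe path is a hypothetical placeholder:

      using System;
      using System.Configuration;

      class SettingsProbe
      {
          static void Main()
          {
              // Open the deployed application's own config file explicitly
              // (path is an assumption; point it at the published exe).
              Configuration cfg = ConfigurationManager.OpenExeConfiguration(
                  @"C:\publish\MyClient.exe");

              var group = cfg.GetSectionGroup("applicationSettings");
              var section = (ClientSettingsSection)group.Sections["MyClient.Properties.Settings"];
              SettingElement setting = section.Settings.Get("MyClient_MyService_MyService");

              // Prints the service URL the client will actually use.
              Console.WriteLine(setting.Value.ValueXml.InnerText);
          }
      }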

  • Question about the code of the symfony backend

    - by user248959
    Hi, this is the index action and template generated in the backend for the model "coche".

      public function executeIndex(sfWebRequest $request)
      {
        // sorting
        if ($request->getParameter('sort') && $this->isValidSortColumn($request->getParameter('sort')))
        {
          $this->setSort(array($request->getParameter('sort'), $request->getParameter('sort_type')));
        }

        // pager
        if ($request->getParameter('page'))
        {
          $this->setPage($request->getParameter('page'));
        }

        $this->pager = $this->getPager();
        $this->sort = $this->getSort();
      }

    This is the index template:

      <?php use_helper('I18N', 'Date') ?>
      <?php include_partial('coche/assets') ?>
      <div id="sf_admin_container">
        <h1><?php echo __('Coche List', array(), 'messages') ?></h1>
        <?php include_partial('coche/flashes') ?>
        <div id="sf_admin_header">
          <?php include_partial('coche/list_header', array('pager' => $pager)) ?>
        </div>
        <div id="sf_admin_bar">
          <?php include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration)) ?>
        </div>
        <div id="sf_admin_content">
          <form action="<?php echo url_for('coche_coche_collection', array('action' => 'batch')) ?>" method="post">
            <?php include_partial('coche/list', array('pager' => $pager, 'sort' => $sort, 'helper' => $helper)) ?>
            <ul class="sf_admin_actions">
              <?php include_partial('coche/list_batch_actions', array('helper' => $helper)) ?>
              <?php include_partial('coche/list_actions', array('helper' => $helper)) ?>
            </ul>
          </form>
        </div>
        <div id="sf_admin_footer">
          <?php include_partial('coche/list_footer', array('pager' => $pager)) ?>
        </div>
      </div>

    In the template there is this line:

      include_partial('coche/filters', array('form' => $filters, 'configuration' => $configuration))

    but I cannot find the variables $this->filters and $this->configuration in the index action. How is that possible? Javi

  • NHibernate subclass in code

    - by Antonio Nakic Alfirevic
    I would like to set up table-per-class-hierarchy inheritance in NHibernate through code. Everything else is set in XML mapping files except the subclasses. If I set up the subclasses in XML all is well, but not from code. This is the code I use - my concrete subclass never gets created :(

      // the call
      NHibernate.Cfg.Configuration config = new NHibernate.Cfg.Configuration();
      SetSubclass(config, typeof(TAction), typeof(tActionSub1), "Procedure");

      // the method
      public static void SetSubclass(Configuration configuration, Type baseClass, Type subClass, string discriminatorValue)
      {
          PersistentClass persBaseClass = configuration.ClassMappings.Where(cm => cm.MappedClass == baseClass).Single();

          SingleTableSubclass persSubClass = new SingleTableSubclass(persBaseClass);
          persSubClass.ClassName = subClass.AssemblyQualifiedName;
          persSubClass.DiscriminatorValue = discriminatorValue;
          persSubClass.EntityPersisterClass = typeof(SingleTableEntityPersister);
          persSubClass.ProxyInterfaceName = (subClass).AssemblyQualifiedName;
          persSubClass.NodeName = subClass.Name;
          persSubClass.EntityName = subClass.FullName;

          persBaseClass.AddSubclass(persSubClass);
      }

    The XML mapping looks like this:

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Riz.Pcm.Domain.BusinessObjects" assembly="Riz.Pcm.Domain">
        <class name="Riz.Pcm.Domain.BusinessObjects.TAction, Riz.Pcm.Domain" table="dbo.tAction" lazy="true">
          <id name="Id" column="ID">
            <generator class="guid" />
          </id>
          <discriminator type="String" formula="(select jt.Name from TJobType jt where jt.Id=JobTypeId)" insert="true" force="false"/>
          <many-to-one name="Session" column="SessionID" class="TSession" />
          <property name="Order" column="Order1" />
          <property name="ProcessStart" column="ProcessStart" />
          <property name="ProcessEnd" column="ProcessEnd" />
          <property name="Status" column="Status" />
          <many-to-one name="JobType" column="JobTypeID" class="TJobType" />
          <many-to-one name="Unit" column="UnitID" class="TUnit" />
          <bag name="TActionProperties" lazy="true" cascade="all-delete-orphan" inverse="true">
            <key column="ActionID"></key>
            <one-to-many class="TActionProperty"></one-to-many>
          </bag>
          <!--<subclass name="Riz.Pcm.Domain.tActionSub" discriminator-value="ZPower"></subclass>-->
        </class>
      </hibernate-mapping>

    What am I doing wrong? I can't find any examples on Google :(
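
    For reference, a minimal sketch of the call order this approach depends on (the mapping file name here is an assumption, and this is a sketch rather than a confirmed fix): the base class mapping must already be loaded when SetSubclass runs, and the subclass must be registered before BuildSessionFactory, since the factory compiles the mapping model once:

      NHibernate.Cfg.Configuration config = new NHibernate.Cfg.Configuration();
      config.Configure();                  // read hibernate.cfg.xml
      config.AddFile("TAction.hbm.xml");   // hypothetical file name: the base class
                                           // mapping must be loaded first, otherwise
                                           // ClassMappings is empty and Single() throws
      SetSubclass(config, typeof(TAction), typeof(tActionSub1), "Procedure");

      // Build only after all subclasses are registered; anything added to the
      // mapping model after this point is never seen by the session factory.
      ISessionFactory factory = config.BuildSessionFactory();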

  • Exchange Web Services C# code to send email from home

    - by KK
    Is it possible to write C# code as below and send email using my home network? I have a valid user name and password on that Exchange server. Is there any configuration that I can set to achieve this? By the way, this code works when I run it within the office network - I want it to work when run from any network. Thank you for your help guys.

      String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"];
      String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"];
      String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"];
      String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"];

      ExchangeServiceBinding esb = new ExchangeServiceBinding();
      esb.Timeout = 1800000;
      esb.AllowAutoRedirect = true;
      esb.UseDefaultCredentials = false;
      esb.Credentials = new NetworkCredential(cEmail, cPassword);
      esb.Url = cMSExchangeWebServiceURL;

      ServicePointManager.ServerCertificateValidationCallback +=
          delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
          {
              return true;
          };

      // Create a CreateItem request object
      CreateItemType request = new CreateItemType();

      // Set up the request:
      // Indicate that we only want to send the message. No copy will be saved.
      request.MessageDisposition = MessageDispositionType.SendOnly;
      request.MessageDispositionSpecified = true;

      // Create a message object and set its properties
      MessageType message = new MessageType();
      message.Subject = subject;
      message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType();
      message.Body.BodyType1 = BodyTypeType.HTML;
      message.Body.Value = body;
      message.ToRecipients = new EmailAddressType[3];
      message.ToRecipients[0] = new EmailAddressType();
      //message.ToRecipients[1] = new EmailAddressType();
      //message.ToRecipients[2] = new EmailAddressType();
      message.ToRecipients[0].EmailAddress = "[email protected]";
      message.ToRecipients[0].RoutingType = "SMTP";
      //message.CcRecipients = new EmailAddressType[1];
      //message.CcRecipients[0] = new EmailAddressType();
      //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString();
      //message.CcRecipients[0].RoutingType = "SMTP";
      // There are some more properties in the MessageType object;
      // you can set them all according to your requirements

      // Construct the array of items to send
      request.Items = new NonEmptyArrayOfAllItemsType();
      request.Items.Items = new ItemType[1];
      request.Items.Items[0] = message;

      // Call the CreateItem EWS method.
      CreateItemResponseType response = esb.CreateItem(request);
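
    For reference, a small connectivity probe along these lines can tell whether the configured EWS URL is even reachable from outside the office; if it is an internal-only address, no client settings will make the code above work from home. This is a sketch reusing the MSExchangeWebServiceURL app setting from the post; everything else is an assumption:

      using System;
      using System.Net;

      class EwsReachabilityProbe
      {
          static void Main()
          {
              string url = System.Configuration.ConfigurationManager
                  .AppSettings["MSExchangeWebServiceURL"];

              HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
              req.Method = "GET";
              req.Credentials = CredentialCache.DefaultCredentials;
              try
              {
                  using (var resp = (HttpWebResponse)req.GetResponse())
                      Console.WriteLine("Reachable: " + resp.StatusCode);
              }
              catch (WebException ex)
              {
                  // A 401 here still proves network reachability; a timeout or
                  // name-resolution failure means the host is not visible from
                  // this network and an externally published URL (or VPN) is needed.
                  Console.WriteLine("Request failed: " + ex.Status + " - " + ex.Message);
              }
          }
      }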

  • Why does every thread in my application use a different Hibernate session?

    - by Ittai
    Hi, I have a web application which uses Hibernate, and for some reason every thread (HTTP request or other threads related to queueing) uses a different session. I've implemented a HibernateSessionFactory class which looks like this:

      public class HibernateSessionFactory {
          private static final ThreadLocal<Session> threadLocal = new ThreadLocal<Session>();
          private static Configuration configuration = new AnnotationConfiguration();
          private static org.hibernate.SessionFactory sessionFactory;

          static {
              try {
                  configuration.configure(configFile);
                  sessionFactory = configuration.buildSessionFactory();
              } catch (Exception e) {}
          }

          private HibernateSessionFactory() {}

          public static Session getSession() throws HibernateException {
              Session session = (Session) threadLocal.get();
              if (session == null || !session.isOpen()) {
                  if (sessionFactory == null) {
                      rebuildSessionFactory(); // This method basically does what the static init block does
                  }
                  session = (sessionFactory != null) ? sessionFactory.openSession() : null;
                  threadLocal.set(session);
              }
              return session;
          }

          // More non-relevant methods here.

    Now from my testing it seems that the threadLocal member is indeed initialized only once, when the class is first loaded by the JVM, but for some reason when different threads access the getSession() method they use different sessions. When a thread first accesses this class, (Session) threadLocal.get() will return null, but as expected all other access requests will yield the same session. I'm not sure how this can be happening, as the threadLocal variable is final and the method threadLocal.set(session) is only used in the above context (which I'm 99.9% sure has to yield a non-null session, as I would have encountered a NullPointerException in a different part of my app). I'm not sure this is relevant, but these are the main parts of my hibernate.cfg.xml file:

      <hibernate-configuration>
        <session-factory>
          <property name="connection.url">someURL</property>
          <property name="connection.driver_class">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
          <property name="dialect">org.hibernate.dialect.SQLServerDialect</property>
          <property name="hibernate.connection.isolation">1</property>
          <property name="hibernate.connection.username">User</property>
          <property name="hibernate.connection.password">Password</property>
          <property name="hibernate.connection.pool_size">10</property>
          <property name="show_sql">false</property>
          <property name="current_session_context_class">thread</property>
          <property name="hibernate.hbm2ddl.auto">update</property>
          <property name="hibernate.cache.use_second_level_cache">false</property>
          <property name="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
          <!-- Mapping files -->

    I'd appreciate any help granted, and of course if anyone has any questions I'd be happy to clarify. Ittai
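
    For context, one session per thread is exactly what a ThreadLocal-backed factory produces by design: each thread that calls get() sees its own independent slot. A quick illustration of that behavior (sketched in C#, whose ThreadLocal<T> mirrors Java's ThreadLocal semantics; not taken from the post):

      using System;
      using System.Threading;

      class ThreadLocalDemo
      {
          // Each thread that reads Value gets its own independently
          // initialized slot - the same mechanism that gives each request
          // thread its own Hibernate session in the factory above.
          static readonly ThreadLocal<string> Slot = new ThreadLocal<string>(
              () => "session-for-thread-" + Thread.CurrentThread.ManagedThreadId);

          static void Main()
          {
              for (int i = 0; i < 3; i++)
                  new Thread(() => Console.WriteLine(Slot.Value)).Start();
              // Prints three different values - one per thread.
          }
      }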

  • Id property not populated

    - by fingers
    I have an identity mapping like so:

      Id(x => x.GuidId).Column("GuidId")
          .GeneratedBy.GuidComb().UnsavedValue(Guid.Empty);

    When I retrieve an object from the database, the GuidId property of my object is Guid.Empty, not the actual Guid (the property in the class is of type System.Guid). However, all of the other properties in the object are populated just fine. The database field's data type (SQL Server 2005) is uniqueidentifier, and it is marked as RowGuid. The application that is connecting to the database is a VB.NET Web Site project (not a "Web Application" or "MVC Web Application" - just a regular "Web Site" project). I open the NHibernate session through a custom HttpModule. Here is the HttpModule:

      public class NHibernateModule : System.Web.IHttpModule
      {
          public static ISessionFactory SessionFactory;
          public static ISession Session;
          private static FluentConfiguration Configuration;

          static NHibernateModule()
          {
              if (Configuration == null)
              {
                  string connectionString = cfg.ConfigurationManager.ConnectionStrings["myDatabase"].ConnectionString;
                  Configuration = Fluently.Configure()
                      .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString)))
                      .ExposeConfiguration(c => c.Properties.Add("current_session_context_class", "web"))
                      .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadMap>().ExportTo("C:\\Mappings"));
              }
              SessionFactory = Configuration.BuildSessionFactory();
          }

          public void Init(HttpApplication context)
          {
              context.BeginRequest += delegate
              {
                  Session = SessionFactory.OpenSession();
                  CurrentSessionContext.Bind(Session);
              };
              context.EndRequest += delegate
              {
                  CurrentSessionContext.Unbind(SessionFactory);
              };
          }

          public void Dispose()
          {
              Session.Dispose();
          }
      }

    The strangest part of all is that from my unit test project, the GuidId property is returned as I would expect. I even rigged it to go for the exact row in the exact database the web site was hitting. The only differences I can think of between the two projects are:

      1. The unit test project is in C#
      2. Something with the way the session is managed between the HttpModule and my unit tests

    The configuration for the unit tests is as follows:

      Fluently.Configure()
          .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString)))
          .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadDetailMap>());

    I am fresh out of ideas. Any help would be greatly appreciated. Thanks
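
    One way to narrow this down is to read the row's uniqueidentifier directly with ADO.NET and compare it against what NHibernate hydrates. A sketch, where the connection string name comes from the HttpModule above but the table and column names are assumptions:

      using System;
      using System.Data.SqlClient;

      class GuidProbe
      {
          static void Main()
          {
              string connectionString = System.Configuration.ConfigurationManager
                  .ConnectionStrings["myDatabase"].ConnectionString;

              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(
                  "SELECT TOP 1 GuidId FROM Lead", conn)) // hypothetical table name
              {
                  conn.Open();
                  // If this prints a real Guid while NHibernate returns Guid.Empty,
                  // the data is fine and the mapping or property access (for
                  // example, the VB.NET property not matching the mapped name)
                  // is the likely culprit.
                  Console.WriteLine((Guid)cmd.ExecuteScalar());
              }
          }
      }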

  • C# Windows Service Multiple Config Files

    - by Goober
    Quick question

    Is it possible to have more than one config file in a Windows service? Or is there some way I can merge them at run time?

    Scenario

    Currently I have two config files containing the contents below. After I build and install my Windows service, I can't get my custom XML parser class to read the content, because it keeps pointing to the wrong config file - even though I am using a few lines of code to ensure it gets the right name of the config file I need to access.

    ALSO: when I navigate to the folder in which the service is installed, there is only a sign of the normal App.config file, and no sign of the custom config file. (I have even set the normal one's properties to "Do Not Copy" and the custom one's properties to "Copy Always".)

    Config file determination code:

      string settingsFile = String.Format("{0}.exe.config",
          System.AppDomain.CurrentDomain.BaseDirectory + Assembly.GetExecutingAssembly().GetName().Name);

    CUSTOM CONFIG file:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <servers>
          <SV066930>
            <add name="Name" value="SV066930" />
            <processes>
              <SimonTest1>
                <add name="ProcessName" value="notepad.exe" />
                <add name="CommandLine" value="C:\\WINDOWS\\system32\\notepad.exe C:\\WINDOWS\\Profiles\\TA2TOF1\\Desktop\\SimonTest1.txt" />
              </SimonTest1>
            </processes>
          </SV066930>
        </servers>
      </configuration>

    NORMAL APP.CONFIG file:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx" />
        </configSections>
        <connectionStrings>
          <add name="DB" connectionString="Data Source=etc......" />
        </connectionStrings>
      </configuration>
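
    For what it's worth, a sketch of two ways to load a second file explicitly rather than relying on the AppDomain's default config (the file names here are assumptions). Note that services start with System32 as their working directory, so the path must be anchored to BaseDirectory:

      using System;
      using System.Configuration;
      using System.IO;
      using System.Xml;

      class ServiceConfigHelper
      {
          // Option 1: a second file that follows the standard .config schema can
          // be opened explicitly, bypassing the AppDomain's default file.
          static Configuration OpenNamedConfig(string fileName)
          {
              var map = new ExeConfigurationFileMap
              {
                  ExeConfigFilename = Path.Combine(
                      AppDomain.CurrentDomain.BaseDirectory, fileName)
              };
              return ConfigurationManager.OpenMappedExeConfiguration(
                  map, ConfigurationUserLevel.None);
          }

          // Option 2: a free-form XML file like the <servers> document above has
          // no registered section handlers, so it is simpler to load it directly.
          static XmlDocument LoadCustomXml(string fileName)
          {
              var doc = new XmlDocument();
              doc.Load(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName));
              return doc;
          }
      }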

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams, where each team is responsible for a different part of the project. They have all set up their own C# projects and solutions, with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory.

    The problem I have encountered, though, is that I have found only one way to make all projects build into the same output directory: modifying the configurations of all of them. That is what we would like to avoid. We would prefer that all these projects had no knowledge of this "global" solution. Each team must retain the ability to work just with their own sub-solution.

    One possible workaround is to create a special configuration for all projects just for this global solution, but that could create extra problems, since you would then have to constantly keep these configuration settings in sync with the regular ones used by each team. The last thing we want is to spend hours trying to figure out why something doesn't work when building under the global solution, just because of some checkbox that developers have checked in their own configuration but forgot to check in the global one.

    So, to simplify: we need some sort of output directory setting or post-build event that is only present when building from that global, all-inclusive solution. Is there any way to achieve this without changing anything in the projects' configurations?

    Update 1: Some extra details I guess I need to mention. We need this global solution to be as close as possible to what the end user gets when installing our application, since we intend to use it to debug the entire application when we need to figure out which part isn't working before sending a bug to the team responsible for that part. This means that when building under the global solution, the output directory hierarchy should be the same as it would be in Program Files after installation. So if, for example, we have a Program Files/MyApplication/Addins folder which contains all the addins developed by different teams, we need the global solution to copy the binaries from the addin projects and place them in the output directory accordingly. The thing is, the team developing an addin doesn't necessarily know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* web services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, finding the proper interfaces to make that happen is not easy, and is for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically, I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:

      <soapenv:Header>
        <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            <wsse:Username>TeStUsErNaMe1</wsse:Username>
            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
            <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
            <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
          </wsse:UsernameToken>
        </wsse:Security>
      </soapenv:Header>

    Specifically, the Nonce and Created keys are what WCF doesn't create or have built-in formatting for.

    Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, so what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values.

    Basic ServiceUtil Import - not much Luck

    The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="RealTimeOnlineSoapBinding">
                <security mode="Transport" />
              </binding>
              <binding name="RealTimeOnlineSoapBinding1" />
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
                      binding="basicHttpBinding"
                      bindingConfiguration="RealTimeOnlineSoapBinding"
                      contract="RealTimeOnline.RealTimeOnline"
                      name="RealTimeOnline" />
          </client>
        </system.serviceModel>
      </configuration>

    If I run this as is, using code like this:

      var client = new RealTimeOnlineClient();
      client.ClientCredentials.UserName.UserName = "TheUsername";
      client.ClientCredentials.UserName.Password = "ThePassword";

    … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport-level security to be applied, rather than message-level security. To fix this so that a WS-Security message header is sent, the security mode can be changed to:

      <security mode="TransportWithMessageCredential" />

    Now if I re-run, I at least get a WS-Security header, which looks like this:

      <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
        <s:Header>
          <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
            <u:Timestamp u:Id="_0">
              <u:Created>2012-11-24T02:55:18.011Z</u:Created>
              <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
            </u:Timestamp>
            <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
              <o:Username>TheUserName</o:Username>
              <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
            </o:UsernameToken>
          </o:Security>
        </s:Header>

    Closer! Now the WS-Security header is there, along with a timestamp field (which might not be accepted by some services expecting WS-Security), but there's no Nonce or Created timestamp as required by my original service.

    Using a CustomBinding instead

    My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically, I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

      <?xml version="1.0"?>
      <configuration>
        <system.serviceModel>
          <bindings>
            <customBinding>
              <binding name="CustomSoapBinding">
                <security includeTimestamp="false"
                          authenticationMode="UserNameOverTransport"
                          defaultAlgorithmSuite="Basic256"
                          requireDerivedKeys="false"
                          messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
                </security>
                <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
                <httpsTransport maxReceivedMessageSize="2000000000"/>
              </binding>
            </customBinding>
          </bindings>
          <client>
            <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                      binding="customBinding"
                      bindingConfiguration="CustomSoapBinding"
                      contract="RealTimeOnline.RealTimeOnline"
                      name="RealTimeOnline" />
          </client>
        </system.serviceModel>
        <startup>
          <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
        </startup>
      </configuration>

    This ends up creating a cleaner header that's missing the timestamp field, which can cause some services problems. The WS-Security header output generated with the above looks like this:

      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
            <o:Username>TheUsername</o:Username>
            <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>

    This is closer, as it includes only the username and password. The key here is the protocol for WS-Security:

      messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

    which explicitly specifies the protocol version. There are several variants of this specification, but unfortunately none of them seem to support the nonce. This protocol does allow for optional omission of the Nonce and Created timestamp (which effectively makes those keys optional). With some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there.

    Creating Custom ClientCredentials

    As it turns out, WCF doesn't have support for the digest Nonce as part of WS-Security, so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research on this trying to find workarounds, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

      http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
      http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

    The latter forum message is the more useful of the two (the last message on the thread in particular), and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively. In order for this to work, a number of classes have to be overridden:

      ClientCredentials
      ClientCredentialsSecurityTokenManager
      WSSecurityTokenSerializer

    The idea is that we need to create a custom ClientCredentials class to hold the custom properties, so they can be set from the UI or via configuration settings. The TokenManager and TokenSerializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization. Here are the three classes required and their full implementations:

      public class CustomCredentials : ClientCredentials
      {
          public CustomCredentials()
          { }

          protected CustomCredentials(CustomCredentials cc)
              : base(cc)
          { }

          public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
          {
              return new CustomSecurityTokenManager(this);
          }

          protected override ClientCredentials CloneCore()
          {
              return new CustomCredentials(this);
          }
      }

      public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
      {
          public CustomSecurityTokenManager(CustomCredentials cred)
              : base(cred)
          { }

          public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
          {
              return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
          }
      }

      public class CustomTokenSerializer : WSSecurityTokenSerializer
      {
          public CustomTokenSerializer(SecurityVersion sv)
              : base(sv)
          { }

          protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
          {
              UserNameSecurityToken userToken = token as UserNameSecurityToken;

              string tokennamespace = "o";
              DateTime created = DateTime.Now;
              string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

              // unique Nonce value - encode with SHA-1 for 'randomness'
              // in theory the nonce could just be the GUID by itself
              string phrase = Guid.NewGuid().ToString();
              var nonce = GetSHA1String(phrase);

              // in this case password is plain text
              // for digest mode password needs to be encoded as:
              // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
              // and profile needs to change to
              //string password = GetSHA1String(nonce + createdStr + userToken.Password);
              string password = userToken.Password;

              writer.WriteRaw(string.Format(
                  "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
                  "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
                  "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" +
                  "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
                  "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
                  tokennamespace));
          }

          protected string GetSHA1String(string phrase)
          {
              SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
              byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
              return Convert.ToBase64String(hashedDataBytes);
          }
      }

    Realistically, only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation.

    Per spec, the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO. To satisfy even that aspect, though, I added the SHA-1 hashing and base64 encoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between, but that really didn't accomplish anything except extra overhead.

    The header output generated from this looks like this:

      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            <o:Username>TheUsername</o:Username>
            <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
            <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
            <u:Created>2012-11-23T07:10:04.670Z</u:Created>
          </o:UsernameToken>
        </o:Security>
      </s:Header>

    which is exactly as it should be.

    Password Digest?

    In my case the password is passed in plain text over an SSL connection, so there's no digest required, and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest-encoded password, things are a little bit trickier. The password type namespace needs to change to:

      http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

    and then the password value needs to be encoded. The format for password digest encoding is this:

      Base64(SHA-1(Nonce + Created + Password))

    and it can be handled in the code above with this line (commented in the snippet above):

      string password = GetSHA1String(nonce + createdStr + userToken.Password);

    The entire WriteTokenCore method for digest code looks like this:

      protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
      {
          UserNameSecurityToken userToken = token as UserNameSecurityToken;

          string tokennamespace = "o";
          DateTime created = DateTime.Now;
          string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ");

          // unique Nonce value - encode with SHA-1 for 'randomness'
          // in theory the nonce could just be the GUID by itself
          string phrase = Guid.NewGuid().ToString();
          var nonce = GetSHA1String(phrase);

          string password = GetSHA1String(nonce + createdStr + userToken.Password);

          writer.WriteRaw(string.Format(
              "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
              "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
              "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" +
              "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
              "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
              tokennamespace));
      }

    I had no service to connect to to try out digest auth - if you end up needing it and get it to work, please drop a comment…

    How to use the custom Credentials

    The easiest way to use the custom credentials is to create the client in code. Here's a factory method I use to create an instance of my service client:

      public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password)
      {
          if (string.IsNullOrEmpty(url))
              url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

          CustomBinding binding = new CustomBinding();

          var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
          security.IncludeTimestamp = false;
          security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
          security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

          var encoding = new TextMessageEncodingBindingElement();
          encoding.MessageVersion = MessageVersion.Soap11;

          var transport = new HttpsTransportBindingElement();
          transport.MaxReceivedMessageSize = 20000000; // 20 megs

          binding.Elements.Add(security);
          binding.Elements.Add(encoding);
          binding.Elements.Add(transport);

          RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url));

          // to use full client credential with Nonce uncomment this code:
          // it looks like this might not be required - the service seems to work without it
          client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
          client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

          client.ClientCredentials.UserName.UserName = username;
          client.ClientCredentials.UserName.Password = password;

          return client;
      }

    This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way it has to be added.

    Summary

    It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts and articles while I was trying to find a solution, I found that this feature was omitted by design, as this protocol is considered insecure. While I agree that plain-text passwords are rarely a good idea even if they go over a secured SSL connection, as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. There are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

    Again I marvel at WCF and its breadth of support for protocol features in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least somebody very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content. I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man, am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services.

  • HP SmartArray P400: How to repair failed logical drive?

    - by TegtmeierDE
    I have an HP server with a SmartArray P400 controller (incl. 256 MB cache/battery backup) with a logical drive whose replaced failed physical drive does not rebuild. This is how it looked when I detected the error:

      ~# /usr/sbin/hpacucli ctrl slot=0 show config

      Smart Array P400 in Slot 0 (Embedded)    (sn: XXXX)

         array A (SATA, Unused Space: 0 MB)
            logicaldrive 1 (698.6 GB, RAID 1, OK)
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)

         array B (SATA, Unused Space: 0 MB)
            logicaldrive 2 (2.7 TB, RAID 5, Failed)
            physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
            physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
            physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
            physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, Failed)
            physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)

         unassigned
            physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SATA, 750 GB, OK)
      ~#

    I thought that I had drive 2I:1:8 configured as a spare for array A and array B, but it seems this was not the case :-(. I noticed the problem due to I/O errors on the host, even though only one physical drive of the RAID 5 has failed. Does someone know why this could happen? The logical drive should go into "Degraded" mode but still be fully accessible from the host OS!?

    I first tried to add the unassigned drive 2I:1:8 as a spare to logical drive 2, but this was not possible:

      ~# /usr/sbin/hpacucli ctrl slot=0 array B add spares=2I:1:8
      Error: This operation is not supported with the current configuration.
             Use the "show" command on devices to show additional details about the configuration.
      ~#

    Interestingly, it is possible to add the unassigned drive to the first array without problems. I thought maybe the controller put the array into "Failed" state due to the missing spare and protects failed arrays from modification. So I tried to re-enable the logical drive (to add the spare afterwards):

      ~# /usr/sbin/hpacucli ctrl slot=0 ld 2 modify reenable
      Warning: Any previously existing data on the logical drive may not be valid or recoverable. Continue? (y/n) y
      Error: This operation is not supported with the current configuration.
             Use the "show" command on devices to show additional details about the configuration.
      ~#

    But as you can see, re-enabling the logical drive was not possible. Now I have replaced the failed drive by hot-swapping it with the unassigned drive. The status now looks like this:

      ~# /usr/sbin/hpacucli ctrl slot=0 show config

      Smart Array P400 in Slot 0 (Embedded)    (sn: XXXX)

         array A (SATA, Unused Space: 0 MB)
            logicaldrive 1 (698.6 GB, RAID 1, OK)
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 750 GB, OK)
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 750 GB, OK)

         array B (SATA, Unused Space: 0 MB)
            logicaldrive 2 (2.7 TB, RAID 5, Failed)
            physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 750 GB, OK)
            physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 750 GB, OK)
            physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SATA, 750 GB, OK)
            physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SATA, 750 GB, OK)
            physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SATA, 750 GB, OK)
      ~#

    The logical drive is still not accessible. Why is it not rebuilding? What can I do? FYI, this is the configuration of my controller:

      ~# /usr/sbin/hpacucli ctrl slot=0 show

      Smart Array P400 in Slot 0 (Embedded)
         Bus Interface: PCI
         Slot: 0
         Serial Number: XXXX
         Cache Serial Number: XXXX
         RAID 6 (ADG) Status: Enabled
         Controller Status: OK
         Chassis Slot:
         Hardware Revision: Rev E
         Firmware Version: 5.22
         Rebuild Priority: Medium
         Expand Priority: Medium
         Surface Scan Delay: 15 secs
         Surface Analysis Inconsistency Notification: Disabled
         Raid1 Write Buffering: Disabled
         Post Prompt Timeout: 0 secs
         Cache Board Present: True
         Cache Status: OK
         Accelerator Ratio: 25% Read / 75% Write
         Drive Write Cache: Disabled
         Total Cache Size: 256 MB
         No-Battery Write Cache: Disabled
         Cache Backup Power Source: Batteries
         Battery/Capacitor Count: 1
         Battery/Capacitor Status: OK
         SATA NCQ Supported: True
      ~#

    Thanks for your help in advance.

  • Failed upgrade of PHP on Ubuntu 12.04, error: Sub-process /usr/bin/dpkg returned an error code (1)

    - by DanielAttard
    I just tried to upgrade my version of PHP on Ubuntu 12.04 and now I have messed it up. First I did this:

      sudo add-apt-repository ppa:ondrej/php5-oldstable

    Then I did this:

      sudo apt-get update

    Then finally I did this:

      sudo apt-get install php5

    And now I am getting an error message about "Sub-process /usr/bin/dpkg returned an error code (1)". What have I done wrong? How can I fix this problem? Thanks. Here are the errors received:

      Do you want to continue [Y/n]? Y
      debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
      Setting up libapache2-mod-php5 (5.4.28-1+deb.sury.org~precise+1) ...
      debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
      dpkg: error processing libapache2-mod-php5 (--configure):
        subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      Setting up php5-cli (5.4.28-1+deb.sury.org~precise+1) ...
      debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
      dpkg: error processing php5-cli (--configure):
        subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of php5-curl:
        php5-curl depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
      dpkg: error processing php5-curl (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of php5-gd:
        php5-gd depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
      dpkg: error processing php5-gd (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of php5-mcrypt:
        php5-mcrypt depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
      dpkg: error processing php5-mcrypt (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of php5-mysql:
        php5-mysql depends on phpapi-20100525+lfs; however:
          Package phpapi-20100525+lfs is not installed.
          Package libapache2-mod-php5 which provides phpapi-20100525+lfs is not configured yet.
          Package php5-cli which provides phpapi-20100525+lfs is not configured yet.
      dpkg: error processing php5-mysql (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of php5:
        php5 depends on libapache2-mod-php5 (>= 5.4.28-1+deb.sury.org~precise+1) | libapache2-mod-php5filter (>= 5.4.28-1+deb.sury.org~precise+1) | php5-cgi (>= 5.4.28-1+deb.sury.org~precise+1) | php5-fpm (>= 5.4.28-1+deb.sury.org~precise+1); however:
          Package libapache2-mod-php5 is not configured yet.
          Package libapache2-mod-php5filter is not installed.
          Package php5-cgi is not installed.
          Package php5-fpm is not installed.
      dpkg: error processing php5 (--configure):
        dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
        libapache2-mod-php5
        php5-cli
        php5-curl
        php5-gd
        php5-mcrypt
        php5-mysql
        php5
      E: Sub-process /usr/bin/dpkg returned an error code (1)

  • CopSSH SFTP -- limit users' access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed these instructions at the FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

      net localgroup sftp_users /ADD
        ** Create a user group to hold all my SFTP users
      cacls c:\ /c /e /t /d sftp_users
        ** For that group, deny access at the top level and all levels below
      cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users
        ** Allow my user group access to the CopSSH installation directory and its subdirectories

    For each SFTP user, I create a new Windows user account, then I:

      net localgroup sftp_users sftp_user_1 /add
        ** Add my user to the group I've created

    Then I open the activate user wizard for CopSSH, choosing the user and "/bin/sftponly", with these options:

      Remove copssh home directory if it exists     ** remains checked
      Create keys for public key authentication    ** remains checked
      Create link to user's real home directory    ** remains checked

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory... So I tried denying access for all users to the user home directory:

      cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users
        ** Deny access for users to the user home directory

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!

  • Windows 8.1 upgrade fails with error code 0xC1900101-0x20017

    - by cmorse
    I just tried to install Windows 8.1 on my laptop, but it fails to install with the message:

      Sorry we couldn't complete the update to Windows 8.1. We restored your previous version of Windows to this PC.
      0xC1900101 - 0x20017

    It looks like on the first boot the laptop goes to the PC restore screen (it asks what kind of keyboard I have, and then what repair options I would like to take). Thus far I have just been selecting "Continue to Windows 8". I'm running a Lenovo X220i tablet. I've got 43 GB of free disk space. It installed just fine on my desktop. The primary differences between the two machines are that the desktop has Media Center installed and isn't using TrueCrypt.

    Full WindowsUpdate.log: http://pastebin.com/hGmAW4Q1

    Most important portion of WindowsUpdate.log:

      2013-10-17 10:41:06:671 964 694 Agent *************
      2013-10-17 10:41:06:671 964 694 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
      2013-10-17 10:41:06:671 964 694 Agent *********
      2013-10-17 10:41:06:671 964 694 Agent * Online = No; Ignore download priority = No
      2013-10-17 10:41:06:671 964 694 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
      2013-10-17 10:41:06:671 964 694 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service
      2013-10-17 10:41:06:671 964 694 Agent * Search Scope = {Machine & All Users}
      2013-10-17 10:41:06:671 964 694 Agent * Caller SID for Applicability: S-1-5-18
      2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {AD47FBDC-F7F9-4E7F-BAF4-DBA3784C7101} 2013-10-17 10:41:06:436-0600 1 202 [AU_REBOOT_COMPLETED] 102 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Content Install Reboot completed.
      2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {8D4E7A67-9526-4702-A897-5BE5F97497AF} 2013-10-17 10:41:06:639-0600 1 204 [AGENT_INSTALLING_FAILED_POST_REBOOT] 101 {8951E70D-4332-4F7C-B92D-D9362E384959} 1 c1900101 WSAcquisition Failure Content Install Installation Failure Post Reboot.
      2013-10-17 10:41:07:249 964 870 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
      2013-10-17 10:41:07:249 964 870 Report WER Report sent: 7.8.9200.16715 0xc1900101(0x20017) 8951E70D-4332-4F7C-B92D-D9362E384959 Install 101 Unmanaged
      2013-10-17 10:41:07:249 964 870 Report CWERReporter finishing event handling. (00000000)

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. The 2003 server is a physical machine; the 2008 server is a virtual machine. Here is what I have done thus far:

    - Built a virtual machine running Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member
    - On the 2003 DC, raised the Domain Functional Level and Forest Functional Level to Windows Server 2003
    - On the 2003 DC, went into the registry, navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, and verified that the Schema Version is 30
    - On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 media to copy over the adprep folder (this version was the only one that seemed to work)
    - On the 2003 DC, opened a command prompt, changed to the adprep directory, and ran adprep /forestprep, adprep /domainprep, and adprep /domainprep /gpprep
    - On the 2008 server, installed the Active Directory Domain Services role from Server Manager
    - On the 2003 DC, went back into the registry and verified that the Schema Version is now 44

    When I run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep". I went back to the 2003 DC, looked through the adprep logs, and came across this:

    Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE). [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information. Adprep encountered an LDAP error. Error code: 0x20. Server extended error code: 0x208d. Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com'

    In fact, I got three of these errors. The LDAP error is the same in all three, but the object named in "Adprep was unable to modify the security descriptor on object" differs. The three objects are:

    - CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
    - CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
    - CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com

    The credentials I am using when running dcpromo on the 2008 server are my domain account, which is a member of the Domain Admins and Enterprise Admins groups. I've tried various quick fixes that I've come across through Google searches, including:

    - Disabling antivirus on the current DCs
    - Pointing DNS on the PDC to itself
    - Changing the Schema Update Allowed registry key to 1 and rerunning adprep (adprep only reported that forest-wide information had already been updated)
    - Disabling Windows Firewall on the Server 2008 box
    - On the 2003 DC, going to Domain Controller Security Policy > Local Policies > User Rights Assignment and adding Domain Admins to the "Enable computer and user accounts to be trusted for delegation" policy setting

    Both our PDC and BDC are Global Catalog servers; not sure if this matters or not. I ran netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch, to no avail... Help :(
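
    Since adprep is choking on the security descriptors of those three certificate-template objects, one hedged avenue (a sketch, not a confirmed fix) is to inspect the ACL on each object with dsacls from the Windows Support Tools on the 2003 DC and, if the object exists, grant Enterprise Admins full control before rerunning adprep /forestprep. The NetBIOS domain name below is an assumption based on the DN in the error text:

        REM inspect the current ACL on one of the failing objects
        dsacls "CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com"

        REM if it exists, grant Enterprise Admins full control (GA = generic all), then rerun adprep
        dsacls "CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com" /G "XEROXTOLEDO\Enterprise Admins:GA"

    Repeat for the DirectoryEmailReplication and KerberosAuthentication objects. Note that the LDAP error is NO_OBJECT, so the objects may simply be missing; in that case the permissions are a dead end and the templates themselves need attention.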

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions: C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say, QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if either is a way out at all? Note: Since my post in July 2010, I have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd (see the sketch below). The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.
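
    For reference, a minimal sketch of the qemu-nbd workflow described in the note above; the image path, nbd device, partition number, and mount point are all assumptions to adjust:

        # load the nbd module and attach the qcow2 image
        sudo modprobe nbd max_part=8
        sudo qemu-nbd -c /dev/nbd0 /path/to/corpus.qcow2
        # mount the EXT3 file system (assumed to sit on the first partition)
        sudo mount /dev/nbd0p1 /mnt/corpus
        # ... work with the corpus ...
        sudo umount /mnt/corpus
        sudo qemu-nbd -d /dev/nbd0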

    Read the article

  • socket problem with MySQL

    - by Hristo
    This is a recent problem... MySQL was working, and a couple of days ago I must have done something. I deleted MySQL and tried reinstalling using the .dmg file. The mysql.sock file never gets created and I get the following error message:

    Hristo$ mysql
    Enter password:
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (2)

    I also tried stopping Apache before installing, but Apache gave me an error... I don't know if this is good or bad:

    Hristo$ sudo apachectl stop
    launchctl: Error unloading: org.apache.httpd

    I tried the MacPorts installation as well, but the socket file still didn't get created. I don't really know what to do and I don't want to reinstall Snow Leopard and start from scratch :/ I also tried installing the 32-bit version: same deal, no luck. Finally, I tried the source installation, but when I get to the configuration step I get the following error:

    -bash: ./configure: No such file or directory

    The file is "mysql-5.1.47-osx10.6-x86_64.tar.gz", so I think it is the proper file for a source installation, and yes, I have a 64-bit system. I don't know what to do anymore. Any ideas? Thanks, Hristo
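
    One thing worth checking, offered as an assumption rather than a diagnosis: the stock OS X client often expects the socket at /var/mysql/mysql.sock while the .dmg-installed server creates it at /tmp/mysql.sock. If the server is actually running, a symlink or an explicit --socket flag bridges the gap:

        # see whether the server put its socket somewhere else (path assumed)
        ls -l /tmp/mysql.sock
        # if it exists, point the expected location at it
        sudo mkdir -p /var/mysql
        sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock
        # or tell the client explicitly where to look
        mysql --socket=/tmp/mysql.sock -u root -p

    If /tmp/mysql.sock is missing too, the server never started, and the server error log (typically under /usr/local/mysql/data) is the place to look next.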

    Read the article

  • Visual Studio 2005 won't install on Windows 7

    - by Peanut
    Hi, my question relates very closely to this question: http://superuser.com/questions/34190/visual-studio-2005-sp1-refuses-to-install-in-windows-7 However, that question hasn't provided the answer I'm looking for. I'm trying to install Visual Studio 2005 onto a clean Windows 7 (64-bit) box, but I keep getting the following error when the 'Microsoft Visual Studio 2005' component finishes installing:

    Error 1935. An error occurred during the installation of assembly 'policy.8.0.Microsoft.VC80.OpenMP,type="win32-policy",version="8.0.50727.42",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86"'. Please refer to Help and Support for more information. HRESULT: 0x80073712.

    On my first attempt to install VS 2005 I got a warning about compatibility issues. I stopped at that point, downloaded the necessary service packs, and restarted the installation from the beginning. Ever since then I just get the error message above. I keep rolling back the installation and trying again, but it's always the same error. Any help would be very much appreciated. Thanks.
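
    For what it's worth, HRESULT 0x80073712 corresponds to ERROR_SXS_COMPONENT_STORE_CORRUPT, so before retrying the installer it may be worth checking Windows 7's component store from an elevated command prompt. A hedged first step, not a guaranteed fix:

        REM scan and repair protected system files
        sfc /scannow
        REM pull the SFC results out of the CBS log for review
        findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"

    If sfc reports corruption it cannot repair, Microsoft's System Update Readiness Tool (CheckSUR) targets exactly this class of servicing-store errors.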

    Read the article

  • How to create an MST for silent install using Orca?

    - by Sanarothe
    Hi. I'm trying to deploy 7-Zip via GPO; I assigned the original MSI, but the package installation simply doesn't take place. What I've gathered is that I need to create an MST. In the spirit of learning as much as possible about it, I've opted to use Orca rather than a third-party automagic tool, but I'm at a loss as to which fields to edit. So far the only change I've made is to give the license-accepted checkbox a value of "1" instead of pointing to another key that, still, just gave it a value of "1". So, to give this some structure:

    1. How does creating an MST make the install noninteractive/silent, and what criteria should I consider? Do I have to manually reconfigure the MSI to simply not perform the GUI aspects, or do I have to execute the installer in silent mode after defining the variables it requests? (Though, of course, it seems the latter would defeat the purpose of the MST.)
    2. How do I determine which fields I need to edit? I've loaded the installer and it takes three inputs: license acceptance, feature set, and installation location. I want all of the default values; I'm just trying to deploy it at all, not customize the installation. I BELIEVE I should be changing some values in the Registry table, but I really don't know.

    If I'm not asking the right questions, can someone point me to a THOROUGH resource or documentation for this process? I've already gone over the TechNet articles on basic Orca use and deployment, but I couldn't really find anything on creating an MST that didn't involve a third-party program in which one runs a 'dummy' installer to get before-and-after snapshots. Thank you very much, Cameron

    UPDATE: After spending the day troubleshooting, I finally got my server to send out 7-Zip, but not until I had also assigned Firefox. Not sure why it didn't want to send out 7-Zip by itself, but I also had some domain naming problems. Thanks for the input (GPResult helped enormously).
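
    As a sanity check while building the transform, it can help to test it locally before assigning it through GPO; GPO deployment applies the transform for you once it is attached to the package. File names below are assumptions:

        REM silent install of the MSI with the transform applied, with verbose logging
        msiexec /i 7zip.msi TRANSFORMS=7zip-deploy.mst /qn /l*v install.log

    If the same MSI already installs silently with /qn alone, the MST only needs to carry property changes (such as the license-accepted flag), not UI surgery.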

    Read the article

  • How do I make my Bootcamp partition bootable again?

    - by KJFMusic
    I'm having a similar problem to everyone else in this posting. I have 5 partitions, 3 of which I created for my Mac OS Lion installation, my Windows 7 installation, and a third for storage. Everything ran fine for quite some time, until recently my Windows 7 installation suddenly stopped booting. Instead of a startup screen I get:

    Windows failed to start. A recent hardware or software change might be the cause.
    File: \BOOT\BCD
    Status: 0xc000000d
    Info: An error occurred while attempting to read the boot configuration data

    Mac OS Lion starts up fine, but I'm unable to mount my "Bootcamp" partition or the "Storage" partition. On top of that, "Storage" has been renamed to "disk0s5". When I installed Windows 7, it didn't recognize the "Storage" partition created in Lion, so it merged what it thought was free disk space (I'm assuming the same space that Mac OS recognized as Storage) into the root drive of Windows 7 (Bootcamp). Are you able to assist?
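
    The usual repair path for a damaged \BOOT\BCD, offered here as a sketch rather than a guaranteed fix, is to boot the Windows 7 install disc, choose Repair your computer > Command Prompt, and rebuild the boot data. Be careful on a Mac: Boot Camp disks use a hybrid MBR/GPT layout, so back up before rewriting the MBR:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd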

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances; let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances: how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like

    rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA

    for each instance, but this would get tedious with 50+ instances. Thanks in advance!

    Edit: The README.Debian file in the (Debian) Redmine package states:

    SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES. This redmine package is designed to automatically configure database BUT NOT the web server. The default database instance is called "default". A debconf facility is provided for configuring several redmine instances. Use dpkg-reconfigure to define the instances identifiers.

    But I can't figure out what to do with the "debconf facility".

    Edit 2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
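
    A hedged sketch of the loop the question is asking for; it assumes the instance identifiers live as directories under /etc/redmine, which is where the Debian package keeps its per-instance configuration, and that the shared installation sits in /usr/share/redmine:

        #!/bin/sh
        # run the migration once per configured Redmine instance
        cd /usr/share/redmine
        for dir in /etc/redmine/*/; do
            site=$(basename "$dir")
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$site"
        done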

    Read the article

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in this FAQ many times: http://www.itefix.no/i2/node/37 It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users sandboxed into their home directories only. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

    net localgroup sftp_users /ADD   (create a user group to hold all my SFTP users)
    cacls c:\ /c /e /t /d sftp_users   (for that group, deny access at the top level and all levels below)
    cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users   (allow my user group access to the CopSSH installation directory and its subdirectories)

    For each SFTP user, I create a new Windows user account, then:

    net localgroup sftp_users sftp_user_1 /add   (add my user to the group I've created)

    I then open the Activate User wizard for CopSSH, choosing the user and "/bin/sftponly", and leaving "Remove copssh home directory if it exists", "Create keys for public key authentication", and "Create link to user's real home directory" checked.

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying access for all users to the user home directory:

    cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
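
    One hedged alternative to stacking deny rules (assuming icacls is available on your Windows version): break inheritance at the home directory and grant each user rights only on their own folder, so no per-user deny ACEs are needed. The user and folder names below are assumptions:

        REM stop the home directory from propagating ACEs to the user folders
        icacls "C:\Program Files\CopSSH\home" /inheritance:r
        icacls "C:\Program Files\CopSSH\home" /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F"
        REM grant each user modify rights on their own folder only
        icacls "C:\Program Files\CopSSH\home\sftp_user_1" /grant "sftp_user_1:(OI)(CI)M"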

    Read the article

  • Changing order of Thunderbird email address autocomplete?

    - by Brooks Moses
    I recently did a system wipe and installed Thunderbird 3.0, and imported all of my email setup from a previous Thunderbird 2.0 installation. Almost everything is working fine, but I'm having a problem with the autocomplete in email addresses when writing messages. The relevant behavior is this: In the old 2.0 installation, the autocomplete appeared to know which email addresses I used most frequently, and so when I typed "m" in the address line, it would pick as the default selection the "m"-person who I frequently write email to. (It's possible this is an illusion and it simply picked people in the order I added them to my address book.) Thus, I have become used to typing "m"-"enter" in the address field, and getting this person. In the current 3.0 installation, however, the autocomplete order has changed. It's not the same as it was, and it's not alphabetical. The result is that I'm spending extra time looking at the email address bar, and more annoyingly, half the time the old muscle-memory kicks in and I find myself with an email that's addressed to a couple of customers rather than to my boss and coworker. Thus, two questions: How does Thunderbird determine this autocomplete order, among a set of addresses all of which are in the same address book? How can I change this ordering to be what I want? (I have tried Google-searching, and found a number of incomplete answers, nearly all of which were for version 1.0 or thereabouts, and reference settings dialog boxes that no longer exist.)

    Read the article

  • Unable to activate Windows XP

    - by Josh Kelley
    The latest round of Patch Tuesday updates left my Windows XP computer unbootable. ("Fatal System Error: The Windows Logon Process system process terminated unexpectedly.") After much messing around with the recovery console, an XP CD's repair mode, and manually copying registry files around, I have a system that can boot again. However, I overwrote my OEM XP installation's activation information while trying to run a retail XP CD's setup, so it needs reactivation. Here's my problem: I cannot activate it at all. I log in, Windows tells me I have to activate to continue, I click Yes, and absolutely nothing happens: no windows, no response to keyboard or mouse, no response to Ctrl-Alt-Del, nothing. Safe mode works, but I can't activate in safe mode (EDIT: not even safe mode with networking). I read a trick online of pressing [Windows Key]+U to bring up the Microsoft Narrator, and that works, but clicking its Microsoft Web Site link does nothing. My last attempt to resolve this was to reinstall Windows off of the OEM CD. Now I have two parallel Windows installations, both on the same hard drive, one with all of my stuff and no way to activate it, one fully activated with no usable programs. Any ideas? Any way to activate in safe mode? Any way to copy activation information from my activated installation to my unactivated installation (since they're both on the same hard drive)?
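
    On that last question: XP keeps its activation state in %systemroot%\system32\wpa.dbl, and since both installations sit on identical hardware, copying that file across has been known to carry activation over. This is a hedged sketch, not a supported procedure; the directory names are assumptions (a parallel install usually lands in something like C:\WINDOWS.0), and the target file should be backed up first:

        REM run from the activated installation, assumed to live in C:\WINDOWS;
        REM the unactivated installation is assumed to live in C:\WINDOWS.0
        copy C:\WINDOWS.0\system32\wpa.dbl C:\WINDOWS.0\system32\wpa.dbl.bak
        copy C:\WINDOWS\system32\wpa.dbl C:\WINDOWS.0\system32\wpa.dbl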

    Read the article

  • Does a Windows 7 DVD only have one language?

    - by user326639
    I'm a Dutch developer living in Spain. I recently built a new computer from parts and installed Windows 7 Professional 64-bit (OEM) on it. On the web site of the online shop there was a note saying "language: Spanish". Because my English is quite a bit better than my Spanish, but mainly because it is much easier to find information on the web in English, I want my OS to be in English. I asked the online shop if they also sold the UK version of Windows 7, but they assured me that "all Windows 7 versions are multi-language". When I installed XP a few years ago, I remember being offered the choice between English and Spanish while the installation process was still in a DOS-like (non-graphical) screen. While installing Windows 7, I did not see any non-graphical screen, and the first time I was asked about the language was in a drop-down list that contained only Spanish. I know about the language-pack feature of Windows 7, but it is not available on Professional. Even if I had Ultimate, I don't know whether it would be possible to install Windows in Spanish and then add English as a second language from a language pack; I get the impression that English has to be the base language. Furthermore, I am a bit sceptical until I see it in action. What happens with shortcuts (i.e. Select All: Ctrl+A in English / Ctrl+E in Spanish), and what about log messages in Event Viewer, etc.? So can anybody tell me how it works with languages in Windows 7? Have I been misinformed by the computer shop? Could it be that OEM versions of Windows are single-language and a full installation is not?
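
    One way to see what the installed copy actually contains is DISM's international-settings report, run from an elevated command prompt; it only reports, it changes nothing:

        REM list the installed UI language(s) and the system default
        dism /online /get-intl

    If that shows only es-ES, the media was indeed single-language; the shop's "multi-language" claim would then apply to the product key (which is language-neutral) rather than to the DVD contents.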

    Read the article
