Search Results

Search found 23897 results on 956 pages for 'installation configuration'.


  • log4j.xml show com.foo, but hide com.foo.bar

    - by Bas Hendriks
    Hi, I have the following log4j.xml configuration:

        <log4j:configuration>
          <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out"/>
            <param name="Threshold" value="DEBUG"/>
          </appender>
          <category name="com.foo">
            <appender-ref ref="CONSOLE"/>
          </category>
        </log4j:configuration>

    This displays every log in com.foo.*. I want to disable logging in com.foo.bar.*. How do I do this?
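
    A common way to handle this in log4j 1.x (a rough sketch, not part of the original post) is to declare a second category for the sub-package and turn its level off, so com.foo.* keeps logging while com.foo.bar.* is silenced:

        <category name="com.foo.bar">
          <priority value="off"/>
        </category>

    The extra category sits alongside the existing com.foo category; loggers under com.foo.bar then inherit the "off" level.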

    Read the article

  • App.config for SpecFlow not recognized

    - by INTPnerd
    How do I get my App.config file to be recognized/used? I have tried placing it in the top folder of my project and in the same folder as my feature files. Here are the contents of my App.config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow"/>
          </configSections>
          <specFlow>
            <runtime detectAmbiguousMatches="true"
                     stopAtFirstError="false"
                     missingOrPendingStepsOutcome="Error" />
          </specFlow>
        </configuration>

    Specifically, I am trying to tell NUnit to report a fail result when there is a missing or pending step, which is why I am specifying "Error" here.

    Read the article

  • Switching VS2010 to use Windows 7.1 SDK

    - by freefallr
    I've used VS2008 on my development machine for some years now, with the Windows SDK v7.1. I've installed VS2010, and it's using the Windows SDK v7.0a, but I need it to use the Windows 7.1 SDK (which I had installed prior to installing VS2010). When I run the Windows SDK 7.1 configuration tool to switch the Windows SDK in use, the tool updates VS2008, but not VS2010. The message it reports is: "The Windows SDK Configuration Tool has successfully set Windows SDK version v7.1 as the current version for Visual Studio 2008". The configuration tool is installed with the Windows 7.1 SDK and is found here: "C:\Program Files\Microsoft SDKs\Windows\v7.1\Setup\WindowsSdkVer.exe". VS2010 continues to use WSDK 7.0a, which is extremely frustrating, as I need to do DirectShow development (so I need to build the baseclasses, which aren't released with the 7.0a release of the WSDK). Would I be correct in assuming that it's not updating the VS2010 settings because VS2010 wasn't installed at the time that I installed the Windows 7.1 SDK? Can I fix this manually, or should I uninstall the Windows 7.1 SDK, then reinstall it? Any other suggestions / workarounds for this?
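
    If the configuration tool refuses to touch VS2010, one workaround (a sketch, assuming the SDK's VS2010 integration was installed) is to point the individual C++ project at the 7.1 toolset instead, either via Project Properties > Configuration Properties > General > Platform Toolset, or directly in the .vcxproj:

        <PropertyGroup Label="Configuration">
          <PlatformToolset>Windows7.1SDK</PlatformToolset>
        </PropertyGroup>

    Repairing or reinstalling the 7.1 SDK after VS2010 is also commonly reported to make the "Windows7.1SDK" toolset entry appear.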

    Read the article

  • ssms cannot connect to default sql server instance without specifying port number

    - by Oliver
    I have multiple SQL Server 2005 instances on a box. From SSMS on my desktop I can connect to that box's named instances with no problem. After some recent network configuration changes, when I want to connect to the default instance from SSMS on my desktop, I have to specify the port number. Before the network changes, I did not have to specify the port number of the default instance. If I remote to any other box (including the one in question), and use that box's SSMS to connect to that default instance, success. From my desktop, and only from my desktop, I have to specify the port number. Is it a SQL Server configuration that I've missed? Is it possible something in my PC's configuration is getting in the way? Where would I look, or what could I pass on to the network folks to help them resolve this? Any help is appreciated.

    Read the article

  • sharp architecture mapping error

    - by fez
    This is the error I get when I load the Product entity. My code is:

        Configuration cfg = new NHibernate.Cfg.Configuration();
        ISessionFactory sessions;

        public MedicineController() // Constructor
        {
            cfg.Configure();
            sessions = cfg.BuildSessionFactory();
        }

        using (var session = sessions.OpenSession())
        {
            var pGet = session.Get<Product>(0);
        }

    The error is: "Unable to locate persister for the entity named 'SharpArchitecture.Domain.Product'. The persister define the persistence strategy for an entity. Possible causes: - The mapping for 'SharpArchitecture.Domain.Product' was not added to the NHibernate configuration." Thanks in advance.
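
    That message usually means the session factory was built without the Product mapping. A minimal sketch of the usual fix, assuming the mappings (hbm.xml files or mapped classes) live in the SharpArchitecture.Domain assembly, is to add that assembly before building the factory:

        cfg.Configure();
        cfg.AddAssembly(typeof(Product).Assembly); // or cfg.AddAssembly("SharpArchitecture.Domain")
        sessions = cfg.BuildSessionFactory();

    Note that S#arp Architecture projects normally build their session factory through the framework's own initialization (Fluent NHibernate auto-mapping), so constructing a raw Configuration by hand and bypassing that plumbing is itself a likely cause.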

    Read the article

  • dll woes c# noob

    - by Chin
    Hi, I'm a bit of a Visual Studio noob. I have just restarted a project in which I am using NHibernate. The project worked fine last time I used it, but now it gives the following error:

        System.IO.FileLoadException: Could not load file or assembly 'Iesi.Collections, Version=1.0.0.3, Culture=neutral, PublicKeyToken=aa95f207798dfdb4' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
            at NHibernate.Cfg.Configuration.Reset()
            at NHibernate.Cfg.Configuration..ctor(SettingsFactory settingsFactory)
            at NHibernate.Cfg.Configuration..ctor()
            at Luther.Dao.Repositories.Session.NHibernateHelper..cctor() in NHibernateHelper.cs: line 18

    I notice the current reference to the Iesi dll is at 1.0.1.0. What is the best way to get this up and running again? Try and find the appropriate version of the dll, or sort out the manifest file? Any pointers much appreciated.
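
    One low-friction option, assuming the 1.0.1.0 build of Iesi.Collections is actually compatible with what this NHibernate version expects, is a binding redirect in the app/web.config so the runtime maps the old reference onto the assembly you have:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="Iesi.Collections"
                                  publicKeyToken="aa95f207798dfdb4"
                                  culture="neutral" />
                <bindingRedirect oldVersion="1.0.0.3" newVersion="1.0.1.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>

    The alternative is simply to reference the Iesi.Collections 1.0.0.3 build that shipped alongside this NHibernate release.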

    Read the article

  • An item with the same key has already been added.

    - by senzacionale
    Can someone advise me how to prevent this error: "An item with the same key has already been added."?

        // Failed to find a matching SessionFactory so make a new one.
        if (sessionFactory == null)
        {
            Check.Require(File.Exists(sessionFactoryConfigPath),
                "The config file at '" + sessionFactoryConfigPath + "' could not be found");

            Configuration cfg = new Configuration();
            cfg.Configure(sessionFactoryConfigPath);

            /*MINE*/
            var persistenceModel = new PersistenceModel();
            persistenceModel.AddMappingsFromAssembly(Assembly.Load("EMedicine.Core"));
            persistenceModel.Configure(cfg);
            /*END_OF_MINE*/

            // Now that we have our Configuration object, create a new SessionFactory
            sessionFactory = cfg.BuildSessionFactory();
            if (sessionFactory == null)
            {
                throw new InvalidOperationException("cfg.BuildSessionFactory() returned null.");
            }

            if (sessionFactoryConfigPath != null)
                sessionFactories.Add(sessionFactoryConfigPath, sessionFactory);
        }

    The error is thrown here: sessionFactory = cfg.BuildSessionFactory();
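
    The usual suspects with this snippet (a reading of the code, not from the original thread) are either the same mappings being added to the Configuration twice, or the same config path being added to the sessionFactories dictionary twice. A defensive sketch for the latter:

        // Re-use a cached factory for this config path instead of calling Add twice.
        if (sessionFactoryConfigPath != null && !sessionFactories.ContainsKey(sessionFactoryConfigPath))
        {
            sessionFactories.Add(sessionFactoryConfigPath, sessionFactory);
        }

    If the exception really is thrown from BuildSessionFactory(), check whether the same mapping assembly is registered both in the XML config file and via AddMappingsFromAssembly.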

    Read the article

  • What special issues are at play when loading a config file from the command prompt with DTExec

    - by Ralph Shillington
    If I run a package from Management Studio and specify a configuration file, everything works as expected. However, if I try to run the package from the command prompt with DTExec, I get the error: "Cannot load the XML configuration file. The XML configuration file may be malformed or not valid." The command I'm using to execute the package is:

        dtexec /conf ConfigurationDemo.dtsConfig /f Package.dtsx

    I am running dtexec from the folder where these two files reside. Is there an additional switch or something that must be used to get dtexec to behave the same way as Management Studio does when launching a package?
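
    One thing worth trying (the paths below are placeholders, not from the original question) is to spell the options out in full with absolute paths, which removes any ambiguity about how dtexec resolves files relative to its working directory:

        dtexec /File "C:\packages\Package.dtsx" /ConfigFile "C:\packages\ConfigurationDemo.dtsConfig"

    It is also worth opening the .dtsConfig in an XML editor to rule out the file genuinely being malformed.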

    Read the article

  • Certificate Information from WCF Service using Transport security mode

    - by Langdon
    Is there any way to pull information about which client certificate was used inside my web service method when using <security mode="Transport">? I sifted through OperationContext.Current but couldn't find anything obvious. My server configuration is as follows:

        <basicHttpBinding>
          <binding name="SecuredBasicBindingCert">
            <security mode="Transport">
              <message clientCredentialType="Certificate" />
            </security>
          </binding>
        </basicHttpBinding>

    I'm working with a third-party pub/sub system which is unfortunately using DataPower for authentication. It seems like if I'm using WCF with this configuration, then I'm unable to glean any information about the caller (since no credentials are actually sent). I somehow need to be able to figure out who's making calls to my service without changing my configuration or asking them to change their payload.
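
    For reference, a sketch of how this normally looks when the TLS client certificate actually reaches the service (i.e. the SSL connection is not terminated at an intermediary such as DataPower): the certificate is requested at the transport level rather than the message level, and then read from the operation's security context.

        <security mode="Transport">
          <transport clientCredentialType="Certificate" />
        </security>

        // Server side, inside the operation (System.IdentityModel.Claims):
        var authContext = ServiceSecurityContext.Current.AuthorizationContext;
        foreach (var claimSet in authContext.ClaimSets)
        {
            var certClaims = claimSet as X509CertificateClaimSet;
            if (certClaims != null)
            {
                var cert = certClaims.X509Certificate; // X509Certificate2: Subject, Thumbprint, ...
            }
        }

    If DataPower terminates SSL and forwards nothing, there is no certificate for WCF to surface, so the caller's identity would have to arrive some other way, for example in a header the intermediary injects.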

    Read the article

  • How do you: Symfony functional test on a secure app?

    - by Dan Tudor
    I'm trying to perform some functional tests in symfony 1.4. My application is secured, so the tests return 401 response codes instead of the expected 200. I've tried creating a context and authenticating the user prior to performing the test, but to no avail. Any suggestions? Do I need to pass sfContext to the sfTestFunctional? Thanks.

        include(dirname(__FILE__).'/../../bootstrap/functional.php');

        $configuration = ProjectConfiguration::getApplicationConfiguration('backend', 'test', true);
        $context = sfContext::createInstance($configuration);
        new sfDatabaseManager($configuration);

        $loader = new sfPropelData();
        $loader->loadData(sfConfig::get('sf_test_dir').'/fixtures'); // load test data

        $user = sfGuardUserPeer::retrieveByUsername('test');
        $context->getUser()->signin($user);

        $browser = new sfTestFunctional(new sfBrowser());

        $browser->
          get('/')->
          with('request')->begin()->
            isParameter('module', 'video')->
            isParameter('action', 'index')->
          end()->
          with('response')->begin()->
            isStatusCode(200)->
            //checkElement('body', '!/This is a temporary page/')->
          end()
        ;

    Read the article

  • No Hibernate Session bound to thread grails

    - by naresh
    We have a lot of Quartz jobs in our application. For a while all of the jobs work fine; then, at some point, every job starts throwing the following exception:

        org.quartz.JobExecutionException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
        [See nested exception: java.lang.IllegalStateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here]
            at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:37)
            at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
            at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
        Caused by: java.lang.IllegalStateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
            at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:29)
            ... 2 more

    Read the article

  • How to publish multiple jar files to maven on a clean install

    - by Abhijit Hukkeri
    I have used the maven assembly plugin to create multiple jars from one jar. Now the problem is that I have to publish these jars to the local repo, just like other Maven jars publish themselves when they are built with mvn clean install. How will I be able to do this? Here is my pom file:

        <project>
          <parent>
            <groupId>parent.common.bundles</groupId>
            <version>1.0</version>
            <artifactId>child-bundle</artifactId>
          </parent>
          <modelVersion>4.0.0</modelVersion>
          <groupId>common.dataobject</groupId>
          <artifactId>common-dataobject</artifactId>
          <packaging>jar</packaging>
          <name>common-dataobject</name>
          <version>1.0</version>
          <dependencies>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.jibx</groupId>
                <artifactId>maven-jibx-plugin</artifactId>
                <version>1.2.1</version>
                <configuration>
                  <directory>src/main/resources/jibx_mapping</directory>
                  <includes>
                    <includes>binding.xml</includes>
                  </includes>
                  <verbose>false</verbose>
                </configuration>
                <executions>
                  <execution>
                    <goals>
                      <goal>bind</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
              <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                  <execution>
                    <id>make-business-assembly</id>
                    <phase>package</phase>
                    <goals>
                      <goal>single</goal>
                    </goals>
                    <configuration>
                      <appendAssemblyId>false</appendAssemblyId>
                      <finalName>flight-dto</finalName>
                      <descriptors>
                        <descriptor>src/main/assembly/car-assembly.xml</descriptor>
                      </descriptors>
                      <attach>true</attach>
                    </configuration>
                  </execution>
                  <execution>
                    <id>make-gui-assembly</id>
                    <phase>package</phase>
                    <goals>
                      <goal>single</goal>
                    </goals>
                    <configuration>
                      <appendAssemblyId>false</appendAssemblyId>
                      <finalName>app_gui</finalName>
                      <descriptors>
                        <descriptor>src/main/assembly/bike-assembly.xml</descriptor>
                      </descriptors>
                      <attach>true</attach>
                    </configuration>
                  </execution>
                </executions>
              </plugin>
            </plugins>
          </build>
        </project>

    Here is my assembly file:

        <assembly>
          <id>app_business</id>
          <formats>
            <format>jar</format>
          </formats>
          <baseDirectory>target</baseDirectory>
          <includeBaseDirectory>false</includeBaseDirectory>
          <fileSets>
            <fileSet>
              <directory>${project.build.outputDirectory}</directory>
              <outputDirectory></outputDirectory>
              <includes>
                <include>com/dataobjects/**</include>
              </includes>
            </fileSet>
          </fileSets>
        </assembly>
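
    An alternative sketch (the jar names below are inferred from the finalName values above): attach the extra jars explicitly with the build-helper-maven-plugin, so that mvn clean install installs them next to the main artifact. Each attached artifact needs its own classifier, because two plain jars cannot share the same groupId/artifactId/version:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>build-helper-maven-plugin</artifactId>
          <executions>
            <execution>
              <id>attach-extra-jars</id>
              <phase>package</phase>
              <goals>
                <goal>attach-artifact</goal>
              </goals>
              <configuration>
                <artifacts>
                  <artifact>
                    <file>${project.build.directory}/flight-dto.jar</file>
                    <type>jar</type>
                    <classifier>business</classifier>
                  </artifact>
                  <artifact>
                    <file>${project.build.directory}/app_gui.jar</file>
                    <type>jar</type>
                    <classifier>gui</classifier>
                  </artifact>
                </artifacts>
              </configuration>
            </execution>
          </executions>
        </plugin>

    For the same reason, <appendAssemblyId>false</appendAssemblyId> combined with <attach>true</attach> tends to make the assemblies collide with the main jar; keeping the assembly id (which becomes the classifier) is usually enough to get each one installed.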

    Read the article

  • Freemarker template not found

    - by brock
    Hi, I'm currently trying to get FreeMarker to work with my application using Spring. No matter what I try, I keep getting "template not found". I am not sure if I have the configuration set up properly, but it never finds my template. Here is my Spring bean config:

        <bean id="freemarkerConfig" class="org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer">
          <property name="templateLoaderPath" value="/WEB-INF/freemarker/"/>
        </bean>

    Whenever I try to call getTemplate on the FreeMarker configuration, it always sends back a "template not found" error. So if I do configuration.getTemplate("testTemplate.ftl"), it always throws an IOException. Does anyone have an idea of what I'm doing wrong? Thanks for all your help!

    Read the article

  • Spring security access with multiple roles

    - by Evgeny Makarov
    I want to define access to some pages for users who have one of the following roles (ROLE1 or ROLE2). I'm trying to configure this in my Spring Security XML file as follows:

        <security:http entry-point-ref="restAuthenticationEntryPoint"
                       access-decision-manager-ref="accessDecisionManager"
                       xmlns="http://www.springframework.org/schema/security"
                       use-expressions="true">
          <!-- skipped configuration -->
          <security:intercept-url pattern="/rest/api/myUrl*" access="hasRole('ROLE1') or hasRole('ROLE2')" />
          <!-- skipped configuration -->
        </security:http>

    I've tried various ways, like:

        access="hasRole('ROLE1, ROLE2')"
        access="hasRole('ROLE1', 'ROLE2')"
        access="hasAnyRole('[ROLE1', 'ROLE2]')"

    etc., but nothing seems to be working. I keep getting exceptions such as:

        java.lang.IllegalArgumentException: Unsupported configuration attributes:

    or

        java.lang.IllegalArgumentException: Failed to parse expression 'hasAnyRole(['ROLE1', 'ROLE2'])'

    How should it be configured? Thanks
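
    For reference, a sketch of the expression form that normally parses when use-expressions="true" is in effect:

        <security:intercept-url pattern="/rest/api/myUrl*" access="hasAnyRole('ROLE1','ROLE2')" />

    The "Unsupported configuration attributes" message typically points at the custom access-decision-manager-ref: when an AccessDecisionManager is supplied explicitly, it needs a WebExpressionVoter among its voters, otherwise expression-based access attributes are rejected no matter how they are written.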

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers about how they handle sending IRM secured documents to external parties. Their concern is that using IRM to secure sensitive information they need to share outside their business is hampered by the inability of third parties to install the software which enables them to gain access to the information. It is a very legitimate question and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:

    - Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications.
    - Protection against programmatic access to the document. Office documents and PDF documents have the ability to be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program.
    - Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc). This process must be secure so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to do so. The fact is that if you want to have very strong security and access control over a document you are going to send to someone who is beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or by using Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again we've made every effort for this manual install process to be as simple as we can. Starting with the small download size of the software itself through to the simple installation process, most end users are able to install and access sealed content very quickly. You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing, or may simply be unable to install, the software themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say of most importance, is that the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "Why on earth are you asking me to download some software just to access your menu!?" A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward. One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and had decided to use IRM to control access to engineering documents", and if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. Pretty much every company who deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed due to accessing content from another company.

    The future

    In summary I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality to need no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • setIncludesSubentities: in an NSFetchRequest is broken for entities across multiple persistent stores

    - by SG
    Prior art which doesn't quite address this: http://stackoverflow.com/questions/1774359/core-data-migration-error-message-model-does-not-contain-configuration-xyz

    I have narrowed this down to a specific issue. It takes a minute to set up, though; please bear with me. The gist of the issue is that a persistentStoreCoordinator (apparently) cannot preserve the part of an object graph where a managedObject is marked as a subentity of another when they are stored in different files. Here goes...

    1) I have 2 xcdatamodel files, each containing a single entity. At runtime, when the managed object model is constructed, I manually define one entity as a subentity of another using setSubentities:. This is because defining subentities across multiple files in the editor is not supported yet. I then return the complete model with modelByMergingModels.

        //Works!
        [mainEntity setSubentities:canvasEntities];
        NSLog(@"confirm %@ is super for %@", [[[canvasEntities lastObject] superentity] name], [[canvasEntities lastObject] name]);
        //Output: "confirm Note is super for Browser"

    2) I have modified the persistentStoreCoordinator method so that it sets a different store for each entity. Technically, it uses configurations, and each entity has one and only one configuration defined.

        //Also works!
        for ( NSString *configName in [[HACanvasPluginManager shared].registeredCanvasTypes valueForKey:@"viewControllerClassName"] ) {
            storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:[configName stringByAppendingPathExtension:@"sqlite"]]];
            //NSLog(@"entities for configuration '%@': %@", configName, [[[self managedObjectModel] entitiesForConfiguration:configName] valueForKey:@"name"]);
            //Output: "entities for configuration 'HATextCanvasController': (Note)"
            //Output: "entities for configuration 'HAWebCanvasController': (Browser)"
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:configName URL:storeUrl options:options error:&error])
            //etc

    3) I have a fetchRequest set for the parent entity, with setIncludesSubentities: and setAffectedStores: just to be sure we get both 1) and 2) covered. When inserting objects of either entity, they both are added to the context and they both are fetched by the fetchedResultsController and displayed in the tableView as expected.

        // Create the fetch request for the entity.
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        [fetchRequest setEntity:entity];
        [fetchRequest setIncludesSubentities:YES]; //NECESSARY to fetch all canvas types
        [fetchRequest setSortDescriptors:sortDescriptors];
        [fetchRequest setFetchBatchSize:20]; // Set the batch size to a suitable number.
        [fetchRequest setAffectedStores:[[managedObjectContext persistentStoreCoordinator] persistentStores]];
        [fetchRequest setReturnsObjectsAsFaults:NO];

    Here is where it starts misbehaving: after closing and relaunching the app, ONLY THE PARENT ENTITY is fetched. If I change the entity of the request using setEntity: to the entity for 'Note', all notes are fetched. If I change it to the entity for 'Browser', all the browsers are fetched. Let me reiterate that during the run in which an object is first inserted into the context, it will appear in the list. It is only after save and relaunch that a fetch request fails to traverse the hierarchy. Therefore, I can only conclude that it is the storage of the inheritance that is the problem.

    Let's recap why:

    - Both entities can be created, inserted into the context, and viewed, so the model is working.
    - Both entities can be fetched with a single request, so the inheritance is working.
    - I can confirm that the files are being stored separately and objects are going into their appropriate stores, so saving is working.
    - Launching the app with either entity set for the request works, so retrieval from the store is working.
    - This also means that traversing different stores with the request is working.
    - By using a single store instead of multiple, the problem goes away completely, so creating, storing, fetching, viewing etc. is working correctly.

    This leaves only one culprit (to my mind): the inheritance I'm setting with setSubentities: is effective only for objects created during the session. Either objects/entities are being stored stripped of the inheritance info, or entity inheritance as defined programmatically only applies to new instances, or both. Either of these is unacceptable. Either it's a bug or I am way, way off course. I have been at this every which way for two days; any insight is greatly appreciated.

    The current workaround - just using a single store - works completely, except it won't be future-proof in the event that I remove one of the models from the app etc. It also boggles the mind because I can't see why you would have all this infrastructure for storing across multiple stores and for setting affected stores in fetch requests if it by core definition (of setSubentities:) doesn't work.

    Read the article

  • Storing Configurations into Active Directory Application Mode

    - by Khurram Aziz
    I have a "poll network devices and perform actions" kind of app; currently it keeps its configuration (which devices to poll, what kind of device, IP, login, password, etc.) in a database. My network administrator wants this information stored in an LDAP server so that he can maintain a single store of configuration which he can also use in other apps/scripts. I am looking for an article that walks me through setting up ADAM/AD LDS for storing configuration by authoring a custom schema, and how to set up some authentication infrastructure to protect the data.
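
    Once a partition and schema exist, reading the entries back from application code is fairly mechanical. A sketch with System.DirectoryServices, in which the host, port, DN and attribute names are all invented for illustration:

        using System.DirectoryServices; // reference System.DirectoryServices.dll

        static void LoadDeviceConfig()
        {
            using (var devices = new DirectoryEntry("LDAP://ldapserver:389/CN=Devices,CN=AppConfig,DC=example,DC=com"))
            using (var searcher = new DirectorySearcher(devices, "(objectClass=pollingDevice)"))
            {
                foreach (SearchResult result in searcher.FindAll())
                {
                    var host = (string)result.Properties["deviceHost"][0];
                    var login = (string)result.Properties["deviceLogin"][0];
                    // hand these to the polling engine...
                }
            }
        }

    Secrets such as passwords are better kept out of the directory (or at least encrypted), with AD LDS ACLs restricting who can read the configuration container.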

    Read the article

  • Trouble enabling Grails logging

    - by Dave
    I have this logging configuration in my Config.groovy file. This is a development environment, started as such. I have verified that the file exists and has 775 perms, but nothing is getting output to it.

        // set per-environment serverURL stem for creating absolute links
        environments {
            production {
                grails.serverURL = "http://www.changeme.com"
            }
            development {
                grails.serverURL = "http://localhost:8080/${appName}"
                logFilePath = "/Users/davea/Tomcat/logs/log4j.log"
            }
            test {
                grails.serverURL = "http://localhost:8080/${appName}"
            }
        }

        // log4j configuration
        log4j = {
            console name:'Appender1',
                    layout:pattern(conversionPattern: '%-4r [%t] %-5p %c %x - %m%n')
            rollingFile name:'Appender2',
                    maxFileSize:1024 * 1024,
                    file:logFilePath,
                    layout:pattern(conversionPattern: '%-4r [%t] %-5p %c %x - %m%n')
            root {
                debug 'Appender1', 'Appender2'
            }
        }

    Can anyone tell me what's wrong with my configuration? Thanks, - Dave

    Read the article

  • unable to inject seam cache provider

    - by Joshua
    Env: Seam 2.2, ehcache-core 2.1.0

    I tried injecting the CacheProvider using the following call in my bean scoped for session:

        @In CacheProvider cacheProvider;

    WEB-INF\components.xml contains the following line to enable the cache provider:

        <cache:eh-cache-provider/>

    The above configuration seems to return a null value for the cache provider. Using the cache provider like this:

        CacheProvider cacheProvider = CacheProvider.instance();

    throws the following warning:

        15:29:27,586 WARN [CacheManager] Creating a new instance of CacheManager using the diskStorePath "C:\DOCUME~1\user5\LOCALS~1\Temp\" which is already used by an existing CacheManager. The source of the configuration was net.sf.ehcache.config.generator.ConfigurationSource$DefaultConfigurationSource@15ed0f9. The diskStore path for this CacheManager will be set to C:\DOCUME~1\user5\LOCALS~1\Temp\\ehcache_auto_created_1276682367586. To avoid this warning consider using the CacheManager factory methods to create a singleton CacheManager or specifying a separate ehcache configuration (ehcache.xml) for each CacheManager instance.

    What am I missing here?

    Read the article

  • Several appdomains calling the same unmanaged dll

    - by Mr. T.
    Our .NET 3.5 C# application creates multiple appdomains. Each appdomain loads the same unmanaged 3rd party dll. This dll reads a configuration file upon initialization. If the configuration changes during runtime, the dll must be unloaded and loaded again. This dll is not in our scope to rewrite correctly. Does each appdomain have access to a separate copy of this unmanaged dll, or does Windows keep one copy of the dll and maintain a usage count? If the latter is the case, how do we get each instance of the unmanaged dll to reflect its unique configuration?

    Read the article

  • CURL on PHP 5.2 works in CLI but not in IIS

    - by Ben Reisner
    I have a Windows 2003 Server running IIS with PHP 5.2.8. I'm trying to use cURL, and it works in CLI mode (if I execute php.exe) but it does not seem to be registered when running under IIS. The output of phpinfo() in both CLI and IIS shows the same 'Loaded Configuration File', but under IIS it does not give the cURL info box.

        c:\program files\php\php.exe -i shows
        ...
        Loaded Configuration File => C:\Program Files\PHP\php.ini
        ...
        curl
        cURL support => enabled
        cURL Information => libcurl/7.16.0 OpenSSL/0.9.8i zlib/1.2.3

        phpinfo()
        ...
        Loaded Configuration File => C:\Program Files\PHP\php.ini
        ...

    NOTE: This server also runs PHP 5.3 in c:\program files\php-5.3.0 and cURL does properly work with that installation.

    Read the article

  • Inside a decorator-class, access instance of the class which contains the decorated method

    - by ifischer
    I have the following decorator, which saves a configuration file after a method decorated with @saveconfig is called:

        class saveconfig(object):
            def __init__(self, f):
                self.f = f

            def __call__(self, *args):
                self.f(object, *args)
                # Here I want to access "cfg" defined in pbtools
                print "Saving configuration"

    I'm using this decorator inside the following class. After the method createkvm is called, the configuration object self.cfg should be saved inside the decorator:

        class pbtools():
            def __init__(self):
                self.configfile = open("pbt.properties", 'r+')
                # This variable should be available inside my decorator
                self.cfg = ConfigObj(infile = self.configfile)

            @saveconfig
            def createkvm(self):
                print "creating kvm"

    My problem is that I need to access the object variable self.cfg inside the decorator saveconfig. A first naive approach was to add a parameter to the decorator which holds the object, like @saveconfig(self), but this doesn't work. How can I access object variables of the method host inside the decorator? Do I have to define the decorator inside the same class to get access?

    Read the article

  • FluentNHibernate and PostgreSQL, SchemaMetadataUpdater.QuoteTableAndColumns - System.NotSupportedException

    - by Vyacheslav
    Hello! I'm using Fluent NHibernate with PostgreSQL. Fluent NHibernate is the latest version; the PostgreSQL version is 8.4. My code for creating the ISessionFactory:

        public static ISessionFactory CreateSessionFactory()
        {
            string connectionString = ConfigurationManager.ConnectionStrings["PostgreConnectionString"].ConnectionString;
            IPersistenceConfigurer config = PostgreSQLConfiguration.PostgreSQL82.ConnectionString(connectionString);

            FluentConfiguration configuration = Fluently
                .Configure()
                .Database(config)
                .Mappings(m => m.FluentMappings.Add(typeof(ResourceMap))
                                               .Add(typeof(TaskMap))
                                               .Add(typeof(PluginMap)));

            var nhibConfig = configuration.BuildConfiguration();
            SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig);
            return configuration.BuildSessionFactory();
        }

    When execution reaches the line SchemaMetadataUpdater.QuoteTableAndColumns(nhibConfig); it throws the error: System.NotSupportedException: Specified method is not supported. Help me, please! I really need a solution. Best regards

    Read the article

  • LocationMatch and DAV svn

    - by Homes2001
    Hi, I am trying to make our Subversion repository accessible via multiple URLs. To do so, I was thinking of using the LocationMatch directive. My configuration is:

        <Location ~ "/(svn|repository)">
            DAV svn
            SVNPath /opt/svn
            AuthzSVNAccessFile /etc/subversion/access
        </Location>

    The above configuration does NOT work ... The strange thing is that if I use, for example, this configuration, it works well for both URLs:

        <Location ~ "/(svn|repository)">
            SetHandler server-status
        </Location>

    To me it looks like the combination of DAV svn and LocationMatch does not really work... or am I doing something wrong here?
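
    That impression matches how mod_dav_svn is usually deployed: it wants a literal location so it can map request URLs back to the repository root, and regex locations (<LocationMatch> or <Location ~>) tend to break that mapping. A workaround sketch is simply two explicit blocks pointing at the same repository:

        <Location /svn>
            DAV svn
            SVNPath /opt/svn
            AuthzSVNAccessFile /etc/subversion/access
        </Location>

        <Location /repository>
            DAV svn
            SVNPath /opt/svn
            AuthzSVNAccessFile /etc/subversion/access
        </Location>

    Alternatively, keep one canonical location and Redirect the other URL onto it.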

    Read the article

  • How to repeat a particular execution multiple times

    - by Joshua
    The following snippet generates create/drop SQL for a particular database whenever there is a modification to the JPA entity classes. How do I perform the equivalent of a 'for' operation, wherein the following code can be used to generate SQL for all supported databases (e.g. H2, MySQL, Postgres)? Currently I have to modify db.groupId, db.artifactId and db.driver.version every time to generate the SQL files.

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>hibernate3-maven-plugin</artifactId>
          <version>${hibernate3-maven-plugin.version}</version>
          <executions>
            <execution>
              <id>create schema</id>
              <phase>process-test-resources</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
              <configuration>
                <componentProperties>
                  <persistenceunit>${app.module}</persistenceunit>
                  <drop>false</drop>
                  <create>true</create>
                  <outputfilename>${app.sql}-create.sql</outputfilename>
                </componentProperties>
              </configuration>
            </execution>
            <execution>
              <id>drop schema</id>
              <phase>process-test-resources</phase>
              <goals>
                <goal>hbm2ddl</goal>
              </goals>
              <configuration>
                <componentProperties>
                  <persistenceunit>${app.module}</persistenceunit>
                  <drop>true</drop>
                  <create>false</create>
                  <outputfilename>${app.sql}-drop.sql</outputfilename>
                </componentProperties>
              </configuration>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>org.hibernate</groupId>
              <artifactId>hibernate-core</artifactId>
              <version>${hibernate-core.version}</version>
            </dependency>
            <dependency>
              <groupId>org.slf4j</groupId>
              <artifactId>slf4j-api</artifactId>
              <version>${slf4j-api.version}</version>
            </dependency>
            <dependency>
              <groupId>org.slf4j</groupId>
              <artifactId>slf4j-nop</artifactId>
              <version>${slf4j-nop.version}</version>
            </dependency>
            <dependency>
              <groupId>${db.groupId}</groupId>
              <artifactId>${db.artifactId}</artifactId>
              <version>${db.driver.version}</version>
            </dependency>
          </dependencies>
          <configuration>
            <components>
              <component>
                <name>hbm2cfgxml</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2dao</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2ddl</name>
                <implementation>jpaconfiguration</implementation>
                <outputDirectory>src/main/sql</outputDirectory>
              </component>
              <component>
                <name>hbm2doc</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2hbmxml</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2java</name>
                <implementation>annotationconfiguration</implementation>
              </component>
              <component>
                <name>hbm2template</name>
                <implementation>annotationconfiguration</implementation>
              </component>
            </components>
          </configuration>
        </plugin>
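
    One way to stop editing the POM for every database (a sketch; the driver coordinates and versions are only illustrative) is to put each database's driver coordinates into a Maven profile and pick the profile on the command line:

        <profiles>
          <profile>
            <id>h2</id>
            <properties>
              <db.groupId>com.h2database</db.groupId>
              <db.artifactId>h2</db.artifactId>
              <db.driver.version>1.2.140</db.driver.version>
            </properties>
          </profile>
          <profile>
            <id>mysql</id>
            <properties>
              <db.groupId>mysql</db.groupId>
              <db.artifactId>mysql-connector-java</db.artifactId>
              <db.driver.version>5.1.13</db.driver.version>
            </properties>
          </profile>
          <profile>
            <id>postgres</id>
            <properties>
              <db.groupId>postgresql</db.groupId>
              <db.artifactId>postgresql</db.artifactId>
              <db.driver.version>8.4-701.jdbc4</db.driver.version>
            </properties>
          </profile>
        </profiles>

    Then mvn clean install -Ph2 (or -Pmysql, -Ppostgres) generates the scripts for one database per run; a true "for all databases in one build" would still need one plugin execution per database, or an external loop over the profiles.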

    Read the article
