Search Results

  • Spring.NET custom namespace parser

    - by ListenToRick
    I have a custom parser which looks like this:

      [NamespaceParser(
          Namespace = "http://mysite/schema/cache",
          SchemaLocationAssemblyHint = typeof(CacheNamespaceParser),
          SchemaLocation = "/cache.xsd")]
      public class CacheNamespaceParser : NamespaceParserSupport
      {
          public override void Init()
          {
              RegisterObjectDefinitionParser("cache", new CacheParser());
          }
      }

      public class CacheParser : AbstractSimpleObjectDefinitionParser
      {
          protected override Type GetObjectType(XmlElement element)
          {
              return typeof(CacheDefinition);
          }

          protected override void DoParse(XmlElement element, ObjectDefinitionBuilder builder)
          {
          }

          protected override bool ShouldGenerateIdAsFallback
          {
              get { return true; }
          }
      }

    In the web.config I have the following configuration:

      <spring>
        <parsers>
          <parser type="Spring.Data.Config.DatabaseNamespaceParser, Spring.Data" />
          <parser type="App.Web.CacheNamespaceParser, WebApp" />
        </parsers>
      </spring>

    When I run the project I get the following error:

      An error occurred creating the configuration section handler for spring/parsers:
      Invalid resource name. Name has to be in 'assembly:<assemblyName>/<namespace>/<resourceName>' format.

    I put a breakpoint in the CacheNamespaceParser Init method and it is called. If I remove the
    CacheNamespaceParser entry from the web.config, all is well! Any ideas what's wrong?

  • C#: why does my unit test have this strange behaviour?

    - by 5YrsLaterDBA
    I have a class to encrypt the connection string:

      public class SKM
      {
          private string connStrName = "AndeDBEntities";

          internal void encryptConnStr()
          {
              if (isConnStrEncrypted())
                  return;
              ...
          }

          private bool isConnStrEncrypted()
          {
              bool status = false;
              // Open app.config of executable.
              System.Configuration.Configuration config =
                  ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
              // Get the connection string from the app.config file.
              string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;
              status = !(connStr.Contains("provider"));
              Log.logItem(LogType.DebugDevelopment, "isConnStrEncrypted",
                  "SKM::isConnStrEncrypted()", "isConnStrEncrypted=" + status);
              return status;
          }
      }

    The above code works fine in my application, but not in my unit test project. In the unit
    test project I test the encryptConnStr() method, which calls the isConnStrEncrypted()
    method. An exception (null reference) is then thrown at this line:

      string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;

    I have to index by position like this to make the unit test pass:

      string connStr = config.ConnectionStrings.ConnectionStrings[0].ConnectionString;

    I remember it worked several days ago, at the time I added the unit test, but now it gives
    me an error. The unit test is not integrated with our daily auto build yet. We only have
    ONE connection string. It works in the product but not in the unit test, and I don't know
    why. Can anybody explain this to me?

  • Why is there so much XML in Java these days?

    - by BD at Rivenhill
    This is really more of a philosophy/design issue. I did some work in Java back in the
    mid-'90s and again in the early 2000s, and now I'm coming back to it after spending a lot
    of time in C/C++. It seems there was an explosion of XML dependency while I was gone.
    Major build tools like Ant and Maven depend on XML for their configuration, but I'm
    actually more concerned with all the frameworks, such as Spring, Hibernate, etc.

    My experience is that powerful supporting libraries like these are where a developer can
    really get leverage for building programs with lots of features without writing a lot of
    code, but it really seems like I'm getting one language for the price of two here: I write
    a bunch of Java classes, and then I also write a bunch of XML files to glue them together.
    The things that get done in the XML are things I can see reasonable ways of doing in
    straight code without the middleman, and they don't really seem to be treated exactly like
    configuration files: they change rarely, and they end up getting committed to source code
    control like the Java code itself, yet they are distributed with the resulting application
    and need to be unpacked and installed on the classpath in order for the application to
    work.

    I'm working with server applications that are not web-based, so maybe the domain is a bit
    different from what most people are doing, but I just can't help feeling that I must be
    doing something wrong here. Can someone point me to a good source of information on why
    these design choices were made and what problems they are meant to solve, so that I can
    analyze my own experiences in this context?

  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on a LAMP stack.

    I have a MySQL table containing some records. These are queried by a simple SELECT in the
    Joomla article, as pasted below. This specific Joomla website has caching turned on in
    Joomla's Global Configuration. I need to randomize the order in which I display the result
    set each time the page is loaded. Regular PHP/MySQL would offer me two approaches for this:

    1. use ORDER BY RAND(), or any of a number of methods that let a SELECT query return
       reasonably random results;
    2. once PHP gets the result of the SELECT into an array, shuffle the array to get a
       reasonably random order of array items.

    However, as this Joomla instance has caching turned ON in its Global Configuration, either
    of the above approaches fails. The first time I load the page the order is randomized, but
    further reloads do not change the order, as the page is delivered from the cache. The
    instant the cache is disabled, both approaches (shuffle / ORDER BY RAND) work perfectly.

    What am I missing? How do I override the global cache for this specific article? A very
    simple requirement, met by both PHP and MySQL reasonably well, is blocked by the Joomla
    cache that I cannot turn off. The PHP that returns results from the database:

      $db = JFactory::getDBO();
      $select = "SELECT id FROM jos_mytable;"; // order by RAND()
      $db->setQuery($select);
      echo $db->getQuery(); // Show me the query!
      $rows = $db->loadObjectList();
      // shuffle($rows);
      foreach ($rows as $row) {
          echo "$row->id";
      }

  • Proper structure for dependency injection (using Guice)

    - by David B.
    I would like some suggestions and feedback on the best way to structure dependency
    injection for a system with the structure described below. I'm using Guice and thus would
    prefer solutions centered around its annotation-based declarations, not XML-heavy
    Spring-style configuration.

    Consider a set of similar objects, Ball, Box, and Tube, each dependent on a Logger,
    supplied via the constructor. (This might not be important, but all four classes happen to
    be singletons -- of the application, not Gang-of-Four, variety.) A ToyChest class is
    responsible for creating and managing the three shape objects. ToyChest itself is not
    dependent on Logger, aside from creating the shape objects, which are. The ToyChest class
    is instantiated as an application singleton in a Main class.

    I'm confused about the best way to construct the shapes in ToyChest. I either (1) need
    access to a Guice Injector instance already attached to a Module binding Logger to an
    implementation, or (2) need to create a new Injector attached to the right Module. Option
    (1) is accomplished by adding an @Inject Injector injector field to ToyChest, but this
    feels weird because ToyChest doesn't actually have any direct dependencies -- only those
    of the children it instantiates. For (2), I'm not sure how to pass in the appropriate
    Module.

    Am I on the right track? Is there a better way to structure this? The answers to this
    question mention passing in a Provider instead of using the Injector directly, but I'm not
    sure how that is supposed to work.

    EDIT: Perhaps a simpler question is: when using Guice, where is the proper place to
    construct the shape objects? ToyChest will do some configuration with them, but I suppose
    they could be constructed elsewhere. ToyChest (as the container managing them), and not
    Main, just seems to me like the appropriate place to construct them.
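
    As a concrete illustration of the Provider approach the linked answers mention -- a
    minimal hedged sketch, with the Logger and shape bodies invented for illustration --
    ToyChest can take Provider<Ball> and friends instead of the Injector, and Guice will
    synthesize those providers for any type it knows how to build:

      import com.google.inject.AbstractModule;
      import com.google.inject.Guice;
      import com.google.inject.Inject;
      import com.google.inject.Injector;
      import com.google.inject.Provider;
      import com.google.inject.Singleton;

      class Logger { void log(String m) { System.out.println(m); } }

      class Ball { @Inject Ball(Logger log) { log.log("made a Ball"); } }
      class Box  { @Inject Box(Logger log)  { log.log("made a Box"); } }

      @Singleton
      class ToyChest {
          private final Provider<Ball> balls;
          private final Provider<Box> boxes;

          // Guice synthesizes a Provider<T> for any type it can build, so
          // ToyChest creates shapes on demand without touching the Injector.
          @Inject
          ToyChest(Provider<Ball> balls, Provider<Box> boxes) {
              this.balls = balls;
              this.boxes = boxes;
          }

          Ball newBall() { return balls.get(); }
      }

      public class Main {
          public static void main(String[] args) {
              Injector injector = Guice.createInjector(new AbstractModule() {
                  @Override protected void configure() {
                      bind(Logger.class).in(Singleton.class);
                  }
              });
              injector.getInstance(ToyChest.class).newBall();
          }
      }

    The effect is the same as injecting the Injector, but the things ToyChest pulls on are
    visible in its constructor signature.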

  • How to debug a JBoss out-of-memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs
    for a while, it seems to use memory as intended by the startup configuration. However, it
    seems that when some unknown user action is taken (or the log file grows to a certain
    size) in the sole web application JBoss is serving, memory increases dramatically and
    JBoss freezes. When JBoss freezes, it is difficult to kill the process or do anything
    because of the low memory. When the process is finally killed via a -9 signal and the
    server is restarted, the log file is very small and only contains output from the startup
    of the newly started process, and no information on why the memory increased so much. This
    is why it is so hard to debug: server.log does not have information from the killed
    process. The log is set to grow to 2 GB, and the log file for the new process is only
    about 300 KB, though it grows properly during normal memory circumstances.

    This is the JBoss configuration:

      JBoss (MX MicroKernel) 4.0.3
      JDK 1.6.0 update 22
      PermSize=512m
      MaxPermSize=512m
      Xms=1024m
      Xmx=6144m

    This is basic info on the system:

      Operating system: CentOS Linux 5.5
      Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
      Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    This is good example information on the system during normal pre-freeze conditions, a few
    minutes after the jboss service starts:

      Running processes: 183
      CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
      CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
      Real memory: 17.38 GB total, 2.46 GB used
      Virtual memory: 19.59 GB total, 0 bytes used
      Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:

      Running processes: 225
      CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
      CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
      Real memory: 17.38 GB total, 17.18 GB used
      Virtual memory: 19.59 GB total, 706.29 MB used
      Local disk space: 113.37 GB total, 11.89 GB used
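
    One standard way to capture evidence before the freeze (assuming the Java heap, rather
    than native memory, is what's being exhausted): have the JVM dump the heap at the moment
    of the OutOfMemoryError and log GC activity, then inspect the dump offline in a heap
    analyzer. A hedged sketch of the flags, added wherever JAVA_OPTS is set (e.g. run.conf);
    the paths are placeholders:

      # JDK 6 HotSpot flags; dump and log paths are placeholders
      JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError \
                 -XX:HeapDumpPath=/var/log/jboss/dumps \
                 -verbose:gc -Xloggc:/var/log/jboss/gc.log"

    Since the 73% IO figure at freeze time also fits heavy swapping, comparing the gc.log
    timeline against vmstat output from the same window can help separate heap exhaustion
    from native memory pressure.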

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual
    SourceSafe and nothing else. They have explicitly expressed a desire to implement a more
    rigid and process-driven environment, as their application is in production and they have
    future releases to consider. The particular areas I'm trying to cover are:

      - Configuration management (e.g., source control)
      - Change management (workflow and documentation for change requests, tasks)
      - Release management (builds and deployments)
      - Incident and problem management (issues and bugs)
      - Document management (similar to source control, but available via web)
      - Code analysis constraints on check-ins
      - A testing framework
      - Reporting
      - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and
    the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN
    subscription. Those problems can be overcome, but before I tell my customer to go the TFS
    route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I
    know Subversion is often suggested for its configuration management/source control, but
    what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/
    NAnt/something else satisfy all of these requirements? What tools do I need to include in
    my evaluation? Or should I just bite the bullet and go with TFS, since we're already
    invested in the Microsoft stack?

  • LINQ EF not saving to database...

    - by Keith Barrows
    I guess this is a continuation of the last question I asked:
    http://stackoverflow.com/questions/2587542/bulk-insert-and-update-with-ado-net-entity-framework.
    I am not getting any errors while doing inserts, yet no data actually goes into my DB. My
    DB is an SDF file (SQL CE). Any ideas what to check? My app.config looks like:

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <configSections>
        </configSections>
        <connectionStrings>
          <add name="Lab_Use_Billing.Properties.Settings.LabUseConnectionString"
               connectionString="Data Source=|DataDirectory|\Models\LabUse.sdf"
               providerName="Microsoft.SqlServerCe.Client.3.5" />
          <add name="LabUseEntities"
               connectionString="metadata=res://*/Models.LabUseEntities.csdl|res://*/Models.LabUseEntities.ssdl|res://*/Models.LabUseEntities.msl;
                 provider=System.Data.SqlServerCe.3.5;
                 provider connection string=&quot;Data Source=|DataDirectory|\Models\LabUse.sdf&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>
      </configuration>

    TIA

  • Hadoop File Read

    - by user3684584
    This is the Hadoop distributed cache word-count example on Hadoop 2.2.0. I copied a file
    into the HDFS filesystem to be used inside the setup of the mapper class:

      protected void setup(Context context) throws IOException, InterruptedException {
          Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration());
          cacheData = new HashMap();
          for (Path urifile : uris) {
              try {
                  BufferedReader readBuffer1 =
                      new BufferedReader(new FileReader(urifile.toString()));
                  String line;
                  while ((line = readBuffer1.readLine()) != null) {
                      System.out.println("**************" + line);
                      cacheData.put(line, line);
                  }
                  readBuffer1.close();
              } catch (Exception e) {
                  System.out.println(e.toString());
              }
          }
      }

    Inside the driver main class:

      Configuration conf = new Configuration();
      String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
      if (otherArgs.length != 3) {
          System.err.println("Usage: wordcount <in> <out> <cachefile>");
          System.exit(2);
      }
      Job job = new Job(conf, "word_count");
      job.setJarByClass(WordCount.class);
      job.setMapperClass(Map.class);
      job.setReducerClass(Reduce.class);
      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);
      FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
      Path outputpath = new Path(otherArgs[1]);
      outputpath.getFileSystem(conf).delete(outputpath, true);
      FileOutputFormat.setOutputPath(job, outputpath);
      System.out.println("CachePath****************" + otherArgs[2]);
      DistributedCache.addCacheFile(new URI(otherArgs[2]), job.getConfiguration());
      System.exit(job.waitForCompletion(true) ? 0 : 1);

    But I am getting this exception:

      java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache
      (No such file or directory)

    So the cache functionality is not working properly. Any idea?
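
    One thing that stands out in that stack trace (an observation, not a confirmed diagnosis):
    the path handed to FileReader still carries the file: scheme, and java.io cannot open
    "file:/home/...". A hedged sketch of the setup() with the scheme stripped via
    toUri().getPath():

      import java.io.BufferedReader;
      import java.io.File;
      import java.io.FileReader;
      import org.apache.hadoop.filecache.DistributedCache;
      import org.apache.hadoop.fs.Path;

      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
          Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
          for (Path p : cached) {
              // p.toString() is "file:/home/...", which FileReader rejects;
              // p.toUri().getPath() is the bare "/home/..." local path
              File local = new File(p.toUri().getPath());
              BufferedReader reader = new BufferedReader(new FileReader(local));
              try {
                  String line;
                  while ((line = reader.readLine()) != null) {
                      cacheData.put(line, line); // as in the original setup()
                  }
              } finally {
                  reader.close();
              }
          }
      }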

  • File writing problem on Windows 7 Professional in C#

    - by Ummar
    I have a C# application that writes some data to a file. The problem I am facing on
    Windows 7 Professional is that when I write data to C:\ProgramData, an access-denied
    exception is thrown. If I log in from an administrator account the issue vanishes, but if
    I log in from some other account that has administrative privileges, the issue comes up.
    This issue only reproduces on Windows 7 Professional; it works fine on all other flavors
    of Windows 7 as well as Windows Vista.

      try
      {
          XmlTextWriter myXmlTextWriter = new XmlTextWriter("Configuration.xml", null);
          myXmlTextWriter.Formatting = Formatting.Indented;
          myXmlTextWriter.WriteStartDocument(true);
          myXmlTextWriter.WriteDocType("ApplicationConfigurations", null, null, null);
          ////myXmlTextWriter.WriteComment("This file represents another fragment of a book store inventory database");
          myXmlTextWriter.WriteStartElement("Configuration");
          myXmlTextWriter.WriteElementString("firstElement", pe.ToString());
          myXmlTextWriter.WriteEndElement();
          myXmlTextWriter.WriteEndDocument();
          myXmlTextWriter.Flush();
          myXmlTextWriter.Close();
      }
      catch (Exception e)
      {
          // Exception is thrown on Win7 Professional
      }

  • How do I manage multiple development branches in Git?

    - by Ian
    I have 5 branches of one system -- let's call them master, London, Birmingham, Manchester
    and demo. These differ only in a configuration file, and each has its own set of graphics
    files. When I do some development, I create a temp branch from master, named after the
    feature, and work on that. When ready to merge, I check out master and git merge feature
    to bring in my work. That appears to work just fine.

    Now I need to get my changes into the other branches, without losing the differences
    between them that are already there. How can I do that? I have been having no end of
    problems with Birmingham getting London's graphics, and with conflicts within the
    configuration file.

    When a branch is finally correct, I push it up to a depot and pull each branch down to a
    Linux box for final testing. From there, the release into production uses rsync (set to
    ignore the .git repository itself). This phase works just fine also. I am the only
    developer at the moment, but I need to get the process solid before inviting assistance :)
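
    One technique that fits this shape of problem (a sketch, assuming a reasonably recent Git;
    the paths stand in for the real configuration file and graphics directory): mark the
    per-city files with the built-in "ours" merge driver, so merging master into a city branch
    takes master's code but keeps that branch's own configuration and graphics.

      # .gitattributes, committed on each city branch
      config/site.conf  merge=ours
      graphics/*        merge=ours

      # one-time setup in each clone, then merge as usual
      git config merge.ours.driver true
      git checkout London
      git merge master    # code merges; site.conf and graphics/ stay London's

    The driver only fires when both sides have touched a file, so genuinely new graphics added
    on master still arrive on the city branches.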

  • Is it a good practice for a .js file to rely on variables declared in the including HTML?

    - by Bozho
    In short:

      <script type="text/javascript">
          var root = '${config.root}';
          var userLanguage = '${config.language}';
          var userTimezone = '${config.timezone}';
      </script>
      <script type="text/javascript" src="js/scripts.js"></script>

    And then, in scripts.js, rely on these variables:

      if (userLanguage == 'en') { .. }

    The ${..} is simply a placeholder for a value in the script that generates the page. It
    can be PHP, JSP, ASP, whatever. The point is that it is dynamic, and hence can't be part
    of the .js file (which is static). So, is it OK for the static JavaScript file to rely on
    these externally defined configuration variables? (They are mainly configuration, of
    course.) Or is it preferable to serve the .js file dynamically as well (i.e. make it a
    .php / .jsp, with the proper Content-Type) and have these values defined in there?

  • Nginx treats PHP as binary

    - by Think Floyd
    We are running Nginx + FastCGI as the backend for our Drupal site. Everything seems to
    work fine except for this one URL:

      http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php

    (We use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to
    upload an image.) When we were using Apache, everything worked fine. However, Nginx treats
    the above URL as binary and tries to download it. (We've verified that the file the URL
    points to is a valid PHP file.) Any idea what could be wrong here? I think it's something
    to do with the Nginx configuration, but I'm not entirely sure what. Any help is greatly
    appreciated. Here's the snippet from the Nginx configuration file:

      root /var/www/;
      index index.php;

      if (!-e $request_filename) {
          rewrite ^/(.*)$ /index.php?q=$1 last;
      }
      error_page 404 index.php;

      location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
          deny all;
      }

      location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
          access_log off;
          expires 7d;
      }

      location ~* ^.+\.(css|js)$ {
          access_log off;
          expires 7d;
      }

      location ~ .php$ {
          include /etc/nginx/fcgi.conf;
          fastcgi_pass 127.0.0.1:8888;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_param QUERY_STRING $query_string;
          fastcgi_param REQUEST_METHOD $request_method;
          fastcgi_param CONTENT_TYPE $content_type;
          fastcgi_param CONTENT_LENGTH $content_length;
      }

      location ~ /\.ht {
          deny all;
      }
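
    One detail in that snippet worth a second look (an observation, not a confirmed
    diagnosis): the PHP location uses an unescaped dot, so "~ .php$" matches more loosely than
    intended, and if a request ever falls through to static handling, Nginx serves the .php
    file with a default MIME type, which browsers download. A hedged tweak is to escape the
    dot so the FastCGI block reliably claims PHP URIs:

      # sketch: same block as above, with the dot escaped
      location ~ \.php$ {
          include /etc/nginx/fcgi.conf;
          fastcgi_pass 127.0.0.1:8888;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      }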

  • Subdomain Routing Rules (using chaining) Broke after upgrading to Zend Framework 1.9.5, but only for

    - by Dan
    I asked a similar question months ago (see "How do I write Routing Chains for a Subdomain
    in Zend Framework in a routing INI file?"), about how to write chaining rules in an
    app.ini format. The answer to that question worked wonderfully! Now, however, I have
    upgraded to the latest version of Zend Framework, 1.9.5 (I needed to upgrade for another
    issue), and my subdomains no longer work!

    To clarify: if I visit subdomain.domain.com, it does not recognize my rule. However, if I
    visit subdomain.domain.com/somepage/, it does recognize my routing rule. Here is my code:

      ;; the following is apparently being ignored, and does not work
      routes.manager.type = "Zend_Controller_Router_Route_Hostname"
      routes.manager.route = "manager.sitename.com"
      routes.manager.defaults.module = "manager"

      ;; this is not being ignored and works!
      routes.manager.chains.settings.type = "Zend_Controller_Router_Route_Static"
      routes.manager.chains.settings.route = "/settings"
      routes.manager.chains.settings.defaults.controller = "manager"
      routes.manager.chains.settings.defaults.action = "settings"

    So for example, if I go to manager.sitename.com, it just redirects to my default index and
    controller (it does not access the module; $this->getRequest()->getModuleName() is blank).
    However, if I go to manager.sitename.com/settings, the page comes up! This app.ini
    configuration works fine in ZF 1.7.8, but since I upgraded to 1.9.5 it no longer works.

    I have tried adding routes.manager.defaults.controller = "manager" and
    routes.manager.defaults.action = "index" to my configuration as well, but this didn't
    work. There is not much out there on the internet about chaining and app.ini with Zend
    Framework. Any help on this issue would be greatly appreciated.

  • Windows "forms" authentication - <deny users="?"> redirecting to foreign page!

    - by Erik5388
    Like the title states, I have a web.config file that looks like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.web>
          <compilation debug="true" targetFramework="4.0" />
          <authentication mode="Forms">
            <forms name="login" protection="All" timeout="30" loginUrl="login" defaultUrl="~/">
              <credentials passwordFormat="Clear">
                <user name="admin" password="password" />
              </credentials>
            </forms>
          </authentication>
          <authorization>
            <deny users="?" />
          </authorization>
        </system.web>
      </configuration>

    I want it to do exactly what it says it should do: deny all anonymous users who try to
    enter the site. It works; however, it redirects to an "Account/Login?ReturnUrl=%2flogin"
    URL I have never heard of... Is there a place I can change this?
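
    For what it's worth, /Account/Login?ReturnUrl=... is not something <forms loginUrl>
    produces; it is the hard-coded default of the ASP.NET WebPages/WebMatrix security
    assemblies, which, when present in bin, override the forms-authentication login URL. If
    that's the situation here, a commonly cited fix (worth verifying against the project's
    actual references) is to switch off their auto-start behavior in appSettings:

      <!-- only relevant if WebMatrix.WebData / System.Web.WebPages are referenced -->
      <appSettings>
        <add key="autoFormsAuthentication" value="false" />
        <add key="enableSimpleMembership" value="false" />
      </appSettings>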

  • How do I exclude a properties file when deploying?

    - by Huy
    I want to include this file when running locally, but exclude it when deploying. I tried
    the following, but it doesn't seem to work:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.3</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>jar</goal>
            </goals>
            <configuration>
              <excludes>
                <exclude>filename.properties</exclude>
              </excludes>
            </configuration>
          </execution>
        </executions>
      </plugin>
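
    A hedged guess at why the execution-bound configuration misbehaves: jar packaging already
    binds a built-in default-jar execution to the package phase, and a second <execution> of
    the jar goal runs alongside it rather than replacing it. Moving the configuration up to
    the plugin level lets the default execution pick it up, and a ** pattern matches the file
    wherever it sits under target/classes:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <excludes>
            <!-- patterns are relative to the classes directory -->
            <exclude>**/filename.properties</exclude>
          </excludes>
        </configuration>
      </plugin>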

  • Apache's AuthDigestDomain and Rails Distributed Asset Hosts

    - by Jared
    I've got a server I'm in the process of setting up, and I'm running into an Apache
    configuration problem that I can not get around. I've got Apache 2.2 and Passenger serving
    a Rails app with distributed asset hosting. This is the feature of Rails that lets you
    serve your static assets from assets0.example.com, assets1, assets2, and so on. The site
    needs to be passworded until launch. I've set up HTTP authentication on the site using
    Apache's mod_auth_digest. In my configuration I'm attempting to use the AuthDigestDomain
    directive to allow access to each of the asset URLs. The problem is, it doesn't seem to be
    working. I get the initial prompt for the password when I load the page, but then the
    first time it loads an asset from one of the asset URLs, I get prompted a 2nd, 3rd, or 4th
    time. In some browsers, I get prompted for every single resource on the page. I'm hoping
    that this is only a problem of how I'm specifying my directives and not a limitation of
    authorization in Apache itself. See the edited auth section below:

      <Location />
          AuthType Digest
          AuthName "Restricted Site"
          AuthUserFile /etc/httpd/passwd/passwords
          AuthGroupFile /dev/null
          AuthDigestDomain / http://assets0.example.com/ http://assets1.example.com/ http://assets2.example.com/ http://assets3.example.com/
          require valid-user
          order deny,allow
          allow from all
      </Location>

  • mvn clean install using Java 1.5 or 1.6

    - by bruce
    When I do mvn clean install, I get this error:

      annotations are not supported in -source 1.3
      (try -source 1.5 to enable annotations)

    But where do I put this -source 1.5 option? I tried all the permutations of mvn clean
    install I could think of and couldn't get it to work. So I tried configuring compilation
    in my pom, like this:

      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.0.2</version>
            <configuration>
              <source>1.6</source>
              <target>1.6</target>
            </configuration>
          </plugin>
        </plugins>
      </build>

    But that didn't work either. What am I missing? Thanks!
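
    Two hedged things to check if the snippet above still doesn't take effect: run
    mvn help:effective-pom to see what source/target the compiler plugin actually resolves to
    (a parent POM or a pluginManagement section can override it), and note that the same
    setting can be expressed as properties that the compiler plugin reads by default:

      <!-- equivalent to the <source>/<target> plugin configuration -->
      <properties>
        <maven.compiler.source>1.6</maven.compiler.source>
        <maven.compiler.target>1.6</maven.compiler.target>
      </properties>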

  • Strange bug with PHP on Windows 7

    - by chessweb
    This is the configuration: Windows 7 Home Premium, XAMPP 1.7.3 (Apache 2.2.14, PHP 5.3.1),
    Firefox 3.6. This is the PHP code in a file named test.php in htdocs:

      <?php echo('04556-8978765'); ?>

    On http://localhost/test.php I would expect to see the string 04556-8978765 in the
    browser. That is not what happens, though. The string appears for a short time and then
    disappears altogether. Firebug shows an empty body tag. However, when I look at the page
    source, the string is there all right. When I change the string in the echo statement to
    e.g. 4556-8978765, everything is fine. Internet Explorer 8 does not show this strange
    behavior. I could not reproduce this with the same Apache/PHP/Firefox configuration on
    Windows XP.

    '04556-8978765' is by no means unique. The pair '02065-96047' and '02065-9604' behave
    exactly the same way. Can anybody reproduce this and offer an explanation as to what is
    going on?

    PS: If you cannot see the string '04556-8978765' in the echo statement above, look at this
    post with IE8.

  • Generate Spring bean definition from a Java object

    - by joeslice
    Let's say I have a bean defined in Spring:

      <bean id="neatBean" class="com..." abstract="true">...</bean>

    We have many clients, each of which has a slightly different configuration for its
    neatBean. The old way we did it was to have a new file for each client (e.g.,
    clientX_NeatFeature.xml) that contained a bunch of beans for this client (these are
    hand-edited and part of the code base):

      <bean id="clientXNeatBean" parent="neatBean">
        <property name="whatever" value="something" />
      </bean>

    Now I want a UI where we can edit and redefine a client's neatBean on the fly. My question
    is: given a neatBean, and a UI that can 'override' properties of this bean, what would be
    a straightforward way to serialize this to an XML file, as we do [manually] today? For
    example, if the user set property whatever to "17" for client Y, I'd want to generate:

      <bean id="clientYNeatBean" parent="neatBean">
        <property name="whatever" value="17" />
      </bean>

    Note that moving this configuration to a different format (e.g., a database, or XML with a
    different schema) is an option, but not really an answer to the question at hand.
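
    Since Spring's XML machinery reads bean definitions but (as far as I know) ships nothing
    that writes them back out, one plausible route is a small hand-rolled serializer over the
    UI's override map -- a hedged sketch, with all class and property names invented for
    illustration:

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Hedged sketch: emit a <bean parent="..."> child definition from a map
      // of per-client property overrides captured by the UI.
      public class BeanXmlWriter {

          public static String childBeanXml(String id, String parent, Map<String, String> overrides) {
              StringBuilder xml = new StringBuilder();
              xml.append("<bean id=\"").append(id)
                 .append("\" parent=\"").append(parent).append("\">\n");
              for (Map.Entry<String, String> e : overrides.entrySet()) {
                  xml.append("  <property name=\"").append(e.getKey())
                     .append("\" value=\"").append(e.getValue()).append("\"/>\n");
              }
              return xml.append("</bean>\n").toString();
          }

          public static void main(String[] args) {
              Map<String, String> overrides = new LinkedHashMap<String, String>();
              overrides.put("whatever", "17");
              // prints the clientYNeatBean definition from the question
              System.out.println(childBeanXml("clientYNeatBean", "neatBean", overrides));
          }
      }

    Attribute-value escaping is elided here; a real version should build the document with an
    XML API (javax.xml.transform, JDOM, etc.) rather than string concatenation.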

  • Linking error building a 64-bit Qt app on a 32-bit XP machine

    - by photo_tom
    I'm trying to build a 64-bit version of my application (and yes, I really do need the
    memory) on my 32-bit XP dev box, for production testing on our Vista 64 server.
    Previously, I built the Qt 4.6.2 DLLs in 64-bit mode without any errors; that step went
    very smoothly. Just to get started on production, I'm trying to rebuild Qt's Star Delegate
    demo in 64-bit mode. I converted the 32-bit app to 64-bit by changing the application
    configuration and adjusting the libraries to the 64-bit versions. Now I get the following
    error when I link:

      1>------ Build started: Project: stardelegate, Configuration: Release x64 ------
      1>Linking...
      1>MSVCRT.lib(crtexew.obj) : error LNK2001: unresolved external symbol WinMain
      1>release64\stardelegate.exe : fatal error LNK1120: 1 unresolved externals

    Suggestions?

    Edit: After some more searching, I discovered that if I link as a console app, it works
    and runs -- but not as a Windows app. And I don't have this problem in 32-bit mode.

  • What's a good technique for building and running many similar unit tests?

    - by jcollum
    I have a test setup where I have many very similar unit tests to run. For example, there
    are about 40 stored procedures that need to be checked for existence in the target
    environment. However, I'd like all the tests to be grouped by their business unit, so
    there'd be 40 instances of a very similar TestMethod in 40 separate classes. Kinda lame.
    One other thing: each group of tests needs to be in its own solution. So Business Unit A
    will have a solution called Tests.BusinessUnitA.

    I'm thinking I can set this all up by passing a configuration object (with the name of the
    stored proc to check, among other things) to a TestRunner class. The problem is that I'm
    losing the atomicity of my unit tests: I wouldn't be able to run just one of the tests,
    I'd have to run all the tests in the TestRunner class. This is what the code looks like at
    this time. Sure, it's nice and compact, but if Test 8 fails, I have no way of running just
    Test 8.

      TestRunner runner = new TestRunner(config, this.TestContext);
      var runnerType = typeof(TestRunner);
      var methods = runnerType.GetMethods()
          .Where(x => x.GetCustomAttributes(typeof(TestMethodAttribute), false)
                       .Count() > 0)
          .ToArray();

      foreach (var method in methods)
      {
          method.Invoke(runner, null);
      }

    So I'm looking for suggestions for making a group of unit tests that take in a
    configuration object but won't require me to write many, many TestMethods. This looks
    like it might require code generation, but I'd like to solve it without that.

  • Maven exec bash script and save output as property

    - by djechlin
    I'm wondering if there exists a Maven plugin that runs a bash script and saves its output
    into a property. My actual use case is to get the git source version. I found one plugin
    available online, but it didn't look well tested, and it occurred to me that a plugin as
    simple as the one in the title of this post is all I need. The plugin would look something
    like:

      <plugin>
        <!-- hypothetical: maven-run-script-plugin -->
        <phase>process-resources</phase> <!-- not sure where most intelligent -->
        <configuration>
          <script>git rev-parse HEAD</script> <!-- must run from build directory -->
          <targetProperty>properties.gitVersion</targetProperty>
        </configuration>
      </plugin>

    Of course it's necessary to make sure this happens before the property is needed; in my
    case I want to use this property to process a source file.
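
    For the specific git-commit use case, one existing plugin that behaves roughly this way
    (worth verifying against its current documentation) is buildnumber-maven-plugin: it asks
    the SCM recorded in the POM's <scm> section for the current revision -- the commit hash,
    for git -- and exposes it as ${buildNumber}. A sketch:

      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>buildnumber-maven-plugin</artifactId>
        <version>1.2</version>
        <executions>
          <execution>
            <phase>validate</phase>
            <goals>
              <goal>create</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <!-- don't fail the build when running outside a checkout -->
          <revisionOnScmFailure>unknown</revisionOnScmFailure>
        </configuration>
      </plugin>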

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

      JUnitCore core = new JUnitCore();
      core.addListener(new RunListener());
      core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently
    hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test
    programmatically (as above) but pass it a database connection that I create in the Java
    class that runs the test, instead of hardcoding it within the JUnit class. Basically
    something like:

      JUnitCore core = new JUnitCore();
      core.addListener(new RunListener());
      core.addParameters(java.sql.Connection); // hypothetical API
      core.run(classToRun);

    Then, within the classToRun:

      @Test
      public void test1(Connection dbConnection) {
          Statement st = dbConnection.createStatement();
          ResultSet rs = st.executeQuery("select total from dual");
          rs.next();
          String myTotal = rs.getString("TOTAL");
          // btw, my tests are Selenium test cases :)
          selenium.isTextPresent(myTotal);
      }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running
    the same test case multiple times with differing values. I want all of my test cases to
    share a database connection that I pass in through a configuration file to my Java client,
    which then runs those test cases (also passed in through the configuration file). Is this
    possible?

    P.S. I understand this seems like an odd way of doing things.
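
    One hedged way to get the same effect with stock JUnit 4, with no parameter passing at
    all: have the launcher publish the connection settings as system properties before
    core.run(), and build the shared connection once in a @BeforeClass hook. The property
    names and JDBC URL below are invented for illustration:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import org.junit.AfterClass;
      import org.junit.BeforeClass;
      import org.junit.Test;
      import org.junit.runner.JUnitCore;

      public class DbTests {

          private static Connection db; // shared by every test in this class

          @BeforeClass
          public static void openConnection() throws Exception {
              // read whatever the launcher placed into system properties
              db = DriverManager.getConnection(
                      System.getProperty("test.db.url"),
                      System.getProperty("test.db.user"),
                      System.getProperty("test.db.password"));
          }

          @AfterClass
          public static void closeConnection() throws Exception {
              db.close();
          }

          @Test
          public void test1() throws Exception {
              // use db here, e.g. db.createStatement().executeQuery(...)
          }

          // launcher side -- the class that runs the suite programmatically:
          public static void main(String[] args) {
              System.setProperty("test.db.url", "jdbc:oracle:thin:@host:1521:sid");
              System.setProperty("test.db.user", "scott");
              System.setProperty("test.db.password", "tiger");
              new JUnitCore().run(DbTests.class);
          }
      }

    Since @BeforeClass runs once per class, all test cases share the one connection, and the
    values still originate in the launcher's configuration file.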

  • How do I conditionally compile C code snippets to my Perl module?

    - by mobrule
    I have a module that will target several different operating systems and configurations.
    Sometimes some C code can make this module's task a little easier, so I have some C
    functions that I would like to bind in. I don't have to bind the C functions -- I can't
    guarantee that the end user even has a C compiler, for instance, and it's generally not a
    problem to fail over gracefully to a pure-Perl way of accomplishing the same thing -- but
    it would be nice if I could call the C functions from the Perl script.

    Still with me? Here's another tricky part. Just about all of the C code is system
    specific: a function written for Windows won't compile on Linux and vice versa, and the
    function that does a similar thing on Solaris will look totally different.

      #include <some/Windows/headerfile.h>
      int foo_for_Windows_c(int a, double b)
      {
          do_windows_stuff();
          return 42;
      }

      #include <path/to/linux/headerfile.h>
      int foo_for_linux_c(int a, double b)
      {
          do_linux_stuff(7);
          return 42;
      }

    Furthermore, even for native code that targets the same system, it's possible that only
    some of it can be compiled on any particular configuration.

      #include <some/headerfile/that/might/not/even/exist.h>
      int bar_for_solaris_c(int a, double b)
      {
          call_solaris_library_that_might_be_installed_here(11);
          return 19;
      }

    But ideally we could still use the C functions that would compile under that
    configuration. So my questions are:

      - How can I compile C functions conditionally (compile only the code that is appropriate
        for the current value of $^O)?
      - How can I compile C functions individually (some functions might not compile, but we
        still want to use the ones that can)?
      - Can I do this at build time (while the end user is installing the module) or at run
        time (with Inline::C, for example)? Which way is better?
      - How would I tell which functions were successfully compiled and are available for use
        from Perl?

    All thoughts appreciated!
