Search Results

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design decision to implement most of the business logic in CLR stored procedures? I have read a lot about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line, apply some complex business rules based on those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and update records in the database with the results of this calculation. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer. The Data Layer would access the database. The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer. The GUI Layer would be the means of communication between the Logic Layer and the user. Now, thinking this design over, I can see that most of the business rules could instead be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and read back the results. I see some advantages and disadvantages to this solution.

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All the data is processed at once, possibly exploiting parallelism and an optimal execution plan.

    Cons:
    - The business logic gets scattered: some parts live here, some parts there.
    - It is a questionable design decision that may run into unknown problems.
    - It is difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
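    For concreteness, here is a minimal SQL CLR stored procedure skeleton in C# - a sketch only, where the dbo.RawData table and the parsing step are invented placeholders:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public static class FileProcessing
        {
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                // The context connection runs in-process, inside the calling session.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand("SELECT Id, RawLine FROM dbo.RawData", conn);
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // ... parse the line, apply business rules, queue updates ...
                        }
                    }
                    // Informational messages reach the client as they are sent.
                    SqlContext.Pipe.Send("Batch complete.");
                }
            }
        }

    One small point in favour of this route: SqlContext.Pipe.Send() messages reach the client while the procedure runs, which gives you at least a crude progress channel - the very thing listed under Cons above.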

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path-truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths, one that focuses on the distinct elements. For example, if my paths are like this:

        Unit with X/Test 3V/
        Unit with X/Test 4V/
        Unit with X/Test 5V/
        Unit without X/Test 3V/
        Unit without X/Test 6V/
        Unit without X/2nd Test 6V/

    When not enough display space is available, they should be truncated to something like this:

        ...with X/...3V/
        ...with X/...4V/
        ...with X/...5V/
        ...without X/...3V/
        ...without X/...6V/
        ...without X/2nd ...6V/

    (assuming that an ellipsis is generally shorter than three letters). This is just an example of a rather simple, ideal case (e.g. the paths would all end up at different lengths now, and I wouldn't know how to produce a good suggestion when a path like "Thingie/Long Test/" is added to the pool). There is no given structure to the path elements; they are assigned by the user, but items will often have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a list of suggestions. Data-wise, a typical use case would contain 2-4 path segments and 20 elements per segment. I am looking for previous attempts in this direction, and whether it is solvable with a sensible amount of code or dependencies.
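    A rough sketch of the per-segment idea in Java (my own illustration, not a known published algorithm; it handles the shared-prefix case from the example but not the inner "2nd ...6V" case, and it ignores font measurement entirely):

        import java.util.*;

        public class PathEllipsis {

            /** Shorten one segment value relative to its peers at the same position. */
            static String shorten(String value, List<String> peers) {
                int shared = 0; // longest prefix shared with any *other* peer value
                for (String p : peers) {
                    if (p.equals(value)) continue;
                    int i = 0;
                    while (i < value.length() && i < p.length()
                            && value.charAt(i) == p.charAt(i)) i++;
                    shared = Math.max(shared, i);
                }
                if (shared == 0) return value;
                // Back up to the start of the word where the divergence happens, so
                // "Unit with X" / "Unit without X" become "...with X" / "...without X".
                int cut = value.lastIndexOf(' ', shared - 1) + 1;
                return cut > 0 ? "..." + value.substring(cut) : value;
            }

            public static void main(String[] args) {
                List<String> units = Arrays.asList("Unit with X", "Unit without X");
                for (String u : units) System.out.println(shorten(u, units));
                // prints: ...with X   and   ...without X
            }
        }

    A measure function would then fit on top: only apply the ellipsis to the segment positions needed to bring the widest path under the available width.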

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my NHibernate mapping and I can't find a solution by searching on Stack Overflow/Google/the documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    Units are assigned to a damage, and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied: the "ends" field of the old record and the "starts" field of the new record are set to the current timestamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select * from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database; I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map this so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), each with a bag containing all the units attached at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to NHibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias
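    One NHibernate feature worth investigating here is filters, which can apply the same parameterised condition to both the class and the collection at load time. A rough sketch of an hbm.xml mapping, untested and with class/property names invented to match the tables above:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <filter-def name="asOf">
            <filter-param name="time" type="DateTime" />
          </filter-def>

          <class name="Damage" table="damage">
            <id name="Id" column="id" />
            <property name="EnduringId" column="enduring_id" />
            <property name="Starts" column="starts" />
            <property name="Ends" column="ends" />
            <bag name="Units" table="unit">
              <key column="damage_enduring_id" property-ref="EnduringId" />
              <one-to-many class="Unit" />
              <filter name="asOf" condition="starts &lt;= :time and ends &gt;= :time" />
            </bag>
            <filter name="asOf" condition="starts &lt;= :time and ends &gt;= :time" />
          </class>
        </hibernate-mapping>

    The filter would then be switched on per session before querying, something like: session.EnableFilter("asOf").SetParameter("time", wantedTime);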

  • Deleting a resource in a Cucumber (Capybara) step doesn't work

    - by Josiah Kiehl
    Here is my scenario:

        Scenario: Delete a match
          Given pojo is logged in
          And there is a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |
          And I am on the show match 1 page
          And show me the page
          When I follow "Delete"
          And I follow "Yes, delete it"
          Then there should not be a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |

    If I walk through these steps manually, they work: when I click the confirmation "Yes, delete it", the match is deleted. Cucumber, however, fails to delete the record, and the last step fails:

        And I follow "Yes, delete it"                        # features/step_definitions/web_steps.rb:32
        Then there should not be a match with the following: # features/step_definitions/match_steps.rb:8
          | game_id       | 1                          |
          | name          | Game del Pojo              |
          | date_and_time | 2010-02-23 17:52:00        |
          | players       | 2                          |
          | teams         | 2                          |
          | comment       | This is an awesome comment |
          | user_id       | 1                          |
        <nil> expected but was
        <#<Match id: 1, name: "Game del Pojo", date_and_time: "2010-02-23 17:52:00", teams: 2,
        created_at: "2010-03-02 23:06:33", updated_at: "2010-03-02 23:06:33",
        comment: "This is an awesome comment", players: 2, game_id: 1, user_id: 1>>.
        (Test::Unit::AssertionFailedError)
        /usr/lib/ruby/1.8/test/unit/assertions.rb:48:in `assert_block'
        /usr/lib/ruby/1.8/test/unit/assertions.rb:495:in `_wrap_assertion'
        /usr/lib/ruby/1.8/test/unit/assertions.rb:46:in `assert_block'
        /usr/lib/ruby/1.8/test/unit/assertions.rb:83:in `assert_equal'
        /usr/lib/ruby/1.8/test/unit/assertions.rb:172:in `assert_nil'
        ./features/step_definitions/match_steps.rb:22:in `/^there should (not)? be a match with the following:$/'
        features/matches.feature:124:in `Then there should not be a match with the following:'

    Any clue how to debug this? Thanks!
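    (For what it's worth, one commonly cited cause of exactly this symptom - assuming the "Delete" link is a Rails :method => :delete link - is that Capybara's default rack-test driver doesn't execute the JavaScript such links rely on, so the click falls through to a plain GET. Tagging the scenario to run under a JavaScript-capable driver is a quick way to test that theory:)

        @javascript
        Scenario: Delete a match
          ...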

  • DDD: Enum-like entities

    - by Chris
    Hi all, I have the following DB model:

        Person table
        ID | Name  | StateId
        --------------------
        1  | Joe   | 1
        2  | Peter | 1
        3  | John  | 2

        State table
        ID | Desc
        -------------
        1  | Working
        2  | Vacation

    and the domain model would be (simplified):

        public class Person
        {
            public int Id { get; }
            public string Name { get; set; }
            public State State { get; set; }
        }

        public class State
        {
            private int id;
            public string Name { get; set; }
        }

    The state might be used in the domain logic, e.g.:

        if (person.State == State.Working)
            // some logic

    So from my understanding, the State acts like a value object which is used for domain logic checks. But it also needs to be present in the DB model to represent a clean ERM. So State might be extended to:

        public class State
        {
            private int id;
            public string Name { get; set; }
            public static State New
            {
                get { return new State([hardCodedIdHere?], [hardCodedNameHere?]); }
            }
        }

    But using this approach the name of the state would be hardcoded into the domain. Do you know what I mean? Is there a standard approach for such a thing? From my point of view, what I am trying to do is use an object (which is persisted, from the ERM design perspective) as a sort of value object within my domain. What do you think?

    Question update: probably my question wasn't clear enough. What I need to know is how I would use an entity (like the State example) that is stored in a database within my domain logic, avoiding things like:

        if (person.State.Id == State.Working.Id)
            // some logic

    or

        if (person.State.Id == WORKING_ID)
            // some logic
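    One commonly suggested pattern for this (a sketch, not the one canonical answer) is the "enumeration class": the well-known instances are static members whose identity is the same ID the database row uses, and equality is defined over that ID, so domain checks read naturally while the State table remains the system of record. The IDs and names below simply mirror the example tables:

        public class State
        {
            public static readonly State Working  = new State(1, "Working");
            public static readonly State Vacation = new State(2, "Vacation");

            public int Id { get; private set; }
            public string Name { get; private set; }

            private State(int id, string name)
            {
                Id = id;
                Name = name;
            }

            // Identity-based equality lets "person.State == State.Working" do the
            // right thing even when the instance was loaded from the database.
            public override bool Equals(object obj)
            {
                var other = obj as State;
                return other != null && other.Id == Id;
            }

            public override int GetHashCode() { return Id; }

            public static bool operator ==(State a, State b)
            {
                return ReferenceEquals(a, b)
                    || (!ReferenceEquals(a, null) && a.Equals(b));
            }

            public static bool operator !=(State a, State b) { return !(a == b); }
        }

    The IDs (and, if you compare them, the names) are still duplicated between code and the State table, so a startup-time assertion that the two agree is a common companion to this pattern.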

  • How to insert a message into a View that depends on a session value (ASP.NET MVC, best practice)

    - by Andrew Florko
    Users have to fill in a multi-step questionnaire, and the step messages depend on an option the user chooses at the very beginning. The messages are stored in the web.config file. I am using an ASP.NET MVC project with strongly typed views, and I keep the business logic separate from the controllers in a static class. I don't want to make the business logic depend on web.config. So: I have to insert a message into a view, and that message depends on a session value. There are at least two options for implementing this:

    - The view model has a property that is populated in the controller/business logic and rendered in the view like <%: Model.HelpMessage1 %>. But then I have to pass web.config values from the controller into the business logic, which makes the business logic method signatures too complex. I also don't want to abstract the configuration source (in order to let the business logic read configuration values directly from its methods).
    - Create a static helper class that is called from the view like <%: ViewHelper.HelpMessage1(Model.Option1) %>. But in this case the logic deciding what to show is split across two classes: the business logic and the view helper.

    What would you suggest? Thank you in advance!
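    A third option worth weighing - a sketch, with the key-naming convention invented - is to keep the lookup in an HtmlHelper extension, so neither the business logic nor the view model ever touches web.config: the view asks for a message by option, and the helper resolves it from configuration at the boundary:

        using System.Web.Configuration;
        using System.Web.Mvc;

        public static class HelpMessageExtensions
        {
            // Usage in a view: <%: Html.HelpMessage(Model.Option1) %>
            public static MvcHtmlString HelpMessage(this HtmlHelper html, string option)
            {
                // e.g. appSettings keys like "HelpMessage.Option1" (assumed naming)
                var text = WebConfigurationManager.AppSettings["HelpMessage." + option]
                           ?? string.Empty;
                return MvcHtmlString.Create(html.Encode(text));
            }
        }

    The business logic still owns which option applies; only the mapping from option to display text lives in the view layer.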

  • Coding style in .NET: whether to refactor into new method or not?

    - by Dione
    Hi. As you're aware, in .NET code-behind style we already use a lot of methods to accommodate the _Click handlers, _SelectedIndexChanged handlers, etc. On our team there are some developers who create methods in the middle of a .NET handler, for example:

        public void Button_Click(object sender, EventArgs e)
        {
            // some logic here..
            // some logic there..
            DoSomething();
            DoSomethingThere();
            // another logic here..
            DoOtherSomething();
        }

        private void DoSomething() { }
        private void DoSomethingThere() { }
        private void DoOtherSomething() { }

        public void DropDown_SelectedIndexChanged() { }
        public void OtherButton_Click() { }

    and the methods listed above are used only once in that handler, and not anywhere else on the page or called from other parts of the solution. They say it makes the code tidier to group the statements and extract them into additional sub-methods. I can understand it if the sub-method is used over and over again in the code, but if it is only used once, I think it is not really a good idea to extract it into a sub-method: as the code gets bigger and bigger, when you look at the page and try to understand the logic, or debug by skimming through it line by line, you get confused jumping from the main method to a sub-method, back to the main method, and to a sub-method again. I know this kind of grouping was better when writing old ASP or ColdFusion, but I am not sure whether the style is better for .NET. The question is: when developing in .NET, is it better to group related logic into a sub-method (even though it is only used once), or to keep it inline in the main method with an // explanation here comment at the start of each logical chunk? I hope my question is clear enough. Thanks.

  • Returning a lua table on SWIG call

    - by Tom J Nowell
    I have a class with a method called GetEnemiesLua. I have bound this class to Lua using SWIG, and I can call this method from my Lua code. I am trying to get the method to return a Lua table of objects. Here is my current code:

        void CSpringGame::GetEnemiesLua()
        {
            std::vector<springai::Unit*> enemies = callback->GetEnemyUnits();
            if (enemies.empty())
            {
                lua_pushnil(ai->L);
                return;
            }
            else
            {
                lua_newtable(ai->L);
                int top = lua_gettop(ai->L);
                int index = 1;
                for (std::vector<springai::Unit*>::iterator it = enemies.begin();
                     it != enemies.end(); ++it)
                {
                    // key
                    lua_pushinteger(ai->L, index); //lua_pushstring(L, key);
                    // value
                    CSpringUnit* unit = new CSpringUnit(callback, *it, this);
                    ai->PushIUnit(unit);
                    lua_settable(ai->L, -3);
                    ++index;
                }
                ::lua_pushvalue(ai->L, -1);
            }
        }

    PushIUnit is as follows:

        void CTestAI::PushIUnit(IUnit* unit)
        {
            SWIG_NewPointerObj(L, unit, SWIGTYPE_p_IUnit, 1);
        }

    To test this I have the following code:

        t = game:GetEnemiesLua()
        if t == nil then
            game:SendToConsole("t is nil! ")
        end

    The result is always 't is nil', despite this being incorrect. I have put breakpoints in the code and it is indeed going through the loop rather than calling lua_pushnil. So how do I make my method return a table when called via Lua?
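    (A hedged guess at what is going on, since the question is left open here: SWIG's generated wrapper for a void method tells Lua that zero results were returned, so anything the method pushes onto the stack by hand is discarded - which would produce exactly this "always nil" symptom. SWIG's %native directive is one documented escape hatch: the function manages the stack itself and reports its own result count. A sketch, untested against this code base:)

        // In the SWIG interface (.i) file:
        //   %native(GetEnemiesLua) int CSpringGame_GetEnemiesLua(lua_State* L);

        extern "C" int CSpringGame_GetEnemiesLua(lua_State* L)
        {
            // ... build the table (or push nil) on the stack exactly as above ...
            return 1; // one result is handed back to Lua
        }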

  • New TPerlRegEx Compatible with Delphi XE

    - by Jan Goyvaerts
    The new RegularExpressionsCore unit in Delphi XE is based on the PerlRegEx unit that I wrote many years ago. Since I donated full rights to a copy rather than full rights to the original, I can continue to make my version of TPerlRegEx available to people using older versions of Delphi. I did make a few changes to the code to modernize it a bit prior to donating a copy to Embarcadero. The latest TPerlRegEx includes those changes. This allows you to use the same regex-based code using the RegularExpressionsCore unit in Delphi XE, and the PerlRegEx unit in Delphi 2010 and earlier. If you're writing new code using regular expressions in Delphi 2010 or earlier, I strongly recommend you use the new version of my PerlRegEx unit. If you later migrate your code to Delphi XE, all you have to do is replace PerlRegEx with RegularExpressionsCore in the uses clause of your units. If you have code written using an older version of TPerlRegEx that you want to migrate to the latest TPerlRegEx, you'll need to take a few changes into account. The original TPerlRegEx was developed when Borland's goal was to have a component for everything on the component palette. So the old TPerlRegEx derives from TComponent, allowing you to put it on the component palette and drop it on a form. The new TPerlRegEx derives from TObject. It can only be instantiated at runtime. If you want to migrate from an older version of TPerlRegEx to the latest TPerlRegEx, start by removing any TPerlRegEx components you may have placed on forms or data modules and instantiate the objects at runtime instead. When instantiating at runtime, you no longer need to pass an owner component to the Create() constructor. Simply remove the parameter. Some of the property and method names in the original TPerlRegEx were a bit unwieldy. These have been renamed in the latest TPerlRegEx. Essentially, in all identifiers SubExpression was replaced with Group and MatchedExpression was replaced with Matched. Here is a complete list of the changed identifiers:

        Old Identifier               New Identifier
        ---------------------------  --------------
        StoreSubExpression           StoreGroups
        NamedSubExpression           NamedGroup
        MatchedExpression            MatchedText
        MatchedExpressionLength      MatchedLength
        MatchedExpressionOffset      MatchedOffset
        SubExpressionCount           GroupCount
        SubExpressions               Groups
        SubExpressionLengths         GroupLengths
        SubExpressionOffsets         GroupOffsets

    Download TPerlRegEx. Source is included under the MPL 1.1 license.
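    In practice the migration is mechanical. A before/after sketch of the two changes described above (component removal aside):

        // Old code (TPerlRegEx as a TComponent, old identifier names):
        Regex := TPerlRegEx.Create(nil);   // owner parameter
        Count := Regex.SubExpressionCount;

        // New code (TPerlRegEx as a plain TObject, new names):
        Regex := TPerlRegEx.Create;        // no owner parameter
        Count := Regex.GroupCount;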

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think: Final Fantasy Tactics), and I am having trouble coming up with the design of the AI. My main problem is determining the optimal thing for it to do. First let me describe the priority of the actions I would like the AI to take:

    1. Kill the nearest player unit
    2. Fulfill its primary directive (kill all player units, kill a target unit, survive for x turns)
    3. Heal an ally unit / cast a buff

    On its turn, the AI can do the following:

    - Move + {Attack / Ability / Item} (either attack or ability or item)
    - {Attack / Ability / Item} + Move
    - Move closer (if targets are not in range)
    - {Attack / Ability / Item} (if a move is not available)

    Notes: Abilities have various ranges / effects / costs. Each AI unit has maybe 5-10 abilities to choose from. The AI prioritizes killing over safety unless its directive is to survive for x turns. It also doesn't care much about ability cost; while a player may want to save a big spell for later, the AI will most likely use it as soon as possible. Movement is on a (hex) grid. Number of player units: 3-6. Number of AI units: 3-7 or more, probably 10 at most. The AI and the player take turns controlling ONE unit at a time, instead of all at the same time. The platform is Android (if the program doesn't respond for some time, a popup appears asking the user to Force Quit or Wait - which looks really bad!).

    Now come the questions. The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has a different range, I won't know whether targets are in range without exploring each possible place I can move to. One solution would be to go through each possible move destination, determine the optimal attack at that location - which gives me a list of optimal attacks, one per location - then choose the best of the list and execute it. But this would take a lot of CPU time. Is there a better solution? My current idea is to move as close as possible towards the closest, largest group of enemies, and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow for wide-range attacks. It is sub-optimal, but the AI will still seem 'smart'. Other notes/questions: Am I over-thinking/over-complicating it? Is there a better solution? I am open to all sorts of suggestions. I have taken a look at the spell-casting question, but it doesn't take movement into account - so perhaps use that algorithm for each possible move location? The top answer mentioned it wasn't great for area-of-effect spells and group fights, so maybe it requires more tweaking? Please, if you mention a graph/tree, let me know basically how to use it (e.g. a node means an ability, the level corresponds to damage, then search for the deepest node).
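    For scale: with roughly 30 reachable tiles, 10 abilities and 10 potential targets, the full enumeration described above is on the order of a few thousand cheap evaluations per turn, which is usually tolerable even on modest hardware. A rough Java sketch of that enumeration, with all types and the scoring rule invented as placeholders:

        import java.util.*;

        // Placeholder types - stand-ins for whatever the game already has.
        record Tile(int q, int r) {}
        record Ability(int range, int damage) {}
        record Enemy(Tile pos, int hp) {}

        public class TurnPlanner {

            // Axial hex distance.
            static int distance(Tile a, Tile b) {
                return (Math.abs(a.q() - b.q()) + Math.abs(a.r() - b.r())
                        + Math.abs((a.q() + a.r()) - (b.q() + b.r()))) / 2;
            }

            /** Enumerate every (move tile, ability) pair and keep the best score. */
            static int bestScore(List<Tile> reachable, List<Ability> abilities,
                                 List<Enemy> enemies) {
                int best = 0;
                for (Tile tile : reachable) {
                    for (Ability ab : abilities) {
                        int score = 0;
                        for (Enemy e : enemies) {
                            if (distance(tile, e.pos()) <= ab.range()) {
                                // Weight kills above raw damage.
                                score += Math.min(ab.damage(), e.hp());
                                if (ab.damage() >= e.hp()) score += 100;
                            }
                        }
                        best = Math.max(best, score);
                    }
                }
                return best;
            }
        }

    Pruning tiles that lie farther than the longest ability range from every enemy, and running the search off the UI thread, should keep the Android "Force Quit or Wait" dialog away.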

  • Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache

    - by Ricardo Ferreira
    Data grids are becoming very popular these days, since this type of technology is evolving very fast, with cool leading products like Oracle Coherence. Once in a while, developers need a programmatic way to calculate the total size of a specific cache residing in the data grid. In this post, I will show how to accomplish this using the Oracle Coherence API. This example has been tested with the 3.6, 3.7 and 3.7.1 versions of Oracle Coherence. To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure also creates what I call an internal "fat": random payload that should considerably increase the size of each instance in heap memory. Create a Java class named "Person" as shown in the listing below.

        package com.oracle.coherence.domain;

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Random;

        @SuppressWarnings("serial")
        public class Person implements Serializable {

            private String firstName;
            private String lastName;
            private List<Object> fat;
            private String email;

            public Person() {
                generateFat();
            }

            public Person(String firstName, String lastName, String email) {
                setFirstName(firstName);
                setLastName(lastName);
                setEmail(email);
                generateFat();
            }

            private void generateFat() {
                fat = new ArrayList<Object>();
                Random random = new Random();
                for (int i = 0; i < random.nextInt(18000); i++) {
                    HashMap<Long, Double> internalFat = new HashMap<Long, Double>();
                    for (int j = 0; j < random.nextInt(10000); j++) {
                        internalFat.put(random.nextLong(), random.nextDouble());
                    }
                    fat.add(internalFat);
                }
            }

            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }

            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }

            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
        }

    Now let's create a Java program that will start a data grid in Coherence and will create a cache named "People", which will hold Person instances with sequential integer keys. Each person created in this program triggers the custom constructor of the Person class, which instantiates the internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below.

        package com.oracle.coherence.demo;

        import com.oracle.coherence.domain.Person;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class CreatePeopleCacheAndPopulateWithData {

            public static void main(String[] args) {
                // Asks Coherence for a new cache named "People"...
                NamedCache people = CacheFactory.getCache("People");
                // Creates three people that will be put into the data grid. Each person
                // generates an internal fat that should increase its size in terms of bytes...
                Person pessoa1 = new Person("Ricardo", "Ferreira", "[email protected]");
                Person pessoa2 = new Person("Vitor", "Ferreira", "[email protected]");
                Person pessoa3 = new Person("Vivian", "Ferreira", "[email protected]");
                // Insert the three people into the data grid...
                people.put(1, pessoa1);
                people.put(2, pessoa2);
                people.put(3, pessoa3);
                // Waits for 5 minutes until the user runs the Java program
                // that calculates the total size of the people cache...
                try {
                    System.out.println("---> Waiting for 5 minutes for the cache size calculation...");
                    Thread.sleep(300000);
                } catch (InterruptedException ie) {
                    ie.printStackTrace();
                }
            }
        }

    Finally, let's create a Java program that, using the Coherence API and JMX, will calculate the total size of each cache that the data grid is currently managing. The approach used in this example retrieves every cache the data grid currently manages; if you are interested in one specific cache, the same approach can be used - you only need to filter which cache is looked up. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below.

        package com.oracle.coherence.demo;

        import java.text.DecimalFormat;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeMap;
        import javax.management.MBeanServer;
        import javax.management.MBeanServerFactory;
        import javax.management.ObjectName;
        import com.tangosol.net.CacheFactory;

        public class CalculateTheSizeOfPeopleCache {

            @SuppressWarnings({ "unchecked", "rawtypes" })
            private void run() throws Exception {
                // Enable JMX support in this Coherence data grid session...
                System.setProperty("tangosol.coherence.management", "all");
                // Create a sample cache just to access the data grid...
                CacheFactory.getCache(MBeanServerFactory.class.getName());
                // Gets the JMX server from the Coherence data grid...
                MBeanServer jmxServer = getJMXServer();
                // Creates an internal data structure that maintains
                // the statistics for each cache in the data grid...
                Map cacheList = new TreeMap();
                Set jmxObjectList = jmxServer.queryNames(
                    new ObjectName("Coherence:type=Cache,*"), null);
                for (Object jmxObject : jmxObjectList) {
                    ObjectName jmxObjectName = (ObjectName) jmxObject;
                    String cacheName = jmxObjectName.getKeyProperty("name");
                    if (cacheName.equals(MBeanServerFactory.class.getName())) {
                        continue;
                    } else {
                        cacheList.put(cacheName, new Statistics(cacheName));
                    }
                }
                // Updates the internal data structure with statistic data
                // retrieved from caches inside the in-memory data grid...
                Set<String> cacheNames = cacheList.keySet();
                for (String cacheName : cacheNames) {
                    Set resultSet = jmxServer.queryNames(
                        new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null);
                    for (Object resultSetRef : resultSet) {
                        ObjectName objectName = (ObjectName) resultSetRef;
                        if (objectName.getKeyProperty("tier").equals("back")) {
                            int unit = (Integer) jmxServer.getAttribute(objectName, "Units");
                            int size = (Integer) jmxServer.getAttribute(objectName, "Size");
                            Statistics statistics = (Statistics) cacheList.get(cacheName);
                            statistics.incrementUnit(unit);
                            statistics.incrementSize(size);
                            cacheList.put(cacheName, statistics);
                        }
                    }
                }
                // Finally... print the objects from the internal data
                // structure that represent the statistics of the caches...
                cacheNames = cacheList.keySet();
                for (String cacheName : cacheNames) {
                    Statistics estatisticas = (Statistics) cacheList.get(cacheName);
                    System.out.println(estatisticas);
                }
            }

            public MBeanServer getJMXServer() {
                MBeanServer jmxServer = null;
                for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) {
                    jmxServer = (MBeanServer) jmxServerRef;
                    if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN)
                            || DEFAULT_DOMAIN.length() == 0) {
                        break;
                    }
                    jmxServer = null;
                }
                if (jmxServer == null) {
                    jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN);
                }
                return jmxServer;
            }

            private class Statistics {

                private long unit;
                private long size;
                private String cacheName;

                public Statistics(String cacheName) {
                    this.cacheName = cacheName;
                }

                public void incrementUnit(long unit) { this.unit += unit; }
                public void incrementSize(long size) { this.size += size; }

                public long getUnit() { return unit; }
                public long getSize() { return size; }

                public double getUnitInMB() {
                    return unit / (1024.0 * 1024.0);
                }

                public double getAverageSize() {
                    return size == 0 ? 0 : unit / size;
                }

                public String toString() {
                    StringBuffer sb = new StringBuffer();
                    sb.append("\nCache Statistics of '").append(cacheName).append("':\n");
                    sb.append(" - Total Entries of Cache -----> " + getSize()).append("\n");
                    sb.append(" - Used Memory (Bytes) --------> " + getUnit()).append("\n");
                    sb.append(" - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n");
                    sb.append(" - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n");
                    return sb.toString();
                }
            }

            public static void main(String[] args) throws Exception {
                new CalculateTheSizeOfPeopleCache().run();
            }

            public static final DecimalFormat FORMAT = new DecimalFormat("###.###");
            public static final String DEFAULT_DOMAIN = "";
            public static final String DOMAIN_NAME = "Coherence";
        }

    I've commented the overall example, so I don't think you should have trouble understanding it. Basically we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client application (i.e., a JVM that only retrieves values from the data grid and does not join the cluster). This can be done very easily using the "tangosol.coherence.management" runtime system property. Consult the Coherence documentation for JMX to understand the possible values that can be applied. The program creates an in-memory data structure that holds instances of a custom class called "Statistics". This class represents the information we are interested in, which in this case is the size in bytes and in MB of each cache. An instance of this class is created for each cache currently managed by the data grid. Using JMX-specific methods, we retrieve the information relevant to calculating the total size of the caches. To test this example, you should first execute the CreatePeopleCacheAndPopulateWithData.java program and afterwards the CalculateTheSizeOfPeopleCache.java program. The results in the console should be something like this:

        2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml"
        2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
        2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
        2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified

        Oracle Coherence Version 3.6.0.4 Build 19111
        Grid Edition: Development mode
        Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

        2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml"
        2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml"
        2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider
        2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2)
        2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1
        2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
        2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
        2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB
        Group{Address=224.3.6.0, Port=36000, TTL=4}
        MasterMemberSet (
          ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
          OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
          ActualMemberSet=MemberSet(Size=2, BitSetCount=2
            Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
            Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
          )
          RecycleMillis=1200000
          RecycleSet=MemberSet(Size=0, BitSetCount=0)
        )
        TcpRing{Connections=[1]}
        IpMonitor{AddressListSize=0}
        2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
        2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
        2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions

        Cache Statistics of 'People':
         - Total Entries of Cache -----> 3
         - Used Memory (Bytes) --------> 883920
         - Used Memory (MB) -----------> 0.843
         - Object Average Size --------> 294640

    I hope this post can save you some time when calculating the total size of a Coherence cache becomes a requirement for your highly scalable system using data grids. See you!

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system. One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that helps here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it - proceeding with logic that, while functional, appears (without further analysis) not to take advantage of all the niceties bundled in Solaris 11 at no extra cost.

    When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail. If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll need to wrap it into an SMF service and then package it into an IPS package. The IPS package then needs to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).

    With this blog post, I wanted to create a single place that outlines the entire process (simplistically) and gives a hint of how the good old "at" command can come in handy when an initial reboot has to be forced. The syntax and command references here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1). Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "OK, I've got some logic that I need executed AFTER Solaris is deployed, and I need my own little script to make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle. svcbundle is an aid to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you. (See this How To article by my colleague Glynn Foster for a nice working example.)

    In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script in two manifests - an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then - buckle up. We have some examples of small first boot scripts shown here, as templates to build upon. The necessary structure of the script, particularly in leveraging SMF interfaces, is key. I won't go into that here, as it is covered nicely in the doc link above. Let's say your script ends up looking like this (by the way: if things appear to be cut off in your browser, just select them, copy and paste into your editor, and the text will be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers):

        #!/bin/sh

        # Load SMF shell support definitions
        . /lib/svc/share/smf_include.sh

        # If nothing to do, exit with temporary disable
        completed=`svcprop -p config/completed site/first-boot-script-svc:default`
        [ "${completed}" = "true" ] && \
            smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

        # Obtain the active BE name from beadm: The active BE on reboot has an R in
        # the third column of 'beadm list' output. Its name is in column one.
        bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {print $1}'`
        beadm create ${bename}.orig
        echo "Original boot environment saved as ${bename}.orig"

        # ---- Place your one-time configuration tasks here ----

        # For example, if you have to pull some files from your own pre-existing system:
        /usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
        /usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
        # Clearly the above 2 lines represent some logic that you'd have to customize
        # to fit your needs.
        #
        # Perhaps additional things you may want to do here might be of use, like
        # (gasp!) configuring the ssh server for root login and X11 forwarding (for
        # testing), and the like...
        #
        # Oh and by the way, after we're done executing all of our proprietary scripts
        # we need to reboot the system in accordance with our operational software
        # requirements, to ensure all layered bits get initialized properly and pull in
        # their own modules and components in the right sequence, subsequently.
        # We need to set a "time bomb" reboot that takes place upon completion of this
        # script. We already know that *this* script depends on the multi-user-server
        # SMF milestone, so it should be safe for us to schedule a reboot for 5 minutes
        # from now. The "at" job gets scheduled in the queue while our little script
        # continues through the rest of the logic.
        /usr/bin/at now + 5 minutes <<REBOOT
        /usr/bin/sync
        /usr/sbin/reboot
        REBOOT

        # ---- End of your customizations ----

        # Record that this script's work is done
        svccfg -s site/first-boot-script-svc:default setprop config/completed = true
        svcadm refresh site/first-boot-script-svc:default
        smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

    ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running your script's logic constitutes a service, you need to create a service manifest. This is described here, in the middle of Chapter 13, in "Creating an IPS package for the script and service". Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

        $ cd some_working_directory_for_this_project
        $ mkdir -p proto/lib/svc/manifest/site
        $ mkdir -p proto/opt/site
        $ cp first-boot-script.sh proto/opt/site

    Then you would create the service manifest file like so:

        $ svcbundle -s service-name=site/first-boot-script-svc \
            -s start-method=/opt/site/first-boot-script.sh \
            -s instance-property=config:completed:boolean:false \
            -o first-boot-script-svc-manifest.xml

    ...as described here, and place it into the directory hierarchy above. But before you do, make sure to inspect the manifest and adjust the service dependencies appropriately. That is to say, you want to properly specify what milestone should be reached before your service runs. There's a <dependency> section that looks like this before you modify it:

        <dependency restart_on="none" type="service"
                    name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>

    So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if carried out prematurely), you would modify that section to read:

        <dependency restart_on="none" type="service"
                    name="multi_user_server_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user-server"/>
        </dependency>

    Save the file and validate it:

        $ svccfg validate first-boot-script-svc-manifest.xml

    Assuming there are no errors returned, copy the file over into the directory hierarchy:

        $ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

    Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m. Populate it as follows:

        set name=pkg.fmri value=first-boot-script@1.0,5.11-0
        set name=pkg.summary value="AI first-boot script"
        set name=pkg.description value="Script that runs at first boot after AI installation"
        set name=info.classification value=\
            "org.opensolaris.category.2008:System/Administration and Configuration"
        file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
            path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
            group=sys mode=0444
        dir path=opt/site owner=root group=sys mode=0755
        file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
            owner=root group=sys mode=0555

    Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry; you have two choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point you have two choices as well: you can create a repo that will be accessible by your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd. We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

        # zfs create rpool/export/MyIPSrepo
        # pkgrepo create /export/MyIPSrepo
        # svccfg -s pkg/server add MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg pkg application
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg general framework
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
        # svcadm refresh application/pkg/server:MyIPSrepo
        # svcadm enable application/pkg/server:MyIPSrepo

    Now that the IPS repo is created, we need to publish our package into it:

        # pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

    If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package. Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands above):

        <publisher name="firstboot">
            <origin name="http://192.168.1.222:10081"/>
        </publisher>

    Further down, in the <software_data action="install"> section, add:

        <name>pkg:/first-boot-script</name>

    Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed, and whatever logic you've specified in it should be executed too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the tailing lines of your SMF script - this is normal and should be left as is, as it helps provide an audit trail for you. Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places, and then checks for the presence of, certain lock files, in order to avoid rebooting unnecessarily. You may also want to remove the SMF service entirely, if you're unsure of the potential for someone to accidentally enable that service - even though its role in life is to run only once, upon the system's first boot. That is how I spent a good chunk of my pre-Halloween time this week; hope yours was just as SPARCkly^H^H^H^H fun!

  • idiomatic property changed notification in scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (one idiomatic to Scala) to the kind of thing you see with data binding in WPF/Silverlight - that is, implementing INotifyPropertyChanged. First, some background: in .NET WPF or Silverlight applications, you have the concept of two-way data binding (that is, binding the value of some UI element to a .NET property of the DataContext in such a way that changes to the UI element affect the property, and vice versa). One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala:

        trait IDrawable extends INotifyPropertyChanged {
          protected var drawOrder: Int = 0
          def DrawOrder: Int = drawOrder
          def DrawOrder_=(value: Int) {
            if (drawOrder != value) {
              drawOrder = value
              OnPropertyChanged("DrawOrder")
            }
          }

          protected var visible: Boolean = true
          def Visible: Boolean = visible
          def Visible_=(value: Boolean) = {
            if (visible != value) {
              visible = value
              OnPropertyChanged("Visible")
            }
          }

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1 // Should trigger the PropertyChanged "Event" of the INotifyPropertyChanged trait
            }
          }
        }

    For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) => Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef and the passed-in String. (This would just be an event in C#.) You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead:

        trait IDrawable {
          val Visible = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1 // Should trigger the PropertyChanged "Event" of the ObservableProperty class
            }
          }
        }

    I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the approach I'm using now):

        trait IDrawable {
          // on a side note, is there some way to get a Symbol representing the Visible field
          // on the following line, instead of hard-coding it in the ObservableProperty
          // constructor?
          val Visible = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible.Value) {
              DrawOrder.Value += 1
            }
          }
        }

        // given this implementation of ObservableProperty[T] in my library
        // note: IEvent, Event, and EventArgs are classes in my library for
        // handling lists of callbacks - they work similarly to events in C#
        class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("")

        class ObservableProperty[T](val PropertyName: Symbol, private var value: T) {
          protected val propertyChanged = new Event[PropertyChangedEventArgs]
          def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged

          def Value = value
          def Value_=(value: T) {
            if (this.value != value) {
              this.value = value
              propertyChanged(this, new PropertyChangedEventArgs(PropertyName))
            }
          }
        }

    But is there any way to implement the first version using implicits or some other feature/idiom of Scala, to make ObservableProperty instances function as if they were regular "properties" in Scala, without needing to call the Value methods? The only other thing I can think of is something like this, which is more verbose than either of the two versions above, but still less verbose than the original:

        trait IDrawable {
          private val visible = new ObservableProperty[Boolean]('Visible, false)
          def Visible = visible.Value
          def Visible_=(value: Boolean): Unit = { visible.Value = value }

          private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0)
          def DrawOrder = drawOrder.Value
          def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value }

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1
            }
          }
        }
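    For the read side at least, an implicit conversion in ObservableProperty's companion object gets most of the way to the first version - a sketch built on the ObservableProperty class above. Assignment still has to go through Value_=, since an implicit conversion cannot turn "Visible = true" into a setter call:

        object ObservableProperty {
          import scala.language.implicitConversions

          // Lives in the companion object, so it is found via the implicit scope
          // of ObservableProperty with no import needed at the use site.
          implicit def propertyToValue[T](p: ObservableProperty[T]): T = p.Value
        }

        trait IDrawable {
          val Visible   = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible) {             // implicit conversion supplies Visible.Value
              DrawOrder.Value += 1     // writes still have to go through Value
            }
          }
        }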

  • Configuring ant to run unit tests. Where should libraries be? How should the classpath be configured?

    - by chillitom
    Hi all, I'm trying to run my JUnit tests using Ant. The tests are kicked off using a JUnit 4 test suite. If I run the suite directly from Eclipse, the tests complete without error. However, if I run it from Ant, many of the tests fail with this error repeated over and over until the junit task crashes:

        [junit] java.util.zip.ZipException: error in opening zip file
        [junit]     at java.util.zip.ZipFile.open(Native Method)
        [junit]     at java.util.zip.ZipFile.<init>(ZipFile.java:114)
        [junit]     at java.util.zip.ZipFile.<init>(ZipFile.java:131)
        [junit]     at org.apache.tools.ant.AntClassLoader.getResourceURL(AntClassLoader.java:1028)
        [junit]     at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.findNextResource(AntClassLoader.java:147)
        [junit]     at org.apache.tools.ant.AntClassLoader$ResourceEnumeration.nextElement(AntClassLoader.java:130)
        [junit]     at org.apache.tools.ant.util.CollectionUtils$CompoundEnumeration.nextElement(CollectionUtils.java:198)
        [junit]     at sun.misc.CompoundEnumeration.nextElement(CompoundEnumeration.java:43)
        [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.checkForkedPath(JUnitTask.java:1128)
        [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeAsForked(JUnitTask.java:1013)
        [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:834)
        [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.executeOrQueue(JUnitTask.java:1785)
        [junit]     at org.apache.tools.ant.taskdefs.optional.junit.JUnitTask.execute(JUnitTask.java:785)
        [junit]     at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
        [junit]     at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
        [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        [junit]     at java.lang.reflect.Method.invoke(Method.java:597)
        [junit]     at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        [junit]     at org.apache.tools.ant.Task.perform(Task.java:348)
        [junit]     at org.apache.tools.ant.Target.execute(Target.java:357)
        [junit]     at org.apache.tools.ant.Target.performTasks(Target.java:385)
        [junit]     at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
        [junit]     at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
        [junit]     at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
        [junit]     at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
        [junit]     at org.apache.tools.ant.Main.runBuild(Main.java:758)
        [junit]     at org.apache.tools.ant.Main.startAnt(Main.java:217)
        [junit]     at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
        [junit]     at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)

    My test-running target is as follows:

        <target name="run-junit-tests" depends="compile-tests,clean-results">
            <mkdir dir="${test.results.dir}"/>
            <junit failureproperty="tests.failed" fork="true" showoutput="yes"
                   includeantruntime="false">
                <classpath refid="test.run.path" />
                <formatter type="xml" />
                <test name="project.AllTests" todir="${basedir}/test-results" />
            </junit>
            <fail if="tests.failed" message="Unit tests failed"/>
        </target>

    I've verified that the classpath contains the following, as well as all of the program code and libraries:

        ant-junit.jar
        ant-launcher.jar
        ant.jar
        easymock.jar
        easymockclassextension.jar
        junit-4.4.jar

    I've tried debugging to find out which ZipFile it is trying to open, with no luck. I've tried toggling includeantruntime and fork, and I've tried running Ant with "ant -lib test/libs", where test/libs contains the Ant and JUnit libraries. Any info about what causes this exception, or how you've configured Ant to successfully run unit tests, is gratefully received. Ant 1.7.1 (Ubuntu), Java 1.6.0_10, JUnit 4.4. Thanks.

    Update - fixed: I found my problem. I had included my classes directory in my path using a fileset as opposed to a pathelement. This was causing .class files to be opened as ZipFiles, which of course threw an exception.
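    To make the fix concrete, here is a sketch of the corrected path definition, with the directory property names assumed: compiled class directories go in <pathelement> entries, and only the jars go through a <fileset>:

        <path id="test.run.path">
            <!-- compiled classes: directories, so use pathelement, not fileset -->
            <pathelement location="${build.classes.dir}"/>
            <pathelement location="${build.test.classes.dir}"/>
            <!-- jars really are zip files, so a fileset is fine here -->
            <fileset dir="lib">
                <include name="**/*.jar"/>
            </fileset>
        </path>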

  • JBoss Seam project cannot be run/deployed

    - by user1494328
    I created a sample application with the Seam framework (Seam Web Project) on JBoss AS 7.1. When I try to run the application, the console displays:

        23:29:35,419 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-3) MSC00001: Failed to start service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml"
            at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:119) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
            at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
            at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746) [jboss-msc-1.0.2.GA.jar:1.0.2.GA]
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_24]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_24]
            at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_24]
        Caused by: org.jboss.as.server.deployment.DeploymentUnitProcessingException: IJ010061: Unexpected element: local-tx-datasource
            at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:85)
            at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:113) [jboss-as-server-7.1.1.Final.jar:7.1.1.Final]
            ... 5 more
        Caused by: org.jboss.jca.common.metadata.ParserException: IJ010061: Unexpected element: local-tx-datasource
            at org.jboss.jca.common.metadata.ds.DsParser.parseDataSources(DsParser.java:183)
            at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:119)
            at org.jboss.jca.common.metadata.ds.DsParser.parse(DsParser.java:82)
            at org.jboss.as.connector.deployers.processors.DsXmlDeploymentParsingProcessor.deploy(DsXmlDeploymentParsingProcessor.java:80)
            ... 6 more
        23:29:35,452 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015877: Stopped deployment secoundProject-ds.xml in 1ms
        23:29:35,455 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015863: Replacement of deployment "secoundProject-ds.xml" by deployment "secoundProject-ds.xml" was rolled back with failure message {"JBAS014671: Failed services" => {"jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"secoundProject-ds.xml\".PARSE: Failed to process phase PARSE of deployment \"secoundProject-ds.xml\""}}
        23:29:35,457 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015876: Starting deployment of "secoundProject-ds.xml"

    The deployment scanner then retries, the identical PARSE failure and stack trace repeat, and the log ends with:

        23:29:35,952 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report
        JBAS014777: Services which failed to start:
            service jboss.deployment.unit."secoundProject-ds.xml".PARSE: org.jboss.msc.service.StartException in service jboss.deployment.unit."secoundProject-ds.xml".PARSE: Failed to process phase PARSE of deployment "secoundProject-ds.xml"

    My secoundProject-ds.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE datasources PUBLIC "-//JBoss//DTD JBOSS JCA Config 1.5//EN" "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd">
        <datasources>
            <local-tx-datasource>
                <jndi-name>secoundProjectDatasource</jndi-name>
                <use-java-context>true</use-java-context>
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <driver-class>com.mysql.jdbc.Driver</driver-class>
                <user-name>root</user-name>
                <password></password>
            </local-tx-datasource>
        </datasources>

    When I comment out the <local-tx-datasource> element the errors disappear, but the application is unavailable in the browser ("The requested resource (/secoundProject/) is not available."). What should I do to fix this problem?
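    Note: the IJ010061 message is the parser rejecting the old JBoss 4/5 DTD format — AS 7 only understands the IronJacamar datasource schema, in which <datasource> replaces <local-tx-datasource>. A hedged sketch of the same datasource in AS 7 form; the pool name and the <driver> value are illustrative, and the driver reference must match a MySQL driver jar actually deployed to the server (or a module registered in standalone.xml):

        <?xml version="1.0" encoding="UTF-8"?>
        <datasources xmlns="http://www.jboss.org/ironjacamar/schema">
            <!-- AS 7 requires JNDI names under java:/ or java:jboss/ -->
            <datasource jndi-name="java:jboss/datasources/secoundProjectDatasource"
                        pool-name="secoundProjectPool" enabled="true" use-java-context="true">
                <connection-url>jdbc:mysql://localhost:3306/database</connection-url>
                <driver-class>com.mysql.jdbc.Driver</driver-class>
                <!-- illustrative: must match the deployed driver jar's name -->
                <driver>mysql-connector-java-5.1.18.jar</driver>
                <security>
                    <user-name>root</user-name>
                    <password></password>
                </security>
            </datasource>
        </datasources>

    The "resource not available" after commenting the element out is to be expected — the app then deploys with no datasource to bind to — so converting the file, rather than removing it, is the likely fix.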

    Read the article

  • Setting up Apache and PHP on Mac OS X Snow Leopard

    - by Martin Bean
    I've recently purchased an Apple iMac. Unfortunately, enabling Apache and PHP has thrown up some problems. I enabled Mac's built-in Web Sharing through System Preferences, at which point I got output and could add HTML files to my user directory. However, PHP files were being displayed rather than interpreted. I then discovered this is because PHP isn't enabled by default in Mac's Apache set-up. After a quick Google search, I came across this page: http://developer.apple.com/mac/articles/internet/phpeasyway.html

    I proceeded to the section Enabling PHP in Apache, copying and pasting the following code snippet into a new Terminal window and hitting Return:

        set admin_email to (do shell script "defaults read AddressBookMe ExistingEmailAddress")
        user_www=$HOME/Sites
        filename=php-test
        user_index=${user_www}/${filename}.php
        user_db=${user_www}/${filename}-db.sqlite3
        # NOTE: Having a writeable database in your home directory can be a security risk!
        conf=`apachectl -V | awk -F= '/SERVER_CONFIG/ {print \$2}'| sed 's/"//g'`
        conf_old=$conf.$$
        conf_new=/tmp/php_conf.new
        touch $user_db
        chmod a+r $user_index
        chmod a+w $user_db
        chmod a+w $user_www
        echo "Enabling PHP in $conf ..."
        sed '/#LoadModule php5_module/s/#LoadModule/LoadModule/' $conf | sed "s^[email protected]^<b>\$admin_email</b>^" > $conf_new
        echo "(Re)Starting Apache ..."
        osascript <<EOF
        do shell script "/bin/mv -f $conf $conf_old; /bin/mv $conf_new $conf; /usr/sbin/apachectl restart" with administrator privileges
        EOF

    Unfortunately, this has completely thrown Apache, and now nothing is being served; instead I'm receiving "Failed to open page" errors because the browser cannot connect to the server, despite Web Sharing still being active in System Preferences. So I guess my question is this: how can I undo the changes made by copying and pasting the above code snippet? Admittedly, I don't understand what it did; I just thought it looked like a Terminal command and tried it. I have no experience setting up Apache on Mac OS X (I've only installed XAMPP and WampServer on Windows). Any pointers on reversing the aforementioned, and then successfully enabling PHP, would be great.

    EDIT: I've discovered, via Console, that the following error messages are recorded when trying to browse to 127.0.0.1...

        (org.apache.httpd) Throttling respawn: Will start in 10 seconds
        no listening sockets available, shutting down
        Unable to open logs
        (org.apache.httpd[13453]) Exited with exit code: 1

    Does this point any more to the issue?

    EDIT #2: I'm now getting this in Console...

        15/02/2010 21:24:14 osascript[3597] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
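    Note: the script moved the live config aside before writing its edited copy, so the original should still be on disk next to it. A hedged recovery sketch, assuming the stock Snow Leopard config path /etc/apache2/httpd.conf — the backup's numeric suffix is the PID of the shell that ran the script, so the 12345 below is a placeholder:

        ls /etc/apache2/httpd.conf*                      # find the httpd.conf.<pid> backup
        sudo mv /etc/apache2/httpd.conf.12345 /etc/apache2/httpd.conf
        # then enable PHP the manual way: uncomment the module line...
        sudo sed -i.bak 's|^#LoadModule php5_module|LoadModule php5_module|' /etc/apache2/httpd.conf
        # ...and verify the config parses before restarting
        sudo apachectl configtest && sudo apachectl restart

    The "Unable to open logs / no listening sockets" failure is consistent with the script's second sed having injected <b>...</b> markup into the config file, which is why Apache refuses to start; configtest will flag any leftover damage.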

    Read the article

  • PlaySound linker error in C++

    - by logic-unit
    Hello, I'm getting this error:

        [Linker error] undefined reference to 'PlaySoundA@12'
        ld returned 1 exit status

    From this code:

        // c++ program to generate a random sequence of numbers then play corresponding audio files
        #include <windows.h>
        #include <mmsystem.h>
        #include <iostream>
        #pragma comment(lib, "winmm.lib")
        using namespace std;

        int main()
        {
            int i;
            i = 0; // set the value of i
            while (i <= 11) // set the loop to run 11 times
            {
                int number;
                number = rand() % 10 + 1; // generate a random number sequence
                // cycling through the numbers to find the right wav and play it
                if (number == 0)
                {
                    PlaySound("0.wav", NULL, SND_FILENAME); // play the random number
                }
                else if (number == 1)
                {
                    PlaySound("1.wav", NULL, SND_FILENAME); // play the random number
                }
                // else ifs repeat to 11...
                i++; // increment i
            }
            return 0;
        }

    I've tried absolute and relative paths for the wavs, and the file size of each is under 1Mb. I've read another thread here on the subject: http://stackoverflow.com/questions/1565439/how-to-playsound-in-c

    As you may well have guessed, this is my first C++ program, so my knowledge of where to go next is limited. I've tried pretty much every page Google has on the subject, including the MSDN usage page. Any ideas? Thanks in advance...
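    Note: the tail of the message — "ld returned 1 exit status" — points at the MinGW toolchain (used by Dev-C++), which ignores the MSVC-only #pragma comment(lib, "winmm.lib") line, so winmm never reaches the linker and has to be passed explicitly. A sketch of the build command, assuming the source file is named main.cpp:

        g++ main.cpp -o playsound.exe -lwinmm

    In the IDE the equivalent is adding -lwinmm (or winmm) under the project's linker options; the library must come after the source files on the command line.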

    Read the article

  • Windows 2008 as home file server and more

    - by Christian W
    I currently have a freenas-unit as a NAS and a Win2k8R2-unit as a server. However, I would like to consolidate these two units into one. What I really like about the freenas-unit is the ZFS filesystem, and the only reason I care about ZFS is the easy way I can grow an existing filesystem just by inserting a new drive. How would this work in Win2k8?

    Say I set up my unit with a separate drive as C: and a 1TB drive as D:, with D: segmented into D:\Videos, D:\Music and D:\Pictures. When everything gets close to filling the storage drive, I would like to expand the storage, but I wouldn't want to end up with E:\Videos or D:\Videos2 (using the NTFS folder mount thingy). I still want all my videos to reside in D:\Videos, and I want the OS to decide which physical drive they're stored on... some kind of on-the-fly JBOD expansion :) Is this at all possible in Windows 2008?
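    Note: the closest native analogue in Windows Server 2008 is a spanned volume on dynamic disks — convert the data disks to dynamic, then extend D: onto each new drive as it is added; D:\Videos keeps its path and Windows decides where the blocks land. A hedged sketch of the diskpart session, assuming the new drive enumerates as disk 2:

        DISKPART> list disk
        DISKPART> select volume D
        DISKPART> extend disk=2

    list disk confirms the new drive's number, and extend disk=2 spans the selected volume onto that disk's free space. Both disks must be dynamic (Disk Management offers the conversion), and unlike a redundant ZFS pool, a spanned volume has no fault tolerance — one failed drive loses the whole set.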

    Read the article

  • IIS 7 with verisign certificate, invalid certificate returned

    - by bh213
    We have IIS7 on Windows 2008, and we installed a VeriSign certificate and bound it to HTTPS. The certificate seems fine. Chain:

        mysite.com - not expired
        VeriSign International Server CA Class 3 - not expired
        VeriSign Class 3 Public Primary Certification Authority - not expired

    Yet when I use VeriSign's online validation (https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1130#), it reports that the second certificate is expired. This is what it reports (mysite itself is reported to be ok):

        --Issued To--
        Organization: VeriSign Trust Network
        Organizational Unit: www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
        Organizational Unit 2: VeriSign International Server CA - Class 3
        Organizational Unit 3: VeriSign,, Inc.
        --Issued By--
        Organization: VeriSign,, Inc.
        Organizational Unit: Class 3 Public Primary Certification Authority
        Country: US
        Validity Start: Wed Apr 16 17:00:00 PDT 1997
        Validity End: Wed Jan 07 15:59:59 PST 2004

    Any ideas?
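    Note: one way to see which intermediate the server is actually sending (as opposed to what the local store holds) is to dump the presented chain with OpenSSL from another machine; a generic sketch, with mysite.com standing in for the real hostname:

        openssl s_client -connect mysite.com:443 -showcerts < /dev/null
        # paste the second BEGIN/END CERTIFICATE block into intermediate.pem, then:
        openssl x509 -in intermediate.pem -noout -subject -enddate

    If the enddate matches the expired 2004 date above, the server is serving a stale intermediate: replacing it in the Windows certificate store (Intermediate Certification Authorities) with VeriSign's current Class 3 intermediate and restarting IIS should clear the report.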

    Read the article

  • Build a user's profile directory on creation in batch

    - by Moses
    I have a batch script that I use when I set up new Windows 7 PCs. It creates a user based on a variable, creates a folder on their desktop, then shares it:

        @echo off
        SET /p unitnumber="Enter unit number: "
        net user unit%unitnumber% password /add /expire:never
        MD "C:/Users/unit%unitnumber%/Desktop/Accounting #%unitnumber%"
        runas /user:administrator "net share "Accounting#%unitnumber%"="C:/Users/unit%unitnumber%/Desktop/Accounting#%unitnumber%""

    I discovered that the share that is created is overwritten when the newly created user first logs on, because Windows builds their profile directory at that time. Is there any way to trigger the build of a user's profile directory in the batch file just after creating the account? The only thing that looks useful is the /homedir:pathname switch for the net user command, but I believe that option assumes the directory already exists. Other than that, web research hasn't been fruitful. I'd be open to using whatever gets this done, as long as I can incorporate/launch it from the batch. Any suggestions?
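    Note: one hedged shortcut — runas loads the target user's profile by default (/profile makes it explicit), and loading the profile is what makes Windows build C:\Users\unit<n>. Running a throwaway command as the new user right after the net user line should therefore create the directory before the MD and net share lines execute, though it does prompt for the password interactively:

        runas /profile /user:unit%unitnumber% "cmd /c exit"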

    Read the article

  • Asus EEE PC 1005HA (XP Home) refuses to connect to Virgin Mobile MiFi

    - by Dennis Wurster
    My client has an Asus EEE PC model 1005HA, and we're attempting to connect it to the WiFi network created by a Virgin Mobile MiFi unit. They also have a MacBook Pro with Snow Leopard that has absolutely no issue connecting to the MiFi. The specific symptom is that the netbook fails to lease an IP address from the MiFi unit: I supply the 12-digit numerical password (WPA) to the netbook, it throws a "waiting for network" dialog with an indeterminate progress indicator, and then times out.

    Update: the behavior stopped when the EEE PC and the MiFi unit were taken out of the client's home and to a different home that didn't have an existing WiFi network. Similarly, when taken to a third location without WiFi, the EEE PC and MiFi got along swimmingly. My current theory is that the existing WiFi networks and the WiFi leg of the MiFi unit are on the same channel and competing with one another. Perhaps the MacBook Pro has the capability to overcome this interference, while the EEE PC doesn't.

    Read the article

  • quick java question

    - by j-unit-122
    private static char[] quicksort(char[] array, int left, int right) {
        if (left < right) {
            int p = partition(array, left, right);
            quicksort(array, left, p - 1);
            quicksort(array, p + 1, right);
        }
        for (char i : array)
            System.out.print(i + " ");
        System.out.println();
        return array;
    }

    private static int partition(char[] a, int left, int right) {
        char p = a[left];
        int l = left + 1, r = right;
        while (l < r) {
            while (l < right && a[l] < p) l++;
            while (r > left && a[r] >= p) r--;
            if (l < r) {
                char temp = a[l];
                a[l] = a[r];
                a[r] = temp;
            }
        }
        a[left] = a[r];
        a[r] = p;
        return r;
    }

    Hi guys, just a quick question regarding the above code. I know that it produces the following output when run on the sequence BIGCOMPUTER:

        B I G C O M P U T E R
        B C E G I M P U T O R
        B C E G I M P U T O R
        B C E G I M P U T O R
        B C E G I M P U T O R
        B C E G I M O P T U R
        B C E G I M O P R T U
        B C E G I M O P R T U
        B C E G I M O P R T U
        B C E G I M O P R T U
        B C E G I M O P R T U
        B C E G I M O P R T U
        B C E G I M O P R T U

    My question is: can someone explain what is happening in the code, and how? I know a bit about the quicksort algorithm, but it doesn't seem to be the same in the above example.
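    Note: for anyone wanting to reproduce the trace, a minimal harness — assuming the two methods above live in the same class — that runs the sort on the same input:

        public class QuickSortDemo {
            public static void main(String[] args) {
                char[] data = "BIGCOMPUTER".toCharArray();
                quicksort(data, 0, data.length - 1); // prints the array once per call
            }
            // ... quicksort(...) and partition(...) exactly as above ...
        }

    Each printed row is one quicksort invocation returning; calls where left >= right skip the partition but still print, which is why the fully sorted array repeats at the end.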

    Read the article

  • How can I keep the logic to translate a ViewModel's values to a Where clause to apply to a linq query out of my Controller?

    - by Mr. Manager
    This same problem keeps cropping up. I have a ViewModel that doesn't have any persistent backing; it is just a ViewModel to generate a search input form. I want to build a large Where clause from the values the user entered. If the action accepts a SearchViewModel as a parameter, how do I do this without passing my ViewModel to my service layer? Services shouldn't know about ViewModels, right? And if I serialize it, it would be a big string and the key/values wouldn't be strongly typed.

    SearchViewModel (this is just a snippet):

        [Display(Name="Address")]
        public string AddressKeywords { get; set; }

        /// <summary>
        /// Gets or sets the census.
        /// </summary>
        public string Census { get; set; }

        /// <summary>
        /// Gets or sets the lot block sub.
        /// </summary>
        public string LotBlockSub { get; set; }

        /// <summary>
        /// Gets or sets the owner keywords.
        /// </summary>
        [Display(Name="Owner")]
        public string OwnerKeywords { get; set; }

    In my controller action I was thinking of something like this, but all this logic doesn't belong in my controller:

        ActionResult GetSearchResults(SearchViewModel model)
        {
            var query = service.GetAllParcels();
            if (model.Census != null)
            {
                query = query.Where(x => x.Census == model.Census);
            }
            if (model.OwnerKeywords != null)
            {
                query = query.Where(x => x.Owners == model.OwnerKeywords);
            }
            return View(query.ToList());
        }
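    Note: one way to keep both the controller and the service clean is to put the translation in a mapper that lives with the ViewModel in the web project — the service layer keeps exposing IQueryable<Parcel> and never sees SearchViewModel. A hedged sketch; the class and method names are illustrative:

        using System.Linq;

        // Lives next to SearchViewModel in the web project, so the service
        // layer never takes a dependency on a ViewModel type.
        public static class SearchViewModelFilters
        {
            public static IQueryable<Parcel> ApplyTo(this SearchViewModel model, IQueryable<Parcel> query)
            {
                if (!string.IsNullOrEmpty(model.Census))
                    query = query.Where(x => x.Census == model.Census);
                if (!string.IsNullOrEmpty(model.OwnerKeywords))
                    query = query.Where(x => x.Owners == model.OwnerKeywords);
                return query;
            }
        }

        // The action then shrinks to:
        public ActionResult GetSearchResults(SearchViewModel model)
        {
            return View(model.ApplyTo(service.GetAllParcels()).ToList());
        }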

    Read the article

  • one more time about loop that doesn't work

    - by unit
    I have asked a couple of questions about this for loop:

        String[] book = new String[ISBN_NUM];
        bookNum.replaceAll("-", "");
        if (bookNum.length() != ISBN_NUM)
            throw new ISBNException("ISBN " + bookNum + " must be 10 characters");
        for (int i = 0; i < bookNum.length(); i++) {
            if (Character.isDigit(bookNum.charAt(i)))
                book[j] = bookNum.charAt(i); // this is the problem right here
            j++;
            if (book[9].isNotDigit() || book[9] != "x" || book[9] != "X")
                throw new ISBNException("ISBN " + bookNum + " must contain all digits"
                        + " or 'X' in the last position");
        }

    It will not compile. An answer to another question I asked told me that the problem line is wrong because bookNum.charAt(i) can't be assigned into a String array that way, strings being immutable. What I need to do for my assignment is check an ISBN number (bookNum) to see that it is all numbers, except that the last digit can be an 'x' (a valid ISBN). Is this the best way to do it? If so, what the hell am I doing wrong? If not, what method would be a better one to use?
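    Note: for comparison, a hedged sketch of one way the check could go — assuming ISBN_NUM is 10 and ISBNException is as in the original. Two things change: replaceAll returns a new string (it never modifies bookNum in place), and a char[] rather than a String[] matches what charAt yields:

        bookNum = bookNum.replaceAll("-", "");   // keep the result: strings are immutable
        if (bookNum.length() != ISBN_NUM)
            throw new ISBNException("ISBN " + bookNum + " must be 10 characters");
        char[] book = bookNum.toCharArray();     // char[] matches charAt's type
        for (int i = 0; i < book.length; i++) {
            boolean last = (i == book.length - 1);  // only the last slot may be 'x'/'X'
            boolean okDigit = Character.isDigit(book[i]);
            boolean okX = last && (book[i] == 'x' || book[i] == 'X');
            if (!okDigit && !okX)
                throw new ISBNException("ISBN " + bookNum
                        + " must contain all digits or 'X' in the last position");
        }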

    Read the article

  • Getting text boxes on a webpage

    - by C-UNIT
    What I am trying to do is make an application (just for fun, not a lot of purpose) where, when you launch it, an interface comes up consisting of a Username: textbox, a Password: textbox, and a Connect button. When you fill in your information in the textboxes and click Connect, it navigates to facebook.com, puts in the information you typed into the textboxes, and logs you in. Any ideas, guys?
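    Note: assuming this is a WinForms app, one common approach is the WebBrowser control — navigate to the login page, then fill the form fields by DOM id once the document loads. A rough sketch; the element ids ("email", "pass", "login_form") are assumptions about Facebook's login page and may change, and txtUsername/txtPassword stand in for the form's textboxes:

        // In the Connect button's click handler:
        private void btnConnect_Click(object sender, EventArgs e)
        {
            webBrowser1.DocumentCompleted += FillLoginForm;
            webBrowser1.Navigate("https://www.facebook.com");
        }

        private void FillLoginForm(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            var doc = webBrowser1.Document;
            doc.GetElementById("email").SetAttribute("value", txtUsername.Text); // assumed id
            doc.GetElementById("pass").SetAttribute("value", txtPassword.Text);  // assumed id
            doc.GetElementById("login_form").InvokeMember("submit");             // assumed id
            webBrowser1.DocumentCompleted -= FillLoginForm;                      // fire once
        }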

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >