Search Results

Search found 22170 results on 887 pages for 'multiple schema'.


  • BizTalk: namespaces

    - by Leonid Ganeline
    The BizTalk team did a good job hiding the .NET guts from developers. Developers work with editors and hardly ever with .NET code. The Orchestration editor, the Mapper, the Schema editor, the Pipeline editor: all these editors hide what is going on with the artifacts that get created and deployed. Working with BizTalk artifacts year after year gives us some knowledge that helps us understand more about those .NET guts. Here I would like to highlight the .NET namespaces: what they are, and how they influence our everyday tasks in BizTalk application development.

    What is it?
    Most BizTalk artifacts are compiled into .NET classes (not all of them, as I will show later), and classes are placed inside namespaces. I will not describe here why we need namespaces; I assume you know more about that than I do. I only want to emphasize that almost every BizTalk artifact is implemented as a .NET class within a .NET namespace.

    Where to see the namespaces in development?
    The namespaces are inconsistently spread across the artifact parameters. Let's start with namespace placement in development, then move on to namespaces in deployment and operations. I am using pictures from the BizTalk Server 2013 Beta and Visual Studio 2012, but there have been no changes regarding namespaces since BizTalk 2006.

    Default namespace
    When a new BizTalk project is created, the default namespace is set to the name of the project. This namespace is used for all new BizTalk artifacts added to the project.

    Orchestrations
    When we select the green or red markers (the Begin and End orchestration shapes) we see the orchestration Properties window. We can also click anywhere on the space between the Port Surfaces to see this window.

    Schemas
    The only way to see the .NET namespace for a schema is to select the schema file name in the Solution Explorer. Notes: We can also see the Type Name parameter; it is the name of the corresponding .NET class. We can also see the Fully Qualified Name parameter. We cannot see the schema namespace when selecting any node on the schema editor surface; only selecting the schema file name gives us the namespace parameter. If we select the <Schema> node we get the Target Namespace parameter of the schema. This is NOT the .NET namespace! It is an XML namespace: inside the XML schema it is shown as the special targetNamespace attribute, and inside an XML document itself it appears as the special xmlns attribute.

    Maps
    It is similar to the schemas. The only way to see the .NET namespace for a map is to select the map file name in the Solution Explorer.

    Pipelines
    It is similar to the schemas. The only way to see the .NET namespace for a pipeline is to select the pipeline file name in the Solution Explorer.

    Ports, Policies and Tracking Profiles
    The Send and Receive Ports, the Policies and the BAM Tracking Profiles do not create .NET classes, so they do not have associated .NET namespaces.

    How to copy artifacts?
    Since new versions of BizTalk Server keep going to production, I am spending more and more time redesigning and refactoring BizTalk applications, so it is good to know how the refactoring process copes with the .NET namespaces. Let's see what happens to the namespaces when we copy artifacts from one project to another. Here is an example: I am going to group the artifacts under project folders. So I created a Group folder, ran the Add / Existing Item.. command, and chose all the artifacts in the project root. Copies of the artifacts were created in the Group folder. What happened to the namespaces of the artifacts? As you can see, the folder name, "Group", was added to the namespace. It is great! When I added a folder, I added one more level to the name hierarchy, and the namespace change simply reflects this hierarchy change. The same namespace adjustment happens when we copy BizTalk artifacts between projects. But there is an issue with the namespace of an orchestration: it is not changed. The namespaces of the schemas, maps and pipelines are changed, but not the orchestration namespace; I have to change the orchestration namespace manually. Now another example: I create a new Project folder and move the artifacts there from the project root by drag and drop. We notice that the artifact namespaces are not changed. Another example: I copy the artifacts from the project root by (drag and drop) + Ctrl. We notice that the artifact namespaces are changed; it works exactly as with the Add / Existing Item.. command.

    Conclusion
    The namespace parameter is placed inconsistently in different places for different artifacts. Copying artifacts changes the namespaces of schemas, maps and pipelines, but not of orchestrations.
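
    As a footnote to the Schemas section above, here is a hypothetical fragment (the namespace URI and element name are made up) showing where the two namespaces live. The targetNamespace attribute below is the XML namespace of the schema and has nothing to do with the .NET namespace of the compiled schema class, which is a separate Visual Studio property:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://example.com/order"
                   xmlns="http://example.com/order">
          <xs:element name="Order" type="xs:string" />
        </xs:schema>

        <!-- In an instance document the same XML namespace appears as xmlns -->
        <Order xmlns="http://example.com/order">ORD-001</Order>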

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that currently searches across two APIs. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the two APIs and concatenate the results into one set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity they add? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients). It is also more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
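
    A minimal sketch of the single-service fan-out, assuming each API is wrapped as a Callable<String> (the names here are illustrative, not any specific framework's API): every call runs concurrently, and an API that fails or hangs simply drops out of the combined result instead of stalling it.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class ApiAggregator {
            private static final ExecutorService pool = Executors.newFixedThreadPool(12);

            // Fan out to all APIs at once; give each 2 seconds before dropping it.
            public static List<String> searchAll(List<Callable<String>> apiCalls) {
                List<CompletableFuture<String>> futures = new ArrayList<>();
                for (Callable<String> call : apiCalls) {
                    futures.add(CompletableFuture
                        .supplyAsync(() -> {
                            try { return call.call(); }
                            catch (Exception e) { return null; }    // failed API: no result
                        }, pool)
                        // completeOnTimeout needs Java 9+; a hung API yields no result
                        .completeOnTimeout(null, 2, TimeUnit.SECONDS));
                }
                List<String> results = new ArrayList<>();
                for (CompletableFuture<String> f : futures) {
                    String r = f.join();        // bounded by the timeout above
                    if (r != null) results.add(r);
                }
                return results;
            }
        }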

    Read the article

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E.

    Image generated with the help of NCTM Graph Creator

    I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):

    - Edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection. In the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them.
    - In some cases, a given pair of nodes will be connected by more than one edge. In the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection.
    - If two computers are connected by more than one cable, each user may own no more than one of those cables.

    I'll need to do several operations on this graph, such as:

    - determining whether any particular pair of computers is connected for a given user
    - identifying the optimal path for a given user to connect target computers
    - identifying the highest-latency computer connection for a given user (i.e. longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
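
    A sketch of that modified adjacency list, with all names hypothetical. Both directions of an edge share one Cable object, so acquiring a cable restricts it consistently, and parallel cables between the same pair of machines are simply separate Cable objects:

        import java.util.*;

        public class Network {
            static final class Cable {
                final double length;
                String owner;                        // null until some user acquires it
                Cable(double length) { this.length = length; }
            }

            static final class Edge {
                final int to;
                final Cable cable;                   // shared by both directions
                Edge(int to, Cable cable) { this.to = to; this.cable = cable; }
            }

            private final Map<Integer, List<Edge>> adjacency = new HashMap<>();

            void addCable(int a, int b, double length) {
                Cable c = new Cable(length);         // parallel cables = repeated calls
                adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(new Edge(b, c));
                adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(new Edge(a, c));
            }

            // The per-user view: unclaimed cables plus cables this user already owns.
            List<Edge> usableFrom(int node, String user) {
                List<Edge> out = new ArrayList<>();
                for (Edge e : adjacency.getOrDefault(node, Collections.emptyList())) {
                    if (e.cable.owner == null || e.cable.owner.equals(user)) out.add(e);
                }
                return out;
            }
        }

    Connectivity and path queries for a user then run ordinary BFS/Dijkstra over usableFrom, which also speaks to the multiple-copies question: one shared graph with a per-user filter does the same job as per-user copies.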

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. Nine times out of ten I'm happy just to read a JSON structure out of the DB and send it to the client, but one time in ten I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solving it.) There is a complicating factor in that our data schema is constantly and rapidly changing, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.
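
    A minimal sketch of the "separate business object" option under discussion, assuming the persisted form is a JSON-like map (the field name is hypothetical): the wrapper confines schema knowledge to one place, so only this class changes when the stored shape does, while the raw document still passes to clients verbatim.

        import java.util.Map;

        public final class Order {
            private final Map<String, Object> doc;  // raw persisted form, sent to clients as-is
            public Order(Map<String, Object> doc) { this.doc = doc; }

            // The only schema knowledge the server-side logic depends on lives here.
            public double totalPrice() {
                return ((Number) doc.get("total_price")).doubleValue();
            }

            public Map<String, Object> raw() { return doc; }  // hand straight to the client
        }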

    Read the article

  • 3-column layout with CSS display:table, with the first column having multiple rows?

    - by Damainman
    I am working on a new website which:

    - Has 3 columns, each column being a cell.
    - First column has 3 rows (Logo, Nav, Icons): a div with display: table which wraps around 3 divs with display: table-row.
    - Other two columns only have 1 row, with the middle column being the content area.

    However, since this is my first time using display: table, I am running into some things that aren't so clear to me. I was trying to avoid floating divs. If I need multiple rows with one cell in each row per column, do I embed each cell in a row, or just create each row and not declare the cells? I understand that browsers automatically create the missing elements, but I want to make sure I do this properly to avoid any side effects that might occur due to the browser automatically creating them. Edit: I think my brain is just overworked. I guess I can accomplish this by just using 3 divs in the first column instead of using a nested table div with the rows. This just popped into my head.
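
    A minimal sketch of the structure described above (class names are made up). Each table-row div holds its content directly and the browser wraps it in an anonymous table-cell, which is the harmless case of automatic element generation:

        <style>
          .layout     { display: table; width: 100%; }
          .col        { display: table-cell; vertical-align: top; }
          .stack      { display: table; height: 100%; }  /* nested table in column 1 */
          .stack .row { display: table-row; }
        </style>

        <div class="layout">
          <div class="col">
            <div class="stack">
              <div class="row">Logo</div>
              <div class="row">Nav</div>
              <div class="row">Icons</div>
            </div>
          </div>
          <div class="col">Content</div>
          <div class="col">Sidebar</div>
        </div>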

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question, and the most relevant ones are at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item they are selling; go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we have a link back to the lister's website if they want. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features of our site to boost our ranking, something is wrong - or we start jumping through hoops like trying to hide text using JavaScript, etc. Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention there will be a handful of pages with the bigger clients where it may appear they have duplicate pages, because they will be selling 2 or 3 of the same item and the only difference in the content of the page might be a stock #. The majority of the pages, though, will have unique content. So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own SE rankings.

    Read the article

  • How do you blend multiple colors in HSV (polar) color-space?

    - by Toxikman
    In RGB color space, you can do a weighted multiple-color blend by starting with R = G = B = 0 and then, for a set of colors C and a set of normalized weights w, accumulating at each index i:

        R += w[i] * C[i].r
        G += w[i] * C[i].g
        B += w[i] * C[i].b

    But I'd like to interpolate the colors in the HSV color space instead, so that saturation and brightness are uniform across the interpolation. I know I can blend saturation and brightness in the same way as above, but the hue component is an angle around a continuous circle, since HSV is essentially a polar coordinate system. Blending only two HSV colors makes sense to me: you just find the shortest arc around the circle and interpolate between the two hues. But when you attempt to blend more than two colors, it becomes a bit of a puzzle. You have to handle anomalous cases, like four equally-weighted colors with hues at 0, 90, 180, and 270 degrees - they basically cancel each other out, so any hue will do. Any ideas would be greatly appreciated.
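
    One common approach, offered as a sketch: treat each hue as a unit vector on the circle, take the weighted vector sum, and read the blended hue back with atan2. A near-zero resultant is exactly the degenerate case above (equal weights at 0/90/180/270), where any hue is as good as another:

        public final class HueBlend {
            // hues in degrees; weights assumed normalized to sum to 1
            public static double blendHue(double[] hues, double[] weights) {
                double x = 0, y = 0;
                for (int i = 0; i < hues.length; i++) {
                    double rad = Math.toRadians(hues[i]);
                    x += weights[i] * Math.cos(rad);      // weighted unit vectors
                    y += weights[i] * Math.sin(rad);
                }
                if (Math.hypot(x, y) < 1e-9) return 0.0;  // hues cancel: any hue will do
                double deg = Math.toDegrees(Math.atan2(y, x));
                return deg < 0 ? deg + 360.0 : deg;       // wrap into [0, 360)
            }
        }

    The length of the resultant vector also doubles as a measure of how meaningful the blended hue is: the closer the hues come to cancelling, the shorter it gets.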

    Read the article

  • BizTalk: Tag Identifier for optional records

    - by Mchandak
    I am sure many of us have faced this issue. Problem: my flat file schema has an optional record marked with a tag identifier. We would expect an input message without that optional record to pass schema validation, but by default BizTalk throws an error about the missing record if we try to 'Validate the instance' in the BizTalk mapper. Resolution: on the schema node, set Parser Optimization to "Complexity" instead of the default "Speed" optimization.

    Read the article

  • Strategy to isolate multiple nginx SSL apps on a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

    - rooturl/railsapp
    - rooturl/nodejsapp
    - rooturl/phpshop
    - rooturl/phpblog

    I am unsure of the ideal strategy. Some examples I have seen and/or thought about:

    - Multiple location rules: this seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.
    - Isolating apps by backend internal port: is this possible? Each port routing to its own config, so the config is isolated and can be bespoke to the app's requirements?
    - A reverse proxy: I am a little ignorant of how this works - is this what I need to research, and is it actually the same as the option above? Help online always seems to proxy to another server, e.g. Apache.

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
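
    A minimal nginx sketch of the port-per-app idea, which is itself a reverse proxy (the last two options are the same thing); the ports and paths here are made up. Each app keeps its own bespoke config on its internal port, while the front server only terminates SSL and routes sub-URIs:

        server {
            listen 443 ssl;
            server_name example.com;                 # one certificate for every app

            location /railsapp/ {
                proxy_pass http://127.0.0.1:3000/;   # trailing slash strips the prefix
                proxy_set_header Host $host;
            }

            location /nodejsapp/ {
                proxy_pass http://127.0.0.1:3001/;
                proxy_set_header Host $host;
            }
        }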

    Read the article

  • How should I implement multiple threads in a game? [duplicate]

    - by xerwin
    This question already has an answer here: Multi-threaded games best practices. One thread for 'logic', one for rendering, or more? (6 answers)

    So I recently started learning Java, and having an interest in playing games as well as developing them, naturally I want to create a game in Java. I have experience with games in C# and C++, but all of them were single-threaded simple games. Now that I've learned how easy it is to make threads in Java, I want to take things to the next level. I started thinking about how I would actually implement threading in a game. I read a couple of articles that say the same thing - "usually you have a thread for rendering, one for updating game logic, one for AI, ..." - but I haven't (or didn't look hard enough) found an example of an implementation. My idea for an implementation is something like this (an example for AI):

        public class AIThread implements Runnable {
            private static final long TICK_MS = 16;  // hypothetical game "tick" length
            private volatile boolean running = true;
            private List<AI> ai;
            private Player player;
            /* ... */
            public void run() {
                while (running) {
                    for (int i = 0; i < ai.size(); i++) {
                        ai.get(i).update(player);
                    }
                    try {
                        Thread.sleep(TICK_MS);       // sleep until the next game "tick"
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }

    I think this could work if I also had a rendering and an updating thread, each holding the list of AI, since I need to draw the AI and calculate the logic between player and AI (that could be moved into AIThread, but this is just an example). Coming from C++, I'm used to doing things elegantly and efficiently, and this seems like neither. So what would be the correct way to handle this? Should I just keep multiple copies of the resources in each thread, or should I have the resources in one spot, declared with the synchronized keyword? I'm afraid that could cause deadlocks, but I'm not yet qualified enough to know when code will produce a deadlock.

    Read the article

  • Why are transactions not rolling back when using SpringJUnit4ClassRunner/MySQL/Spring/Hibernate

    - by Trevor
    I am doing unit testing and I expect that all data committed to the MySQL database will be rolled back... but this isn't the case. The data is being committed, even though my log was showing that the rollback was happening. I've been wrestling with this for a couple of days, so my setup has changed quite a bit; here's my current setup.

    LoginDAOTest.java:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = {"file:web/WEB-INF/applicationContext-test.xml",
                                           "file:web/WEB-INF/dispatcher-servlet-test.xml"})
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        public class UserServiceTest {
            private UserService userService;

            @Test
            public void should_return_true_when_user_is_logged_in() throws Exception {
                String[] usernames = {"a", "b", "c", "d"};
                for (String username : usernames) {
                    userService.logUserIn(username);
                    assertThat(userService.isUserLoggedIn(username), is(equalTo(true)));
                }
            }
        }

    ApplicationContext-test.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://www.springframework.org/schema/aop
                                   http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
                                   http://www.springframework.org/schema/tx
                                   http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">

            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
                <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
                <property name="url" value="jdbc:mysql://localhost:3306/test"/>
                <property name="username" value="root"/>
                <property name="password" value="Ecosim07"/>
            </bean>

            <tx:annotation-driven transaction-manager="transactionManager"/>

            <bean id="userService" class="Service.UserService">
                <property name="userDAO" ref="userDAO"/>
            </bean>

            <bean id="userDAO" class="DAO.UserDAO">
                <property name="hibernateTemplate" ref="hibernateTemplate"/>
            </bean>

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
                <property name="dataSource" ref="dataSource"/>
                <property name="mappingResources">
                    <list>
                        <value>/himapping/User.hbm.xml</value>
                        <value>/himapping/setup.hbm.xml</value>
                        <value>/himapping/UserHistory.hbm.xml</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                    </props>
                </property>
            </bean>

            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"
                  p:sessionFactory-ref="sessionFactory"/>

            <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate">
                <property name="sessionFactory">
                    <ref bean="sessionFactory"/>
                </property>
            </bean>
        </beans>

    I have been reading about the issue, and I've already checked that the MySQL database tables are set up to use InnoDB. I have also been able to successfully implement rolling back of transactions outside of my testing suite. So this must be some sort of incorrect setup on my part. Any help would be greatly appreciated :)

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true:

    - When the traffic is HTTP, most of the HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up (read here, chapter 3.1, and here).
    - When multiple data centers are involved, DNS RR is the only option to distribute traffic across them.

    So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino

    Edit: Of course each data center has a local load balancer with a hot spare. It's OK to sacrifice session affinity for instant fail-over. AFAIK the only way for a DNS server to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable, then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So "smart DNS" cannot assure instant fail-over; conversely, DNS round robin permits it. When one data center fails, smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So DNS round robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers.

    Edit 2: Some people suggest TCP Anycast as a definitive solution. In this paper (chapter 6) it is explained that Anycast fail-over is related to BGP convergence; for this reason, Anycast fail-over can take anywhere from 20 seconds to 15 minutes to complete. 20 seconds is possible on networks whose topology has been optimized for this; probably only CDN operators can grant such fast fail-overs.

    Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double-check) and: the only CDN using TCP Anycast seems to be CacheFly; other operators like CDNetworks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies, therefore they cannot be used to grant instant fail-over. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled as to how they can offer a 100% SLA when one data center goes down.

    Read the article

  • cxf jaxws with spring on gwt 2.0

    - by Karl
    Hi, I'm trying to use an application which uses CXF JAX-WS in its bean definition:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:sec="http://cxf.apache.org/configuration/security"
               xmlns:jaxws="http://cxf.apache.org/jaxws"
               xmlns:util="http://www.springframework.org/schema/util"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                                   http://cxf.apache.org/configuration/security
                                   http://cxf.apache.org/schemas/configuration/security.xsd
                                   http://cxf.apache.org/jaxws
                                   http://cxf.apache.org/schemas/jaxws.xsd
                                   http://www.springframework.org/schema/util
                                   http://www.springframework.org/schema/util/spring-util-2.0.xsd">

    However, in combination with the Jetty from the GWT 2.0 development shell, my context doesn't load and I get this exception:

        org.springframework.web.context.ContextLoader: Context initialization failed
        org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem:
        Unable to locate Spring NamespaceHandler for XML schema namespace [http://cxf.apache.org/jaxws]
        Offending resource: class path resource [bean_definition.xml]

    My project is Maven-based, and I have cxf-rt-frontend-jaxws - which contains the namespace handler and spring.handlers - on the classpath. I added cxf-transports-http-jetty.jar as well. Has anyone experienced this kind of problem and found a solution? It seems to be a classpath issue: I added the cxf-rt-frontend-jaxws.jar by hand and it works... Somehow the Maven dependency doesn't get added to the classpath. Thanks in advance, karl

    Read the article

  • OpenEjb DeploymentLoader.getWebDescriptors NullPointerException

    - by jwmajors81
    I am currently getting the following error when I try to start up an EAR that includes OpenEJB:

        java.lang.NullPointerException
            at org.apache.openejb.config.DeploymentLoader.getWebDescriptors(DeploymentLoader.java:1057)
            at org.apache.openejb.config.DeploymentLoader.discoverModuleType(DeploymentLoader.java:1162)
            at org.apache.openejb.config.DeploymentsResolver.processUrls(DeploymentsResolver.java:294)
            at org.apache.openejb.config.DeploymentsResolver.loadFromClasspath(DeploymentsResolver.java:248)
            at org.apache.openejb.spring.ClassPathApplication.loadApplications(ClassPathApplication.java:56)
            at org.apache.openejb.spring.AbstractApplication.deployApplication(AbstractApplication.java:106)
            at org.apache.openejb.spring.OpenEJB.deployApplication(OpenEJB.java:342)
            at org.apache.openejb.spring.AbstractApplication.start(AbstractApplication.java:94)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:618)
            at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:340)

    The environment consists of:

    - Spring 3.0
    - OpenEJB 3.1
    - WebSphere 6.1 (without EJB/Web Service feature packs)
    - OpenEJB-Spring jar

    The app-config.xml that configures Spring and OpenEJB looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:p="http://www.springframework.org/schema/p"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context
                   http://www.springframework.org/schema/context/spring-context-3.0.xsd"

    Also please note that I do have a JUnit test that runs against the app-config.xml specified above and can successfully load the application context. The error only appears when I deploy to WebSphere. Any assistance would be greatly appreciated. Thanks, Jeremy

    Read the article

  • ActiveMQ broker configuration error when specifying persistenceAdapter: "One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected"

    - by Joe
    I am setting up a simple ActiveMQ embedded broker. It works fine until I try to configure a persistence adapter. I am basically just copying the configuration from http://activemq.apache.org/persistence.html#Persistence-ConfiguringKahaPersistence. When I add this configuration to my Spring configuration, like so:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://activemq.apache.org/schema/core
                                   http://activemq.apache.org/schema/core/activemq-core-5.3.0.xsd">

            <amq:broker useJmx="true" persistent="true" brokerName="localhost">
                <amq:transportConnectors>
                    <amq:transportConnector name="vm" uri="vm://localhost"/>
                </amq:transportConnectors>
                <amq:persistenceAdapter>
                    <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
                </amq:persistenceAdapter>
            </amq:broker>
        </beans>

    I get the error:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'amq:persistenceAdapter'.
        One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.

    When I take out the amq:persistenceAdapter element, it works fine. The same error happens no matter which persistence adapter I include in the body, e.g. jdbc, journal, etc. Any help would be greatly appreciated. Thanks.

    Read the article

  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my new favorite things with SQL Server 2008. I am currently working on designing a new data warehouse. There are two restrictions on doing this:

    - It has to be fed from the old legacy system with both historical data and new data.
    - It has to be fed from the new business system with new data.

    When we incorporate the new business system, we are going to do that for one market only. That means the old legacy business system will still produce new data for the other markets (together with historical data for all markets), while the new business system produces new data for that one market only. Sounds interesting so far? To accomplish this I did thorough research into the business intelligence requirements, then went on to design the sucker. How does this relate to filtered indexes, you ask? I'll give one example: the stock transaction table. The key columns for the old legacy system are different from the key columns in the new business system. The old legacy system has a key of five columns:

    - Movement date
    - Movement time
    - Product code
    - Order number
    - Sequence number within shipment

    On top of that, I found out that the Movement Time column is not really a time. It starts out like a time, HH:MM:SS, but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for the shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accurate down to 100 nanoseconds). So how to deal with this? One option would be to have two stock transaction tables, one for the legacy system and one for the new business system, but that would lead to maintenance overhead and to using partitioned views for getting data out of the warehouse. Filtered indexes are of great use here:

        MovementDate    DATETIME2(7)
        MovementTime    CHAR(8)      NULL
        ProductCode     VARCHAR(15)  NOT NULL
        OrderNumber     VARCHAR(30)  NULL
        SequenceNumber  INT          NULL

    The sequence number is not even used in the new system, so I created a new IDENTITY column, with the clustered index on it, to act as a surrogate key that can be shared by both systems. Then I created one unique filtered index for the old system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy (MovementDate, MovementTime, ProductCode, SequenceNumber)
            INCLUDE (OrderNumber, Col5, Col6, ... )
            WHERE SequenceNumber IS NOT NULL

    And then I created a unique filtered index for the new business system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Business (MovementDate)
            INCLUDE (ProductCode, OrderNumber, Col12, ... )
            WHERE SequenceNumber IS NULL

    This way I can have multiple sets of key columns on the same base table, shared by both systems.

    Read the article

  • Evaluating Scrum - is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality. What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires say in/approval of everything that we do. First things first, is a Scrum style something to consider to introduce some standards and practices? From reading, Scrum seems to rely on a bit more trust and communication and focuses more on project management than on development, which is something we are completely devoid of as we don't have any semblance of project management at present. Second, if it can work is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how but get it done" type of deal and/or any freeze would be unfrozen on a whim). It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance Pair Programming would never fly (seen as a waste, you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possibilities for why it isn't doing well. The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions and H tags, but it hasn't been long enough yet for Google to index many of these. I also looked at what pages Google has indexed, and strangely it has some odd pages in the index: there are a lot of pages which are actually keyword searches (more a bunch of random letters than actual words). What I mean is that it is as if someone had typed something into my search box - there are no links to pages like this, and the only way of getting them is to type something into the search box. So I added a META robots tag with noindex,nofollow whenever I render pages like this. Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped formatting out of them, so they are not identical except in text). I also have a few blog sites which again are on the same server/IP and all have advertising for the website. My questions are:

    - What should I do with the multiple domains: just use one, or continue with two or more?
    - Should I add 301 redirects, not just the canonical tag?
    - Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag?
    - Is the fake price comparison site likely to be causing problems?
    - Are all the links to the site from other domain names but the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.

    Read the article

  • Translating multiple objects in a GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget relative to the center of the widget:

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All: goalPosition = position; break;
            case AxisSelected.None: break;
            case AxisSelected.X: goalPosition.X = position.X; break;
            case AxisSelected.Y: goalPosition.Y = position.Y; break;
            case AxisSelected.Z: goalPosition.Z = position.Z; break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;
        objectSelected.Position = p;

    I would like to move multiple objects on the same principle, using a widget located at the average position of all the objects currently selected. I thought I would have to translate each object based on its offset from the average point and then include the local offset:

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All: goalPosition = position; break;
            case AxisSelected.None: break;
            case AxisSelected.X: goalPosition.X = position.X; break;
            case AxisSelected.Y: goalPosition.Y = position.Y; break;
            case AxisSelected.Z: goalPosition.Z = position.Z; break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;

        int numSelectedObjects = objectSelectedList.Count;
        for (int i = 0; i < numSelectedObjects; ++i)
        {
            objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
        }

    This doesn't work: the objects start shaking, which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html ... for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP/MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes me to notice. Here's the offending line in apache.conf:

        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
        </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way: the HTML is generated correctly, but all CSS/JS/images in the HTML fail to load, and clicking any link redirects to /. I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

        <IfModule mod_rewrite.c>
            # www.example.com -> example.com
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

            # Remove query strings from static resources
            RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
            RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
            RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

            # Block access to hidden files and directories
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]

            # SLIR ... reroute images to image processor
            RewriteCond %{REQUEST_URI} ^/images/.*$
            RewriteRule ^.*$ - [L]

            # ignore rules if URL is a file
            RewriteCond %{REQUEST_FILENAME} !-f
            # ignore rules if URL is not php
            #RewriteCond %{REQUEST_URI} !\.php$
            # catch-all for routing
            RewriteRule . index.php [L]
        </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755. But that doesn't seem to be enough. My Unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyway, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
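
    One possible direction, sketched with placeholder domain and account names: skip the symlinks entirely by pointing every domain at the one real public_html through a shared vhost, so SymLinksIfOwnerMatch has nothing to break. Under cPanel a block like this would live in a custom include file rather than apache.conf itself, since apache.conf keeps getting regenerated:

        <VirtualHost *:80>
            # All 20 domains share one codebase on disk - no symlinks involved.
            ServerName example-main.com.au
            ServerAlias example-two.net example-three.com example-four.net.au
            DocumentRoot /home/mainaccount/public_html
        </VirtualHost>

    Note this trades away cPanel's per-account separation (one account owns everything), so it is a sketch of the direction rather than a drop-in fix.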

    Read the article

  • MySQL Full-Text Search Across Multiple Tables - Quick/Long Solution?

    - by Kerry
    Hello all, I have been doing a bit of research on full-text searches as we realized a series of LIKE statements are terrible. My first find was MySQL full-text searches. I tried to implement this and it worked on one table, failed when I was trying to join multiple tables, and so I consulted stackoverflow's articles (look at the end for a list of the ones I've been to) I didn't see anything that clearly answered my questions. I'm trying to get this done literally in an hour or two (quick solution) but I also want to do a better long term solution. Here is my query: SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`, c.`image`, c.`swatch`, e.`name` AS industry FROM `products` AS a LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`) LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`) LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`) INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`) WHERE b.`website_id` = 96 AND b.`status` = 1 AND b.`active` = 1 AND MATCH( a.`name`, a.`sku`, a.`description`, d.`name` ) AGAINST ( 'ashley sofa' ) GROUP BY a.`product_id` ORDER BY b.`sequence` LIMIT 0, 9 The error I get is: Incorrect arguments to MATCH If I remove d.name from the MATCH statement it works. I have a full-text index on that column. I saw one of the articles say to use an OR MATCH for this table, but won't that lose the effectiveness of being able to rank them together or match them properly? Other places said to use UNIONs but I don't know how to do that properly. Any advice would be greatly appreciated. In the idea of a long term solution it seems that either Sphinx or Lucene is best. Now by no means and I a MySQL guru, and I heard that Lucene is a bit more complicated to setup, any recommendations or directions would be great. Articles: http://stackoverflow.com/questions/1117005/mysql-full-text-search-across-multiple-tables http://stackoverflow.com/questions/668371/mysql-fulltext-search-across-1-table http://stackoverflow.com/questions/2378366/mysql-how-to-make-multiple-table-fulltext-search http://stackoverflow.com/questions/737275/pros-cons-of-full-text-search-engine-lucene-sphinx-postgresql-full-text-searc http://stackoverflow.com/questions/1059253/searching-across-multiple-tables-best-practices

    Read the article

  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article
