Search Results

Search found 3148 results on 126 pages for 'angularjs scope'.

  • Securing XML messages read from a database into an app

    - by scope-creep
    I have an app that reads XML from a database using an NHibernate DAL. The DAL calls stored procedures to read the data from the schema, encapsulates it in an XML message, wraps it up and enqueues it on an internal queue for processing. I would like to secure the channel from the database read to the dequeue action. What would be the best way to do it? I was thinking of signing the XML using the System.Security.Cryptography.Xml namespace, but are there any other techniques or approaches I need to know about? Any help would be appreciated. Bob.
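
    A minimal sketch of the signing idea mentioned above, using SignedXml from System.Security.Cryptography.Xml. It assumes an RSA key shared between the writer and the dequeuing reader; the class name and key handling here are hypothetical:

        using System.Security.Cryptography;
        using System.Security.Cryptography.Xml;
        using System.Xml;

        static class XmlMessageSigner
        {
            // Append an enveloped XML-DSIG signature before the message is enqueued.
            public static void Sign(XmlDocument doc, RSA key)
            {
                var signedXml = new SignedXml(doc) { SigningKey = key };
                var reference = new Reference("");   // "" = sign the whole document
                reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
                signedXml.AddReference(reference);
                signedXml.ComputeSignature();
                doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
            }

            // Verify at dequeue time; discard the message if this returns false.
            public static bool Verify(XmlDocument doc, RSA key)
            {
                var signatures = doc.GetElementsByTagName("Signature", SignedXml.XmlDsigNamespaceUrl);
                if (signatures.Count == 0) return false;

                var signedXml = new SignedXml(doc);
                signedXml.LoadXml((XmlElement)signatures[0]);
                return signedXml.CheckSignature(key);
            }
        }

    Signing covers integrity and origin only; if the messages also need confidentiality between the read and the dequeue, EncryptedXml or transport-level protection would be the complementary piece.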

    Read the article

  • Windows 2008 unable to execute a C# PowerShell app; returns an access exception

    - by scope-creep
    Hi, does anybody know why I can't access the folder where my PowerShell scripts are in Windows 2008 Enterprise? When I try to create a script with TextPad it craps out. When I try to execute a C# PowerShell app, which is stored on another Win 2003 drive, it craps out with an access exception as well. I've set the PowerShell execution policy to unrestricted for both normal users and admin users, with 'run as admin' on PowerShell, but it doesn't seem to make a difference. There must be a policy setting that doesn't allow scripts access to a directory, but where is it, and how do I set it? Any help would be appreciated. scope_creep
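
    One way to narrow this down: check which identity the app actually runs under and whether that identity can even list the script folder. A rough diagnostic sketch (the UNC path is a made-up placeholder):

        using System;
        using System.IO;
        using System.Security.Principal;

        class AccessProbe
        {
            static void Main()
            {
                // Hypothetical location of the scripts on the Win 2003 drive.
                string scriptDir = @"\\win2003box\scripts";

                // Elevating PowerShell does not elevate this process, so print
                // the identity this app actually runs as.
                Console.WriteLine("Running as: " + WindowsIdentity.GetCurrent().Name);
                try
                {
                    foreach (string file in Directory.GetFiles(scriptDir, "*.ps1"))
                        Console.WriteLine("Visible: " + file);
                }
                catch (UnauthorizedAccessException ex)
                {
                    // Points at NTFS/share permissions rather than execution policy.
                    Console.WriteLine("Access denied: " + ex.Message);
                }
            }
        }

    If the probe fails, the culprit is NTFS or share permissions (or UAC filtering the admin token) rather than the execution policy.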

    Read the article

  • Configure VS 2010 Help for a specific subject

    - by scope-creep
    Using VS2008, you could set Document Explorer to limit your search to specific subjects using the Technology dropdown, which made finding info on a specific subject very easy, as the search was limited to a subset of the available subjects. How is this accomplished in the new VS2010 help? The VS2010 help at the moment is very hazy. When I search for Task, or task, or c# task, regarding the new Task library in .NET, it returns a whole bundle of irrelevancy... Any ideas?

    Read the article

  • C#: Any way to detect if an object is locked?

    - by scope-creep
    Hi, is there any way to determine if an object is locked in C#? I'm in the unenviable position, through design, where I'm reading from a queue inside a class, and I need to dump the contents into a collection in the class. But that collection is also read from and written to through an interface outside the class. So obviously there may be a case where the collection is being written to at the same time I want to write to it. I could program around it, say using a delegate, but it would be ugly. Bob.
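
    A sketch of the usual pattern: every writer takes the same private lock object, and Monitor.TryEnter doubles as a non-blocking "is it locked right now?" probe (the class and member names are invented):

        using System.Collections.Generic;
        using System.Threading;

        class SharedBuffer
        {
            private readonly object _sync = new object();
            private readonly Queue<string> _items = new Queue<string>();

            // Blocking write: waits until the lock is free.
            public void Add(string item)
            {
                lock (_sync) { _items.Enqueue(item); }
            }

            // Non-blocking probe: returns false if another thread holds the lock.
            public bool TryAdd(string item)
            {
                if (!Monitor.TryEnter(_sync)) return false;
                try { _items.Enqueue(item); return true; }
                finally { Monitor.Exit(_sync); }
            }
        }

    Routing the external interface through the same lock (or the TryAdd wrapper) resolves the conflict without resorting to delegates.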

    Read the article

  • Any way to make GetTypes() less brittle?

    - by scope-creep
    I'm iterating through all the types in the GAC, GAC_32 and GAC_MSIL, looking for specific types, fundamentally to match the using clauses in my source code, so that when I compile the source I'll know exactly which assembly DLLs to provide. I'm getting all the file names from each of those directories and applying GetTypes() to each assembly in turn, comparing the returned types against my using list. But the problem I have is that GetTypes() keeps crapping out with an exception when it can't load the types from a loaded assembly. Is there any way to make GetTypes() less brittle? For instance, when parsing this assembly on my box, {blbmmc, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35}, it craps out. Any suggestions welcome. I know this is a fairly lengthy process, but I figure I'll eventually use a subset of common assemblies to search, or possibly cache the list of type/assembly DLL names at program start. Thanks.
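
    One common way to soften GetTypes() (a sketch, independent of the GAC scan above) is to catch ReflectionTypeLoadException, which still carries the types that did load:

        using System;
        using System.Reflection;

        static class TypeScanner
        {
            // Failed types come back as null entries in ex.Types; filter them out
            // and keep whatever the loader managed to resolve.
            public static Type[] SafeGetTypes(Assembly assembly)
            {
                try
                {
                    return assembly.GetTypes();
                }
                catch (ReflectionTypeLoadException ex)
                {
                    return Array.FindAll(ex.Types, t => t != null);
                }
            }
        }

    Loading the assemblies for reflection only (Assembly.ReflectionOnlyLoadFrom) is another option worth testing, though it has its own dependency-resolution quirks.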

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post" I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?
    EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications
    EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:
    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about other than admin functions is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app

    Read the article

  • Pushing completion notifications to the client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean that the user has to continually poll, or that eventual means an update has to take more than 500ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or if that's not possible, be as transparent as possible. With that in mind, I have this setup:
    - AngularJS web client, which consumes WebAPI RESTful services
    - which send commands to NServiceBus command handlers
    - which save to NEventStore
    - which dispatches events to NServiceBus event handlers
    - which send messages to a SignalR hub
    - which sends notifications to the AngularJS web client
    So with that setup, theoretically:
    - someone initiates a request
    - the server validates the request and sends out the necessary commands
    - in the meantime the client gets a 200 response and updates the view: "working on it"
    - it gets a message sometime later: "done, here's the updated data"
    Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc... Which event handler needs to notify the client? Should all of them, in a progress-bar style update?
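
    For the SignalR hop, a hedged sketch of the push side (assuming SignalR 2.x; the hub, method and handler names are invented, and a real NServiceBus handler would implement IHandleMessages<T>):

        using Microsoft.AspNet.SignalR;

        // Empty hub the AngularJS client connects to; the server pushes through its context.
        public class NotificationsHub : Hub { }

        public class CustomerCreatedNotifier
        {
            // Called from the event handler once the read model is up to date.
            public void NotifyClients(string customerId)
            {
                var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
                // Clients.All is dynamic: "customerReady" is whatever callback the
                // client registered, e.g. hub.client.customerReady = function (id) {...}
                hub.Clients.All.customerReady(customerId);
            }
        }

    On the "which handler notifies" question: one option is to publish a single summary event (or use a saga that waits for the full set of fine-grained events) so the client receives one "done" signal, keeping per-event notifications for a progress-bar style UI.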

    Read the article

  • KVM bridged networking problems

    - by isware
    I tried to configure bridged networking for KVM (see http://www.linux-kvm.org/page/Networking), and it worked for the guest OS, but I have two problems with my Fedora host OS:
    1) I cannot access the internet on the host
    2) The bridge configuration is lost after reboot; I need to execute "service network restart" again to bring it up
    I checked here (http://serverfault.com/questions/168119/kvm-network-bridge-with-public-static-ip-for-both-host-and-guests) for the first problem, but it does not seem to work for me. Any advice is appreciated! ifconfig -a:
    eth0 Link encap:Ethernet HWaddr 48:5B:39:ED:EB:5A inet6 addr: fe80::4a5b:39ff:feed:eb5a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:231340 errors:0 dropped:0 overruns:0 frame:0 TX packets:413424 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:15335606 (14.6 MiB) TX bytes:114755796 (109.4 MiB) Interrupt:44
    lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:119307 errors:0 dropped:0 overruns:0 frame:0 TX packets:119307 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:57151264 (54.5 MiB) TX bytes:57151264 (54.5 MiB)
    sit0 Link encap:IPv6-in-IPv4 NOARP MTU:1480 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
    sw0 Link encap:Ethernet HWaddr 48:5B:39:ED:EB:5A inet addr:192.168.1.133 Bcast:255.255.255.255 Mask:255.255.255.0 inet6 addr: fe80::4a5b:39ff:feed:eb5a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:229584 errors:0 dropped:0 overruns:0 frame:0 TX packets:401232 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11047463 (10.5 MiB) TX bytes:113891533 (108.6 MiB)
    tap0 Link encap:Ethernet HWaddr F2:86:1A:48:E2:55 inet6 addr: fe80::f086:1aff:fe48:e255/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:232 errors:0 dropped:0 overruns:0 frame:0 TX packets:2744 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:24842 (24.2 KiB) TX bytes:243899 (238.1 KiB)
    virbr0 Link encap:Ethernet HWaddr 9A:7C:09:6B:85:65 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:46 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:5513 (5.3 KiB)

    Read the article

  • Domain Controller DNS Best Practice/Practical Considerations for Domain Controllers in Child Domains

    - by joeqwerty
    I'm setting up several child domains in an existing Active Directory forest and I'm looking for some conventional wisdom/best practice guidance for configuring both the DNS client settings on the child domain controllers and the DNS zone replication scope.
    Assuming a single domain controller in each domain, and assuming that each DC is also the DNS server for its domain (for simplicity's sake), should the child domain controller point to itself for DNS only, or should it point to some combination (primary vs. secondary) of itself and the DNS server in the parent or root domain? If a parent > child > grandchild domain hierarchy exists (with a contiguous DNS namespace), how should DNS be configured on the grandchild DC?
    Regarding the DNS zone replication scope: if storing each domain's DNS zone on all DNS servers in the domain, then I'm assuming a DNS delegation from the parent to the child needs to exist, and that a forwarder from the child to the parent needs to exist. With a parent > child > grandchild domain hierarchy, does each child forward to the direct parent for the direct parent's zone or to the root zone? Does the delegation occur at the direct parent zone or from the root zone? If storing all DNS zones on all DNS servers in the forest, does that make the above questions regarding the replication scope moot? Does the replication scope have some bearing on the DNS client settings on each DC?

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There's not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.
    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is to be globally routable. Does this mean I'll need extra protections so my router would not automatically start advertising these private IPv6 addresses to the world?
    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range that has the last 64 bits of the address comprised of the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.
    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there's almost no explanation at the smaller home network level. Can I see a scope ID AND CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
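
    To make the RFC 4193 part concrete, a small sketch that generates a compliant unique-local /48 (fd00::/8, i.e. fc00::/7 with the L bit set, plus 40 random bits) and carves out a subnet mirroring 192.168.4.0/24:

        using System;
        using System.Security.Cryptography;

        class UlaPrefix
        {
            static void Main()
            {
                // RFC 4193: "fd" + a 40-bit random Global ID = a /48 site prefix.
                var globalId = new byte[5];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(globalId);

                string prefix = string.Format("fd{0:x2}:{1:x2}{2:x2}:{3:x2}{4:x2}",
                    globalId[0], globalId[1], globalId[2], globalId[3], globalId[4]);

                Console.WriteLine(prefix + "::/48");    // the whole site
                Console.WriteLine(prefix + ":4::/64");  // "subnet 4", like 192.168.4.0/24
            }
        }

    The random Global ID is what the RFC asks for even on a lone home LAN, and hand-picking fc00::4:0/120 would also land in the fc00::/8 half of the block, which RFC 4193 leaves undefined (locally assigned ULAs live under fd00::/8).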

    Read the article

  • Assigning IPs to OpenVZ containers

    - by Vojtech
    I have recently bought myself a physical server and I am trying to create containers which have their own IPs. The physical machine has both IPv4 and IPv6 addresses. I have another IPv4 address and some other IPv6 addresses available which I would like to assign to the container. I managed to assign the addresses as follows:
    # vzctl set 101 --ipadd 144.76.195.252 --save
    I can ping the machine from the physical machine, but not from the outside world. This also applies to the IPv6 address I assigned. This is the ifconfig output of the physical machine:
    eth0 Link encap:Ethernet HWaddr d4:3d:7e:ec:e0:04 inet addr:144.76.195.232 Bcast:144.76.195.255 Mask:255.255.255.224 inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:217895 errors:0 dropped:0 overruns:0 frame:0 TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:322481419 (307.5 MiB) TX bytes:1672628 (1.5 MiB)
    venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet6 addr: fe80::1/128 Scope:Link UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:3 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1108 (1.0 KiB) TX bytes:1108 (1.0 KiB)
    This is the ifconfig output of the OpenVZ container:
    # ifconfig
    venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1108 (1.0 KiB) TX bytes:1108 (1.0 KiB)
    venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:144.76.195.252 P-t-P:144.76.195.252 Bcast:144.76.195.252 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
    What do I need to do to have the container accessible from the outside world? What could I have forgotten? Thanks.

    Read the article

  • Windows 2008 R2 DHCP Overlapping Scopes

    - by Buska
    We are trying to troubleshoot a scope overlap problem. We have multiple device types, and we wish to give each type a different range within a 16-bit subnet, e.g. X devices get 192.168.2.1-192.168.2.254/16 and Y devices get 192.168.3.1-192.168.3.254/16. We are trying to accomplish this by creating different scopes and using the 60 class identifier. The problem is DHCP won't allow us to create these scopes with 16-bit masks because of the potential overlap. We aren't overlapping the address pools, so why does DHCP care, and can we work around this? If this isn't possible, how can I assign specific ranges by device type without creating multiple scopes? Any thoughts would be helpful.
    UPDATE:
    Entire scope is 192.168.0.0/16
    Gateway is 192.168.1.1/16
    Device Hardware A - 192.168.20.1-192.168.20.254/16
    Device Hardware B - 192.168.26.1-192.168.26.254/16
    Device Hardware C - 192.168.85.1-192.168.85.254/16
    We tried to set up multiple scopes for each device type (A, B, C) but couldn't specify a 16-bit mask, as Scope A could technically overlap Scope B even though our start and end addresses don't. I hope this makes more sense. Thanks for your thoughts.

    Read the article

  • How to get the IP address of a PPP (Point-to-Point Protocol) network interface?

    - by Xsmael
    I have a Linux machine with two network interfaces, and I'd like to get the IP address of the PPP interface w1g1, but it doesn't show up in ifconfig. There is a public IP on the PPP interface, but there is no internet connection. I'm trying to troubleshoot, but I need to get the IP address of the interface and I can't. ifconfig:
    eth0 Link encap:Ethernet HWaddr 00:30:48:8D:F0:2C inet addr:192.168.2.254 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::230:48ff:fe8d:f02c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9970 errors:0 dropped:567 overruns:0 frame:0 TX packets:4338 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:1441024 (1.3 MiB) TX bytes:915814 (894.3 KiB)
    lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:675 errors:0 dropped:0 overruns:0 frame:0 TX packets:675 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:50659 (49.4 KiB) TX bytes:50659 (49.4 KiB)
    w1g1 Link encap:Point-to-Point Protocol UP POINTOPOINT RUNNING NOARP MTU:240 Metric:1 RX packets:748994 errors:0 dropped:0 overruns:0 frame:0 TX packets:748992 errors:0 dropped:0 overruns:0 carrier:3 collisions:0 txqueuelen:100 RX bytes:179758560 (171.4 MiB) TX bytes:179758080 (171.4 MiB) Interrupt:177 Memory:f881c400-f881e3ff
    w1g1 is connected to a modem by an RJ45-to-serial cable, and the modem is connected to the phone line. The modem is a NOKIA DNT2Mi; you can see it here.
    Routing table:
    192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.254
    169.254.0.0/16 dev eth0 scope link
    default via 192.168.2.180 dev eth0

    Read the article

  • Maven2 multi-module EJB 3.1 project - deployment error

    - by gerry
    The problem is that I get the following error while deploying my project to Glassfish:
    java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB Check archive to ensure correct packaging
    But let us start with how the project structure looks in Maven2... I've built the following scenario:
    MultiModuleJavaEEProject - parent module
    - model --- packaged as jar
    - ejb1 ---- packaged as ejb
    - ejb2 ---- packaged as ejb
    - web ---- packaged as war
    So model, ejb1, ejb2 and web are children/modules of the parent MultiModuleJavaEEProject. ejb1 depends on model. ejb2 depends on ejb1. web depends on ejb2. The POMs look like this:
    parent:
    <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.dyndns.geraldhuber.testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <packaging>pom</packaging> <version>1.0</version> <name>MultiModuleJavaEEProject</name> <url>http://maven.apache.org</url> <modules> <module>model</module> <module>ejb1</module> <module>ejb2</module> <module>web</module> </modules> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.7</version> <scope>test</scope> </dependency> </dependencies> <build> <pluginManagement> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ejb-plugin</artifactId> <version>2.2</version> <configuration> <ejbVersion>3.1</ejbVersion> <jarName>${project.groupId}.${project.artifactId}-${project.version}</jarName> </configuration> </plugin> </plugins> </pluginManagement> </build> </project>
    model:
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>model</artifactId> <packaging>jar</packaging> <version>1.0</version> <name>model</name> <url>http://maven.apache.org</url> </project>
    ejb1:
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>ejb1</artifactId> <packaging>ejb</packaging> <version>1.0</version> <name>ejb1</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>model</artifactId> <version>1.0</version> </dependency> </dependencies> </project>
    ejb2:
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent>
<groupId>testing</groupId> <artifactId>MultiModuleJavaEEProject</artifactId> <version>1.0</version> </parent> <artifactId>ejb2</artifactId> <packaging>ejb</packaging> <version>1.0</version> <name>ejb2</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>ejb1</artifactId> <version>1.0</version> </dependency> </dependencies> </project> _web: <?xml version="1.0" encoding="UTF-8"?> <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <modelVersion>4.0.0</modelVersion> <parent> <artifactId>MultiModuleJavaEEProject</artifactId> <groupId>testing</groupId> <version>1.0</version> </parent> <groupId>testing</groupId> <artifactId>web</artifactId> <version>1.0</version> <packaging>war</packaging> <name>web Maven Webapp</name> <url>http://maven.apache.org</url> <dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.4</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.glassfish</groupId> <artifactId>javax.ejb</artifactId> <version>3.0</version> <scope>provided</scope> </dependency> <dependency> <groupId>testing</groupId> <artifactId>ejb2</artifactId> <version>1.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.0</version> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> </manifest> </archive> </configuration> </plugin> </plugins> <finalName>web</finalName> </build> </project> And the model is just a simple Pojo: package testing.model; public class Data { private String data; public String getData() { return data; } public void setData(String data) { this.data = data; } } And the ejb1 contains only one STATELESS ejb. 
    package testing.ejb1; import javax.ejb.Stateless; import testing.model.Data; @Stateless public class DataService { private Data data; public DataService(){ data = new Data(); data.setData("Hello World!"); } public String getDataText(){ return data.getData(); } }
    Likewise, ejb2 contains only a stateless EJB:
    package testing.ejb2; import javax.ejb.EJB; import javax.ejb.Stateless; import testing.ejb1.DataService; @Stateless public class Service { @EJB DataService service; public Service(){ } public String getText(){ return service.getDataText(); } }
    And the web module contains only a Servlet:
    package testing.web; import java.io.IOException; import java.io.PrintWriter; import javax.ejb.EJB; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import testing.ejb2.Service; public class SimpleServlet extends HttpServlet { @EJB Service service; public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { PrintWriter out = response.getWriter(); out.println( "SimpleServlet Executed" ); out.println( "Text: "+service.getText() ); out.flush(); out.close(); } }
    And the web.xml file in the web module looks like:
    <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <display-name>Archetype Created Web Application</display-name> <servlet> <servlet-name>simple</servlet-name> <servlet-class>testing.web.SimpleServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>simple</servlet-name> <url-pattern>/simple</url-pattern> </servlet-mapping> </web-app>
    So no further files are set up by me. There is no ejb-jar.xml in any of the EJB modules, because I'm using EJB 3.1, so I think the ejb-jar.xml descriptors are optional. Is this right? But the problem is the already mentioned error:
    java.lang.RuntimeException: Unable to load EJB module. DeploymentContext does not contain any EJB Check archive to ensure correct packaging
    Can anybody help?

    Read the article

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in an income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation that is done over the incomes measure, so I really had no option to use a classic solution. Thanks to the SSAS forums I got my problem solved, and here is the solution.
    The problem – how to create dynamic ranges?
    I have two dimensions in the SSAS cube: one for invoices related to object rents and the other for objects. There is a measure that sums invoice totals, and two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range where an object belongs. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one range and two days later to another income range. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income, and this income may move it to another income range.
    Solution – fake dimension and scopes
    Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the SSAS official forums thread "Create a banding dimension that groups by a calculated measure". The solution was pretty simple by nature – we have to define a fake dimension for our ranges and use scopes to assign values to an object count measure. The object count measure is primitive – it just counts objects, and that's it. We will use it to find out how many objects belong to one range or another. We also need a table for the fake ranges, and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let's now see how to solve the problem step by step.
    Solving the problem
    Suppose you have a ranges calculation defined like this:
    CASE
    WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'
    WHEN [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50 THEN '0 - 50'
    ...
    END
    Let's now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table:
    CREATE TABLE [FakeIncomeRange] (
        [range_id] [int] IDENTITY(1,1) NOT NULL,
        [range_name] [nvarchar](50) NOT NULL,
        CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED
        (
            [range_id] ASC
        )
    )
    Don't forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake, because we override its values for all ranges later. The object count measure can be defined as a calculation like this:
    COUNT([Object].[Object].[Object].members)
    Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting, we have to define a scope for each of our ranges. Here is the example for the first range.
    SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
        This = COUNT(
            FILTER(
                [Object].[Object].[Object].members,
                [Measures].[ComplexCalc] < 0
            )
        )
    END SCOPE
    To get these scopes defined in the cube we need MDX script blocks for each part given here. Take a look at the screenshot to get a better idea of what I mean. This example is taken from SQL Server Books Online to avoid conflicts with NDA. :) From the previous example, the parts (MDX scripts) are:
    - the line starting with SCOPE
    - the block for This =
    - the line with END SCOPE
    And now it is time to deploy and process our cube. Although you may see examples where there are semicolons at the end of statements, you don't need them. The Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation and as a part of this needed to describe some of the basic architecture options available for consideration.  While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options while trying to explain some of the strengths and weaknesses of those options was a good place to start.  These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.   The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment.         Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.   Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.   (A) Single Servers   This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is however the cheapest and easiest to implement option of those available.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (B) Single BizTalk Server with Clustered SQL Servers   This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition. While this option offers improved resilience over option (A) it does still present a potential single point of failure at the BizTalk server.   Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.   The common edition of BizTalk used in single server implementations is the standard edition. It should be noted however that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. 
You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.   (C) Clustered BizTalk Servers with Clustered SQL Servers   This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.   This might be seen as the middle cost option, providing a good level of protection in the case of failure, a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.   (D) Clustered BizTalk Servers with Clustered SQL Servers – with disaster recovery/service continuity   This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition but to go beyond this would require the enterprise level edition.    As with (C) the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.   In this scenario however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation; making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.   This is obviously an expensive and complex option, but does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • qemu-kvm virtual machine virtio network freezes under load

    - by Rick Koshi
    I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows: /usr/libexec/qemu-kvm \ -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \ -boot order=c \ -m 2G \ -smp cores=1,threads=2 \ -vga std \ -name rb-dev2-www1-vm \ -vnc :84,password \ -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \ -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \ -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \ -rtc base=utc \ -device piix3-usb-uhci \ -device usb-tablet /etc/qemu-ifup (used by the above command) is a very simple script, containing the following: #!/bin/sh sudo /sbin/ifconfig $1 0.0.0.0 promisc up sudo /usr/sbin/brctl addif br0 $1 sleep 2 And here's the info on br0 and other interfaces: avl-host3 14# brctl show bridge name bridge id STP enabled interfaces br0 8000.180373f5521a no bond0 tap84 virbr0 8000.525400858961 yes virbr0-nic avl-host3 15# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff 4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff 5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff inet 172.16.1.46/24 brd 172.16.1.255 scope global br0 inet6 fe80::1a03:73ff:fef5:521a/64 scope link valid_lft forever preferred_lft forever 8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff 12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link valid_lft forever preferred_lft forever bond0 is a bond of em1 and em2. virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know). The guest runs perfectly until I run a large 'rsync', when the network will freeze after some seemingly-random time (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via vnc, but it is unable to speak out its network interface. Any attempt to 'ping' from the guest gives a "Destination Host Unreachable" error for 3/4 packets and no reply for every fourth packet. 
Sometimes (perhaps two thirds of the time), I can bring the interface back to life by doing a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync will resume. Usually it will freeze again within a minute or two. If I repeat, the rsync will eventually finish, and I presume the machine goes back to waiting for another period of heavy load. Throughout the whole process, there are no console errors or relevant (that I can see) syslog messages on either guest or host machine. If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back. I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet. There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address. I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual host (the second of this series) which never seemed to have a problem. It never had its network freeze when it was running the same rsync during its build. It's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build, and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't do many tests with the working VM, as it's now in production use, and I'm hoping I can find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze. Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing. In any case, I'm hoping someone has some idea what could cause this. Addendum: Preliminary tests suggest that I don't have the problem if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable for a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?

    Read the article

  • YUI Compressor Maven plugin doesn't compress the JS files

    - by hanumant
    I am using yui compressor to compress the js files in my web app. I have configured the plugin as indicated on yui maven plugin site yui compressor maven plugin. This is the pom plugin conf <plugin> <groupId>net.sf.alchim</groupId> <artifactId>yuicompressor-maven-plugin</artifactId> <version>0.7.1</version> <executions> <execution> <phase>compile</phase> <goals> <goal>jslint</goal> <goal>compress</goal> </goals> </execution> </executions> <configuration> <failOnWarning>true</failOnWarning> <nosuffix>true</nosuffix> <force>true</force> <aggregations> <aggregation> <!-- remove files after aggregation (default: false) --> <removeIncluded>false</removeIncluded> <!-- insert new line after each concatenation (default: false) --> <insertNewLine>false</insertNewLine> <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output> <!-- files to include, path relative to output's directory or absolute path--> <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir--> <includes> <include>**/autocomplete.js</include> <include>**/calendar.js</include> <include>**/dialogs.js</include> <include>**/download.js</include> <include>**/folding.js</include> <include>**/jquery-1.4.2.min.js</include> <include>**/jquery.bgiframe.min.js</include> <include>**/jquery.loadmask.js</include> <include>**/jquery.printelement-1.1.js</include> <include>**/jquery.tablesorter.mod.js</include> <include>**/jquery.tablesorter.pager.js</include> <include>**/jquery.dialogs.plugin.js</include> <include>**/jquery.ui.autocomplete.js</include> <include>**/jquery.validate.js</include> <include>**/jquery-ui-1.8.custom.min.js</include> <include>**/languageDropdown.js</include> <include>**/messages.js</include> <include>**/print.js</include> <include>**/tables.js</include> <include>**/tabs.js</include> <include>**/uwTooltip.js</include> </includes> <!-- files to exclude, path relative to output's directory--> </aggregation> </aggregations> </configuration> <dependencies> <dependency> <groupId>rhino</groupId> <artifactId>js</artifactId> <scope>compile</scope> <version>1.6R5</version> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-plugin-api</artifactId> <version>2.0.7</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-project</artifactId> <version>2.0.7</version> <scope>provided</scope> </dependency><dependency> <groupId>net.sf.retrotranslator</groupId> <artifactId>retrotranslator-runtime</artifactId> <version>1.2.9</version> <scope>runtime</scope> </dependency> </dependencies> </plugin> And here is the log at compress time These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts(). 
    [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint'
    [DEBUG] (f) failOnWarning = true
    [DEBUG] (f) jswarn = true
    [DEBUG] (f) outputDirectory = C:\test\target\classes
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
    [DEBUG] (f) sourceDirectory = C:\test\src\..\js
    [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
    [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
    [DEBUG] -- end configuration --
    [INFO] [yuicompressor:jslint {execution: default}]
    [INFO] nb warnings: 0, nb errors: 0
    [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress' --
    [DEBUG] (f) removeIncluded = false
    [DEBUG] (f) insertNewLine = false
    [DEBUG] (f) output = C:\test\WebContent\js\compressedAll.js
    [DEBUG] (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgiframe.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.plugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, **/tables.js, **/tabs.js, **/uwTooltip.js]
    [DEBUG] (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564]
    [DEBUG] (f) disableOptimizations = false
    [DEBUG] (f) encoding = Cp1252
    [DEBUG] (f) failOnWarning = true
    [DEBUG] (f) force = true
    [DEBUG] (f) gzip = false
    [DEBUG] (f) jswarn = true
    [DEBUG] (f) linebreakpos = 0
    [DEBUG] (f) nomunge = false
    [DEBUG] (f) nosuffix = true
    [DEBUG] (f) outputDirectory = C:\test\target\classes
    [DEBUG] (f) preserveAllSemiColons = false
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
    [DEBUG] (f) sourceDirectory = C:\test\src\..\js
    [DEBUG] (f) statistics = true
    [DEBUG] (f) suffix = -min
    [DEBUG] (f) warSourceDirectory = C:\test\src\main\webapp
    [DEBUG] (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
    [DEBUG] -- end configuration --
    [INFO] [yuicompressor:compress {execution: default}]
    [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js
    [INFO] compressedAll.js (407505b)
    [INFO] nb warnings: 0, nb errors: 0
    [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources' --
    [DEBUG] (f) filters = []
    [DEBUG] (f) outputDirectory = C:\test\target\test-classes
    [DEBUG] (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
    [DEBUG] (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test, PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}]
    [DEBUG] -- end configuration --
    The problem is that the files are getting aggregated into one file but without being compressed. The link above uses version 1.1, while the plugin version I am using is 0.7.1. Is this going to make any difference? Can someone tell me what is wrong here? PS: I have obfuscated some text in the log for compliance reasons at my firm, so you may find it mismatching somewhere.

    Read the article

  • Hibernate, Spring and SLF4J Binding

    - by asrijaal
    Hi, I'm trying to set up a webapp with Maven2-managed dependencies. Here is my pom.xml:
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>mtx-production</groupId> <artifactId>mtx-production</artifactId> <version>0.0.1</version> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>3.0.2.RELEASE</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>3.0.2.RELEASE</version> <type>jar</type> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>3.0.2.RELEASE</version> <type>jar</type> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.12</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>3.5.1-Final</version> <type>jar</type> <scope>compile</scope> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-annotations</artifactId> <version>3.5.1-Final</version> <type>jar</type> <scope>compile</scope> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.5.8</version> <type>jar</type> <scope>compile</scope> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.14</version> <type>jar</type> <scope>compile</scope> </dependency> </dependencies> <repositories> <repository> <id>jboss-releases</id> <url>http://repository.jboss.org/maven2</url> </repository> </repositories> </project>
    Correct me if I got something wrong, but this should work?! However, I'm only getting this exception in Eclipse:
    13.06.2010 16:48:15 org.springframework.context.support.AbstractApplicationContext$BeanPostProcessorChecker postProcessAfterInitialization
    INFO: Bean 'dataSource' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
13.06.2010 16:48:15 org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons INFO: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@366573: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0,dataSource,sessionFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,myTxManager,org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0]; root of factory hierarchy 13.06.2010 16:48:15 org.springframework.web.context.ContextLoader initWebApplicationContext SCHWERWIEGEND: Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:527) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:687) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:408) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:276) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3830) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4337) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardHost.start(StandardHost.java:719) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) at 
org.apache.catalina.core.StandardService.start(StandardService.java:516) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:566) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1412) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:288) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194) at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:387) at org.springframework.beans.factory.BeanFactoryUtils.beansOfTypeIncludingAncestors(BeanFactoryUtils.java:266) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.detectPersistenceExceptionTranslators(PersistenceExceptionTranslationInterceptor.java:139) at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.<init>(PersistenceExceptionTranslationInterceptor.java:79) at org.springframework.dao.annotation.PersistenceExceptionTranslationAdvisor.<init>(PersistenceExceptionTranslationAdvisor.java:70) at org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setBeanFactory(PersistenceExceptionTranslationPostProcessor.java:99) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeAwareMethods(AbstractAutowireCapableBeanFactory.java:1431) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) ... 25 more Caused by: java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder What is wrong with my setup?

    Read the article

  • How-to configure Spring Social via XML

    - by Matthias Steiner
    I spent a few hours trying to get Twitter integration to work with Spring Social using the XML configuration approach. All the examples I could find on the web (and on Stack Overflow) use the @Configuration approach, as shown in the samples. For whatever reason, the bean definition that should yield an instance of the Twitter API throws an AOP exception: Caused by: java.lang.IllegalStateException: Cannot create scoped proxy for bean 'scopedTarget.twitter': Target type could not be determined at the time of proxy creation. Here's the complete config file I have:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:jaxrs="http://cxf.apache.org/jaxrs"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:util="http://www.springframework.org/schema/util"
               xmlns:cxf="http://cxf.apache.org/core"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xmlns:jee="http://www.springframework.org/schema/jee"
               xmlns:mvc="http://www.springframework.org/schema/mvc"
               xmlns:jdbc="http://www.springframework.org/schema/jdbc"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                   http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
                   http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.1.xsd
                   http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd
                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.1.xsd
                   http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.1.xsd
                   http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.1.xsd
                   http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd">

          <import resource="classpath:META-INF/cxf/cxf.xml" />
          <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

          <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/DefaultDB" />

          <!-- initialize DB required to store user auth tokens -->
          <jdbc:initialize-database data-source="dataSource" ignore-failures="ALL">
            <jdbc:script location="classpath:/org/springframework/social/connect/jdbc/JdbcUsersConnectionRepository.sql" />
          </jdbc:initialize-database>

          <bean id="connectionFactoryLocator" class="org.springframework.social.connect.support.ConnectionFactoryRegistry">
            <property name="connectionFactories">
              <list>
                <ref bean="twitterConnectFactory" />
              </list>
            </property>
          </bean>

          <bean id="twitterConnectFactory" class="org.springframework.social.twitter.connect.TwitterConnectionFactory">
            <constructor-arg value="xyz" />
            <constructor-arg value="xzy" />
          </bean>

          <bean id="usersConnectionRepository" class="org.springframework.social.connect.jdbc.JdbcUsersConnectionRepository">
            <constructor-arg ref="dataSource" />
            <constructor-arg ref="connectionFactoryLocator" />
            <constructor-arg ref="textEncryptor" />
          </bean>

          <bean id="connectionRepository" factory-method="createConnectionRepository" factory-bean="usersConnectionRepository" scope="request">
            <constructor-arg value="#{request.userPrincipal.name}" />
            <aop:scoped-proxy proxy-target-class="false" />
          </bean>

          <bean id="twitter" factory-method="findPrimaryConnection" factory-bean="connectionRepository" scope="request" depends-on="connectionRepository">
            <constructor-arg value="org.springframework.social.twitter.api.Twitter" />
            <aop:scoped-proxy proxy-target-class="false" />
          </bean>

          <bean id="textEncryptor" class="org.springframework.security.crypto.encrypt.Encryptors" factory-method="noOpText" />

          <bean id="connectController" class="org.springframework.social.connect.web.ConnectController">
            <constructor-arg ref="connectionFactoryLocator" />
            <constructor-arg ref="connectionRepository" />
            <property name="applicationUrl" value="https://socialscn.int.netweaver.ondemand.com/socialspringdemo" />
          </bean>

          <bean id="signInAdapter" class="com.sap.netweaver.cloud.demo.social.SimpleSignInAdapter" />
        </beans>

    What puzzles me is that the connectionRepository instantiation works perfectly fine (I commented out the twitter bean and tested the code!). It uses the same features, request scope and an interface-based AOP proxy, and works; but the twitter bean instantiation fails. The Spring Social @Configuration code looks as follows (I cannot see any differences, can you?):

        @Configuration
        public class SocialConfig {

            @Inject
            private Environment environment;

            @Inject
            private DataSource dataSource;

            @Bean
            @Scope(value="singleton", proxyMode=ScopedProxyMode.INTERFACES)
            public ConnectionFactoryLocator connectionFactoryLocator() {
                ConnectionFactoryRegistry registry = new ConnectionFactoryRegistry();
                registry.addConnectionFactory(new TwitterConnectionFactory(
                        environment.getProperty("twitter.consumerKey"),
                        environment.getProperty("twitter.consumerSecret")));
                return registry;
            }

            @Bean
            @Scope(value="singleton", proxyMode=ScopedProxyMode.INTERFACES)
            public UsersConnectionRepository usersConnectionRepository() {
                return new JdbcUsersConnectionRepository(dataSource, connectionFactoryLocator(), Encryptors.noOpText());
            }

            @Bean
            @Scope(value="request", proxyMode=ScopedProxyMode.INTERFACES)
            public ConnectionRepository connectionRepository() {
                Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
                if (authentication == null) {
                    throw new IllegalStateException("Unable to get a ConnectionRepository: no user signed in");
                }
                return usersConnectionRepository().createConnectionRepository(authentication.getName());
            }

            @Bean
            @Scope(value="request", proxyMode=ScopedProxyMode.INTERFACES)
            public Twitter twitter() {
                Connection<Twitter> twitter = connectionRepository().findPrimaryConnection(Twitter.class);
                return twitter != null ? twitter.getApi() : new TwitterTemplate();
            }

            @Bean
            public ConnectController connectController() {
                ConnectController connectController = new ConnectController(connectionFactoryLocator(), connectionRepository());
                connectController.addInterceptor(new PostToWallAfterConnectInterceptor());
                connectController.addInterceptor(new TweetAfterConnectInterceptor());
                return connectController;
            }

            @Bean
            public ProviderSignInController providerSignInController(RequestCache requestCache) {
                return new ProviderSignInController(connectionFactoryLocator(), usersConnectionRepository(), new SimpleSignInAdapter(requestCache));
            }
        }

    Any help/pointers would be appreciated!!! Cheers, Matthias

    Read the article

  • DHCP Relay vs DHCP Local: Cisco vs 3Com

    - by DefSol
    Howdy. I have a client with a WAN spanning 7 sites. At one site in particular, 4-5 clients at random do not get an IP address. The local gateway is a Cisco 871 that relays to a Windows server in a data center running a valid scope for the subnet. If I put in a Cisco 1800 and configure a DHCP scope on it (disabling the scope on the server), all clients get an IP address and everything is right with the world. The WAN provider keeps saying it's a local issue, although we can work around it with the 1800. The provider says a 3Com switch is at fault: the 1800 does not have a local switch and the 871 does, so the 871's internal switching will receive a different uplink policy. The 3Com is the only managed switch in the subnet. Any ideas greatly appreciated. Reuben
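    For anyone comparing the two setups, the configuration difference is small: relay is a single helper address on the 871, while the 1800 carries a local pool. A sketch of both, with interface names and addresses as placeholder assumptions:

        ! DHCP relay on the Cisco 871: forward client broadcasts to the remote server.
        interface Vlan1
         ip address 192.168.10.1 255.255.255.0
         ip helper-address 10.0.0.25

        ! Equivalent local scope on the Cisco 1800:
        ip dhcp excluded-address 192.168.10.1 192.168.10.10
        ip dhcp pool SITE-LAN
         network 192.168.10.0 255.255.255.0
         default-router 192.168.10.1
         dns-server 10.0.0.25

    Because only the relay path crosses the WAN, a packet capture on the data-center server showing whether the affected clients' DHCPDISCOVERs ever arrive would separate a relay/WAN problem from a local switch problem.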

    Read the article

  • dhcpd: varying vendor-class-identifier

    - by jessicah
    I'm having trouble selectively sending parameters in response to a DHCP Inform packet using groups (or even without groups, just using host declarations) for bootp stuff. My configuration file right now looks like:

        subnet 130.123.131.128 netmask 255.255.255.128 {
            allow unknown-clients;
        }

        host dev-mac-09 {
            option vendor-class-identifier "example-identifier";
            hardware ethernet 10:9a:dd:51:ff:83;
        }

    If I put vendor-class-identifier in the global scope, I can see with tcpdump that the client receives the vendor class option successfully. If I take it out and keep it only in the host scope (or a group scope), the client never receives the option. Specifying option dhcp-parameter-request-list 60 doesn't help either. I did try using a class definition inside a group, but then it applied even if the host wasn't a part of the group. As an aside, how do I get detailed logging? At least something to indicate what groups and declarations were used to generate the response to the client.
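    On the logging question, a sketch of the usual approach (file paths and the chosen facility are assumptions for your OS): ISC dhcpd logs through syslog, so give it a dedicated facility and route that facility to its own file.

        # In dhcpd.conf: send dhcpd's messages to the local7 facility.
        log-facility local7;

        # In syslog.conf (or the rsyslog equivalent): capture that facility.
        local7.*    /var/log/dhcpd.log

    For one-off debugging, running dhcpd -d -f keeps the server in the foreground with debug output on stderr, which shows each request and reply as it is handled.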

    Read the article

  • Unable to ping between subnets and out to internet

    - by battlemidget
    My setup is: modem -> Linksys router -> laptop with two interfaces (wlan0/eth0) -> desktop machine. The router is 192.168.1.1, the gateway to the internet. The laptop's wlan0 is 192.168.1.4 with a gateway of 192.168.1.1. The laptop's eth0 is 192.168.2.254, which acts as a second gateway. The desktop is 192.168.2.100. On the laptop I've set ip_forward to 1 and inserted two iptables rules:

        -A FORWARD -i eth0 -o wlan0 -j ACCEPT
        -A FORWARD -i wlan0 -o eth0 -j ACCEPT

    The laptop can ping outside the network (e.g. yahoo.com); it cannot ping 192.168.2.100. The desktop can ping 192.168.2.254 but nothing outside the network or on the 192.168.1.0 subnet. On the laptop, ip route show lists:

        192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.254
        192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.4
        127.0.0.0/8 dev lo scope link
        default via 192.168.1.1 dev wlan0

    What am I missing to make my desktop go through the laptop in order to reach the router, which provides access to the internet? Thanks
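    A likely culprit, reading the routes shown: forwarding is enabled, but nothing on the 192.168.1.0/24 side has a route back to 192.168.2.0/24, so replies from the Linksys and the internet never return to the desktop. Either add a static route on the Linksys (192.168.2.0/24 via 192.168.1.4) or NAT the wired subnet behind the laptop's wlan0 address. A sketch of the NAT variant, run on the laptop:

        # Masquerade 192.168.2.0/24 behind wlan0 so return traffic
        # needs no extra routes on the Linksys.
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o wlan0 -j MASQUERADE

    The desktop also needs 192.168.2.254 as its default gateway and a reachable DNS server configured for name resolution to work.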

    Read the article

  • Deleting unreferenced child records with nhibernate

    - by Chev
    Hi there. I am working on an MVC app using NHibernate as the ORM (NCommon framework). I have parent/child entities, Product, Vendor and ProductVendors, with a one-to-many relationship between them: Product has a ProductVendors collection, Product.ProductVendors. I currently retrieve a Product object, eager-load the children, and send these down the wire to my ASP.NET MVC client. A user will then modify the list of vendors and post the updated Product back. I am using a custom model binder to generate the modified Product entity. I am able to update the Product fine and insert new ProductVendors. My problem is that dereferenced ProductVendors are not cascade-deleted when I call Product.ProductVendors.Clear() and then _productRepository.Save(product). The problem seems to be with attaching the detached instance. Here are my mapping files:

    Product:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
            <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Name" type="String" length="250" />

    ProductVendors:

        <?xml version="1.0" encoding="utf-8" ?>
        <id name="Id">
            <generator class="guid.comb" />
        </id>
        <version name="LastModified" unsaved-value="0" column="LastModified" />
        <property name="Price" />
        <many-to-one name="Product" class="Product" column="ProductId" lazy="false" not-null="true" />
        <many-to-one name="Vendor" class="Vendor" column="VendorId" lazy="false" not-null="true" />

    Custom model binder:

        using System;
        using Test.Web.Mvc;
        using Test.Domain;

        namespace Spoked.MVC
        {
            public class ProductUpdateModelBinder : DefaultModelBinder
            {
                private readonly ProductSystem ProductSystem;

                public ProductUpdateModelBinder(ProductSystem productSystem)
                {
                    ProductSystem = productSystem;
                }

                protected override void OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
                {
                    var product = bindingContext.Model as Product;
                    if (product != null)
                    {
                        product.Category = ProductSystem.GetCategory(new Guid(bindingContext.ValueProvider["Category"].AttemptedValue));
                        product.Brand = ProductSystem.GetBrand(new Guid(bindingContext.ValueProvider["Brand"].AttemptedValue));
                        product.ProductVendors.Clear();
                        if (bindingContext.ValueProvider["ProductVendors"] != null)
                        {
                            string[] productVendorIds = bindingContext.ValueProvider["ProductVendors"].AttemptedValue.Split(',');
                            foreach (string id in productVendorIds)
                            {
                                product.AddProductVendor(ProductSystem.GetVendor(new Guid(id)), 90m);
                            }
                        }
                    }
                }
            }
        }

    Controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Update(Product product)
        {
            using (var scope = new UnitOfWorkScope())
            {
                //product.ProductVendors.Clear();
                _productRepository.Save(product);
                scope.Commit();
            }
            using (new UnitOfWorkScope())
            {
                IList<Vendor> availableVendors = _productSystem.GetAvailableVendors(product);
                productDetailEditViewModel = new ProductDetailEditViewModel(product,
                    _categoryRepository.Select(x => x).ToList(),
                    _brandRepository.Select(x => x).ToList(),
                    availableVendors);
            }
            return RedirectToAction("Edit", "Products", new { id = product.Id.ToString() });
        }

    The following test does pass, though:

        [Test]
        [NUnit.Framework.Category("ProductTests")]
        public void Can_Delete_Product_Vendors_By_Dereferencing()
        {
            Product product;
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Selecting...");
                product = _productRepository.First();
                Console.Out.WriteLine("Adding Product Vendor...");
                product.AddProductVendor(_vendorRepository.First(), 0m);
                scope.Commit();
            }
            Console.Out.WriteLine("About to delete Product Vendors...");
            using (UnitOfWorkScope scope = new UnitOfWorkScope())
            {
                Console.Out.WriteLine("Clearing Product Vendor...");
                _productRepository.Save(product); // seems to be needed to attach the entity to the persistence manager
                product.ProductVendors.Clear();
                scope.Commit();
            }
        }

    Going nuts here, as I almost have a very nice solution between MVC, custom model binders and NHibernate. I'm just not seeing my deletes cascaded. Any help greatly appreciated. Chev
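    One detail worth checking, assuming the collection mapping was simply omitted from the excerpt above: neither fragment shows the Product side's collection, and Clear() only turns into SQL DELETEs when that collection is mapped with delete-orphan cascade. A sketch of what the Product mapping might need, with the key column assumed to match the many-to-one above:

        <!-- Hypothetical fragment of the Product mapping: cascade="all-delete-orphan"
             is what deletes dereferenced children; inverse="true" lets the
             ProductVendors side own the foreign key. -->
        <bag name="ProductVendors" cascade="all-delete-orphan" inverse="true" lazy="false">
            <key column="ProductId" />
            <one-to-many class="ProductVendors" />
        </bag>

    Note also that NHibernate only tracks collection changes on instances associated with a session, which matches the passing test: there, Save(product) runs before Clear(). In the failing controller action, the model binder clears the collection on a detached instance no session has seen, so reassociating the entity (for example via ISession.SaveOrUpdateCopy in NHibernate 2.x) before mutating the collection is the usual workaround.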

    Read the article
