Search Results

Search found 13653 results on 547 pages for 'integration testing'.

  • Free book from Microsoft - Building Elastic and Resilient Cloud Applications - Developer's Guide to the Enterprise Library 5.0 Integration Pack for Windows Azure

    - by TATWORTH
    At http://www.microsoft.com/en-us/download/details.aspx?id=29994, Microsoft is offering a free book, "Building Elastic and Resilient Cloud Applications - Developer's Guide to the Enterprise Library 5.0 Integration Pack for Windows Azure". The Microsoft Enterprise Library Integration Pack for Windows Azure is an extension to the Microsoft Enterprise Library 5.0 that can be used with Windows Azure. It includes the Autoscaling Application Block, the Transient Fault Handling Application Block, a protected configuration provider and the Blob configuration source. The book is available in PDF, mobi and epub formats.

    Read the article

  • NetBeans drops support for the Ruby on Rails module and focuses its efforts on Java SE 7 integration

    NetBeans drops support for the Ruby on Rails module and focuses its efforts on Java SE 7 integration. NetBeans 7.0, currently in beta and scheduled for final release in April, will no longer offer a module for Ruby on Rails. The reasons are the low adoption of Oracle's IDE among Rails developers and the project team's desire to concentrate on better Java 7 integration. The decision is hardly surprising. Ruby developers generally lean toward IDEs ...

    Read the article

  • Linux kernel 3.3 released: Android code integration, networking improvements, Btrfs, and support for a new architecture

    Linux kernel 3.3 released: Android code integration, networking improvements, Btrfs, and support for a new architecture. Linus Torvalds has announced the availability of version 3.3 of the Linux kernel. Among the new features, the most notable is the reintegration of portions of the Android kernel code. As a reminder, in 2009 the Android drivers had been dropped from the kernel because they were not sufficiently maintained. The Android integration will let developers use the Linux kernel to run an Android system, write a single driver for both, and will reduce the maintenance cost of carrying the patches independently of a...

    Read the article

  • Hibernate Lazy initialization exception problem with Gilead in GWT 2.0 integration

    - by sylsau
    Hello, I use GWT 2.0 as the UI layer on my project. On the server side, I use Hibernate. For example, these are two domain entities that I have:

    public class User { private Collection<Role> roles; @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "users", targetEntity = Role.class) public Collection<Role> getRoles() { return roles; } ... }

    public class Role { private Collection<User> users; @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, targetEntity = User.class) public Collection<User> getUsers() { return users; } ... }

    In my DAO layer, I use a UserDAO that extends HibernateDAOSupport from Spring. UserDAO has a getAll method that returns all of the Users. In my service layer, a UserService uses UserDAO to get all of the Users. So, when I get all of the Users from UserService, the User entities returned are detached from the Hibernate session. For that reason, I cannot call the getRoles() method on the User instances I get from my service. What I want is just to transfer my list of Users through an RPC service, so that I can use the other User information on the client side with GWT. Thus, my main problem is converting the PersistentBag behind User.roles into a plain List so the Users can be transferred via RPC.

    To do that, I have seen that the Gilead framework could be a solution. In order to use Gilead, I have changed my domain entities: they now extend net.sf.gilead.pojo.gwt.LightEntity and respect the JavaBean specification. On the server, I expose my services via RPC thanks to the GwtRpcSpring framework (http://code.google.com/p/gwtrpc-spring/). This framework provides an advice that makes Gilead integration easier. My applicationContext contains the following configuration for Gilead:

    <bean id="gileadAdapterAdvisor" class="org.gwtrpcspring.gilead.GileadAdapterAdvice" />
    <aop:config> <aop:aspect id="gileadAdapterAspect" ref="gileadAdapterAdvisor"> <aop:pointcut id="gileadPointcut" expression="execution(public * com.google.gwt.user.client.rpc.RemoteService.*(..))" /> <aop:around method="doBasicProfiling" pointcut-ref="gileadPointcut" /> </aop:aspect> </aop:config>
    <bean id="proxySerializer" class="net.sf.gilead.core.serialization.GwtProxySerialization" />
    <bean id="proxyStore" class="net.sf.gilead.core.store.stateless.StatelessProxyStore"> <property name="proxySerializer" ref="proxySerializer" /> </bean>
    <bean id="persistenceUtil" class="net.sf.gilead.core.hibernate.HibernateUtil"> <property name="sessionFactory" ref="sessionFactory" /> </bean>
    <bean class="net.sf.gilead.core.PersistentBeanManager"> <property name="proxyStore" ref="proxyStore" /> <property name="persistenceUtil" ref="persistenceUtil" /> </bean>

    The code of the doBasicProfiling method is the following:

    @Around("within(com.google.gwt.user.client.rpc.RemoteService..*)") public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable { if (log.isDebugEnabled()) { String className = pjp.getSignature().getDeclaringTypeName(); String methodName = className.substring(className.lastIndexOf(".") + 1) + "." + pjp.getSignature().getName(); log.debug("Wrapping call to " + methodName + " for PersistentBeanManager"); } GileadHelper.parseInputParameters(pjp.getArgs(), beanManager, RemoteServiceUtil.getThreadLocalSession()); Object retVal = pjp.proceed(); retVal = GileadHelper.parseReturnValue(retVal, beanManager); return retVal; }

    With that configuration, when I run my application and use my RPC service that gets all of the Users, I get a lazy initialization exception from Hibernate for User.roles.
    I am disappointed because I thought Gilead would let me serialize my domain entities even if they contain a PersistentBag. Isn't that one of the goals of Gilead? Does someone know how to configure Gilead (with GwtRpcSpring or another solution) to be able to transfer domain entities without the lazy initialization exception? Thanks in advance for your help. Sylvain
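
    One workaround that is sometimes suggested, independent of Gilead itself, is to force-initialize the lazy collections inside the service method while the Hibernate session is still open, so the object graph handed to GWT-RPC contains no uninitialized PersistentBag. Below is a minimal sketch of that idea; it assumes a Spring-managed service with transaction annotations enabled, and the UserService/UserDAO names are taken from the question while getAllUsers is a hypothetical method name:

    import java.util.List;
    import org.hibernate.Hibernate;
    import org.springframework.transaction.annotation.Transactional;

    public class UserServiceImpl implements UserService {

        private UserDAO userDAO;

        public void setUserDAO(UserDAO userDAO) {
            this.userDAO = userDAO;
        }

        // Runs inside a transaction, so the Hibernate session is still open
        // when the lazy collections are touched.
        @Transactional(readOnly = true)
        public List<User> getAllUsers() {
            List<User> users = userDAO.getAll();
            for (User user : users) {
                // Forces Hibernate to load the lazy collection before the session
                // closes, so no uninitialized proxy is left in the graph returned over RPC.
                Hibernate.initialize(user.getRoles());
            }
            return users;
        }
    }

    The trade-off is eager loading of every Role for every User; for large graphs a dedicated DTO is usually the cleaner design.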

    Read the article

  • JSF 2.1 Spring 3.0 Integration

    - by danny.lesnik
    I'm trying to build a very simple Spring 3 + JSF 2.1 integration based on examples I found on the web. So here is my code. My form is submitted to the actionController.actionSubmitted() method:

    <h:form> <h:message for="textPanel" style="color:red;" /> <h:panelGrid columns="3" rows="5" id="textPanel"> //all my bean properties mapped to HTML code </h:panelGrid> <h:commandButton value="Submit" action="#{actionController.actionSubmitted}" /> </h:form>

    Now the action controller itself:

    @ManagedBean(name="actionController") @SessionScoped public class ActionController implements Serializable{ @ManagedProperty(value="#{user}") User user; @ManagedProperty(value="#{mailService}") MailService mailService; public void setMailService(MailService mailService) { this.mailService = mailService; } public void setUser(User user) { this.user = user; } private static final long serialVersionUID = 1L; public ActionController() {} public String actionSubmitted(){ System.out.println(user.getEmail()); mailService.sendUserMail(user); return "success"; } }

    Now my Spring bean:

    public interface MailService { void sendUserMail(User user); } public class MailServiceImpl implements MailService{ @Override public void sendUserMail(User user) { System.out.println("Mail to "+user.getEmail()+" sent." ); } }

    This is my web.xml:

    <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <listener> <listener-class> org.springframework.web.context.request.RequestContextListener </listener-class> </listener> <!-- Welcome page --> <welcome-file-list> <welcome-file>index.xhtml</welcome-file> </welcome-file-list> <!-- JSF mapping --> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet>

    My applicationContext.xml:

    <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean id="mailService" class="com.vanilla.jsf.services.MailServiceImpl"> </bean> </beans>

    My faces-config.xml is the following:

    <application> <el-resolver> org.springframework.web.jsf.el.SpringBeanFacesELResolver </el-resolver> <message-bundle> com.vanilla.jsf.validators.MyMessages </message-bundle> </application> <managed-bean> <managed-bean-name>actionController</managed-bean-name> <managed-bean-class>com.vanilla.jsf.controllers.ActionController</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> <managed-property> <property-name>mailService</property-name> <value>#{mailService}</value> </managed-property> </managed-bean> <navigation-rule> <from-view-id>index.xhtml</from-view-id> <navigation-case> <from-action>#{actionController.actionSubmitted}</from-action> <from-outcome>success</from-outcome> <to-view-id>submitted.xhtml</to-view-id> <redirect /> </navigation-case> </navigation-rule>

    My problem is that I'm getting a NullPointerException because my mailService Spring bean is null:

    public String actionSubmitted(){ System.out.println(user.getEmail()); //mailService is null, getting NullPointerException mailService.sendUserMail(user); return "success"; }
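
    Note that the actionController bean is registered twice, once through the @ManagedBean annotation and once through the <managed-bean> entry in faces-config.xml, so the <managed-property> injection may be applied to a registration the page never uses. One way to take that wiring out of the equation, shown here as a diagnostic sketch rather than the definitive fix, is a drop-in variant of the question's actionSubmitted method that pulls the Spring bean out of the root web application context directly (FacesContextUtils is Spring's JSF helper; this assumes the ContextLoaderListener has loaded applicationContext.xml):

    import javax.faces.context.FacesContext;
    import org.springframework.web.context.WebApplicationContext;
    import org.springframework.web.jsf.FacesContextUtils;

    public String actionSubmitted() {
        // Look the service up directly instead of relying on managed-property injection.
        WebApplicationContext ctx =
                FacesContextUtils.getWebApplicationContext(FacesContext.getCurrentInstance());
        MailService mailService = ctx.getBean("mailService", MailService.class);
        mailService.sendUserMail(user);
        return "success";
    }

    If this lookup works while the injected field stays null, the problem is in the duplicated managed-bean declaration rather than on the Spring side.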

    Read the article

  • How to configure CruiseControl.Net for Windows Authentication?

    - by balu
    I am using CruiseControl.Net for continuous integration. The dashboard is currently accessed through the login plugin, which authenticates and authorizes users after checking them against a set of users saved as an XML file on the CruiseControl.Net server. Now, I need to bring Windows Authentication into the system, so that when the CruiseControl.Net web dashboard is accessed from a client machine (a local machine associated with a common server), the user is authenticated and authorized to access the CruiseControl.Net features based on the authority of the logged-in user. Kindly guide me on how to go ahead with this; I appreciate all kinds of resources that would be helpful for achieving this. Thanks.

    Read the article

  • Where are the factory_girl records?

    - by gmile
    I'm trying to perform an integration test via Watir and RSpec. So, I created a test file within /integration and wrote a test which adds a test user to the database via factory_girl. The problem is that I can't actually perform a login with my test user. The test I wrote looks as follows:

    ... before(:each) do @user = Factory(:user) @browser = FireWatir::Firefox.new end it "should login" do @browser.text_field(:id, "username").set(@user.username) @browser.text_field(:id, "password").set(@user.password) @browser.button(:id, "get_in").click end ...

    As I start the test and watch it "perform" in the browser, it always fires a "Username is not valid" error. I started an investigation and did a small trick. First of all, I had doubts that the factory actually creates the user in the DB, so immediately after the call to the factory I put some puts User.find stuff, only to discover that the user is actually in the DB. OK, but as the user still couldn't log in, I decided to see with my own eyes whether he is present in the DB. I added a sleep right after the factory call and went to see what was in the DB at that moment. I was crushed to see that the user is actually missing there! How come? Still, when I output the user within the code, he is actually being fetched from somewhere. So where do the records made by factory_girl at runtime live? Is it the test or the dev DB? I don't get it. I've checked 10 times whether I'm running my Mongrel in test mode (does it matter? I think it does, as I'm trying to run an integration test) and whether my database.yml holds the correct connection-specific data. I'm using Authlogic, if that gives any clue (no, putting activate_authlogic doesn't work here).

    Read the article

  • Testing IPP Printing with ipptool

    - by senloe
    I'm trying to send an IPP print job using ipptool. Using the sample .test files, I can send commands to the printer, but I am unable to successfully use the print-job.test file. Here's an example using ipptool:

    c:\...>ipptool -v ipp://name.local.:631/ipp/printer print-job.test
    ipptool: Filename "$filename" on line 21 cannot be read.
    ipptool: Filename mapped to "".

    It looks like it's failing to resolve the variable $filename within the test file, so I attempted to hardcode the value in the test file. In that case I get no error, but still no print. Does anybody have any experience using ipptool to test IPP printing?

    Read the article

  • Testing radius server from Mac OS X client

    - by Calvin Froedge
    I have a RADIUS server set up on a server running Ubuntu 11.04. I have configured my switch to use the authentication server's IP (192.168.1.2) for RADIUS / 802.1x authentication, and I created a connection to test connecting from my Mac OS X client. Here is my RADIUS client configuration:

    client 192.168.1.0/16 { secret = testing123 }

    I can successfully authenticate using both 127.0.0.1 (localhost) and 192.168.1.2 (the IP of eth1), so I know RADIUS is getting those requests. I set up a connection to test from my MacBook, and my requests are timing out. http://screencast.com/t/tMhRLS3H7 Is there a better way to test the RADIUS connection from my MacBook? Thanks! UPDATE: I was able to test successfully on the Mac OS X client using RadPerf, which is available as a cross-platform command-line tool.

    Read the article

  • Stress test speed on a gateway?

    - by TheLQ
    I'm interested in stress testing my gateway server but am lost on how. Most of the stress testing applications I've seen only measure how much load an app like Apache can handle, not this. Essentially I want to send as many packets as I can into this box from one computer on one network card and see how many come out the other card to another computer, just to get an idea of what kind of load it can handle. I'm also interested in how Snort will perform. I'm not really sure how to do this, though. What tools could you recommend that could do this?

    Read the article

  • Ubuntu Hardy : Testing for environment variables in udev rules doesn't seem to work

    - by Fred
    I have an Ubuntu 8.04 LTS (server edition) machine, and I need to write a udev rule for it that acts upon plugging in a USB thumb drive. However, I need a different action depending on the filesystem of the drive. I know I can use the ID_FS_TYPE environment variable to check for the filesystem on the drive. Following instructions found here, I try a dummy udev rule as such:

    KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end"
    ACTION=="add", RUN+="/usr/bin/touch /tmp/test_udev_%E{ID_FS_TYPE}"
    ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works"
    LABEL="my_udev_rule_end"

    However, when I plug in a thumb drive with a vfat filesystem (which should trigger both rules), I end up with a file called /tmp/test_udev_vfat, meaning the first rule was triggered successfully and that the ID_FS_TYPE environment variable is "vfat", but I don't have the other file, meaning that although I know the ID_FS_TYPE env variable is "vfat", I can't seem to check against it for a match. I tried googling the thing, but pretty much every result seems to assume ENV{ID_FS_TYPE}=="vfat" works. I also tested the exact same udev rule on Ubuntu 10.04 LTS server, with the same result. I'm probably missing something very simple, but I just don't get it. Does anyone see what is wrong with my udev rule that would prevent it from matching on ENV{ID_FS_TYPE}? Thanks.

    Read the article

  • Suggest methods for testing changes to "pam.d/common-*" files

    - by Jamie
    How do I test changes to the pam.d configuration files? Do I need to restart the PAM service to test the changes? Should I go through every service listed in the /etc/pam.d/ directory? I'm about to make changes to the pam.d/common-* files in an effort to put an Ubuntu box into an Active Directory controlled network. I'm just learning what to do, so I'm preparing the configuration in a VM, which I plan to deploy on metal in the coming week. It is a clean install of Ubuntu 10.04 Beta 2 server, so other than the SSH daemon, all services are stock.

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come along, a new server will be created... I decided to use Chef for this task. All servers will be running Ubuntu at the same version and configuration. Everything needs to be tested properly before starting live deployment, so the actual question is: what is the best virtualisation tool to run multiple (5-10) virtual machines on an Ubuntu laptop? Requirements: easy setup; fast (clone/snapshot of VMs); all VMs should be easily connected to the internet and able to communicate with each other (open-source / free would be great). So far I have looked into VirtualBox, which is more for desktop virtualisation (cloning is not possible, so every new machine needs to be installed), and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual setup; it is about a nice working environment.

    Read the article

  • Opening and Testing Ports on Modem > Router Connection

    - by JakeTheSnake
    Working off of my last question, I can access my server's FTP over the LAN but not over the internet. I'm using FileZilla on port 666. My router/modem configuration is as follows (similar to the other post):

    1) Modem connects to WAN
    2) WAN port on modem connects to LAN port on router
    3) Modem internal IP address is 192.168.0.254
    4) Router internal IP address is 192.168.0.1
    5) Modem has DHCP turned OFF
    6) Router has DHCP turned ON
    7) Router is running Tomato firmware and is set as 'Router' (not 'Gateway')
    8) The internet is working (just had to say that)

    I've set up port forwarding both on the modem and the router; both route port 666 (TCP) to the IP address 192.168.0.3, which is the IP address of the server running FileZilla. I don't know if that's hindering anything, but I've also tried it with just the modem and just the router... same result. I've also tried setting the server as the DMZ host (both on router and modem). Neither router nor modem has anything in its logs about denying inbound traffic on port 666, so my ability to troubleshoot stops there. I've tried contacting my ISP (Telus, running on a mobility plan... it's a "Smart" Hub) but they weren't much help. They said they only block port 25 and 80 and maybe a few others, but not most ports. I test whether or not the port is open by going to canyouseeme.org; I don't know whether that would produce a 'connection refused' result just because the FTP requires a login... I'm not well versed in this matter. FWIW, sometimes I get a 'connection refused' error on canyouseeme.org but mostly it's 'connection timed out'. I don't know what else to do at this point.
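
    As a side note on the canyouseeme.org question: an FTP login is not needed for a reachability test, because a plain TCP connect already distinguishes the two results. 'Connection refused' means something answered and rejected the connection; a timeout usually means the packets were dropped before reaching FileZilla. A minimal sketch of such a probe, run from a machine outside the LAN; the host name below is a placeholder for the modem's public IP:

    import java.io.IOException;
    import java.net.ConnectException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class PortProbe {
        public static void main(String[] args) {
            String host = "your-public-ip.example"; // placeholder: the WAN IP of the modem
            int port = 666;
            try (Socket socket = new Socket()) {
                // A successful connect only proves the TCP port is reachable;
                // no FTP login is needed for this test.
                socket.connect(new InetSocketAddress(host, port), 5000);
                System.out.println("Port " + port + " is open (TCP connect succeeded).");
            } catch (SocketTimeoutException e) {
                System.out.println("Timed out: traffic is probably dropped before reaching the server.");
            } catch (ConnectException e) {
                System.out.println("Connection refused: something answered but rejected the connection.");
            } catch (IOException e) {
                System.out.println("Other I/O error: " + e.getMessage());
            }
        }
    }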

    Read the article

  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6-core processors. The person that configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware that these processors have 3 memory channels and that to optimize memory performance one should choose memory in multiples of three. Based on this information, I have a question: by entering the BIOS, can I disable the bank on each processor that holds the single memory module? If so, will this have any adverse effects or behave differently than physically removing the modules? I ask because I would prefer to keep the extra memory in the system if it truly behaves as if the memory is not even there. Also, I see this as an opportunity to test 12GB vs. 16GB to see if there is a noticeable difference. Note: According to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 and, in addition, the memory bandwidth goes down by about 23%.

    Read the article

  • Is there a Kerberos testing tool?

    - by ixe013
    I often use openssl s_client to test and debug SSL connections (to LDAPS or HTTPS services). It allows me to isolate the problem down to SSL, without anything getting in the way. I know about klist, which allows me to purge the ticket cache. Is there a tool that would allow me to request a Kerberos ticket for a given server, without even sending it to that server? Just enough to see the whole Kerberos exchange in Wireshark, for example?
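
    One option, where the MIT Kerberos client tools are installed, is the kvno utility, which requests a service ticket for a named principal and nothing more. Where a JVM is easier to come by, the built-in GSS-API can do the same thing: asking for the initial context token forces the TGS-REQ/TGS-REP exchange for the target service without ever contacting that service, which is exactly what shows up in Wireshark. A rough sketch, assuming an existing ticket cache (run with -Djavax.security.auth.useSubjectCredsOnly=false so the native cache is used); the SPN below is a placeholder:

    import org.ietf.jgss.GSSContext;
    import org.ietf.jgss.GSSException;
    import org.ietf.jgss.GSSManager;
    import org.ietf.jgss.GSSName;
    import org.ietf.jgss.Oid;

    public class KerberosTicketProbe {
        public static void main(String[] args) throws GSSException {
            String spn = "host@server.example.com";      // placeholder service principal
            GSSManager manager = GSSManager.getInstance();
            Oid krb5 = new Oid("1.2.840.113554.1.2.2");  // Kerberos v5 mechanism OID
            GSSName serverName = manager.createName(spn, GSSName.NT_HOSTBASED_SERVICE);
            GSSContext context = manager.createContext(
                    serverName, krb5, null, GSSContext.DEFAULT_LIFETIME);
            // Producing the first token triggers the TGS request for the
            // service ticket; the token itself is never sent anywhere.
            byte[] token = context.initSecContext(new byte[0], 0, 0);
            System.out.println("Obtained initial token of "
                    + (token == null ? 0 : token.length) + " bytes");
            context.dispose();
        }
    }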

    Read the article

  • Testing Tomcat with Virtual Hosts

    - by Marty Pitt
    I'm trying to test Tomcat virtual hosts on my dev machine (Windows 7 / Tomcat 6). I'd like requests for localhost, test1.localhost and test2.localhost to all route through to the same Tomcat instance. I've edited my hosts file to look as follows:

    127.0.0.1 localhost
    ::1 localhost
    127.0.0.1 test1.localhost
    127.0.0.1 test2.localhost

    And modified the Engine in server.xml as follows:

    <Engine defaultHost="localhost" name="Catalina"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase" /> <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true" xmlNamespaceAware="false" xmlValidation="false"> <Alias>test1.localhost</Alias> <Alias>test2.localhost</Alias> </Host> </Engine>

    However, I'm getting a 404 when hitting test1.localhost:8080/myWebApp, although localhost:8080/myWebApp works fine. I can ping test1.localhost fine. What have I missed?

    Read the article

  • Testing domains on intranet/local network?

    - by meder
    This may sound like a very silly question, but how can I set up domains (e.g. www.foo.com) on my local network? I know that all a domain is, is a name registered with a name server; that name server has a zone record, and in the zone record there are several records, of which the A record is the most important in dictating where the lookup goes, i.e. which machine it should point to. I basically want to be able to refer to my other computer/webserver as 'www.foo.com' and make my local sites accessible that way, messing with virtual host records in Apache and zone records for the domain, except locally, so I can explore, fiddle around and learn instead of having to rely on the domains I own with a public registrar, which I can only reach through the internet. Once again, I apologize if this is a silly question or if I'm completely thinking backwards. Background information: my OS is Debian and I'm a novice at Linux. I've done very small edits to zone records on a BIND9 server, but that's the extent of my networking experience.

    Read the article

  • Testing for Active Directory Schema modification (not upgrade)

    - by Darktux
    I am trying to test a schema modification. That is, I need to add one of the attributes to the global catalog by modifying the schema, initially in a lab which is an exact replica. My questions are below:
    - What tests need to be done post schema change to determine whether it is safe for production?
    - Apart from measuring changes in DIT size post change, is there a way to find the whole size increase for adding an attribute to the GC pre-change?
    Please let me know if any extra questions or info are required.

    Read the article

  • ab benchmarking testing

    - by Tennyson
    I have a question about an ab benchmarking test. If I need to measure the time the server takes to serve IO.php with a persistent connection, does the persistent connection mean I need to run "./ab -k ........." or "./ab -n 1000 -c 100 ........."? Thanks a lot

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size <5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a directory structure like this:

    000/000/000000001.jpg ... 236/519/236519107.jpg

    This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it; from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when the directories are full of files. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC) or should I create directories as they are needed? From a developer point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following: how much time does it take for an image to be saved when there is no or little space used? How does this change when the space starts to be used up? How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files? Does launching the command sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all? Is this the only thing I have to do to get a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
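
    A minimal sketch of such a measurement script, written here in Java: the three-level layout and file names follow the scheme above, while the ~4 MB payload and the 1'000-files-per-sample batch size are arbitrary placeholder choices (adjust them to the disk space available), and leaf directories are created lazily rather than up front.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Random;

    public class WriteBenchmark {

        // 000/000/000000001.jpg style path: up to 1000 files per leaf directory.
        static Path pathFor(Path root, long id) {
            String name = String.format("%09d", id);   // e.g. 236519107
            return root.resolve(name.substring(0, 3))  // 236
                       .resolve(name.substring(3, 6))  // 519
                       .resolve(name + ".jpg");
        }

        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : "images");
            byte[] payload = new byte[4 * 1024 * 1024]; // ~4 MB dummy "image"
            new Random(42).nextBytes(payload);

            long id = 1;
            for (int sample = 0; sample < 100; sample++) {
                long start = System.nanoTime();
                for (int i = 0; i < 1000; i++, id++) {
                    Path file = pathFor(root, id);
                    Files.createDirectories(file.getParent()); // cheap no-op once the leaf exists
                    Files.write(file, payload);
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("ids %d-%d: %.2f s (%.1f images/s)%n",
                        id - 1000, id - 1, seconds, 1000 / seconds);
            }
        }
    }

    Read times can be sampled the same way with Files.readAllBytes on randomly chosen ids, ideally after the drop_caches command above so the page cache does not hide the disk.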

    Read the article
