Search Results

Search found 26059 results on 1043 pages for 'spring test'.


  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction: Recently a client I worked for had two major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at such short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the size of the production environment. This didn't seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it had been hosted on premise by the infrastructure team. Thank you for taking the time to read this blog post; over this series of posts I'll try to cover, from start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?
    Administration, cost, flexibility and scalability are the areas you want to think about when deciding between your own data centre and Azure!

    How is Microsoft's public cloud offering different from Amazon's? Microsoft's cloud offering is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes it from providers such as Amazon (Amazon only offers IaaS).
    - PaaS – Platform as a Service: fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service provider manages the hardware (patching, upgrades and so forth) as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.
    - IaaS – Infrastructure as a Service: similar to traditional hosting, where a business uses the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they were on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.
    The biggest advantage of PaaS is that you do not have to worry about maintaining the environment; you can spend all your time solving business problems with your solution rather than worrying about the environment. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud, and to Azure in particular, let's discuss the test rig.

    The Load Test Rig – Topology: Now the moment of truth. A big part of getting value from cloud computing is identifying the most suitable workloads to take to the cloud, so I've decided to build a load testing rig where the agents run on Windows Azure. I'll talk you through the topology:
    - User: the user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: the Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it uses the Windows Azure Connect service to orchestrate the test responsibilities across all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, provided the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller collects the performance data from the agents using the traditional WMI mechanisms.
    - Test Agents: the Test Agents run in the Windows Azure public cloud; as soon as the Test Controller issues instructions, the Test Agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the Test Agents, and finally the results are passed back to the Controller.
    - Servers: the web server and DB server are hosted on premise in the datacentre; this is usually the case with business-critical applications, which you probably want to manage yourself.

    Recap and what's next? In this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I also stumbled upon this [Video] on Azure in a nutshell, a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the rig. Hope you enjoyed this post. If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of LINQ to SQL, which requires endless hacks to make it work in the n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even more difficult to use and hack for n-tier scenarios. It's somewhere halfway between a fully disconnected ORM and a fully connected ORM like LINQ to SQL. Some useful features of LINQ to SQL are gone, like automatic deferred loading. If you try to do a simple select with a join, insert, update and delete in a disconnected architecture, you will realize that not only do you need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. In this article I show how I added custom CRUD functions on top of EF's ObjectContext to make it finally work well in a fully disconnected n-tier web application (my open source Web 2.0 AJAX portal, Dropthings) and how I produced a 100% unit-testable, fully n-tier-compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0 most of the problems are solved, but not all, so you should read this article even if you are coding in .NET 4.0. Moreover, there's enough insight here to help you troubleshoot EF-related problems. You might think, "Why bother using EF when LINQ to SQL is doing well enough for me?" LINQ to SQL is not going to get any more innovation from Microsoft; Entity Framework is the future of the persistence layer in the .NET Framework, and all the innovation is happening in the EF world only, which is frustrating. There's a big jump in EF 4.0, so you should plan to migrate your L2S projects to EF soon.

    Read the article

  • Who does code coverage testing?

    - by Athiruban
    Recently, I was given an opportunity to increase the code coverage in a project based on Java Swing, MySQL and other technologies. They told me to bring the code coverage to 100%, while it was only 45% at the time I joined. I am just starting out and not a professional developer, and right from the beginning I felt bad about it, even though I write and understand computer programs well. (The developed code contains a lot of technical constructs like generics, and there is no documentation about the code available.) Has anyone experienced the same situation before? Please tell me who the right person to do this job is.

    Read the article

  • Automation at GUI or API Level in Scrum

    - by Sani Parwani
    I am an Automation Engineer and I use QTP for automation. I wanted to know a couple of things. In a Scrum project with a two-week sprint, how can complete automation be done in that time frame (talking only about the GUI level)? Similarly, how can API-level automated testing be accomplished, especially inside a single sprint? And what exactly is API-level testing? How do I begin with API testing? I assume QTP is certainly not the tool here.

    Read the article

  • aspect parameter validation [closed]

    - by user12558
    Hi, I'm doing a POC using AspectJ. class BaseInfo{..} class UserInfo extends BaseInfo{..} class UserService { public void getUser(UserInfo userInfo){..} public void deleteUser(String userId){..} } I've defined an advice that gets invoked when I pass a UserInfo instance, but when I try to pass a BaseInfo, the advice is not invoked. The block below executes afterMethod as expected for getUser: <aop:pointcut id="aopafterMethod" expression="execution(* UserService.*(..,UserInfo,..))" /> <aop:after pointcut-ref="aopafterMethod" method="afterMethod" /> But when I pass a BaseInfo instead of a UserInfo, the aspect is not triggered. Am I missing something? Kindly help me with this issue.
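    To make the comparison concrete, here is a small annotation-style sketch (purely illustrative, mirroring the POC classes above; this reflects my understanding rather than verified behaviour): execution() is matched against the parameter types declared in the method signature, while args() is checked against the runtime type of the argument, so binding the advice parameter to the supertype BaseInfo would also catch a UserInfo being passed in.

        import org.aspectj.lang.annotation.After;
        import org.aspectj.lang.annotation.Aspect;

        @Aspect
        public class UserServiceAspect {

            // execution() restricts matching to UserService methods; args(info) adds a
            // runtime check, so any single-argument call whose argument is a BaseInfo
            // (or a subclass such as UserInfo) triggers the advice, while
            // deleteUser(String) does not.
            @After("execution(* UserService.*(..)) && args(info)")
            public void afterMethod(BaseInfo info) {
                System.out.println("afterMethod invoked for " + info.getClass().getSimpleName());
            }
        }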

    Read the article

  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring

    - by Brian
    Hi, I'm finding that even though I have code wrapped in Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use the explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect: dao.getSqlMapClient().startTransaction(); dao.getSqlMapClient().startBatch(); int i = 0; for (MyObject obj : allObjects) { dao.storeChange(obj); i++; if (i % DB_BATCH_SIZE == 0) { dao.getSqlMapClient().executeBatch(); dao.getSqlMapClient().startBatch(); } } dao.getSqlMapClient().executeBatch(); dao.getSqlMapClient().commitTransaction(); but if I don't have the opening and closing transaction statements, and instead rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (The database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting, but that's definitely not an issue here.)
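    For reference, this is roughly the Spring-managed shape I'm aiming for, sketched with TransactionTemplate (the transactionTemplate built on my existing Spring transaction manager is an assumption, and the batch size value is arbitrary; MyDao, MyObject, storeChange and DB_BATCH_SIZE come from the code above). Whether Ibatis actually batches the statements when run this way is exactly what I'm asking about:

        import java.sql.SQLException;
        import java.util.List;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class BatchWriter {

            private static final int DB_BATCH_SIZE = 50; // arbitrary value for the sketch

            private final TransactionTemplate transactionTemplate; // wraps the Spring transaction manager
            private final MyDao dao;

            public BatchWriter(TransactionTemplate transactionTemplate, MyDao dao) {
                this.transactionTemplate = transactionTemplate;
                this.dao = dao;
            }

            public void storeAll(final List<MyObject> allObjects) {
                // The whole batch runs inside a single Spring-managed transaction;
                // only the startBatch()/executeBatch() calls stay explicit.
                transactionTemplate.execute(new TransactionCallbackWithoutResult() {
                    @Override
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        try {
                            dao.getSqlMapClient().startBatch();
                            int i = 0;
                            for (MyObject obj : allObjects) {
                                dao.storeChange(obj);
                                i++;
                                if (i % DB_BATCH_SIZE == 0) {
                                    dao.getSqlMapClient().executeBatch();
                                    dao.getSqlMapClient().startBatch();
                                }
                            }
                            dao.getSqlMapClient().executeBatch();
                        } catch (SQLException e) {
                            throw new RuntimeException(e); // surfacing the error lets Spring roll back
                        }
                    }
                });
            }
        }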

    Read the article

  • Using ControllerClassNameHandlerMapping with @Controller and extending AbstractController

    - by whiskerz
    Hey there, actually I thought I was trying something really simple. ControllerClassNameHandlerMapping sounded great for producing a small Spring webapp with a very lean configuration. Just annotate the controller with @Controller, have it extend AbstractController, and the configuration shouldn't need more than this <context:component-scan base-package="test.mypackage.controller" /> <bean id="urlMapping" class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping" /> to resolve my requests and map them to my controllers. I've mapped the servlet to "*.spring", and when calling <approot>/hello.spring all I ever get is an error stating that no mapping was found. If, however, I extend MultiActionController and use something like <approot>/hello/hello.spring, it works. Which somehow irritates me, as I would have thought that if that works, my first try should have worked too. Does anyone have any idea? The two controllers I used looked like this @Controller public class HelloController extends AbstractController { @Override protected ModelAndView handleRequestInternal(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView modelAndView = new ModelAndView("hello"); modelAndView.addObject("message", "Hello World!"); return modelAndView; } } and @Controller public class HelloController extends MultiActionController { public ModelAndView hello(HttpServletRequest request, HttpServletResponse response) throws Exception { ModelAndView modelAndView = new ModelAndView("hello"); modelAndView.addObject("message", "Hello World!"); return modelAndView; } }

    Read the article

  • Dependency Injection into your Singleton

    - by Langali
    I have a singleton that has a Spring-injected DAO (simplified below): public class MyService<T> implements Service<T> { private final Map<String, T> objects; private static MyService instance; MyDao myDao; public void setMyDao(MyDao myDao) { this.myDao = myDao; } private MyService() { this.objects = Collections.synchronizedMap(new HashMap<String, T>()); // start a background thread that runs for ever } public static synchronized MyService getInstance() { if(instance == null) { instance = new MyService(); } return instance; } public void doSomething() { myDao.persist(objects); } } My Spring config will probably look like this: <bean id="service" class="MyService" factory-method="getInstance"/> But this will instantiate MyService during startup. Is there a programmatic way to do a dependency injection of MyDao into MyService, but not have Spring manage MyService? Basically I want to be able to do this from my code: MyService.getInstance().doSomething(); while having Spring inject the MyDao for me.
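    One rough sketch of what I have in mind (MyServiceConfigurer is a name I just made up, not existing code): a small Spring-managed helper that hands the DAO to the singleton when the context starts, so Spring injects MyDao but never owns MyService itself.

        // Hypothetical Spring-managed helper; MyService itself stays unmanaged.
        public class MyServiceConfigurer {

            private MyDao myDao;

            public void setMyDao(MyDao myDao) {
                this.myDao = myDao;
            }

            // Declared as this bean's init-method, so the singleton receives its DAO
            // as soon as the Spring context starts up.
            public void init() {
                MyService.getInstance().setMyDao(myDao);
            }
        }

    The bean definition would only need init-method="init" and the myDao property, and the rest of my code could keep calling MyService.getInstance().doSomething(). Is that a reasonable approach, or is there a cleaner way?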

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder, yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5MB, and there are more than 30 of them! This concerns me, though it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering they are compressed, compiled code, so they could fill a lot of RAM very quickly. My questions are: Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff in the .jar that never gets used? Do .jars get cached somehow for optimized application performance? When a single .jar is loaded, I understand that it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the running server instance), unlike PHP where objects are created on the fly with each request; is this assumption correct? When using Spring, given that I had to include all those fiddly .jars, wouldn't I be better off just using plain Java, with, say, at least an ORM solution like Hibernate? So far Spring has just taken extra configuration time, extra hard drive space, extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There's still an ORM, a unit testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • Injecting the application TransactionManager into a JPA EntityListener

    - by nodje
    I want to use a JPA EntityListener to support Spring Security ACLs. On @PostPersist events, I create a permission corresponding to the persisted entity. I need this operation to participate in the current transaction, and for that to happen I need a reference to the application TransactionManager in the EntityListener. The problem is that Spring can't manage the EntityListener, as it is created automatically when the EntityManagerFactory is instantiated. And in a classic Spring app, the EntityManagerFactory is itself created during the TransactionManager instantiation. <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> So I have no way to inject the TransactionManager via the constructor, as it is not yet instantiated. Making the EntityListener a @Component creates another instance of the EntityListener. Implementing InitializingBean and using afterPropertiesSet() doesn't work, as it's not a Spring-managed bean. Any idea would be helpful, as I'm stuck and out of ideas.
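    For illustration, the kind of workaround I'm considering (SpringContextHolder is a hypothetical class of my own, not a Spring or JPA API): a Spring-managed bridge that exposes the ApplicationContext statically, so the JPA-instantiated EntityListener can look the transaction manager up at event time.

        import org.springframework.context.ApplicationContext;
        import org.springframework.context.ApplicationContextAware;

        // Hypothetical bridge bean: Spring populates it, non-managed objects read from it.
        public class SpringContextHolder implements ApplicationContextAware {

            private static ApplicationContext context;

            @Override
            public void setApplicationContext(ApplicationContext applicationContext) {
                context = applicationContext;
            }

            // Lets code that Spring does not manage (such as a JPA EntityListener)
            // fetch beans from the running application context.
            public static <T> T getBean(Class<T> type) {
                return context.getBean(type);
            }
        }

    Inside the listener's @PostPersist method, SpringContextHolder.getBean(PlatformTransactionManager.class) would then return the application's transaction manager, assuming the holder is itself declared as a bean. It feels like a service-locator hack, though, so better ideas are welcome.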

    Read the article

  • Should tests be self written in TDD?

    - by martin
    We are running a project which we want to develop using test-driven development, and I thought about some questions that came up when initiating the project. One question was who should write the unit tests for a feature. Should the unit test be written by the feature-implementing programmer? Or should the unit test be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations, so it would be too complex to have the tests written by another programmer. What would you say: should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?

    Read the article

  • Problem in Apache CXF (Simple Frontend): 'Already connected'

    - by seanizer
    I am using Apache CXF for the first time. I am trying to establish a connection based on the CXF simple front end (Configuration notes) technology. I can't really see what I've done wrong, but I am getting a weird error (see below). I have also posted this question to [email protected], but I haven't received a response yet. Perhaps someone here can help. The service bean that is wrapped here is a Spring / JPA service that does not know anything about the web; I want to use the simple frontend to publish it as a web service without having to annotate it with JAX-WS etc. (this works in theory). Here's my configuration: Server: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:simple="http://cxf.apache.org/simple" xmlns:soap="http://cxf.apache.org/bindings/soap" xmlns:context="http://www.springframework.org/schema/context" xmlns:cs="http://[www.mycompany.com]/coupon/service" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://cxf.apache.org/bindings/soap http://cxf.apache.org/schemas/configuration/soap.xsd http://cxf.apache.org/simple http://cxf.apache.org/schemas/simple.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd" default-autowire="byType" > <import resource="classpath:META-INF/cxf/cxf.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-http.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-http-binding.xml" /> <import resource="classpath:META-INF/cxf/cxf-servlet.xml" /> <import resource="classpath*:persistenceContext.xml" /> <!-- my service implementation --> <!-- serviceClass points to an interface --> <simple:server id="server" serviceBean="couponService" serviceClass="[com.mycompany].MyServiceInterface" bindingId="http://apache.org/cxf/binding/http" address="/${wsdl.path}" serviceName="cs:couponService" endpointName="cs:couponServicePort" > <simple:dataBinding> <bean class="org.apache.cxf.aegis.databinding.AegisDatabinding" /> </simple:dataBinding> <simple:binding> <soap:soapBinding version="1.2" mtomEnabled="true" /> </simple:binding> </simple:server> <context:property-placeholder location="classpath:service.properties" /> </beans> Client: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:simple="http://cxf.apache.org/simple" xmlns:soap="http://cxf.apache.org/bindings/soap" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:context="http://www.springframework.org/schema/context" xmlns:oxm="http://www.springframework.org/schema/oxm" xmlns:cs="http://[www.mycompany.com]/coupon/service" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://cxf.apache.org/bindings/soap http://cxf.apache.org/schemas/configuration/soap.xsd http://cxf.apache.org/simple http://cxf.apache.org/schemas/simple.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm-3.0.xsd" default-autowire="byType" > <import resource="classpath:META-INF/cxf/cxf.xml" /> <import 
resource="classpath:META-INF/cxf/cxf-extension-http.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-http-binding.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" /> <simple:client id="couponService" wsdlLocation="${wsdl.url}?wsdl" serviceName="cs:couponService" endpointName="cs:couponServicePort" transportId="http://schemas.xmlsoap.org/soap/http" address="${wsdl.url}" bindingId="http://apache.org/cxf/binding/http" serviceClass="[com.mycompany].MyServiceInterface"> <simple:dataBinding> <bean class="org.apache.cxf.aegis.databinding.AegisDatabinding" /> </simple:dataBinding> <simple:binding> <soap:soapBinding mtomEnabled="true" version="1.2" /> </simple:binding> </simple:client> <context:property-placeholder location="classpath:service.properties" /> On the client side, I inject the generated service into my web application (I am using wicket but that should be irrelevant) and when I call service methods on it I get an IllegalStateException from java.net.HttpURLConnection saying the connection is already open. Here’s the stack trace: java.lang.IllegalStateException: IllegalStateException invoking http://localhost:9999/services/coupon: Already connected at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:2058) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:2048) at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:66) at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:639) at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:243) at org.apache.cxf.binding.http.interceptor.DatabindingOutSetupInterceptor.handleMessage(DatabindingOutSetupInterceptor.java:91) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:243) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:487) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:313) at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:265) at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73) at org.apache.cxf.frontend.ClientProxy.invoke(ClientProxy.java:68) at $Proxy30.createIndividualUserCouponsJob(Unknown Source) at [com.mycompany].coupons.web.app.dummycontent.DummyContentInitializer.addSomeIndividualCoupons(DummyContentInitializer.java:84) at [com.mycompany].coupons.web.app.dummycontent.DummyContentInitializer.addSomeCoupons(DummyContentInitializer.java:68) at [com.mycompany].coupons.web.app.dummycontent.DummyContentInitializer.init(DummyContentInitializer.java:50) at org.apache.wicket.Application.callInitializers(Application.java:843) at org.apache.wicket.Application.initializeComponents(Application.java:678) at org.apache.wicket.protocol.http.WicketFilter.init(WicketFilter.java:725) at org.apache.wicket.protocol.http.WicketServlet.init(WicketServlet.java:219) at javax.servlet.GenericServlet.init(GenericServlet.java:241) at org.mortbay.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:433) at 
org.mortbay.jetty.servlet.ServletHolder.doStart(ServletHolder.java:256) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:617) at org.mortbay.jetty.servlet.Context.startContext(Context.java:139) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:500) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:117) at org.mortbay.jetty.Server.doStart(Server.java:220) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40) at [com.mycompany].coupons.web.test.Start.main(Start.java:45) Caused by: java.lang.IllegalStateException: Already connected at java.net.HttpURLConnection.setFixedLengthStreamingMode(HttpURLConnection.java:103) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.thresholdNotReached(HTTPConduit.java:1889) at org.apache.cxf.io.AbstractThresholdOutputStream.close(AbstractThresholdOutputStream.java:99) at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1980) This happens the first time a service call is made, and the only URLConnection that is opened before that is that of the wsdl. I have searched the web for similar problems, but all I found was a bug using rest that has already been fixed. I am trying to use the simple frontend, as my service is not annotated with jax-ws annotations and I would like to keep it that way. Can someone help? Thanks in advance. Sean

    Read the article

  • CSS file in a Spring WAR returns a 404

    - by Rachel G.
    I have a J2EE application that I am building with Spring and Maven. It has the usual project structure. Here is a bit of the hierarchy. MyApplication src main webapp WEB-INF layout header.jsp styles main.css I want to include that CSS file in my JSP. I have the following tag in place. <c:url var="styleSheetUrl" value="/styles/main.css" /> <link rel="stylesheet" href="${styleSheetUrl}"> When I deploy the application, the CSS page isn't being located. When I view the page source, the href is /MyApplication/styles/main.css. Looking inside the WAR, there is a /styles/main.css. However, I get a 404 when I try to access the CSS file directly in the browser. I discovered that the reason for the issue was the Dispatcher Servlet mapping. The mapping looks as follows. <servlet-mapping> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> I imagine the Dispatcher Servlet doesn't know how to handle the CSS request. What is the best way to handle this issue? I would rather not have to change all of my request mappings.

    Read the article

  • Apache internal test servers

    - by user74274
    I want to build a test system in the office. On one box I will install XAMPP and put a website that I want to test on that box. I do not want it connected to the Internet, so my plan is to use a reverse proxy to resolve things like ads and other external links, and I am thinking of using mod_proxy for that. Now my question is: how many boxes do I need? One for XAMPP, one for mod_proxy and one for the server that mod_proxy redirects to, for a total of three. But maybe I could do it with fewer. Can I run mod_proxy on the first box? Is there a better way? Thanks

    Read the article

  • How to create Isolated test environment

    - by Safin09
    Hi all, I am very new to the VMware world. We have VMware vCenter 4 in the production environment, and we have created multiple VLANs through a Cisco switch. I want to know how I can create an isolated test environment for software testing purposes only, so that anything that happens in that test VLAN will not cause trouble in the production environment. Is "host-only networking" the solution, or is there a better way to achieve this? My requirements: A. Hosts should be able to access the Internet and a network share drive, but not the production network. B. Hosts should be able to connect to each other inside the virtual LAN. C. I should be able to take automatic or periodic backups or snapshots and deploy a snapshot when necessary. Whatever your answer is, please give me the steps to do it, if possible. If I need to purchase anything I am ready to do so, but I don't want to spend big money. Many thanks in advance.

    Read the article

  • How to test TempDB performance?

    - by Matt Penner
    I'm getting some conflicting advice on how best to configure our SQL storage with our current SAN. I would like to do some of my own performance testing with a few different configurations. I looked at using SQLIOSim, but it doesn't seem to simulate TempDB. Can anyone recommend a way to test data, log and TempDB performance? What about using a SQL Profiler trace file from our production system? How would I use this to run against my test server? Thanks, Matt

    Read the article

  • nginx redirect TLD to TLD with virtual folder (example.com => example.com/test)

    - by Amund
    I'm running nginx, and in the config file I need the domain example.com to always redirect to example.com/test. I tried various methods of achieving this, but I always got a redirect error. What is the correct way to do this? nginx.conf snippet: server { server_name example.com www.example.com; location / { rewrite ^.+ /test permanent; } } server { listen 80; server_name www.example.com example.com; location / { root /var/www/apps/example/current/public; passenger_enabled on; rails_env production; } } Thanks!

    Read the article

  • Test/Dummy SMTP server for Windows

    - by geoaxis
    I would like to install a test/dummy SMTP server on a Windows 2008 server (a virtual box). I just want to test my web application on the machine itself, so I don't need the mails to go out on the Internet, just to be written to disk (so that I can verify that the mail function was indeed called and the correct data was handed over to SMTP). Can you recommend a tool? I guess starting your own SMTP server in Python is an option, but I am looking for a simple (ready-to-use) solution targeted at test systems. I will need to integrate it with automated tests (Selenium) at a later stage. Thanks

    Read the article

  • Process of carrying out a BER Test

    - by data
    I am subscribed to an ISP supplying a 3 Mb ADSL line. Lately (for the last 4 weeks) speeds have dropped from the usual average downstream speed of ~250kbps to just 0.14Mbps (according to speedtest.net), and employees are complaining about lack of access to the server. I have been calling customer support and logging calls for the last 3 weeks, but they have been unable to determine the source of the problem beyond carrying out a few bitstream tests and checking the DHCP renewal times. I am going to call back and suggest carrying out a BER test. What type of equipment is needed to carry out this test? I have access to a wide range of Cisco networking equipment. Other: we don't need a leased line as there are fewer than ten employees.

    Read the article

  • -w test on OS X gives command not found error

    - by RobV
    I'm writing a bash script which I'm testing on OS X, though it will ultimately run on a standard Linux environment, and I'm running into a weird error. I have tests like this in my script: if [ ! -w $BP ]; then echo "'$1' not writable" exit 1 fi which seems pretty sane to me and works fine under Linux, but when trying to test on OS X I get the following error message: startSvr.sh: line 135: [: missing `]' startSvr.sh: line 135: -w: command not found So is this a case of OS X not supporting the -w test, or is there some other reason this isn't working for me, e.g. the environment?

    Read the article

  • Couchdb failing test suite on Linux

    - by user52674
    Hi, I've been trying to install CouchDB on my WebFaction virtual server. I followed the latest instructions from the WebFaction forum (see: http://forum.webfaction.com/viewtopic.php?id=2355) and it runs (just about), but Futon is very sluggish and I get 502 errors. When I run the test suite it fails on multiple tests. WebFaction support have been great but don't have the Erlang experience to interpret the error logs. Can anyone help me figure out what might be wrong? The failing tests are: basics, all_docs, attachments, attachments_multipart, attachment_names, compact, config, conflicts, delayed_commits, design_docs, design_options. All the errors are: Exception raised: {"error":"unknown","reason":"\u000d\u000a502 Bad Gateway\u000d\u000a\u000d\u000a<\h1502 Bad Gateway\u000d\u000a nginx\u000d\u000a\u000d\u000a\u000d\u000a"} except for 'compact', which also has: Assertion failed: xhr.responseText == "This is a base64 encoded text" Assertion failed: xhr.getResponseHeader("Content-Type") == "text/plain" I'm stumped. Anybody know what these indicate? AL

    Read the article

  • What's a worthwhile test for a new HD?

    - by Michael Kohne
    I work for a company that uses standard 2.5" SATA HDs in our product. We presently test them by running the Linux 'badblocks -w' command on them when we get them, but they are 160 GB drives, so that takes about 5 hours (we boot Parted Magic on a PC to do the scan). We don't actually build that many systems at a time, so this is doable but seriously annoying. Is there any research or anecdotal evidence on what a good incoming test for a hard drive should be? I'm thinking that we should just wipe them with all zeros, write out our image, and do a full-drive read-back. That would end up being only about 1 hour 45 minutes total. Given that drives do block remapping on their own, would what I've proposed show up any infant mortality just as well as running badblocks?

    Read the article

  • Use test to check for condition with find and execdir option

    - by slosd
    I think I can keep my question short. Why does the following command produce no output? find /usr/share/themes -mindepth 1 -maxdepth 1 -type d -execdir test -d {}/gnome-shell \; I expected it to print all folders in /usr/share/themes that contain a folder gnome-shell. Several websites suggest that this usage of test as a command in exec/execdir is possible. From man find: -exec command ; Execute command; true if 0 status is returned. [...]

    Read the article
