Search Results

Search found 53818 results on 2153 pages for 'system testing'.


  • Logic in Entity Component Systems

    - by aaron
    I'm making a game that uses an Entity/Component architecture, basically a port of the Artemis framework to C++. The problem arises when I try to make a PlayerControllerComponent. My original idea was this:

        class PlayerControllerComponent : public Component {
        public:
            virtual void update() = 0;
        };

        class FpsPlayerControllerComponent : public PlayerControllerComponent {
        public:
            void update() {
                // handle input
            }
        };

    and have a system that updates PlayerControllerComponents, but I found out that the Artemis framework does not look at sub-classes the way I thought it would. So, all in all, my question here is: should I make the framework aware of subclasses, or should I add a new Component-like object that is used for logic?
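
    A minimal sketch of the second option (a generic "script" component run by a single system), with hypothetical names rather than the Artemis API, and written in C# here purely for brevity; the same shape ports directly to the C++ framework:

        using System;
        using System.Collections.Generic;

        interface IComponent { }

        class ScriptComponent : IComponent
        {
            // The per-frame behaviour (e.g. FPS input handling) is stored as data.
            public Action<Entity, float> OnUpdate;
        }

        class Entity
        {
            public List<IComponent> Components = new List<IComponent>();

            public T Get<T>() where T : class, IComponent
            {
                foreach (var c in Components)
                {
                    var match = c as T;
                    if (match != null) return match;
                }
                return null;
            }
        }

        class ScriptSystem
        {
            public void Update(IEnumerable<Entity> entities, float dt)
            {
                foreach (var e in entities)
                    e.Get<ScriptComponent>()?.OnUpdate?.Invoke(e, dt);
            }
        }

    With this shape, FpsPlayerControllerComponent stops being a subclass and becomes just a ScriptComponent whose OnUpdate handles input, so the framework only ever sees one component type and needs no changes.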

    Read the article

  • How to make Ubuntu 14.04 run with less lag?

    - by King Shimkus
    I recently upgraded from Ubuntu 12.04 LTS to 14.04 LTS and for the first time it worked! However, whenever I move my mouse over an application icon on the Unity shell, the animation takes forever to show me the name of the application (a.k.a. the tooltip). The same happens with menus and sub-menus. Other than that, it is slow overall. I just want to know if there are any tips to fix this or make my system faster. This is what it says when I type in glxinfo | grep renderer:
        GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 128 bits)
    Output of lspci | grep VGA:
        00:02.0 VGA compatible controller: Intel Corporation 82865G Integrated Graphics Controller (rev 02)

    Read the article

  • What would be the market life of a JVM based software framework?

    - by Nav
    I saw how Struts 1 lasted from 2000 to 2013. I hear that people are moving from Struts 2 to Spring. But for a project that may need to be maintained for a decade or two, would it be advisable to opt for a framework, or to code directly with servlets and jQuery? Can a system architecture really be designed keeping a particular framework in mind? What really is the market life of a framework? Do the creators of a framework build it with the assumption that it will become obsolete in a decade?

    Read the article

  • What exactly is a chroot? Is it similar to a simultaneous dual boot?

    - by mathematician1975
    It has been suggested to me that using a chroot might solve my problem of building an application that must run on an embedded device. I have inferred from this description that it is somehow similar to creating the embedded environment locally on my machine, which I can then develop against from my desktop development machine. Is this the right way to look at the functionality, or have I totally misunderstood? In order to get some idea of how it works I read https://wiki.ubuntu.com/DebootstrapChroot, which I will follow to attempt to make a chroot for an old Ubuntu version on my machine. However, as I am a total Linux novice, I am a bit concerned that, since I do not entirely know what I am doing, is there any way I could end up with an unusable system? Is this something that a novice should even attempt?

    Read the article

  • Super slow and laggy?

    - by Mystogan
    I'm a native Windows user, running 7 Home Premium at the moment. In the past I installed Ubuntu 10 on my netbook, I believe, but was a bit 'scared' by it. Anyway, I made a 100 GB partition on my desktop PC and installed Ubuntu 12.10 on it. It was pretty slow at first; I thought it had to install updates or something, so I rebooted it. Now, after the login, it looks like everything freezes: I can move my mouse, but that's it! I can see my desktop and launcher, but the task bar doesn't display the time and misc system info, only a black bar. I downloaded the Android SDK pack and installed VLC media player before I rebooted. My computer info:
    - 1 TB HDD
    - 8 GB DDR3 RAM
    - ATI Radeon 4890 video card (1 GB)
    - AMD Phenom II X4 Black 3.2 GHz CPU (64-bit)
    Thanks in advance!

    Read the article

  • Ubuntu 14.04 crashes after several seconds

    - by AndrewJoyce86
    I just installed Ubuntu on my Compaq desktop and everything seemed to go well. The desktop appeared just like it should... only then this happens, seconds later: Crash. Something tells me that this is not supposed to be happening. I did not partition; I replaced Windows 7 with Ubuntu. When I try to run off the USB stick I used, I get this: USB. So, what do I do now? Should I try to reinstall? Is the ISO bad somehow? System specs:
    - Compaq Model CQ5205Y
    - AMD Sempron LE-1300
    - 2 GB RAM
    - 250 GB HDD

    Read the article

  • System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse request failed with HTTP status 404

    - by John Galt
    I am trying to make some enhancements to a production web app. After quite a bit of unit testing on my WinXP IIS 5.1 development machine, everything works on my localhost so I used the Visual Studio 2008 PUBLISH dialog on my Dev PC to push the following projects to a staging server: the primary web app the "primary" webservice (the home page tries to invoke this WS) a "secondary" webservice (not yet a problem because home page does not invoke this WS) I get the following when I try to browse to the home page of the web app typing this into my browser: link text Server Error in '/zVersion2' Application. The request failed with HTTP status 404: Not Found. Description: An unhandled exception occurred during the execution of the current web request.Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.WebException: The request failed with HTTP status 404: Not Found. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [WebException: The request failed with HTTP status 404: Not Found.] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431289 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 ProxyZipeeeService.WSZipeee.Zipeee.GetMessageByType(Int32 iMsgType) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\ProxyZipeeeService\ProxyZipeeeService\Web References\WSZipeee\Reference.vb:2168 Zipeee.frmZipeee.LoadMessage() in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:43 Zipeee.frmZipeee.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3607; ASP.NET Version:2.0.50727.3082 Here is a bit of the corresponding source code: Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load If Not Page.IsPostBack Then LoadMessage() End If End Sub Private Sub LoadMessage() Dim iCnt As Integer Dim iValue As Integer dsStandardMsg = wsZipeee.GetMessageByType(BizConstants.MsgType.Standard) End Sub I suspect I may have configured things incorrectly on the staging server. The staging server is Win Server 2003 ServicePack 2 running IIS 6.0. When I published the primary site and the 2 webservices on the staging server called MOJITO I created the physical directories for each on the D drive. Then using INETMGR, I configured the following virtual directories: zVersion2 zVersion2wsSQL zVersion2wsEmergency All of the above are configured to use a new application pool I setup and named zVersion2aspNet20. The default web site for this machine MOJITO is configured to use ASP.NET 1.1 and the IP address is set to (All Unassigned). 
The production versions of the latter 2 webservices run on the MOJITO machine (named ZipeeeService and EmergencyService respectively). Can my staging versions of the above webservices (named zVersion2wsSQL and zVersion2wsEmergency respectively) co-exist on the same web server with the same IP address? Please note that when I test the zVersion2wsSQL webservice independently (from INETMGR right-mouse and click Browse) it works as expected (i.e. presenting all the methods of the webservice) like this snippet: GetMessageByType MessageName="Get_x0020_Message_x0020_By_x0020_Type" I can test this webmethod by clicking on it and it presents the Test dialog (because it takes a simple datatype and I am invoking it on localhost (i.e. MOJITO): **Get Message By Type** **Test** To test the operation using the HTTP POST protocol, click the 'Invoke' button. Parameter Value iMsgType: _______ [INVOKE button] SOAP 1.1 ....etc. I fear I may have rambled with too much information so I will stop but I hope someone can help me as I cannot understand why this request results in a "not found". Thanks.

    Read the article

  • MSTest Test Context Exception Handling

    - by Flip
    Is there a way that I can get to the exception that was handled by the MSTest framework, using the TestContext or some other method on a base test class? If an unhandled exception occurs in one of my tests, I'd like to spin through all the items in the exception's Data dictionary and display them in the test result, to help me figure out why the test failed (we usually add data to the exception to help us debug in the production environment, so I'd like to do the same for testing). Note: I am not testing that an exception was SUPPOSED TO HAPPEN (I have other tests for that); I am testing a valid case, I just need to see the exception data. Here is a code example of what I'm talking about:

        [TestMethod]
        public void IsFinanceDeadlineDateValid()
        {
            var target = new BusinessObject();
            SetupBusinessObject(target);

            // How can I capture this in the test context so I can display all the data
            // in the exception in the test result...
            var expected = 100;
            try
            {
                Assert.AreEqual(expected, target.PerformSomeCalculationThatMayDivideByZero());
            }
            catch (Exception ex)
            {
                ex.Data.Add("SomethingImportant", "I want to see this in the test result, as it's important");
                ex.Data.Add("Expected", expected);
                throw;
            }
        }

    I understand there are issues around why I probably shouldn't have such an encapsulating method, but we also have sub-tests to test all the functionality of PerformSomeCalculation... However, if the test fails, 99% of the time it passes when I rerun it, so I can't debug anything without this information. I would also like to do this on a GLOBAL level, so that if any test fails, I get the information in the test results, as opposed to doing it for each individual test. Here is the code that would put the exception info in the test results:

        public void AddDataFromExceptionToResults(Exception ex)
        {
            StringBuilder whereAmI = new StringBuilder();
            var holdException = ex;
            while (holdException != null)
            {
                Console.WriteLine(whereAmI.ToString() + "--" + holdException.Message);
                foreach (var item in holdException.Data.Keys)
                {
                    Console.WriteLine(whereAmI.ToString() + "--Data--" + item + ":" + holdException.Data[item]);
                }
                holdException = holdException.InnerException;
            }
        }
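
    One possible shape for the GLOBAL version, as a minimal sketch with made-up names rather than a built-in MSTest hook (TestContext does not expose the caught exception): route each test body through a helper on a base class, which logs the Data entries of the whole exception chain to the console (which MSTest captures into the test result) and then rethrows.

        using System;

        public abstract class ExceptionReportingTestBase
        {
            // Console output is captured by MSTest and shown with the test result.
            protected void RunAndReport(Action testBody)
            {
                try
                {
                    testBody();
                }
                catch (Exception ex)
                {
                    for (var e = ex; e != null; e = e.InnerException)
                    {
                        Console.WriteLine("--" + e.Message);
                        foreach (var key in e.Data.Keys)
                            Console.WriteLine("--Data--" + key + ":" + e.Data[key]);
                    }
                    throw;   // the test still fails, but the data ends up in the result
                }
            }
        }

    Each test then wraps its body, e.g. RunAndReport(() => Assert.AreEqual(expected, target.PerformSomeCalculationThatMayDivideByZero()));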

    Read the article

  • How to test views in ASP.NET MVC2 (ala RSpec)

    - by Dmitriy Nagirnyak
    Hi, I really miss the ability to test views independently of controllers, the way RSpec allows. What I want to do is perform assertions on the rendered view (where no controller is involved!). In order to do so I should provide the required Model, ViewData and maybe some details from HttpContextBase (when will we get rid of HttpContext!). So far I have not found anything that allows doing it. Also, it might depend heavily on the ViewEngine being used. Things that views might contain:
    - Partial views (may be nested deeply).
    - Master pages (or similar in other view engines).
    - HTML helpers generating links and other elements.
    - Generally, almost anything within reason :).
    Also, please note that I am not talking about client-side testing, so Selenium is not related to this at all. It is just plain .NET testing. So are there any options to actually test views? Thanks, Dmitriy.

    Read the article

  • Test Views in ASP.NET MVC2 (ala RSpec)

    - by Dmitriy Nagirnyak
    Hi, I really miss the ability to test views independently of controllers, the way RSpec does it. What I want to do is perform assertions on the rendered view (where no controller is involved!). In order to do so I should provide the required Model, ViewData and maybe some details from HttpContextBase (when will we get rid of HttpContext!). So far I have not found anything that allows doing it. Also, it might depend heavily on the ViewEngine being used. Things that views might contain:
    - Partial views (may be nested deeply).
    - Master pages (or similar in other view engines).
    - HTML helpers generating links and other elements.
    - Generally, almost anything within reason :).
    Also, please note that I am not talking about client-side testing, so Selenium is not related to this at all. It is just plain .NET testing. So are there any options to actually test views? Thanks, Dmitriy.

    Read the article

  • How do I implement / build / create an 'in-memory database' for my unit tests?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I did more regression testing than unit testing because I also included my database layer, thus going to the database every time. So I implemented Unity to inject a fake database layer, but of course I want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that make me create a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :)
    At the end of this question I'll give an explanation of the situation I got into on the project I just started on, and I was wondering if this was the way to go. Michel
    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist etc. the data. It sounds weird that there is so much work in the fake layer, and so little in the real layer. I hope this all makes sense :)
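
    In many codebases the "in-memory database" is nothing more elaborate than a fake repository backed by a list, registered with Unity in place of the real data layer. A minimal sketch with hypothetical types:

        using System.Collections.Generic;
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);
        }

        // Registered with Unity only in the test configuration; the production
        // implementation talks to SQL Server / Entity Framework instead.
        public class InMemoryCustomerRepository : ICustomerRepository
        {
            private readonly List<Customer> _store = new List<Customer>();

            public Customer GetById(int id)
            {
                return _store.FirstOrDefault(c => c.Id == id);
            }

            public void Save(Customer customer)
            {
                _store.RemoveAll(c => c.Id == customer.Id);
                _store.Add(customer);
            }
        }

    Tests then exercise the code under test against InMemoryCustomerRepository, so nothing like SQL Server needs to be rebuilt; the XML-file approach described above is the same idea with a heavier storage format.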

    Read the article

  • Best practice to test a web application, regarding domain name and integration with external service

    - by ycseattle
    I have run into these problems several times and was never able to find a comfortable solution. Let's say my website has the domain name MyDomain.com. When I run the tests on the test machine (a continuous integration server), I modify the HOSTS file on this machine so that MyDomain.com is mapped to the local machine instead of the real production server. This doesn't work very well in many situations. For example, my application creates subdomain names like user1.MyDomain.com dynamically, and this makes it difficult to keep the testing flexible. Another problem is that my web application interacts with Amazon S3, and sometimes other services like Amazon Simple Queue Service. I am only comfortable if I include these interactions in my tests, but I am never happy with my solution for mixing testing and production on the Amazon services. Could somebody offer some tips on these issues? I would like to make my testing framework clean and flexible. I am sure this is a common question for all web applications and there must be a mature way to deal with it. Thanks!
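
    For the external services, one common approach (a sketch with hypothetical names, not a prescribed AWS pattern) is to put a small gateway interface in front of S3/SQS and swap in a fake for most tests, reserving the real implementation for a small integration suite that runs against a dedicated test bucket or queue:

        using System.Collections.Generic;

        public interface IBlobStore
        {
            void Put(string key, byte[] content);
            byte[] Get(string key);
        }

        // Used by unit tests; an S3BlobStore (not shown) would wrap the AWS SDK and
        // be exercised only in a separate, smaller integration-test suite.
        public class InMemoryBlobStore : IBlobStore
        {
            private readonly Dictionary<string, byte[]> _objects = new Dictionary<string, byte[]>();

            public void Put(string key, byte[] content)
            {
                _objects[key] = content;
            }

            public byte[] Get(string key)
            {
                byte[] content;
                return _objects.TryGetValue(key, out content) ? content : null;
            }
        }

    The integration suite can point the real implementation at a bucket or queue name taken from configuration (e.g. a hypothetical mydomain-test bucket), so production data is never touched.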

    Read the article

  • Accessing Web.config directly in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ to SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in Web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file:

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }

    Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?
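
    A minimal sketch of one workaround (the physicalWebRoot value is an assumption; point it at the site folder on disk): fall back to a caller-supplied physical path when HttpContext.Current is null, as it is in NUnit setup code, and only use Server.MapPath inside a real request.

        using System.Configuration;
        using System.IO;
        using System.Web;

        public static class WebConfigFile
        {
            public static Configuration Open(string physicalWebRoot)
            {
                var ctx = HttpContext.Current;
                string configPath = ctx != null
                    ? ctx.Server.MapPath("~/web.config")
                    : Path.Combine(physicalWebRoot, "web.config");

                var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
                return ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
            }
        }

    The test setup can then open the file with a configured path to the web project, change the connection string, and call Save() on the returned Configuration before the WatiN tests run.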

    Read the article

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:
    - Type one command to run the test suite for ANY project I find on GitHub.
    - Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:
    - What is your (the SO reader and GitHub project committer) test environment setup using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop it if desired?
    - What are the people writing the Paperclip tests and Authlogic tests doing? What is their setup?
    Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been
        installed, the message is printed to stderr. Under Windows, the message is sent to the
        debugger. If you are using the default message handler this function will abort on Unix
        systems to create a core dump. On Windows, for debug builds, this function will report a
        _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the
        output at runtime, install your own message handler with qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:
    - Type one command to run the test suite for ANY project I find on GitHub.
    - Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with autotest and Shoulda.
    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:
    - What is your (the SO reader and GitHub project committer) test environment setup using autotest, so that whenever you git clone a gem you can run the tests and autotest-develop it if desired?
    - What are the people writing the Paperclip tests and Authlogic tests doing? What is their setup?
    Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to unit testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project that I want to build tests for from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, related to a database table, we will want to test - well, the CRUD:
    - creating an object and seeing whether it exists
    - querying its properties
    - changing some properties
    - changing some properties to incorrect values
    - and deleting it again.
    As for how to break things, there are "fail" tests common to most CRUD classes, like:
    - Invalid input data types
    - A number as the ID key that exceeds the range of the chosen data type
    - Input in an incorrect character encoding
    - Input that is too long
    And so on and so on. For a unit test concerned with file operations, the list of "breaking things" could be:
    - Invalid characters in the file name
    - File name too long
    - File name uses an incorrect protocol or path
    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units that are being tested. Now my questions are:
    - Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, and if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go?
    - If I am correct: are there existing definitions, lists, cheat sheets for such patterns?
    - Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns?
    - Is there any assistance - in the form of checklists, or software - to aid in writing complete tests?

    Read the article

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded. An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (but maybe not ipchains, which it uses in the examples) the -C option seems not to simulate a test packet running through all the rules, but rather checks for the existence of an exactly matching rule. This has little value as a test. I want to assert that the rules have the desired effect, not just that they exist. I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?

    Read the article

  • Test server on a local network with XAMPP

    - by hopscotch1978
    Hi, I'm not very proficient with networks and could use some help. I've got a Win 7 desktop with XAMPP which acts as my local dev machine. I've configured a virtual host on the desktop which I'm able to access fine. If I'm understanding things correctly, the virtual host uses port 80 (<VirtualHost 127.0.0.1:80>). I've just tried to configure a separate Win XP laptop on the local wireless network to connect to the main desktop for testing purposes. I've added the IP address and virtual host name to my Hosts file on the laptop. My virtual host is imaginatively named "virtualhost1". When I type this into my laptop browser, it connects correctly to the main desktop and I get the XAMPP welcome screen. But I can't seem to get to the actual site, just the XAMPP welcome screen. It kind of jumps the browser to http://virtualhost1/xampp/. I think it's a port issue of some sort but I have no idea how to resolve it. I would get the same XAMPP welcome screen on my desktop if I omitted ":80" from the virtual host declaration. On my main desktop, typing "virtualhost1" to the browser address bar gives me the site correctly, not the XAMPP welcome screen. Help would be appreciated. Thank you.

    Read the article

  • @ContextConfiguration in Spring 3.0 give me No default constructor found

    - by atomsfat
    I have already do the test using AbstractDependencyInjectionSpringContextTests and it works but in spring 3 it is deprecated, so I decided to try @ContextConfiguration but spring say that default constructor is not found, I check and the class doesn't have any constructor. If I use this test spring give the object. package atoms.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import java.util.List; import javax.persistence.EntityManager; import org.springframework.test.AbstractDependencyInjectionSpringContextTests; /** * * @author tsalazar */ public class ClienteServiceImplDeTest extends AbstractDependencyInjectionSpringContextTests{ private ClienteService clienteService; public ClienteService getClienteService() { return clienteService; } public void setClienteService(ClienteService clienteService) { this.clienteService = clienteService; } public ClienteServiceImplDeTest(String testName) { super(testName); } @Override protected String[] getConfigLocations() { return new String[]{"PersistenceAppCtx.xml", "ServicesAppCtx.xml"}; } /** * Test of buscaCliente method, of class ClienteServiceImplDeTest. */ public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); String nombre = ""; System.out.println(clienteService); System.out.println("======================================="); } } But if I use this, spring say that default constructor is not found. package atoms.config.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import org.junit.runner.RunWith; import org.junit.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.transaction.TransactionConfiguration; import org.springframework.transaction.annotation.Transactional; /** * * @author tsalazar */ @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = {"/PersistenceAppCtx.xml", "/ServicesAppCtx.xml"}) @TransactionConfiguration(transactionManager = "transactionManager") @Transactional public class ClienteServiceImplTest { @Autowired private ClienteService clienteService; /** * Test of buscaCliente method, of class ClienteServiceImpl. */ @Test public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); System.out.println(clienteService); System.out.println("======================================="); } } This how I do the implementacion: package atoms.portales.servicios; import atoms.portales.model; /** * Una interface para obtener clientes, con sus surcursales, servicios, layouts * y contratos. Tambien soporta operaciones CRUD. 
* @author tsalazar */ public interface ClienteService { /** * Busca clientes a partir del nombre * @param nombre */ public Cliente buscaCliente(String nombre); } the implemetacion package atoms.portales..servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import org.springframework.stereotype.Repository; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; /** * A JPA-based implementation.Delegates to a JPA entity manager to issue data access calls * against the backing repository. The EntityManager reference is provided by the managing container (Spring) * automatically. */ @Service("clienteSerivice") @Repository public class ClienteServiceImpl implements ClienteService { public ClienteServiceImpl() { } private EntityManager em; @PersistenceContext public void setEntityManager(EntityManager em) { this.em = em; } @Transactional(readOnly = true) public Cliente buscaCliente(String nombre) { Cliente cliente = em.getReference(Cliente.class, 1l); return cliente; } } spring configuration: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd"> <!-- Instructs Spring to perfrom declarative transaction management on annotated classes --> <tx:annotation-driven /> <!-- Drives transactions using local JPA APIs --> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <!-- Creates a EntityManagerFactory for use with the Hibernate JPA provider and a simple in-memory data source populated with test data --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" /> </property> </bean> <!-- Deploys a in-memory "booking" datasource populated --> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver" /> <property name="url" value="jdbc:hsqldb:hsql://localhost/test" /> <property name="username" value="sa" /> <property name="password" value="" /> </bean> <context:component-scan base-package="atoms.portales.servicios" /> </beans> This is the persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="configuradorPortales" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <class>atoms.portales.model.Cliente</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> <property name="hibernate.show_sql" value="true"/> <property 
name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider"/> </properties> </persistence-unit> </persistence> This is the error that give me:

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY: (If you don't like to read much, down below is the question :) ) Where I work we have two HP RP2470 servers same hardware same number of hard drives same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not simple) as I have to make the the other server exactly the same. However the old version of OS (UX 11.00 is a history now) and the old software running on it, have made my task almost impossible. On the production server there is also a cloning/recover utility Ignite-UX. I tried many times to create a recovery tape with it. Then when I load the tape on the backup server, it succeeds with the loading of the tape (no errors no warnings) but on the next restart it fails to load the OS :S and drops into HP`s ISL prompt. --- THE QUESTION: Is there an alternate way to create a clone of the Unix System? The environment is: 1. 2x HP RP2470 Servers (non-Intel), same hardware, same number od HDDs (two each of them) same everything. 2. OS running: HP-UX 11.00 The production server has to be cloned without downtime - sadly :( as I hope that they will reconsider on this one For example (like on Windows platforms), if you try to copy an entire HDD with Windows inside on another HDD, and then put that HDD on another PC it will still work, as long as the hardware is the same. Can I do something like that with a Unix system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load the HDD into the other server? (If you haven't read the story the servers are exactly the same) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does any one have a similar experience? --- UPDATE: 26.01.2012 NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recover logs from the Ignite Tape.. someone with more exp. might notice something.. ... --- READING CONTENTS OF THE IGNITE TAPE --- --- OUTPUT OMITED --- ... ... x ./configure3, 413696 bytes, 808 tape blocks x ./monitor_bpr, 20480 bytes, 40 tape blocks * Download_mini-system: Complete * Loading_software: Begin * Installing boot area on disk. * Enabling swap areas. * Backing up LVM configuration for "vg00". * Processing the archive source (Recovery Archive). * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive). * Positioning the tape (/dev/rmt/0mn). * Archive extraction from tape is beginning. Please wait. * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive). * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l". * Running in recovery mode (os_arch_post_l). * Running the ioinit command ("/sbin/ioinit -c") * Creating device files via the insf command. 
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0 insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0 insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0 insf: Installing special files for stape instance 0 address 0/0/1/0.3.0 insf: Installing special files for btlan instance 0 address 0/0/0/0 insf: Installing special files for btlan instance 1 address 0/2/0/0 insf: Installing special files for pseudo driver dlpi insf: Installing special files for pseudo driver kepd insf: Installing special files for pseudo driver framebuf insf: Installing special files for pseudo driver sad * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions. * Constructing the bootconf file. * Setting primary boot path to "0/0/1/1.15.0". * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload". * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload". NOTE: tlinstall is searching filesystem - please be patient NOTE: Successfully completed * Loading_software: Complete * Build_Kernel: Begin NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed. * Build_Kernel: Complete * Boot_From_Client_Disk: Begin * Rebooting machine as expected. NOTE: Rebooting system. sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty Closing open logical volumes... Done Console reset done. Boot device reset done. ********** VIRTUAL FRONT PANEL ********** System Boot detected ***************************************** LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF OFF ON ON LED State: Running non-OS code. (i.e. Boot or Diagnostics) ... ... ... --- SERVER IS PERFORMING POST SEQUENCE HERE --- --- OUTPUT OMITED --- ... ... ... ***************************************** ************ EARLY BOOT VFP ************* End of early boot detected ***************************************** Firmware Version 43.50 Duplex Console IO Dependent Code (IODC) revision 1 ------------------------------------------------------------------------------ (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved ------------------------------------------------------------------------------ Processor Speed State CoProcessor State Cache Size Number State Inst Data --------- -------- --------------------- ----------------- ------------ 0 650 MHz Active Functional 750 KB 1.5 MB 1 650 MHz Idle Functional 750 KB 1.5 MB Central Bus Speed (in MHz) : 120 Available Memory : 2097152 KB Good Memory Required : 16140 KB Primary boot path: 0/0/1/1.15 Alternate boot path: 0/0/2/1.15 Console path: 0/0/4/1.643 Keyboard path: 0/0/4/0.0 Processor is starting autoboot process. To discontinue, press any key within 10 seconds. 10 seconds expired. Proceeding... Trying Primary Boot Path ------------------------ Booting... Boot IO Dependent Code (IODC) revision 1 HARD Booted. ISL Revision A.00.38 OCT 26, 1994 ISL booting hpux ISL>

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.
    1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms, but Document Length: 0 bytes and HTML transferred: 0 bytes. The Transfer rate is 7.9 Kb/s.
    2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and HTML transferred total.
    The APC cache status reports identical hit counts from both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? and 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

  • NHibernate + Remoting = ReflectionPermission Exception

    - by Pedro
    Hi all, We are dealing with a problem when using NHibernate with Remoting in a machine with full trust enviroment (actually that's our dev machine). The problem happens when whe try to send as a parameter an object previously retrieved from the server, that contains a NHibernate Proxy in one of the properties (a lazy one). As we are in the dev machine, there's no restriction in the trust level of the web app (it's set to Full) and, as a plus, we've configured NHibernate's and Castle's assemblies to full trust in CAS (even thinking that it'd not be necessary as the remoting app in IIS has the full trust level). Does anyone have any idea of what can be causing this exception? Stack trace below. InnerException: System.Security.SecurityException Message="Falha na solicitação da permissão de tipo 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'." Source="mscorlib" GrantedSet="" PermissionState="<IPermission class=\"System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089\"\r\nversion=\"1\"\r\nFlags=\"ReflectionEmit\"/>\r\n" RefusedSet="" Url="" StackTrace: em System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) em System.Security.CodeAccessPermission.Demand() em System.Reflection.Emit.AssemblyBuilder.DefineDynamicModuleInternalNoLock(String name, Boolean emitSymbolInfo, StackCrawlMark& stackMark) em System.Reflection.Emit.AssemblyBuilder.DefineDynamicModuleInternal(String name, Boolean emitSymbolInfo, StackCrawlMark& stackMark) em System.Reflection.Emit.AssemblyBuilder.DefineDynamicModule(String name, Boolean emitSymbolInfo) em Castle.DynamicProxy.ModuleScope.CreateModule(Boolean signStrongName) em Castle.DynamicProxy.ModuleScope.ObtainDynamicModuleWithWeakName() em Castle.DynamicProxy.ModuleScope.ObtainDynamicModule(Boolean isStrongNamed) em Castle.DynamicProxy.Generators.Emitters.ClassEmitter.CreateTypeBuilder(ModuleScope modulescope, String name, Type baseType, Type[] interfaces, TypeAttributes flags, Boolean forceUnsigned) em Castle.DynamicProxy.Generators.Emitters.ClassEmitter..ctor(ModuleScope modulescope, String name, Type baseType, Type[] interfaces, TypeAttributes flags, Boolean forceUnsigned) em Castle.DynamicProxy.Generators.Emitters.ClassEmitter..ctor(ModuleScope modulescope, String name, Type baseType, Type[] interfaces, TypeAttributes flags) em Castle.DynamicProxy.Generators.Emitters.ClassEmitter..ctor(ModuleScope modulescope, String name, Type baseType, Type[] interfaces) em Castle.DynamicProxy.Generators.BaseProxyGenerator.BuildClassEmitter(String typeName, Type parentType, Type[] interfaces) em Castle.DynamicProxy.Generators.BaseProxyGenerator.BuildClassEmitter(String typeName, Type parentType, IList interfaceList) em Castle.DynamicProxy.Generators.ClassProxyGenerator.GenerateCode(Type[] interfaces, ProxyGenerationOptions options) em Castle.DynamicProxy.Serialization.ProxyObjectReference.RecreateClassProxy() em Castle.DynamicProxy.Serialization.ProxyObjectReference.RecreateProxy() em Castle.DynamicProxy.Serialization.ProxyObjectReference..ctor(SerializationInfo info, StreamingContext context) Thank you in advance.

    Read the article

  • How did My Object Get into ASP.NET Page State?

    - by Paul Knopf
    I know what this error is, how to fix it, etc. My question is that I don't know why the page I am currently developing is throwing this error, when I am not using the foo class directly in any way, nor am I setting anything in the ViewState. I am using postbacks a lot, but like I said, I am not storing anything in the ViewState except one integer. I am using NHibernate, if that is relevant. Any idea why I need to mark these classes as serializable when they aren't being used? Where should I start investigating?

        [SerializationException: Type 'FlexiCommerce.Persistence.NH.ContentPersister' in Assembly 'FlexiCommerce.Persistence.NH, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable.]
           System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type) +9434541
           System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context) +247
           System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo() +160
           System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter, SerializationBinder binder) +218
           System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write(WriteObjectInfo objectInfo, NameInfo memberNameInfo, NameInfo typeNameInfo) +388
           System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck) +444
           System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Header[] headers, Boolean fCheck) +133
           System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph) +13
           System.Web.UI.ObjectStateFormatter.SerializeValue(SerializerBinaryWriter writer, Object value) +2937

        [ArgumentException: Error serializing value 'Music#2' of type 'FlexiCommerce.Components.Category.']
           System.Web.UI.ObjectStateFormatter.SerializeValue(SerializerBinaryWriter writer, Object value) +3252
           System.Web.UI.ObjectStateFormatter.SerializeValue(SerializerBinaryWriter writer, Object value) +2276

        [ArgumentException: Error serializing value 'System.Object[]' of type 'System.Object[].']
           System.Web.UI.ObjectStateFormatter.SerializeValue(SerializerBinaryWriter writer, Object value) +3252
           System.Web.UI.ObjectStateFormatter.Serialize(Stream outputStream, Object stateGraph) +116
           System.Web.UI.ObjectStateFormatter.Serialize(Object stateGraph) +57
           System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Serialize(Object state) +4
           System.Web.UI.Util.SerializeWithAssert(IStateFormatter formatter, Object stateGraph) +37
           System.Web.UI.HiddenFieldPageStatePersister.Save() +79
           System.Web.UI.Page.SavePageStateToPersistenceMedium(Object state) +108
           System.Web.UI.Page.SaveAllState() +315
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2492
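
    A common way out, sketched with hypothetical names (Category is the entity type from the stack trace; the page class and property names are illustrative, not the actual page code): keep only the entity's identifier in ViewState and reload the entity from the NHibernate session on postback, so no session-bound proxy (and no ContentPersister reference) ever reaches the state formatter.

        using NHibernate;
        using System.Web.UI;

        public partial class CategoryPage : Page
        {
            // Store only the primitive key; the entity itself stays out of ViewState.
            protected int? SelectedCategoryId
            {
                get { return (int?)ViewState["SelectedCategoryId"]; }
                set { ViewState["SelectedCategoryId"] = value; }
            }

            protected Category LoadSelectedCategory(ISession session)
            {
                return SelectedCategoryId.HasValue
                    ? session.Get<Category>(SelectedCategoryId.Value)
                    : null;
            }
        }

    The other usual suspects on a page like this are data-bound controls whose DataSource was set to entity objects; binding IDs or simple DTOs instead keeps the proxies out of the object[] that the trace shows being serialized.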

    Read the article
