Search Results

Search found 6839 results on 274 pages for 'functional tests'.

Page 254 of 274

  • How to play audio in Java Application

    - by user577829
    I'm making a Java application and I need to play audio. I'm playing mainly small sound files of my cannon firing (it's a cannon-shooting game) and the projectiles exploding, though I plan on having looping background music. I have found two different methods to accomplish this, but neither works the way I want. The first method is literally a method:

        public void playSoundFile(File file) {
            // http://java.ittoolbox.com/groups/technical-functional/java-l/sound-in-an-application-90681
            try {
                // get an AudioInputStream
                AudioInputStream ais = AudioSystem.getAudioInputStream(file);
                // get the AudioFormat for the AudioInputStream
                AudioFormat audioformat = ais.getFormat();
                System.out.println("Format: " + audioformat.toString());
                System.out.println("Encoding: " + audioformat.getEncoding());
                System.out.println("SampleRate:" + audioformat.getSampleRate());
                System.out.println("SampleSizeInBits: " + audioformat.getSampleSizeInBits());
                System.out.println("Channels: " + audioformat.getChannels());
                System.out.println("FrameSize: " + audioformat.getFrameSize());
                System.out.println("FrameRate: " + audioformat.getFrameRate());
                System.out.println("BigEndian: " + audioformat.isBigEndian());
                // ULAW/ALAW format to PCM format conversion
                if ((audioformat.getEncoding() == AudioFormat.Encoding.ULAW)
                        || (audioformat.getEncoding() == AudioFormat.Encoding.ALAW)) {
                    AudioFormat newformat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                            audioformat.getSampleRate(), audioformat.getSampleSizeInBits() * 2,
                            audioformat.getChannels(), audioformat.getFrameSize() * 2,
                            audioformat.getFrameRate(), true);
                    ais = AudioSystem.getAudioInputStream(newformat, ais);
                    audioformat = newformat;
                }
                // check for a supported output line
                DataLine.Info datalineinfo = new DataLine.Info(SourceDataLine.class, audioformat);
                if (!AudioSystem.isLineSupported(datalineinfo)) {
                    //System.out.println("Line matching " + datalineinfo + " is not supported.");
                } else {
                    //System.out.println("Line matching " + datalineinfo + " is supported.");
                    // open the sound output line
                    SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(datalineinfo);
                    sourcedataline.open(audioformat);
                    sourcedataline.start();
                    // copy data from the input stream to the output data line
                    int framesizeinbytes = audioformat.getFrameSize();
                    int bufferlengthinframes = sourcedataline.getBufferSize() / 8;
                    int bufferlengthinbytes = bufferlengthinframes * framesizeinbytes;
                    byte[] sounddata = new byte[bufferlengthinbytes];
                    int numberofbytesread = 0;
                    while ((numberofbytesread = ais.read(sounddata)) != -1) {
                        int numberofbytesremaining = numberofbytesread;
                        sourcedataline.write(sounddata, 0, numberofbytesread);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The problem with this is that my entire program stops until the sound file is finished, or at least nearly finished. The second method is this:

        File file = new File("Launch1.wav");
        AudioClip clip;
        try {
            clip = JApplet.newAudioClip(file.toURL());
            clip.play();
        } catch (Exception e) {
            e.getMessage();
        }

    The problem here is that the sound file either ends early or doesn't play at all, depending on where I place the code. Is there any way to play sound without the problems mentioned above? Am I doing something wrong? Any help is greatly appreciated.
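
    A sketch of one way around both problems, assuming the effects stay small (the file names below are placeholders): javax.sound.sampled.Clip loads the whole sound into memory and plays it asynchronously, so the game loop is not blocked, and looping background music comes for free via loop(). Alternatively, the existing SourceDataLine loop could simply be run on its own background Thread.

        import java.io.File;
        import javax.sound.sampled.AudioInputStream;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        public class SoundPlayer {

            // Loads a short sound completely into memory and starts playback
            // without blocking the calling thread.
            public static Clip playClip(File soundFile) throws Exception {
                AudioInputStream ais = AudioSystem.getAudioInputStream(soundFile);
                Clip clip = AudioSystem.getClip();
                clip.open(ais);   // buffers the whole file; fine for small effects
                clip.start();     // returns immediately; playback runs on a mixer thread
                return clip;
            }

            public static void main(String[] args) throws Exception {
                Clip shot = playClip(new File("Launch1.wav"));  // placeholder file names
                Clip music = playClip(new File("music.wav"));
                music.loop(Clip.LOOP_CONTINUOUSLY);             // background music loops
                Thread.sleep(5000);                             // keep the demo JVM alive briefly
                music.stop();
            }
        }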

    Read the article

  • How to cache queries in EJB and return result efficient (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At the beginning I created the native query every time, with a construction like

        Query query = entityManager.createNativeQuery("select * from _table_");

    Of course that is not very efficient; I ran some tests and it really does take a lot of time. Then I found a better way to deal with it: define the native query once with an annotation,

        @NamedNativeQuery(
            name = "fetchData",
            value = "select * from _table_",
            resultClass = Entity.class
        )

    and then just use it:

        Query query = entityManager.createNamedQuery("fetchData");

    This performs about twice as well as what I started with, but still not as well as I expected. Then I found that I can switch to the Hibernate annotation for NamedNativeQuery (JBoss's EJB implementation is based on Hibernate anyway) and add one more thing:

        @NamedNativeQuery(
            name = "fetchData2",
            value = "select * from _table_",
            resultClass = Entity.class,
            readOnly = true
        )

    readOnly marks whether the results are fetched in read-only mode or not. That sounds good, because in my case I don't need to update the data; I just want to fetch it for a report. When I started the server to measure performance, I noticed that the query without readOnly = true (the default is false) returned results faster and faster with each iteration, while the other one (fetchData2) stayed "stable"; the time difference between them kept shrinking, and after about five iterations both ran at almost the same speed.

    The questions are:

    1) Is there any other way to speed the query up? Named queries should be prepared once, but I can't confirm that. Creating the query once and then just reusing it would be better from a performance point of view, but caching that Query object is problematic: after creating the query I set parameters (when I use ":variable" in the query), and that changes the query object, doesn't it? So, is there any way to cache them, or is a named query the best option I have?

    2) Are there any other approaches to make retrieving the results faster? For instance, I don't need those entities to be attached -- I won't update them; all I need is to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up further, but who knows :)

    P.S. I'm not asking about database performance. All I need is to avoid creating the query every time, to use it efficiently, and to "allow" the EJB layer to do less work for the same returned data.
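
    For the pure-reporting case, one thing worth trying (a sketch, not taken from the question: the hint names are Hibernate-specific assumptions and should be checked against the Hibernate version bundled with JBoss 4.2.3): keep the @NamedNativeQuery, whose definition is parsed once at deployment, and add per-execution hints asking Hibernate for read-only, query-cached results.

        // Sketch: reuse the deployed named query; "Entity" is the result class
        // from the question. The hint names are Hibernate-specific (assumption).
        @SuppressWarnings("unchecked")
        public List<Entity> fetchReport(EntityManager entityManager) {
            return entityManager.createNamedQuery("fetchData")
                    .setHint("org.hibernate.readOnly", true)   // skip dirty checking on the results
                    .setHint("org.hibernate.cacheable", true)  // requires the query cache to be enabled
                    .getResultList();
        }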

    Read the article

  • what to do with a flawed C++ skills test

    - by Mike Landis
    In the following gcc.gnu.org post, Nathan Myers says that a C++ skills test at SANS Consulting Services contained three errors in nine questions. Looking around, one of the first on-line C++ skills tests I ran across was http://www.geekinterview.com/question_details/13090. I looked at question 1:

        find(int x,int y) { return ((x<y)?0:(x-y)); }

        call find(a,find(a,b)) use to find
        (a) maximum of a,b
        (b) minimum of a,b
        (c) positive difference of a,b
        (d) sum of a,b

    ...immediately wondering why anyone would write anything so obtuse. Getting past the absurdity, I didn't really like any of the answers. I immediately eliminated (a) and (b) because you can get back zero (which is neither a nor b) in a variety of circumstances. Sum or difference seemed more likely, except that you can also get zero regardless of the magnitudes of a and b. So... I put Matlab to work (code below) and found: when either a or b is negative you get zero; when b > a you get a; otherwise you get b. So the answer is (b), minimum of a and b, if a and b are positive -- though strictly speaking the answer should be "none of the above", because there are no range restrictions on either variable. That forces test takers into a dilemma: choose the best available answer and be wrong in three of four quadrants, or don't answer, leaving the door open to the conclusion that the grader thinks you couldn't figure it out. The solution for test givers is to fix the test, but in the interim, what's the right course of action for test takers? Complain about the questions?

        function z = findfunc(x,y)
          for i=1:length(x)
            if x(i) < y(i)
              z(i) = 0;
            else
              z(i) = x(i) - y(i);
            end
          end
        end

        function [b,d1,z] = plotstuff()
          k = 50;
          a = [-k:1:k];
          b = (2*k+1) * rand(length(a),1) - k;
          d1 = findfunc(a,b);
          z = findfunc(a,d1);
          plot( a, b, 'r.', a, d1, 'g-', a, z, 'b-');
        end

    Read the article

  • Mocking methods that call other methods Still hit database.Can I avoid it?

    - by devnet247
    It has been decided to write some unit tests using Moq etc. It's a lot of legacy C# code (this is beyond my control, so I cannot answer the whys of it). Now, how do you cope with a scenario where you don't want to hit the database but you indirectly still hit it? This is something I put together; it's not the real code, but it gives you an idea. How would you deal with this sort of scenario? Basically, calling a method on a mocked interface still makes a DAL call, because inside that method there are other calls that are not part of that interface. Hope that's clear.

        [TestFixture]
        public class Can_Test_this_legacy_code
        {
            [Test]
            public void Should_be_able_to_mock_login()
            {
                var mock = new Mock<ILoginDal>();
                User user;
                var userName = "Jo";
                var password = "password";
                mock.Setup(x => x.login(It.IsAny<string>(), It.IsAny<string>(), out user));
                var bizLogin = new BizLogin(mock.Object);
                bizLogin.Login(userName, password, out user);
            }
        }

        public class BizLogin
        {
            private readonly ILoginDal _login;
            public BizLogin(ILoginDal login) { _login = login; }

            public void Login(string userName, string password, out User user)
            {
                // Even if I don't want it to, this will call the DAL!!!!!
                var bizPermission = new BizPermission();
                var permissionList = bizPermission.GetPermissions(userName);
                // Method I am actually testing
                _login.login(userName, password, out user);
            }
        }

        public class BizPermission
        {
            public List<Permission> GetPermissions(string userName)
            {
                var dal = new PermissionDal();
                var permissionlist = dal.GetPermissions(userName);
                return permissionlist;
            }
        }

        public class PermissionDal
        {
            public List<Permission> GetPermissions(string userName)
            {
                // I SHOULD NOT BE GETTING HERE!!!!!!
                return new List<Permission>();
            }
        }

        public interface ILoginDal
        {
            void login(string userName, string password, out User user);
        }

        public interface IOtherStuffDal
        {
            List<Permission> GetPermissions();
        }

        public class Permission
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    Any suggestions? Am I missing the obvious? Is this untestable code? Very grateful for any suggestions.
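
    One common way out (a sketch of a possible refactoring, not the real code; it reuses the question's ILoginDal, IOtherStuffDal, User and Permission types): BizLogin news up BizPermission, which news up PermissionDal, so the database call is hard-wired. If the permission lookup is injected through the constructor instead, the test can hand in a second mock and nothing reaches the database.

        // Sketch: inject the permission dependency instead of constructing it in Login().
        public class BizLogin
        {
            private readonly ILoginDal _login;
            private readonly IOtherStuffDal _permissions;

            public BizLogin(ILoginDal login, IOtherStuffDal permissions)
            {
                _login = login;
                _permissions = permissions;
            }

            public void Login(string userName, string password, out User user)
            {
                var permissionList = _permissions.GetPermissions(); // now mockable
                _login.login(userName, password, out user);
            }
        }

        // In the test, both collaborators are Moq mocks, so no DAL code runs:
        //   var loginMock = new Mock<ILoginDal>();
        //   var permMock  = new Mock<IOtherStuffDal>();
        //   permMock.Setup(p => p.GetPermissions()).Returns(new List<Permission>());
        //   var bizLogin  = new BizLogin(loginMock.Object, permMock.Object);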

    Read the article

  • multithreading with database

    - by Darsin
    I am looking out for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading so i will outline my scenario first. This synchronous operation right now is done for one set of data (portfolio) based on the the parameters provided. The (psudeo-code) implementation is given below: public DataSet DoTests(int fundId, DateTime portfolioDate) { // Get test results for the portfolio // Call the database adapter method, which in turn is a stored procedure, // which in turns runs a series of "rule" stored procs and fills a local temp table and returns it back. DataSet resultsDataSet = GetTestResults(fundId, portfolioDate); try { // Do some local processing on the results DoSomeProcessing(resultsDataSet); // Save the results in Test, TestResults and TestAllocations tables in a transaction. // Sets a global transaction which is provided to all the adapter methods called below // It is defined in the Base class StartTransaction("TestTransaction"); // Save Test and get a testId int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction // Update testId in the other tables in the dataset UpdateTestId(resultsDataSet, testId); // Update TestResults UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction // Update TestAllocations UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction // It is defined in the base class CommitTransaction("TestTransaction"); } catch { RollbackTransaction("TestTransaction"); } return resultsDataSet; } Now the requirement is to do it for multiple set of data. One way would be to call the above DoTests() method in a loop and get the data. I would prefer doing it in parallel. But there are certain catches: StartTransaction() method creates a connection (and transaction) every time it is called. All the underlying database tables, procedures are the same for each call of DoTests(). (obviously). Thus my question are: Will using multithreading anyway improve performance? What are the chances of deadlock especially when new TestId's are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocked be handled? Is there any other more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?
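
    One hedged suggestion for running DoTests for several portfolios at once (this assumes the .NET 4 Task Parallel Library is available and that StartTransaction keeps no shared static state; the method and type names are the ones from the pseudo-code above): give each call its own connection and transaction, cap the degree of parallelism so the database isn't flooded, and collect the results afterwards. Whether it actually helps depends on where the time goes -- if the stored procedures are the bottleneck, extra threads mostly add lock contention, and the deadlock questions still have to be answered on the database side (consistent lock ordering, possibly snapshot isolation).

        // Sketch: independent DoTests calls in parallel, results collected thread-safely.
        // Requires: System, System.Collections.Concurrent, System.Collections.Generic,
        //           System.Data, System.Linq, System.Threading.Tasks
        public IList<DataSet> DoTestsInParallel(IEnumerable<Tuple<int, DateTime>> portfolios)
        {
            var results = new ConcurrentBag<DataSet>();
            Parallel.ForEach(
                portfolios,
                new ParallelOptions { MaxDegreeOfParallelism = 4 },   // tune against the DB server
                p => results.Add(DoTests(p.Item1, p.Item2)));         // each call opens its own transaction
            return results.ToList();
        }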

    Read the article

  • No unique bean of type [javax.persistence.EntityManager] is defined

    - by sebajb
    I am using JUnit 4 to test DAO access with Spring (annotations) and JPA (Hibernate). The data source is configured through JNDI (WebLogic) with an Oracle backend. The persistence unit is configured with just a name and a RESOURCE_LOCAL transaction type. The application context file contains the annotation configuration, the JPA configuration, transactions, and the default package and configuration for annotation detection.

    applicationContext.xml:

        <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="workRequest"/>
            <property name="dataSource" ref="dataSource" />
            <property name="jpaVendorAdapter">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                    <property name="databasePlatform" value="${database.target}"/>
                    <property name="showSql" value="${database.showSql}" />
                    <property name="generateDdl" value="${database.generateDdl}" />
                </bean>
            </property>
        </bean>

        <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
            <property name="jndiName">
                <value>workRequest</value>
            </property>
            <property name="jndiEnvironment">
                <props>
                    <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
                    <prop key="java.naming.provider.url">t3://localhost:7001</prop>
                </props>
            </property>
        </bean>

        <bean id="txManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="entityManagerFactory" />
        </bean>

        <bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
        <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />

    JUnit test case:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = { "classpath:applicationContext.xml" })
        public class AssignmentDaoTest {

            private AssignmentDao assignmentDao;

            @Test
            public void readAll() {
                assertNotNull("assignmentDao cannot be null", assignmentDao);
                List assignments = assignmentDao.findAll();
                assertNotNull("There are no assignments yet", assignments);
            }
        }

    Regardless of what changes I make, I get: No unique bean of type [javax.persistence.EntityManager] is defined. Any hint as to what this could be? I am running the tests inside Eclipse.
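
    For reference, this error typically appears when something asks to have an EntityManager autowired by type while the context only defines an EntityManagerFactory. A sketch of the pattern that usually fits exactly this configuration (the DAO implementation and the Assignment entity below are assumptions; only AssignmentDao and the bean names come from the question): let PersistenceAnnotationBeanPostProcessor inject a shared EntityManager through @PersistenceContext instead of wiring an EntityManager bean, and autowire the DAO -- not the EntityManager -- into the test. Note also that, as written, the assignmentDao field in the test has no @Autowired/@Resource annotation, so it would stay null even once the context loads.

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        // Hypothetical DAO implementation; declare it as a bean in
        // applicationContext.xml or pick it up via component scanning.
        public class JpaAssignmentDao implements AssignmentDao {

            // Injected by PersistenceAnnotationBeanPostProcessor from the
            // entityManagerFactory bean; no EntityManager bean is needed.
            @PersistenceContext
            private EntityManager entityManager;

            @SuppressWarnings("unchecked")
            public List<Assignment> findAll() {
                return entityManager.createQuery("select a from Assignment a").getResultList();
            }
        }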

    Read the article

  • Any way to allow classes implementing IEntity and downcast to have operator == comparisons?

    - by George Mauer
    Basically, here's the issue. All entities in my system are identified by their type and their id:

        new Customer() { Id = 1 } == new Customer() { Id = 1 };
        new Customer() { Id = 1 } != new Customer() { Id = 2 };
        new Customer() { Id = 1 } != new Product() { Id = 1 };

    Pretty standard scenario. Since all entities have an Id, I define an interface for all entities:

        public interface IEntity
        {
            int Id { get; set; }
        }

    And to simplify the creation of entities I make

        public abstract class BaseEntity<T> where T : IEntity
        {
            int Id { get; set; }

            public static bool operator ==(BaseEntity<T> e1, BaseEntity<T> e2)
            {
                if (object.ReferenceEquals(null, e1))
                    return false;
                return e1.Equals(e2);
            }

            public static bool operator !=(BaseEntity<T> e1, BaseEntity<T> e2)
            {
                return !(e1 == e2);
            }
        }

    where Customer and Product are something like

        public class Customer : BaseEntity<Customer>, IEntity {}
        public class Product : BaseEntity<Product>, IEntity {}

    I think this is hunky dory. I think all I have to do is override Equals in each entity (if I'm super clever, I can even override it only once in BaseEntity) and everything will work. So now I'm expanding my test coverage and find that it's not quite so simple: first of all, when downcasting to IEntity and using ==, the BaseEntity<T> override is not used. So what's the solution? Is there something else I can do? If not, this is seriously annoying.

    Update: It would seem that there is something wrong with my tests -- or rather with comparing on generics. Check this out:

        [Test]
        public void when_created_manually_non_generic()
        {
            // PASSES!
            var e1 = new Terminal() { Id = 1 };
            var e2 = new Terminal() { Id = 1 };
            Assert.IsTrue(e1 == e2);
        }

        [Test]
        public void when_created_manually_generic()
        {
            // FAILS!
            GenericCompare(new Terminal() { Id = 1 }, new Terminal() { Id = 1 });
        }

        private void GenericCompare<T>(T e1, T e2) where T : class, IEntity
        {
            Assert.IsTrue(e1 == e2);
        }

    What's going on here? This is not as big a problem as I was afraid of, but it's still quite annoying and a completely unintuitive way for the language to behave.

    Update to the update: Ah, I get it -- inside the generic method the comparison implicitly falls back to treating the values as IEntity. I stand by this being unintuitive and potentially problematic for my domain's consumers, as they need to remember that anything happening within a generic method or class needs to be compared with Equals().
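
    A sketch of why this happens and of one way to keep value comparisons inside generic code (Terminal, IEntity and BaseEntity<T> are the question's types; the helper below is illustrative): user-defined operators are bound at compile time, so inside GenericCompare<T> the == between two values known only as "class, IEntity" compiles to a reference comparison -- BaseEntity's operator is never considered. Overriding Equals/GetHashCode and comparing through EqualityComparer<T>.Default (or plain Equals) sidesteps the operator entirely.

        using System.Collections.Generic;

        public static class EntityComparisons
        {
            // Works inside generic code because Equals is virtual, unlike operator ==.
            public static bool SameEntity<T>(T e1, T e2) where T : class, IEntity
            {
                return EqualityComparer<T>.Default.Equals(e1, e2);
            }
        }

        // For "same type, same Id" semantics the entities need a value-based Equals,
        // e.g. once in BaseEntity<T> (sketch):
        //
        //   public override bool Equals(object obj)
        //   {
        //       var other = obj as IEntity;
        //       return other != null && other.GetType() == GetType() && other.Id == Id;
        //   }
        //   public override int GetHashCode() { return Id; }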

    Read the article

  • Commitment to Zend Framework - any arguments against?

    - by Pekka
    I am refurbishing a big CMS that I have been working on for quite a number of years now. The product itself is great, but some components -- the database and translation classes, for example -- need urgent replacing: partly self-made as far back as 2002, grown into a bit of a chaos over time, and they might have trouble surviving a security audit. So I've been looking closely at a number of frameworks (or, more exactly, component libraries, as I do not intend to change the basic structure of the CMS) and ended up liking Zend Framework the best. It offers a solid MVC model but doesn't force you into it, and it offers a lot of professional components that have obviously received a lot of attention. (Did you know there are multiple plurals in Russian, and you can't translate them using a simple ($number == 0) or ($number > 1) switch? I didn't, but Zend_Translate can handle it. Just to illustrate the level of thoroughness the library seems to have been built with.)

    I am now literally at the point of no return, starting to replace key components of the system with the Zend-made ones. I'm not really having second thoughts -- and I am surely not looking to incite a flame war -- but before going on, I would like to step back for a moment and ask whether there is anything speaking against tying a big system closely to Zend Framework. What I like about Zend:

    - As far as I can see, very high quality code
    - Extremely well documented, at least regarding introductions to how things work (I haven't had to use the detailed API documentation yet)
    - Backed by a company that has an interest in seeing the framework prosper
    - Well received in the community, with a considerable user base
    - Employs coding standards I like
    - Comes with a full set of unit tests
    - Feels to me like the right choice -- or at least one of the right choices -- for modern, professional PHP development

    I have been thinking about encapsulating and abstracting ZF's functionality into my own classes to be able to switch frameworks more easily, but have come to the conclusion that this would not be a good idea because:

    - it would be an unnecessary level of abstraction
    - it could cost performance
    - the big advantage of using a framework -- the existence of a developer base that is familiar with its components -- would partly be cancelled out

    Therefore the commitment to ZF would be a deep one. Thus my question: is there anything substantial speaking against committing to the Zend Framework? Do you have insider knowledge of plans at Zend Inc. to go evil in 2011 and make it a closed-source library? Is Zend Inc. run by vampires? Are there conceptual flaws in the code base you start to notice once you've transitioned all your projects to it? Is the appearance of quality code an illusion? Does the code look good but run terribly slowly on anything below my quad-core workstation?

    Read the article

  • weird performance in C++ (VC 2010)

    - by raicuandi
    Hello, I have this loop written in C++, that compiled with MSVC2010 takes a long time to run. (300ms) for (int i=0; i<h; i++) { for (int j=0; j<w; j++) { if (buf[i*w+j] > 0) { const int sy = max(0, i - hr); const int ey = min(h, i + hr + 1); const int sx = max(0, j - hr); const int ex = min(w, j + hr + 1); float val = 0; for (int k=sy; k < ey; k++) { for (int m=sx; m < ex; m++) { val += original[k*w + m] * ds[k - i + hr][m - j + hr]; } } heat_map[i*w + j] = val; } } } It seemed a bit strange to me, so I did some tests then changed a few bits to inline assembly: (specifically, the code that sums "val") for (int i=0; i<h; i++) { for (int j=0; j<w; j++) { if (buf[i*w+j] > 0) { const int sy = max(0, i - hr); const int ey = min(h, i + hr + 1); const int sx = max(0, j - hr); const int ex = min(w, j + hr + 1); __asm { fldz } for (int k=sy; k < ey; k++) { for (int m=sx; m < ex; m++) { float val = original[k*w + m] * ds[k - i + hr][m - j + hr]; __asm { fld val fadd } } } float val1; __asm { fstp val1 } heat_map[i*w + j] = val1; } } } Now it runs in half the time, 150ms. It does exactly the same thing, but why is it twice as quick? In both cases it was run in Release mode with optimizations on. Am I doing anything wrong in my original C++ code?

    Read the article

  • XML serialization options in .NET

    - by Borek
    I'm building a service that returns XML (no SOAP, no Atom, just plain old XML). Say that I have my domain objects already filled with data and just need to transform them to the XML format. What options do I have on .NET? Requirements:

    - The transformation is not 1:1. Say that I have an Address property of type Address with nested properties like Line1, City, Postcode etc. This may need to result in XML like <xaddr city="...">Line1, Postcode</xaddr>, i.e. quite different.
    - Some XML elements/attributes are conditional; for example, if a Customer is under 18, the XML needs to contain some additional information.
    - I only need to serialize the objects to XML; the other direction (XML to objects) is not important.
    - Some technologies, e.g. data contracts, use .NET attributes. Other means of configuration (external XML config, buddy classes etc.) would be a plus.

    Here are the options as I see them at the moment; corrections and additions will be very welcome:

    - String concatenation -- forget it, it was a joke :)
    - LINQ to XML -- complete control, but quite a lot of hand-written code; would need a good suite of unit tests.
    - View engines in ASP.NET MVC (or even Web Forms, theoretically), with the logic in controllers. It's a question how to structure it: I could have a simple rules engine in my controller(s) and one view template per possible output, or put the decision logic directly in the template. Both have upsides and downsides.
    - XML serialization -- I'm not sure about the flexibility here.
    - Data contracts from WCF -- not sure about the flexibility either; plus, would they work in a simple ASP.NET MVC app (a non-WCF service)? Are they a superset of the standard XML serialization now?
    - If it exists, some XML-to-object mapper. The more I think about it, the more I think I'm looking for something like this, but I couldn't find anything appropriate.

    Any comments / other options?
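
    Since LINQ to XML is on the list, here is a minimal sketch of what the non-1:1, conditional mapping could look like with it (the Customer property names beyond Address/Line1/City/Postcode are assumptions for illustration):

        using System.Xml.Linq;

        public static class CustomerXml
        {
            // Builds the custom shape described above: attributes and element text
            // that don't map 1:1 to the object graph, plus a conditional element.
            public static XElement ToXml(Customer c)
            {
                var xml = new XElement("customer",
                    new XAttribute("id", c.Id),
                    new XElement("xaddr",
                        new XAttribute("city", c.Address.City),
                        c.Address.Line1 + ", " + c.Address.Postcode));

                if (c.Age < 18)                                    // conditional content
                    xml.Add(new XElement("guardian", c.GuardianName));

                return xml;
            }
        }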

    Read the article

  • Remove never-run call to templated function, get allocation error on run-time

    - by Narfanator
    First off, I'm a bit at a loss as to how to ask this question, so I'm going to try throwing lots of information at the problem. I went to completely redesign my test project for my experimental core library thingy. I use a lot of template shenanigans in the library. When I removed the "user" code, the tests gave me a memory allocation error. After quite a bit of experimenting, I narrowed it down to this bit of code (out of a couple hundred lines):

        void VOODOO(components::switchBoard &board){
            board.addComponent<using_allegro::keyInputs<'w'> >();
        }

    Fundamentally, what's weirding me out is that it appears that the act of compiling this function (and the template function it then uses, and the template functions those then use...) makes this bug not appear. This code is not being run. Similar code (the same, but for different key values) occurs elsewhere, but is within Boost TDD code. I realize I certainly haven't given enough information for you to solve it for me; I tried, but it more or less spirals into most of the code base. I think I'm mostly looking for "here's what the problem could be", "here's where to look", etc. There's something happening at compile time because of this line, but I don't know enough about that step to begin looking. So: how can a (presumably) compiled, but never actually run, bit of templated code, when removed, cause another part of the code to fail?

    Error:

        Unhandled exception at 0x6fe731ea (msvcr90d.dll) in Switchboard.exe:
        0xC0000005: Access violation reading location 0xcdcdcdc1.

    Call stack:

        operator delete(void *pUserData)
        allocator<class related to key-input callbacks>::deallocate
        vector<same class>::_Insert_n(...)
        vector<same class>::insert(...)
        vector<same class>::push_back(...)

    It looks like maybe the vector isn't valid, because _MyFirst and similar data members are showing values of 0xcdcdcdcd in the debugger. But the vector is a member variable...

    Read the article

  • Wordpress blog with Joomla?

    - by user427902
    I had a WordPress installation in a subfolder (not the root), like http://server/blog/. Then I installed Joomla in the root (http://server/). Everything seems to be working fine on the Joomla side; however, the blog part is messed up. If I browse the homepage of my blog, http://server/blog/, it works like a charm, but when I try to view individual blog pages such as http://server/blog/some_category/some_post, I get a Joomla 404 page. So I was wondering whether it is possible to use both WordPress and Joomla on the same server in the setup I am trying. Let me clarify that I am NOT looking to integrate user logins or other such things; I just want the blog to be functional in a subfolder while I run the Joomla site in the root. What is the correct way to go about it? Can this be solved by any config or .htaccess edits?

    Edit: Here's the .htaccess for Joomla (I can't find any .htaccess for WordPress, though; still looking for it):

        ##
        # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved.
        # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL
        # Joomla! is Free Software
        ##

        #####################################################
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        #####################################################

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        # mod_rewrite in use
        RewriteEngine On

        ########## Begin - Rewrite rules to block out some common exploits
        ## If you experience problems on your site block out the operations listed below
        ## This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        ## Deny access to extension xml files (uncomment out to activate)
        #<Files ~ "\.xml$">
        #Order allow,deny
        #Deny from all
        #Satisfy all
        #</Files>
        ## End of deny access to extension xml files
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked request to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]
        #
        ########## End - Rewrite rules to block out some common exploits

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        # RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        #
        ########## End - Joomla! core SEF Section

    Read the article

  • A standard event messaging system with AJAX?

    - by Gutzofter
    Is there any standards or messaging framework for AJAX? Right now I have a single page that loads content using Ajax. Because I had a complex form for data entry as part of my content, I need to validate certain events that can occur in my form. So after some adjustments driven by my tests: asyncShould("search customer list click", 3, function() { stop(1000); $('#content').show(); var forCustomerList = newCustomerListRequest(); var forShipAndCharge = newShipAndChargeRequest(forCustomerList); forCustomerList.page = '../../vt/' + forCustomerList.page; forShipAndCharge.page = 'helpers/helper.php'; forShipAndCharge.data = { 'action': 'shipAndCharge', 'DB': '11001' }; var originalComplete = forShipAndCharge.complete; forShipAndCharge.complete = function(xhr, status) { originalComplete(xhr, status); ok($('#customer_edit').is(":visible"), 'Shows customer editor'); $('#search').click(); ok($('#customer_list').is(":visible"), 'Shows customer list'); ok($('#customer_edit').is(":hidden"), 'Does not show customer editor'); start(); }; testController.getContent(forShipAndCharge); }); Here is the controller for getting content: getContent: function (request) { $.ajax({ type: 'GET', url: request.page, dataType: 'json', data: request.data, async: request.async, success: request.success, complete: request.complete }); }, And here is the request event: function newShipAndChargeRequest(serverRequest) { var that = { serverRequest: serverRequest, page: 'nodes/orders/sc.php', data: 'customer_id=-1', complete: errorHandler, success: function(msg) { shipAndChargeHandler(msg); initWhenCustomer(that.serverRequest); }, async: true }; return that; } And here is a success handler: function shipAndChargeHandler(msg) { $('.contentContainer').html(msg.html); if (msg.status == 'flash') { flash(msg.flash); } } And on my server side I end up with a JSON structure that looks like this: $message['status'] = 'success'; $message['data'] = array(); $message['flash'] = ''; $message['html'] = ''; echo json_encode($message); So now loading content consists of two parts: HTML, this is the presentation of the form. DATA, this is any data that needs be loaded for the form FLASH, any validation or server errors STATUS tells client what happened on server. My question is: Is this a valid way to handle event messaging on the client-side or am I going down a path of heartache and pain?

    Read the article

  • Casting to derived type problem in C++

    - by GONeale
    Hey there everyone. I am quite new to C++, but have worked with C# for years; however, that is not helping me here! :) My problem: I have an Actor class which Ball and Peg both derive from, in an Objective-C iPhone game I am working on. As I test for collisions, I want to set an instance of Ball and Peg appropriately, depending on the actual runtime type of actorA or actorB. My code that tests this is as follows:

        // Actors that collided
        Actor *actorA = (Actor*) bodyA->GetUserData();
        Actor *actorB = (Actor*) bodyB->GetUserData();

        Ball* ball;
        Peg* peg;

        if (static_cast<Ball*>(actorA)) {        // true
            ball = static_cast<Ball*>(actorA);
        } else if (static_cast<Ball*>(actorB)) {
            ball = static_cast<Ball*>(actorB);
        }

        if (static_cast<Peg*>(actorA)) {         // also true?!
            peg = static_cast<Peg*>(actorA);
        } else if (static_cast<Peg*>(actorB)) {
            peg = static_cast<Peg*>(actorB);
        }

        if (peg != NULL) {
            [peg hitByBall];
        }

    Once ball and peg are set, I then run the hitByBall method (Objective-C). Where my problem really lies is in the casting procedure. Ball casts fine from actorA; the first if (static_cast<>) statement steps in and sets the ball pointer appropriately. The second step is to assign the appropriate type to peg. I know peg should be a Peg type, and I know beforehand that it will be actorB. However, at runtime I was surprised to find that the third if (static_cast<>) statement stepped in and set it -- the one checking whether actorA is a Peg, when we already know actorA is a Ball! Why did it step in there and not in the fourth if statement? The only thing I can assume is that casting works differently from C#: actorA, which is actually of type Ball, derives from Actor, and when static_cast<Peg*>(actorA) is performed it finds that Peg derives from Actor too, so the cast is considered valid? This could all come down to how I have misunderstood the use of static_cast. How can I achieve what I need? :) I'm really uneasy about what feels to me like a long-winded brute-casting attempt here with a ton of ridiculous if statements. I'm sure there is a more elegant way to achieve a simple cast to Peg and cast to Ball depending on the actual type held in actorA and actorB. Hope someone out there can help! :) Thanks a lot.
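
    For reference, a sketch of the distinction being tripped over here (it reuses the question's Actor/Ball/Peg pointers and assumes Actor has at least one virtual function, which dynamic_cast requires): static_cast performs no runtime check at all, so casting an Actor* to Peg* always "succeeds" and yields a non-null pointer regardless of what the object really is, whereas dynamic_cast inspects the object at runtime and returns NULL when the types don't match -- which is exactly the test the code above is trying to perform.

        // Sketch: runtime type tests with dynamic_cast (Actor must be polymorphic,
        // i.e. have at least one virtual function, e.g. a virtual destructor).
        Ball* ball = NULL;
        Peg*  peg  = NULL;

        if ((ball = dynamic_cast<Ball*>(actorA)) == NULL)
            ball = dynamic_cast<Ball*>(actorB);    // stays NULL if actorB isn't a Ball either

        if ((peg = dynamic_cast<Peg*>(actorA)) == NULL)
            peg = dynamic_cast<Peg*>(actorB);

        if (ball != NULL && peg != NULL) {
            [peg hitByBall];                       // Objective-C++ message, as in the question
        }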

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished a prototype implementation of a supervised learning algorithm that automatically assigns categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project.

    I've done this kind of work before, so I know what the functional components of the software need to be. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc.

    So I put together a plan with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long term but that aren't high enough priority to make it into the schedule yet -- is slated for about two months of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project were starting from scratch.)

    My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all their software features as user stories.

    So I created a user story at the top level of the project:

    As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database.

    Or maybe a better top-level story for this feature would be:

    As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database.

    But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine-learning architecture. Case in point: I know that the algorithm requires two major architectural subdivisions, (A) training and (B) classification, and I know that the training portion of the architecture requires construction of a cluster space. All the agile development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". That makes a lot of sense when designing a piece of end-user software: start small, and then incrementally add value when users demand additional functionality. But a cluster space, in and of itself, provides zero business value. Nor does a crawler, or a feature extractor. There's no business value (not for the end user, or for any of the roles internal to the company) in a partial system. A trained cluster space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier.

    I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories:

    As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist.

    But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can easily be divided along architectural component boundaries (crawler, trainer, classifier, etc.), I can't think of any useful decomposition from a user's perspective. What do you think? How do you plan agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • What block is not being tested in my test method? (VS08 Test Framework)

    - by daft
    I have the following code:

        private void SetControlNumbers()
        {
            string controlString = "";
            int numberLength = PersonNummer.Length;
            switch (numberLength)
            {
                case (10) : controlString = PersonNummer.Substring(6, 4); break;
                case (11) : controlString = PersonNummer.Substring(7, 4); break;
                case (12) : controlString = PersonNummer.Substring(8, 4); break;
                case (13) : controlString = PersonNummer.Substring(9, 4); break;
            }
            ControlNumbers = Convert.ToInt32(controlString);
        }

    which is tested using the following test methods:

        [TestMethod()]
        public void SetControlNumbers_Length10()
        {
            string pNummer = "9999999999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length11()
        {
            string pNummer = "999999-9999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length12()
        {
            string pNummer = "199999999999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

        [TestMethod()]
        public void SetControlNumbers_Length13()
        {
            string pNummer = "1999999-9999";
            Personnummer target = new Personnummer(pNummer);
            Assert.AreEqual(9999, target.ControlNumbers);
        }

    For some reason Visual Studio says that I have one block that is not tested, despite showing all the code in the method under test in blue (i.e. the code is covered by my unit tests). Is this because I don't have a default case defined in the switch? When the SetControlNumbers() method is called, the string on which it operates has already been validated and checked so that it conforms to the specification and the various Substring calls in the switch will produce a string containing four characters. I'm just curious as to why it says there is one untested block; I'm no unit test guru at all, so I'd love some feedback on this. Also, how can I improve the conversion after the switch to make it safer, other than adding a try-catch block and checking for FormatException and OverflowException?
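
    On the two closing questions -- the untested block is almost certainly the implicit "no case matched" path of the switch (controlString stays "" and Convert.ToInt32 would then throw), and the coverage tool counts that path even though the input is pre-validated. A sketch of one way to make both the coverage gap and the conversion explicit (names as in the question; the exception messages are illustrative):

        private void SetControlNumbers()
        {
            int numberLength = PersonNummer.Length;
            string controlString;
            switch (numberLength)
            {
                case 10: controlString = PersonNummer.Substring(6, 4); break;
                case 11: controlString = PersonNummer.Substring(7, 4); break;
                case 12: controlString = PersonNummer.Substring(8, 4); break;
                case 13: controlString = PersonNummer.Substring(9, 4); break;
                default:
                    // Input is validated upstream, so this should be unreachable.
                    throw new ArgumentException("Unexpected personnummer length: " + numberLength);
            }

            int parsed;
            if (!int.TryParse(controlString, out parsed))
                throw new FormatException("Control digits are not numeric: " + controlString);
            ControlNumbers = parsed;
        }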

    Read the article

  • TestNG - Factories and Dataproviders

    - by Tim K
    Background story: I'm working at a software firm, developing a test automation framework to replace our old spaghetti-tangled system. Since our system requires a login for almost everything we do, I decided it would be best to use @BeforeMethod, @DataProvider, and @Factory to set up my tests. However, I've run into some issues.

    Sample test case: let's say the software system is a baseball team roster. We want to test that a user can search for a team member by name. (Note: I'm aware that @BeforeMethod methods don't run in any given order; assume that's been taken care of for now.)

        @BeforeMethod
        public void setupSelenium() {
            // login with username & password
            // acknowledge announcements
            // navigate to search page
        }

        @Test(dataProvider = "players")
        public void testSearch(String playerName, String searchTerm) {
            // search for "searchTerm"
            // browse through results
            // pass if we find playerName
            // fail (didn't find the player)
        }

    This test case assumes the following: the user has already logged on (in a @BeforeMethod, most likely); the user has already navigated to the search page (trivial, before method); and the parameters to the test are associated with the aforementioned login.

    The problems: so let's try to figure out how to handle the parameters for the test case.

    Idea #1: this approach lets us associate data providers with usernames, and lets us use multiple users for any specific test case!

        @Test(dataProvider = "players")
        public void testSearch(String user, String pass, String name, String search) {
            // login with user/pass
            // acknowledge announcements
            // navigate to search page
            // ...
        }

    ...but there's lots of repetition, as we have to make EVERY function accept two extra parameters. Not to mention, we're also exercising the acknowledge-announcements feature, which we don't actually want to test.

    Idea #2: so let's use the factory to initialize things properly!

        class BaseTestCase {
            public BaseTestCase(String user, String password, Object[][] data);
        }

        class SomeTest {
            @Factory
            public void ...
        }

    With this, we end up having to write one factory per test case... although it does let us have multiple users per test case.

    Conclusion: I'm about fresh out of ideas. There was another idea where I loaded data from an XML file and then called the methods from a program... but it's getting silly. Any ideas?
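
    A sketch of the usual way to combine @Factory and @DataProvider without writing one factory per test class (the class names, credentials and player data here are invented for illustration): a factory instantiates the test class once per user, the login runs in @BeforeMethod with that instance's credentials, and an ordinary data provider supplies the per-test search data.

        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.DataProvider;
        import org.testng.annotations.Factory;
        import org.testng.annotations.Test;

        // (SearchTestFactory and SearchTest would normally live in separate files.)
        public class SearchTestFactory {
            // One SearchTest instance per user; TestNG runs every @Test on each instance.
            @Factory
            public Object[] createForEachUser() {
                return new Object[] {
                    new SearchTest("coach", "secret"),    // hypothetical credentials
                    new SearchTest("scout", "secret2"),
                };
            }
        }

        class SearchTest {
            private final String user;
            private final String password;

            SearchTest(String user, String password) {
                this.user = user;
                this.password = password;
            }

            @BeforeMethod
            public void logInAndNavigate() {
                // log in as this.user / this.password, acknowledge announcements,
                // navigate to the search page (Selenium calls omitted in this sketch)
            }

            @DataProvider(name = "players")
            public Object[][] players() {
                return new Object[][] {
                    { "Babe Ruth", "Ruth" },
                    { "Lou Gehrig", "Gehrig" },
                };
            }

            @Test(dataProvider = "players")
            public void testSearch(String playerName, String searchTerm) {
                // search for searchTerm and assert that playerName appears in the results
            }
        }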

    Read the article

  • Copy constructor bug

    - by user168715
    I'm writing a simple nD-vector class, but am encountering a strange bug. I've stripped the class down to the bare minimum that still reproduces the bug:

        #include <iostream>
        using namespace std;

        template<unsigned int size>
        class nvector
        {
        public:
            nvector() { data_ = new double[size]; }
            ~nvector() { delete[] data_; }

            template<unsigned int size2>
            nvector(const nvector<size2> &other)
            {
                data_ = new double[size];
                int i = 0;
                for (; i < size && i < size2; i++)
                    data_[i] = other[i];
                for (; i < size; i++)
                    data_[i] = 0;
            }

            double &operator[](int i) { return data_[i]; }
            const double &operator[](int i) const { return data_[i]; }

        private:
            const nvector<size> &operator=(const nvector<size> &other); // intentionally unimplemented for now
            double *data_;
        };

        int main()
        {
            nvector<2> vector2d;
            vector2d[0] = 1;
            vector2d[1] = 2;

            nvector<3> vector3d(vector2d);
            for (int i = 0; i < 3; i++)
                cout << vector3d[i] << " ";
            cout << endl;  // prints 1 2 0

            nvector<3> other3d(vector3d);
            for (int i = 0; i < 3; i++)
                cout << other3d[i] << " ";
            cout << endl;  // prints 1 2 0
        }  // segfault???

    On the surface this seems to work fine, and both tests print the correct values. However, at the end of main the program crashes with a segfault, which I've traced to nvector's destructor. At first I thought the (incorrect) default assignment operator was somehow being called, which is why I added the (currently) unimplemented explicit assignment operator to rule that possibility out. So my copy constructor must be buggy, but I'm having one of those days where I'm staring at extremely simple code and just can't see it. Do you have any ideas?
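
    For what it's worth, the usual culprit in exactly this shape of code: a constructor template is never used as the copy constructor, so nvector<3> other3d(vector3d) (same size on both sides) invokes the compiler-generated copy constructor, which copies the data_ pointer; both objects then delete[] the same array at the end of main. A sketch of the fix -- declare a real copy constructor (and, once assignment is allowed, a matching copy assignment operator):

        // Sketch: add alongside the existing constructors inside nvector<size>.
        // The template constructor still handles nvector<other_size> sources;
        // this non-template one handles same-size copies, which the template never does.
        nvector(const nvector<size> &other)
        {
            data_ = new double[size];
            for (unsigned int i = 0; i < size; i++)
                data_[i] = other.data_[i];
        }

        // And, when assignment is eventually needed, the matching operator:
        //   nvector<size> &operator=(const nvector<size> &other)
        //   {
        //       if (this != &other)
        //           for (unsigned int i = 0; i < size; i++)
        //               data_[i] = other.data_[i];
        //       return *this;
        //   }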

    Read the article

  • Is valgrind crazy or is this is a genuine std map iterator memory leak?

    - by Alberto Toglia
    Well, I'm very new to Valgrind and memory-leak profilers in general. And I must say it is a bit scary when you start using them, because you can't stop wondering how many leaks you might have left unsolved before! To the point: as I'm not an experienced C++ programmer, I would like to check whether this is really a memory leak or whether Valgrind is reporting a false positive.

        typedef std::vector<int> Vector;
        typedef std::vector<Vector> VectorVector;
        typedef std::map<std::string, Vector*> MapVector;
        typedef std::pair<std::string, Vector*> PairVector;
        typedef std::map<std::string, Vector*>::iterator IteratorVector;

        VectorVector vv;
        MapVector m1;
        MapVector m2;

        vv.push_back(Vector());
        m1.insert(PairVector("one", &vv.back()));

        vv.push_back(Vector());
        m2.insert(PairVector("two", &vv.back()));

        IteratorVector i = m1.find("one");
        i->second->push_back(10);
        m2.insert(PairVector("one", i->second));

        m2.clear();
        m1.clear();
        vv.clear();

    Why is that? Shouldn't the clear calls invoke the destructor of every object and every vector? Now, after doing some tests, I found different changes that make the leak go away: 1) deleting the line i->second->push_back(10); 2) adding a delete i->second; after it's been used; 3) deleting the second vv.push_back(Vector()); and m2.insert(PairVector("two", &vv.back())); statements. Using solution 2) makes Valgrind print: 10 allocs, 11 frees. Is that OK? As I'm not using new, why should I delete? Thanks for any help!
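
    A sketch of what is most likely going on (not a Valgrind false positive): the maps store raw pointers to elements of vv, and the second push_back can reallocate vv's storage, which invalidates the pointer saved in m1 -- i->second->push_back(10) then writes through a dangling pointer, and the odd alloc/free counts follow. Reserving capacity up front (or storing indices or values instead of pointers) keeps the addresses stable:

        #include <map>
        #include <string>
        #include <vector>

        int main()
        {
            typedef std::vector<int> Vector;

            std::vector<Vector> vv;
            vv.reserve(2);                    // no reallocation for the first two push_backs,
                                              // so the pointers taken below stay valid
            std::map<std::string, Vector*> m1;
            std::map<std::string, Vector*> m2;

            vv.push_back(Vector());
            m1.insert(std::make_pair(std::string("one"), &vv.back()));

            vv.push_back(Vector());           // would have invalidated &vv[0] without reserve()
            m2.insert(std::make_pair(std::string("two"), &vv.back()));

            m1.find("one")->second->push_back(10);   // now points at live storage

            return 0;                         // the vectors clean up after themselves; nothing to delete
        }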

    Read the article

  • What is GC holes?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#, and memory spikes happen in my server. I used .NET Memory Profiler (a tool) to find where the memory leaks. The profiler indicates that the private heap is huge, and the memory looks roughly like the breakdown below (the numbers are not real; what I want to show is that the "holes" in generation 0 and generation 2 are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1,400,000KB
            Generation #0 - 600,000KB
              Data - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131,072KB
            Large heap - 83KB
            Overhead/unused - 130,989KB
              Overhead - 0KB

    However, what is a GC hole? I read an article about it: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said the code snippet below is the simplest way to introduce a GC hole into the system:

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();

            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);

            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething(aObj, bObj);
        }

    All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably "work." But this code will crash eventually. Why? If the second call to "AllocateObjectMemory" triggers a GC, that GC discards the object instance you just assigned to "aObj". This code, like all C++ code inside the CLR, is compiled by a non-managed compiler, and the GC cannot know that "aObj" holds a root reference to an object you want kept live.

    I can't understand his explanation. Does the sample mean that aObj becomes a wild (dangling) pointer after the GC? Is it like this sequence?

        {
            aObj = malloc(sizeof(object));
            free(aObj);
            function(aObj);
        }

    I hope somebody can explain it.

    Read the article

  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's Loose mocking behaviour that returns default values when no expectations are set. It's convenient and saves me code, and it also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual). However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked. All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that because it puts me in a dilemma about all the other properties and methods in the class that hide call dependencies: if CallBase is true then I have to either Setup stubs for all of the properties and methods that hide dependencies -- Even though my test doesn't think it needs to care about those dependencies, or Hope that I don't forget to Setup any stubs (and that no new dependencies get added to the code in the future) -- Risk unit tests hitting a real dependency. Q: With Moq, is there any way to test a virtual method, when I mocked the class to stub just a few dependencies? I.e. Without resorting to CallBase=true and having to stub all of the dependencies? Example code to illustrate (uses MSTest, InternalsVisibleTo DynamicProxyGenAssembly2) In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails - returns null. public class Foo { public string NonVirtualMethod() { return GetDependencyA(); } public virtual string VirtualMethod() { return GetDependencyA();} internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; } // [... Possibly many other dependencies ...] internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; } } [TestClass] public class UnitTest1 { [TestMethod] public void TestNonVirtualMethod() { var mockFoo = new Mock<Foo>(); mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString); string result = mockFoo.Object.NonVirtualMethod(); Assert.AreEqual(expectedResultString, result); } [TestMethod] public void TestVirtualMethod() // Fails { var mockFoo = new Mock<Foo>(); mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString); // (I don't want to setup GetDependencyB ... GetDependencyN here) string result = mockFoo.Object.VirtualMethod(); Assert.AreEqual(expectedResultString, result); } string expectedResultString = "Hit mock dependency A - OK"; }
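
    One option that does not depend on the whole-mock CallBase flag (a sketch; the Foo type and its members are the ones from the question, everything else is illustrative): when the member under test is itself virtual, exercise it through a small hand-written subclass that overrides only the dependencies it actually reaches, and keep Moq for the cases where loose defaults are wanted. Depending on the Moq version in use, Setup(...).CallBase() on a single member may also be available -- worth checking that version's API before relying on it.

        // Sketch: a hand-written partial stub -- the real VirtualMethod runs,
        // only the dependency it touches is overridden.
        internal class FooWithStubbedDependencyA : Foo
        {
            internal override string GetDependencyA()
            {
                return "Hit mock dependency A - OK";
            }
            // Override further GetDependency* members only if the member under
            // test actually reaches them; anything else still hits real code.
        }

        [TestClass]
        public class FooVirtualMethodTests
        {
            [TestMethod]
            public void TestVirtualMethod_via_subclass()
            {
                var foo = new FooWithStubbedDependencyA();
                Assert.AreEqual("Hit mock dependency A - OK", foo.VirtualMethod());
            }
        }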

    Read the article

  • How can I pipe input to a Java app with Perl?

    - by user319479
    I need to write a Perl script that pipes input into a Java program. This is related to this, but that didn't help me. My issue is that the Java app doesn't get the print statements until I close the handle. What I found online was that $| needs to be set to something greater than 0, in which case newline characters will flush the buffer. This still doesn't work. This is the script: #! /usr/bin/perl -w use strict; use File::Basename; $|=1; open(TP, "| java -jar test.jar") or die "fail"; sleep(2); print TP "this is test 1\n"; print TP "this is test 2\n"; print "tests printed, waiting 5s\n"; sleep(5); print "wait over. closing handle...\n"; close TP; print "closed.\n"; print "sleeping for 5s...\n"; sleep(5); print "script finished!\n"; exit And here is a sample Java app: import java.util.Scanner; public class test{ public static void main( String[] args ){ Scanner sc = new Scanner( System.in ); int crashcount = 0; while( true ){ try{ String input = sc.nextLine(); System.out.println( ":: INPUT: " + input ); if( "bananas".equals(input) ){ break; } } catch( Exception e ){ System.out.println( ":: EXCEPTION: " + e.toString() ); crashcount++; if( crashcount == 5 ){ System.out.println( ":: Looks like stdin is broke" ); break; } } } System.out.println( ":: IT'S OVER!" ); return; } } The Java app should respond to receiving the test prints immediately, but it doesn't until the close statement in the Perl script. What am I doing wrong? Note: the fix can only be in the Perl script. The Java app can't be changed. Also, File::Basename is there because I'm using it in the real script.
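
    The likely explanation, which matches the symptoms: $| only affects the currently selected output handle (STDOUT here), so the TP pipe stays block-buffered and nothing reaches the Java process until the buffer fills or the handle is closed. A sketch of the usual fix, enabling autoflush on the pipe handle itself (only the flushing lines differ from the script above):

        #!/usr/bin/perl -w
        use strict;

        open(TP, "| java -jar test.jar") or die "fail";

        # $| = 1 only unbuffers the currently selected handle (STDOUT).
        # To unbuffer TP itself, select it, set $|, then restore the old handle:
        select((select(TP), $| = 1)[0]);

        # Equivalent, arguably clearer, with a lexical handle:
        #   use IO::Handle;
        #   open(my $tp, '|-', 'java', '-jar', 'test.jar') or die "fail";
        #   $tp->autoflush(1);

        print TP "this is test 1\n";   # now reaches the Java Scanner immediately
        print TP "this is test 2\n";
        sleep(5);
        close TP;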

    Read the article

  • PHP Sockets Errors (connection refused and No such file or directory)

    - by Purefan
    Hello all, I am writing a server app (broadcaster) and a client (relayer). Several relayers can connect to the broadcaster at the same time, send information and the broadcaster will redirect the message to a matching relayer (for example relayer1 sends to broadcaster who sends to relayer43, relayer2 - broadcaster - relayer73...) The server part is working as I have tested it with a telnet client and although its at this point only an echo server it works. Both relayer and broadcaster sit on the same server so I am using AF_UNIX sockets, both files are in different folders though. I have tried two approaches for the relayer and both have failed, the first one is using socket_create: public function __construct() { // where is the socket server? $this->_sHost = 'tcp://127.0.0.1'; $this->_iPort = 11225; // open a client connection $this->_hSocket = socket_create(AF_UNIX, SOCK_STREAM, 0); echo 'Attempting to connect to '.$this->_sHost.' on port '.$this->_iPort .'...'; $result = socket_connect($this->_hSocket, $this->_sHost, $this->_iPort); if ($result === false) { echo "socket_connect() failed.\nReason: ($result) " . socket_strerror(socket_last_error($this->_hSocket)) . "\n"; } else { echo "OK.\n"; } This returns "Warning: socket_connect(): unable to connect [2]: No such file or directory in relayer.class.php on line 27" and (its running from command line) it often also returns a segmentation fault. The second approach is using pfsockopen: public function __construct() { // where is the socket server? $this->_sHost = 'tcp://127.0.0.1'; $this->_iPort = 11225; // open a client connection $fp = pfsockopen ($this->_sHost, $this->_iPort, $errno, $errstr); if (!$fp) { $result = "Error: could not open socket connection"; } else { // get the welcome message fgets ($fp, 1024); // write the user string to the socket fputs ($fp, 'Message ' . __LINE__); // get the result $result .= fgets ($fp, 1024); // close the connection fputs ($fp, "END"); fclose ($fp); // trim the result and remove the starting ? $result = trim($result); $result = substr($result, 2); // now print it to the browser } which only returns the error "Warning: pfsockopen(): unable to connect to tcp://127.0.0.1:11225 (Connection refused) in relayer.class.php on line 33 " In all tests I have tried with different host names, 127.0.0.1, localhost, tcp://127.0.0.1, 192.168.0.199, tcp://192.168.0.199, none of it has worked. Any ideas on this?
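
    One thing that stands out in the first approach (a sketch, assuming the broadcaster really is listening on TCP 127.0.0.1:11225 as the code suggests): socket_create(AF_UNIX, ...) creates a Unix-domain socket, so socket_connect() then expects a filesystem socket path, not a host and port -- hence "No such file or directory". For a TCP connection the family needs to be AF_INET, and the address is a plain IP without the tcp:// prefix (pfsockopen accepts the prefix; socket_connect does not).

        <?php
        // Sketch: TCP client using the sockets extension (AF_INET, not AF_UNIX).
        $host = '127.0.0.1';        // no 'tcp://' scheme here
        $port = 11225;

        $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        if ($socket === false) {
            die('socket_create() failed: ' . socket_strerror(socket_last_error()));
        }

        if (socket_connect($socket, $host, $port) === false) {
            die('socket_connect() failed: ' . socket_strerror(socket_last_error($socket)));
        }

        socket_write($socket, "hello broadcaster\n");
        socket_close($socket);

        // If both ends should really use AF_UNIX instead, both must agree on a
        // socket *path*, e.g. socket_connect($socket, '/tmp/broadcaster.sock');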

    Read the article

  • Counting number of searches

    - by shinjuo
    I am trying to figure out how to get the total number of tests each search makes in this algorithm. I am not sure how I can pass that information back from this algorithm though. I need to count how many times while runs and then pass that number back into an array to be added together and determine the average number of test. main.c #include <stdio.h> #include <stdlib.h> #include <time.h> #include <stdbool.h> #include "percentage.h" #include "sequentialSearch.h" #define searchAmount 100 int main(int argc, char *argv[]) { int numbers[100]; int searches[searchAmount]; int i; int where; int searchSuccess; int searchUnsuccess; int percent; srand(time(NULL)); for (i = 0; i < 100; i++){ numbers[i] = rand() % 200; } for (i = 0; i < searchAmount; i++){ searches[i] = rand() % 200; } searchUnsuccess = 0; searchSuccess = 0; for(i = 0; i < searchAmount; i++){ if(seqSearch(numbers, 100, searches[i], &where)){ searchSuccess++; }else{ searchUnsuccess++; } } percent = percentRate(searchSuccess, searchAmount); printf("Total number of searches: %d\n", searchAmount); printf("Total successful searches: %d\n", searchSuccess); printf("Success Rate: %d%%\n", percent); system("PAUSE"); return 0; } sequentialSearch.h bool seqSearch (int list[], int last, int target, int* locn){ int looker; looker = 0; while(looker < last && target != list[looker]){ looker++; } *locn = looker; return(target == list[looker]); }
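
    A sketch of one way to return the comparison count (keeping the existing signature mostly intact by adding an out-parameter; anything beyond that is illustrative): seqSearch counts each element comparison it makes, and main accumulates the counts to compute the average.

        #include <stdbool.h>
        #include <stdio.h>

        /* Same sequential search, but also reports how many elements were examined. */
        bool seqSearchCounted(int list[], int last, int target, int *locn, int *tests)
        {
            int looker = 0;
            *tests = 0;
            while (looker < last && target != list[looker]) {
                (*tests)++;            /* compared target against list[looker], no match */
                looker++;
            }
            if (looker < last)
                (*tests)++;            /* the final comparison, the one that matched */
            *locn = looker;
            return (looker < last && target == list[looker]);
        }

        /* Usage sketch inside main():
           int tests, where, totalTests = 0;
           for (i = 0; i < searchAmount; i++) {
               seqSearchCounted(numbers, 100, searches[i], &where, &tests);
               totalTests += tests;
           }
           printf("Average tests per search: %.2f\n", (double)totalTests / searchAmount);
        */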

    Read the article

  • What is wrong with this attempt of sending a break-signal?

    - by Jook
    I have quite a headache about this seemingly easy task: send a break signal to my device, like the wxTerm (or any similar Terminal application) does. This signal has to be 125ms long, according to my tests and the devices specification. It should result in a specific response, but what I get is a longer response than expected, and the transmitted date is false. e.g.: what it should respond 08 00 81 00 00 01 07 00 what it does respond 08 01 0A 0C 10 40 40 07 00 7F What really boggles me is, that after I have used wxTerm to look at my available com-ports (without connecting or sending anything), my code starts to work! I can send then as many breaks as I like, I get my response right from then on. I have to reset my PC in order to try it again. What the heck is going on here?! Here is my code for a reset through a break-signal: minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device) : active_(true), io_service_(io_service), serialPort(io_service, device) { if (!serialPort.is_open()) { cerr << "Failed to open serial port\n"; return; } boost::asio::serial_port_base::flow_control FLOW( boost::asio::serial_port_base::flow_control::hardware ); boost::asio::serial_port_base::baud_rate baud_option(baud); serialPort.set_option(FLOW); serialPort.set_option(baud_option); read_start(); std::cout << SetCommBreak(serialPort.native_handle()) << std::endl; std::cout << GetLastError() << std::endl; boost::posix_time::ptime mst1 = boost::posix_time::microsec_clock::local_time(); boost::this_thread::sleep(boost::posix_time::millisec(125)); boost::posix_time::ptime mst2 = boost::posix_time::microsec_clock::local_time(); std::cout << ClearCommBreak(serialPort.native_handle()) << std::endl; std::cout << GetLastError() << std::endl; boost::posix_time::time_duration msdiff = mst2 - mst1; std::cout << msdiff.total_milliseconds() << std::endl; } Edit: It was only necessary to look at the combo-box selection of com-ports of wxTerm - no active connection was needed to be established in order to make my code work. I am guessing, that there is some sort of initialisation missing, which is done, when wxTerm is creating the list for the serial-port combo-box.

    Read the article
