Search Results

Search found 22641 results on 906 pages for 'use case'.


  • C++ variable to const expression

    - by user1344784
    template <typename Real> class A{ }; template <typename Real> class B{ }; //... a few dozen more similar template classes class Computer{ public slots: void setFrom(int from){ from_ = from; } void setTo(int to){ to_ = to; } private: template <int F, int T> void compute(){ using boost::fusion::vector; using boost::fusion::at_c; vector<A<float>, B<float>, ...> v; at_c<from_>(v).operator()(at_c<to_>(v)); //error; needs to be a constant expression. }; This question isn't about Qt, but there is a line of Qt code in my example. setFrom() and setTo() are functions that are called based on user selection via the GUI widget. The root of my problem is that 'from' and 'to' are variables. In my compute member function I need to pick a type (A, B, etc.) based on the values of 'from' and 'to'. The only way I know how to do what I need is to use switch statements, but that's extremely tedious in my case and I would like to avoid it. Is there any way to convert the error line to a constant expression?
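
    A runtime value can never become a constant expression, so the usual workaround is to instantiate compute<F, T> for every legal (from, to) pair at compile time and select the right instantiation at run time. The following is only a sketch of that idea (the struct names are made up, and N stands for the number of wrapped types, assumed here to be 4); it generates the "switch" via template recursion instead of by hand:

      #include <cstdio>
      #include <stdexcept>

      struct Computer {
          template <int F, int T>
          void compute() { std::printf("compute<%d,%d>\n", F, T); }
      };

      const int N = 4; // assumed number of wrapped types (A, B, ...)

      // Walk the 'to' dimension at compile time until it matches the runtime value.
      template <int F, int T = 0>
      struct DispatchTo {
          static void run(Computer& c, int to) {
              if (to == T) c.compute<F, T>();
              else DispatchTo<F, T + 1>::run(c, to);
          }
      };
      template <int F>
      struct DispatchTo<F, N> {
          static void run(Computer&, int) { throw std::out_of_range("to"); }
      };

      // Same idea for the 'from' dimension.
      template <int F = 0>
      struct DispatchFrom {
          static void run(Computer& c, int from, int to) {
              if (from == F) DispatchTo<F>::run(c, to);
              else DispatchFrom<F + 1>::run(c, from, to);
          }
      };
      template <>
      struct DispatchFrom<N> {
          static void run(Computer&, int, int) { throw std::out_of_range("from"); }
      };

      int main() {
          Computer c;
          DispatchFrom<>::run(c, 1, 3); // ends up calling compute<1,3>()
      }

    All N*N instantiations of compute<F, T> are generated, which is exactly the set the fusion vector already requires, so the cost is the same as a hand-written switch without the tedium.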

    Read the article

  • How do you make use of the provider independency in the Entity Framework?

    - by Anders Svensson
    I'm trying to learn more about data access and the Entity Framework. My goal is to have a "provider independent" data access layer (to be able to switch easily e.g. from SQL Server to MySQL or vice versa), and since the EF is supposed to be provider independent it seems like a good way to go. But how do you use this provider independence? I mean, I was expecting to be able to sort of "program to an interface", and then just be able to switch database provider. But as far as I can tell I'm only getting a concrete type to program against, an "Entities" class, e.g. in my case it is: UserDBEntities _context = new UserDBEntities(); What I would have expected to be able to switch provider easily was to have an interface e.g. like IEntities _context = new UserDBEntities(); Sort of like I can do with datasets... But maybe that isn't how it works at all with EF? Or do you just switch provider in the connectionstring, and the model stays the same?? Please remember that I'm a complete newbie at this EF, and rather inexperienced with databases in general, so I would really appreciate if you could be as clear as possible :-)

    Read the article

  • What are the best practices for unit testing properties with code in the setter?

    - by nportelli
    I'm fairly new to unit testing and we are actually attempting to use it on a project. There is a property like this: public TimeSpan CountDown { get { return _countDown; } set { long fraction = value.Ticks % 10000000; value -= TimeSpan.FromTicks(fraction); if(fraction > 5000000) value += TimeSpan.FromSeconds(1); if(_countDown != value) { _countDown = value; NotifyChanged("CountDown"); } } } My test looks like this: [TestMethod] public void CountDownTest_GetSet_PropChangedShouldFire() { ManualRafflePresenter target = new ManualRafflePresenter(); bool fired = false; string name = null; target.PropertyChanged += new PropertyChangedEventHandler((o, a) => { fired = true; name = a.PropertyName; }); TimeSpan expected = new TimeSpan(0, 1, 25); TimeSpan actual; target.CountDown = expected; actual = target.CountDown; Assert.AreEqual(expected, actual); Assert.IsTrue(fired); Assert.AreEqual("CountDown", name); } The question is: how do I test the code in the setter? Do I break it out into a method? If I do, it would probably be private, since no one else needs to use it. But they say not to test private methods. Do I make a class if this is the only case? Would two uses of this code make a class worthwhile? What is wrong with this code from a design standpoint? What is correct?

    Read the article

  • GLSL point inside box test

    - by wcochran
    Below is a GLSL fragment shader that outputs a texel if the given texture coord is inside a box; otherwise a color is output. This just feels silly, and there must be a way to do this without branching? uniform sampler2D texUnit; varying vec4 color; varying vec2 texCoord; void main() { vec4 texel = texture2D(texUnit, texCoord); if (any(lessThan(texCoord, vec2(0.0, 0.0))) || any(greaterThan(texCoord, vec2(1.0, 1.0)))) gl_FragColor = color; else gl_FragColor = texel; } Below is a version without branching, but it still feels clumsy. What is the best practice for "texture coord clamping"? uniform sampler2D texUnit; varying vec4 color; varying vec4 labelColor; varying vec2 texCoord; void main() { vec4 texel = texture2D(texUnit, texCoord); bool outside = any(lessThan(texCoord, vec2(0.0, 0.0))) || any(greaterThan(texCoord, vec2(1.0, 1.0))); gl_FragColor = mix(texel*labelColor, color, vec4(outside,outside,outside,outside)); } I am clamping texels to the region where the label is -- the texture s & t coordinates will be between 0 and 1 in this case. Otherwise, I use a brown color where the label isn't. Note that I could also construct a branching version of the code that does not perform a texture lookup when it doesn't need to. Would this be faster than a non-branching version that always performed a texture lookup? Maybe time for some tests...
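
    A minimal branch-free sketch of the first shader's behavior (same uniforms and varyings; labelColor is left out): step() builds a 0/1 mask per axis from the coordinate test, so no bool-to-vec4 conversion or explicit branch is needed.

      uniform sampler2D texUnit;
      varying vec4 color;
      varying vec2 texCoord;

      void main() {
          vec4 texel = texture2D(texUnit, texCoord);
          vec2 inBox = step(vec2(0.0), texCoord) * step(texCoord, vec2(1.0)); // 1.0 per axis when 0 <= coord <= 1
          float inside = inBox.x * inBox.y;   // 1.0 inside the unit square, 0.0 outside
          gl_FragColor = mix(color, texel, inside);
      }

    As for skipping the lookup inside a branch: on many GPUs the fetch still happens for the whole fragment group unless every fragment in it takes the same path, so the branching version usually only wins when whole screen regions fall outside the box.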

    Read the article

  • How can I make VS2010 insert using statements in the order dictated by StyleCop rules?

    - by Hamish Grubijan
    The related default StyleCop rules are: Place using statements inside namespace. Sort using statements alphabetically. But ... System using come first (still trying to figure out if that means just using System; or using System[.*];). So, my use case: I find a bug and decide that I need to at least add an intelligible Assert to make debugging less painful for the next guy. So I start typing Debug.Assert( and intellisense marks it in Red. I hover mouse over Debug and between using System.Diagnostics; and System.Diagnostics.Debug I choose the former. This inserts using System.Diagnostics; after all other using statements. It would be nice if VS2010 did not assist me in writing code that won't build due to warnings as errors. How can I make VS2010 smarter? Is there some sort of setting, or does this require a full-fledged add-in of some sort?

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of a links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into a repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts) is checked into the repository. Ultimately, this means that, when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up-to-date, but is also run locally from every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask - does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio? Thanks for any help and additional insight. Always appreciated.

    Read the article

  • A way to use log4j to pass values like java -DmyEnvVar=A_VALUE to my code

    - by raticulin
    I need to pass some value to enable certain code in my app (in this case it is to optionally enable writing some stats to a file under certain conditions, but it might be anything generally). My Java app is installed as a service. So every way I have thought of has some drawbacks: Add another param to main(): cumbersome, as customers already have the tool installed, and the command line would need to be changed every time. Adding java -DmyEnvVar=A_VALUE to my command line: same as above. Set an environment variable: the service should at least be restarted, and even then you must take care of which user the service is running under, etc. Adding the property to the config file: I prefer not to have this visible in the config file so the user does not see it; it is something for debugging, etc. So I thought maybe there is some way (or hack) to use log4j loggers to pass that value to my code. I have thought of one way already, although it is very limited: Add a dummy class to my codebase com.dummy.DevOptions public class DevOptions { public static final Logger logger = Logger.getLogger(DevOptions.class); In my code, use it like this: if (DevOptions.logger.isInfoEnabled()){ //do my optional stuff } //... if (DevOptions.logger.isDebugEnabled()){ //do other stuff } This allows me to discriminate among various values, and I could increase the number by adding more loggers to DevOptions. But I wonder whether there is a cleaner way, possibly by configuring the loggers only in log4j.xml?
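
    A sketch of the same idea without the dummy class (the logger name "com.dummy.devoptions.stats" is made up): hang each debug switch on a named logger and flip it purely from log4j.xml by adding a <logger> element for that name with level DEBUG.

      import org.apache.log4j.Logger;

      public class StatsWriter {
          // A named logger used purely as a feature switch; it never needs to log anything.
          private static final Logger STATS_SWITCH = Logger.getLogger("com.dummy.devoptions.stats");

          public void maybeWriteStats() {
              if (STATS_SWITCH.isDebugEnabled()) {
                  // optional stats-to-file code goes here
              }
          }
      }

    One logger name per debug feature scales the same way as adding more loggers to DevOptions, but keeps everything configurable from log4j.xml alone and invisible to users who never add that <logger> entry.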

    Read the article

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple~ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Out of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as: address; address + port; address + port + path; address + path; address + action; address + path + action; address + port + action; address + port + path + action; address + action + params; address + path + action + params; address + port + action + params; address + port + path + action + params; and so forth. The basic approach for this is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method, and testing the outcome against the expected output. However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this? (rant) I only now realize that I learned to write unit tests a few years ago, but never really (feel like I) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, yanno. Advice on that matter is always welcome. (/rant) (note) Christmas weekend approaching, probably won't reply to suggestions until next week. (/note)
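
    One common answer is a parameterized (data-driven) test: one test method, one table row per permutation. A minimal JUnit 4 sketch, where UrlBuilder.build(...) and all the expected strings are stand-ins for the poster's actual method and outputs:

      import static org.junit.Assert.assertEquals;

      import java.util.Arrays;
      import java.util.Collection;

      import org.junit.Test;
      import org.junit.runner.RunWith;
      import org.junit.runners.Parameterized;

      @RunWith(Parameterized.class)
      public class UrlBuilderTest {

          @Parameterized.Parameters
          public static Collection<Object[]> permutations() {
              return Arrays.asList(new Object[][] {
                  // port,  path,    action, expected (example values only)
                  { null,   null,    null,   "http://host"                     },
                  { "8080", null,    null,   "http://host:8080"                },
                  { null,   "/path", null,   "http://host/path"                },
                  { "8080", "/path", "go",   "http://host:8080/path?action=go" },
                  // ...one row per remaining permutation
              });
          }

          private final String port, path, action, expected;

          public UrlBuilderTest(String port, String path, String action, String expected) {
              this.port = port; this.path = path; this.action = action; this.expected = expected;
          }

          @Test
          public void buildsExpectedUrl() {
              assertEquals(expected, UrlBuilder.build("host", port, path, action));
          }
      }

    The arrange/act/assert boilerplate is written once and adding a permutation is one more row; NUnit's TestCase and pytest's parametrize offer the same pattern in other ecosystems.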

    Read the article

  • How to get the real, actual duration of an MP3 file (VBR or CBR) server-side

    - by Cummander Checkov
    I used to calculate the duration of MP3 files server-side using ffmpeg - which seemed to work fine. Today I discovered that some of the calculations were wrong. Somehow, for some reason, ffmpeg will miscalculate the duration, and it seems to happen with variable bit rate mp3 files only. When testing this locally, I noticed that ffmpeg printed two extra lines in green. Command used: ffmpeg -i song_9747c077aef8.mp3 ffmpeg says: [mp3 @ 0x102052600] max_analyze_duration 5000000 reached at 5015510 [mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate After a nice, warm Google session, I found some posts on this, but no solution was found. I then tried to increase the maximum duration: ffmpeg -analyzeduration 999999999 -i song_9747c077aef8.mp3 After this, ffmpeg returned only the second line: [mp3 @ 0x102052600] Estimating duration from bitrate, this may be inaccurate But in either case, the calculated duration was just plain wrong. Comparing it to VLC, I noticed that there the duration is correct. After more research I stumbled over mp3info - which I installed and used. mp3info -p "%S" song_9747c077aef8.mp3 mp3info then returned the CORRECT duration, but only as an integer, which I cannot use as I need a more accurate number here. The reason for this was explained in a comment below by user blahdiblah - mp3info is simply pulling ID3 info from the file and not actually performing any calculations. I also tried using mplayer to retrieve the duration, but just like ffmpeg, mplayer returns the wrong value. Now I have run out of options. If somebody knows how to get around this, any hints, tips, guides or corrections are welcome! Thank you!
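
    One way around the estimate, assuming ffmpeg is already available server-side, is to decode the whole file to the null muxer and read the final time= value from the progress output; that figure comes from actually decoding every frame rather than from the bitrate estimate, at the cost of a full decode:

      # Decode the file completely; the last progress line holds the real duration.
      ffmpeg -i song_9747c077aef8.mp3 -f null - 2>&1 | grep -o 'time=[0-9:.]*' | tail -n 1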

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result? Especially when using different versions of .NET and running on different platforms (x86 vs x86_64). Inaccuracies of floating point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU / SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case... Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which heavily depends on fast floating point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input. Not options are: decimals (too slow); fixed-point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...); updating state across the network like an FPS (sending position information for hundreds or a few thousand units is not an option). Any ideas?

    Read the article

  • How to build a dynamically resizable Flash player

    - by Leon
    Morning stackers! So my question today isn't dealing with any code, but how to go about this the correct way from the start. I have a video player built to a static size (max: 800x600) which I'll have to re-code every time I need it to be a different size. What I need it to do in the near future is dynamically resize itself and all the elements inside of it based on one width variable that it will receive either from HTML or XML. Now to me there are two ways to go about this: Start with the smallest size possible and resize upwards, but I'm unsure of how the Flash movie will actually expand upwards as of right now. Or 2, start with the largest size possible (in this case 800x600) and size everything down. Step 1, I think, seems to be the better way to go about this (a la YouTube), but Step 2 also seems like it could be the easier way? A friend of mine mentioned that I should go with the larger size and have elements resize in each class, then fix to the upper left hand corner. However, for the player to fit inside certain div columns on sites, blogs, whatever, he said that there will have to be an HTML/CSS side of this... meaning that the div containing the resized Flash player will have to cover up the areas of the Flash movie that are not to be shown? Is it possible to put an 800x600 Flash movie into a div that is smaller than 800 pixels wide? And cover it up with another div? Anyways, my mission is to be able to have a dynamically sized player like this: Thoughts? Recommendations? Best practices for this before I start?

    Read the article

  • SharePoint Lists.asmx's UpdateListItems() returns too much data

    - by Philipp Schmid
    Is there a way to prevent the UpdateListItems() web service call in SharePoint's Lists.asmx endpoint from returning all of the fields of the newly created or updated list item? In our case an event handler attached to our custom list is adding some rather large field values which are turned to the client unnecessarily. Is there a way to tell it to only return the ID of the newly created (or updated) list item? For example, currently the web service returns something like this: <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/"> <Result ID="1,Update"> <ErrorCode>0x00000000</ErrorCode> <z:row ows_ID="4" ows_Title="Title" ows_Modified="2003-06-19 20:31:21" ows_Created="2003-06-18 10:15:58" ows_Author="3;#User1_Display_Name" ows_Editor="7;#User2_Display_Name" ows_owshiddenversion="3" ows_Attachments="-1" ows__ModerationStatus="0" ows_LinkTitleNoMenu="Title" ows_LinkTitle="Title" ows_SelectTitle="4" ows_Order="400.000000000000" ows_GUID="{4962F024-BBA5-4A0B-9EC1-641B731ABFED}" ows_DateColumn="2003-09-04 00:00:00" ows_NumberColumn="791.00000000000000" xmlns:z="#RowsetSchema" /> </Result> ... </Results> where as I am looking for a trimmed response only containing for example the ows_ID attribute: <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/"> <Result ID="1,Update"> <ErrorCode>0x00000000</ErrorCode> <z:row ows_ID="4" /> </Result> ... </Results> I have unsuccessfully looked for a resource that documents all of the valid attributes for both the <Batch> and <Method> tags in he updates XmlNode parameter of UpdateListItems() in the hope that I will find a way to specify the fields to return. A solution for WSS 3.0 would be preferable over an SP 2010-only solution.

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi All, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that: this data can be created and edited on demand; this data is always backed up and is simple to restore in case the master copy becomes unusable; all sensitive personal information residing in this database is guaranteed to be available only to authorized users. This database won't be online in the first 6 months. It will go online only after a website is built on top of it. I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is, are there any less costly alternatives to Amazon's RDS?

    Read the article

  • Fortran pointer as an argument to interface procedure

    - by icarusthecow
    Im trying to use interfaces to call different subroutines with different types, however, it doesnt seem to work when i use the pointer attribute. for example, take this sample code MODULE ptr_types TYPE, abstract :: parent INTEGER :: q END TYPE TYPE, extends(parent) :: child INTEGER :: m END TYPE INTERFACE ptr_interface MODULE PROCEDURE do_something END INTERFACE CONTAINS SUBROUTINE do_something(atype) CLASS(parent), POINTER :: atype ! code determines that this allocation is correct from input ALLOCATE(child::atype) WRITE (*,*) atype%q END SUBROUTINE END MODULE PROGRAM testpass USE ptr_types CLASS(child), POINTER :: ctype CALL ptr_interface(ctype) END PROGRAM This gives error Error: There is no specific subroutine for the generic 'ptr_interface' at (1) however if i remove the pointer attribute in the subroutine it compiles fine. Now, normally this wouldnt be a problem, but for my use case i need to be able to treat that argument as a pointer, mainly so i can allocate it if necessary. Any suggestions? Mind you I'm new to fortran so I may have missed something edit: forgot to put the allocation in the parents subroutine, the initial input is unallocated EDIT 2 this is my second attempt, with caller side casting MODULE ptr_types TYPE, abstract :: parent INTEGER :: q END TYPE TYPE, extends(parent) :: child INTEGER :: m END TYPE TYPE, extends(parent) :: second INTEGER :: meow END TYPE CONTAINS SUBROUTINE do_something(this, type_num) CLASS(parent), POINTER :: this INTEGER type_num IF (type_num == 0) THEN ALLOCATE (child::this) ELSE IF (type_num == 1) THEN ALLOCATE (second::this) ENDIF END SUBROUTINE END MODULE PROGRAM testpass USE ptr_types CLASS(child), POINTER :: ctype SELECT TYPE(ctype) CLASS is (parent) CALL do_something(ctype, 0) END SELECT WRITE (*,*) ctype%q END PROGRAM however this still fails. in the select statement it complains that parent must extend child. Im sure this is due to restrictions when dealing with the pointer attribute, for type safety, however, im looking for a way to convert a pointer into its parent type for generic allocation. Rather than have to write separate allocation functions for every type and hope they dont collide in an interface or something. hopefully this example will illustrate a little more clearly what im trying to achieve, if you know a better way let me know

    Read the article

  • Strange behaviour of Spring transaction support for JPA + Hibernate + @Transactional annotation

    - by abovesun
    I found really strange behavior in a relatively simple use case; probably I can't understand it because of my shallow knowledge of Spring's @Transactional nature, but it is quite interesting. I have a simple User DAO that extends Spring's JpaDaoSupport class and contains a standard save method: @Transactional public User save(User user) { getJpaTemplate().persist(user); return user; } It was working fine until I added a new method to the same class: User getSuperUser(). This method should return the user with isAdmin == true, and if there is no super user in the db, the method should create one. Here is what it looked like: public User createSuperUser() { User admin = null; try { admin = (User) getJpaTemplate().execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { return em.createQuery("select u from UserImpl u where u.admin = true").getSingleResult(); } }); } catch (EmptyResultDataAccessException ex) { admin = new User("login", "password"); admin.setAdmin(true); save(admin); // THIS IS THE POINT WHERE STRANGE THING COMING OUT } return admin; } As you can see the code is straightforward, and I was very confused when I found out that no transaction was created and committed on invocation of the save(admin) method, and no new user was actually created, despite the @Transactional annotation. So we have this situation: when the save() method is invoked from outside the UserDAO class, the @Transactional annotation is honored and the user is successfully created; but if save() is invoked from inside another method of the same DAO class, the @Transactional annotation is ignored. Here is how I changed the save() method to force it to always create a transaction: public User save(User user) { getJpaTemplate().execute(new JpaCallback() { public Object doInJpa(EntityManager em) throws PersistenceException { em.getTransaction().begin(); em.persist(user); em.getTransaction().commit(); return null; } }); return user; } As you can see, I manually invoke begin and commit. Any ideas?
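
    This is the classic self-invocation limitation of Spring's proxy-based AOP: save(admin) called from createSuperUser() goes through this, not through the transactional proxy, so no transaction advice runs. Instead of driving the EntityManager transaction by hand, one common option is to run the inner work through a TransactionTemplate. A minimal sketch (the findSuperUserOrNull and persist helpers stand in for the JpaTemplate code above, and the wiring is assumed):

      import org.springframework.transaction.PlatformTransactionManager;
      import org.springframework.transaction.TransactionStatus;
      import org.springframework.transaction.support.TransactionCallback;
      import org.springframework.transaction.support.TransactionTemplate;

      public class UserDao {

          private final TransactionTemplate txTemplate;

          public UserDao(PlatformTransactionManager transactionManager) {
              this.txTemplate = new TransactionTemplate(transactionManager);
          }

          public User createSuperUser() {
              // Runs inside a programmatic transaction even when called from
              // another method of this same class (no proxy involved).
              return txTemplate.execute(new TransactionCallback<User>() {
                  public User doInTransaction(TransactionStatus status) {
                      User admin = findSuperUserOrNull();   // stand-in for the query above
                      if (admin == null) {
                          admin = new User("login", "password");
                          admin.setAdmin(true);
                          persist(admin);                   // stand-in for getJpaTemplate().persist(...)
                      }
                      return admin;
                  }
              });
          }

          private User findSuperUserOrNull() { return null; } // placeholder
          private void persist(User user) { }                 // placeholder
      }

    The other usual fix is to make the call go through the proxy anyway: move createSuperUser() to a different bean, or inject the DAO into itself and call save(admin) on that injected reference, so the @Transactional annotation is honored.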

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source.) Setup (all on OS X): On machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build it like this tar -cZf mystuff.tgz /My/Stuff/Goes/Here/ On machine two, I scp the tgz file to my local directory, then unpack it. tar -xZf mystuff.tgz It creates ~scott/My/Stuff/Goes/, but then under Goes, I see two files: Here/ - a directory, Here.bGd - a zero byte file. The "Here.bGd" zero-byte file has a random 3-character suffix, mixed upper and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command. It only appears at the lowest level directory named. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents on the files using tar tZvf: toc does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompress machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.

    Read the article

  • Python recursion limit exceeded in a tree implementation

    - by user3698027
    I'm implementing a tree dynamically in Python. I have defined a class like this... class nodeobject(): def __init__(self,presentnode=None,parent=None): self.currentNode = presentnode self.parentNode = parent self.childs = [] I have a function which gets possible childs for every node from a pool def findchildren(node, childs): # No need to write the whole function on how it gets childs Now I have a recursive function that starts with the head node (no parent) and moves down the chain recursively for every node (base case being the last node having no children) def tree(dad,children): for child in children: childobject = nodeobject(child,dad) dad.childs.append(childobject) newchilds = findchildren(child, children) if len(newchilds) == 0: lastchild = nodeobject(newchilds,childobject) childobject.childs.append(lastchild) loopchild = copy.deepcopy(lastchild) while loopchild.parentNode != None: print "last child" else: tree(childobject,newchilds) The tree formation works for a certain number of inputs only. Once the pool gets bigger, it results in "maximum recursion depth exceeded". I have tried raising the recursion limit with sys.setrecursionlimit() and it doesn't work; the program crashes. I want to replace the recursion with an explicit stack; can someone please help? I have gotten nowhere even after trying for a long time. Also, is there any other way to fix this other than a stack?
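
    An explicit stack replaces the call stack entirely, so the depth limit never applies. A minimal sketch of the same traversal written iteratively (it reuses the poster's nodeobject and findchildren, assumes findchildren returns the possible children of a node, and leaves out the lastchild/deepcopy branch for clarity):

      def build_tree(root_value, children):
          """Iterative version of tree(): a work stack of (parent_node, child_values) pairs."""
          root = nodeobject(root_value, None)
          stack = [(root, children)]          # each entry: node to expand + its candidate children
          while stack:
              dad, kids = stack.pop()
              for child in kids:
                  childobject = nodeobject(child, dad)
                  dad.childs.append(childobject)
                  newchilds = findchildren(child, kids)
                  if newchilds:               # only descend when there is something left to expand
                      stack.append((childobject, newchilds))
          return root

    As for the alternative: sys.setrecursionlimit() only postpones the crash (and risks overflowing the interpreter's C stack), so for trees of unbounded depth an explicit stack, bounded only by memory, is the usual answer.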

    Read the article

  • boost thread pool

    - by Dtag
    I need a threadpool for my application, and I'd like to rely on standard (C++11 or boost) stuff as much as possible. I realize there is an unofficial(!) boost thread pool class, which basically solves what I need, however I'd rather avoid it because it is not in the boost library itself -- why is it still not in the core library after so many years? In some posts on this page and elsewhere, people suggested using boost::asio to achieve a threadpool like behavior. At first sight, that looked like what I wanted to do, however I found out that all implementations I have seen have no means to join on the currently active tasks, which makes it useless for my application. To perform a join, they send stop signal to all the threads and subsequently join them. However, that completely nullifies the advantage of threadpools in my use case, because that makes new tasks require the creation of a new thread. What I want to do is: ThreadPool pool(4); for (...) { for (int i=0;i<something;i++) pool.pushTask(...); pool.join(); // do something with the results } Can anyone suggest a solution (except for using the existing unofficial thread pool on sourceforge)? Is there anything in C++11 or core boost that can help me here? Thanks a lot
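
    A minimal C++11 sketch of the interface described above, with the assumed semantics that join() blocks until every task pushed so far has finished while the worker threads stay alive for the next batch; it uses only std::thread, a mutex and two condition variables, no boost::asio:

      #include <condition_variable>
      #include <cstdio>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      class ThreadPool {
      public:
          explicit ThreadPool(std::size_t n) {
              for (std::size_t i = 0; i < n; ++i)
                  workers_.emplace_back([this] { workLoop(); });
          }

          ~ThreadPool() {
              {
                  std::lock_guard<std::mutex> lock(m_);
                  stopping_ = true;
              }
              cv_.notify_all();
              for (auto& t : workers_) t.join();
          }

          void pushTask(std::function<void()> task) {
              {
                  std::lock_guard<std::mutex> lock(m_);
                  tasks_.push(std::move(task));
                  ++pending_;
              }
              cv_.notify_one();
          }

          // Wait until every task pushed so far has completed; threads are reused afterwards.
          void join() {
              std::unique_lock<std::mutex> lock(m_);
              idle_.wait(lock, [this] { return pending_ == 0; });
          }

      private:
          void workLoop() {
              for (;;) {
                  std::function<void()> task;
                  {
                      std::unique_lock<std::mutex> lock(m_);
                      cv_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                      if (stopping_ && tasks_.empty()) return;
                      task = std::move(tasks_.front());
                      tasks_.pop();
                  }
                  task();
                  {
                      std::lock_guard<std::mutex> lock(m_);
                      if (--pending_ == 0) idle_.notify_all();
                  }
              }
          }

          std::vector<std::thread> workers_;
          std::queue<std::function<void()>> tasks_;
          std::mutex m_;
          std::condition_variable cv_, idle_;
          std::size_t pending_ = 0;
          bool stopping_ = false;
      };

      int main() {
          ThreadPool pool(4);
          for (int round = 0; round < 3; ++round) {
              for (int i = 0; i < 8; ++i)
                  pool.pushTask([round, i] { std::printf("round %d task %d\n", round, i); });
              pool.join();   // wait for this batch, then reuse the same threads
          }
      }

    Nothing here is boost-specific, so it avoids both the unofficial pool class and the stop-and-recreate pattern of the asio-based examples.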

    Read the article

  • Legacy application creates dialogs on a non-UI thread

    - by Frater
    I've been working in support for a while on a legacy application and I've noticed a bit of a problem. The system is an incredibly complex client/server application with standard and custom frameworks. One of the custom frameworks built into the application involves validating workflow actions. It finds potential errors, separates them into warnings and errors, and passes the results back to the client. The main difference between warnings and errors is that warnings ask the user if they wish to ignore the error. The issue I have is that the dialog for this prompt is created on a non-UI thread, and thus we get cross-threading issues when the dialog is shown. I have attempted to invoke the showing of the dialog, however this fails because the window handle has not been created. (InvokeRequired returns false, which I assume in this case means it cannot find a decent handle in its parent tree, rather than that it doesn't require it.) Does anyone have any suggestions for how I can create this dialog and get the UI thread to set it up and call it?

    Read the article

  • Python: why can't descriptors be instance variables?

    - by Continuation
    Say I define this descriptor: class MyDescriptor(object): def __get__(self, instance, owner): return self._value def __set__(self, instance, value): self._value = value def __delete__(self, instance): del(self._value) And I use it in this: class MyClass1(object): value = MyDescriptor() >>> m1 = MyClass1() >>> m1.value = 1 >>> m2 = MyClass1() >>> m2.value = 2 >>> m1.value 2 So value is a class attribute and is shared by all instances. Now if I define this: class MyClass2(object): value = 1 >>> y1 = MyClass2() >>> y1.value=1 >>> y2 = MyClass2() >>> y2.value=2 >>> y1.value 1 In this case value is an instance attribute and is not shared by the instances. Why is it that when value is a descriptor it can only be a class attribute, but when value is a simple integer it becomes an instance attribute?
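
    Descriptors only take effect when they live on the class, because attribute lookup consults type(obj) for __get__/__set__; the usual fix is to keep the descriptor as a class attribute but store its data per instance. A minimal sketch (the '_value' key in the instance dict is just a made-up convention):

      class MyDescriptor(object):
          """Lives on the class, but reads and writes each instance's own storage."""

          def __get__(self, instance, owner):
              if instance is None:              # accessed on the class itself
                  return self
              return instance.__dict__['_value']

          def __set__(self, instance, value):
              instance.__dict__['_value'] = value

          def __delete__(self, instance):
              del instance.__dict__['_value']


      class MyClass1(object):
          value = MyDescriptor()


      m1, m2 = MyClass1(), MyClass1()
      m1.value, m2.value = 1, 2
      assert (m1.value, m2.value) == (1, 2)     # no longer shared between instances

    The original version shared state because self inside the descriptor is the single MyDescriptor object attached to the class, so every instance read and wrote the same _value.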

    Read the article

  • How to "pin" C++/CLI pointers

    - by Kumar
    I am wrapping up a class which reads a custom binary data file and makes the data available to a .NET/C# class. However, a couple of lines down the code, I start getting a memory access violation error, which I believe is due to the GC moving memory around; the class is managed. Here's the code: if ( ! reader.OpenFile(...) ) return ; foreach(string fieldName in fields) { int colIndex = reader.GetColIndex( fieldName ); int colType = reader.GetColType( colIndex ); // error is raised here on 2nd iteration } for ( int r = 0 ; r < reader.NumFields(); r++ ) { foreach(string fieldName in fields) { int colIndex = reader.GetColIndex( fieldName ); int colType = reader.GetColType( colIndex ); // error is raised here on 2nd iteration switch ( colType ) { case 0 : // INT processField( r, fieldName, reader.GetInt(r,colIndex) ); break ; .... } } } .... I've looked at interior_ptr and pin_ptr, but they give error C3160 ("cannot be in a managed class"). Any workaround? BTW, this is my first C++ program in a very long time!
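
    Error C3160 is about where the pin_ptr is declared rather than about pinning as such: pin_ptr cannot be a field of a ref class, but it can be a local variable, which pins the managed buffer just for the duration of the call into native code. A hypothetical sketch (Reader, buffer and native_read_column are made-up names, not the poster's API):

      // Hypothetical native routine that works on raw bytes.
      static int native_read_column(unsigned char* data, int length, int colIndex) {
          return data[colIndex % length];   // placeholder logic
      }

      public ref class Reader {
          array<System::Byte>^ buffer;
      public:
          Reader() { buffer = gcnew array<System::Byte>(1024); }

          int GetColType(int colIndex) {
              // pin_ptr as a local: the GC cannot move 'buffer' while 'pinned' is in scope.
              pin_ptr<System::Byte> pinned = &buffer[0];
              return native_read_column(pinned, buffer->Length, colIndex);
          }
      };

    If the native reader keeps pointers into managed memory between calls, pinning per call is not enough; in that case copy the data into native memory up front, or keep the managed object reachable from native code with gcroot and re-pin on each call.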

    Read the article

  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floading point operation) error after about 1000 iterations in the following lines: neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda Why is this error? I can't find the mistake in my code :( Can you help me? Thank you very much in advance! unit uNeuronalesNetz; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math; const NEURONS_INPUT = 43; // number of neurons in the input layer NEURONS_HIDDEN = 60; // number of neurons in the hidden layer NEURONS_OUTPUT = 1; // number of neurons in the output layer NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network MAX_TIMESTEPS = 42; // maximum number of timesteps possible (after 42 moves: board is full) LEARNING_RATE_INPUT = 0.25; // in ideal case: decrease gradually in course of training LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training GAMMA = 0.9; LAMBDA = 0.7; // decay parameter for eligibility traces type TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork private // GENERAL SETTINGS START learningMode: Boolean; // does the network learn and change its weights? // GENERAL SETTINGS END // NETWORK CONFIGURATION START neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // array of all input neurons (their values) for every timestep neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // array of all hidden neurons (their values) neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // array of output neurons (their values) weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // array of weights: input->hidden weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of weights: hidden->output // NETWORK CONFIGURATION END // LEARNING SETTINGS START outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before) eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of eligibility traces: hidden layer eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // array of eligibility traces: output layer reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron t: Byte; // current timestep cyclesTrained: Integer; // number of cycles trained so far (learning rates could be decreased accordingly) last50errors: Array[1..50] of Extended; // LEARNING SETTINGS END public constructor Create; // create the network object and do the initialization procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer procedure tdLearning; // learning algorithm: adjust the network's weights procedure ForwardPropagation; 
// propagate the input values through the network to the output layer function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector) function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1] procedure StartNewCycle; // start a new cycle with everything set to default except for the weights procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values) procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without) procedure nextTimeStep; // increase timestep t function getCyclesTrained(): Integer; // get the number of cycles trained so far procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer end; implementation procedure TArtificialNeuralNetwork.UpdateEligibilityTraces; var i, j, k: Integer; begin // how worthy is a weight to be adjusted? for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j]; for i := 1 to NEURONS_INPUT do begin eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i]; end; end; end; end; procedure TArtificialNeuralNetwork.setReward; VAR i: Integer; begin for i := 1 to NEURONS_OUTPUT do begin // +1 = player A wins // 0 = draw // -1 = player B wins reward[t][i] := currentReward; end; end; procedure TArtificialNeuralNetwork.tdLearning; var i, j, k: Integer; begin if learningMode then begin for k := 1 to NEURONS_OUTPUT do begin if reward[t][k] = 0 then begin tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0 end else begin tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received) end; for j := 1 to NEURONS_HIDDEN do begin weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda end; end; end; end; end; procedure TArtificialNeuralNetwork.ForwardPropagation; var i, j, k: Integer; begin for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for i := 1 to NEURONS_INPUT do begin neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden end; neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j end; for k := 1 to NEURONS_OUTPUT do begin neuronsOutput[k] := 0; for j := 1 to NEURONS_HIDDEN do begin neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output end; neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k end; end; procedure TArtificialNeuralNetwork.setLearningMode; begin learningMode := activated; end; constructor TArtificialNeuralNetwork.Create; var i, j, k: Integer; begin inherited Create; Randomize; // initialize random numbers generator learningMode := TRUE; cyclesTrained := -2; // only set to -2 because it will be 
increased twice in the beginning StartNewCycle; for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; end; for i := 1 to 50 do begin last50errors[i] := 0; end; end; procedure TArtificialNeuralNetwork.nextTimeStep; begin t := t+1; end; procedure TArtificialNeuralNetwork.StartNewCycle; var i, j, k, m: Integer; begin t := 1; // start in timestep 1 cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := 0; outputBefore[k] := 0; neuronsOutput[k] := 0; for m := 1 to MAX_TIMESTEPS do begin reward[m][k] := 0; end; end; for i := 1 to NEURONS_INPUT do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceHidden[i][j][k] := 0; end; end; end; end; function TArtificialNeuralNetwork.getCyclesTrained; begin result := cyclesTrained; end; procedure TArtificialNeuralNetwork.setInputs; var k: Integer; begin for k := 1 to NEURONS_INPUT do begin neuronsInput[t][k] := state[k]; end; end; function TArtificialNeuralNetwork.getRating; begin setInputs(state); ForwardPropagation; result := neuronsOutput[1]; if not explorative then begin tdLearning; // adjust the weights according to TD-lambda ForwardPropagation; // calculate the network's output again outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep UpdateEligibilityTraces; // update the eligibility traces for the next timestep nextTimeStep; // go to the next timestep end; end; function TArtificialNeuralNetwork.HyperbolicTangent; begin if x > 5500 then // prevent overflow result := 1 else result := (Exp(2*x)-1)/(Exp(2*x)+1); end; end.

    Read the article

  • Trouble with State monad composition

    - by user1308560
    I was trying out the example given at http://www.haskell.org/haskellwiki/State_Monad#Complete_and_Concrete_Example_1 How this makes the solution composible is beyond my understanding. Here is what I tried but I get compile errors as follows: Couldn't match expected type `GameValue -> StateT GameState Data.Functor.Identity.Identity b0' with actual type `State GameState GameValue' In the second argument of `(>>=)', namely `g2' In the expression: g1 >>= g2 In an equation for `g3': g3 = g1 >>= g2 Failed, modules loaded: none. Here is the code: See the end lines module StateGame where import Control.Monad.State type GameValue = Int type GameState = (Bool, Int) -- suppose I want to play one game after the other g1 = playGame "abcaaacbbcabbab" g2 = playGame "abcaaacbbcabb" g3 = g1 >>= g2 m2 = print $ evalState g3 startState playGame :: String -> State GameState GameValue playGame [] = do (_, score) <- get return score playGame (x:xs) = do (on, score) <- get case x of 'a' | on -> put (on, score + 1) 'b' | on -> put (on, score - 1) 'c' -> put (not on, score) _ -> put (on, score) playGame xs startState = (False, 0) main str = print $ evalState (playGame str) startState
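
    A sketch of the composition the code seems to be after, written against the module above (replacing the broken g3 line): g1 and g2 are both State GameState GameValue actions rather than functions, so they chain with (>>) or do-notation, not with (>>=) applied to two actions.

      g3 :: State GameState GameValue
      g3 = g1 >> g2                     -- run g1, discard its score, then run g2

      g5 :: State GameState (GameValue, GameValue)
      g5 = do s1 <- g1                  -- keep both scores
              s2 <- g2
              return (s1, s2)

    Note that g2 starts from whatever (on, score) state g1 left behind; if each game should start fresh, reset in between, e.g. g1 >> put startState >> g2.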

    Read the article

  • How to make a custom EAR file in Maven

    - by Zombies
    Here is my challenge, I need to make an ear file for a specific container. To be more specific on how this ear will be created: This is a standard j2ee ear file, with 1 WAR in it. The container it is deployed to will expect certain xml files (which can easily be found (somewhere) inside the source project). Here are my obstacles The source folder contains various container specific xml files. But, these files do not map directly to where the container expects them inside the EAR file. For example, there will be a file that this container expects to be in 'EARFILE.ear/config/connections.xml'. But this file is located (in the source) at /some/obscure/unrelated/directory. This is the case for about 5-7 files. I cannot change the original source project layout at all. So, how can I create the compliant EAR file that I need. There is NO plugin at this time for the container that I am using, I have certainly looked.

    Read the article

  • Best way to parse this particular string using awk / sed?

    - by Jack
    Hi, I need to get a particular version string from a file (call it version.lst) and use it to compare against another version in a shell script. For example's sake, the file contains lines that look like this: V1.000 -- build date and other info here -- APP1 V1.000 -- build date and other info here -- APP2 V1.500 -- build date and other info here -- APP3 .. and so on. Let's say I am trying to grab the first version (in this case, V1.000) from APP1. Obviously, the versions can change and I want this to be dynamic. What I have right now works: var=`cat version.lst | grep " -- APP1" | grep -Eo V[0-9].[0-9]{3}` The pipe to grep gets the line containing APP1 and the second pipe to grep gets the version string. However, I hear grep is not the way to do this, so I'd like to learn the best way using awk or sed. Any ideas? I am new to both and haven't found a tutorial easy enough to learn the syntax from. Do they support egrep? Thanks!
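
    For the record, a one-liner awk sketch of the same extraction (assuming the version is always the first whitespace-separated field and the app name ends the line), plus a sed equivalent:

      var=$(awk '/-- APP1$/ { print $1 }' version.lst)
      var=$(sed -n 's/ --.*APP1$//p' version.lst)

    Both print V1.000 for the sample file; awk selects the APP1 line and prints its first field, while sed deletes everything from the first " --" onward on lines ending in APP1.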

    Read the article
