Search Results

Search found 4306 results on 173 pages for 'axj member'.


  • About variadic templates

    - by chedi
    Hi, I'm currently experimenting with the new C++0x variadic templates, and it's quite fun, although I have a question about the process of member instantiation. In this example, I'm trying to emulate a strongly typed enum with the possibility of choosing a random valid strong enum value (this is used for unit testing).

        #include <vector>
        #include <iostream>
        using namespace std;

        template<unsigned... values> struct sp_enum;

        /* this is the solution I found: declaring a global var vector<unsigned> _data;
           and it works just fine */
        template<>
        struct sp_enum<> {
            static const unsigned _count = 0;
            static vector<unsigned> _data;
        };
        vector<unsigned> sp_enum<>::_data;

        template<unsigned T, unsigned... values>
        struct sp_enum<T, values...> : private sp_enum<values...> {
            static const unsigned _count = sp_enum<values...>::_count + 1;
            static vector<unsigned> _data;
            sp_enum() : sp_enum<values...>(values...) { _data.push_back(T); }
            sp_enum(unsigned v) { _data.push_back(v); }
            sp_enum(unsigned v, unsigned...) : sp_enum<values...>(values...) { _data.push_back(v); }
        };
        template<unsigned T, unsigned... values>
        vector<unsigned> sp_enum<T, values...>::_data;

        int main() {
            enum class t : unsigned { Default = 5, t1, t2 };
            sp_enum<t::Default, t::t1, t::t2> test;
            cout << test._count << endl << test._data.size() << endl;
            for (auto i = test._data.rbegin(); i != test._data.rend(); ++i) { cout << *i << ":"; }
        }

    The result I'm getting with this code is:

        3
        1
        5:

    Can someone point out what I'm missing here? PS: using gcc 4.4.3.
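
    A minimal sketch of one possible simplification, assuming a compiler with full C++11 pack-expansion support (newer than the GCC 4.4.3 mentioned above): expanding the pack straight into a braced initializer gives a single shared element list per instantiation, so there is no per-specialization static vector to get out of sync.

        #include <vector>
        #include <iostream>

        // Hypothetical replacement for sp_enum: the whole pack goes into one vector.
        template<unsigned... values>
        struct value_list {
            static const unsigned count = sizeof...(values);
            static const std::vector<unsigned>& data() {
                static const std::vector<unsigned> d = { values... };  // pack expansion in an init-list
                return d;
            }
        };

        int main() {
            typedef value_list<5, 6, 7> test;  // 5, 6, 7 stand in for t::Default, t::t1, t::t2
            std::cout << test::count << "\n" << test::data().size() << "\n";
            for (unsigned v : test::data()) std::cout << v << ":";
        }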

    Read the article

  • access exception when invoking method of an anonymous class using java reflection

    - by Asaf David
    Hello, I'm trying to use an event dispatcher to allow a model to notify subscribed listeners when it changes. The event dispatcher receives a handler class and a method name to call during dispatch. The presenter subscribes to the model changes and provides a handler implementation to be called on changes. Here's the code (I'm sorry it's a bit long).

    EventDispatcher:

        package utils;

        public class EventDispatcher<T> {
            List<T> listeners;
            private String methodName;

            public EventDispatcher(String methodName) {
                listeners = new ArrayList<T>();
                this.methodName = methodName;
            }

            public void add(T listener) {
                listeners.add(listener);
            }

            public void dispatch() {
                for (T listener : listeners) {
                    try {
                        Method method = listener.getClass().getMethod(methodName);
                        method.invoke(listener);
                    } catch (Exception e) {
                        System.out.println(e.getMessage());
                    }
                }
            }
        }

    Model:

        package model;

        public class Model {
            private EventDispatcher<ModelChangedHandler> dispatcher;

            public Model() {
                dispatcher = new EventDispatcher<ModelChangedHandler>("modelChanged");
            }

            public void whenModelChange(ModelChangedHandler handler) {
                dispatcher.add(handler);
            }

            public void change() {
                dispatcher.dispatch();
            }
        }

    ModelChangedHandler:

        package model;

        public interface ModelChangedHandler {
            void modelChanged();
        }

    Presenter:

        package presenter;

        public class Presenter {
            private final Model model;

            public Presenter(Model model) {
                this.model = model;
                this.model.whenModelChange(new ModelChangedHandler() {
                    @Override
                    public void modelChanged() {
                        System.out.println("model changed");
                    }
                });
            }
        }

    Main:

        package main;

        public class Main {
            public static void main(String[] args) {
                Model model = new Model();
                Presenter presenter = new Presenter(model);
                model.change();
            }
        }

    Now, I expect to get the "model changed" message. However, I'm getting a java.lang.IllegalAccessException: Class utils.EventDispatcher can not access a member of class presenter.Presenter$1 with modifiers "public". I understand that the class to blame is the anonymous class I created inside the presenter, however I don't know how to make it any more 'public' than it currently is. If I replace it with a named nested class it seems to work. It also works if the Presenter and the EventDispatcher are in the same package, but I can't allow that (several presenters in different packages should use the EventDispatcher). Any ideas?
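
    A minimal sketch of one common fix, assuming the dispatcher is allowed to suppress Java's access check: the anonymous class itself is not public even though its method is, so reflection refuses to invoke the Method obtained from getClass() unless it is made accessible first.

        public void dispatch() {
            for (T listener : listeners) {
                try {
                    Method method = listener.getClass().getMethod(methodName);
                    method.setAccessible(true);  // lifts the access check on the non-public anonymous class
                    method.invoke(listener);
                } catch (Exception e) {
                    System.out.println(e.getMessage());
                }
            }
        }

    An alternative along the same lines is to look the method up on the public interface instead of the implementing class (for example by passing a Class<T> such as ModelChangedHandler.class into the dispatcher and calling getMethod on that), which avoids touching accessibility at all.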

    Read the article

  • Need help with implementation of the jQuery LiveUpdate routine

    - by miCRoSCoPiC_eaRthLinG
    Hey all, has anyone worked with the LiveUpdate function (maybe a bit of a misnomer) found on this page? It's not really a live search/update function, but a quick filtering mechanism for a pre-existing list, based on the pattern you enter in a text field. For easier reference, I'm pasting the entire function in here:

        jQuery.fn.liveUpdate = function(list){
            list = jQuery(list);

            if ( list.length ) {
                var rows = list.children('li'),
                    cache = rows.map(function(){
                        return this.innerHTML.toLowerCase();
                    });

                this
                    .keyup(filter).keyup()
                    .parents('form').submit(function(){
                        return false;
                    });
            }

            return this;

            function filter(){
                var term = jQuery.trim( jQuery(this).val().toLowerCase() ), scores = [];

                if ( !term ) {
                    rows.show();
                } else {
                    rows.hide();

                    cache.each(function(i){
                        var score = this.score(term);
                        if (score > 0) { scores.push([score, i]); }
                    });

                    jQuery.each(scores.sort(function(a, b){return b[0] - a[0];}), function(){
                        jQuery(rows[ this[1] ]).show();
                    });
                }
            }
        };

    I have a list with members as its ID, and a text field with, say, qs as its ID. I tried binding the function in the following manner:

        $( '#qs' ).liveUpdate( '#members' );

    But when I do this, the function is called only ONCE when the page is loaded (I put some console.logs in the function) but never afterwards when text is keyed into the text field. I also tried calling the routine from the keyup() handler of qs:

        $( '#qs' ).keyup( function() {
            $( this ).liveUpdate( '#members' );
        });

    This ends up going into (almost) infinite loops and halting with "Too much recursion" errors. So can anyone please shed some light on how I am supposed to actually implement this function? Also, while you are at it, can someone kindly explain this line to me:

        var score = this.score(term);

    What I want to know is where this member method score() is coming from. I didn't find any such method built into JS or jQuery. Thanks for all the help, m^e
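
    For what it's worth, score() is not part of JavaScript or jQuery; the demo this plugin appears to come from loads a separate string-scoring script (a Quicksilver-style String.prototype.score), so without that extra file the filter fails on the first keystroke and seems to do nothing. A minimal sketch, assuming the plugin above is loaded and using a deliberately crude stand-in for the scoring routine (the real scoring script should be preferred):

        $(document).ready(function () {
            // Hypothetical stand-in for the scoring script the original demo ships with;
            // anything returning > 0 for a match keeps the row visible.
            if (!String.prototype.score) {
                String.prototype.score = function (term) {
                    return this.indexOf(term) >= 0 ? 1 : 0;   // plain substring match
                };
            }

            // Bind once: liveUpdate installs its own keyup handler internally,
            // so wrapping it in another keyup handler re-binds on every keystroke.
            $('#qs').liveUpdate('#members');
        });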

    Read the article

  • Programming a windows service

    - by xarzu
    I have started programming a Windows service. I have added a notify icon from the toolbox; it shows the small notify icon that appears in the systray alongside the other icons, and it works so far. So far I have a blank form, and I have used the DoubleClick event of the notifyIcon to bring up the form (I will use the form for something later). Now I have a list of things I want to accomplish to make this work like a true Windows service.

    First of all, if possible, I would like to remove the maximize and close buttons on the form. Most Windows service-style apps that I have seen offer the ability to close the app by right-clicking the notify icon, which brings up a menu of options. I see in the properties of the form, under Misc, there is a CancelButton, but I do not see how to deactivate it. In the properties of the form, under Window Style, there is a ControlBox option that, if I turn it to false, makes all three buttons (minimize, maximize and close) go away. These are not what I am looking for: I would not like users to have the option to resize, maximize or close the form here. I suspect people will close the box intending to make it go away while still wanting the app to run. Under the "Focus" caption in Properties there is "Deactivate". I have created my own event/method/function for this, and in debug I noticed that when you click on the x-box in the upper right corner, this function is called. The problem is that after the function is over, the app closes anyway. How do I override this?

    Secondly, how do you catch the right-button click event on the notify icon in the systray? I can see how to create events for "Click" and "MouseClick" etc., but how do I determine which button was clicked? Using the right-button click is how such programs know when to pull up a menu, so I would like to know how to do this as well.
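
    A minimal sketch of the two pieces asked about, assuming a standard WinForms form named Form1 with a NotifyIcon called notifyIcon1 (both names are placeholders): cancelling FormClosing keeps the app alive when the X is clicked, and the MouseClick event reports which button was pressed.

        // Keep the app running when the user clicks the form's X button.
        private void Form1_FormClosing(object sender, FormClosingEventArgs e)
        {
            if (e.CloseReason == CloseReason.UserClosing)
            {
                e.Cancel = true;   // don't close...
                this.Hide();       // ...just hide the window; the tray icon stays
            }
        }

        // Detect which mouse button hit the tray icon.
        private void notifyIcon1_MouseClick(object sender, MouseEventArgs e)
        {
            if (e.Button == MouseButtons.Right)
            {
                // custom right-click handling could go here
            }
        }

    If only the menu is needed, assigning a ContextMenuStrip to the NotifyIcon's ContextMenuStrip property shows it on right-click without handling any event by hand.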

    Read the article

  • Having trouble getting MEF imports to be resolved

    - by Dave
    This is sort of a continuation of one of my earlier posts, which involves the resolving of modules in my WPF application. This question is specifically related to the effect of interdependencies of modules and the method of constructing those modules (i.e. via MEF or through new) on MEF's ability to resolve relationships. First of all, here is a simple UML diagram of my test application: I have tried two approaches: left approach: the App implements IError right approach: the App has a member that implements IError Left approach My code behind looked like this (just the MEF-related stuff): // app.cs [Export(typeof(IError))] public partial class Window1 : Window, IError { [Import] public CandyCo.Shared.LibraryInterfaces.IPlugin Plugin { get; set; } [Export] public CandyCo.Shared.LibraryInterfaces.ICandySettings Settings { get; set; } private ICandySettings Settings; public Window1() { // I create the preferences here with new, instead of using MEF. I wonder // if that's my whole problem? If I use MEF, and want to have parameters // going to the constructor, then do I have to [Export] a POCO (i.e. string)? Settings = new CandySettings( "Settings", @"c:\settings.xml"); var catalog = new DirectoryCatalog( "."); var container = new CompositionContainer( catalog); try { container.ComposeParts( this); } catch( CompositionException ex) { foreach( CompositionError e in ex.Errors) { string description = e.Description; string details = e.Exception.Message; } throw; } } } // plugin.cs [Export(typeof(IPlugin))] public class Plugin : IPlugin { [Import] public CandyCo.Shared.LibraryInterfaces.ICandySettings CandySettings { get; set; } [Import] public CandyCo.Shared.LibraryInterfaces.IError ErrorInterface { get; set; } [ImportingConstructor] public Plugin( ICandySettings candy_settings, IError error_interface) { CandySettings = candy_settings; ErrorInterface = error_interface; } } // candysettings.cs [Export(typeof(ICandySettings))] public class CandySettings : ICandySettings { ... } Right-side approach Basically the same as the left-side approach, except that I created a class that inherits from IError in the same assembly as Window1. I then used an [Import] to try to get MEF to resolve that for me. Can anyone explain how the two ways I have approached MEF here are flawed? I have been in the dark for so long that instead of reading about MEF and trying different suggestions, I've added MEF to my solution and am stepping into the code. The part where it looks like it fails is when it calls partManager.GetSavedImport(). For some reason, the importCache is null, which I don't understand. All the way up to this point, it's been looking at the part (Window1) and trying to resolve two imported interfaces -- IError and IPlugin. I would have expected it to enter code that looks at other assemblies in the same executable folder, and then check it for exports so that it knows how to resolve the imports...

    Read the article

  • How do I redirect a page in php

    - by user225269
    I'm having difficulty making the logout code for PHP work. When I click on the logout button, it goes back to the home page, but when I click on the back button in the browser I can still access the previous page, where the user must be logged in to access it. So I'm thinking of redirecting to the login page when the user clicks on the back button in the browser. This is my code in the home page (where no user is logged in yet); this page is called by a logout link on the user page.

        <?
        session_start();
        session_destroy();
        ?>

        <table width="300" border="0" align="center" cellpadding="0" cellspacing="1" bgcolor="#CCCCCC">
          <tr>
            <form name="form1" method="post" action="checklogin.php">
              <td>
                <table width="100%" border="0" cellpadding="3" cellspacing="1" bgcolor="#FFFFFF">
                  <tr>
                    <td colspan="3"><strong>Member Login</strong></td>
                  </tr>
                  <tr>
                    <td width="78">Username</td>
                    <td width="6">:</td>
                    <td width="294"><input name="myusername" type="text" id="myusername"></td>
                  </tr>
                  <tr>
                    <td>Password</td>
                    <td>:</td>
                    <td><input name="mypassword" type="text" id="mypassword"></td>
                  </tr>
                  <tr>
                    <td>&nbsp;</td>
                    <td>&nbsp;</td>
                    <td><input type="submit" name="Submit" value="Login"></td>
                  </tr>
                </table>
              </td>
            </form>
          </tr>
        </table>
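
    A minimal sketch of the usual approach, assuming each protected page starts its own session and that login.php is the login page (file names and the session key are placeholders): the back button is showing a cached copy, so protected pages both verify the session and ask the browser not to cache them, redirecting with header() when no user is logged in.

        <?php
        // top of every protected page
        session_start();

        // discourage the browser from serving this page from cache after logout
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Pragma: no-cache");
        header("Expires: 0");

        if (!isset($_SESSION['myusername'])) {   // whatever key checklogin.php sets on login
            header("Location: login.php");       // send the user back to the login form
            exit;                                // stop the script so nothing else is output
        }
        ?>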

    Read the article

  • May volatile be in user defined types to help writing thread-safe code

    - by David Rodríguez - dribeas
    I know, it has been made quite clear in a couple of questions/answers before, that volatile is related to the visible state of the c++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile time check to force the compiler into failing to accept code that could be not thread safe. In the article the keyword is used more like a required_thread_safety tag than the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but by lack of a better tool I could accept it. Basic simplification of the article: If you declare a variable volatile, only volatile member methods can be called on it, so the compiler will block calling code to other methods. Declaring an std::vector instance as volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to release the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article: template <typename T> class LockingPtr { public: // Constructors/destructors LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } // Pointer behavior T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { // ... use *i ... } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; // controls access to buffer_ };

    Read the article

  • Vbscript - Creating a script that mirrors several sets of folders

    - by Kenny Bones
    Ok, this is my problem. I'm doing a logonscript that basically copies Microsoft Word templates from a serverpath on to a local path of each computer. This is done using a check for group membership. If MemberOf(ObjGroupDict, "g_group1") Then oShell.Run "%comspec% /c %LOGONSERVER%\SYSVOL\mydomain.com\scripts\ROBOCOPY \\server\Templates\Group1\OFFICE2003\ " & TemplateFolder & "\" & " * /E /XO", 0, True End If Previously I used the /MIR switch of robocopy, which is exellent. But, if a user is member of more than one group, the /MIR switch removes the content from the first group, since it's mirroring the content from the second group. Meaning, I can't have both contents. This is "solved" by not using the /MIR switch and just let the content get copied anyway. BUT the whole idea of having the templates on a server is so that I can control the content the users receive through the script. So if I delete a file or folder from the server path, this doesn't replicate on the local computer. Since I don't use the /MIR switch anymore. Comprende? So, what do I do? I did a small script that basically checks the folders and files and then removes them accordingly, but this actually ended up being the same functionality as the /MIR switch anyway. How do I solve this problem? Edit: I've found that what I actually need is a routine that scans my local template folder for files and folders and checks if the same structure exists in any of the source template folders. The server template folders are set up like this: \\fileserver\templates\group1\ \\fileserver\templates\group2\ \\fileserver\templates\group3\ \\fileserver\templates\group4\ \\fileserver\templates\group5\ \\fileserver\templates\group6\ And the script that does the copying is structures like this (pseudo): If User is MemberOf (group1) Then RoboCopy.exe \\fileserver\templates\group1\ c:\templates\workgroup *.* /E /XO End if If User is MemberOf (group2) Then RoboCopy.exe \\fileserver\templates\group2\ c:\templates\workgroup *.* /E /XO End if If User is MemberOf (group3) Then RoboCopy.exe \\fileserver\templates\group3\ c:\templates\workgroup *.* /E /XO End if Etc etc With the /E switch, I make sure it copies subfolders as well. And the /XO switch only copies files and folders that are newer than those in my local path. But it doesn't consider if the local path contains files or folders that doesn't exist on the server template path. So after the copying is done, I would like to check if any of the files or folders on my c:\templates\workgroup actually exists in either of the sources. And if they don't, delete them from my local path. Something that could be combined in these memberchecks perhaps?

    Read the article

  • How do I get ELMAH to work with SQL Server (permission problems)

    - by Gary McGill
    I've got ELMAH working on my (Cassini) development server, and was quite happy with it, but now that I'm trying to move everything to my production server (IIS7), the honeymoon looks like being over. I've got past the "gotcha" with IIS7, which frankly could have been better highlighted in the documentation, and if I just use the in-memory log then it works. However, I'm trying to get it to use the SQL Server log (as I do on my development system), and I'm getting an error along the lines of: The EXECUTE permission was denied on the object ELMAH_GetErrorsXml Well, fine. I know how to grant database permissions, but I'm really struggling to understand which user and which stored procs/tables I need to grant access to. The thing that's really confusing me is that I didn't have to do anything like this to get it to work on my development server. The only difference I can see is that on my development server it seems to connect as NT AUTHORITY\IUSR, whereas on my production server it seems to connect as NT AUTHORITY\NETWORK SERVICE. (It's just using a trusted connection so I've not explicitly configured it to do that - I presume it's to do with the web server). UPDATE: I've since established that because I'm using Cassini, it was actually logging in as me (an admin) and not IUSR, which explains why I didn't get any permission problems. On my development server, the IUSR account is a member of the public database role, and has access to the required database (again as "public"). There's no explicit granting of object-level permissions. [See update above - this is irrelevant]. On my production server, I've added NETWORK SERVICE in exactly the same way (public database role, explicit access to the database as "public"). Yet, I get this permission error. Why?!! [See update above - the only reason I don't get a permission error is because I'm running as an admin]. And, of course, if the fact that it works locally is just "luck", I will need to know which SPs/tables to grant access to. My guess would be all 3 SPs and not the table, but it would be good (again) to see some documentation that makes this explicit.
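
    A minimal sketch of the grants that usually resolve this, assuming the default ELMAH SQL objects (ELMAH_LogError, ELMAH_GetErrorXml, ELMAH_GetErrorsXml) and that the site runs as NETWORK SERVICE; the exact principal may differ (for example an IIS AppPool identity).

        -- run in the ELMAH database
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_LogError      TO [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorXml   TO [NT AUTHORITY\NETWORK SERVICE];
        GRANT EXECUTE ON OBJECT::dbo.ELMAH_GetErrorsXml  TO [NT AUTHORITY\NETWORK SERVICE];
        -- no direct table permission is needed when all access goes through the procedures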

    Read the article

  • Problem with std::map and std::pair

    - by Tom
    Hi everyone. I have a small program I want to execute to test something:

        #include <map>
        #include <iostream>
        using namespace std;

        struct _pos {
            float xi;
            float xf;
            bool operator<(_pos& other) {
                return this->xi < other.xi;
            }
        };

        struct _val {
            float f;
        };

        int main()
        {
            map<_pos, _val> m;

            struct _pos k1 = {0, 10};
            struct _pos k2 = {10, 15};
            struct _val v1 = {5.5};
            struct _val v2 = {12.3};

            m.insert(std::pair<_pos, _val>(k1, v1));
            m.insert(std::pair<_pos, _val>(k2, v2));

            return 0;
        }

    The problem is that when I try to compile it, I get the following error:

        $ g++ m2.cpp -o mtest
        In file included from /usr/include/c++/4.4/bits/stl_tree.h:64,
                         from /usr/include/c++/4.4/map:60,
                         from m2.cpp:1:
        /usr/include/c++/4.4/bits/stl_function.h: In member function ‘bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _pos]’:
        /usr/include/c++/4.4/bits/stl_tree.h:1170: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const _Val&) [with _Key = _pos, _Val = std::pair<const _pos, _val>, _KeyOfValue = std::_Select1st<std::pair<const _pos, _val> >, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’
        /usr/include/c++/4.4/bits/stl_map.h:500: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename _Alloc::rebind<std::pair<const _Key, _Tp> >::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::pair<const _Key, _Tp>&) [with _Key = _pos, _Tp = _val, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’
        m2.cpp:30: instantiated from here
        /usr/include/c++/4.4/bits/stl_function.h:230: error: no match for ‘operator<’ in ‘__x < __y’
        m2.cpp:9: note: candidates are: bool _pos::operator<(_pos&)
        $

    I thought that declaring the operator< on the key would solve the problem, but it's still there. What could be wrong? Thanks in advance.
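
    A minimal sketch of the usual fix: std::less calls operator< on const references, so the comparison has to accept a const argument and be a const member function (or be a free function taking two const references); as written, the candidate bool _pos::operator<(_pos&) cannot be called on const _pos objects, which is exactly what the last two lines of the error say.

        struct _pos {
            float xi;
            float xf;
            // both the parameter and the member function itself must be const
            bool operator<(const _pos& other) const {
                return xi < other.xi;
            }
        };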

    Read the article

  • Issue with class template partial specialization

    - by DeadMG
    I've been trying to implement a function that needs partial template specializations, have fallen back to the static struct technique, and I'm having a number of problems.

        template<typename T>
        struct PushImpl<const T&> {
            typedef T* result_type;
            typedef const T& argument_type;
            template<int StackSize>
            static result_type Push(IStack<StackSize>* sptr, argument_type ref) {
                // Code if the template is T&
            }
        };

        template<typename T>
        struct PushImpl<const T*> {
            typedef T* result_type;
            typedef const T* argument_type;
            template<int StackSize>
            static result_type Push(IStack<StackSize>* sptr, argument_type ptr) {
                return PushImpl<const T&>::Push(sptr, *ptr);
            }
        };

        template<typename T>
        struct PushImpl {
            typedef T* result_type;
            typedef const T& argument_type;
            template<int StackSize>
            static result_type Push(IStack<StackSize>* sptr, argument_type ref) {
                // Code if the template is neither T* nor T&
            }
        };

        template<typename T>
        typename PushImpl<T>::result_type Push(typename PushImpl<T>::argument_type ref) {
            return PushImpl<T>::Push(this, ref);
        }

    First: the struct is nested inside another class (the one that offers Push as a member function), but it can't access the template parameter (StackSize), even though my other nested classes all could. I've worked around it, but it would be cleaner if they could just access StackSize like a normal class. Second: the compiler complains that it doesn't use or can't deduce T. Really? Thirdly: the compiler complains that it can't specialize a template in the current scope (class scope). I can't see what the problem is. Have I accidentally invoked some bad syntax?

    Read the article

  • replace XmlSlurper tag with arbitrary XML

    - by Misha Koshelev
    Dear All: I am trying to replace specific XmlSlurper tags with arbitrary XML strings. The best way I have managed to come up with to do this is: #!/usr/bin/env groovy import groovy.xml.StreamingMarkupBuilder def page=new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(""" <html> <head></head> <body> <one attr1='val1'>asdf</one> <two /> <replacemewithxml /> </body> </html> """.trim()) import groovy.xml.XmlUtil def closure closure={ bind,node-> if (node.name()=="REPLACEMEWITHXML") { bind.mkp.yieldUnescaped "<replacementxml>sometext</replacementxml>" } else { bind."${node.name()}"(node.attributes()) { mkp.yield node.text() node.children().each { child-> closure(bind,child) } } } } println XmlUtil.serialize( new StreamingMarkupBuilder().bind { bind-> closure(bind,page) } ) However, the only problem is the text() element seems to capture all child text nodes, and thus I get: <?xml version="1.0" encoding="UTF-8"?> <HTML>asdf<HEAD/> <BODY>asdf<ONE attr1="val1">asdf</ONE> <TWO/> <replacementxml>sometext</replacementxml> </BODY> </HTML> Any ideas/help much appreciated. Thank you! Misha p.s. Also, out of curiosity, if I change the above to the "Groovier" notation as follows, the groovy compiler thinks I am trying to access the ${node.name()} member of my test class. Is there a way to specify this is not the case while still not passing the actual builder object? Thank you! :) def closure closure={ node-> if (node.name()=="REPLACEMEWITHXML") { mkp.yieldUnescaped "<replacementxml>sometext</replacementxml>" } else { "${node.name()}"(node.attributes()) { mkp.yield node.text() node.children().each { child-> closure(child) } } } } println XmlUtil.serialize( new StreamingMarkupBuilder().bind { closure(page) } )

    Read the article

  • wrong operator() overload called

    - by user313202
    Okay, I am writing a matrix class and have overloaded the function call operator twice. The core of the matrix is a 2D double array. I am using the MinGW GCC compiler called from a Windows console. The first overload is meant to return a double from the array (for viewing an element). The second overload is meant to return a reference to a location in the array (for changing the data in that location).

        double operator()(int row, int col) const;  // allows view of element
        double &operator()(int row, int col);       // allows assignment of element

    I am writing a testing routine and have discovered that the "viewing" overload never gets called. For some reason the compiler "defaults" to calling the overload that returns a reference when the following printf() statement is used:

        fprintf(outp, "%6.2f\t", testMatD(i,j));

    I understand that I'm insulting the gods by writing my own matrix class without using vectors and testing with C I/O functions. I will be punished thoroughly in the afterlife, no need to do it here. Ultimately I'd like to know what is going on here and how to fix it. I'd prefer to use the cleaner looking operator overloads rather than member functions. Any ideas? -Cal

    The matrix class (irrelevant code omitted):

        class Matrix {
        public:
            double getElement(int row, int col) const;   // returns the element at row,col

            // operator overloads
            double operator()(int row, int col) const;   // allows view of element
            double &operator()(int row, int col);        // allows assignment of element

        private:
            // data members
            double **array;   // pointer to data array
        };

        double Matrix::getElement(int row, int col) const {
            // transform indices into true coordinates (from sorted coordinates;
            // only row needs to be transformed (user can only sort by row))
            row = sortedArray[row];
            result = array[usrZeroRow+row][usrZeroCol+col];
            return result;
        }

        // operator overloads
        double Matrix::operator()(int row, int col) const {
            // this overload is used when viewing an element
            return getElement(row, col);
        }

        double &Matrix::operator()(int row, int col) {
            // this overload is used when placing an element
            return array[row+usrZeroRow][col+usrZeroCol];
        }

    The testing program (irrelevant code omitted):

        int main(void) {
            FILE *outp;
            outp = fopen("test_output.txt", "w+");

            Matrix testMatD(5,7);   // construct 5x7 matrix
            // some initializations omitted

            fprintf(outp, "%6.2f\t", testMatD(i,j));   // calls the wrong overload
        }
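
    A minimal sketch of why this happens and one way around it: overload resolution only picks the const version when the matrix object itself is const, so a non-const testMatD always selects the reference-returning overload (which is harmless for reading, but can be steered the other way by viewing the matrix through a const reference). The dump helper below is hypothetical:

        void dump(const Matrix& m, FILE* outp, int rows, int cols) {
            for (int i = 0; i < rows; ++i) {
                for (int j = 0; j < cols; ++j) {
                    // m is const here, so the const operator() overload is chosen
                    fprintf(outp, "%6.2f\t", m(i, j));
                }
                fprintf(outp, "\n");
            }
        }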

    Read the article

  • Help with infrequent segmentation fault in accessing struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as:

        struct Infection {
        public:
            explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {}
            double infT; // infection start time
            double recT; // scheduled recovery time
        };

    These structs are kept in a special structure, InfectionMap:

        typedef boost::unordered_multimap< int, Infection > InfectionMap;

    Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through the carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.)

        assert( carriage.size() > 0 );
        pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it;

        for ( it = ret.first; it != ret.second; it++ ) {
            if ( ((*it).second).recT == recoverTime ) { // produces seg fault
                carriage.erase( it );
            }
        }

    I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.
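
    A minimal sketch of one likely culprit, assuming Boost's unordered_multimap behaves like the standard unordered containers: erasing through it and then incrementing that same iterator uses an invalidated iterator, so the loop should advance from the position erase() hands back instead.

        for ( it = ret.first; it != ret.second; ) {
            if ( it->second.recT == recoverTime ) {
                it = carriage.erase( it );   // erase returns the iterator following the removed element
            } else {
                ++it;
            }
        }

    On Boost versions whose erase overload does not return an iterator, the same effect can be had with carriage.erase( it++ ), since erasing invalidates only iterators to the removed element.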

    Read the article

  • C++ EZWindows Linker Errors when trying to run demos

    - by Brent Nash
    I'm attempting to download and use the EZWindows ( http://www.cs.virginia.edu/c++programdesign/software/ ) SPARC installation (the http://www.cs.virginia.edu/c++programdesign/software/EzWindows2a-SPARC.tar.gz file). When trying to build some of the examples that come with it, I'm getting some linker errors that I just can't figure out. Here's the result of the uname -a command on the machine I'm running on (on which I am NOT an administrator): SunOS AAA.BBB.edu 5.10 Generic_138888-07 sun4v sparc SUNW,T5240 And here is the result of the g++ -v command: gcc version 2.95.2 19991024 (release) If you untar/unzip the package, I'm trying to compile the example in samples/chap03/lawn by simply doing "gmake" in that directory, here's what I get. Here's the error I get: bash-3.00$ gtar xfz EzWindows2a-SPARC.tar.gz gtar: Removing leading `./' from member names bash-3.00$ cd chap03/lawn bash-3.00$ gmake clean ; gmake rm -f *.o *~ lawn make lawn g++ -I/X11.6/include -I../../EzWindows/include -c prog3-5.cpp prog3-5.cpp: In function `int ApiMain()': prog3-5.cpp:75: warning: initialization to `long int' from `const float' prog3-5.cpp:85: warning: initialization to `int' from `float' prog3-5.cpp:86: warning: initialization to `int' from `float' g++ -o lawn prog3-5.o -L/X11.6/lib -R/X11.6/lib -lX11 -lsocket -L../../EzWindows/lib -lezwin -lXpm ld: warning: symbol `clog' has differing types: (file /usr/usc/gnu/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.6/2.95.2/libstdc++.so type=OBJT; file /lib/libm.so type=FUNC); /usr/usc/gnu/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.6/2.95.2/libstdc++.so definition taken Undefined first referenced symbol in file __dl__Q2t12basic_string3ZcZt18string_char_traits1ZcZt24__default_alloc_template2b0i03RepPv ../../EzWindows/lib/libezwin.a(WindowManager.o) __eh_pc ../../EzWindows/lib/libezwin.a(WindowManager.o) clone__Q2t12basic_string3ZcZt18string_char_traits1ZcZt24__default_alloc_template2b0i03Rep ../../EzWindows/lib/libezwin.a(WindowManager.o) ld: fatal: Symbol referencing errors. No output written to lawn collect2: ld returned 1 exit status *** Error code 1 make: Fatal error: Command failed for target `lawn' Current working directory /export/samfs-bcf/rcf-11/bnash/sparc/chap03/lawn gmake: *** [default] Error 1 This particular run was built using g++ 2.95.2, but I've also tried with versions 3.3.2 and 4.2.1 with other equally strange errors. I'm pretty sure that EZWindows requires a 2.x version of gcc & g++. I've tried to make sure that my LD_LIBRARY_PATH and PATH are setup to include everything that's needed, but it seems that may be incorrect. I'm running out of ideas. Anyone have any other ones?

    Read the article

  • Delaying emails in PHP to avoid exceeding server limit

    - by Andrew P.
    Okay, so here's my problem: I have a list of members on a website, and periodically one of the admins of my site (who are not very web or tech savvy) will send a newsletter to the member list. My current member list is well over 800 individuals long. So, I wrote an email script that sends the email to the full member list, with the members listed in the Bcc header. However, I've discovered that my host server has a limit of 300 emails per hour, which I apparently exceed even though the members are listed in the Bcc field. (I wasn't previously aware that the behaviour of Bcc was to send a separate email for each name on the list...)

    After some thought, I've come to the conclusion that my only solution is to have my script send the email to only the first 300 addresses, wait an hour, send it to the next three hundred, wait another hour, and so on until I've sent the email to the whole member list. Looking around on the internet, I've seen some other solutions people have come up with for delaying emails in PHP. sleep() is obviously not an option, because I can't just leave the script open and running for three or four hours. I've seen some people suggest cron jobs, but I'm not sure how feasible it would be to create three new cron jobs every time I send an email, use them once, and then delete them afterward. The final (and what I think is the smartest) solution I've seen is to have a table in my database to temporarily store the emails to be delayed and sent later, and then create a cron job that checks this SQL table every hour or so, compares the timestamp of the row to the current timestamp, and then sends the email if an hour has passed.

    So I'm asking you all which method you would recommend. Is there an easier solution that I've completely overlooked (aside from getting a different hosting plan, ha!), or is there a cleaner way to do it than the database / cron job approach?

    tl;dr: I have 800 emails to send in an hour on a server that limits me to 300/hr. Using PHP, find a way to get around this problem in a way that the person sending the email needs only to click "send."
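
    A minimal sketch of the queue-table idea described above, assuming a hypothetical table email_queue(id, recipient, subject, body, sent) and a cron entry that runs the worker script hourly; all names and credentials are placeholders.

        <?php
        // worker.php -- run from cron, e.g. once an hour:
        //   0 * * * * php /path/to/worker.php
        $db = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

        // take at most 300 unsent messages per run to stay under the host's limit
        $rows = $db->query("SELECT id, recipient, subject, body
                            FROM email_queue WHERE sent = 0 LIMIT 300");

        foreach ($rows as $row) {
            if (mail($row['recipient'], $row['subject'], $row['body'])) {
                $upd = $db->prepare("UPDATE email_queue SET sent = 1 WHERE id = ?");
                $upd->execute(array($row['id']));
            }
        }
        ?>

    The "send" button then only has to insert one row per recipient into email_queue; the worker drains the queue at a rate the host allows.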

    Read the article

  • How to develop asp.net web service to create the web method which can take the parameter of type windows form control

    - by Shailesh Jaiswal
    I am developing asp.net web service. I am developing this web service so that OPC ( OLE for process control) client application can use it. In this web service I am using the built-in functions provided by the namespaces using OPC, using OPCDA, using OPCDA.NET. I have also added the namespace using System.Windows.Forms in this web service so that I can use the windows form control. In that we service I have created on web method which takes the parameter of type windows form control as given below. public void getOPCServerItems(TreeView tvServerItems, ListView lvBranchItems) { ArrayList ArrlstObj = new ArrayList(); ItemShowTreeList = OpcSrv.ShowBrowseTreeList(tvServerItems, lvBranchItems); ItemShowTreeList.BrowseModeOneLevel = true; // browse hierachy levels when selected. (default) ItemShowTreeList.Show(OpcSrv.ServerName); } In the above web method I need to pass the values to the built-in function ShowBrowseTreeList() (found in OPC, OPCDA, OPCDA.NET namespaces). This function takes the two parameter of windows form control type. These parameters are Treeview & ListView control of the windows form. In the above web method ShowBrowseTreeList() method automatically create the treeview & listview structure of the available items. Now as I am consuming the web service so I need to pass the values to the webmethod getOPCServerItems(). But as I my consuming application is asp.net application there is no such windows form control. In asp.net application there are also & control. I want to display The data returned in these asp.net controls rather than windows form control. I am not getting the way what should I need to do or how should I pass the values form my client application to this web service ? In the above method getOPCServerItems() when I use the parameter of type treeview & listview it generate s error "Cannot serialize member System.ComponentModel.Component.Site of type System.ComponentModel.ISite because it is an interface.". Can you provide me the the way In which I can write the above web method & how should I pass parameter to the Treeview & Listview control (windows form control) from my asp.net application ? which controls I should use to pass parameters ? Is there any need to do any type of casting ? Can you provide me the the code for above web method so that I can resolve the above issue ?

    Read the article

  • Understanding VS2010 C# parallel profiling results

    - by Haggai
    I have a program with many independent computations so I decided to parallelize it. I use Parallel.For/Each. The results were okay for a dual-core machine - CPU utilization of about 80%-90% most of the time. However, with a dual Xeon machine (i.e. 8 cores) I get only about 30%-40% CPU utilization, although the program spends quite a lot of time (sometimes more than 10 seconds) on the parallel sections, and I see it employs about 20-30 more threads in those sections compared to serial sections. Each thread takes more than 1 second to complete, so I see no reason for them to work in parallel - unless there is a synchronization problem. I used the built-in profiler of VS2010, and the results are strange. Even though I use locks only in one place, the profiler reports that about 85% of the program's time is spent on synchronization (also 5-7% sleep, 5-7% execution, under 1% IO). The locked code is only a cache (a dictionary) get/add: bool esn_found; lock (lock_load_esn) esn_found = cache.TryGetValue(st, out esn); if(!esn_found) { esn = pData.esa_inv_idx.esa[term_idx]; esn.populate(pData.esa_inv_idx.datafile); lock (lock_load_esn) { if (!cache.ContainsKey(st)) cache.Add(st, esn); } } lock_load_esn is a static member of the class of type Object. esn.populate reads from a file using a separate StreamReader for each thread. However, when I press the Synchronization button to see what causes the most delay, I see that the profiler reports lines which are function entrance lines, and doesn't report the locked sections themselves. It doesn't even report the function that contains the above code (reminder - the only lock in the program) as part of the blocking profile with noise level 2%. With noise level at 0% it reports all the functions of the program, which I don't understand why they count as blocking synchronizations. So my question is - what is going on here? How can it be that 85% of the time is spent on synchronization? How do I find out what really is the problem with the parallel sections of my program? Thanks.
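
    A minimal sketch of one way to take the lock out of the hot path, assuming .NET 4's ConcurrentDictionary is an acceptable cache and that the key type is string (both are assumptions; "Esn" is a placeholder for the cached element type). GetOrAdd folds the lookup and the insert into one thread-safe call; the value factory may run more than once under contention, which is usually acceptable for a cache.

        using System.Collections.Concurrent;

        // cache is now a ConcurrentDictionary<string, Esn> instead of Dictionary + lock
        esn = cache.GetOrAdd(st, key =>
        {
            var fresh = pData.esa_inv_idx.esa[term_idx];
            fresh.populate(pData.esa_inv_idx.datafile);   // each thread still uses its own StreamReader
            return fresh;
        });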

    Read the article

  • Cannot instantiate abstract class or interface : problem while persisting

    - by sammy
    i have a class campaign that maintains a list of AdGroupInterfaces. im going to persist its implementation @Entity @Table(name = "campaigns") public class Campaign implements Serializable,Comparable<Object>,CampaignInterface { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @OneToMany ( cascade = {CascadeType.ALL}, fetch = FetchType.EAGER, targetEntity=AdGroupInterface.class ) @org.hibernate.annotations.Cascade( value = org.hibernate.annotations.CascadeType.DELETE_ORPHAN ) @org.hibernate.annotations.IndexColumn(name = "CHOICE_POSITION") private List<AdGroupInterface> AdGroup; public Campaign() { super(); } public List<AdGroupInterface> getAdGroup() { return AdGroup; } public void setAdGroup(List<AdGroupInterface> adGroup) { AdGroup = adGroup; } public void set1AdGroup(AdGroupInterface adGroup) { if(AdGroup==null) AdGroup=new LinkedList<AdGroupInterface>(); AdGroup.add(adGroup); } } AdGroupInterface's implementation is AdGroups. when i add an adgroup to the list in campaign, campaign c; c.getAdGroupList().add(new AdGroups()), etc and save campaign it says"Cannot instantiate abstract class or interface :" AdGroupInterface its not recognizing the implementation just before persisting... Whereas Persisting adGroups separately works. when it is a member of another entity, it doesnt get persisted. import java.io.Serializable; import java.util.List; import javax.persistence.*; @Entity @DiscriminatorValue("1") @Table(name = "AdGroups") public class AdGroups implements Serializable,Comparable,AdGroupInterface{ /** * */ private static final long serialVersionUID = 1L; private Long Id; private String Name; private CampaignInterface Campaign; private MonetaryValue DefaultBid; public AdGroups(){ super(); } public AdGroups( String name, CampaignInterface campaign) { super(); this.Campaign=new Campaign(); Name = name; this.Campaign = campaign; DefaultBid = defaultBid; AdList=adList; } @Id @GeneratedValue(strategy = GenerationType.IDENTITY) @Column(name="AdGroup_Id") public Long getId() { return Id; } public void setId(Long id) { Id = id; } @Column(name="AdGroup_Name") public String getName() { return Name; } public void setName(String name) { Name = name; } @ManyToOne @JoinColumn (name="Cam_ID", nullable = true,insertable = false) public CampaignInterface getCampaign() { return Campaign; } public void setCampaign(CampaignInterface campaign) { this.Campaign = campaign; } } what am i missing?? please look into it ...
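
    A minimal sketch of the usual fix, assuming AdGroups is the only implementation that needs to be persisted: pointing targetEntity at the concrete mapped class (or introducing an abstract @Entity base class) tells Hibernate what to instantiate, since it cannot materialize the interface AdGroupInterface itself.

        @OneToMany(
            cascade = { CascadeType.ALL },
            fetch = FetchType.EAGER,
            targetEntity = AdGroups.class   // a concrete @Entity, not the interface
        )
        @org.hibernate.annotations.Cascade(
            value = org.hibernate.annotations.CascadeType.DELETE_ORPHAN
        )
        @org.hibernate.annotations.IndexColumn(name = "CHOICE_POSITION")
        private List<AdGroupInterface> AdGroup;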

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels, and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array and one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count and I'm not clustering empty cells.)

        |-------|-------|------|
        | Jane  | Joe   |      |
        | Jack  | Jane  |      |
        |-------|-------|------|
        | Jane  | Jane  |      |
        |       | Joe   |      |
        |-------|-------|------|
        |       | Jack  | Jane |
        |       | Joe   |      |
        |-------|-------|------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper left cells.

    What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system. I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency.

    Why I Don't Like it: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then implement a method that will both associate the element under inspection with a segment and then concatenate the two adjacent identical segments. On the whole, this method and the intricate hashing system that it would require feels very inelegant. Additionally, I really only care about finding the large segments in the double array and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?

    Read the article

  • NServiceBus serialization issue of derived types

    - by Tiju John
    Hi Guys, for the context setting, I am exchanging messages between my nServiceBus client and nSerivceBus server. its is the namespace xyz.Messages and and a class, Message : IMessage I have more messages that are in the other dlls, like xyz.Messages.Domain1, xyz.Messages.Domain2, xyz.Messages.Domain3. and messages that derive form that base message, Message. I have the endpoints defined as like : at client <UnicastBusConfig> <MessageEndpointMappings> <add Messages="xyz.Messages" Endpoint="xyzServerQueue" /> <add Messages="xyz.Messages.Domain1" Endpoint="xyzServerQueue" /> <add Messages="xyz.Messages.Domain2" Endpoint="xyzServerQueue" /> </MessageEndpointMappings> </UnicastBusConfig> at Server <UnicastBusConfig> <MessageEndpointMappings> <add Messages="xyz.Messages" Endpoint="xyzClientQueue" /> <add Messages="xyz.Messages.Domain1" Endpoint="xyzClientQueue" /> <add Messages="xyz.Messages.Domain2" Endpoint="xyzClientQueue" /> </MessageEndpointMappings> </UnicastBusConfig> and the bus initialized as IBus serviceBus = Configure.With() .SpringBuilder() .XmlSerializer() .MsmqTransport() .UnicastBus() .LoadMessageHandlers() .CreateBus() .Start(); now when i try sending instance of Message type or any type derived types of Message, it successfully sends the message over and at the server, i get the proper type. eg. Message message= new Message(); Bus.Send(message); // works fine, transfers Message type message = new MessageDerived1(); Bus.Send(message); // works fine, transfers MessageDerived1 type message = new MessageDerived2(); Bus.Send(message); // works fine, transfers MessageDerived2 type My problem arises when any type, say MessageDerived1, contains a member variable of type Message, and when i assign it to a derived type, the type is not properly transferred over the wire. It transfers only as Message type, not the derived type. public class MessageDerived2 : Message { public Message message; } MessageDerived2 messageDerived2= new MessageDerived2(); messageDerived2.message = new MessageDerived1(); message = messageDerived2; Bus.Send(message); // incorrect behaviour, transfers MessageDerived2 correctly, but looses type of MessageDerived2.Message (it deserializes as Message type, instead of MessageDerived1) any help is strongly appreciated. Thanks TJ

    Read the article

  • How to call Ajax to run a PHP file while maintaining PHP & Javascript variables.

    - by Umar
    Hi Stackoverflowers. I'm using the Facebook php-sdk to get the users name and friends, right now the loading friends part takes about +3 seconds so I wanted to do it via Ajax, e.g. so the document can load and jQuery then calls an external PHP script which loads the friends (their names and their profile pictures). So to do this I did: $(document).ready(function() { var loadUrl = "http://localhost/fb/getFriends.php" ; $("#friends") .html("Hold on, your friends are loading!") .load(loadUrl); }); But I get a PHP error: Fatal error: Call to a member function api() on a non-object If I do this in the same PHP file (so I don't use Ajax at all to call it) it works fine. Now I think I understand the reason this is happening, but I don't know how to fix it. In my main index.php file I have a bunch of init and session code e.g. FB.init({ appId : '<?php echo $facebook->getAppId(); ?>', session : <?php echo json_encode($session); ?>, // don't refetch the session when PHP already has it status : true, // check login status cookie : true, // enable cookies to allow the server to access the session xfbml : true // parse XFBML }); So I'm just wondering what is the best way to treat my new separate PHP file getFriends.php in a way where it has access to all PHP/JavaScript session data/variables? If you haven't used the Facebook php-sdk I'll quickly explain what I mean: Lets say I have index.php and getUsername.php, from index.php I want to retrieve the getUsername.php file via Ajax using .load. Now the problem is getUsername.php needs to access PHP session data/Javascript Init functions which were created in index.php, so I'm thinking of ways to solve this (I'm new to PHP so sorry if this sounds silly) but I'm thinking maybe I could do a POST in jQuery Ajax and post the session data? Or maybe I could create a PHP class, so something like: class getUsername extends index{} /*Yes I'm a newbie*/ If you have a look at the php-sdk example.php link posted at the top maybe you'd better understand what variables exactly need to be accessed from a new file. Also on a different note, I'm using PHP to work out page rendering times and it seems that fetching the users name alone : // Session based API call. if ($session) { try { $uid = $facebook->getUser(); $me = $facebook->api('/me'); } catch (FacebookApiException $e) { error_log($e); } } Can take a good 4 seconds, is this normal? Once I get the users details is it good to cache it or something? -Speed isn't as important right now, for now I'm just trying to figure out this Ajax-separating php files thing. Woah this is a long post. Thanks very much for your time.

    Read the article

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so:

        class MyClass: public Base {
        public:
            MyClass& operator =(MyClass const& rhs);
            virtual MyClass& operator =(Base const& rhs);
        };

        // automatically gets defined, so we make it call the virtual function below
        MyClass& MyClass::operator =(MyClass const& rhs)
        {
            return (*this = static_cast<Base const&>(rhs));
        }

        MyClass& MyClass::operator =(Base const& rhs)
        {
            assert(typeid(rhs) == typeid(*this)); // assigning to different types is a logical error
            MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs);
            try {
                // allocate new variables
                Base::operator =(rhs);
            } catch(...) {
                // delete the allocated variables
                throw;
            }
            // assign to member variables
        }

    The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this:

        class MyClass: public Base {
        public:
            MyClass& operator =(MyClass const& rhs);
            // etc
            virtual inline MyClass& operator =(Base const& rhs) {
                assert(typeid(rhs) == typeid(*this));
                return this->set(static_cast<Base const&>(rhs));
            }
        private:
            MyClass& set(Base const& rhs); // same basic thing
        };

    But I've been wondering if I could check the types at compile time. I looked into Boost.TypeTraits, and I came close by doing

        BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>));

    but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly -- I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile time.
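
    For what it's worth, a compile-time check can only see static types, so it cannot replace the typeid assertion when the right-hand side genuinely arrives as a Base&; what it can do is reject mismatched static types when the caller still has them. A minimal sketch of that idea, assuming Boost is available: a templated assignment helper offered as an alternative to (not a drop-in replacement for) the virtual operator=.

        #include <boost/static_assert.hpp>
        #include <boost/type_traits/is_same.hpp>

        class MyClass : public Base {
        public:
            template <typename T>
            MyClass& assign(T const& rhs) {
                // refuses to compile unless the caller's static type is exactly MyClass
                BOOST_STATIC_ASSERT((boost::is_same<T, MyClass>::value));
                // ... copy members here ...
                return *this;
            }
        };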

    Read the article

  • Proper way to use Linq with WPF

    - by Ingó Vals
    I'm looking for a good guide to the right way of using LINQ to SQL together with WPF. Most guides only cover the bare basics, like how to show data from a database, but none I found goes into how to save back to the database. Can you answer, or point me to a guide that can answer, these questions?

    I have a separate data project because the same data will also be used in a web page, so I use the repository approach. That means I have a separate class that uses the DataContext and exposes methods like GetAllCompanies() and GetCompanyById(int id).

    1) Where there are collections, is it best to return an IQueryable or should I return a list? Inside the WPF project I have seen recommendations to wrap the collection in an ObservableCollection.

    2) Why should I use ObservableCollection, and should I use it even with LINQ / IQueryable?

    Some properties of the LINQ entities should be editable in the app, so I set them to two-way binding mode. That would change the object in the ObservableCollection.

    3) Is the object in the ObservableCollection still an instance of the original LINQ entity, so that the change is reflected in the database (when SubmitChanges is called)?

    I should have some kind of save method in the repository, but when should I call it? What happens if someone edits a field but decides not to save it, goes to another object, edits it, and then presses save? Doesn't the original change also get saved? When does it stop remembering the changes to a LINQ entity object? Should I instantiate the DataContext class in each method so it goes out of scope when done?

    4) When and how should I call the SubmitChanges method?

    5) Should I have the DataContext as a member variable of the repository class or a method variable?

    To add a new row, I should create a new object in an event handler (a "new" button push) and then add it to the database using a repository method.

    6) When I add the object to the database there will be no new object in the ObservableCollection. Do I refresh somehow?

    7) I want to reuse the edit window when creating a new row, but I'm not sure how to dynamically change from referencing the selected item in a ListView to this new object.

    Any examples you can point out?

    Read the article
