Search Results

Search found 14142 results on 566 pages for 'missing symbols'.

  • .NET WinForms INotifyPropertyChanged updates all bindings when one is changed. Better way?

    - by Dave Welling
    In a Windows Forms application, a property change that triggers INotifyPropertyChanged results in the form reading EVERY property from my bound object, not just the property that changed (see the example code below). This seems absurdly wasteful, since the interface requires the name of the changing property. It is causing a lot of clocking in my app because some of the property getters require calculations to be performed. I'll likely need to implement some sort of logic in my getters to discard the unnecessary reads if there is no better way to do this. Am I missing something? Is there a better way? Please don't say to use a different presentation technology -- I am doing this on Windows Mobile (although the behavior happens on the full framework as well).

    Here's some toy code to demonstrate the problem. Clicking the button will result in BOTH textboxes being populated even though only one property has changed.

        using System;
        using System.ComponentModel;
        using System.Drawing;
        using System.Windows.Forms;

        namespace Example
        {
            public class ExView : Form
            {
                private Presenter _presenter = new Presenter();

                public ExView()
                {
                    this.MinimizeBox = false;

                    TextBox txt1 = new TextBox();
                    txt1.Parent = this;
                    txt1.Location = new Point(1, 1);
                    txt1.Width = this.ClientSize.Width - 10;
                    txt1.DataBindings.Add("Text", _presenter, "SomeText1");

                    TextBox txt2 = new TextBox();
                    txt2.Parent = this;
                    txt2.Location = new Point(1, 40);
                    txt2.Width = this.ClientSize.Width - 10;
                    txt2.DataBindings.Add("Text", _presenter, "SomeText2");

                    Button but = new Button();
                    but.Parent = this;
                    but.Location = new Point(1, 80);
                    but.Click += new EventHandler(but_Click);
                }

                void but_Click(object sender, EventArgs e)
                {
                    _presenter.SomeText1 = "some text 1";
                }
            }

            public class Presenter : INotifyPropertyChanged
            {
                public event PropertyChangedEventHandler PropertyChanged;

                private string _SomeText1 = string.Empty;
                public string SomeText1
                {
                    get { return _SomeText1; }
                    set
                    {
                        _SomeText1 = value;
                        _SomeText2 = value; // <-- To demonstrate that both properties are read
                        OnPropertyChanged("SomeText1");
                    }
                }

                private string _SomeText2 = string.Empty;
                public string SomeText2
                {
                    get { return _SomeText2; }
                    set
                    {
                        _SomeText2 = value;
                        OnPropertyChanged("SomeText2");
                    }
                }

                private void OnPropertyChanged(string PropertyName)
                {
                    PropertyChangedEventHandler temp = PropertyChanged;
                    if (temp != null)
                    {
                        temp(this, new PropertyChangedEventArgs(PropertyName));
                    }
                }
            }
        }
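
    The workaround hinted at above -- keeping expensive getters cheap by caching their result and only recomputing when the underlying data actually changes -- looks roughly like the sketch below. It is written in Java only because a concrete language is needed; the idea is not specific to WinForms binding, and every name here is hypothetical.

        // Hypothetical presenter that recomputes an expensive value only when its input changes.
        class CachingPresenter {
            private String someText1 = "";
            private String cachedExpensive;   // last computed value
            private boolean dirty = true;     // set whenever an input changes

            void setSomeText1(String value) {
                someText1 = value;
                dirty = true;                 // invalidate, but do not recompute yet
            }

            String getExpensiveProperty() {
                if (dirty) {                  // recompute only if something actually changed
                    cachedExpensive = computeFrom(someText1);
                    dirty = false;
                }
                return cachedExpensive;       // redundant reads from the binding become cheap
            }

            private String computeFrom(String input) {
                return input.toUpperCase();   // stand-in for the real calculation
            }
        }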

    Read the article

  • Strange iPhone memory leak in xml parser

    - by Chris
    Update: I edited the code, but the problem persists... Hi everyone, this is my first post here - I found this place a great resource for solving many of my questions. Normally I try my best to fix anything on my own, but this time I really have no idea what goes wrong, so I hope someone can help me out. I am building an iPhone app that parses a couple of XML files using TouchXML. I have a class XMLParser, which takes care of downloading and parsing the results. I am getting memory leaks when I parse an XML file more than once with the same instance of XMLParser. Here is one of the parsing snippets (just the relevant part):

        for(int counter = 0; counter < [item childCount]; counter++) {
            CXMLNode *child = [item childAtIndex:counter];
            if([[child name] isEqualToString:@"PRODUCT"]) {
                NSMutableDictionary *product = [[NSMutableDictionary alloc] init];
                for(int j = 0; j < [child childCount]; j++) {
                    CXMLNode *grandchild = [child childAtIndex:j];
                    if([[grandchild stringValue] length] > 1) {
                        NSString *trimmedString = [[grandchild stringValue]
                            stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
                        [product setObject:trimmedString forKey:[grandchild name]];
                    }
                }
                // Add product to current category array
                switch (categoryId) {
                    case 0: [self.mobil addObject: product]; break;
                    case 1: [self.allgemein addObject: product]; break;
                    case 2: [self.besitzeIch addObject: product]; break;
                    case 3: [self.willIch addObject: product]; break;
                    default: break;
                }
                [product release];
            }
        }

    The first time I parse the XML, no leak shows up in Instruments; the next time I do so, I get a lot of leaks (NSCFString / NSCFDictionary). Instruments points me to this part inside CXMLNode.m when I dig into a leaked object:

        theStringValue = [NSString stringWithUTF8String:(const char *)theXMLString];
        if ( _node->type != CXMLTextKind )
            xmlFree(theXMLString);
        }
        return(theStringValue);

    I really spent a long time and tried multiple approaches to fix this, but to no avail so far. Maybe I am missing something essential? Any help is highly appreciated, thank you!

    Read the article

  • Java: Reading a pdf file from URL into Byte array/ByteBuffer in an applet.

    - by Pol
    I'm trying to figure out why this particular snippet of code isn't working for me. I've got an applet which is supposed to read a .pdf and display it with a pdf-renderer library, but for some reason when I read in the .pdf files which sit on my server, they end up corrupt. I've tested this by writing the files back out again. I've tried viewing the applet in both IE and Firefox and the corrupt files occur in both. Funny thing is, when I try viewing the applet in Safari (for Windows), the file is actually fine! I understand the JVM might be different, but I am still lost. I've compiled in Java 1.5. The JVMs are 1.6. The snippet which reads the file is below.

        public static ByteBuffer getAsByteArray(URL url) throws IOException {
            ByteArrayOutputStream tmpOut = new ByteArrayOutputStream();
            URLConnection connection = url.openConnection();
            int contentLength = connection.getContentLength();
            InputStream in = url.openStream();
            byte[] buf = new byte[512];
            int len;
            while (true) {
                len = in.read(buf);
                if (len == -1) {
                    break;
                }
                tmpOut.write(buf, 0, len);
            }
            tmpOut.close();
            ByteBuffer bb = ByteBuffer.wrap(tmpOut.toByteArray(), 0, tmpOut.size());
            //Lines below used to test if file is corrupt
            //FileOutputStream fos = new FileOutputStream("C:\\abc.pdf");
            //fos.write(tmpOut.toByteArray());
            return bb;
        }

    I must be missing something, and I've been banging my head trying to figure it out. Any help is greatly appreciated. Thanks.

    Edit: To further clarify my situation, the difference between the files before I read them with the snippet and after is that the ones I output after reading are significantly smaller than they originally are. When opening them, they are not recognized as .pdf files. There are no exceptions being thrown that I ignore, and I have tried flushing to no avail. This snippet works in Safari, meaning the files are read in their entirety, with no difference in size, and can be opened with any .pdf reader. In IE and Firefox, the files always end up corrupted, consistently at the same smaller size. I monitored the len variable (when reading a 59kb file), hoping to see how many bytes get read in at each loop. In IE and Firefox, at 18kb, in.read(buf) returns -1 as if the file has ended. Safari does not do this. I'll keep at it, and I appreciate all the suggestions so far.
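
    One detail worth noting in the snippet: it opens two separate connections to the server -- url.openConnection() for the content length and then url.openStream() for the data. Below is a hedged sketch of a variant that reads from the single connection it queried, with caching disabled. It uses only standard java.net/java.io calls; whether this actually cures the browser-plugin behaviour described above is an assumption to verify.

        public static ByteBuffer getAsByteArray(URL url) throws IOException {
            URLConnection connection = url.openConnection();
            connection.setUseCaches(false);                // avoid any plugin-side caching
            int contentLength = connection.getContentLength();
            ByteArrayOutputStream tmpOut = contentLength > 0
                    ? new ByteArrayOutputStream(contentLength)
                    : new ByteArrayOutputStream();
            InputStream in = connection.getInputStream();  // same connection as getContentLength()
            try {
                byte[] buf = new byte[4096];
                int len;
                while ((len = in.read(buf)) != -1) {
                    tmpOut.write(buf, 0, len);
                }
            } finally {
                in.close();
            }
            return ByteBuffer.wrap(tmpOut.toByteArray());
        }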

    Read the article

  • Creating a simple templated control. Having issues...

    - by Jimock
    Hi, I'm trying to create a really simple templated control. I've never done it before, but I know a lot of the controls I have created in the past would have greatly benefited if I had included templating ability - so I'm learning now. The problem I have is that my template is output on the page, but my property value is not. So all I get is the static text which I include in my template. I must be doing something correctly, because the control doesn't cause any errors, so it knows my public property exists (e.g. if I try to use Container.ThisDoesntExist it throws an exception). I'd appreciate some help on this. I may just be being a complete muppet and missing something. Online tutorials on simple templated server controls seem few and far between, so if you know of one I'd like to know about it. A cut down version of my code is below. Many Thanks, James

    Here is the code for the control:

        [ParseChildren(true)]
        public class TemplatedControl : Control, INamingContainer
        {
            private TemplatedControlContainer theContainer;

            [TemplateContainer(typeof(TemplatedControlContainer)), PersistenceMode(PersistenceMode.InnerProperty)]
            public ITemplate ItemTemplate { get; set; }

            protected override void CreateChildControls()
            {
                Controls.Clear();
                theContainer = new TemplatedControlContainer("Hello World");
                this.ItemTemplate.InstantiateIn(theContainer);
                Controls.Add(theContainer);
            }
        }

    Here is the code for the container:

        [ToolboxItem(false)]
        public class TemplatedControlContainer : Control, INamingContainer
        {
            private string myString;

            public string MyString
            {
                get { return myString; }
            }

            internal TemplatedControlContainer(string mystr)
            {
                this.myString = mystr;
            }
        }

    Here is my markup:

        <my:TemplatedControl runat="server">
            <ItemTemplate>
                <div style="background-color: Black; color: White;">
                    Text Here: <%# Container.MyString %>
                </div>
            </ItemTemplate>
        </my:TemplatedControl>

    Read the article

  • Creating a music catalog in C# and extracting first 30 seconds as soon as the first words are sung

    - by Rad
    I already read a question: Separation of singing voice from music. I don't need that complex audio processing. I only need some detection mechanism that would detect that there is some voice/vocal playing while the music is playing (or not playing). I need to extract the first 30 seconds from the point when a vocalist starts singing along with the full band music. See question 2 below.

    I want to create a music catalog using ASP.NET MVC 2, Silverlight clients and the C#.NET 4.0 programming language as the front store. On the backend I would also like to create a desktop WPF/Windows application to build the music catalog from already existing music files, most of which have metadata in them: ID3v1, ID3v2.3, ID3v2.4, iTunes MP4, WMA, Vorbis Comments, APE tags, etc. I would possibly like to create a web service that would allow catalog contributors to upload a zipped album and trigger extraction of the metadata and of the music segments described below. I would be happy if I achieve no. 1 below.

    Let's say I have thousands of songs in mp3 (or other formats) grouped in subfolders using some classification (genre, artists, albums, composers or other groupings). I want to create tables in a DB that would organize songs so they can be searched based on different criteria (year, length, the above classification, or by song title, description, etc.), like what the iTunes store allows its customers. I want to extract metadata from various formats (I will try to get songs in mp3 format, but there may be other popular formats) and allow the music catalog manager to add missing data from either the desktop or web application. He or other contributors can upload zipped music via an HTML, Silverlight or WPF upload. Can anybody suggest open source libraries, articles, or code snippets that can do that in an automatic way using .NET and possibly a SQL Server DB?

    My main questions are these. This is an audio processing challenge. I want to extract 2 segments of music (questions 1 and 2):

    1. How to extract a music segment: 1-2 seconds before a vocal starts singing and up to 30 seconds from that point in time; and
    2. Much more challenging: how to find repeating segments (one would usually recognize the names of the songs by these refrains). How would I go about creating a list of songs that go great together, like what Genius from iTunes does? Are there any characteristics of music that can be used to match songs?

    The goal is for people to quickly scan and recognize songs, i.e. associate melody and words with a title/album so they can make intelligent decisions like buying a song or creating similar-mood lists of songs. Thanks, Rad

    Read the article

  • Suggestions for designing large-scale Java webapp from the ground up

    - by Chris Thompson
    Hi all, I'm about to start developing a large-scale system and I'm struggling with which direction to proceed. I've done plenty of Java web apps before and I have plenty of experience with servlet containers and GWT, and some experience with Spring. The problem is most of my webapps have been thrown together just as a proof of concept, and what I'm struggling with is which set of frameworks to use. I need to have both a browser-based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone), although I don't mind having some small checks in there to properly format the data.

    In addition, although I'm the only developer now, that won't necessarily be the case down the road, and I'd like to design something that scales well both with regards to traffic and number of developers (isn't just a nightmare to maintain).

    So where I am now is planning on using GWT to design the browser-based interface, but I'm struggling with how to reuse that code to present the interface (most likely XML) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating XML for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...).

    There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language; I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial? Thanks in advance and sorry for the relatively broad question... Chris

    Read the article

  • Weirdest occurence ever, UIButton @selector detecting right button, doing wrong 'else_if'?

    - by Scott
    So I dynamically create 3 UIButtons (for now) with this loop:

        NSMutableArray *sites = [[NSMutableArray alloc] init];
        NSString *one = @"Constution Center";
        NSString *two = @"Franklin Court";
        NSString *three = @"Presidents House";
        [sites addObject: one];
        [one release];
        [sites addObject: two];
        [two release];
        [sites addObject: three];
        [three release];
        NSString *element;
        int j = 0;
        for (element in sites) {
            UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
            //setframe (where on screen)
            //separation is 15px past the width (45-30)
            button.frame = CGRectMake(a, b + (j*45), c, d);
            [button setTitle:element forState:UIControlStateNormal];
            button.backgroundColor = [SiteOneController myColor1];
            [button addTarget:self action:@selector(showCCView:)
                forControlEvents:UIControlEventTouchUpInside];
            [button setTag:j];
            [self.view addSubview: button];
            j++;
        }

    The @selector method is here:

        - (void) showCCView:(id) sender {
            UIButton *button = (UIButton *)sender;
            int whichButton = button.tag;
            NSString* myNewString = [NSString stringWithFormat:@"%d", whichButton];

            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            self.view.backgroundColor = [UIColor whiteColor];

            UINavigationBar *cc = [SiteOneController myNavBar1:@"Constitution Center Content"];
            UINavigationBar *fc = [SiteOneController myNavBar1:@"Franklin Court Content"];
            UINavigationBar *ph = [SiteOneController myNavBar1:@"Presidents House Content"];

            if (whichButton = 0) {
                NSLog(myNewString);
                [self.view addSubview:cc];
            }
            else if (whichButton = 1) {
                NSLog(myNewString);
                [self.view addSubview:fc];
            }
            else if (whichButton = 2) {
                NSLog(myNewString);
                [self.view addSubview:ph];
            }
        }

    Now, it is printing the correct button tag to NSLog, as shown in the method, however EVERY SINGLE BUTTON is displaying a navigation bar with "Franklin Court" as the title, EVERY SINGLE ONE, even though when I click button 0, it says "Button 0 clicked" in the console, but still performs the else if (whichButton = 1) code. Am I missing something here?

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below, which is indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements, it is much slower. I was expecting it to be constant time since the key is hashed. For example, to erase elements from the container:

    for 10,000 elements, it takes around 2.53927016 seconds
    for 15,000 elements, it takes around 7.137100068 seconds
    for 20,000 elements, it takes around 21.391720757 seconds

    Is it something I am missing, or is this expected behavior?

        class Session {
        public:
            Session() {
                //increment unique id
                static unsigned long counter = 0;
                boost::mutex::scoped_lock guard(mx);
                counter++;
                m_nId = counter;
            }
            unsigned long GetId() { return m_nId; }
            long GetTransactionHandle() { return m_nTransactionHandle; }
            ....
        private:
            unsigned long m_nId;
            long m_nTransactionHandle;
            boost::mutex mx;
            ....
        };

        typedef multi_index_container<
            Session*,
            indexed_by<
                hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >,
                hashed_non_unique< mem_fun<Session,unsigned long,&Session::GetTransactionHandle> >
            > //end indexed_by
        > SessionContainer;

        typedef SessionContainer::nth_index<0>::type SessionById;

        int main() {
            ...
            SessionContainer container;
            SessionById *pSessionIdView = &get<0>(container);
            unsigned counter = atoi(argv[1]);
            vector<Session*> vSes(counter);

            //insert
            for(unsigned i = 0; i < counter; i++) {
                Session *pSes = new Session();
                container.insert(pSes);
                vSes.push_back(pSes);
            }

            timespec ts;
            lock_settime(CLOCK_PROCESS_CPUTIME_ID, &ts);

            //erase
            for(unsigned i = 0; i < counter; i++) {
                pSessionIdView->erase(vSes[i]->getId());
                delete vSes[i];
            }

            lock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
            std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n";

            return (EXIST_SUCCESS);
        }

    Read the article

  • kXML (XmlPullParser) not hitting END_TAG

    - by Tejaswi Yerukalapudi
    Hello all, I'm trying to figure out a way to rewrite some of my XML parsing code. I'm currently working with kXML2 and here's my code -

        byte[] xmlByteArray;
        try {
            xmlByteArray = inputByteArray;
            ByteArrayInputStream xmlStream = new ByteArrayInputStream(xmlByteArray);
            InputStreamReader xmlReader = new InputStreamReader(xmlStream);
            KXmlParser parser = new KXmlParser();
            parser.setInput(xmlReader);
            parser.nextTag();
            while(true) {
                int eventType = parser.next();
                String tag = parser.getName();
                if(eventType == XmlPullParser.START_TAG) {
                    System.out.println("****************** STARTING TAG "+tag+"******************");
                    if(tag == null || tag.equalsIgnoreCase("")) {
                        continue;
                    }
                    else if(tag.equalsIgnoreCase("Category")) {
                        // Gets the name of the category.
                        String attribValue = parser.getAttributeValue(0);
                    }
                }
                if(eventType == XmlPullParser.END_TAG) {
                    System.out.println("****************** ENDING TAG "+tag+"******************");
                }
                else if(eventType == XmlPullParser.END_DOCUMENT) {
                    break;
                }
            }
        } catch(Exception ex) {
        }

    My input XML is as follows -

        <root xmlns:sql="urn:schemas-microsoft-com:xml-sql" xmlns="">
            <Category name="xyz">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>
            <Category name="abc">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>
            <Category name="def">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>
        </root>

    My problem, briefly, is that I'm expecting it to hit XmlPullParser.END_TAG when it encounters a closing XML tag. It does hit XmlPullParser.START_TAG, but it just seems to skip / ignore all the END_TAGs. Is this how it's supposed to work? Or am I missing something? Any help is much appreciated, Teja.
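
    For comparison, a minimal event loop over the XmlPullParser interface (which kXML implements) usually looks like the sketch below. It is offered as a reference pattern rather than a verified fix for the code above, and it uses only methods from the standard XmlPullParser API.

        import org.xmlpull.v1.XmlPullParser;

        // Walks every event, printing start and end tags symmetrically.
        static void walk(XmlPullParser parser) throws Exception {
            int eventType = parser.getEventType();
            while (eventType != XmlPullParser.END_DOCUMENT) {
                if (eventType == XmlPullParser.START_TAG) {
                    System.out.println("start: " + parser.getName());
                    if ("Category".equalsIgnoreCase(parser.getName())) {
                        // name attribute of the category, e.g. "xyz"
                        String name = parser.getAttributeValue(null, "name");
                    }
                } else if (eventType == XmlPullParser.END_TAG) {
                    System.out.println("end:   " + parser.getName());
                }
                eventType = parser.next();   // advance exactly once per iteration
            }
        }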

    Read the article

  • Unit finalization order for application, compiled with run-time packages?

    - by Alexander
    I need to execute my code after the finalization of the SysUtils unit. I've placed my code in a separate unit and included it first in the uses clause of the dpr file, like this:

        program Project1;

        uses
          MyUnit, // <- my separate unit
          SysUtils, Classes, SomeOtherUnits;

        procedure Test;
        begin
          //
        end;

        begin
          SetProc(Test);
        end.

    MyUnit looks like this:

        unit MyUnit;

        interface

        procedure SetProc(AProc: TProcedure);

        implementation

        var
          Test: TProcedure;

        procedure SetProc(AProc: TProcedure);
        begin
          Test := AProc;
        end;

        initialization

        finalization
          Test;
        end.

    Note that MyUnit doesn't have any uses. This is a usual Windows exe, no console, without forms, and compiled with default run-time packages. MyUnit is not part of any package (but I've tried to use it from a package too). I expect that the finalization section of MyUnit will be executed after the finalization section of SysUtils. This is what Delphi's help tells me. However, this is not always the case.

    I have 2 test apps, which differ a bit in the code in the Test routine/dpr-file and the units listed in uses. MyUnit, however, is listed first in all cases. One application runs as expected:

        Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization - MyUnit's finalization - ...other units...

    But the second does not. MyUnit's finalization is invoked before SysUtils's finalization. The actual call chain looks like this:

        Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization (skipped) - MyUnit's finalization - ...other units... - SysUtils's finalization (executed)

    Both projects have very similar settings. I tried a lot to remove/minimize their differences, but I still do not see a reason for this behaviour. I've tried to debug this and found out that it seems every unit has some kind of reference counting, and it seems that InitTable contains multiple references to the same unit. When SysUtils's finalization section is called the first time, it changes the reference counter and does nothing. Then MyUnit's finalization is executed. And then SysUtils is called again, but this time the ref-count reaches zero and the finalization section is executed:

        Finalization: // SysUtils' finalization
        5003B3F0 55               push ebp             // here and below is some form of stub
        5003B3F1 8BEC             mov ebp,esp
        5003B3F3 33C0             xor eax,eax
        5003B3F5 55               push ebp
        5003B3F6 688EB50350       push $5003b58e
        5003B3FB 64FF30           push dword ptr fs:[eax]
        5003B3FE 648920           mov fs:[eax],esp
        5003B401 FF05DCAD1150     inc dword ptr [$5011addc]  // here: some sort of reference counter
        5003B407 0F8573010000     jnz $5003b580              // <- this jump skips execution of finalization for first call
        5003B40D B8CC4D0350       mov eax,$50034dcc          // here and below is actual SysUtils' finalization section
        ...

    Can anyone shed light on this issue? Am I missing something?

    Read the article

  • Python multiprocessing global variable updates not returned to parent

    - by user1459256
    I am trying to return values from subprocesses, but these values are unfortunately unpicklable. So I used global variables in the threads module with success, but have not been able to retrieve updates done in subprocesses when using the multiprocessing module. I hope I'm missing something. The results printed at the end are always the same as the initial values given in the vars dataDV03 and dataDV04. The subprocesses are updating these global variables, but these global variables remain unchanged in the parent.

        import multiprocessing

        # NOT ABLE to get python to return values in passed variables.

        ants = ['DV03', 'DV04']
        dataDV03 = ['', '']
        dataDV04 = {'driver': '', 'status': ''}

        def getDV03CclDrivers(lib):  # call global variable
            global dataDV03
            dataDV03[1] = 1
            dataDV03[0] = 0
            # eval( 'CCL.' + lib + '.' + lib + '( "DV03" )' )  these are unpicklable instantiations

        def getDV04CclDrivers(lib, dataDV04):  # pass global variable
            dataDV04['driver'] = 0
            # eval( 'CCL.' + lib + '.' + lib + '( "DV04" )' )

        if __name__ == "__main__":
            jobs = []
            if 'DV03' in ants:
                j = multiprocessing.Process(target=getDV03CclDrivers, args=('LORR',))
                jobs.append(j)
            if 'DV04' in ants:
                j = multiprocessing.Process(target=getDV04CclDrivers, args=('LORR', dataDV04))
                jobs.append(j)
            for j in jobs:
                j.start()
            for j in jobs:
                j.join()

            print 'Results:\n'
            print 'DV03', dataDV03
            print 'DV04', dataDV04

    I cannot post to my question, so I will try to edit the original. Here is the object that is not picklable:

        In [1]: from CCL import LORR
        In [2]: lorr=LORR.LORR('DV20', None)
        In [3]: lorr
        Out[3]: <CCL.LORR.LORR instance at 0x94b188c>

    This is the error returned when I use a multiprocessing.Pool to return the instance back to the parent:

        Thread getCcl (('DV20', 'LORR'),)
        Process PoolWorker-1:
        Traceback (most recent call last):
          File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
            self.run()
          File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/process.py", line 88, in run
            self._target(*self._args, **self._kwargs)
          File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/pool.py", line 71, in worker
            put((job, i, result))
          File "/alma/ACS-10.1/casa/lib/python2.6/multiprocessing/queues.py", line 366, in put
            return send(obj)
        UnpickleableError: Cannot pickle <type 'thread.lock'> objects

        In [5]: dir(lorr)
        Out[5]:
        ['GET_AMBIENT_TEMPERATURE', 'GET_CAN_ERROR', 'GET_CAN_ERROR_COUNT', 'GET_CHANNEL_NUMBER',
        'GET_COUNT_PER_C_OP', 'GET_COUNT_REMAINING_OP', 'GET_DCM_LOCKED', 'GET_EFC_125_MHZ',
        'GET_EFC_COMB_LINE_PLL', 'GET_ERROR_CODE_LAST_CAN_ERROR', 'GET_INTERNAL_SLAVE_ERROR_CODE',
        'GET_MAGNITUDE_CELSIUS_OP', 'GET_MAJOR_REV_LEVEL', 'GET_MINOR_REV_LEVEL', 'GET_MODULE_CODES_CDAY',
        'GET_MODULE_CODES_CMONTH', 'GET_MODULE_CODES_DIG1', 'GET_MODULE_CODES_DIG2', 'GET_MODULE_CODES_DIG4',
        'GET_MODULE_CODES_DIG6', 'GET_MODULE_CODES_SERIAL', 'GET_MODULE_CODES_VERSION_MAJOR',
        'GET_MODULE_CODES_VERSION_MINOR', 'GET_MODULE_CODES_YEAR', 'GET_NODE_ADDRESS', 'GET_OPTICAL_POWER_OFF',
        'GET_OUTPUT_125MHZ_LOCKED', 'GET_OUTPUT_2GHZ_LOCKED', 'GET_PATCH_LEVEL', 'GET_POWER_SUPPLY_12V_NOT_OK',
        'GET_POWER_SUPPLY_15V_NOT_OK', 'GET_PROTOCOL_MAJOR_REV_LEVEL', 'GET_PROTOCOL_MINOR_REV_LEVEL',
        'GET_PROTOCOL_PATCH_LEVEL', 'GET_PROTOCOL_REV_LEVEL', 'GET_PWR_125_MHZ', 'GET_PWR_25_MHZ',
        'GET_PWR_2_GHZ', 'GET_READ_MODULE_CODES', 'GET_RX_OPT_PWR', 'GET_SERIAL_NUMBER', 'GET_SIGN_OP',
        'GET_STATUS', 'GET_SW_REV_LEVEL', 'GET_TE_LENGTH', 'GET_TE_LONG_FLAG_SET', 'GET_TE_OFFSET_COUNTER',
        'GET_TE_SHORT_FLAG_SET', 'GET_TRANS_NUM', 'GET_VDC_12', 'GET_VDC_15', 'GET_VDC_7',
        'GET_VDC_MINUS_7', 'SET_CLEAR_FLAGS', 'SET_FPGA_LOGIC_RESET', 'SET_RESET_AMBSI', 'SET_RESET_DEVICE',
        'SET_RESYNC_TE', 'STATUS', '_HardwareDevice__componentName', '_HardwareDevice__hw',
        '_HardwareDevice__stickyFlag', '_LORRBase__logger', '__del__', '__doc__', '__init__', '__module__',
        '_devices', 'clearDeviceCommunicationErrorAlarm', 'getControlList', 'getDeviceCommunicationErrorCounter',
        'getErrorMessage', 'getHwState', 'getInternalSlaveCanErrorMsg', 'getLastCanErrorMsg', 'getMonitorList',
        'hwConfigure', 'hwDiagnostic', 'hwInitialize', 'hwOperational', 'hwSimulation', 'hwStart', 'hwStop',
        'inErrorState', 'isMonitoring', 'isSimulated']

        In [6]:

    Read the article

  • Android - Start service on boot

    - by Gady
    From everything I've seen on Stack Exchange and elsewhere, I have everything set up correctly to start an IntentService when the Android OS boots. Unfortunately it is not starting on boot, and I'm not getting any errors. Maybe the experts can help...

    Manifest:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.phx.batterylogger"
            android:versionCode="1"
            android:versionName="1.0"
            android:installLocation="internalOnly">

            <uses-sdk android:minSdkVersion="8" />
            <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
            <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
            <uses-permission android:name="android.permission.BATTERY_STATS" />

            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <service android:name=".BatteryLogger"/>
                <receiver android:name=".StartupIntentReceiver">
                    <intent-filter>
                        <action android:name="android.intent.action.BOOT_COMPLETED" />
                    </intent-filter>
                </receiver>
            </application>
        </manifest>

    BroadcastReceiver for startup:

        package com.phx.batterylogger;

        import android.content.BroadcastReceiver;
        import android.content.Context;
        import android.content.Intent;

        public class StartupIntentReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Intent serviceIntent = new Intent(context, BatteryLogger.class);
                context.startService(serviceIntent);
            }
        }

    UPDATE: I tried just about all of the suggestions below, and I added logging such as Log.v("BatteryLogger", "Got to onReceive, about to start service"); to the onReceive handler of the StartupIntentReceiver, and nothing is ever logged. So it isn't even making it to the BroadcastReceiver. I think I'm deploying the APK and testing correctly, just running Debug in Eclipse, and the console says it successfully installs it to my Xoom tablet at \BatteryLogger\bin\BatteryLogger.apk. Then to test, I reboot the tablet, look at the logs in DDMS, and check the Running Services in the OS settings. Does this all sound correct, or am I missing something? Again, any help is much appreciated.

    Read the article

  • Scala actors: receive vs react

    - by jqno
    Let me first say that I have quite a lot of Java experience, but have only recently become interested in functional languages. Recently I've started looking at Scala, which seems like a very nice language. However, I've been reading about Scala's Actor framework in Programming in Scala, and there's one thing I don't understand. In chapter 30.4 it says that using react instead of receive makes it possible to re-use threads, which is good for performance, since threads are expensive in the JVM.

    Does this mean that, as long as I remember to call react instead of receive, I can start as many Actors as I like? Before discovering Scala, I've been playing with Erlang, and the author of Programming Erlang boasts about spawning over 200,000 processes without breaking a sweat. I'd hate to do that with Java threads. What kind of limits am I looking at in Scala as compared to Erlang (and Java)?

    Also, how does this thread re-use work in Scala? Let's assume, for simplicity, that I have only one thread. Will all the actors that I start run sequentially in this thread, or will some sort of task-switching take place? For example, if I start two actors that ping-pong messages to each other, will I risk deadlock if they're started in the same thread?

    According to Programming in Scala, writing actors to use react is more difficult than with receive. This sounds plausible, since react doesn't return. However, the book goes on to show how you can put a react inside a loop using Actor.loop. As a result, you get

        loop {
            react {
                ...
            }
        }

    which, to me, seems pretty similar to

        while (true) {
            receive {
                ...
            }
        }

    which is used earlier in the book. Still, the book says that "in practice, programs will need at least a few receive's". So what am I missing here? What can receive do that react cannot, besides return? And why do I care?

    Finally, coming to the core of what I don't understand: the book keeps mentioning how using react makes it possible to discard the call stack to re-use the thread. How does that work? Why is it necessary to discard the call stack? And why can the call stack be discarded when a function terminates by throwing an exception (react), but not when it terminates by returning (receive)? I have the impression that Programming in Scala has been glossing over some of the key issues here, which is a shame, because otherwise it's a truly excellent book.
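
    The thread-reuse point can be illustrated outside Scala. The rough Java analogy below is not how the Scala actor library is implemented; it is only a sketch of the underlying idea: a receive-style actor parks an entire thread on a blocking call between messages, while a react-style actor holds no thread at all and is re-scheduled on a shared pool whenever a message arrives.

        import java.util.concurrent.*;

        // Event-style: between messages this "actor" occupies no thread at all.
        class ReactStyleActor {
            private final Queue<String> mailbox = new ConcurrentLinkedQueue<>();
            private final ExecutorService pool;

            ReactStyleActor(ExecutorService pool) { this.pool = pool; }

            void send(String msg) {
                mailbox.add(msg);
                pool.submit(this::handleOne);   // borrow a pooled thread just for this message
            }

            private void handleOne() {
                String msg = mailbox.poll();
                if (msg != null) {
                    System.out.println("handled " + msg);
                }
            }
        }

        // Thread-style: the blocking take() pins one thread per actor for its whole lifetime.
        class ReceiveStyleActor extends Thread {
            private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

            void send(String msg) { mailbox.add(msg); }

            @Override
            public void run() {
                while (true) {
                    try {
                        System.out.println("handled " + mailbox.take());
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }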

    Read the article

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, vs. each repository having its own unique interface and set of methods.

    For example, a generic IRepository interface might look like this (taken from this answer):

        public interface IRepository : IDisposable
        {
            T[] GetAll<T>();
            T[] GetAll<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
            void Delete<T>(T entity);
            void Add<T>(T entity);
            int SaveChanges();
            DbTransaction BeginTransaction();
        }

    Each repository would implement this interface (e.g. CustomerRepository : IRepository, ProductRepository : IRepository, etc). The alternative that we've followed in prior projects would be:

        public interface IInvoiceRepository : IDisposable
        {
            EntityCollection<InvoiceEntity> GetAllInvoices(int accountId);
            EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate);
            InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated);
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); //unique
            InvoiceEntity CreateInvoice();
            InvoiceLineEntity CreateInvoiceLine();
            void SaveChanges(InvoiceEntity); //handles inserts or updates
            void DeleteInvoice(InvoiceEntity);
            void DeleteInvoiceLine(InvoiceLineEntity);
        }

    In the second case, the expressions (LINQ or otherwise) would be entirely contained in the repository implementation; whoever is implementing the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Wouldn't this mean easy-to-mess-up LINQ code is being duplicated in many cases?

    For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc). That seems much cleaner than writing the following in multiple places:

        rep.GetSingle(x => x.AccountId = someId && x.InvoiceDate = someDate.Date);

    The only disadvantage I see to using the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the service classes. What am I missing?
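
    A middle ground that sometimes gets suggested is to keep a generic repository for the plumbing and let each specific repository wrap it, so the query expressions still live in exactly one place. The sketch below is in Java (with java.util.function.Predicate standing in for C# expression trees) purely to illustrate the shape; the Invoice type and its field names are hypothetical.

        import java.time.LocalDate;
        import java.util.List;
        import java.util.function.Predicate;

        // Generic plumbing shared by all repositories.
        interface Repository<T, ID> {
            T findById(ID id);
            List<T> findAll(Predicate<T> filter);
            void add(T entity);
            void delete(T entity);
        }

        // Specific repository: callers never see the filter expressions.
        class InvoiceRepository {
            private final Repository<Invoice, Long> repo;

            InvoiceRepository(Repository<Invoice, Long> repo) { this.repo = repo; }

            Invoice getSingleInvoice(LocalDate invoiceDate, long accountId) {
                List<Invoice> matches = repo.findAll(inv ->
                        inv.getAccountId() == accountId && invoiceDate.equals(inv.getInvoiceDate()));
                return matches.isEmpty() ? null : matches.get(0);
            }
        }

        // Hypothetical entity used by the sketch.
        class Invoice {
            private long accountId;
            private LocalDate invoiceDate;

            long getAccountId() { return accountId; }
            LocalDate getInvoiceDate() { return invoiceDate; }
        }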

    Read the article

  • Delphi LoadLibrary Failing to find DLL other directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        If FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, the cert is missing, so the connection fails.

    Obviously, we should have fully qualified the path to the DLL. Without going into a lot of details, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer.

    I've found that placing c:\fooapp into the path works. So does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can:

    1. Modify the Windows path, probably in the installer. That's ugly.
    2. Add a second copy of the DLL, directly into the unwantedstepchild folder. Also ugly.
    3. Delay the project while we code and test a proper fix. Unacceptable.
    4. Other?

    Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!

    Read the article

  • Where is the method call in the EXE file?

    - by Victor Hurdugaci
    Introduction: After watching this video from LIDNUG about .NET code protection, http://secureteam.net/lidnug_recording/Untitled.swf (especially from 46:30 to 57:30), I would like to locate the call to MessageBox.Show in an EXE I created. The only logic in my "TrialApp.exe" is:

        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
                MessageBox.Show("This is trial app");
            }
        }

    Compiled in the Release configuration: http://rapidshare.com/files/392503054/TrialApp.exe.html

    What I do to locate the call: Run the application in WinDbg and break after the message box appears. Get the CLR stack with !clrstack:

        0040e840 5e21350b [InlinedCallFrame: 0040e840] System.Windows.Forms.SafeNativeMethods.MessageBox(System.Runtime.InteropServices.HandleRef, System.String, System.String, Int32)
        0040e894 5e21350b System.Windows.Forms.MessageBox.ShowCore(System.Windows.Forms.IWin32Window, System.String, System.String, System.Windows.Forms.MessageBoxButtons, System.Windows.Forms.MessageBoxIcon, System.Windows.Forms.MessageBoxDefaultButton, System.Windows.Forms.MessageBoxOptions, Boolean)
        0040e898 002701f0 [InlinedCallFrame: 0040e898]
        0040e934 002701f0 TrialApp.Form1.Form1_Load(System.Object, System.EventArgs)

    Get the MethodDesc structure (using the address of Form1_Load):

        !ip2md 002701f0
        MethodDesc:   001762f8
        Method Name:  TrialApp.Form1.Form1_Load(System.Object, System.EventArgs)
        Class:        00171678
        MethodTable:  00176354
        mdToken:      06000005
        Module:       00172e9c
        IsJitted:     yes
        CodeAddr:     002701d0
        Transparency: Critical
        Source file:  D:\temp\TrialApp\TrialApp\Form1.cs @ 22

    Dump the IL of this method (by MethodDesc):

        !dumpil 001762f8
        IL_0000: ldstr "This is trial app"
        IL_0005: call System.Windows.Forms.MessageBox::Show
        IL_000a: pop
        IL_000b: ret

    So, as the video mentioned, the call to Show is 5 bytes from the beginning of the method implementation. Now I open CFF Explorer (just like in the video) and get the RVA of the Form1_Load method: 00002083. After this, I go to Address Converter (again in CFF Explorer) and navigate to offset 00002083. There we have:

        32 72 01 00 00 70 28 16 00 00 0A 26 2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F
        17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02

    In the video it is mentioned that the first 12 bytes are for the method header, so I skip them:

        2A 7A 03 2C 13 02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A
        00 03 30 04 00 67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02

    Five bytes from the beginning of the implementation should be the opcode for the method call (28). Unfortunately, it is not there:

        02 7B 02 00 00 04 2C 0B 02 7B 02 00 00 04 6F 17 00 00 0A 02 03 28 18 00 00 0A 2A 00 03 30 04 00
        67 00 00 00 00 00 00 00 02 28 19 00 00 0A 02

    Questions: What am I doing wrong? Why is there no method call at that position in the file? Or maybe the video is missing some information... And why does the guy in that video replace the call with 9 zeros?

    Read the article

  • executing a script from maven inside a multi module project

    - by Roman
    Hi everyone. I have this multi-module project. At the beginning of each build I would like to run some bat file. So I did the following:

        <profile>
            <id>deploy-db</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>exec-maven-plugin</artifactId>
                        <version>1.1.1</version>
                    </plugin>
                </plugins>
                <pluginManagement>
                    <plugins>
                        <plugin>
                            <groupId>org.codehaus.mojo</groupId>
                            <artifactId>exec-maven-plugin</artifactId>
                            <version>1.1.1</version>
                            <executions>
                                <execution>
                                    <phase>validate</phase>
                                    <goals>
                                        <goal>exec</goal>
                                    </goals>
                                    <inherited>false</inherited>
                                </execution>
                            </executions>
                            <configuration>
                                <executable>../database/schemas/import_databases.bat</executable>
                            </configuration>
                        </plugin>
                    </plugins>
                </pluginManagement>
            </build>
        </profile>

    When I run mvn verify -Pdeploy-db from the root, I get this script executed over and over again in each of my modules. I want it to be executed only once, in the root module. What is it that I am missing? Thanks

    Read the article

  • Is Java assert broken?

    - by BlairHippo
    While poking around the questions, I recently discovered the assert keyword in Java. At first, I was excited. Something useful I didn't already know! A more efficient way for me to check the validity of input parameters! Yay learning! But then I took a closer look, and my enthusiasm was not so much "tempered" as "snuffed out completely" by one simple fact: you can turn assertions off.*

    This sounds like a nightmare. If I'm asserting that I don't want the code to keep going if the input listOfStuff is null, why on earth would I want that assertion ignored? It sounds like if I'm debugging a piece of production code and suspect that listOfStuff may have been erroneously passed a null, but don't see any logfile evidence of that assertion being triggered, I can't trust that listOfStuff actually got sent a valid value; I also have to account for the possibility that assertions may have been turned off entirely.

    And this assumes that I'm the one debugging the code. Somebody unfamiliar with assertions might see that and assume (quite reasonably) that if the assertion message doesn't appear in the log, listOfStuff couldn't be the problem. If your first encounter with assert was in the wild, would it even occur to you that it could be turned off entirely? It's not like there's a command-line option that lets you disable try/catch blocks, after all.

    All of which brings me to my question (and this is a question, not an excuse for a rant! I promise!): What am I missing? Is there some nuance that renders Java's implementation of assert far more useful than I'm giving it credit for? Is the ability to enable/disable it from the command line actually incredibly valuable in some contexts? Am I misconceptualizing it somehow when I envision using it in production code in lieu of statements like if (listOfStuff == null) barf();? I just feel like there's something important here that I'm not getting.

    *Okay, technically speaking, they're actually off by default; you have to go out of your way to turn them on. But still, you can knock them out entirely.
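
    For reference, here is a minimal sketch of the two styles being weighed in the question: an assertion, which only runs when the JVM is started with assertions enabled (java -ea) and is skipped under the default settings, versus an explicit check that always runs. The listOfStuff name is borrowed from the question; everything else is illustrative.

        import java.util.List;

        class InputChecks {
            // Checked only when assertions are enabled; silently skipped otherwise.
            static void processWithAssert(List<String> listOfStuff) {
                assert listOfStuff != null : "listOfStuff must not be null";
                // ... use listOfStuff ...
            }

            // Always enforced, regardless of JVM flags -- the usual choice for validating public inputs.
            static void processWithCheck(List<String> listOfStuff) {
                if (listOfStuff == null) {
                    throw new IllegalArgumentException("listOfStuff must not be null");
                }
                // ... use listOfStuff ...
            }
        }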

    Read the article

  • derby + hibernate ConstraintViolationException using manytomany relationships

    - by user364470
    Hi, I'm new to Hibernate + Derby... I've seen this issue mentioned throughout Google, but have not seen a proper resolution. The following code works fine with MySQL, but when I try it on Derby I get exceptions. (Each Tag has two sets of files and vice versa - many-to-many.)

    Tags.java:

        @Entity
        @Table(name="TAGS")
        public class Tags implements Serializable {

            @Id
            @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(targetEntity=Files.class)
            @ForeignKey(name="USER_TAGS_FILES", inverseName="USER_FILES_TAGS")
            @JoinTable(name="USERTAGS_FILES",
                joinColumns=@JoinColumn(name="TAGS_ID"),
                inverseJoinColumns=@JoinColumn(name="FILES_ID"))
            public Set<data.Files> getUserFiles() { return userFiles; }

            @ManyToMany(mappedBy="autoTags", targetEntity=data.Files.class)
            public Set<data.Files> getAutoFiles() { return autoFiles; }
        }

    Files.java:

        @Entity
        @Table(name="FILES")
        public class Files implements Serializable {

            @Id
            @GeneratedValue(strategy=GenerationType.AUTO)
            public long getId() { return id; }

            @ManyToMany(mappedBy="userFiles", targetEntity=data.Tags.class)
            public Set getUserTags() { return userTags; }

            @ManyToMany(targetEntity=Tags.class)
            @ForeignKey(name="AUTO_FILES_TAGS", inverseName="AUTO_TAGS_FILES")
            @JoinTable(name="AUTOTAGS_FILES",
                joinColumns=@JoinColumn(name="FILES_ID"),
                inverseJoinColumns=@JoinColumn(name="TAGS_ID"))
            public Set getAutoTags() { return autoTags; }
        }

    I add some data to the DB, but when running over Derby these exceptions turn up (they don't using MySQL):

        SEVERE: DELETE on table 'FILES' caused a violation of foreign key constraint 'USER_FILES_TAGS' for key (3).
        The statement has been rolled back.
        Jun 10, 2010 9:49:52 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
        SEVERE: Could not synchronize database state with session
        org.hibernate.exception.ConstraintViolationException: could not delete: [data.Files#3]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:96)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2712)
            at org.hibernate.persister.entity.AbstractEntityPersister.delete(AbstractEntityPersister.java:2895)
            at org.hibernate.action.EntityDeleteAction.execute(EntityDeleteAction.java:97)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:268)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:260)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:184)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:613)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344)
            at $Proxy13.flush(Unknown Source)
            at data.HibernateORM.removeFile(HibernateORM.java:285)
            at data.DataImp.removeFile(DataImp.java:195)
            at booting.DemoBootForTestUntilTestClassesExist.main(DemoBootForTestUntilTestClassesExist.java:62)

    I have never used Derby before, so maybe there is something crucial that I'm missing. 1) What am I doing wrong? 2) Is there any way of cascading properly when I have two many-to-many relationships between two classes? Thanks!
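
    Not a verified fix for this exact mapping, but with bidirectional many-to-many associations a delete generally has to detach the join-table rows on the owning side before the entity row itself is removed; otherwise the FILES row goes away while USERTAGS_FILES still references it, which is exactly what Derby rejects. A sketch of that pattern using the entities and accessors above (session is assumed to be a plain Hibernate Session, file a managed Files instance):

        // Remove the file from the owning side of each association before deleting it.
        for (Object o : file.getUserTags()) {          // raw Set in the mapping above, hence the cast
            ((Tags) o).getUserFiles().remove(file);    // USERTAGS_FILES is owned by Tags.userFiles
        }
        file.getAutoTags().clear();                    // AUTOTAGS_FILES is owned by Files.autoTags
        session.delete(file);
        session.flush();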

    Read the article

  • Custom Rails actions: I have issues every time

    - by normalocity
    Every time I go to add a custom action to a controller, I completely screw it up somehow. I'm trying to add a route "listings/buyer_listings" that will display all of my listings where someone is a buyer (rather than a seller). With the routes.rb file below, when I go to "listings/buyer_listings", I get routed instead to "users". WTF?

    In the past, I've had to define my routes using "map.", but this seems like a very verbose way to do something that should work with the :collection specification. You can see that I've done this with many routes as specified toward the end of the file, such as "edit_my_profile", etc. If I put the ":collection" part last, my browser routes to the "show" action, which is not the correct action, and it also doesn't make sense to me why it would even do this. If I do "rake routes", my routes look correctly mapped. If I go into a Ruby console and have it recognize the url, it maps to the correct action, so what am I missing?

        ActionController::Routing::Routes.draw do |map|
          map.resources :locations
          map.resources :browse_boxes
          map.resources :tags
          map.resources :ratings
          map.resources :listings, :collection => { :buyer_listings => :get }, :has_many => :bids, :has_many => :comments
          map.resources :users
          map.resources :invite_requests
          map.resource :user_session
          map.resource :account, :controller => "users"

          map.root :controller => "listings", :action => "index" # optional, this just sets the root route

          map.login "login", :controller => "user_sessions", :action => "new"
          map.logout "logout", :controller => "user_sessions", :action => "destroy"
          map.search "search", :controller => "listings", :action => "search"
          map.edit_my_profile "edit_my_profile", :controller => "users", :action => "edit_my_profile"
          map.all_listings "all_listings", :controller => "listings", :action => "all_listings"
          map.my_listings "my_listings", :controller => "listings", :action => "my_listings"
          map.posting_guidelines "posting_guidelines", :controller => "listings", :action => "posting_guidelines"
          map.filter_on "filter_on", :controller => "listings", :action => "filter_on"
          map.top_25_tags "top_25_tags", :controller => "tagging_search", :action => "top_25_tags"

          map.connect ':controller/:action/:id'
          map.connect ':controller/:action/:id.:format'
        end

    Read the article

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <resultset>
            <result>
                [child elements and data]
            </result>
            ...
            ... [1000s of result elements more]
        </resultset>

    Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects:

        public nu.xom.Element getNextResult();

    So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that):

        open stream/writer
        write declaration
        write start tag for <resultset>
        while more results:
            write next <result> element
        write end tag for <resultset>
        close stream/writer

    I've been looking at Serializer, but the necessary methods, writeStartTag(Element), writeEndTag(Element), write(DocType), are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.)

    Am I missing something or is XOM just not made for this? With dom4j I could do this easily using XMLWriter - it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType), etc. Compare to XOM's Serializer, where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement, where XOM is a strong contender.)
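
    If the subclassing route turns out to be the way to go, a rough sketch of what it might look like is below. It relies on the protected methods named above plus Serializer's public constructor and flush(); the write(Element) and writeXMLDeclaration() calls are assumptions about the protected API rather than documented guarantees, so treat this as an outline to verify against the XOM version in use.

        import java.io.IOException;
        import java.io.OutputStream;
        import nu.xom.Element;
        import nu.xom.Serializer;

        // Exposes XOM's protected streaming hooks so <result> elements can be written one at a time.
        public class StreamingSerializer extends Serializer {

            public StreamingSerializer(OutputStream out) {
                super(out);
            }

            public void writeDeclarationAndOpen(Element root) throws IOException {
                writeXMLDeclaration();    // assumed protected helper
                writeStartTag(root);      // named in the question as protected
            }

            public void writeResult(Element result) throws IOException {
                write(result);            // assumed protected: serializes one complete element
                flush();
            }

            public void closeDocument(Element root) throws IOException {
                writeEndTag(root);        // named in the question as protected
                flush();
            }
        }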

    Read the article

  • Frustration with generics

    - by sbi
    I have a bunch of functions which are currently overloaded to operate on int and string:

        bool foo(int);
        bool foo(string);
        bool bar(int);
        bool bar(string);
        void baz(int p);
        void baz(string p);

    I then have a bunch of functions taking 1, 2, 3, or 4 arguments of either int or string, which call the aforementioned functions:

        void g(int p1)               { if(foo(p1)) baz(p1); }
        void g(string p1)            { if(foo(p1)) baz(p1); }

        void g(int p1, int p2)       { if(foo(p1)) baz(p1); if(bar(p2)) baz(p2); }
        void g(int p1, string p2)    { if(foo(p1)) baz(p1); if(bar(p2)) baz(p2); }
        void g(string p1, int p2)    { if(foo(p1)) baz(p1); if(bar(p2)) baz(p2); }
        void g(string p1, string p2) { if(foo(p1)) baz(p1); if(bar(p2)) baz(p2); }
        // etc.

    (The implementation of the g() family is just a placeholder; actually they are more complicated.) More types than the current int or string might have to be introduced at any time. The same goes for functions with more arguments than 4. The current number of identical functions is barely manageable. Add one more variant in either dimension and the combinatoric explosion will be so huge, it might blow away the application.

    In C++, I'd templatize g() and be done. I understand that .NET generics are different. <sigh> But I have been fighting them for two hours trying to come up with a solution that doesn't involve too much copy & paste of code. To no avail. Surely, C#/.NET/generics/whatever won't require me to type out identical code for a family of functions taking five arguments of either of three types? So what am I missing here?
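
    The idiomatic C# answer may well look different, but to make the shape of the problem concrete, here is one language-agnostic way to collapse the overload matrix, sketched in Java: put the per-type foo/bar/baz behaviour behind a single interface and let g accept any number of such parameters. The foo/bar implementations below are placeholders; only the names are carried over from the question.

        interface Param {
            boolean foo();
            boolean bar();
            void baz();
        }

        class IntParam implements Param {
            private final int value;
            IntParam(int value) { this.value = value; }
            public boolean foo() { return value > 0; }       // placeholder logic
            public boolean bar() { return value % 2 == 0; }  // placeholder logic
            public void baz()    { System.out.println(value); }
        }

        class StringParam implements Param {
            private final String value;
            StringParam(String value) { this.value = value; }
            public boolean foo() { return !value.isEmpty(); }    // placeholder logic
            public boolean bar() { return value.length() > 3; }  // placeholder logic
            public void baz()    { System.out.println(value); }
        }

        class Dispatcher {
            // One g covers every arity and every mix of parameter types.
            static void g(Param first, Param... rest) {
                if (first.foo()) first.baz();
                for (Param p : rest) {
                    if (p.bar()) p.baz();
                }
            }
        }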

    Read the article

  • iPhone OS: KVO: Why is my Observer only getting notified at applicationDidfinishLaunching

    - by nickthedude
    I am basically trying to implement an achievement-tracking setup in my app. I have a managed object model class called StatTracker to keep track of all sorts of stats, and I want my achievement-tracking class to be notified when those stats change so I can check them against a value and see if the user has earned an achievement. I've tried to implement KVO and I think I'm pretty close to making it happen, but the problem I'm running into is this:

    In my appDelegate I have an ivar for my AchievementTracker class, and I attach it as an observer to a property value of my StatTracker core data entity in the applicationDidFinishLaunching method. I know it's making the connection because I've been able to trigger a UIAlert in my AchievementTracker instance, and I've put several log statements that should be triggered whenever the value of the StatTracker's property changes. The log statement appears only once, at application launch.

    I'm wondering if I'm missing something in the whole object lifecycle scheme of things. I just don't understand why the observer stops getting notified of changes after the applicationDidFinishLaunching method has run. Does it have something to do with the scope of the AchievementTracker reference, or more likely the reference to my core data StatTracker going away once that method finishes up? I guess I'm not sure of the right place to put these if that is the case. Would love some help.

    Here is the code where I add the observer in my appDidFinishLaunching method:

        [[CoreDataSingleton sharedCoreDataSingleton] incrementStatTrackerStat:@"timesLaunched"];

        achievementsObserver = [[AchievementTracker alloc] init];

        StatTracker *object = nil;
        object = [[[CoreDataSingleton sharedCoreDataSingleton] getStatTracker] objectAtIndex:0];
        NSLog(@"%@", [object description]);

        [[CoreDataSingleton sharedCoreDataSingleton] addObserver:achievementsObserver toStat:@"refreshCount"];

    Here is the code in my core data singleton:

        -(void) addObserver:(id)observer toStat:(NSString *) statToObserve {
            NSLog(@"observer added");
            NSArray *array = [[NSArray alloc] init];
            array = [self getStatTracker];
            [[array objectAtIndex:0] addObserver:observer
                                      forKeyPath:statToObserve
                                         options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld
                                         context:NULL];
        }

    And my AchievementTracker:

        - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                                change:(NSDictionary *)change context:(void *)context
        {
            NSLog(@"achievemnt hit");
            //NSLog("%@", [change description]);
            if ([keyPath isEqual:@"refreshCount"] &&
                ((NSInteger)[change valueForKey:@"NSKeyValueObservingOptionOld"] == 60) )
            {
                NSLog(@"achievemnt hit inside");
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"title"
                                                                message:@"achievement unlocked"
                                                               delegate:self
                                                      cancelButtonTitle:@"cancel"
                                                      otherButtonTitles:nil];
                [alert show];
            }
        }

    Read the article

  • Cannot instantiate abstract class or interface : problem while persisting

    - by sammy
    I have a class Campaign that maintains a list of AdGroupInterfaces. I'm going to persist its implementation:

        @Entity
        @Table(name = "campaigns")
        public class Campaign implements Serializable, Comparable<Object>, CampaignInterface {

            private static final long serialVersionUID = 1L;

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            public Long getId() { return id; }

            @OneToMany(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER, targetEntity = AdGroupInterface.class)
            @org.hibernate.annotations.Cascade(value = org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
            @org.hibernate.annotations.IndexColumn(name = "CHOICE_POSITION")
            private List<AdGroupInterface> AdGroup;

            public Campaign() { super(); }

            public List<AdGroupInterface> getAdGroup() { return AdGroup; }

            public void setAdGroup(List<AdGroupInterface> adGroup) { AdGroup = adGroup; }

            public void set1AdGroup(AdGroupInterface adGroup) {
                if(AdGroup==null) AdGroup = new LinkedList<AdGroupInterface>();
                AdGroup.add(adGroup);
            }
        }

    AdGroupInterface's implementation is AdGroups. When I add an adgroup to the list in a campaign (campaign c; c.getAdGroupList().add(new AdGroups()), etc.) and save the campaign, it says "Cannot instantiate abstract class or interface: AdGroupInterface". It's not recognizing the implementation just before persisting... Whereas persisting AdGroups separately works; when it is a member of another entity, it doesn't get persisted.

        import java.io.Serializable;
        import java.util.List;
        import javax.persistence.*;

        @Entity
        @DiscriminatorValue("1")
        @Table(name = "AdGroups")
        public class AdGroups implements Serializable, Comparable, AdGroupInterface {

            private static final long serialVersionUID = 1L;

            private Long Id;
            private String Name;
            private CampaignInterface Campaign;
            private MonetaryValue DefaultBid;

            public AdGroups() { super(); }

            public AdGroups(String name, CampaignInterface campaign) {
                super();
                this.Campaign = new Campaign();
                Name = name;
                this.Campaign = campaign;
                DefaultBid = defaultBid;
                AdList = adList;
            }

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name="AdGroup_Id")
            public Long getId() { return Id; }
            public void setId(Long id) { Id = id; }

            @Column(name="AdGroup_Name")
            public String getName() { return Name; }
            public void setName(String name) { Name = name; }

            @ManyToOne
            @JoinColumn(name="Cam_ID", nullable = true, insertable = false)
            public CampaignInterface getCampaign() { return Campaign; }
            public void setCampaign(CampaignInterface campaign) { this.Campaign = campaign; }
        }

    What am I missing? Please look into it...
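
    One adjustment that is often suggested for interface-typed collections -- offered here as a sketch to try rather than a confirmed fix for this particular mapping -- is to point targetEntity at the concrete @Entity class instead of the interface, so Hibernate knows which table-backed type to instantiate while the field stays declared against the interface:

        @OneToMany(cascade = {CascadeType.ALL}, fetch = FetchType.EAGER,
                   targetEntity = AdGroups.class)   // concrete entity, not AdGroupInterface
        @org.hibernate.annotations.IndexColumn(name = "CHOICE_POSITION")
        private List<AdGroupInterface> AdGroup;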

    Read the article

  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value, but that runs into the slicing problem. Am I missing something here?

    Implementation:

        namespace detail {
            struct FileApi {
                virtual HANDLE CreateFileW(
                    __in LPCWSTR lpFileName,
                    __in DWORD dwDesiredAccess,
                    __in DWORD dwShareMode,
                    __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                    __in DWORD dwCreationDisposition,
                    __in DWORD dwFlagsAndAttributes,
                    __in_opt HANDLE hTemplateFile
                ) {
                    return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes,
                                         dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
                }
                virtual void CloseHandle(HANDLE handleToClose) {
                    ::CloseHandle(handleToClose);
                }
            };
        }

        class File : boost::noncopyable {
            HANDLE hWin32;
            boost::scoped_ptr<detail::FileApi> fileApi;
        public:
            File(
                __in LPCWSTR lpFileName,
                __in DWORD dwDesiredAccess,
                __in DWORD dwShareMode,
                __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                __in DWORD dwCreationDisposition,
                __in DWORD dwFlagsAndAttributes,
                __in_opt HANDLE hTemplateFile,
                __in detail::FileApi * method = new detail::FileApi()
            ) {
                fileApi.reset(method);
                hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes,
                                              dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile);
            }
            ~File() {
                fileApi->CloseHandle(hWin32);
            }
        };

    Tests:

        namespace detail {
            struct MockFileApi : public FileApi {
                MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE));
                MOCK_METHOD1(CloseHandle, void(HANDLE));
            };
        }

        using namespace detail;
        using namespace testing;

        TEST(Test_File, OpenPassesArguments)
        {
            MockFileApi * api = new MockFileApi;
            EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72),
                Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102),
                Eq(reinterpret_cast<HANDLE>(98))))
                .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42)));

            File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67), 98, 102,
                      reinterpret_cast<HANDLE>(98), api);
        }

    Read the article
