Search Results

Search found 15139 results on 606 pages for 'scripting interface'.

  • Image Wells, Core Data, and SQLite files.

    - by Sway
    I've got a Mac application that I've developed. I use it to create sqlite files that are bundled with my iPhone app. The Mac app uses Core Data and bindings, and is working fine except for one weird issue. I use an NSImageView (or Image Well) to let me drag and drop jpg files; it is bound through to an optional binary attribute in my model class. For some reason, when I drag a 4k jpg file onto the image well and save the sqlite file, the data saved to the binary column is over 15 times larger than it should be, whereas if I use an application like SQLiteManager and add the image into the row in the database, the binary data is the expected size. For the same jpg file: actual size 2371 bytes; persisted via Core Data, 35810 bytes. Can anyone give me a suggestion as to why this might be happening? Do I need to set some setting in Interface Builder or write some custom code?
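
    The usual explanation for this kind of bloat, offered here as a hedged guess rather than a confirmed diagnosis: the NSImageView binding stores a keyed archive of the whole NSImage object (typically containing a TIFF representation), not the original JPEG bytes. A custom reversible NSValueTransformer that re-encodes the image as JPEG on the way into the store keeps the size down; a minimal non-ARC sketch, where the class name and the choice to re-encode (rather than keep the original file's bytes) are assumptions:

        // Sketch: convert between NSData (model) and NSImage (view) so the
        // binding never archives an NSImage object into the store.
        @interface JPEGDataTransformer : NSValueTransformer
        @end

        @implementation JPEGDataTransformer
        + (Class)transformedValueClass { return [NSImage class]; }
        + (BOOL)allowsReverseTransformation { return YES; }

        // store -> view: wrap the stored bytes in an image
        - (id)transformedValue:(id)value {
            return value ? [[[NSImage alloc] initWithData:value] autorelease] : nil;
        }

        // view -> store: re-encode as JPEG instead of archiving the NSImage
        - (id)reverseTransformedValue:(id)value {
            NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[value TIFFRepresentation]];
            return [rep representationUsingType:NSJPEGFileType properties:nil];
        }
        @end

    The transformer would then be registered with +[NSValueTransformer setValueTransformer:forName:] and named in the binding's Value Transformer field in Interface Builder.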

  • Why isn't my UITableView in a popover appearing in the correct scroll position?

    - by zbrimhall
    I have a split view-based app that presents a master-detail interface and uses a popover to present the master list when in portrait mode. The popover presents a sectioned table view that ultimately gets populated by a subclass of NSFetchedResultsController. I can tap the toolbar button to present the master list, scroll to whatever row, and tap the row to dismiss the popover.

    My problem is that if the table is scrolled past the top of the second section, when I dismiss the popover and later tap the toolbar button to re-present it, the table's scroll position is always set such that the first row of the second section is at the top of the list. If I haven't scrolled past the top of the second section, it correctly remembers its scroll position when the table is presented again. Similarly, in landscape mode, if I scroll the table past the top of the third section and then rotate to portrait, when I come back to landscape the scroll position is always set such that the first row of the third section is at the top of the list.

    I tried calling -scrollToNearestSelectedRowAtScrollPosition:animated: in both the master view controller's -viewWillAppear: and the split view delegate's -splitViewController:popoverController:willPresentViewController:, to no effect. Anybody have a clue what I might be doing wrong?

  • Java serialization problems with different JVMs

    - by Alberto
    I am having trouble using serialization in Java. I've searched the web for a solution but haven't found an answer yet. The problem is this: I have a Java library (I have the code and I export it to an archive prior to executing the code) which I need to use with two different JVMs. One JVM is on the server (Ubuntu, running the Java(TM) SE Runtime Environment, build 1.7.0_09-b05) and the other is on Android 2.3.3. I compiled the library with Java 1.6. Now, when I try to import, on the client, an object exported from the server, I receive this error:

        java.io.InvalidClassException: [Lweka.classifiers.functions.MultilayerPerceptron$NeuralEnd;;
        Incompatible class (SUID): [Lweka.classifiers.functions.MultilayerPerceptron$NeuralEnd;:
            static final long serialVersionUID = -359311387972759020L;
        but expected [Lweka.classifiers.functions.MultilayerPerceptron$NeuralEnd;:
            static final long serialVersionUID = 1920571045915494592L;

    I do have an explicit serial version UID declared on the class MultilayerPerceptron$NeuralEnd, like this:

        protected class NeuralEnd extends NeuralConnection {
            private static final long serialVersionUID = 7305185603191183338L;
        }

    where NeuralConnection implements the java.io.Serializable interface. If I run serialver on MultilayerPerceptron$NeuralEnd, I get back the serialVersionUID which I declared. So why have both JVMs changed this value? Can you help me? Thanks, Alberto
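
    One detail worth noting: the mismatched UIDs in the exception belong to the array class [Lweka.classifiers.functions.MultilayerPerceptron$NeuralEnd;, not to NeuralEnd itself, and an array class's serialVersionUID is computed by each VM rather than declared, so declaring one on NeuralEnd doesn't pin it. A quick hedged probe to see what each runtime actually computes (NeuralEnd is protected, so this would have to live somewhere it is visible, which is an assumption here):

        import java.io.ObjectStreamClass;

        // Print the serialVersionUID each VM uses for the array class.
        public class SuidProbe {
            public static void main(String[] args) {
                ObjectStreamClass osc = ObjectStreamClass.lookup(
                        weka.classifiers.functions.MultilayerPerceptron.NeuralEnd[].class);
                System.out.println(osc.getName() + " -> " + osc.getSerialVersionUID());
            }
        }

    Running the same probe under the server JRE and under Dalvik would show whether the two runtimes disagree about the array class rather than about NeuralEnd itself.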

  • NSScrollView frame and flipped documentView

    - by StrAbZ
    Hi, I have problems with NSScrollView; it is not displayed the way I want. Yes, I know there are a lot of posts about it around the web: I need to override isFlipped in my NSView subclass so that it returns YES. OK, that's done, and now my scroll view scrolls from top to bottom, not in the reverse direction as it did before overriding isFlipped.

    But here is the second part, my real problem, for which I haven't found any answer on the web: how am I supposed to write my code, or lay out my view in Interface Builder, if everything is flipped? If I put something at the top, it is displayed at the bottom. Do you have any magic trick to handle that?

    And my last problem is the NSScrollView frame. Before I set the documentView of my scroll view, everything is fine: the scroll view is displayed at the place I chose. But when I set the document view, the scroll view's frame appears to get bigger, so I have to resize it. Is this normal behavior? Thank you very much.

  • pooling with Windsor

    - by AlonEl
    I've tried out the pooling lifestyle with Windsor. Let's say I want multiple CustomerTasks to work with a pool of ILoggers. When I try resolving more times than maxPoolSize, new loggers keep getting created. What am I missing, and what exactly is the meaning of the min and max pool sizes? The XML configuration I use is (demo code):

        <component id="customertasks"
                   type="WindsorTest.CustomerTasks, WindsorTestCheck"
                   lifestyle="transient" />
        <component id="logger.console"
                   service="WindsorTest.ILogger, WindsorTestCheck"
                   type="WindsorTest.ConsoleLogger, WindsorTestCheck"
                   lifestyle="pooled"
                   initialPoolSize="2"
                   maxPoolSize="5" />

    The code is:

        public interface ILogger
        {
            void Log(string message);
        }

        public class ConsoleLogger : ILogger
        {
            private static int count = 0;

            public ConsoleLogger()
            {
                Console.WriteLine("Hello from constructor number:" + count);
                count++;
            }

            public void Log(string message)
            {
                Console.WriteLine(message);
            }
        }

        public class CustomerTasks
        {
            private readonly ILogger logger;

            public CustomerTasks(ILogger logger)
            {
                this.logger = logger;
            }

            public void SaveCustomer()
            {
                logger.Log("Saved customer");
            }
        }
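
    For what it's worth, Windsor's pool only recycles instances that are handed back to it: a pooled component is returned to the pool by container.Release(...) (or when the container is disposed), so resolving repeatedly without releasing keeps constructing new loggers once the pool is drained, which matches the behavior described. A hedged sketch of the intended usage (the config file name is an assumption):

        // Sketch: pooled components must be released back to the container.
        var container = new WindsorContainer("windsor.xml");

        ILogger logger = container.Resolve<ILogger>(); // taken from the pool
        try
        {
            logger.Log("doing some work");
        }
        finally
        {
            container.Release(logger); // returned to the pool for reuse
        }

    Under that reading, initialPoolSize is how many instances are created up front and maxPoolSize is how many the pool will keep around for reuse, not a hard cap on concurrent instances.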

  • Wrong number of arguments for stored procedure (PLS-00306)

    - by Peter Kaleta
    Hi, I have a problem calling my procedure: Oracle throws PLS-00306 ("wrong number or types of arguments in call to procedure"). In my type declaration the procedure has exactly the same declaration as in the header below. If I run it as a separate procedure it works; when I use it inside the ODCI interface for extensible index creation, it throws PLS-00306.

        MEMBER PROCEDURE FILL_TREE_LVL(target_column VARCHAR2, cur_lvl NUMBER,
                                       max_lvl NUMBER, parent_rect NUMBER,
                                       start_x NUMBER, start_y NUMBER,
                                       end_x NUMBER, end_y NUMBER)
        IS
            stmt        VARCHAR2(2000);
            rect_id     NUMBER;
            diff_x      NUMBER;
            diff_y      NUMBER;
            new_start_x NUMBER;
            new_end_x   NUMBER;
            i           NUMBER;
            j           NUMBER;
        BEGIN
            {...}
        END FILL_TREE_LVL;

        STATIC FUNCTION ODCIINDEXCREATE(ia SYS.ODCIINDEXINFO, parms VARCHAR2,
                                        env SYS.ODCIEnv) RETURN NUMBER
        IS
            stmt     VARCHAR2(2000);
            stmt2    VARCHAR2(2000);
            min_x    NUMBER;
            max_x    NUMBER;
            min_y    NUMBER;
            max_y    NUMBER;
            lvl      NUMBER;
            rect_id  NUMBER;
            pt_tab   VARCHAR2(50);
            rect_tab VARCHAR2(50);
            cnum     NUMBER;
            TYPE point_rect IS RECORD (
                point_id NUMBER,
                rect_id  NUMBER
            );
            TYPE point_rect_tab IS TABLE OF point_rect;
            pr_table point_rect_tab;
        BEGIN
            {...}
            FILL_TREE_LVL('any string', 0, lvl, min_x, min_y, max_x, max_y);
            {...}
        END;
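
    Two things stand out in the excerpt, offered as guesses rather than a certain diagnosis: the call passes seven arguments while FILL_TREE_LVL declares eight parameters (one NUMBER argument is missing), which by itself produces PLS-00306; and FILL_TREE_LVL is a MEMBER procedure, so calling it from the STATIC function ODCIINDEXCREATE also lacks the implicit SELF parameter, since there is no instance to supply it. A corrected call under those assumptions, with rect_id as a placeholder for the missing parent_rect value:

        -- Sketch: supply all eight parameters, and declare FILL_TREE_LVL as
        -- STATIC (or call it on an instance) so no implicit SELF is expected.
        FILL_TREE_LVL('any string', 0, lvl, rect_id, min_x, min_y, max_x, max_y);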

  • Detecting a double free: to release or not to release?

    - by mongeta
    Hello, if we have this code in our interface (.h) file:

        NSString *fieldNameToStoreModel;
        NSFetchedResultsController *fetchedResultsController;
        NSManagedObjectContext *managedObjectContext;
        DataEntered *dataEntered;

    then in our implementation (.m) file we must have:

        - (void)dealloc {
            [fieldNameToStoreModel release];
            [fetchedResultsController release];
            [managedObjectContext release];
            [dataEntered release];
            [super dealloc];
        }

    The four objects are assigned from a previous UIViewController, like this:

        UIViewController *detailViewController;
        detailViewController = [[CarModelSelectViewController alloc] initWithStyle:UITableViewStylePlain];
        ((CarModelSelectViewController *)detailViewController).dataEntered = self.dataEntered;
        ((CarModelSelectViewController *)detailViewController).managedObjectContext = self.managedObjectContext;
        ((CarModelSelectViewController *)detailViewController).fieldNameToStoreModel = self.fieldNameToStoreModel;
        [self.navigationController pushViewController:detailViewController animated:YES];
        [detailViewController release];

    The objects that now live in the new UIViewController are the same as in the previous UIViewController, so can't I release them in the new UIViewController? The problem is that sometimes, not always, my app crashes when I leave the new UIViewController and go back to the previous one. Normally the error I get is a double free. I've used malloc_error_break but I'm still not sure which object it is. Sometimes I can go from the previous UIViewController to the next one and come back four or five times before the double free appears. If I don't release any object, everything works and Instruments says there are no memory leaks. So, the final question: should I release those objects here or not? Thanks, m.
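
    One common cause of this pattern, offered as a guess: if those values are stored by plain ivar assignment (or the properties are declared assign), the new controller never retained them, so the releases in -dealloc over-release objects still owned by the previous controller, and the crash appears whenever the other owner lets go first. Declaring the properties retain makes the ownership explicit, after which releasing in -dealloc is correct; a minimal sketch (the superclass is assumed from the initWithStyle: call):

        // Sketch: the new controller takes its own reference, so the release
        // in -dealloc balances a retain this object actually performed.
        @interface CarModelSelectViewController : UITableViewController {
            DataEntered *dataEntered;
        }
        @property (nonatomic, retain) DataEntered *dataEntered;
        @end

        // Caller side: assigning through the property performs the retain.
        ((CarModelSelectViewController *)detailViewController).dataEntered = self.dataEntered;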

  • Android: Layout manager not showing buttons

    - by Arun
    The following is my code. I want an interface where I have a single-line textbox and a multiline textbox, with two buttons below. I want the multiline textbox to occupy all the space left over after rendering the buttons and the other textbox. For this I created two LinearLayouts inside the main layout. The first one has vertical orientation with layout_width set to fill_parent; the second one is horizontal, again with fill_parent. The first one holds the textboxes, with the multiline one's layout_height set to fill_parent; the second one holds the two buttons, OK and Cancel. When I run this application I get the UI, but the buttons are very small, and I have to set the button height manually. What am I doing wrong here? I don't want to hard-code the button height.

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">

            <LinearLayout
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1">

                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Name" />
                <EditText
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content" />
                <TextView
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="Contents" />
                <EditText
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:gravity="top" />
            </LinearLayout>

            <LinearLayout
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:layout_weight="1">

                <Button
                    android:id="@+id/okbutt"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:text="OK"
                    android:layout_weight="1" />
                <Button
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:text="Cancel"
                    android:layout_weight="1" />
            </LinearLayout>
        </LinearLayout>

    Thanks, Arun
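
    A common fix, sketched under the assumption that the goal is "buttons at their natural height at the bottom, multiline field absorbing the rest": give the button row wrap_content with no weight, and give the growing EditText a 0dp height plus a layout_weight so it soaks up the leftover space (the Name/Contents widgets are omitted here for brevity):

        <!-- Sketch: weight only the view that should absorb leftover space. -->
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">

            <EditText
                android:layout_width="fill_parent"
                android:layout_height="0dp"
                android:layout_weight="1"
                android:gravity="top" />

            <LinearLayout
                android:orientation="horizontal"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content">

                <Button
                    android:layout_width="0dp"
                    android:layout_height="wrap_content"
                    android:layout_weight="1"
                    android:text="OK" />

                <Button
                    android:layout_width="0dp"
                    android:layout_height="wrap_content"
                    android:layout_weight="1"
                    android:text="Cancel" />
            </LinearLayout>
        </LinearLayout>

    In the original layout both children of the root carry layout_weight="1", so the button row's height is decided by weight distribution rather than by wrapping its content, which is one plausible reason the buttons come out tiny.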

  • Extending ASP.NET role providers

    - by Quick Joe Smith
    Because the RoleProvider interface seems to treat roles as nothing more than simple strings, I'm wondering if there is any non-hacky way to apply an optional value for a role on a per-user basis. Our current login management system implements roles as key-value pairs, where the value part is optional and usually used to clarify or limit the permissions granted by a role. For example, a role 'editor' might contain a user 'barry', but for 'barry' it will have an optional value 'raptors', which the system would interpret to mean that Barry can only edit articles filed under the 'raptors' category. I have seen elsewhere a suggestion to simply create additional delimited roles, such as 'editor.raptors' or somesuch. That's not really going to be ideal because it would greatly bloat the number of roles, and I can tell it's going to be a very hard sell to replace our current implementation (which is also less than ideal, but has the advantage of being custom-made to work with our user database). I can tell already that the concatenation method mentioned above is going to involve a lot of tedious string-splitting and partial matching. Is there a better way?

  • Any good tools or tips for fuzz testing Windows forms applications?

    - by Ogre Psalm33
    I'm maintaining a ~300K LOC C# legacy thick-client application with a Windows.Forms interface. The app is full of little bugs and quirks. For example, I recently discovered a bug where, if a user edits and tabs (not clicks) through cells on a DataGridView and leaves a certain cell selected, the app gets an "Object reference not set to an instance of an object" exception. I discover (or get a bug report of) something new like this about every week or two. I've had enough, and was thinking of trying some sort of fuzz testing on the application to try to ferret out undiscovered issues. If I roll my own fuzz testing, I'd assume I at least need to be able to generate test harnesses that run pieces of my app (main window, FormX, FormY, FormZ, ...) independently and try to inject events into them. I've been trying to find tools suited to this, but so far have come up with nothing for Win Forms. (There seems to be no shortage of fuzz-testing tools for web apps, however.) Any helpful ideas?

  • The IOC "child" container / Service Locator

    - by Mystagogue
    DISCLAIMER: I know there is debate between the DI and service locator patterns. I have a question that is intended to avoid that debate. This question is for the service locator fans, who happen to think like Fowler: "DI...is hard to understand...on the whole I prefer to avoid it unless I need it." For the purposes of my question, I must avoid DI (reasons intentionally not given), so I'm not trying to spark a debate unrelated to my question.

    QUESTION: The only issue I might see with keeping my IOC container in a singleton (remember my disclaimer above) is with the use of child containers. Presumably the child containers would not themselves be singletons. At first I thought that posed a real problem. But as I thought about it, I began to think that is precisely the behavior I want (the child containers are not singletons, and can be Disposed() at will).

    Then my thoughts went further, into a philosophical realm. Because I'm a service locator fan, I'm wondering just how necessary the notion of a child container is in the first place. In the small set of cases where I've seen its usefulness, it has either been to satisfy DI (which I'm mostly avoiding anyway), or the issue was solvable without recourse to the IOC container. My thoughts were partly inspired by the IServiceLocator interface, which doesn't even bother to list a "GetChildContainer" method.

    So my question is just that: if you are a service locator fan, have you found that child containers are usually moot? Otherwise, when have they been essential?

    Extra credit: if there are other philosophical issues with a service locator in a singleton (aside from those posed by DI advocates), what are they?

  • Silently catch windows error popups when calling System.load() in java

    - by Marcelo Morales
    I have a Java Swing application which needs to load some native libraries on Windows. The problem is that the client could have different versions of those libraries. In one recent version, either the names changed or the order in which the libraries must be loaded changed. To keep up, we iterate over all possible library names, but some fail to load (due to their nonexistence, or because another must be loaded previously). This idea works on older Windows versions, but on later ones it shows an error popup. I saw in question 4058303 (Silently catch windows error popups when calling LoadLibrary) that I need to call SetErrorMode, but I am not sure how to call SetErrorMode from JNA. I tried to follow the idea from question 11038595 but I am not sure how to proceed.

        public interface CKernel32 extends Kernel32 {
            CKernel32 INSTANCE = (CKernel32) Native.loadLibrary("kernel32", CKernel32.class);
            // TODO: HELP: HOW define the SetErrorMode function
        }

    How do I define (from the SetErrorMode documentation):

        UINT WINAPI SetErrorMode(
            _In_ UINT uMode
        );

    in the line marked as TODO: HELP:? Thanks in advance.
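
    SetErrorMode takes and returns a UINT, which maps to a Java int in JNA, so the interface method can be declared directly; a sketch, with the two flag values taken from WinBase.h:

        public interface CKernel32 extends Kernel32 {
            CKernel32 INSTANCE = (CKernel32) Native.loadLibrary("kernel32", CKernel32.class);

            // UINT WINAPI SetErrorMode(UINT uMode); UINT maps to int.
            int SetErrorMode(int uMode);

            int SEM_FAILCRITICALERRORS = 0x0001; // suppress critical-error boxes
            int SEM_NOOPENFILEERRORBOX = 0x8000; // suppress file-not-found boxes
        }

    It would then be called once before the System.load() loop, for example:

        int previous = CKernel32.INSTANCE.SetErrorMode(
                CKernel32.SEM_FAILCRITICALERRORS | CKernel32.SEM_NOOPENFILEERRORBOX);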

  • Posting XML form data to a RESTful Server with Javascript or PHP

    - by pjs-worker
    Hi folks, I've been given the task of posting to a RESTful server. I'm new to the official "REST", but I've played with the concept before. However, this time I have an example XML payload file that I am supposed to post, and I'm struggling to figure out how the two relate. Can you help?

    Right now I can post to a specific site, say www.pcpost.com/schema/Application, and I can generate the URL for the initial request, i.e. postApplication?userid=4&... Being relatively new to web programming, I don't know how to take the following payload and interface it with the server. I'm at least familiar with Javascript and PHP; if this is impossible with those two, I can learn whatever would be best. Thanks for your help on this. C

        <?xml version="1.0" ?>
        <Application xmlns="http://www.pcpost.com/schema/Application"
                     SchemaVersion="1.0" ProgramId="8" ApplicationDate="2009-08-29">
            <Vendors>
                <Vendor Role="Applicant" Company="Test Company" Contact="Smith, John"/>
                <Vendor Role="Seller" Company="Test Company" Contact="Doe, Jane"/>
                <Vendor Role="Installer" Company="Test Company" Contact="Funk, Carl"/>
            </Vendors>
            <Participants>
                <Participant TaxStatus="Individual" Sector="Commercial">
                    <Roles>
                        <Role>Host Customer</Role>
                    </Roles>
                </Participant>
            </Participants>
        </Application>
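
    For the PHP side, a hedged sketch of POSTing that payload with cURL; the endpoint URL, the file name, and the absence of authentication are all assumptions, since the question doesn't say how the service expects to be called:

        <?php
        // Sketch: POST an XML payload to a REST endpoint with cURL.
        $url = 'http://www.pcpost.com/postApplication?userid=4';
        $xml = file_get_contents('application.xml'); // the example payload file

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xml);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/xml'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

        $response = curl_exec($ch);
        if ($response === false) {
            echo 'POST failed: ' . curl_error($ch);
        }
        curl_close($ch);
        ?>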

  • Knowing the selections made on a 'multichooser' box in a Mechanical Turk HIT (using Command Line Tools)

    - by gveda
    Hi All, I am new to Amazon Mechanical Turk, and wanted to create a HIT with a qualification task. I am using the command line tools interface. One of the questions in my qualification task involves users selecting a number of options, so I use a 'multichooser' selection type. Now I want to grade the responses based on the selections, where each selection has a different score. So for example, s1 has a score of 5, s2 of 10, s3 of 6, and so on. If the user selects s1 and s3, he/she gets a score of 11. Unfortunately, doing something like the following does not work:

        <AnswerOption>
            <SelectionIdentifier>s1</SelectionIdentifier>
            <AnswerScore>5</AnswerScore>
        </AnswerOption>
        <AnswerOption>
            <SelectionIdentifier>s2</SelectionIdentifier>
            <AnswerScore>10</AnswerScore>
        </AnswerOption>
        <AnswerOption>
            <SelectionIdentifier>s3</SelectionIdentifier>
            <AnswerScore>6</AnswerScore>
        </AnswerOption>

    If I do this, when I select multiple things I get a score of 0. If I select only one option, say s1, then I get the appropriate score. Can you please help me on how to go about this? I could ask the same question five times with the same options, but then users might choose the same answer multiple times, something I wish to avoid. Thanks! Gaurav

  • Custom UITableViewCell changing indexPath while scrolling?

    - by Chris
    I have a custom UITableViewCell which I created in Interface Builder. I am successfully dequeuing cells, but as I scroll, the cells appear to start showing data for different index paths. In this example, I am feeding the current indexPath.section and indexPath.row into customCellLabel. As I scroll the table up and down, some of the cells change; the numbers can be all over the place, but the cells are not skipping around visually. If I comment out the if (cell == nil) block, the problem goes away. If I use a standard cell, the problem goes away. Any ideas why this might be happening?

        // Customize the appearance of table view cells.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = (UITableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"CalendarEventCell"];
            if (cell == nil) {
                NSLog(@"Creating New Cell !!!!!");
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"CalendarEventCell" owner:self options:nil];
                cell = (UITableViewCell *)[nib objectAtIndex:0];
                cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
            }

            // Set up the cell...
            [customCellLabel setText:[NSString stringWithFormat:@"%d - %d", indexPath.section, indexPath.row]];
            return cell;
        }
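
    The usual culprit in this setup: customCellLabel is an outlet on the controller (the nib is loaded with owner:self), so after each load it points at the label of whichever cell was inflated most recently, not the cell being returned, and dequeued cells are never updated at all. The standard workaround is to reach the label through the cell itself; a sketch assuming the UILabel in CalendarEventCell.xib has its tag set to 1:

        // Sketch: fetch the label from *this* cell instead of a controller outlet.
        UILabel *label = (UILabel *)[cell viewWithTag:1];
        [label setText:[NSString stringWithFormat:@"%d - %d",
                        indexPath.section, indexPath.row]];
        return cell;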

  • Variable field in a constraint annotation

    - by Javi
    Hello, I need to create a custom constraint annotation which can access the value of another field of my bean. I'll use this annotation to validate the field, because it depends on the value of the other one, but with the way I define it the compiler says the value for the annotation attribute must be a constant expression. I've defined it this way:

        @Target(ElementType.FIELD)
        @Retention(RetentionPolicy.RUNTIME)
        @Constraint(validatedBy = EqualsFieldValidator.class)
        @Documented
        public @interface EqualsField {
            public String field();
            String message() default "{com.myCom.annotations.EqualsField.message}";
            Class<?>[] groups() default {};
            Class<? extends Payload>[] payload() default {};
        }

        public class EqualsFieldValidator implements ConstraintValidator<EqualsField, String> {
            private EqualsField equalsField;

            @Override
            public void initialize(EqualsField equalsField) {
                this.equalsField = equalsField;
            }

            @Override
            public boolean isValid(String thisField, ConstraintValidatorContext arg1) {
                // my validation
            }
        }

    and in my bean I want something like this:

        public class MyBean {
            private String field1;

            @EqualsField(field = field1)
            private String field2;
        }

    Is there any way to define the annotation so the field value can be a variable? Thanks
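
    Annotation attribute values have to be compile-time constants, so a field can't be referenced directly. The usual pattern is to move the constraint to the type and name both fields as string constants, letting the validator read them reflectively; a hedged sketch where the attribute names first/second are illustrative:

        // Sketch: a type-level constraint that names the fields to compare.
        @Target(ElementType.TYPE)
        @Retention(RetentionPolicy.RUNTIME)
        @Constraint(validatedBy = EqualsFieldValidator.class)
        @Documented
        public @interface EqualsField {
            String first();
            String second();
            String message() default "{com.myCom.annotations.EqualsField.message}";
            Class<?>[] groups() default {};
            Class<? extends Payload>[] payload() default {};
        }

        @EqualsField(first = "field1", second = "field2")
        public class MyBean {
            private String field1;
            private String field2;
        }

    The validator then implements ConstraintValidator<EqualsField, Object>, looks the two fields up by name with reflection, and compares their values.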

  • Android java.lang.VerifyError for private method with annotated argument.

    - by alex2k8
    I have a very simple project that compiles, but can't be started on the emulator. The problem is with this method:

        private void bar(@Some String a) {}  // java.lang.VerifyError

    The issue can be avoided if the annotation is removed:

        private void bar(String a) {}          // OK

    or the method visibility is changed:

        void bar(@Some String a) {}            // OK
        public void bar(@Some String a) {}     // OK
        protected void bar(@Some String a) {}  // OK

    Any idea what is wrong with the original method? Is this a dalvik bug, or...? If someone would like to experiment with the code, here it is:

    Test.java:

        public class Test {
            private void bar(@Some String a) {}
            public void foo() {
                bar(null);
            }
        }

    Some.java:

        public @interface Some {}

    MainActivity.java:

        public class MainActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                new Test().foo();
            }
        }

    Stack trace:

        ERROR/dalvikvm(1358): Could not find method com.my.Test.bar, referenced from method com.my.Test.foo
        WARN/dalvikvm(1358): VFY: unable to resolve direct method 11: Lcom/my/Test;.bar (Ljava/lang/String;)V
        WARN/dalvikvm(1358): VFY: rejecting opcode 0x70 at 0x0001
        WARN/dalvikvm(1358): VFY: rejected Lcom/my/Test;.foo ()V
        WARN/dalvikvm(1358): Verifier rejected class Lcom/my/Test;
        DEBUG/AndroidRuntime(1358): Shutting down VM
        WARN/dalvikvm(1358): threadid=3: thread exiting with uncaught exception (group=0x4000fe70)
        ERROR/AndroidRuntime(1358): Uncaught handler: thread main exiting due to uncaught exception
        ERROR/AndroidRuntime(1358): java.lang.VerifyError: com.my.Test
        ERROR/AndroidRuntime(1358):     at com.my.MainActivity.onCreate(MainActivity.java:13)
        ERROR/AndroidRuntime(1358):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2231)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2284)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.access$1800(ActivityThread.java:112)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1692)
        ERROR/AndroidRuntime(1358):     at android.os.Handler.dispatchMessage(Handler.java:99)
        ERROR/AndroidRuntime(1358):     at android.os.Looper.loop(Looper.java:123)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.main(ActivityThread.java:3948)
        ERROR/AndroidRuntime(1358):     at java.lang.reflect.Method.invokeNative(Native Method)
        ERROR/AndroidRuntime(1358):     at java.lang.reflect.Method.invoke(Method.java:521)
        ERROR/AndroidRuntime(1358):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782)
        ERROR/AndroidRuntime(1358):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540)
        ERROR/AndroidRuntime(1358):     at dalvik.system.NativeStart.main(Native Method)

  • Should I use a collection here?

    - by Eva
    So I have code set up like this:

        public interface IInterface {
            public void setField(Object field);
        }

        public abstract class AbstractClass extends JPanel implements IInterface {
            private Object field_;
            public void setField(Object field) {
                field_ = field;
            }
        }

        public class ClassA extends AbstractClass {
            public ClassA() {
                // unique ClassA constructor stuff
            }
            public Dimension getPreferredSize() {
                return new Dimension(1, 1);
            }
        }

        public class ClassB extends AbstractClass {
            public ClassB() {
                // unique ClassB constructor stuff
            }
            public Dimension getPreferredSize() {
                return new Dimension(42, 42);
            }
        }

        public class ConsumerA {
            public ConsumerA(Collection<AbstractClass> collection) {
                for (AbstractClass abstractClass : collection) {
                    abstractClass.setField(this);
                    abstractClass.repaint();
                }
            }
        }

    All hunky-dory so far, until:

        public class ConsumerB {
            // Option 1
            public ConsumerB(ClassA a, ClassB b) {
                methodThatOnlyTakesA(a);
                methodThatOnlyTakesB(b);
            }

            // Option 2
            public ConsumerB(Collection<AbstractClass> collection) {
                for (IInterface i : collection) {
                    if (i instanceof ClassA) {
                        methodThatOnlyTakesA((ClassA) i);
                    } else if (i instanceof ClassB) {
                        methodThatOnlyTakesB((ClassB) i);
                    }
                }
            }
        }

        public class UsingOption1 {
            public static void main(String[] args) {
                ClassA a = new ClassA();
                ClassB b = new ClassB();
                Collection<AbstractClass> collection = Arrays.asList(a, b);
                ConsumerA consumerA = new ConsumerA(collection);
                ConsumerB consumerB = new ConsumerB(a, b);
            }
        }

        public class UsingOption2 {
            public static void main(String[] args) {
                Collection<AbstractClass> collection = Arrays.asList(new ClassA(), new ClassB());
                ConsumerA consumerA = new ConsumerA(collection);
                ConsumerB consumerB = new ConsumerB(collection);
            }
        }

    With a lot more classes extending AbstractClass, both options get unwieldy. Option 1 would make the constructor of ConsumerB really long, and UsingOption1 would get long too. Option 2 would have way more if statements than I feel comfortable with. Is there a viable option 3? If it helps, ClassA and ClassB have all the same methods; they're just implemented differently. Thanks for slogging through my code!
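
    One viable option 3 is double dispatch: each subclass hands itself to the consumer, so there is neither a long constructor parameter list nor an instanceof chain; a sketch, with the method names accept/take as illustrative choices:

        // Sketch of double dispatch: each subclass statically knows which
        // overload of take() applies to it.
        public abstract class AbstractClass extends JPanel implements IInterface {
            // ...field_ and setField as before...
            public abstract void accept(ConsumerB consumer);
        }

        public class ClassA extends AbstractClass {
            public void accept(ConsumerB consumer) { consumer.take(this); }
        }

        public class ClassB extends AbstractClass {
            public void accept(ConsumerB consumer) { consumer.take(this); }
        }

        public class ConsumerB {
            public ConsumerB(Collection<AbstractClass> collection) {
                for (AbstractClass ac : collection) {
                    ac.accept(this); // dispatches to the right overload
                }
            }
            public void take(ClassA a) { /* was methodThatOnlyTakesA */ }
            public void take(ClassB b) { /* was methodThatOnlyTakesB */ }
        }

    Adding a new subclass then means adding one accept method and one take overload, and the abstract accept forces the compiler to point out anything missed.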

  • Adding an annotation to a runtime generated method/class using Javassist

    - by Idan K
    I'm using Javassist to generate a class foo, with method bar, but I can't seem to find a way to add an annotation (the annotation itself isn't runtime generated) to the method. The code I tried looks like this:

        ClassPool pool = ClassPool.getDefault();

        // create the class
        CtClass cc = pool.makeClass("foo");

        // create the method
        CtMethod mthd = CtNewMethod.make("public Integer getInteger() { return null; }", cc);
        cc.addMethod(mthd);

        ClassFile ccFile = cc.getClassFile();
        ConstPool constpool = ccFile.getConstPool();

        // create the annotation
        AnnotationsAttribute attr = new AnnotationsAttribute(constpool, AnnotationsAttribute.visibleTag);
        Annotation annot = new Annotation("MyAnnotation", constpool);
        annot.addMemberValue("value", new IntegerMemberValue(ccFile.getConstPool(), 0));
        attr.addAnnotation(annot);
        ccFile.addAttribute(attr);

        // generate the class
        clazz = cc.toClass();

        // length is zero
        java.lang.annotation.Annotation[] annots = clazz.getAnnotations();

    Obviously I'm doing something wrong, since annots is an empty array. This is what the annotation looks like:

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        public @interface MyAnnotation {
            int value();
        }
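
    Two details look off in the snippet, hedged as best guesses: the AnnotationsAttribute is added to the class file rather than to the method, and clazz.getAnnotations() then asks for class-level annotations of a METHOD-targeted type, which will always come back empty. Attaching the attribute to the method's MethodInfo, and using the annotation's fully qualified name, is the usual fix; a sketch (the package name is assumed, exception handling elided):

        // Sketch: attach the annotation to the method, not to the class file.
        AnnotationsAttribute attr =
                new AnnotationsAttribute(constpool, AnnotationsAttribute.visibleTag);
        Annotation annot = new Annotation("some.pkg.MyAnnotation", constpool);
        annot.addMemberValue("value", new IntegerMemberValue(constpool, 0));
        attr.addAnnotation(annot);
        mthd.getMethodInfo().addAttribute(attr);

        Class<?> clazz = cc.toClass();
        java.lang.annotation.Annotation[] annots =
                clazz.getMethod("getInteger").getAnnotations(); // method-level lookup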

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long.

    I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data). My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account.

    On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated.

    To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev). I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

  • Empty UIView with minimal drawRect: overhead

    - by Benjohn Barnes
    Hi, I have an application that has three nested views that are mechanically important, but have no visual elements:

    1. A vanilla UIView that doesn't have any content of its own and is simply used as a host for CALayers.
    2. A UIScrollView, which is queried for its origin and used to position CALayers in 3D; I really only use this view to faithfully replicate the scroll view's "mechanics".
    3. The scroll view's contents: a UIView subclass. It simply picks up touch events and passes them to a delegate; all that is important is its UIResponder machinery.

    The UIView hosting CALayers is a sibling of a UIImageView containing a background image, over which the CALayers are drawn. I'd really like to ensure that none of these empty UIViews has any drawing or compositing overhead (in time or storage), or, if that's not possible, to get this overhead as small as possible and to understand it, so that I can perhaps decide whether I should try a different approach.

    In Interface Builder, I've set all of the views not to clear their context before drawing. I've not set them to be opaque, though, because they definitely are not opaque: they are completely transparent. I've found that I need to give the scroll view's contents a transparent clear colour (again in IB, by setting the background colour's opacity to zero), and this suggests that it is being drawn, which I don't want.

    So, in short, I don't have much idea of what is and isn't getting drawn (anyone know of a tool like Quartz Debug for iPhone / the simulator?), or how to go about stopping things from getting drawn. Advice would be very welcome! Thanks, Benjohn

  • IUsable: controlling resources in a better way than IDisposable

    - by Ilya Ryzhenkov
    I wish we had a "Usable" pattern in C#, where the code block of a using construct would be passed to a function as a delegate:

        class Usable : IUsable
        {
            public void Use(Action action) // implements IUsable
            {
                // acquire resources
                action();
                // release resources
            }
        }

    and in user code:

        using (new Usable())
        {
            // this code block is converted to a delegate and passed to Use above
        }

    Pros:
      - Controlled execution, exceptions
      - The fact of using "Usable" is visible in the call stack
    Cons:
      - Cost of the delegate

    Do you think it is feasible and useful, and does it have any problems from the language point of view? Are there any pitfalls you can see?

    EDIT: David Schmitt proposed the following:

        using (new Usable(delegate()
        {
            // actions here
        })) {}

    That can work in this sample scenario, but usually you have the resource already allocated and want it to look like this:

        using (Repository.GlobalResource)
        {
            // actions here
        }

    where GlobalResource (yes, I know global resources are bad) implements IUsable. You can rewrite it as short as:

        Repository.GlobalResource.Use(() =>
        {
            // actions here
        });

    but it looks a little bit weird (and more weird if you implement the interface explicitly), and this is so often the case, in various flavours, that I thought it deserved to be new syntactic sugar in the language.
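
    For completeness, the acquire/release bracket inside Use would presumably be a try/finally, so the resource is released even when the action throws; a sketch with hypothetical AcquireResources/ReleaseResources helpers:

        // Sketch: release must happen even if the action throws.
        public void Use(Action action)
        {
            AcquireResources(); // hypothetical helper
            try
            {
                action();
            }
            finally
            {
                ReleaseResources(); // hypothetical helper
            }
        }

    That keeps the main advertised benefit (controlled execution and exception safety) without any language change, at the cost of the delegate and the slightly odd call-site syntax discussed above.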

  • Using Moq callbacks correctly according to AAA

    - by Hadi Eskandari
    I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this test, I'm mocking the service interface injected into the ViewModel, using the Moq framework. To verify that the bound object in the ViewModel is converted properly, I've used a callback:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");
            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback((saveProposalParam p) =>
            {
                Assert.That(p.plainProposal, Is.Not.Null);
                Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
                Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
            });
            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();
        }

    It is working fine but, as you may have noticed, using this mechanism the Assert and Act parts of the unit test have switched places (Assert comes before Act). Is there a better way to do this while preserving the correct AAA order?
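
    One way to restore Arrange-Act-Assert ordering is to use the callback only to capture the argument, and assert after acting; a hedged rework of the same test:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            // Arrange: capture the parameter instead of asserting in the callback.
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");
            saveProposalParam captured = null;
            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback((saveProposalParam p) => captured = p);

            // Act
            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();

            // Assert
            Assert.That(captured, Is.Not.Null);
            Assert.That(captured.plainProposal, Is.Not.Null);
            Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
            Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
        }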

  • How to serialize object containing NSData?

    - by AO
    I'm trying to serialize an object containing a number of data fields, where one of the fields is of type NSData and won't serialize. I've followed the instructions at http://www.isolated.se but my code (see below) results in the error "[NSConcreteData data]: unrecognized selector sent to instance...". How do I serialize my object?

    Header file:

        @interface Donkey : NSObject <NSCoding> {
            NSString *s;
            NSData *d;
        }

        @property (nonatomic, retain) NSString *s;
        @property (nonatomic, retain) NSData *d;

        - (NSData *)serialize;

        @end

    Implementation file:

        @implementation Donkey

        @synthesize s, d;

        static NSString* const KEY_S = @"string";
        static NSString* const KEY_D = @"data";

        - (void)encodeWithCoder:(NSCoder *)coder {
            [coder encodeObject:self.s forKey:KEY_S];
            [coder encodeObject:self.d forKey:KEY_D];
        }

        - (id)initWithCoder:(NSCoder *)coder {
            if (self = [super init]) {
                self.s = [coder decodeObjectForKey:KEY_STRING];
                self.d [coder decodeObjectForKey:KEY_DATA];
            }
            return self;
        }

        - (NSData *)serialize {
            return [NSKeyedArchiver archivedDataWithRootObject:self];
        }

        @end
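
    Two transcription-looking issues in -initWithCoder: are worth checking, offered as guesses: the assignment to self.d is missing its '=', and the decode keys (KEY_STRING, KEY_DATA) don't match the ones used to encode (KEY_S, KEY_D), so decodeObjectForKey: would hand back nil or the wrong object. A corrected sketch:

        // Sketch: decode with the same keys used to encode, and keep the '='.
        - (id)initWithCoder:(NSCoder *)coder {
            if (self = [super init]) {
                self.s = [coder decodeObjectForKey:KEY_S];
                self.d = [coder decodeObjectForKey:KEY_D];
            }
            return self;
        }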

  • Why I can't update my UIImageView?

    - by Tattat
    I made my own UIImageView subclass, like this:

        @interface MyImageView : UIImageView {
        }

    And I have an init method like this:

        - (void)initWithFrame {
            UIImage *img = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:@"myImage" ofType:@"png"]];
            CGRect cropRect = CGRectMake(175, 0, 175, 175);
            CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], cropRect);
            self = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 175, 175)];
            self.image = [UIImage imageWithCGImage:imageRef];
            [self addSubview:imageView];
            CGImageRelease(imageRef);
        }

    I want to change the image when the user touches it, so I have something like this:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UIImage *img = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:@"myImage2" ofType:@"png"]];
            CGRect cropRect = CGRectMake(0, 0, 175, 175);
            CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], cropRect);
            self = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 175, 175)];
            [self setImage:[UIImage imageWithCGImage:imageRef]];
            CGImageRelease(imageRef);
        }

    I find that I can't change the image. If I add a line [self addSubview:imageView]; in the touchesBegan method, that works, but I have no idea why I can't change the image from touchesBegan. Thanks.
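
    The touch handler assigns a brand-new UIImageView to self, which neither replaces the view on screen nor touches the one the user tapped; mutating the existing view's image should be enough. A sketch of the handler under that assumption:

        // Sketch: update the existing view instead of reassigning self.
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UIImage *img = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:@"myImage2" ofType:@"png"]];
            CGRect cropRect = CGRectMake(0, 0, 175, 175);
            CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], cropRect);
            self.image = [UIImage imageWithCGImage:imageRef]; // mutate, don't replace
            CGImageRelease(imageRef);
        }

    The same reassignment pattern appears in the init method, where [self addSubview:imageView] also references an imageView variable that isn't declared in the excerpt, so that method likely deserves the same treatment.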
