Search Results

Search found 12222 results on 489 pages for 'initial context'.


  • Serializing WPF DataTemplates and {Binding Expressions} (from PowerShell?)

    - by Jaykul
    Ok, here's the deal: I have code that works in C#, but when I call it from PowerShell, it fails. I can't quite figure it out, but it's something specific to PowerShell. Here's the relevant code calling the library (assuming you've added a reference ahead of time) from C#: public class Test { [STAThread] public static void Main() { Console.WriteLine( PoshWpf.XamlHelper.RoundTripXaml( "<TextBlock Text=\"{Binding FullName}\" xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"/>" ) ); } } Compiled into an executable, that works fine ... but if you call that method from PowerShell, it returns with no {Binding FullName} for the Text! add-type -path .\PoshWpf.dll [PoshWpf.Test]::Main() I've pasted below the entire code for the library, all wrapped up in a PowerShell Add-Type call so you can just compile it by pasting it into PowerShell (you can leave off the first and last lines if you want to paste it into a new console app in Visual Studio. To output (from PowerShell 2) as an executable, just change the -OutputType parameter to ConsoleApplication and the -OutputAssembly to PoshWpf.exe (or something). Thus, you can see that running the SAME CODE from the executable gives you the correct output. But running the two lines as above or manually calling [PoshWpf.XamlHelper]::RoundTripXaml or [PoshWpf.XamlHelper]::ConvertToXaml from PowerShell just doesn't seem to work at all ... HELP?! Add-Type -TypeDefinition @" using System; using System.ComponentModel; using System.Globalization; using System.Linq; using System.Windows; using System.Windows.Data; using System.Windows.Markup; namespace PoshWpf { public class Test { [STAThread] public static void Main() { Console.WriteLine( PoshWpf.XamlHelper.RoundTripXaml( "<TextBlock Text=\"{Binding FullName}\" xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"/>" ) ); } } public class BindingTypeDescriptionProvider : TypeDescriptionProvider { private static readonly TypeDescriptionProvider _DEFAULT_TYPE_PROVIDER = TypeDescriptor.GetProvider(typeof(Binding)); public BindingTypeDescriptionProvider() : base(_DEFAULT_TYPE_PROVIDER) { } public override ICustomTypeDescriptor GetTypeDescriptor(Type objectType, object instance) { ICustomTypeDescriptor defaultDescriptor = base.GetTypeDescriptor(objectType, instance); return instance == null ? defaultDescriptor : new BindingCustomTypeDescriptor(defaultDescriptor); } } public class BindingCustomTypeDescriptor : CustomTypeDescriptor { public BindingCustomTypeDescriptor(ICustomTypeDescriptor parent) : base(parent) { } public override PropertyDescriptorCollection GetProperties(Attribute[] attributes) { PropertyDescriptor pd; var pdc = new PropertyDescriptorCollection(base.GetProperties(attributes).Cast<PropertyDescriptor>().ToArray()); if ((pd = pdc.Find("Source", false)) != null) { pdc.Add(TypeDescriptor.CreateProperty(typeof(Binding), pd, new Attribute[] { new DefaultValueAttribute("null") })); pdc.Remove(pd); } return pdc; } } public class BindingConverter : ExpressionConverter { public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType) { return (destinationType == typeof(MarkupExtension)) ? 
true : false; } public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType) { if (destinationType == typeof(MarkupExtension)) { var bindingExpression = value as BindingExpression; if (bindingExpression == null) throw new Exception(); return bindingExpression.ParentBinding; } return base.ConvertTo(context, culture, value, destinationType); } } public static class XamlHelper { static XamlHelper() { // this is absolutely vital: TypeDescriptor.AddProvider(new BindingTypeDescriptionProvider(), typeof(Binding)); TypeDescriptor.AddAttributes(typeof(BindingExpression), new Attribute[] { new TypeConverterAttribute(typeof(BindingConverter)) }); } public static string RoundTripXaml(string xaml) { return XamlWriter.Save(XamlReader.Parse(xaml)); } public static string ConvertToXaml(object wpf) { return XamlWriter.Save(wpf); } } } "@ -language CSharpVersion3 -reference PresentationCore, PresentationFramework, WindowsBase -OutputType Library -OutputAssembly PoshWpf.dll Again, you can get an executable by just altering the last line like so: "@ -language CSharpVersion3 -reference PresentationCore, PresentationFramework, WindowsBase -OutputType ConsoleApplication -OutputAssembly PoshWpf.exe


  • Why can't I get the frame of a UIView in order to move it? The view is defined.

    - by Jann
    I am creating a nav-based app with a view that floats at the bottom of the screen (Alpha .7 most of the time). I create it like this... // stuff to create the tabbar/nav bar. // THIS ALL WORKS... // then add it to subview. [window addSubview:tabBarController.view]; // need this last line to display the window (and tab bar controller) [window makeKeyAndVisible]; // Okay, here is the code i am using to create a grey-ish strip exactly `zlocationAccuracyHeight` pixels high at `zlocationAccuracyVerticalStartPoint` starting point vertically. CGRect locationManagerAccuracyUIViewFrame = CGRectMake(0,zlocationAccuracyVerticalStartPoint,[[UIScreen mainScreen] bounds].size.width,zlocationAccuracyHeight); self.locationManagerAccuracyUIView = [[UIView alloc] initWithFrame:locationManagerAccuracyUIViewFrame]; self.locationManagerAccuracyUIView.autoresizingMask = (UIViewAutoresizingFlexibleWidth); self.locationManagerAccuracyUIView.backgroundColor = [UIColor darkGrayColor]; [self.locationManagerAccuracyUIView setAlpha:0]; CGRect locationManagerAccuracyLabelFrame = CGRectMake(0, 0,[[UIScreen mainScreen] bounds].size.width,zlocationAccuracyHeight); locationManagerAccuracyLabel = [[UILabel alloc] initWithFrame:locationManagerAccuracyLabelFrame]; if ([myGizmoClass useLocationServices] == 0) { locationManagerAccuracyLabel.text = @"GPS Accuracy: Using Manual Location"; } else { locationManagerAccuracyLabel.text = @"GPS Accuracy: One Moment Please..."; } locationManagerAccuracyLabel.font = [UIFont boldSystemFontOfSize:12]; locationManagerAccuracyLabel.textAlignment = UITextAlignmentCenter; locationManagerAccuracyLabel.textColor = [UIColor whiteColor]; locationManagerAccuracyLabel.backgroundColor = [UIColor clearColor]; [locationManagerAccuracyLabel setAlpha:0]; [self.locationManagerAccuracyUIView addSubview: locationManagerAccuracyLabel]; [window addSubview: self.locationManagerAccuracyUIView]; this all works (i am not sure about the order i create the uiview in ... meaning i am creating the frame, the view, creating the "accuracy text" and adding that to the view, then adding the uiview as a subview of the window . It works and seems correct in my logic. So, here is the tough part. I have a timer that i am testing with. I am trying to float the uiview up by 30 pix. here is that code: [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:0.3]; CGRect rect = [ self.locationManagerAccuracyUIView frame]; NSLog(@"ORIGIN: %d x %d (%@)\n",rect.origin.x,rect.origin.y,rect); rect.origin.y -= 30; [UIView commitAnimations]; The problem? rect is nill, rect.origin.x and rect.origin.y are both zero. Can anyone tell me why? Here is how i set up self.locationManagerAccuracyUIView in my files: Delegate.h UIView *locationManagerAccuracyUIView; UILabel *locationManagerAccuracyLabel; ... @property (nonatomic, retain) IBOutlet UIView *locationManagerAccuracyUIView; @property (nonatomic, retain) IBOutlet UILabel *locationManagerAccuracyLabel; Delegate.m ... @synthesize locationManagerAccuracyUIView; @synthesize locationManagerAccuracyLabel; ... BTW: Other places in another timer i DO set the alpha to fade in and out and THAT works! So locationManagerAccuracyUIView is valid and defined as a view... 
For instance: [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:0.5]; [locationManagerAccuracyLabel setAlpha:1]; [UIView commitAnimations]; [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:0.5]; [self.locationManagerAccuracyUIView setAlpha:.7]; [UIView commitAnimations]; ...and it DOES work. Can anyone help me? As an aside: I know I used self.locationManagerAccuracyUIView and locationManagerAccuracyUIView interchangeably while typing this; I tried both to see whether that was the issue, and it is not. :) Thanks


  • How do I get a delete trigger working using fluent API in CTP5?

    - by user668472
    I am having trouble getting referential integrity dialled down enough to allow my delete trigger to fire. I have a dependent entity with three FKs. I want it to be deleted when any of the principal entities is deleted. For principal entities Role and OrgUnit (see below) I can rely on conventions to create the required one-many relationship and cascade delete does what I want, ie: Association is removed when either principal is deleted. For Member, however, I have multiple cascade delete paths (not shown here) which SQL Server doesn't like, so I need to use fluent API to disable cascade deletes. Here is my (simplified) model: public class Association { public int id { get; set; } public int roleid { get; set; } public virtual Role role { get; set; } public int? memberid { get; set; } public virtual Member member { get; set; } public int orgunitid { get; set; } public int OrgUnit orgunit { get; set; } } public class Role { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } public class Member { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } public class Organization { public int id { get; set; } public virtual ICollection<Association> associations { get; set; } } My first run at fluent API code looks like this: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) { DbDatabase.SetInitializer<ConfDB_Model>(new ConfDBInitializer()); modelBuilder.Entity<Member>() .HasMany(m=>m.assocations) .WithOptional(a=>a.member) .HasForeignKey(a=>a.memberId) .WillCascadeOnDelete(false); } My seed function creates the delete trigger: protected override void Seed(ConfDB_Model context) { context.Database.SqlCommand("CREATE TRIGGER MemberAssocTrigger ON dbo.Members FOR DELETE AS DELETE Assocations FROM Associations, deleted WHERE Associations.memberId = deleted.id"); } PROBLEM: When I run this, create a Role, a Member, an OrgUnit, and an Association tying the three together all is fine. When I delete the Role, the Association gets cascade deleted as I expect. HOWEVER when I delete the Member I get an exception with a referential integrity error. I have tried setting ON CASCADE SET NULL because my memberid FK is nullable but SQL complains again about multiple cascade paths, so apparently I can cascade nothing in the Member-Association relationship. To get this to work I must add the following code to Seed(): context.Database.SqlCommand("ALTER TABLE dbo.ACLEntries DROP CONSTRAINT member_aclentries"); As you can see, this drops the constraint created by the model builder. QUESTION: this feels like a complete hack. Is there a way using fluent API for me to say that referential integrity should NOT be checked, or otherwise to get it to relax enough for the Member delete to work and allow the trigger to be fired? Thanks in advance for any help you can offer. Although fluent APIs may be "fluent" I find them far from intuitive.


  • Screenshot in Android

    - by ujjawal
    The following is the code I am using to take a screenshot using GLSurfaceView, but I don't know why the onDrawFrame() method in my GLSurfaceView.Renderer class is never called. Could someone look at the code below and point out what I am doing wrong? public class MainActivity extends Activity { private GLSurfaceView mGLView; int x,y,w,h; Display disp; /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); // ToDo add your GUI initialization code here setContentView(R.layout.main); x=0; y=0; disp = getWindowManager().getDefaultDisplay(); w = disp.getWidth(); h = disp.getHeight(); mGLView = new ClearGLSurfaceView(this); } class ClearGLSurfaceView extends GLSurfaceView { public ClearGLSurfaceView(Context context) { super(context); setDebugFlags(DEBUG_CHECK_GL_ERROR | DEBUG_LOG_GL_CALLS); mRenderer = new ClearRenderer(); setRenderer(mRenderer); } ClearRenderer mRenderer; } class ClearRenderer implements GLSurfaceView.Renderer { public void onSurfaceCreated(GL10 gl, EGLConfig config) { // Do nothing special. } public void onSurfaceChanged(GL10 gl, int w, int h) { //gl.glViewport(0, 0, w, h); } public void onDrawFrame(GL10 gl) { //gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); int b[]=new int[w*(y+h)]; int bt[]=new int[w*h]; IntBuffer ib=IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(x, 0, w, y+h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib); for(int i=0, k=0; i<h; i++, k++) { // the OpenGL bitmap layout is incompatible with the Android bitmap layout, so some correction is needed for(int j=0; j<w; j++) { int pix=b[i*w+j]; int pb=(pix>>16)&0xff; int pr=(pix<<16)&0x00ff0000; int pix1=(pix&0xff00ff00) | pr | pb; bt[(h-k-1)*w+j]=pix1; } } Bitmap bmp = Bitmap.createBitmap(bt, w, h,Bitmap.Config.ARGB_8888); try { File f = new File("/sdcard/testpicture.png"); f.createNewFile(); FileOutputStream fos=new FileOutputStream(f); bmp.compress(CompressFormat.PNG, 100, fos); try { fos.flush(); } catch (IOException e) { e.printStackTrace(); } try { fos.close(); } catch (IOException e) { e.printStackTrace(); } } catch (FileNotFoundException e) { e.printStackTrace(); } catch(IOException e) { e.printStackTrace(); } } } } Please help me out; I have just started learning to work on Android.
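
    A note on the likely cause, offered as a guess rather than a confirmed diagnosis: GLSurfaceView.Renderer has no onDraw() callback at all; the per-frame hook is onDrawFrame(), and it only fires once the GLSurfaceView is attached to the window and has a surface. In the code above, mGLView is created but never attached, because setContentView(R.layout.main) installs the layout as the content view instead. A minimal Java sketch of the missing wiring, written against the post's own fields and classes:

        // Revised onCreate() inside MainActivity from the post.
        @Override
        public void onCreate(Bundle icicle) {
            super.onCreate(icicle);
            x = 0; y = 0;
            disp = getWindowManager().getDefaultDisplay();
            w = disp.getWidth();
            h = disp.getHeight();
            mGLView = new ClearGLSurfaceView(this);
            // Attach the GL view; without this no surface is created and the
            // renderer callbacks never run. Replaces setContentView(R.layout.main).
            setContentView(mGLView);
            // Optional: render only on demand, then call mGLView.requestRender()
            // at the moment the screenshot should be captured.
            mGLView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        }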


  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iphone apps for several weeks. Now I encounter an animation problem that I can't figure out how to resolve. Mayhbe you can help. Here is the details (a little long, bear with me): Basically the effect I want to achieve is, when user click a button, a loading view pops up, hiding the whole screen; and then the apps does a lot of heavy computation, which takes a few seconds. Once the computation is done, soem result views (something likes checkers on a checker board) are rendered under the loading view. Once all result views are rendered, I used animation animation to remove the loading view nand show the result views to the user. Here is what I do: when user click a button, run this code: [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:1.0]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES]; [UIView setAnimationDelegate:self]; [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)]; // use a really high index number so it will always on top [self.view insertSubview:loadingViewController.view atIndex:1000]; [UIView commitAnimations]; In the "loadingViewInserted" function, it calls another function doing the heavy computation work. Once the computation is done, a lot of result views (like checkers on a checker board) are rendered under the loading view. for(int colIndex = 1; colIndex <= result.columns; colIndex++) { for(int rowIndex = 1; rowIndex <= result.rows; rowIndex++) { ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]]; [self.view addSubview:rv]; } } Once all result views are added, following animation is invoked to remove the loading view: [UIView beginAnimations:nil context:nil]; [UIView setAnimationDuration:1.0]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES]; [loadingViewController.view removeFromSuperview]; [UIView commitAnimations]; By doing this, most of the time (maybe 90%) it does exactly what I want. However, sometime I see some weird result: the loading view shows up first as expected, then before it disappears, some result views, which suppose to be under the loading view, suddenly appears on top of the loading view; and some of them are partial rendered. And then the loading view curled up, and everything looks normal again. The weird situation only lasts for less than a second, but already bad enough to screw up the UI. I have tried all different kinds of thing to fix this (using another thread to remove the loading view, make the loading view non-transparent), but none of them works. The only thing that makes a little better is, I hide all the result views first; after the last animation finished, in its call back, unhide all result views. But this loses the nice effect that when curling up the loading view, the results are already there. At this point, I really think this is a bug in iphone (I compile it with OS 3.0) OS. Or maybe you can point out what I have done wrong (or could do differently). (thanks for finishing this long post, :-) )


  • Resolving a Forward Declaration Issue Involving a State Machine in C++

    - by hypersonicninja
    I've recently returned to C++ development after a hiatus, and have a question regarding implementation of the State Design Pattern. I'm using the vanilla pattern, exactly as per the GoF book. My problem is that the state machine itself is based on some hardware used as part of an embedded system - so the design is fixed and can't be changed. This results in a circular dependency between two of the states (in particular), and I'm trying to resolve this. Here's the simplified code (note that I tried to resolve this by using headers as usual but still had problems - I've omitted them in this code snippet): #include <iostream> #include <memory> using namespace std; class Context { public: friend class State; Context() { } private: State* m_state; }; class State { public: State() { } virtual void Trigger1() = 0; virtual void Trigger2() = 0; }; class LLT : public State { public: LLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class ALL : public State { public: ALL() { } void Trigger1() { new LLT(); } void Trigger2() { new DH(); } }; // DL needs to 'know' about DH. class DL : public State { public: DL() { } void Trigger1() { new ALL(); } void Trigger2() { new DH(); } }; class HLT : public State { public: HLT() { } void Trigger1() { new DH(); } void Trigger2() { new DL(); } }; class AHL : public State { public: AHL() { } void Trigger1() { new DH(); } void Trigger2() { new HLT(); } }; // DH needs to 'know' about DL. class DH : public State { public: DH () { } void Trigger1() { new AHL(); } void Trigger2() { new DL(); } }; int main() { auto_ptr<LLT> llt (new LLT); auto_ptr<ALL> all (new ALL); auto_ptr<DL> dl (new DL); auto_ptr<HLT> hlt (new HLT); auto_ptr<AHL> ahl (new AHL); auto_ptr<DH> dh (new DH); return 0; } The problem is basically that in the State Pattern, state transitions are made by invoking the the ChangeState method in the Context class, which invokes the constructor of the next state. Because of the circular dependency, I can't invoke the constructor because it's not possible to pre-define both of the constructors of the 'problem' states. I had a look at this article, and the template method which seemed to be the ideal solution - but it doesn't compile and my knowledge of templates is a rather limited... The other idea I had is to try and introduce a Helper class to the subclassed states, via multiple inheritance, to see if it's possible to specify the base class's constructor and have a reference to the state subclasse's constructor. But I think that was rather ambitious... Finally, would a direct implmentation of the Factory Method Design Pattern be the best way to resolve the entire problem?
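
    For what it is worth, the circularity here is a C++ declaration-order problem rather than a design flaw: the conventional fix is to forward-declare the state classes, keep only the member-function declarations inside the class bodies, and define DL::Trigger2 and DH::Trigger2 out of line after both classes are complete; no template machinery or factory is strictly required. As a structural contrast, here is the mutually-referencing pair sketched in Java, where declaration order imposes no such constraint (a simplified sketch of the pattern, not a drop-in replacement for the C++ design):

        // Two states that each construct the other; in Java this compiles in
        // any order, which is exactly the property the C++ version lacks.
        interface State {
            State trigger1();
            State trigger2();
        }

        class DL implements State {
            public State trigger1() { return new DH(); } // the real machine returns ALL here
            public State trigger2() { return new DH(); }
        }

        class DH implements State {
            public State trigger1() { return new DL(); } // the real machine returns AHL here
            public State trigger2() { return new DL(); }
        }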


  • Create and Share a File (Also a mysterious error)

    - by Kirk
    My goal is to create a XML file and then send it through the share Intent. I'm able to create a XML file using this code FileOutputStream outputStream = context.openFileOutput(fileName, Context.MODE_WORLD_READABLE); PrintStream printStream = new PrintStream(outputStream); String xml = this.writeXml(); // get XML here printStream.println(xml); printStream.close(); I'm stuck trying to retrieve a Uri to the output file in order to share it. I first tried to access the file by converting the file to a Uri File outFile = context.getFileStreamPath(fileName); return Uri.fromFile(outFile); This returns file:///data/data/com.my.package/files/myfile.xml but I cannot appear to attach this to an email, upload, etc. If I manually check the file length, it's proper and shows there is a reasonable file size. Next I created a content provider and tried to reference the file and it isn't a valid handle to the file. The ContentProvider doesn't ever seem to be called a any point. Uri uri = Uri.parse("content://" + CachedFileProvider.AUTHORITY + "/" + fileName); return uri; This returns content://com.my.package.provider/myfile.xml but I check the file and it's zero length. How do I access files properly? Do I need to create the file with the content provider? If so, how? Update Here is the code I'm using to share. If I select Gmail, it does show as an attachment but when I send it gives an error Couldn't show attachment and the email that arrives has no attachment. public void onClick(View view) { Log.d(TAG, "onClick " + view.getId()); switch (view.getId()) { case R.id.share_cancel: setResult(RESULT_CANCELED, getIntent()); finish(); break; case R.id.share_share: MyXml xml = new MyXml(); Uri uri; try { uri = xml.writeXmlToFile(getApplicationContext(), "myfile.xml"); //uri is "file:///data/data/com.my.package/files/myfile.xml" Log.d(TAG, "Share URI: " + uri.toString() + " path: " + uri.getPath()); File f = new File(uri.getPath()); Log.d(TAG, "File length: " + f.length()); // shows a valid file size Intent shareIntent = new Intent(); shareIntent.setAction(Intent.ACTION_SEND); shareIntent.putExtra(Intent.EXTRA_STREAM, uri); shareIntent.setType("text/plain"); startActivity(Intent.createChooser(shareIntent, "Share")); } catch (FileNotFoundException e) { e.printStackTrace(); } break; } } I noticed that there is an Exception thrown here from inside createChooser(...), but I can't figure out why it's thrown. E/ActivityThread(572): Activity com.android.internal.app.ChooserActivity has leaked IntentReceiver com.android.internal.app.ResolverActivity$1@4148d658 that was originally registered here. Are you missing a call to unregisterReceiver()? I've researched this error and can't find anything obvious. Both of these links suggest that I need to unregister a receiver. Chooser Activity Leak - Android Why does Intent.createChooser() need a BroadcastReceiver and how to implement? I have a receiver setup, but it's for an AlarmManager that is set elsewhere and doesn't require the app to register / unregister. Code for openFile(...) In case it's needed, here is the content provider I've created. public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException { String fileLocation = getContext().getCacheDir() + "/" + uri.getLastPathSegment(); ParcelFileDescriptor pfd = ParcelFileDescriptor.open(new File(fileLocation), ParcelFileDescriptor.MODE_READ_ONLY); return pfd; }
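
    One discrepancy visible in the posted code, offered as a hedged guess at the zero-length reads: the file is written with context.openFileOutput(), which stores it under the app's files directory (getFilesDir()), but the provider's openFile() resolves the Uri against getContext().getCacheDir(), so the content:// path points where nothing was ever written. Gmail also cannot read a file:// Uri into another app's private internal storage, which would explain the failed attachment in the first approach; the ChooserActivity "leaked IntentReceiver" warning appears to come from the chooser itself and is commonly reported independently of attachment problems. A minimal Java sketch of an openFile() that matches the write location:

        // Inside CachedFileProvider: resolve against getFilesDir(), the
        // directory openFileOutput() actually wrote to.
        public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
            File file = new File(getContext().getFilesDir(), uri.getLastPathSegment());
            return ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
        }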


  • How to debug KVO

    - by user8472
    In my program I use KVO manually to observe changes to values of object properties. I receive an EXC_BAD_ACCESS signal at the following line of code inside a custom setter: [self willChangeValueForKey:@"mykey"]; The weird thing is that this happens when a factory method calls the custom setter and there should not be any observers around. I do not know how to debug this situation. Update: The way to list all registered observers is observationInfo. It turned out that there was indeed an object listed that points to an invalid address. However, I have no idea at all how it got there. Update 2: Apparently, the same object and method callback can be registered several times for a given object - resulting in identical entries in the observed object's observationInfo. When removing the registration only one of these entries is removed. This behavior is a little counter-intuitive (and it certainly is a bug in my program to add multiple entries at all), but this does not explain how spurious observers can mysteriously show up in freshly allocated objects (unless there is some caching/reuse going on that I am unaware of). Modified question: How can I figure out WHERE and WHEN an object got registered as an observer? Update 3: Specific sample code. ContentObj is a class that has a dictionary as a property named mykey. It overrides: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"mykey"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } A couple of properties have getters and setters as follows: - (CGFloat)value { return [[[self mykey] objectForKey:@"value"] floatValue]; } - (void)setValue:(CGFloat)aValue { [self willChangeValueForKey:@"mykey"]; [[self mykey] setObject:[NSNumber numberWithFloat:aValue] forKey:@"value"]; [self didChangeValueForKey:@"mykey"]; } The container class has a property contents of class NSMutableArray which holds instances of class ContentObj. It has a couple of methods that manually handle registrations: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"contents"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } - (void)observeContent:(ContentObj *)cObj { [cObj addObserver:self forKeyPath:@"mykey" options:0 context:NULL]; } - (void)removeObserveContent:(ContentObj *)cObj { [cObj removeObserver:self forKeyPath:@"mykey"]; } - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (([keyPath isEqualToString:@"mykey"]) && ([object isKindOfClass:[ContentObj class]])) { [self willChangeValueForKey:@"contents"]; [self didChangeValueForKey:@"contents"]; } } There are several methods in the container class that modify contents. They look as follows: - (void)addContent:(ContentObj *)cObj { [self willChangeValueForKey:@"contents"]; [self observeDatum:cObj]; [[self contents] addObject:cObj]; [self didChangeValueForKey:@"contents"]; } And a couple of others that provide similar functionality to the array. They all work by adding/removing themselves as observers. Obviously, anything that results in multiple registrations is a bug and could sit somewhere hidden in these methods. My question targets strategies on how to debug this kind of situation. 
Alternatively, please feel free to provide an alternative strategy for implementing this kind of notification/observer pattern.
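
    On the "where and when" question: one language-agnostic debugging strategy is to make registration itself observable, failing fast on duplicates and capturing a stack trace at every add so stray observers can be traced after the fact; the Objective-C analog is to override (or swizzle) addObserver:forKeyPath:options:context: on the observed class and log a backtrace there. A hedged Java sketch of the bookkeeping idea, with all names hypothetical:

        import java.util.ArrayList;
        import java.util.List;

        // Fail-fast observer registry: refuses duplicate registrations and
        // records a stack trace per add, so "where and when" is answerable.
        class ObserverRegistry<T> {
            private final List<T> observers = new ArrayList<T>();
            private final List<Throwable> registrationSites = new ArrayList<Throwable>();

            synchronized void add(T observer) {
                if (observers.contains(observer)) {
                    throw new IllegalStateException("duplicate registration: " + observer);
                }
                observers.add(observer);
                registrationSites.add(new Throwable("registered here")); // captures the call stack
            }

            synchronized void remove(T observer) {
                int i = observers.indexOf(observer);
                if (i < 0) throw new IllegalStateException("not registered: " + observer);
                observers.remove(i);
                registrationSites.remove(i);
            }

            synchronized void dumpRegistrationSites() {
                for (Throwable t : registrationSites) t.printStackTrace();
            }
        }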


  • How to use XSLT to tag specific nodes with unique, sequential, increasing integer ids?

    - by ~otakuj462
    Hi, I'm trying to use XSLT to transform a document by tagging a group of XML nodes with integer ids, starting at 0, and increasing by one for each node in the group. The XML passed into the stylesheet should be echoed out, but augmented to include this extra information. Just to be clear about what I am talking about, here is how this transformation would be expressed using DOM: states = document.getElementsByTagName("state"); for( i = 0; i < states.length; i++){ states.stateNum = i; } This is very simple with DOM, but I'm having much more trouble doing this with XSLT. The current strategy I've devised has been to start with the identity transformation, then create a global variable which selects and stores all of the nodes that I wish to number. I then create a template that matches that kind of node. The idea, then, is that in the template, I would look up the matched node's position in the global variable nodelist, which would give me a unique number that I could then set as an attribute. The problem with this approach is that the position function can only be used with the context node, so something like the following is illegal: <template match="state"> <variable name="stateId" select="@id"/> <variable name="uniqueStateNum" select="$globalVariable[@id = $stateId]/position()"/> </template> The same is true for the following: <template match="state"> <variable name="stateId" select="@id" <variable name="stateNum" select="position($globalVariable[@id = $stateId])/"/> </template> In order to use position() to look up the position of an element in $globalVariable, the context node must be changed. I have found a solution, but it is highly suboptimal. Basically, in the template, I use for-each to iterate through the global variable. For-each changes the context node, so this allows me to use position() in the way I described. The problem is that this turns what would normally be an O(n) operation into an O(n^2) operation, where n is the length of the nodelist, as this require iterating through the whole list whenever the template is matched. I think that there must be a more elegant solution. Altogether, here is my current (slightly simplified) xslt stylesheet: <?xml version="1.0"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:s="http://www.w3.org/2005/07/scxml" xmlns="http://www.w3.org/2005/07/scxml" xmlns:c="http://msdl.cs.mcgill.ca/" version="1.0"> <xsl:output method="xml"/> <!-- we copy them, so that we can use their positions as identifiers --> <xsl:variable name="states" select="//s:state" /> <!-- identity transform --> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <xsl:template match="s:state"> <xsl:variable name="stateId"> <xsl:value-of select="@id"/> </xsl:variable> <xsl:copy> <xsl:apply-templates select="@*"/> <xsl:for-each select="$states"> <xsl:if test="@id = $stateId"> <xsl:attribute name="stateNum" namespace="http://msdl.cs.mcgill.ca/"> <xsl:value-of select="position()"/> </xsl:attribute> </xsl:if> </xsl:for-each> <xsl:apply-templates select="node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> I'd appreciate any advice anyone can offer. Thanks.
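
    Two notes that may help. Within XSLT itself, the usual answer is <xsl:number level="any" count="s:state"/> inside the matching template, which yields the node's 1-based document-order index without the explicit for-each scan over a global variable; the XPath count(preceding::s:state) gives the 0-based variant the question asks for. And since the post frames the intent in DOM terms, here is its opening DOM sketch made concrete in Java for reference:

        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        // DOM version of the numbering pass described at the top of the post:
        // tag every <state> element with a sequential, zero-based stateNum.
        // (For the namespaced SCXML input, getElementsByTagNameNS applies.)
        public class StateNumberer {
            public static void number(Document document) {
                NodeList states = document.getElementsByTagName("state");
                for (int i = 0; i < states.getLength(); i++) {
                    ((Element) states.item(i)).setAttribute("stateNum", Integer.toString(i));
                }
            }
        }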


  • TableView doesn't show any data (Core Data) - app crashes

    - by brush51
    Hey @all, i cant read my data from my database. I have an app with a tabbarcontroller. in the first tab the iphone camera takes a picture from a barcode and send the result to another view (CameraReturnDetailViewController). In CameraReturnDetailViewController is the savebutton, and here is the code from this save button: - (IBAction)saveAndQuitScan:(id) sender { XLog(@"saveAndQuitScan button wurde geklickt!"); ProjectQRCodeAppDelegate *appDelegate = [[UIApplication sharedApplication]delegate]; NSManagedObjectContext *context = [appDelegate managedObjectContext]; NSManagedObject *newData; newData = [NSEntityDescription insertNewObjectForEntityForName:@"BarcodeDaten" inManagedObjectContext:context]; [newData setValue:dataLabel.text forKey:@"Barcode_CD"]; NSError *error; [context save:&error]; //Aktuelle ansicht (self) animiert verlassen [self dismissModalViewControllerAnimated:YES]; // Nachdem die ansicht verlassen wurde, // auf das zweite Tab wechseln(scanverlauf) /** TO DO - Funktioniert noch nicht **/ [self.tabBarController setSelectedIndex:1]; } Now, my aim is to show the taba in the second tab, in a TableView (ScansViewController): - (void)viewDidLoad { [super viewDidLoad]; if (managedObjectContext_ == nil) { managedObjectContext_ = [(ProjectQRCodeAppDelegate *)[[UIApplication sharedApplication]delegate] managedObjectContext]; NSLog(@"After managedObjectContext: %@", managedObjectContext_); } myTableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame] style:UITableViewStylePlain]; myTableView.delegate = self; myTableView.dataSource = self; myTableView.autoresizesSubviews = YES; self.navigationItem.title = @"Code Liste"; self.view = myTableView; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [itemsList count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } return cell; } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSString *selectDay = [NSString stringWithFormat:@"%d", indexPath.row]; TableDetailViewController *fvController = [[TableDetailViewController alloc] initWithNibName:@"TableDetailViewController" bundle:[NSBundle mainBundle]]; fvController.selectDay = selectDay; [self.navigationController pushViewController:fvController animated:YES]; [fvController release]; fvController = nil; } - (void) configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath { NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath]; cell.textLabel.text = [[managedObject valueForKey:@"Barcode_CD"] description]; } - (NSFetchedResultsController *) fetchedResultsController { if (fetchedResultsController_ !=nil) { return fetchedResultsController_; } NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"BarcodeDaten" inManagedObjectContext:self.managedObjectContext]; [fetchRequest setEntity:entity]; [fetchRequest setFetchBatchSize:20]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Barcode_CD" ascending:NO]; NSArray *sortDescriptors = 
[[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:self.managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; [aFetchedResultsController release]; [fetchRequest release]; [sortDescriptor release]; [sortDescriptors release]; NSError *error = nil; if (![fetchedResultsController_ performFetch:&error]) { XLog(@"Error: %@, %@", error, [error userInfo]); abort(); } return fetchedResultsController_; } At first i get this error when i choosed the second tab(ScansViewController): " Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'BarcodeDaten'' " The Name is correct but i dont understand my mistake. No data is showed in the Tableview, why? Have I missed something..? Or something wrong? Thanks for help, brush51


  • Subscribing to a message sent from another application

    - by tonga
    I have two Java applications: AppOne and AppTwo. In AppOne, I used a JMS sender to publish a Topic. In both AppOne and AppTwo, I used a JMS MessageListener to subscribe to the message published by AppOne. I used ActiveMQ as my JMS broker and Spring JMS. However, I can only see the echoed message received by AppOne message listener. But I can't see the echoed message received by AppTwo listener. AppOne message listener is in the same application/project as the message publisher. But AppTwo message listener is in a different application/project. The AppOne listener class is: public class CustomerStatusListener implements MessageListener { public void onMessage(Message message) { if (message instanceof TextMessage) { try { System.out.println("Subscriber 1 got you! The message is: " + ((TextMessage) message).getText()); } catch (JMSException ex) { throw new RuntimeException(ex); } } else { throw new IllegalArgumentException("Message must be of type TextMessage"); } } } It is invoked by a test calss JmsTest in AppOne: public class JmsTest { public static void main(String[] args) { ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("message-bean.xml"); CustomerStatusSender messageSender = (CustomerStatusSender) context.getBean("customerMessageSender"); messageSender.simpleSend(); context.close(); } } The AppTwo listener class is: public class CustomerStatusMessageListener implements MessageListener { public void onMessage(Message message) { if (message instanceof TextMessage) { try { System.out.println("Subscriber 2 got you! The message is: " + ((TextMessage) message).getText()); } catch (JMSException ex) { throw new RuntimeException(ex); } } else { throw new IllegalArgumentException( "Message must be of type TextMessage"); } } } The bean definition file for AppTwo where the Subscriber 2 lives in is: <bean id="connectionFactoryBean" class="org.apache.activemq.ActiveMQConnectionFactory"> <property name="brokerURL" value="tcp://localhost:61616" /> </bean> <!-- this is the Message Driven POJO (MDP) --> <bean id="customerStatusListener" class="com.mydomain.jms.CustomerStatusMessageListener" /> <!-- and this is the message listener container --> <bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer"> <property name="connectionFactory" ref="connectionFactoryBean" /> <property name="destination" ref="topicBean" /> <property name="messageListener" ref="customerStatusListener" /> </bean> The bean id topicBean is the bean that is associated with the publisher. If both listener received the message sent from AppOne, I would have seen two echoed messages: Subscriber 1 got you! The message is: hello world Subscriber 2 got you! The message is: hello world But right now I only see the first line which means only the listener in AppOne got the message. So how to let the listener in AppTwo get the message? The first listener is in the same application as the sender, so it is easy to understand that it can get the message. But how about the second listener which is in a different application? What is the correct way to subscribe to a JMS Topic published in another application?
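
    A plausible explanation, offered as a guess rather than a confirmed diagnosis: plain topic subscriptions are non-durable, so a subscriber only receives messages published while it is connected. If JmsTest in AppOne publishes and exits before AppTwo's DefaultMessageListenerContainer has started and subscribed, the broker has nobody to deliver to and the message is simply gone. It is also worth confirming that topicBean in AppTwo names exactly the same destination as in AppOne, and that both applications connect to the same broker URL (tcp://localhost:61616) rather than an embedded vm:// broker. One way to rule out startup timing is a durable subscription, sketched here in plain JMS; the topic name, client id, and subscription name are placeholders:

        import javax.jms.Connection;
        import javax.jms.MessageConsumer;
        import javax.jms.Session;
        import javax.jms.Topic;
        import org.apache.activemq.ActiveMQConnectionFactory;

        public class AppTwoDurableSubscriber {
            public static void main(String[] args) throws Exception {
                ActiveMQConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://localhost:61616");
                Connection connection = factory.createConnection();
                connection.setClientID("AppTwo"); // a client id is required for durable subscriptions
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("customerStatusTopic"); // use the name topicBean resolves to
                // Messages published while AppTwo was offline are delivered on reconnect.
                MessageConsumer consumer = session.createDurableSubscriber(topic, "AppTwoSubscription");
                consumer.setMessageListener(new CustomerStatusMessageListener());
                Thread.sleep(Long.MAX_VALUE); // keep the JVM alive to receive messages
            }
        }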


  • Android: how to match text with images by connecting them with lines

    - by Shirisha
    I am trying to create an app that matches text with the appropriate image by connecting the two with a line, exactly like what is shown in the image below. Can anyone give me an idea? This is my main class: public class MatchActivity extends Activity { ArrayAdapter<String> listadapter; float x1; float y1; float x2; float y2; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); String[] s1 = { "smiley1", "smiley2", "smiley3" }; ListView lv = (ListView) findViewById(R.id.text_list); ArrayList<String> list = new ArrayList<String>(); list.addAll(Arrays.asList(s1)); listadapter = new ArrayAdapter<String>(this, R.layout.rowtext, s1); lv.setAdapter(listadapter); GridView gv = (GridView) findViewById(R.id.image_list); gv.setAdapter(new ImageAdapter(this)); lv.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> arg0, View v, int arg2, long arg3){ x1=v.getX(); y1=v.getY(); Log.d("list","text positions x1:"+x1+" y1:"+y1); } }); gv.setOnItemClickListener(new OnItemClickListener() { public void onItemClick(AdapterView<?> arg0, View v, int arg2, long arg3){ DrawView draw=new DrawView(MatchActivity.this); x2=v.getX(); y2=v.getY(); draw.position1.add(x1); draw.position1.add(y1); draw.position2.add(x2); draw.position2.add(y2); Log.d("list","image positions x2:"+x2+" y2:"+y2); LinearLayout ll=(LinearLayout)findViewById(R.id.draw_line); ll.addView(draw); } }); } } This is my drawing class, which draws the lines: public class DrawView extends View { Paint paint = new Paint(); private List<Float> position1=new ArrayList<Float>(); private List<Float> position2=new ArrayList<Float>(); public DrawView(Context context) { super(context); invalidate(); Log.d("drawview","In DrawView class position1:"+position1+" position2:"+position2); } @Override public void onDraw(Canvas canvas) { super.onDraw(canvas); Log.d("on draw","IN onDraw() position1:"+position1+" position2:"+position2); assert position1.size() == position2.size(); for (int i = 0; i < position1.size(); i += 2) { float x1 = position1.get(i); float y1 = position1.get(i + 1); float x2 = position2.get(i); float y2 = position2.get(i + 1); paint.setColor(Color.BLACK); paint.setStrokeWidth(3); canvas.drawLine(x1,y1, x2,y2, paint); } } } Thanks in advance.
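
    One structural issue, offered as a hedged reading of the code above: the grid's click handler constructs a brand-new DrawView on every tap, so each overlay starts with empty position lists and previously drawn lines are lost (and since position1/position2 are private, MatchActivity cannot legally reach into them anyway). A sketch of a single persistent overlay that owns its line list; the class name LineOverlay is hypothetical. Create it once in onCreate, add it to R.id.draw_line once, and call addLine() whenever a text/image pair has been selected:

        import java.util.ArrayList;
        import java.util.List;
        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.Color;
        import android.graphics.Paint;
        import android.view.View;

        // One persistent overlay; each matched pair appends a line and
        // triggers a redraw via invalidate().
        public class LineOverlay extends View {
            private final List<float[]> lines = new ArrayList<float[]>();
            private final Paint paint = new Paint();

            public LineOverlay(Context context) {
                super(context);
                paint.setColor(Color.BLACK);
                paint.setStrokeWidth(3);
            }

            public void addLine(float x1, float y1, float x2, float y2) {
                lines.add(new float[] { x1, y1, x2, y2 });
                invalidate(); // schedules onDraw() with the updated list
            }

            @Override
            protected void onDraw(Canvas canvas) {
                super.onDraw(canvas);
                for (float[] l : lines) {
                    canvas.drawLine(l[0], l[1], l[2], l[3], paint);
                }
            }
        }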


  • jsTree v1.0 - How to effectively manipulate backend data to render the trees and operate correctly?

    - by Jean Paul
    Backend info: PHP 5 / MySQL URL: http://github.com/downloads/vakata/jstree/jstree_pre1.0_fix_1.zip Table structure for table discussions_tree -- CREATE TABLE IF NOT EXISTS `discussions_tree` ( `id` int(11) NOT NULL AUTO_INCREMENT, `parent_id` int(11) NOT NULL DEFAULT '0', `user_id` int(11) NOT NULL DEFAULT '0', `label` varchar(16) DEFAULT NULL, `position` bigint(20) unsigned NOT NULL DEFAULT '0', `left` bigint(20) unsigned NOT NULL DEFAULT '0', `right` bigint(20) unsigned NOT NULL DEFAULT '0', `level` bigint(20) unsigned NOT NULL DEFAULT '0', `type` varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci DEFAULT NULL, `h_label` varchar(16) NOT NULL DEFAULT '', `fulllabel` varchar(255) DEFAULT NULL, UNIQUE KEY `uidx_3` (`id`), KEY `idx_1` (`user_id`), KEY `idx_2` (`parent_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ; /*The first element should in my understanding not even be shown*/ INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (0, 0, 0, 'Contacts', 0, 1, 1, 0, NULL, '', NULL); INSERT INTO `discussions_tree` (`id`, `parent_id`, `user_id`, `label`, `position`, `left`, `right`, `level`, `type`, `h_label`, `fulllabel`) VALUES (1, 0, 0, 'How to Tag', 1, 2, 2, 0, 'drive', '', NULL); Front End : I've simplified the logic, it has 6 trees actually inside of a panel and that works fine $array = array("Discussions"); $id_arr = array("d"); $nid = 0; foreach ($array as $k=> $value) { $nid++; ?> <li id="<?=$value?>" class="label"> <a href='#<?=$value?>'><span> <?=$value?> </span></a> <div class="sub-menu" style="height:auto; min-height:120px; background-color:#E5E5E5" > <div class="menu" id="menu_<?=$id_arr[$k]?>" style="position:relative; margin-left:56%"> <img src="./js/jsTree/create.png" alt="" id="create" title="Create" > <img src="./js/jsTree/rename.png" alt="" id="rename" title="Rename" > <img src="./js/jsTree/remove.png" alt="" id="remove" title="Delete"> <img src="./js/jsTree/cut.png" alt="" id="cut" title="Cut" > <img src="./js/jsTree/copy.png" alt="" id="copy" title="Copy"> <img src="./js/jsTree/paste.png" alt="" id="paste" title="Paste"> </div> <div id="<?=$id_arr[$k]?>" class="jstree_container"></div> </div> </li> <!-- JavaScript neccessary for this tree : <?=$value?> --> <script type="text/javascript" > jQuery(function ($) { $("#<?=$id_arr[$k]?>").jstree({ // List of active plugins used "plugins" : [ "themes", "json_data", "ui", "crrm" , "hotkeys" , "types" , "dnd", "contextmenu"], // "ui" :{ "initially_select" : ["#node_"+ $nid ] } , "crrm": { "move": { "always_copy": "multitree" }, "input_width_limit":128 }, "core":{ "strings":{ "new_node" : "New Tag" }}, "themes": {"theme": "classic"}, "json_data" : { "ajax" : { "url" : "./js/jsTree/server-<?=$id_arr[$k]?>.php", "data" : function (n) { // the result is fed to the AJAX request `data` option return { "operation" : "get_children", "id" : n.attr ? n.attr("id").replace("node_","") : 1, "state" : "", "user_id": <?=$uid?> }; } } } , "types" : { "max_depth" : -1, "max_children" : -1, "types" : { // The default type "default" : { "hover_node":true, "valid_children" : [ "default" ], }, // The `drive` nodes "drive" : { // can have files and folders inside, but NOT other `drive` nodes "valid_children" : [ "default", "folder" ], "hover_node":true, "icon" : { "image" : "./js/jsTree/root.png" }, // those prevent the functions with the same name to be used on `drive` nodes.. 
internally the `before` event is used "start_drag" : false, "move_node" : false, "remove_node" : false } } }, "contextmenu" : { "items" : customMenu , "select_node": true} }) //Hover function binded to jstree .bind("hover_node.jstree", function (e, data) { $('ul li[rel="drive"], ul li[rel="default"], ul li[rel=""]').each(function(i) { $(this).find("a").attr('href', $(this).attr("id")+".php" ); }) }) //Create function binded to jstree .bind("create.jstree", function (e, data) { $.post( "./js/jsTree/server-<?=$id_arr[$k]?>.php", { "operation" : "create_node", "id" : data.rslt.parent.attr("id").replace("node_",""), "position" : data.rslt.position, "label" : data.rslt.name, "href" : data.rslt.obj.attr("href"), "type" : data.rslt.obj.attr("rel"), "user_id": <?=$uid?> }, function (r) { if(r.status) { $(data.rslt.obj).attr("id", "node_" + r.id); } else { $.jstree.rollback(data.rlbk); } } ); }) //Remove operation .bind("remove.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "remove_node", "id" : this.id.replace("node_",""), "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Rename operation .bind("rename.jstree", function (e, data) { data.rslt.obj.each(function () { $.ajax({ async : true, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "rename_node", "id" : this.id.replace("node_",""), "label" : data.rslt.new_name, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { data.inst.refresh(); } } }); }); }) //Move operation .bind("move_node.jstree", function (e, data) { data.rslt.o.each(function (i) { $.ajax({ async : false, type: 'POST', url: "./js/jsTree/server-<?=$id_arr[$k]?>.php", data : { "operation" : "move_node", "id" : $(this).attr("id").replace("node_",""), "ref" : data.rslt.cr === -1 ? 1 : data.rslt.np.attr("id").replace("node_",""), "position" : data.rslt.cp + i, "label" : data.rslt.name, "copy" : data.rslt.cy ? 
1 : 0, "user_id": <?=$uid?> }, success : function (r) { if(!r.status) { $.jstree.rollback(data.rlbk); } else { $(data.rslt.oc).attr("id", "node_" + r.id); if(data.rslt.cy && $(data.rslt.oc).children("UL").length) { data.inst.refresh(data.inst._get_parent(data.rslt.oc)); } } } }); }); }); // This is for the context menu to bind with operations on the right clicked node function customMenu(node) { // The default set of all items var control; var items = { createItem: { label: "Create", action: function (node) { return {createItem: this.create(node) }; } }, renameItem: { label: "Rename", action: function (node) { return {renameItem: this.rename(node) }; } }, deleteItem: { label: "Delete", action: function (node) { return {deleteItem: this.remove(node) }; }, "separator_after": true }, copyItem: { label: "Copy", action: function (node) { $(node).addClass("copy"); return {copyItem: this.copy(node) }; } }, cutItem: { label: "Cut", action: function (node) { $(node).addClass("cut"); return {cutItem: this.cut(node) }; } }, pasteItem: { label: "Paste", action: function (node) { $(node).addClass("paste"); return {pasteItem: this.paste(node) }; } } }; // We go over all the selected items as the context menu only takes action on the one that is right clicked $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element) { if ( $(element).attr("id") != $(node).attr("id") ) { // Let's deselect all nodes that are unrelated to the context menu -- selected but are not the one right clicked $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); } }); //if any previous click has the class for copy or cut $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if ($(element) != $(node) ) { if( $(element).hasClass("copy") || $(element).hasClass("cut") ) control=1; } else if( $(node).hasClass("cut") || $(node).hasClass("copy")) { control=0; } }); //only remove the class for cut or copy if the current operation is to paste if($(node).hasClass("paste") ) { control=0; // Let's loop through all elements and try to find if the paste operation was done already $("#<?=$id_arr[$k]?>").find("li").each(function(index,element) { if( $(element).hasClass("copy") ) $(this).removeClass("copy"); if ( $(element).hasClass("cut") ) $(this).removeClass("cut"); if ( $(element).hasClass("paste") ) $(this).removeClass("paste"); }); } switch (control) { //Remove the paste item from the context menu case 0: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } break; //Remove the paste item from the context menu only on the node that has either copy or cut added class case 1: if( $(node).hasClass("cut") || $(node).hasClass("copy") ) { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; break; case "default": delete items.pasteItem; break; } } else //Re-enable it on the clicked node that does not have the cut or copy class { switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; break; } } break; //initial state don't show the paste option on any node default: switch ($(node).attr("rel")) { case "drive": delete items.renameItem; delete items.deleteItem; delete items.cutItem; delete items.copyItem; delete items.pasteItem; 
break; case "default": delete items.pasteItem; break; } break; } return items; } $("#menu_<?=$id_arr[$k]?> img").hover( function () { $(this).css({'cursor':'pointer','outline':'1px double teal'}) }, function () { $(this).css({'cursor':'none','outline':'1px groove transparent'}) } ); $("#menu_<?=$id_arr[$k]?> img").click(function () { switch(this.id) { //Create only the first element case "create": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("create", '#'+$(element).attr("id"), null, /*{attr : {href: '#' }}*/null ,null, false); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //REMOVE case "remove": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ //only execute if the current node is not the first one (drive) if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { $("#<?=$id_arr[$k]?>").jstree("remove",'#'+$(element).attr("id")); } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //RENAME NODE only one selection case "rename": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ if( $(element).attr("id") != $("div.jstree > ul > li").first().attr("id") ) { switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("rename", '#'+$(element).attr("id") ); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } } else $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Cut case "cut": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("cut", '#'+$(element).attr("id")); $.facebox('<p class=\'p_inner teal\'>Operation "Cut" successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id")); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; //Copy case "copy": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("copy", '#'+$(element).attr("id")); $.facebox('<p 
class=\'p_inner teal\'>Operation "Copy": Successfully done.<p class=\'p_inner teal bold\'>Where to place it?'); setTimeout(function(){ $.facebox.close(); $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); }, 2000); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; case "paste": if ( $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).length ) { $.jstree._reference("#<?=$id_arr[$k]?>").get_selected(false, true).each(function(index,element){ switch(index) { case 0: $("#<?=$id_arr[$k]?>").jstree("paste", '#'+$(element).attr("id")); break; default: $("#<?=$id_arr[$k]?>").jstree("deselect_node", '#'+$(element).attr("id") ); break; } }); } else { $.facebox('<p class=\'p_inner error bold\'>A selection needs to be made to work with this operation'); setTimeout(function(){ $.facebox.close(); }, 2000); } break; } }); <? } ?> server.php $path='../../../..'; require_once "$path/phpfoo/dbif.class"; require_once "$path/global.inc"; // Database config & class $db_config = array( "servername"=> $dbHost, "username" => $dbUser, "password" => $dbPW, "database" => $dbName ); if(extension_loaded("mysqli")) require_once("_inc/class._database_i.php"); else require_once("_inc/class._database.php"); //Tree class require_once("_inc/class.ctree.php"); $dbLink = new dbif(); $dbErr = $dbLink->connect($dbName,$dbUser,$dbPW,$dbHost); $jstree = new json_tree(); if(isset($_GET["reconstruct"])) { $jstree->_reconstruct(); die(); } if(isset($_GET["analyze"])) { echo $jstree->_analyze(); die(); } $table = '`discussions_tree`'; if($_REQUEST["operation"] && strpos($_REQUEST["operation"], "_") !== 0 && method_exists($jstree, $_REQUEST["operation"])) { foreach($_REQUEST as $k => $v) { switch($k) { case 'user_id': //We are passing the user_id from the $_SESSION on each request and trying to pick up the min and max value from the table that matches the 'user_id' $sql = "SELECT max(`right`) , min(`left`) FROM $table WHERE `user_id`=$v"; //If the select does not return any value then just let it be :P if (!list($right, $left)=$dbLink->getRow($sql)) { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v WHERE `parent_id` = 1 AND `label`='How to Tag' "); } else { $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `right`=$right+2 WHERE `id` = 1 AND `parent_id` = 0"); $sql = $dbLink->dbSubmit("UPDATE $table SET `user_id`=$v, `left`=$left+1, `right`=$right+1 WHERE `parent_id` = 1 AND `label`='How to Tag' "); } break; } } header("HTTP/1.0 200 OK"); header('Content-type: application/json; charset=utf-8'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Pragma: no-cache"); echo $jstree->{$_REQUEST["operation"]}($_REQUEST); die(); } header("HTTP/1.0 404 Not Found"); ?> The problem: DND *(Drag and Drop) works, Delete works, Create works, Rename works, but Copy, Cut and Paste don't work
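
    A data-model note that may bear on the failing copy/cut/paste, offered with low confidence: discussions_tree is a nested-set tree (the left, right, and level columns), and server.php hand-edits those values in its user_id branch, which can leave the intervals inconsistent; jsTree's move and copy operations depend on that interval math, and the _reconstruct endpoint exists precisely to rebuild it. The rebuild is, in essence, a depth-first renumbering from the parent_id links; a hedged Java sketch of the idea, where the Node type and the children lookup are hypothetical stand-ins for the table rows:

        import java.util.Collections;
        import java.util.List;
        import java.util.Map;

        // Rebuild nested-set left/right/level values from parent links via a
        // depth-first walk; afterwards every parent's [left, right] interval
        // strictly contains those of its children.
        public class NestedSetRebuilder {
            public static class Node {
                final int id;
                long left, right, level;
                Node(int id) { this.id = id; }
            }

            private final Map<Integer, List<Node>> childrenByParent;

            public NestedSetRebuilder(Map<Integer, List<Node>> childrenByParent) {
                this.childrenByParent = childrenByParent;
            }

            /** Assigns left/right/level below node; returns the next counter value. */
            public long renumber(Node node, long counter, long level) {
                node.left = counter++;
                node.level = level;
                for (Node child : childrenByParent.getOrDefault(node.id, Collections.<Node>emptyList())) {
                    counter = renumber(child, counter, level + 1);
                }
                node.right = counter++;
                return counter;
            }
        }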

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the Http Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is, that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. Since this interceptor can be injected into any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.

public class SamlAuthenticationInterceptor : RequestInterceptor
{
  SecurityTokenHandlerCollection handlers;

  public SamlAuthenticationInterceptor()
    : base(false)
  {
    this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
  }

  public override void ProcessRequest(ref RequestContext requestContext)
  {
    SecurityToken token = ExtractCredentials(requestContext.RequestMessage);
    if (token != null)
    {
      ClaimsIdentityCollection claims = handlers.ValidateToken(token);
      var principal = new ClaimsPrincipal(claims);
      InitializeSecurityContext(requestContext.RequestMessage, principal);
    }
    else
    {
      DenyAccess(ref requestContext);
    }
  }

  private void DenyAccess(ref RequestContext requestContext)
  {
    Message reply = Message.CreateMessage(MessageVersion.None, null);
    HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
    responseProperty.Headers.Add("WWW-Authenticate", String.Format("Basic realm=\"{0}\"", ""));
    reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
    requestContext.Reply(reply);
    requestContext = null;
  }

  private SecurityToken ExtractCredentials(Message requestMessage)
  {
    HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
    string authHeader = request.Headers["Authorization"];
    if (authHeader != null && authHeader.Contains("<saml"))
    {
      XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
      var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
      SecurityToken token = col.ReadToken(xmlReader);
      return token;
    }
    return null;
  }

  private void InitializeSecurityContext(Message request, IPrincipal principal)
  {
    List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
    policies.Add(new PrincipalAuthorizationPolicy(principal));
    ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());
    if (request.Properties.Security != null)
    {
      request.Properties.Security.ServiceSecurityContext = securityContext;
    }
    else
    {
      request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
    }
  }

  class PrincipalAuthorizationPolicy : IAuthorizationPolicy
  {
    string id = Guid.NewGuid().ToString();
    IPrincipal user;

    public PrincipalAuthorizationPolicy(IPrincipal user)
    {
      this.user = user;
    }

    public ClaimSet Issuer
    {
      get { return ClaimSet.System; }
    }

    public string Id
    {
      get { return this.id; }
    }

    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
      evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
      evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
      evaluationContext.Properties["Principal"] = user;
      return true;
    }
  }
}

A WCF Data Service, like any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file.

<%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService" Factory="AppServiceHostFactory" %>

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using Microsoft.ServiceModel.Web;

class AppServiceHostFactory : ServiceHostFactory
{
  protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
  {
    WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
    result.Interceptors.Add(new SamlAuthenticationInterceptor());
    return result;
  }
}

WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any Http client library for consuming the data service, that's easy to do: you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the Http request before it is sent to the service. So, you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header.

public static class DataServiceContextExtensions
{
  public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
  {
    string address = string.Format(STSAddressFormat, baseStsAddress, realm);
    string token = NegotiateSecurityToken(address);
    context.SendingRequest += (source, args) =>
    {
      args.RequestHeaders.Add("Authorization", token);
    };
  }

  private static string NegotiateSecurityToken(string address)
  {
    // Left empty by design: negotiate the token with your STS here.
    throw new NotImplementedException();
  }
}

I left the NegotiateSecurityToken method empty for this extension as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want an end-to-end REST solution that involves an Http endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.
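To give an idea of the resulting client experience, here is a minimal usage sketch. The proxy class name comes from the article; the service URI, STS address and realm below are hypothetical placeholders:

// Hypothetical addresses; adjust to your own data service and STS.
var context = new ContactsDataService(new Uri("http://localhost/Contacts.svc"));

// Negotiates a SAML token once and attaches it to every outgoing request.
context.ConfigureFederatedCredentials("http://localhost/sts", "urn:contactsdataservice");

foreach (var contact in context.Contacts)
{
    Console.WriteLine(contact); // assumes a Contacts entity set on the generated proxy
}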

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

If we are using SignalR, the connection lifecycle is handled by the framework very well. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behavior is well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the user-defined route table, launches the proper view and controller. Hence in AngularJS the address changes but the web page is still there. All changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when working with AngularJS. If we dig into the source code of the SignalR JavaScript client, we will find the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS page change events are hijacked, so SignalR will not receive them and will not stop the connection.

// wire the stop handler for when the user leaves the page
_pageWindow.bind("unload", function () {
    connection.log("Window unloading, stopping the connection.");
    connection.stop(asyncAbort);
});

if (isFirefox11OrGreater) {
    // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
    // #2400
    _pageWindow.bind("beforeunload", function () {
        // If connection.stop() runs in beforeunload and fails, it will also fail
        // in unload unless connection.stop() runs after a timeout.
        window.setTimeout(function () {
            connection.stop(asyncAbort);
        }, 0);
    });
}

Problem Reproduce
In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code.

public class GreetingHub : Hub
{
    public override Task OnConnected()
    {
        Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
        return base.OnDisconnected();
    }

    public void Hello(string user)
    {
        Clients.All.hello(string.Format("Hello, {0}!", user));
    }
}

Below is the configuration code which hosts the SignalR hub in an ASP.NET WebAPI project with IIS Express.

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            map.RunSignalR(new HubConfiguration()
            {
                EnableJavaScriptProxies = false
            });
        });
    }
}

Since we will host the AngularJS application in Node.js in another process and port, the SignalR connection will be cross-domain, so I needed to enable CORS above. On the client side I have a Node.js file that hosts the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

<html data-ng-app="demo">
<head>
    <script type="text/javascript" src="jquery-2.1.0.js"></script>
    <script type="text/javascript" src="angular.js"></script>
    <script type="text/javascript" src="angular-ui-router.js"></script>
    <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
    <script type="text/javascript" src="app.js"></script>
</head>
<body>
    <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
    <div>
        <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
        <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
    </div>
    <div data-ui-view></div>
</body>
</html>

Below is "app.js". My SignalR logic is in the "View1" page and it connects to the server once the controller is executed. A user can specify a user name and send it to the server; all clients located on this page will receive the server-side greeting message through SignalR.

'use strict';

var app = angular.module('demo', ['ui.router']);

app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
    $stateProvider.state('view1', {
        url: '/view1',
        templateUrl: 'view1.html',
        controller: 'View1Ctrl' });

    $stateProvider.state('view2', {
        url: '/view2',
        templateUrl: 'view2.html',
        controller: 'View2Ctrl' });

    $locationProvider.html5Mode(true);
}]);

app.value('$', $);
app.value('endpoint', 'http://localhost:60448');
app.value('hub', 'GreetingHub');

app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
    $scope.user = '';
    $scope.response = '';

    $scope.greeting = function () {
        proxy.invoke('Hello', $scope.user)
            .done(function () {})
            .fail(function (error) {
                console.log(error);
            });
    };

    var connection = $.hubConnection(endpoint);
    var proxy = connection.createHubProxy(hub);
    proxy.on('hello', function (response) {
        $scope.$apply(function () {
            $scope.response = response;
        });
    });
    connection.start()
        .done(function () {
            console.log('signalr connection established');
        })
        .fail(function (error) {
            console.log(error);
        });
});

app.controller('View2Ctrl', function ($scope, $) {
});

When we go to View1, the server-side "OnConnected" method will be invoked as below. And when we send the message to the server from any page, all clients will get the response. If we close one of the clients, the server-side "OnDisconnected" method will be invoked, which is correct. But if we click the "View 2" link in the page, "OnDisconnected" will not be invoked even though the content and browser address have changed.

This might cause many SignalR connections to remain open between the client and server. Below is what happened after I clicked the "View 1" and "View 2" links four times. As you can see, there are 4 live connections.

Solution
Since this issue occurs because AngularJS hijacks the page events that SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler to "$stateChangeStart" and invoked the "stop" method of "connection" if its state was not "disconnected".

var connection;
app.run(['$rootScope', function ($rootScope) {
    $rootScope.$on('$stateChangeStart', function () {
        if (connection && connection.state && connection.state !== 4 /* disconnected */) {
            console.log('signalr connection abort');
            connection.stop();
        }
    });
}]);

Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link, the content will change and the SignalR connection will be closed automatically.

Summary
In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution I described is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here. Making the SignalR connection a global variable might not be the best solution; it is just easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory function scope.

Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data directly from DB2 to either a SQL Server or flat file.  I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).   One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straightforward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the effort I went through.  That being said, please keep in mind, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and a .Net data provider, which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries that indicated I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.   Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog. A couple of things to note here.  First, the Initial catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials: in my environment, we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB2, so I've entered it here. Next, click the Data Links button.
On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out.  From the little bit I've read, packages are used to control SQL compilation, and each DB2 connection needs one.  The package collection, I believe, controls where your package is created.  One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next, click the ellipsis next to the Network drop-down.  Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform.  Process binary as character was necessary to handle some of the DB2 data types.  I had a few columns that showed all their data as "System.Byte[]".  Checking Process binary as character resolved this. At this point, you should be good to go.  You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration.  The Test Connection button is obvious; it just verifies you can connect to the host using the configuration data you've entered.  The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I started working with DB2 and SSIS.  I'm sure as I continue developing my packages I'll find more quirks, and I will post them here.
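For reference, the settings described above roughly correspond to a connection string like the sketch below. The property names are the ones commonly documented for the Microsoft OLE DB Provider for DB2; the host and profile values are placeholders, so verify everything against your installed driver version.

// Illustrative only: MYHOST and MYPROFILE are placeholders.
// Note how Initial Catalog mirrors the host name and the package
// collection is QGPL, exactly as described in the walkthrough above.
string db2ConnectionString =
    "Provider=DB2OLEDB;" +
    "Network Address=MYHOST;" +
    "Initial Catalog=MYHOST;" +
    "Package Collection=QGPL;" +
    "User ID=MYPROFILE;Password=********;" +
    "Process Binary as Character=True;";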

    Read the article

  • DirectX11 Swap Chain RGBA vs BGRA Format

    - by Nathan
I was wondering if anyone could elaborate any further on something that's been bugging me. In DirectX9 the main supported back buffer formats were D3DFMT_X8R8G8B8 and D3DFMT_A8R8G8B8 (both being BGRA in layout). http://msdn.microsoft.com/en-us/library/windows/desktop/bb174314(v=vs.85).aspx With the initial version of DirectX10 there was no support for BGRA, and all the textbooks and online tutorials recommend DXGI_FORMAT_R8G8B8A8_UNORM (being RGBA in layout). Now with DirectX11 BGRA is supported again, and it seems as if Microsoft recommends using a BGRA format as the back buffer format. http://msdn.microsoft.com/en-us/library/windows/apps/hh465096.aspx Are there any suggestions, or are there performance implications of using one or the other? (I assume not, as obviously by specifying the format of the underlying resource the runtime will handle what bits you're passing through and then infer how to utilise them based on the format.)

    Read the article

  • Default vs Impl when implementing interfaces in Java

    - by Gary Rowe
After reading Should package names be singular or plural? it occurred to me that I've never seen a proper debate covering one of my pet peeves: naming implementations of interfaces. Let's assume that you have an interface Order that is intended to be implemented in a variety of ways, but there is only the initial implementation when the project is first created. Do you go for DefaultOrder or OrderImpl or some other variant to avoid the false dichotomy? And what do you do when more implementations come along? And most importantly... why?

    Read the article

  • Comments on Comments

    - by Joe Mayo
I almost tweeted a reply to Caspar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple of extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens are that a developer is in a hurry, sloppy, or interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments is bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard and read that code should be self-documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases; i.e., if an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too-many-comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive; i.e., what use is it to say that I'm assigning foo to bar when it's clear what the code is doing? I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developers' intentions are admirable, because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or a reason why a piece of code is there. For example,
if you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario.  My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer.  I just don't agree with the approach of going out of the way to avoid making a comment.  I'm also concerned that some developers would take this approach as an excuse to not comment their bad code. Another area where I like comments is documentation comments.  Java has them, and so do C# and VB.  They're convenient because we can build automated tools that extract these comments.  These extracted comments are often much better than no documentation at all.  The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
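As a small, concrete illustration of the documentation comments mentioned above, here is a minimal C# example (the class and members are hypothetical):

public class OrderCalculator
{
    /// <summary>
    /// Calculates the total price of an order, including tax.
    /// </summary>
    /// <param name="subtotal">The pre-tax order amount.</param>
    /// <param name="taxRate">The tax rate as a fraction, e.g. 0.07 for 7%.</param>
    /// <returns>The order total.</returns>
    public decimal CalculateTotal(decimal subtotal, decimal taxRate)
    {
        return subtotal * (1 + taxRate);
    }
}

Automated tools can extract these XML comments into browsable API documentation, which is exactly the quick summary that "go read the code" fails to provide.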

    Read the article

  • Entity Framework v1 … Brief Synopsis and Tips – Part 2

    - by Rohit Gupta
Using Entity Framework with ASMX Web Services and WCF Web Services: If you use an ASMX web service to expose Entity objects from Entity Framework, the ASMX web service does not include object graphs; one workaround is to use the Facade pattern or to use a WCF service. The other important aspect of using ASMX web services along with Entity Framework is that the ASMX client is not aware of the existence of EF v1, since the client solely deals with C# objects (not EntityObjects or ObjectContext). Because the client is not aware of the ObjectContext, it cannot participate in change tracking, since it only receives the current values and not the original values when the service sends the Entity objects to the client. Thus there are 2 drawbacks to using Entity Framework with an ASMX web service: 1. Object state is not maintained... so to overcome this limitation we need to insert/update a single entity at a time and retrieve the original values for the entity being updated on the server/service end before calling SaveChanges. 2. ASMX does not maintain object graphs... i.e. Customer.Reservations or Customer.Reservations.Trip relationships are not maintained. Thus you need to send these relationships separately from service to client. A WCF web service overcomes the object graph limitation of the ASMX web service, but we need to ensure that we are populating all the non-null scalar properties of all the objects in the object graph before calling Update. A WCF web service still cannot overcome the other limitation: tracking changes to entities at the client end. Also note that the "Customer" class on the client is very different from the "Customer" class in the Entity Framework model entities. They are incompatible with each other, hence we cannot cast one to the other. However, the .NET Framework translates the client "Customer" entity to the EF v1 model "Customer" entity once the entity is serialized back on the ASMX server end. If you need change tracking enabled on the client, then you need to use WCF Data Services, which is available with VS 2010. ====================================================================================================== In WCF, when adding an object that has relationships, the framework assumes that every object in the object graph needs to be added to the store. For e.g. in a Customer.Reservations.Trip object graph, when a Customer entity is added to the store, EF v1 assumes that it needs to add a Reservations collection and also Trips for each Reservation. Thus, if we need to use existing Trips for reservations, then we need to ensure that we null out the Trip object reference from Reservations and set the TripReference to the EntityKey of the desired Trip instead. ====================================================================================================== Understanding Relationships and Associations in EF v1: The Golden Rule of EF is that it does not load entities/relationships unless you explicitly ask it to do so. However, there is one exception to this rule. This exception happens when you attach/detach entities from the ObjectContext. If you detach an entity in an object graph from the ObjectContext, then the ObjectContext removes the ObjectStateEntry for this entity and all the relationship objects associated with this entity. For e.g.
in a Customer.Order.OrderDetails graph, if the Customer entity is detached from the ObjectContext, then you cannot traverse to the Order and OrderDetails entities (which still exist in the ObjectContext) from the Customer entity (which does not exist in the ObjectContext). Conversely, if you JOIN an entity that is not in the ObjectContext with an entity that is in the ObjectContext, then the first entity will automatically be added to the ObjectContext, since relationships for the two entities need to exist in the ObjectContext. ========================================================= You cannot attach an EntityCollection to an entity through its navigation property; e.g. you cannot code myContact.Addresses = myAddressEntityCollection ========================================================== Cascade Deletes in EDM: The designer does not support specifying cascade deletes for an entity. To enable cascade deletes on an entity in the EDM, use the Association definition in CSDL for the entity. For e.g. SalesOrderDetail (SOD) has a foreign key relationship with SalesOrderHeader (SalesOrderHeader 1 : SalesOrderDetail *); if you specify a cascade delete on the SalesOrderHeader entity, then calling DeleteObject on a SalesOrderHeader (SOH) entity will send delete commands for the SOH record and all the SOD records that reference the SOH record. ========================================================== As a good design practice, if you use cascade deletes, ensure that the cascade delete facet is used both in the EDM as well as in the database. Even though it is not absolutely mandatory to have cascade deletes in both the database and the EDM (since you can see that just the cascade delete spec on the SOH entity in the EDM will ensure that the SOH record and all related SOD records will be deleted from the database... even though you don't have cascade delete configured in the database on the SOD table). ============================================================== Maintaining relationships in code: When setting a navigation property of an entity (e.g. setting the Contact navigation property of an Address entity) the following rules apply: If both objects are detached, no relationship object will be created. You are simply setting a property the CLR way. If both objects are attached, a relationship object will be created. If only one of the objects is attached, the other will become attached and a relationship object will be created. If that detached object is new, when it is attached to the context its EntityState will be Added. One important rule to remember regarding synchronizing the EntityReference.Value and EntityReference.EntityKey properties is that when attaching an entity which has an EntityReference (e.g. an Address entity with a ContactReference), the Value property will take precedence, and if the Value and EntityKey are out of sync, the EntityKey will be updated to match the Value. ====================================================== If you call the .Load() method on a detached entity, the .Load() operation will throw an exception. There is one exception to this rule: if you load entities using MergeOption.NoTracking, you will be able to call .Load() on such entities, since these entities are accessible by the ObjectContext. So the bottom line is that we need the ObjectContext to be able to call the .Load() method to do deferred loading on an EntityReference or EntityCollection.
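As a quick sketch of that explicit deferred loading (the container name and the Contact/Addresses model are assumed from this article's examples):

// Sketch; "MyEntities" is an assumed container name.
using (var context = new MyEntities())
{
    Contact contact = context.Contacts.First();
    if (!contact.Addresses.IsLoaded)
    {
        contact.Addresses.Load(); // EF v1 never loads related entities implicitly
    }
    Console.WriteLine(contact.Addresses.Count);
}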
Another rule to remember is that you cannot call .Load() on entities that have an EntityState.Added state, since the ObjectContext uses the EntityKey of the primary (parent) entity when loading the related (child) entity, and not the EntityKey of the child (even if the EntityKey of the child is present before calling .Load()). ====================================================== You can use ObjectContext.Add() to add an entity to the ObjectContext and set the EntityState of the new entity to EntityState.Added; here no relationships are added/updated. You can also use the EntityCollection.Add() method to add an entity to another entity's related EntityCollection. For e.g. a contact has an Addresses EntityCollection, so to add a new address use contact.Addresses.Add(newAddress). Note that if the entity does not already exist in the ObjectContext, then calling contact.Addresses.Add(myAddress) will cause a new Address entity to be added to the ObjectContext with EntityState.Added, and it will also add a RelationshipEntry (a relationship object) with EntityState.Added which connects the contact with the new address newAddress. Note that if the entity already exists in the ObjectContext (being part of the theOtherContact.Addresses collection), then calling contact.Addresses.Add(existingAddress) will add 2 RelationshipEntry objects to the ObjectStateEntry collection, one with EntityState.Deleted and the other with EntityState.Added. This implies that the existingAddress entity is removed from the theOtherContact.Addresses collection and added to the contact.Addresses collection... effectively reassigning the address entity from theOtherContact to contact. This is called moving an existing entity to a new object graph. ====================================================== You usually use the ObjectContext.Attach() and EntityCollection.Attach() methods when you need to reconstruct the object graph after deserializing the objects received from an ASMX web service client. Attach is usually used to connect existing entities in the ObjectContext. When EntityCollection.Attach() is called, the EntityState of the RelationshipEntry (the relationship object) remains EntityState.Unchanged, whereas when the EntityCollection.Add() method is called, the EntityState of the relationship object changes to EntityState.Added or EntityState.Deleted as the situation demands. ========================================================= LINQ to Entities Tips: SelectMany does an inner join by default. For e.g. from c in Contact from a in c.Address select c ... this will do an inner join between the Contacts and Addresses tables and return only those Contacts that have an Address. ======================================================== Group joins do a LEFT join by default. E.g. from a in Address join c in Contact on a.Contact.ContactID equals c.ContactID into g where a.CountryRegion == "US" select g; This query will do a left join on the Contact table and return contacts that have an address in the "US" region. The following query: from c in Contact join a in Address.Where(a1 => a1.CountryRegion == "US") on c.ContactID equals a.Contact.ContactID into addresses select new {c, addresses} will do a left join on the Address table and return all Contacts.
Of these Contacts, only those that have an address in the "US" region will have their addresses collection populated; the other contacts will have 0 addresses in the collection (even if addresses for those contacts exist in the database in a different region). ======================================================== LINQ to Entities does not support DefaultIfEmpty()... instead, use the .Include("Address") query builder method to do a left join, or use group joins if you need more control, like filtering on the Address EntityCollection of the Contact entity. =================================================================== Use CreateSourceQuery() on the EntityReference or EntityCollection if you need to add filters during deferred loading of entities. (Deferred loading in EF v1 happens when you call the Load() method on the EntityReference or EntityCollection.) For e.g. var cust = context.Contacts.OfType<Customer>().First(); var sq = cust.Reservations.CreateSourceQuery().Where(r => r.ReservationDate > new DateTime(2008,1,1)); cust.Reservations.Attach(sq); This populates only those reservations that are newer than Jan 1, 2008. This is the only way (in EF v1) to attach a range of entities to an EntityCollection using the Attach() method. ================================================================== If you need to get the foreign key value for an entity, e.g. to get the ContactID value from an Address entity, use this: address.ContactReference.EntityKey.EntityKeyValues.Where(k => k.Key == "ContactID")
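Tying back to the earlier point about reusing existing entities when adding an object graph, here is a minimal sketch (the container, entity set and key names are assumed for illustration):

// Sketch; "MyEntities.Trips" and "TripID" are assumed names.
// Reuse an existing Trip for a new reservation instead of letting EF v1 insert a new one.
reservation.Trip = null; // drop the object reference so the Trip is not treated as new
reservation.TripReference.EntityKey =
    new EntityKey("MyEntities.Trips", "TripID", existingTripId);

context.AddObject("Customers", customer); // the customer graph includes the reservation
context.SaveChanges();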

    Read the article

  • DirectX11 Swap Chain Format

    - by Nathan
I was wondering if anyone could elaborate any further on something that's been bugging me. In DirectX9 the main supported back buffer formats were D3DFMT_X8R8G8B8 and D3DFMT_A8R8G8B8 (both being BGRA in layout). http://msdn.microsoft.com/en-us/library/windows/desktop/bb174314(v=vs.85).aspx With the initial version of DirectX10 there was no support for BGRA, and all the textbooks and online tutorials recommend DXGI_FORMAT_R8G8B8A8_UNORM (being RGBA in layout). Now with DirectX11 BGRA is supported again, and it seems as if Microsoft recommends using a BGRA format as the back buffer format. http://msdn.microsoft.com/en-us/library/windows/apps/hh465096.aspx Are there any suggestions, or are there performance implications of using one or the other? (I assume not, as obviously by specifying the format of the underlying resource the runtime will handle what bits you're passing through and then infer how to utilise them based on the format.) Any feedback is appreciated.

    Read the article

  • BFCM – Big Fat Check Mark

    - by onefloridacoder
    I was installing TFS on my local laptop last week and just got around to setting up my initial collection using the TFS Console tool and “Bang!”  I received a message that told me that my local database didn’t have the full-text search option installed.  I remember the option in a (long) list of options and didn’t remember fiddling with it.   Whatever the reason, if you are installing TFS Basic on your box, make sure you have that little check ticked, or you won’t get the big fat one pictured above.  I installed SQL 2008 Developer edition which worked well for what I needed so far, and just needed to run the “Add Feature” option instead of the “Repair” option. HTH

    Read the article

  • git tagging comments - best practices

    - by Evan
    I've adopted a tagging system of x.x.x.x, and this works fine. However, you also need to leave a comment with your git tag. I've been using descriptions such as "fixes bug Y" or "feature X", but is this the best sort of comment to be leaving? Particularly, what if a tag encompasses several fixes, it seems not to make sense to have a very long tag comment. Does this mean that I should be creating a tag for every bug fix or feature, or should the tag comments be reflective of something else? I have a few ideas that may be good, but I'd love some advice from seasoned git tagging veterans :) For those who prefer specific examples: 1.0.0.0 - initial release 1.0.0.1 - bug fix for issue X 1.0.0.2 - (what if this is a bug fix for multiple issues, the comment would be too long, no?) Another example, in this example, the comments are more or less the same as the tags, it seems redundant. Is there something else we could be describing? https://github.com/osCommerce/oscommerce2/tags

    Read the article

  • Replication Services in a BI environment

    - by jorg
In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformation” Replication Services offers is filtering records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60 There are three types of replication: Transactional Replication This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously, this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs at scheduled intervals, it executes larger batches of transactions, but the idea is the same. Snapshot Replication This type of replication makes an initial copy of the database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself is copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s). Merge Replication Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, like with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further.
Replication Services in a BI environment Transactional Replication can be very useful in BI environments. In my opinion you never want to see users run custom (SSRS) reports or PowerPivot solutions directly on your production database; it can slow down the system and can cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don’t need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be, or if it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the cloud in the Big Data story. What is Cloud? Cloud has been the biggest buzzword around for the last few years. Everyone knows about the cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications that require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com. Both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post. There are two different Cloud Deployment Models: 1) the Public Cloud and 2) the Private Cloud. Public Cloud: cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services. Private Cloud: cloud infrastructure built by a single organization that manages a highly scalable data center internally. Here is a quick comparison between Public Cloud and Private Cloud from Wikipedia:
- Initial cost: typically zero (public) vs. typically high (private)
- Running cost: unpredictable (public) vs. unpredictable (private)
- Customization: impossible (public) vs. possible (private)
- Privacy: no, the host has access to the data (public) vs. yes (private)
- Single sign-on: impossible (public) vs. possible (private)
- Scaling up: easy while within defined limits (public) vs. laborious but no limits (private)
Hybrid Cloud: cloud infrastructure built from the composition of two or more clouds, like the public and private clouds. The hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together. Cloud and Big Data – Common Characteristics: There are many characteristics of cloud architecture and cloud computing which are also essential for Big Data. They highly overlap, and in many places it just makes sense to use the power of both architectures and build a highly scalable framework. Here is the list of the characteristics of cloud computing important in Big Data:
- Scalability
- Elasticity
- Ad-hoc Resource Pooling
- Low Cost to Set Up Infrastructure
- Pay on Use or Pay as you Go
- Highly Available
Leading Big Data Cloud Providers: There are many players in the Big Data cloud, but we will list a few of the better-known ones. Amazon: Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently, and it is now one of their primary businesses besides retail. Amazon also offers Big Data services under Amazon Web Services.
Here is the list of the included services:
- Amazon Elastic MapReduce – processes very high volumes of data
- Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
- Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
- Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters
- Amazon Redshift – a petabyte-scale data warehousing service
Google: Though Google is known for its search engine, we all know that it is much more than that.
- Google Compute Engine – offers secure, flexible computing from energy efficient data centers
- Google BigQuery – allows SQL-like queries to run against large datasets
- Google Prediction API – a cloud-based machine learning tool
Other Players: Besides Amazon and Google, we also have other players in the Big Data market. Microsoft is also attempting Big Data in the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware. Things to Watch: Cloud-based solutions provide great integration with the Big Data story and are also very economical to implement. However, there are a few things one should be careful about when deploying Big Data on cloud solutions:
- Data Integrity
- Initial Cost
- Recurring Cost
- Performance
- Data Access
- Security
- Location
- Compliance
Every company has a different approach to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud. Tomorrow: In tomorrow’s blog post we will discuss various Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
