Search Results

Search found 985 results on 40 pages for 'instantiate'.


  • Pig/Cassandra integration causes ERROR 1070: Could not resolve CassandraStorage using imports

    - by Le Dude
    I'm following a basic Pig, Cassandra, and Hadoop installation. Everything works just fine standalone, with no errors. However, when I tried to run the example file provided with the pig_cassandra example, I got this error:

      [root@localhost pig]# /opt/cassandra/apache-cassandra-1.1.6/examples/pig/bin/pig_cassandra -x local -x local /opt/cassandra/apache-cassandra-1.1.6/examples/pig/example-script.pig
      Using /opt/pig/pig-0.10.0/pig-0.10.0-withouthadoop.jar.
      2012-10-24 21:14:58,551 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0 (r1328203) compiled Apr 19 2012, 22:54:12
      2012-10-24 21:14:58,552 [main] INFO org.apache.pig.Main - Logging error messages to: /opt/pig/pig_1351138498539.log
      2012-10-24 21:14:59,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
      2012-10-24 21:14:59,472 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
      Details at logfile: /opt/pig/pig_1351138498539.log

    Here is the log file:

      Pig Stack Trace
      ---------------
      ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
      org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
              at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1597)
              at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
              at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
              at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
              at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
              at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
              at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
              at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
              at org.apache.pig.Main.run(Main.java:555)
              at org.apache.pig.Main.main(Main.java:111)
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
              at java.lang.reflect.Method.invoke(Method.java:601)
              at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
      Caused by: Failed to parse: Cannot instantiate: CassandraStorage
              at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
              at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
              ... 14 more
      Caused by: java.lang.RuntimeException: Cannot instantiate: CassandraStorage
              at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:510)
              at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791)
              at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780)
              at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583)
              at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3115)
              at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1291)
              at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
              at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
              at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
              at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
              ... 15 more
      Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
              at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:495)
              at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:507)
              ... 24 more

    I googled around and found another Stack Overflow user who identified the potential problem but not the solution ("Cassandra and pig integration cause error during startup"). I believe my configuration is correct and the path has already been defined properly. I didn't change anything in the pig_cassandra file. I'm not quite sure how to proceed from here. Please help.
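    A note on this error: ERROR 1070 generally means Pig cannot resolve the UDF class on its classpath - either the jar containing it was never registered, or the script uses the short class name without a matching import. A minimal sketch of the usual first fix follows; the jar path and keyspace names are assumptions for illustration, not details from this thread:

      -- Sketch only: register the jar that contains the UDF, then reference it
      -- by its fully qualified name (org.apache.cassandra.hadoop.pig.CassandraStorage in 1.1.x).
      REGISTER /opt/cassandra/apache-cassandra-1.1.6/lib/apache-cassandra-1.1.6.jar;
      rows = LOAD 'cassandra://MyKeyspace/MyColumnFamily'
             USING org.apache.cassandra.hadoop.pig.CassandraStorage();

    When launching through pig_cassandra, the equivalent check is usually that PIG_HOME is set and that the Cassandra lib jars end up on the script's classpath before Pig starts.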

    Read the article

  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a webservice written in Classic ASP. This web service attempts to create a VirtualServer.Application object on another server via DCOM. That fails with Permission Denied. However, I have another component instantiated in this same webservice on the same remote server that is created without problems. This component is a custom in-house component.

    The webservice is called from a standalone EXE program that calls it via WinHTTP. It has been verified that WinHTTP authenticates to the webservice successfully with Kerberos. The user authenticated to the webservice is the Administrator user. The EXE-to-webservice authentication step is successful and uses Kerberos.

    I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults, and the individual component permissions for the VirtualServer.Application component are also set to defaults. Based upon these settings, the webservice should be able to instantiate and access the components on the remote computer.

    Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component, reveals an interesting behavior. When the webservice is instantiating the working, custom component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence, then perform the bind request with the appropriate security package, in this case Kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint, authenticating again via Kerberos, and it is successfully able to instantiate and communicate. On the failing VirtualServer.Application component, I again see the bind request with Kerberos go to the RPCSS endpoint mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect because it only attempts to authenticate with NTLM, which ultimately fails because the webservice does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM?

    Additional information:

    - Both components run on the same server via DCOM
    - Both components run as Local System on the server
    - Both components are Win32 Service components
    - Both components have the exact same launch/access/activation DCOM permissions
    - Both Win32 Services are set to run as Local System
    - The permission denied is not a permissions issue as far as I can tell; it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos delegation.
    - Constrained delegation is set up on the server hosting the webservice.
    - The server hosting the webservice is allowed to delegate to rpcss/dcom-server-name
    - The server hosting the webservice is allowed to delegate to vssvc/dcom-server-name
    - The DCOM server is allowed to delegate to rpcss/webservice-server
    - The SPNs registered on the DCOM server include rpcss/dcom-server-name and vssvc/dcom-server-name as well as the HOST/dcom-server-name related SPNs
    - The SPNs registered on the webservice server include rpcss/webservice-server and the HOST/webservice-server related SPNs

    Does anybody have any ideas why the attempt to create a VirtualServer.Application object on a remote server falls back to NTLM authentication, causing it to fail with permission denied?

    More information: when the following code is run in the context of the webservice, directly via a testing-only, just-developed COM component, it fails on the indicated line with Access Denied:

      COSERVERINFO csi;
      csi.dwReserved1 = 0;
      csi.pwszName    = L"terahnee.rivin.net";
      csi.pAuthInfo   = NULL;
      csi.dwReserved2 = 0;

      hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi,
                            IID_IClassFactory, (void **) &pClsFact);
      if (FAILED(hr)) goto error1;

      // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED)
      hr = pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk);
      if (FAILED(hr)) goto error2;

    I've also noticed in the Wireshark traces that the attempt to connect to the service process component only requests NTLMSSP authentication; it doesn't even attempt to use Kerberos. This suggests that for some reason the webservice thinks it can't use Kerberos...
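    One avenue worth testing (a sketch under assumptions, not a verified fix for this case): pass an explicit COAUTHINFO with the target SPN instead of pAuthInfo = NULL, so the activation call requests Kerberos explicitly rather than negotiating. The SPN below is a guess based on the names earlier in the post:

      COAUTHINFO cai = {};
      cai.dwAuthnSvc           = RPC_C_AUTHN_GSS_KERBEROS;
      cai.dwAuthzSvc           = RPC_C_AUTHZ_NONE;
      cai.pwszServerPrincName  = L"RPCSS/terahnee.rivin.net"; // assumed SPN
      cai.dwAuthnLevel         = RPC_C_AUTHN_LEVEL_PKT_PRIVACY;
      cai.dwImpersonationLevel = RPC_C_IMP_LEVEL_DELEGATE;
      cai.dwCapabilities       = EOAC_NONE;

      COSERVERINFO csi = {};
      csi.pwszName  = L"terahnee.rivin.net";
      csi.pAuthInfo = &cai; // instead of NULL as in the failing snippet

      hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_REMOTE_SERVER, &csi,
                            IID_IClassFactory, (void **) &pClsFact);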

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language, and while that's a fundamental feature of the language, there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C#; Creating a dynamic, extensible C# Expando Object; Creating a dynamic DataReader for dynamic Property Access. Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types.

    TypeCasting Generics

    Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control, so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I really have no excuse, but I also don't see another clean way around it in this case.

    A Generic Example

    Imagine I've implemented a generic type like this:

      public class RazorEngine<TBaseTemplateType>
          where TBaseTemplateType : RazorTemplateBase, new()

    You can now happily instantiate new generic versions of this type with custom template bases, or even a non-generic version, which is implemented like this:

      public class RazorEngine : RazorEngine<RazorTemplateBase>
      {
          public RazorEngine() : base() { }
      }

    To instantiate one:

      var engine = new RazorEngine<MyCustomRazorTemplate>();

    Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template:

      var template = new TBaseTemplateType() { Engine = this };

    The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters, and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface. Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface:

      public interface IRazorEngine<TBaseTemplateType>

    it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object, and the following syntax for the property (and the same would be true for a method parameter):

      public class RazorTemplateBase : MarshalByRefObject, IDisposable
      {
          public object Engine { get; set; }
      }

    Now, because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. What I really would need is:

      public RazorEngine<> Engine { get; set; }

    but that's not possible.

    Dynamic to the Rescue

    Luckily, with the dynamic type this sort of thing can be mitigated fairly easily. For example, here's a method that uses the Engine property and uses the well-known class interface, by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates:

      /// <summary>
      /// Allows rendering a dynamic template from a string template,
      /// passing in a model. This is like rendering a partial,
      /// but providing the input as a string.
      /// </summary>
      public virtual string RenderTemplate(string template, object model)
      {
          if (template == null)
              return string.Empty;

          // if there's no template markup
          if (!template.Contains("@"))
              return template;

          // use dynamic to get around generic type casting
          dynamic engine = Engine;
          string result = engine.RenderTemplate(template, model);
          if (result == null)
              throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage);
          return result;
      }

    Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex operation call.

    Dynamic as a Bailout

    Sometimes this sort of thing reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case, this particular scenario wasn't planned for originally, and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing, fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead, dynamic provides a nice, simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner, but given this isolated instance and relatively low-profile operation, use of dynamic seems a valid choice for me. This solution really works anywhere you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting, but there's no common base type that I can cast to.

    All that said, it's a good idea to think about the use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should stick to static typing. In the example above, the application already uses dynamics extensively for dynamic page templating and passing models around, so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation, where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism?
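    For contrast, here's a sketch of what the same call looks like with the pre-.NET 4.0 Reflection approach mentioned above (illustrative only, not code from the original post):

      // Reflection equivalent of the dynamic call in RenderTemplate()
      object engine = Engine;
      var method = engine.GetType().GetMethod("RenderTemplate",
                                              new[] { typeof(string), typeof(object) });
      string result = (string)method.Invoke(engine, new object[] { template, model });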
    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp.

    Read the article

  • Running TeamCity from Amazon EC2 - cloud-based, scalable builds and continuous integration

    - by RoyOsherove
    I've been having fun playing with the Amazon EC2 cloud service. I set up a server running TeamCity, and an image of a server that just runs a TeamCity agent. I also set up TeamCity to automatically instantiate agents on EC2 and shut them down based upon the availability of free agents. Here's how I did it.

    The first step was setting up the TeamCity server:

    1. Create an account on Amazon EC2. (BTW, Amazon's site works better in IE than it does in Chrome... who knew!?)
    2. Open the EC2 dashboard and click "Launch Instance".
    3. From the "Quick Start" tab I selected from the list "Getting Started on Microsoft Windows Server 2008 (AMI Id: ami-c5e40dac)". It's good enough to just run TeamCity.
    4. In the instance details, I used the default (Small instance, 1.7 GB mem). You might want to choose a close availability zone based on where you are. We want to "Launch instances", so click continue.
    5. Select the default kernel, RAM disk and all. No need to enable monitoring for now (you can do that later). Click continue.
    6. If you don't have a key pair, you will be prompted to create one. Once you do, select it in the list.
    7. Now you'll be prompted to create a security group. I named mine "TC" as in "TeamCity". Each group is a bunch of settings governing which ports are let through into and out of a hosted machine. Keep the default settings for now; we will change them later. Click continue, review, and then click "Launch".
    8. Now you'll be able to see the new instance in the running instances list on your site.
    9. Now you need to install stuff on that instance (TeamCity!). To do that, you'll need to Remote Desktop into it, which means getting the admin password: check the instance in the list and click "Instance Actions" - "Get Windows Admin Password". You might have to wait about 10 minutes or so for the password to be generated for you.
    10. Once you have the password, remote desktop (start-run-'mstsc') into the instance. Its address is the DNS name shown below the list under "Public DNS"; it looks something like ec2-256-226-194-91.compute-1.amazonaws.com.
    11. Once you're inside the instance, you'll need to open IE (it is in hardened mode, so you'll have to relax its security settings to download stuff). I first downloaded Chrome, and using Chrome I downloaded TeamCity. Note that the download speed is FAST: several MB per second.
    12. To be able to see TeamCity from the outside, you will need to open the advanced firewall settings inside the remote machine and add incoming and outgoing rules for port 80 (HTTP). Once you do that, you should be able to see the machine from the outside. If you still can't, see the next step. I also enabled port 9090, since I will use this machine to create an agent image later as well.
    13. Now configure the security group (TC) to enable talking to agents: in the EC2 dashboard, click "Security Groups" and select your group. To add a rule, click the empty list under the 'protocol' header, select TCP, set the 'from' and 'to' ports to 9090, and set the source IP to 0.0.0.0/0 (every IP is allowed). Click "Save". Also make sure "HTTP" TCP 80 appears in that list; if it doesn't, add it, or you won't be able to browse to the machine's TeamCity server home page.
    14. I also set an elastic IP for the machine, so I always have the same IP for the machine instance. Allocate and associate one through the "Elastic IP" link on the EC2 dashboard.

    You should now have a working instance of TeamCity. Now let's create an agent image:

    1. Repeat steps 1-9, but this time make sure you select a machine that fits what an agent might do. I selected the High-CPU Medium instance type, which is much faster. On that machine I installed what I needed (VS 2010, PostSharp, etc.). Downloading VS 2010 from MSDN (2 GB) took less than 10 minutes!
    2. Now, instead of installing TeamCity, browse to the TeamCity server's home page (from within the remote machine), go to the Administration page, and click the upper-right link "Install agents".
    3. Install the agent on the local machine, setting it to the IP or DNS of the running TeamCity server. That way you'll be able to check connectivity live before making this machine your official agent image to reuse.
    4. Once the agent is installed, verify that the TC server can see it and use it; see the firewall and security group steps (12-13) above if it can't. Once it works, you can take steps to make this image your reusable agent image.

    Next, here is a copy-paste of several steps to take from http://confluence.jetbrains.net/display/TCD5/Setting+Up+TeamCity+for+Amazon+EC2 :

    - Configure the system so that the agent is started on machine boot (and make sure the TeamCity server is accessible on machine boot).
    - Test the setup by rebooting the machine and checking that the agent connects normally to the server.
    - Prepare the image for bundling:
      - Remove any temporary/history information in the system.
      - Stop the agent (under Windows, stop the service but leave it in Automatic startup type).
      - Delete agent logs and temp directories (not necessary).
      - Delete the "<Agent Home>/conf/amazon-*" file (not necessary).
      - Change conf/buildAgent.properties to remove the properties: name, serverAddress, authToken (not necessary).

    Now we need to: make an AMI from the running instance, and configure TeamCity EC2 support on the TeamCity server.

    Making an AMI:

    1. Check the instance of the agent in the EC2 dashboard instance list, and select Instance Actions -> Create Image (EBS AMI).
    2. You'll see the image pending in the AMIs list in the EC2 dashboard. This could take 30 minutes or more; meanwhile, we can configure the cloud support in the TeamCity server.
    3. COPY THE AMI ID to the clipboard (it looks like ami-a88aa4ce).

    Configuring TeamCity for cloud:

    1. In TeamCity, click on "Agents" and then on the "Cloud" tab. This is where you will control your cloud agents. To configure new cloud agents based on AMIs, click on the link to the "configuration page".
    2. Create a new profile and select Amazon EC2 as the cloud type.
    3. Paste the AMI ID you copied to the clipboard into the "Images" field.
    4. Select an availability zone that is the same as the one your instance is running on, for the best communication performance between them.
    5. Make sure you select the 'TC' security group.
    6. Hopefully, that should be it, and TeamCity will try to instantiate new instances on demand. Note that it may take around 10 minutes for an agent to become available to TeamCity from the time it's started.
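    As a small illustration of the buildAgent.properties step above (the property names are as quoted from the JetBrains page; the values are simply cleared so each cloud agent generates fresh ones on first start):

      # <Agent Home>/conf/buildAgent.properties
      name=
      serverAddress=
      authToken=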

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's Control plus the comma key). This pops up the Navigate To dialog. As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy, or at least substring, matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provides the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40 ), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft.

    The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects in the solution. First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx . It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider.

    Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx . I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.

    The Navigate To interfaces

    Back to Navigate To. Summarizing the MSDN reference documentation, you need to implement the following interfaces:

    - INavigateToItemProviderFactory: This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]
    - INavigateToItemProvider: Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.
    - INavigateToItemDisplayFactory: Your INavigateToItemProvider hands NavigateToItems back to the Navigate To framework. But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.
    - INavigateToItemDisplay: Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.

    Carrying out the search

    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier, because you know that the user can't be changing things in editors and the IDE while the dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE - so you need to be aware of that if you want to interact with editors or other parts of the IDE UI.

    When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand it back. Your INavigateToItemProvider will then get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request - you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message - that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search.

    While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.

    Displaying the Results

    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, whose NavigateTo() method must be able to display the symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in it, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.

    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:

    - AdditionalInformation: The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". In the original article's screenshot of the Navigate To dialog, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.
    - DescriptionItems: You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the categories on the left and the descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).

    Summary

    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples of how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
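    A minimal skeleton of the MEF entry point described above (a sketch only; the class names are invented and error handling is omitted):

      using System;
      using System.ComponentModel.Composition;
      using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

      [Export(typeof(INavigateToItemProviderFactory))]
      public class CobolNavigateToItemProviderFactory : INavigateToItemProviderFactory
      {
          public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                      out INavigateToItemProvider provider)
          {
              // Hand back the provider whose StartSearch()/StopSearch()
              // do the real work on a background thread.
              provider = new CobolNavigateToItemProvider(serviceProvider);
              return true;
          }
      }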

    Read the article

  • COM+ applications deployment behaves different on different systems

    - by sharptooth
    In order to give my COM+ application enough credentials, I want its components to be instantiated under the "Local Service" account. When I create a server application with the wizard on Win2k3, it offers a choice of identity under which to instantiate components, and "Local Service" is one of the choices. But on WinXP, "Local Service" is not offered at all in the wizard. When I open the "Identity" tab of the COM+ application under Win2k3, there's a handful of choices, "Local Service" included, and I can select any of them. But on WinXP the same "Identity" tab only offers "Interactive user". What does this difference depend on? Does it depend on the operating system or on something else?

    Read the article

  • How to maximise a Single Window across Multiple Monitors?

    - by LePressentiment
    Say I've got 3 monitors. I'll illustrate with http://www.texpaste.com/. What would I need for:

    - The leftmost toolbox to be on the leftmost monitor?
    - The center input field on the center monitor?
    - The rightmost output display box on the rightmost monitor?

    Another example, with 2 monitors: http://mathb.in/. Analogously, in this case I'd want:

    - The input field for the code on the left monitor?
    - The right output display box on the right monitor?

    Read the article

  • Servlet stops without giving any exception

    - by Fahim
    Hi, I have implemented a servlet hosted on a Tomcat 6 server on Mandriva Linux. I have been able to make the client communicate with the servlet. In response to a request, the servlet tries to instantiate another class (named KalmanFilter) located in the same directory. KalmanFilter tries to create four matrices (using the Jama Matrix package). But at this point the servlet stops without giving any exception! However, from another test code in the same directory I have been able to create an instance of the KalmanFilter class and proceed without any error. The problem occurs only when my servlet tries to instantiate the KalmanFilter class and create the matrices. Any idea? Below is the code.

    MyServlet.java:

      import javax.servlet.*;
      import javax.servlet.http.*;
      import java.io.*;
      import java.util.*;

      public class MyServlet extends HttpServlet {

          public void doPost(HttpServletRequest request, HttpServletResponse response)
                  throws ServletException, IOException {
              doGet(request, response);
          }

          public void doGet(HttpServletRequest request, HttpServletResponse response)
                  throws ServletException {
              PrintWriter out = null; // response.getWriter();
              try {
                  System.out.println("creating new KalmanFilter");
                  KalmanFilter filter = new KalmanFilter();
                  out = response.getWriter();
                  out.print("filter created");
              } catch (Exception ex) {
                  ex.printStackTrace();
                  System.out.println("Exception in doGet(): " + ex.getMessage());
                  ex.printStackTrace(out);
              }
          }
      }

    KalmanFilter.java:

      import Jama.Matrix;

      public class KalmanFilter {

          protected Matrix X, X0;
          protected Matrix F, Q;
          // protected Matrix F, B, U, Q;
          protected Matrix H, R;
          protected Matrix P, P0;

          private final double EPSILON = 0.001;

          public KalmanFilter() {
              System.out.println("from constructor of KalmanFilter");
              createInitialMatrices();
          }

          private void createInitialMatrices() {
              System.out.println("from KalmanFilter.createInitialMatrices()");

              double[][] pVals = { {1.0, 0.0}, {0.0, 1.0} };
              double[][] qVals = { {EPSILON, EPSILON}, {EPSILON, EPSILON} };
              double[][] hVals = { {1.0, 0.0}, {0.0, 1.0}, {1.0, 0.0}, {0.0, 1.0} };
              double[][] xVals = { {0.0}, {0.0} };

              System.out.println("creating P Q H X matrices in createInitialMatrices()");
              try {
                  this.P = new Matrix(pVals);
                  System.out.println("created P matrix in createInitialMatrices()");
                  this.Q = new Matrix(qVals);
                  System.out.println("created Q matrix in createInitialMatrices()");
                  this.H = new Matrix(hVals);
                  System.out.println("created H matrix in createInitialMatrices()");
                  this.X = new Matrix(xVals);
                  System.out.println("created X matrix in createInitialMatrices()");
                  System.out.println("created P Q H X matrices in createInitialMatrices()");
              } catch (Exception e) {
                  System.out.println("Exception from createInitialMatrices(): " + e.getMessage());
                  e.printStackTrace();
              }
              System.out.println("returning from createInitialMatrices()");
          }
      }
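    A note on the silent failure: one likely culprit (an inference from the symptoms, not confirmed in the post) is that the Jama jar is missing from the web application's classpath (e.g. WEB-INF/lib), in which case new Matrix(...) throws NoClassDefFoundError. That is an Error, not an Exception, so the catch (Exception ex) blocks above never see it and the request dies silently. Broadening the catch, for diagnosis only, would make the failure visible:

      try {
          KalmanFilter filter = new KalmanFilter();
          out = response.getWriter();
          out.print("filter created");
      } catch (Throwable t) { // also catches NoClassDefFoundError; diagnostic use only
          t.printStackTrace();
      }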

    Read the article

  • Dynamic model choice field in django formset using multiple select elements

    - by Aryeh Leib Taurog
    I posted this question on the django-users list, but haven't had a reply there yet. I have models that look something like this:

      class ProductGroup(models.Model):
          name = models.CharField(max_length=10, primary_key=True)

          def __unicode__(self):
              return self.name

      class ProductRun(models.Model):
          date = models.DateField(primary_key=True)

          def __unicode__(self):
              return self.date.isoformat()

      class CatalogItem(models.Model):
          cid = models.CharField(max_length=25, primary_key=True)
          group = models.ForeignKey(ProductGroup)
          run = models.ForeignKey(ProductRun)
          pnumber = models.IntegerField()

          def __unicode__(self):
              return self.cid

          class Meta:
              unique_together = ('group', 'run', 'pnumber')

      class Transaction(models.Model):
          timestamp = models.DateTimeField()
          user = models.ForeignKey(User)
          item = models.ForeignKey(CatalogItem)
          quantity = models.IntegerField()
          price = models.FloatField()

    Let's say there are about 10 ProductGroups and 10-20 relevant ProductRuns at any given time. Each group has 20-200 distinct product numbers (pnumber), so there are at least a few thousand CatalogItems.

    I am working on formsets for the Transaction model. Instead of a single select menu with several thousand CatalogItems for the ForeignKey field, I want to substitute three drop-down menus, for group, run, and pnumber, which together uniquely identify the CatalogItem. I'd also like to limit the choices in the second two drop-downs to those runs and pnumbers which are available for the currently selected product group (I can update them via AJAX if the user changes the product group, but it's important that the initial page load as described without relying on AJAX). What's the best way to do this? As a point of departure, here's what I've tried/considered so far:

    1. My first approach was to exclude the item foreign key field from the form, add the substitute drop-downs by overriding the add_fields method of the formset, and then extract the data and populate the fields manually on the model instances before saving them. It's straightforward and pretty simple, but it's not very reusable and I don't think it is the right way to do this.

    2. My second approach was to create a new field which inherits both MultiValueField and ModelChoiceField, and a corresponding MultiWidget subclass. This seems like the right approach (a rough sketch follows after this list). As Malcolm Tredinnick put it in a django-users discussion, "the 'smarts' of a field lie in the Field class." The problem I'm having is when/where to fetch the lists of choices from the db. The code I have now does it in the Field's __init__, but that means I have to know which ProductGroup I'm dealing with before I can even define the Form class, since I have to instantiate the Field when I define the form. So I have a factory function which I call at the last minute from my view - after I know what CatalogItems I have and which product group they're in - to create form/formset classes and instantiate them. It works, but I wonder if there's a better way. After all, the field should be able to determine the correct choices much later on, once it knows its current value. Another problem is that my implementation limits the entire formset to transactions relating to (CatalogItems from) a single ProductGroup.

    3. A third possibility I'm entertaining is to put it all in the Widget class. Once I have the related model instance, or the cid, or whatever the widget is given, I can get the ProductGroup and construct the drop-downs. This would solve the issues with my second approach, but doesn't seem like the right approach.
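    A rough sketch of the MultiValueField/MultiWidget approach from point 2 (names and the choice/queryset wiring are invented for illustration):

      from django import forms
      from django.forms.widgets import MultiWidget, Select
      from myapp.models import CatalogItem  # assumed app path

      class CatalogItemWidget(MultiWidget):
          def __init__(self, group_choices, run_choices, pnumber_choices, attrs=None):
              widgets = (Select(choices=group_choices),
                         Select(choices=run_choices),
                         Select(choices=pnumber_choices))
              super(CatalogItemWidget, self).__init__(widgets, attrs)

          def decompress(self, value):
              # Split a CatalogItem pk back into the three dropdown values.
              if value:
                  item = CatalogItem.objects.get(pk=value)
                  return [item.group_id, item.run_id, item.pnumber]
              return [None, None, None]

      class CatalogItemField(forms.MultiValueField):
          def __init__(self, group_qs, run_qs, pnumbers, **kwargs):
              fields = (forms.ModelChoiceField(queryset=group_qs),
                        forms.ModelChoiceField(queryset=run_qs),
                        forms.ChoiceField(choices=pnumbers))
              super(CatalogItemField, self).__init__(fields, **kwargs)

          def compress(self, data_list):
              # Resolve the three selections to the one CatalogItem they identify.
              group, run, pnumber = data_list
              return CatalogItem.objects.get(group=group, run=run, pnumber=pnumber)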

    Read the article

  • Is this a right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions and also some blog entries and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question.

    I'm developing a simple web application, and I'd like to create a reusable data access layer which I can later reuse in other solutions, 99% of which will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following:

    - I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor NHibernate itself.
    - I want the business logic layer to have the least possible number of assumptions about the data access layer.
    - I want the data access layer to be as simplistic and easy to use as possible. This is going to be a simple project, so I don't want to overcomplicate anything.
    - I want the data access layer to be as non-intrusive as possible.

    With all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some stuff about the unit of work pattern. I also looked around and checked out various implementations, including FubuMVC contrib and SharpArchitecture, and stuff on some blogs. I found out that most of these operate on the same principle: they create a "unit of work" which is instantiated when a repository is instantiated; they start a transaction, do stuff, commit, and then start all over again. So, only one ISession per repository and that's it. The client code then needs to instantiate a repository, do stuff with it, and then dispose of it. This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else.

    I found out that NHibernate already has something which makes custom "unit of work" implementations unnecessary, and that is the CurrentSessionContext class. If I configure the session context correctly and do the clean-up when necessary, I'm good to go. So, I came up with this:

    I have a static class called NHibernateHelper. Firstly, it has a static property called CurrentSessionFactory, which upon first call instantiates a session factory and stores it in a static field. (One ISessionFactory per AppDomain is good enough.) Then, more importantly, it has a CurrentSession static property, which checks if there is an ISession bound to the current session context; if not, it creates one and binds it, and then it returns the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest, although for the unit tests I configured ThreadStaticSessionContext), it should work seamlessly. After creating and binding an ISession, it hooks an event handler to the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.) A sketch of this helper appears below.

    With all this set up, NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates with the ISession from the NHibernateHelper.CurrentSession property and exposes some functionality through it.

    I'm curious what you think about this. Is it a valid way of thinking, or am I completely off track here?
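    For reference, a minimal sketch of the helper being described, reconstructed from the prose above (configuration details and the EndRequest hookup are elided; names are assumptions):

      using NHibernate;
      using NHibernate.Cfg;
      using NHibernate.Context;

      public static class NHibernateHelper
      {
          private static ISessionFactory _factory;

          public static ISessionFactory CurrentSessionFactory
          {
              get
              {
                  // One ISessionFactory per AppDomain, built lazily on first use.
                  if (_factory == null)
                      _factory = new Configuration().Configure().BuildSessionFactory();
                  return _factory;
              }
          }

          public static ISession CurrentSession
          {
              get
              {
                  // Bind a new ISession to the configured session context
                  // (WebSessionContext here, ThreadStaticSessionContext in tests).
                  if (!CurrentSessionContext.HasBind(CurrentSessionFactory))
                      CurrentSessionContext.Bind(CurrentSessionFactory.OpenSession());
                  return CurrentSessionFactory.GetCurrentSession();
              }
          }
      }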

    Read the article

  • Java multiple class compositing and boilerplate reduction

    - by h2g2java
    We all know why Java does/should not have multiple inheritance, so this is not questioning what has already been debated till-the-cows-come-home. This discusses what we would do when we wish to create a class that has the characteristics of two or more other classes. Probably, most of us would do something like this to "inherit" from three classes. For simplicity, I left out the constructor:

      class Car extends Vehicle {
          final public Transport transport;
          final public Machine machine;
      }

    So Car directly inherits the methods and objects of the Vehicle class, but has to refer to transport and machine explicitly to reach the objects instantiated in Transport and Machine:

      Car car = new Car();
      car.drive();                   // from Vehicle
      car.transport.isAmphibious();  // from Transport
      car.machine.getCO2Footprint(); // from Machine

    I thought this was a good idea until I encountered frameworks that require setter and getter methods. For example, the XML

      <Car amphibious='false' footPrint='1000' model='Fordstatic999'/>

    would look for the methods setAmphibious(..), setFootPrint(..) and setModel(..). Therefore, I have to project the methods from the Transport and Machine classes:

      class Car extends Vehicle {
          final public Transport transport;
          final public Machine machine;

          public void setAmphibious(boolean b) {
              this.transport.setAmphibious(b);
          }

          public void setFootPrint(String fp) {
              this.machine.setFootPrint(fp);
          }
      }

    This is OK if there are just a few characteristics. Right now, I am trying to adapt all of SmartGWT into GWT UiBinder, especially those classes that are not GWT widgets, and there are lots of characteristics to project. Wouldn't it be nice if there existed some form of annotation framework like this:

      class Car extends Vehicle
          @projects {Transport @projects{Machine @projects Guzzler}} {
          /* No need to explicitly instantiate Transport, Machine or Guzzler */
          ....
      }

    where, in case of common names of characteristics, the characteristics of Machine would take precedence over Guzzler's, Transport's would take precedence over Machine's, and Vehicle's would take precedence over Transport's. The annotation framework would then instantiate Transport, Machine and Guzzler as hidden members of Car and expand to break out the protected/public characteristics, in the precedence dictated by the @projects annotation sequence, into actual source code or into byte-code - preferably into byte-code. So if the setFootPrint method is found in both Machine and Guzzler, only Machine's would be projected.

    Questions:

    - Don't you think this would be a good framework to have?
    - Does such a framework already exist? Tell me where/what.
    - Is there an Eclipse plugin that does it?
    - Is there a proposal or plan anywhere that you know about for such an annotation framework?

    It would be wonderful too if the annotation/plugin framework let me specify that boolean, int, or whatever else needs to be converted from String, and did the conversion/parsing for me too. Please advise, somebody. I hope the wording of my question was clear enough.

    Edited: To avoid OO enthusiasts jumping to conclusions, I have renamed the title of this question.
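    On the "does such a framework already exist" question: Project Lombok's @Delegate annotation comes close to the projection described here, generating the forwarding methods at compile time (a pointer added for reference, not part of the original post; note its conflict handling differs from the hypothetical @projects precedence):

      import lombok.experimental.Delegate;

      class Car extends Vehicle {
          // Lombok generates forwarders on Car for the public methods of
          // Transport and Machine, e.g. setAmphibious(..) and setFootPrint(..).
          @Delegate private final Transport transport = new Transport();
          @Delegate private final Machine machine = new Machine();
      }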

    Read the article

  • Fastest way to pad a number in Java to a certain number of digits

    - by Martin
    I'm trying to create a well-optimised bit of code to create a number X digits in length (where X is read from a runtime properties file), based on a DB-generated sequence number (Y), which is then used as a folder name when saving a file. I've come up with three ideas so far, the fastest of which is the last one, but I'd appreciate any advice people may have on this...

    1. Instantiate a StringBuilder with initial capacity X. Append Y. While length < X, insert a zero at position zero.

    2. Instantiate a StringBuilder with initial capacity X. While length < X, append a zero. Create a DecimalFormat based on the StringBuilder value, and then format the number when it's needed.

    3. Create a new int of Math.pow(10, X) and add Y. Use String.valueOf() on the new number and then substring(1) it.

    The second one can obviously be split into outside-loop and inside-loop sections. So, any tips? Using a for-loop of 10,000 iterations, I'm getting similar timings from the first two, and the third method is approximately ten times faster. Does this seem correct? Full test-method code below...

      // Setup test variables
      int numDigits = 9;
      int testNumber = 724;
      int numIterations = 10000;
      String folderHolder = null;
      DecimalFormat outputFormat = new DecimalFormat("#,##0");

      // StringBuilder test
      long before = System.nanoTime();
      for (int i = 0; i < numIterations; i++) {
          StringBuilder sb = new StringBuilder(numDigits);
          sb.append(testNumber);
          while (sb.length() < numDigits) {
              sb.insert(0, 0);
          }
          folderHolder = sb.toString();
      }
      long after = System.nanoTime();
      System.out.println("01: " + outputFormat.format(after - before) + " nanoseconds");
      System.out.println("Sanity check: Folder = \"" + folderHolder + "\"");

      // DecimalFormat test
      before = System.nanoTime();
      StringBuilder sb = new StringBuilder(numDigits);
      while (sb.length() < numDigits) {
          sb.append(0);
      }
      DecimalFormat formatter = new DecimalFormat(sb.toString());
      for (int i = 0; i < numIterations; i++) {
          folderHolder = formatter.format(testNumber);
      }
      after = System.nanoTime();
      System.out.println("02: " + outputFormat.format(after - before) + " nanoseconds");
      System.out.println("Sanity check: Folder = \"" + folderHolder + "\"");

      // Substring test
      before = System.nanoTime();
      int baseNum = (int) Math.pow(10, numDigits);
      for (int i = 0; i < numIterations; i++) {
          int newNum = baseNum + testNumber;
          folderHolder = String.valueOf(newNum).substring(1);
      }
      after = System.nanoTime();
      System.out.println("03: " + outputFormat.format(after - before) + " nanoseconds");
      System.out.println("Sanity check: Folder = \"" + folderHolder + "\"");
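    For completeness, the JDK also offers a one-liner for zero-padding that could be dropped into the same harness (generally slower than the substring trick, but worth measuring rather than assuming):

      // Zero-pads testNumber to numDigits digits, e.g. 724 -> "000000724"
      String folder = String.format("%0" + numDigits + "d", testNumber);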

    Read the article

  • Accessing multiple view controllers in page controller

    - by Apple Delegates
    I am showing views on iPad like a book; in landscape, a single spread shows two views. I want to add more views, so that when the view is flipped, the third and fourth views appear, and so on. I am using the code below. I add the view controllers to an array, but it crashes in the orientation method at this line: "ContentViewController *currentViewController = [self.pageViewController.viewControllers objectAtIndex:0];".

      - (void)viewDidLoad
      {
          [super viewDidLoad];

          // Instantiate the model array
          self.modelArray = [[NSMutableArray alloc] init];
          for (int index = 1; index <= 12; index++) {
              [self.modelArray addObject:[NSString stringWithFormat:@"Page %d", index]];
          }

          // Step 1: Instantiate the UIPageViewController.
          self.pageViewController = [[UIPageViewController alloc]
              initWithTransitionStyle:UIPageViewControllerTransitionStylePageCurl
                navigationOrientation:UIPageViewControllerNavigationOrientationHorizontal
                              options:nil];

          // Step 2: Assign the delegate and datasource as self.
          self.pageViewController.delegate = self;
          self.pageViewController.dataSource = self;

          // Step 3: Set the initial view controllers.
          ViewOne *one = [[ViewOne alloc] initWithNibName:@"ViewOne" bundle:nil];
          viewTwo *two = [[viewTwo alloc] initWithNibName:@"ViewTwo" bundle:nil];
          ContentViewController *contentViewController =
              [[ContentViewController alloc] initWithNibName:@"ContentViewController" bundle:nil];
          contentViewController.labelContents = [self.modelArray objectAtIndex:0];

          viewControllers = [NSArray arrayWithObjects:contentViewController, one, two, nil];
          [self.pageViewController setViewControllers:viewControllers
                                            direction:UIPageViewControllerNavigationDirectionForward
                                             animated:NO
                                           completion:nil];

          // Step 4: View controller containment.
          // Add the pageViewController as the child view controller,
          // add its view to the current view, then call
          // didMoveToParentViewController: on the child.
          [self addChildViewController:self.pageViewController];
          [self.view addSubview:self.pageViewController.view];
          [self.pageViewController didMoveToParentViewController:self];

          // Step 5: Set the pageViewController's frame as an inset rect.
          CGRect pageViewRect = self.view.bounds;
          pageViewRect = CGRectInset(pageViewRect, 40.0, 40.0);
          self.pageViewController.view.frame = pageViewRect;

          // Step 6: Assign the pageViewController's gestureRecognizers
          // to our view's gestureRecognizers property.
          self.view.gestureRecognizers = self.pageViewController.gestureRecognizers;
      }

      - (UIPageViewControllerSpineLocation)pageViewController:(UIPageViewController *)pageViewController
                         spineLocationForInterfaceOrientation:(UIInterfaceOrientation)orientation
      {
          if (UIInterfaceOrientationIsPortrait(orientation)) {
              // Set the array with only 1 view controller
              UIViewController *currentViewController =
                  [self.pageViewController.viewControllers objectAtIndex:0];
              NSArray *viewControllers = [NSArray arrayWithObject:currentViewController];
              [self.pageViewController setViewControllers:viewControllers
                                                direction:UIPageViewControllerNavigationDirectionForward
                                                 animated:YES
                                               completion:NULL];

              // Important - set the doubleSided property to NO.
              self.pageViewController.doubleSided = NO;

              // Return the spine location
              return UIPageViewControllerSpineLocationMin;
          } else {
              ContentViewController *currentViewController =
                  [self.pageViewController.viewControllers objectAtIndex:0];
              NSUInteger currentIndex = [self.modelArray
                  indexOfObject:[(ContentViewController *)currentViewController labelContents]];

              if (currentIndex == 0 || currentIndex % 2 == 0) {
                  UIViewController *nextViewController =
                      [self pageViewController:self.pageViewController
                          viewControllerAfterViewController:currentViewController];
                  viewControllers = [NSArray arrayWithObjects:currentViewController,
                                     nextViewController, nil];
              } else {
                  UIViewController *previousViewController =
                      [self pageViewController:self.pageViewController
                          viewControllerBeforeViewController:currentViewController];
                  viewControllers = [NSArray arrayWithObjects:previousViewController,
                                     currentViewController, nil];
              }

              // Now, set the viewControllers property of UIPageViewController
              [self.pageViewController setViewControllers:viewControllers
                                                direction:UIPageViewControllerNavigationDirectionForward
                                                 animated:YES
                                               completion:NULL];
              return UIPageViewControllerSpineLocationMid;
          }
      }
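    A note on the crash: one plausible cause (an inference from the code above, not a confirmed fix) is that index 0 of pageViewController.viewControllers is not always a ContentViewController - the initial array is [contentViewController, one, two] - so sending labelContents to a ViewOne or viewTwo instance raises an unrecognized-selector exception. A defensive check would look like:

      UIViewController *current = [self.pageViewController.viewControllers objectAtIndex:0];
      if ([current isKindOfClass:[ContentViewController class]]) {
          NSUInteger currentIndex = [self.modelArray
              indexOfObject:[(ContentViewController *)current labelContents]];
          // ... proceed with the spine-location logic above
      }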

    Read the article

  • Should interfaces inherit interfaces

    - by dreza
    Although this is a general question, it is also specific to a problem I am currently experiencing. I currently have an interface specified in my solution called:

      public interface IContextProvider
      {
          IDataContext DataContext { get; set; }
          IAreaContext AreaContext { get; set; }
      }

    This interface is often used throughout the program, and hence I have easy access to the objects I need. However, at a fairly low level of one part of my program I need access to another class that will use IAreaContext and perform some operations off it. So I have created another factory interface to do this creation:

      public interface IEventContextFactory
      {
          IEventContext CreateEventContext(int eventId);
      }

    I have a class that implements IContextProvider and is injected using Ninject. The problem I have is that the area where I need to use this IEventContextFactory has access to the IContextProvider only, and itself uses another class which will need this new interface. I don't want to have to instantiate this implementation of IEventContextFactory at the low level, and would rather work with the IEventContextFactory interface throughout. However, I also don't want to have to inject another parameter through the constructors just to have it passed through to the class that needs it, i.e.:

      // example of problem
      public class MyClass
      {
          public MyClass(IContextProvider context, IEventContextFactory eventContextFactory)
          {
              _context = context;
              _eventContextFactory = eventContextFactory;
          }

          public void DoSomething()
          {
              // the only place _eventContextFactory is used in this class
              // is to pass it through
              var myClass = new MyChildClass(_eventContextFactory);
              myClass.PerformCalculation();
          }
      }

    So my main question is: would it be acceptable, or is it even common or good practice, to have an interface inherit another interface, like this:

      public interface IContextProvider : IEventContextFactory

    or should I consider better alternatives to achieving what I need? If I have not provided enough information to give suggestions, let me know and I can provide more.

    Read the article

  • Debugging .NET 2.0 assembly from unmanaged code in VS2010?

    - by Rick Strahl
    I’ve run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I’ve been utterly blocked by VS 2010’s apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code. Here’s what I’m actually doing (a simplified scenario to demonstrate): I have a .NET 2.0 assembly that is compiled for COM Interop. I compile the project with a .NET 2.0 target and register it for COM Interop, set a breakpoint in the .NET component in one of the class methods, then instantiate the .NET component via COM interop and call the method. The result is that the COM call works fine but the debugger never triggers on the breakpoint. If I take that same assembly and target it at .NET 4.0, without any other changes, everything works as expected – the breakpoint set in the assembly project triggers just fine. The easy answer to this problem seems to be “Just switch to .NET 4.0”, but unfortunately the application and the way the runtime is hosted have a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame, as once again the promise of backwards compatibility is broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to application requirements. I’ve been searching high and low, experimenting back and forth, and posted a few questions on the MSDN forums, but I haven’t gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from unmanaged code. Incidentally, debugging .NET 2.0 targeted assemblies works fine when running with a managed startup application – the issue seems specific to an unmanaged startup. My particular issue is with custom runtime hosting, which at first I thought was the problem. But the same issue manifests when using COM Interop against a .NET 2.0 assembly, so the hosting is probably not the issue. Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario? © Rick Strahl, West Wind Technologies, 2005-2010
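    A workaround that may be worth trying here (an editorial suggestion, not something from the post): the useLegacyV2RuntimeActivationPolicy switch lets the legacy v2 hosting and COM interop activation paths bind to the .NET 4.0 runtime, which VS 2010 can debug, without changing the hosting code itself. A sketch, where MyHost.exe is a hypothetical name for the unmanaged host executable:

    <?xml version="1.0"?>
    <!-- MyHost.exe.config: placed next to the unmanaged host executable -->
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <!-- Legacy activation paths (CorBindToRuntimeEx, COM interop) bind to the
             v4 CLR, which can load the .NET 2.0 assembly and is debuggable in VS 2010 -->
        <supportedRuntime version="v4.0.30319"/>
      </startup>
    </configuration>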

    Read the article

  • Evolution of an Application: how to manage and improve core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve, and one of the ways in which it is evolving is into a multi-brand application – in this case several different companies using the application, with different templates/content and some slight business logic changes between them. The problem I'm facing is implementing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, such as using an alternative mailing list provider or capturing some extra data in a form. I don't want to have if (brand === x) { ... } else { ... } all over the site, especially as most of what needs to be changed can be handled by extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best, especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered (a sketch of option 2 follows below):
    1) Use a static loader, similar to Zend_Loader, which takes the class being requested, has knowledge of the brand, and can then return the correct object: $class = App_Loader::getObject('User', $brand);
    2) Factory classes. We use these in the application already for Products, but we could utilise them here also to provide a transparent interface to the class.
    3) Routing the page request to a brand-specific controller. This, however, seems like it would duplicate a lot of code/logic.
    Is there a pattern or something else I should be considering to solve this problem? And how do you manage a growing project that has multiple custom instances in production?
    Update: This is a PHP application, so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.
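    Here is that sketch of option 2, a registry-backed factory keyed by brand. It is written in C# for concreteness and the class names are invented; the same shape translates directly to a PHP factory:

    using System;
    using System.Collections.Generic;

    public interface IUserService { /* brand-sensitive behaviour */ }
    public class DefaultUserService : IUserService { }
    public class BrandXUserService : DefaultUserService { } // overrides only what differs

    public static class BrandFactory
    {
        // One registration per brand; unknown brands fall back to the default,
        // so a new brand only needs an entry when its behaviour actually diverges.
        private static readonly Dictionary<string, Func<IUserService>> Registry =
            new Dictionary<string, Func<IUserService>>
            {
                ["default"] = () => new DefaultUserService(),
                ["brandX"]  = () => new BrandXUserService(),
            };

        public static IUserService CreateUserService(string brand)
        {
            Func<IUserService> create;
            return Registry.TryGetValue(brand, out create) ? create() : Registry["default"]();
        }
    }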

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties
    Seed (Int32) - The first row number, or seed value.
    Increment (Int32) - The value added to the previous row number to make the next row number.
    OutputVariable (String) - The name of the variable into which the final row number is written post execution. (Optional)
    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of a property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation
    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads
    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
    Row Number Transformation for SQL Server 2005
    Row Number Transformation for SQL Server 2008
    Row Number Transformation for SQL Server 2012

    Version History
    SQL Server 2012: Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008: Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005: Version 1.2.0.34 - Updated installer. (25 Jun 2008)
    Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
    Version 1.0.0.0 - Public Release for SQL Server 2005 IDW 15 June CTP. (29 Aug 2005)

    Code Sample
    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

    Package package = new Package();
    package.Name = "Data Generator & Row Number";

    // Add the Data Flow Task
    Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

    // Get the task host wrapper, and the Data Flow task
    TaskHost taskHost = taskExecutable as TaskHost;
    MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

    // Add the Data Generator Source
    IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
    componentSource.Name = "Data Generator";
    componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
    CManagedComponentWrapper instanceSource = componentSource.Instantiate();
    instanceSource.ProvideComponentProperties();
    instanceSource.SetComponentProperty("RowCount", 10000);

    // Add the Row Number Transformation
    IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
    componentRowNumber.Name = "Row Number";
    componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
    CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
    instanceRowNumber.ProvideComponentProperties();
    instanceRowNumber.SetComponentProperty("Increment", 10);

    // Connect the two components together
    IDTSPath100 path = dataFlowTask.PathCollection.New();
    path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

    #if DEBUG
    // Save package to disk, DEBUG only
    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
    #endif

    package.Execute();

    foreach (DtsError error in package.Errors)
    {
        Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
        Console.WriteLine(" SubComponent : {0}", error.SubComponent);
        Console.WriteLine(" Description : {0}", error.Description);
    }

    package.Dispose();

    Troubleshooting
    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.
    Once installation is complete you need to manually add the task to the toolbox before you will see it and are able to add it to packages - see How do I install a task or transform component? Please also make sure you have installed a minimum of SP1 for SQL Server 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get an error Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'. when trying to open the user interface, it implies that your development machine has not had SP1 applied. Very occasionally we see a problem with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, SSIS can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to Cannot convert System.Int32 to System.Decimal. Unfortunately this is beyond our control and there appears to be no pattern as to when it happens; if you do have more information we would be happy to hear it. To fix the issue you can manually edit the package file. In Visual Studio, right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name or by the component name, and then change the incorrect property data types highlighted below from Decimal to Int32.
    <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" … >
        <properties>
            <property id="38" name="UserComponentTypeName" …>
            <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
            <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
            ...
    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • Why a static main method in Java and C#, rather than a constructor?

    - by Konrad Rudolph
    Why did (notably) Java and C# decide to have a static method as their entry point – rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor, which, at least to me, seems more natural? I’m interested in a definitive answer from a primary or secondary source, not mere speculations. This has been asked before. Unfortunately, the existing answers merely beg the question. In particular, the following answers don’t satisfy me, as I deem them incorrect: There would be ambiguity if the constructor were overloaded. – In fact, C# (as well as C and C++) allows different signatures for Main, so the same potential ambiguity exists, and is dealt with. A static method means no objects can be instantiated before, so the order of initialisation is clear. – This is just factually wrong; some objects are instantiated before (e.g. in a static constructor). So they can be invoked by the runtime without having to instantiate a parent object. – This is no answer at all. Just to justify further why I think this is a valid and interesting question: Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point1. Neither Java nor C# technically needs a main method. Well, C# needs one to compile, but Java does not even need that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principles of Java and C#. To be clear, there isn’t a specific disadvantage to having a static main method; it’s just distinctly odd, which made me wonder if there was some technical rationale behind it. I’m interested in a definitive answer from a primary or secondary source, not mere speculations. 1 Although there is a callback (Startup) which may intercept this.
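    To make the contrast concrete, a small sketch (purely illustrative, not from the question) of the conventional static entry point next to the constructor-as-entry-point design the question envisions:

    // What Java and C# actually specify: a static entry point, no instance required.
    public static class Program
    {
        public static void Main(string[] args)
        {
            // Creating an application object is an explicit, separate step.
            var app = new Application(args);
            app.Run();
        }
    }

    // What the question envisions instead: the runtime instantiating an
    // application class directly, with the constructor as the entry point.
    public class Application
    {
        public Application(string[] args) { /* initialise from arguments */ }
        public void Run() { /* main loop */ }
    }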

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: A box-like Prefab which is imported from Blender automatically (I have the .blend file in the Assets folder). A script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view): public Collider terrain; public GameObject aStarCellHighlightPrefab; This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method: void Start () { cursorPositionOnTerrain = new RaycastHit(); aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab, new Vector3(300,300,300), terrain.transform.rotation); aStarCellHighlight.name = "cellHighlight"; aStarCellHighlight.transform.parent = terrain.transform; aStarCellHighlight.transform.localScale = new Vector3(100,100,100); } and at first thought it didn't work. However, I later noticed that it did in fact work, in the sense that the scale was applied right at the start, but right after that the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that it now stays scaled all the time: void Update () { aStarCellHighlight.transform.localScale = new Vector3(100,100,100); //... } However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc.). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a Ray cast from the camera on to the terrain, but that seems to happen without such delays). My (2-part) question is: Why doesn't the scale stick when I apply it at the beginning, in the Start() method, and why do I have to keep scaling it in the Update() method? And why does it take so long for the scale to "apply/show up"?
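    One variation worth trying (a suggestion, not a confirmed fix): parent the instance with SetParent(parent, false), which keeps the local values you assign instead of recomputing them to preserve world scale, and check whether the imported .blend prefab carries an Animation/Animator component that rewrites the transform every frame:

    void Start()
    {
        cursorPositionOnTerrain = new RaycastHit();
        aStarCellHighlight = (GameObject)Instantiate(
            aStarCellHighlightPrefab, new Vector3(300, 300, 300), terrain.transform.rotation);
        aStarCellHighlight.name = "cellHighlight";

        // false = keep the local position/rotation/scale assigned below, rather
        // than recomputing them so the world-space values stay unchanged.
        aStarCellHighlight.transform.SetParent(terrain.transform, false);
        aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
    }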

    Read the article

  • Box Collider isn't rotating with Game Object

    - by pek
    I have a method that creates a room by instantiating a prefab, placing it in a grid, and then re-sizing the collider based on a room definition (location in grid, rotation, width and height). Here is the method: public void CreateRoom(RoomAction action) { GameObject roomGameObject = Instantiate(this.roomPrefab, Vector3.zero, action.RoomPrefab.transform.rotation) as GameObject; roomGameObject.transform.parent = this.transform; roomGameObject.transform.localPosition = new Vector3(action.MansionOffsetX, 0, -action.MansionOffsetY) * this.blockSize; roomGameObject.transform.localPosition += new Vector3((action.Room.Width * this.blockSize) / 2, 0, -((action.Room.Height * this.blockSize) / 2)); BoxCollider roomCollider = roomGameObject.GetComponent<BoxCollider>(); roomCollider.isTrigger = true; roomCollider.center = new Vector3(0, this.height / 2, 0); roomCollider.size = new Vector3(action.Room.Width * this.blockSize, this.height, action.Room.Height * this.blockSize); roomGameObject.transform.RotateAroundLocal(roomGameObject.transform.up, Mathf.Deg2Rad * -90 * action.Rotation); } The problem I'm having is that, while the room rotates correctly, for some reason the collider isn't rotating with the game object. [Screenshot omitted.] Any idea what I'm doing wrong?
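    One thing worth ruling out (an editorial suggestion): RotateAroundLocal is deprecated and expects radians, which makes unit and axis mistakes easy. The equivalent rotation with the non-deprecated API, which takes degrees and rotates the object, and therefore its attached collider, in local space:

    // Same -90 degrees per rotation step, expressed with Transform.Rotate.
    roomGameObject.transform.Rotate(Vector3.up, -90f * action.Rotation, Space.Self);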

    Read the article

  • Unity3d: calculate the result of a transform without modifying transform object itself

    - by Heisenbug
    I'm in the following situation: I need to move an object in some way, basically rotating it around its parent's local position, or translating it in its parent's local space (I know how to do this). The amount of rotation and translation is known at runtime (it depends on several factors: the speed of the object, environment factors, etc.). The problem is the following: I can perform this transformation only if the resulting position of the transformed object fits certain criteria. An example could be this: the distance between the positions before and after the transformation must be less than a given threshold. (Actually the conditions could be several, and more complex.) The problem is that if I use the Transform.Rotate and Transform.Translate methods of my GameObject, I will lose the original Transform values. I think I can't copy the original Transform using Instantiate, for performance reasons. How can I perform such a task? I think I have more or less two possibilities:
    First: don't modify the GameObject's position through its Transform. Calculate what the position would be after the transform. If that position is legal, modify the transform through the Translate and Rotate methods.
    Second: store the original transform values somehow, transform the object using Translate and Rotate, and if the transformed position is illegal, restore the original one.
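    The first option needs no Transform mutation at all; the candidate position can be computed with plain quaternion and vector math. A minimal sketch (the class and method names are invented) mirroring what Transform.RotateAround would do:

    using UnityEngine;

    public static class TransformPrediction
    {
        // Where would t end up after rotating around its parent's position by
        // angleDeg about worldAxis? Computed without modifying t at all.
        public static Vector3 PositionAfterRotation(Transform t, Vector3 worldAxis, float angleDeg)
        {
            Vector3 pivot = t.parent != null ? t.parent.position : Vector3.zero;
            // The same math RotateAround applies: rotate the offset from the pivot.
            return pivot + Quaternion.AngleAxis(angleDeg, worldAxis) * (t.position - pivot);
        }
    }

    // Usage: apply the real rotation only if the predicted move passes the check.
    // Vector3 predicted = TransformPrediction.PositionAfterRotation(transform, Vector3.up, angle);
    // if (Vector3.Distance(transform.position, predicted) < threshold)
    //     transform.RotateAround(transform.parent.position, Vector3.up, angle);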

    Read the article

  • Does Unity have existing support for Timelines?

    - by Raven Dreamer
    I am planning the development of a game in Unity3D, and trying to come to terms with what the engine has already provided, and what I must code myself. The game itself is going to be a rhythm game, which means synching audio and graphical events so that they always play when they're supposed to. What I'm looking to avoid is a potential scenario of lag where either the audio or the graphics starts to progress faster than the other. When we discussed this type of coordinating system in my game design class back at university, my professor called this type of design a "Timeline" class. The idea being that you can instantiate one or more of these to progress at different rates, schedule things to happen in the future, and synch up periodic events. However, calling this a "Timeline" class seems to have been limited to my professor himself, as googling for whether a certain API features "Timeline" functionality has been a fruitless endeavor. Is there some more common name for this kind of functionality? Does Unity have any pre-existing methods to coordinate the scheduling of events like this, or is this the kind of thing that needs to be built onto the engine? And if it does, I'd appreciate being pointed towards some tutorials!
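    I am not aware of built-in support for this at the time of the question, but the scheduling core of such a "Timeline" class is small to write yourself. A minimal sketch (class and method names are invented), keyed to AudioSettings.dspTime, the audio hardware clock, which is the natural reference for a rhythm game:

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    public class Timeline : MonoBehaviour
    {
        // dspTime -> callback; SortedList keeps pending events in firing order.
        // (SortedList rejects duplicate keys, so nudge identical timestamps slightly.)
        private readonly SortedList<double, Action> events = new SortedList<double, Action>();

        public void Schedule(double dspTime, Action callback)
        {
            events.Add(dspTime, callback);
        }

        void Update()
        {
            // AudioSettings.dspTime is driven by the audio hardware, so events fired
            // against it stay in step with the music rather than the frame rate.
            while (events.Count > 0 && events.Keys[0] <= AudioSettings.dspTime)
            {
                Action callback = events.Values[0];
                events.RemoveAt(0);
                callback();
            }
        }
    }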

    Read the article

  • Questioning the motivation for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source): To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult. But I don't buy this argument: Even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way: class BillingService { private final CreditCardProcessor processor; private final TransactionLog transactionLog; // constructor for tests, taking all collaborators as parameters BillingService(CreditCardProcessor processor, TransactionLog transactionLog) { this.processor = processor; this.transactionLog = transactionLog; } // constructor for production, calling the (productive) constructors of the collaborators public BillingService() { this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog()); } public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... } } So dependency injection may really be an advantage in advanced use cases, but I don't need it for easy construction and testability, do I?

    Read the article

  • Sound not playing on Windows XP - SoundEffect or Song: Monogame

    - by ashes999
    I'm trying to integrate sound into my Monogame game. I don't have the content pipeline hack -- just straight Monogame (Beta 3) at this point. (I tried adding the content pipeline, but ran into some issues.) I added a .wav file to my /Content directory, and I can create and instantiate both SoundEffect and Song classes. However, both show durations of 00:00:00 (on a ten-second long file), and neither plays. I can call LoadContent without any issue. But when I call Play, nothing plays. I've tried a couple of different sounds, and different formats (MP3 and WAV) to rule that out. Only WAV seems to even load without crashing out, but it doesn't play. There seems to be a GitHub issue that fixes this problem in 2.5.1. Downgrading to 2.5.1 doesn't fix this problem; it seems like it's fixed in 3.0 (_data is set in the SoundEffect instance). This issue only occurs on Windows XP. I tested it on a Windows 7 laptop, and the sound plays fine.
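    For reference, a minimal sketch of the pipeline-free load path being described (the class name, field name and asset path are hypothetical); if this plays on Windows 7 but not on XP with the same .wav, the platform audio backend rather than the file is the likely culprit:

    using Microsoft.Xna.Framework;       // Game, TitleContainer
    using Microsoft.Xna.Framework.Audio; // SoundEffect

    public class SoundRepro : Game
    {
        private SoundEffect ding;

        protected override void LoadContent()
        {
            // Bypass the content pipeline entirely and read the raw .wav.
            using (var stream = TitleContainer.OpenStream("Content/ding.wav"))
            {
                ding = SoundEffect.FromStream(stream);
            }
            bool started = ding.Play(); // returns false if playback could not start
        }
    }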

    Read the article
