Search Results

Search found 4156 results on 167 pages for 'assembly emit'.


  • How to show only the detail section, without any space, in ActiveReports

    - by Sunil Naudiyal
    I have an ActiveReports report without any Page Header, Report Header or Footer section (see the attached image for more detail). The issue is that when we run this report, we get blank space before the report detail (again, see the attached image). Below is my code:

        Assembly asm = Assembly.GetAssembly(this.GetType());
        System.IO.Stream stre = asm.GetManifestResourceStream(asm.GetName().Name + ".CoverPage.rpx");
        using (XmlTextReader xr = new XmlTextReader(stre))
        {
            arCoverPage.LoadLayout(xr);
        }
        // Get detail for Cover Page
        AddingReportSection(report, HeaderType.CoverPage);
        arCoverPage.DataSource = lstCoverPage;
        arCoverPage.Run();

    I want to remove this space, so please give me any suggestions or ideas. I also tried to set the height of the page, but I did not have any success:

        arCoverPage.PageSettings.DefaultPaperSize = false;
        arCoverPage.PageSettings.Gutter = 3.0F;
        arCoverPage.PageSettings.Orientation = DataDynamics.ActiveReports.Document.PageOrientation.Portrait;
        arCoverPage.PageSettings.PaperHeight = 5.0F;
        this.viReport.Document = arCoverPage.Document;

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the machine code that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit Longs in VB6 are signed; just pretend they are unsigned), what is the hex string of bytes representing the assembler instructions that will do the same thing but return the sum? According to the author, the hex code above for SHL corresponds to:

        mov eax, [esp+4]
        mov cl, [esp+8]
        shl eax, cl
        ret 8

    I dumped those bytes into a file and tried unassembling them in a Windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set, because it didn't like EAX when I tried assembling something, though it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference for the bytes of the instruction ADD EAX, CL, or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
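
    As a sketch (not from the original question), assuming the same stdcall-style thunk layout as the quoted SHL routine (both arguments read off the stack, result returned in EAX, ret 8 to pop them), the analogous ADD routine and its byte encoding would be roughly the following. The opcodes are standard x86, but verify them with a disassembler before injecting anything from VB6.

        // Sketch only: x86 bytes for an "add two 32-bit values" thunk, mirroring the
        // SHL thunk quoted above. Verify with a disassembler before use.
        byte[] addThunk =
        {
            0x8B, 0x44, 0x24, 0x04, // mov eax, [esp+4]  ; first Long argument
            0x03, 0x44, 0x24, 0x08, // add eax, [esp+8]  ; add second Long argument
            0xC2, 0x08, 0x00        // ret 8             ; pop both arguments, result in EAX
        };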

    Read the article

  • Why is a NullReferenceException thrown when a ToolStrip button is clicked twice with code in the `Click` event handler?

    - by Patrick
    I created a clean WindowsFormsApplication solution, added a ToolStrip to the main form, and placed one button on it. I've also added an OpenFileDialog, so that the Click event of the ToolStripButton looks like the following:

        private void toolStripButton1_Click(object sender, EventArgs e)
        {
            openFileDialog1.ShowDialog();
        }

    I didn't change any other properties or events. The funny thing is that when I double-click the ToolStripButton (the second click must be quite fast, before the dialog opens), then cancel both dialogs (or choose a file, it doesn't really matter) and then click in the client area of the main form, a NullReferenceException crashes the application (error details attached at the end of the post). Please note that the Click event is implemented while DoubleClick is not. What's even stranger is that when the OpenFileDialog is replaced by any user-implemented form, the ToolStripButton is blocked from being clicked twice. I'm using VS2008 with .NET 3.5. I didn't change many options in VS (only font size, workspace folder and line numbering). Does anyone know how to solve this? It is 100% reproducible on my machine; is it on others too? One solution that I can think of is disabling the button before calling OpenFileDialog.ShowDialog() and then enabling the button again (but it's not nice). Any other ideas? And now the promised error details:

        System.NullReferenceException was unhandled
          Message="Object reference not set to an instance of an object."
          Source="System.Windows.Forms"
          StackTrace:
               at System.Windows.Forms.NativeWindow.WindowClass.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
               at System.Windows.Forms.UnsafeNativeMethods.PeekMessage(MSG& msg, HandleRef hwnd, Int32 msgMin, Int32 msgMax, Int32 remove)
               at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
               at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
               at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
               at System.Windows.Forms.Application.Run(Form mainForm)
               at WindowsFormsApplication1.Program.Main() in C:\Users\Marchewek\Desktop\Workspaces\VisualStudio\WindowsFormsApplication1\Program.cs:line 20
               at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
               at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
               at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
               at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
               at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
               at System.Threading.ThreadHelper.ThreadStart()
          InnerException:
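
    The post already mentions disabling the button as a workaround; a minimal sketch of that idea (using the handler and control names from the post) would be:

        // Sketch of the workaround mentioned above: prevent a second click from being
        // processed while the modal dialog is open.
        private void toolStripButton1_Click(object sender, EventArgs e)
        {
            toolStripButton1.Enabled = false;
            try
            {
                openFileDialog1.ShowDialog();
            }
            finally
            {
                toolStripButton1.Enabled = true; // re-enable even if ShowDialog throws
            }
        }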

    Read the article

  • Assemblies mysteriously loaded into new AppDomains

    - by Eric
    I'm testing some code that does some work whenever assemblies are loaded into an AppDomain. For unit testing (in the VS2008 built-in test host) I spin up a new, uniquely named AppDomain prior to each test, with the idea that it should be "clean":

        [TestInitialize()]
        public void CalledBeforeEachTestMethod()
        {
            AppDomainSetup appSetup = new AppDomainSetup();
            appSetup.ApplicationBase = @"G:\<ProjectDir>\bin\Debug";
            Evidence baseEvidence = AppDomain.CurrentDomain.Evidence;
            Evidence evidence = new Evidence(baseEvidence);
            _testAppDomain = AppDomain.CreateDomain("myAppDomain" + _appDomainCounter++, evidence, appSetup);
        }

        [TestMethod]
        public void MissingFactoryCausesAppDomainUnload()
        {
            SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap(
                GetType().Assembly.GetName().Name, typeof(SupportingClass).FullName);
            try
            {
                supportClassObj.LoadMissingRegistrationAssembly();
                Assert.Fail("Should have nuked the app domain");
            }
            catch (AppDomainUnloadedException) { }
        }

        [TestMethod]
        public void InvalidFactoryMethodCausesAppDomainUnload()
        {
            SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap(
                GetType().Assembly.GetName().Name, typeof(SupportingClass).FullName);
            try
            {
                supportClassObj.LoadInvalidFactoriesAssembly();
                Assert.Fail("Should have nuked the app domain");
            }
            catch (AppDomainUnloadedException) { }
        }

        public class SupportingClass : MarshalByRefObject
        {
            public void LoadMissingRegistrationAssembly() { MissingRegistration.Main(); }
            public void LoadInvalidFactoriesAssembly() { InvalidFactories.Main(); }
        }

    If every test is run individually, I find that it works correctly: the AppDomain is created and has only the few intended assemblies loaded. However, if multiple tests are run in succession, then each _testAppDomain already has assemblies loaded from all previous tests. Oddly enough, the two tests get AppDomains with different names. The test assemblies that define MissingRegistration and InvalidFactories (two different assemblies) are never loaded into the unit test's default AppDomain. Can anyone explain this behavior?

    Read the article

  • StructureMap Wiring - Sanity Check Please

    - by Steve Ward
    Hi. I'm new to IoC and StructureMap, have an n-level application, and am looking at how to set up the wiring (ForRequestedType...). I just want to check with people with more experience that this is the best way of doing it! I don't want my UI application object to reference my persistence layer directly, so I am not able to wire everything up in the UI project. I now have it working by defining a Registry class in each project, which wires up the types in that project as needed. Each layer registers its own types and also scans the assembly below it for registries, so that all types are registered throughout the hierarchy. For example, I have UI, Service, Domain, and Persistence libraries. In my service layer the registry looks like:

        Scan(x =>
        {
            x.Assembly("MyPersistenceProject");
            x.LookForRegistries();
        });
        ForRequestedType<IService>().TheDefault.Is.OfConcreteType<MyService>();

    Is this a recommended way of doing this in a setup such as this? Are there better ways, and what are the advantages / disadvantages of these approaches in this case?

    Read the article

  • Is Lightweight Code Generation (LCG) dead?

    - by Greg Beech
    In the .NET 2.0-3.5 frameworks, LCG (a.k.a. the DynamicMethod class) was a decent way to emit lightweight methods at runtime when no class structure was needed to support them. In .NET 4.0, expression trees now support statements and blocks, and as such appear to provide enough functionality to build just about anything you could require from such a method, and they can be constructed in a much easier and safer way than by directly emitting CIL opcodes. (This observation comes from today's experiments in converting some of our most complex LCG code to use expression-tree building and compilation instead.) So is there any reason why one would use LCG in any new code? Is there anything it can do that expression trees cannot? Or is it now a 'dead' piece of functionality?
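
    For readers unfamiliar with either API, here is a minimal side-by-side sketch (not from the question) building the same trivial add function both ways:

        using System;
        using System.Linq.Expressions;
        using System.Reflection.Emit;

        static class LcgVersusExpressions
        {
            // LCG: emit raw CIL into a DynamicMethod.
            static Func<int, int, int> BuildWithLcg()
            {
                var dm = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
                ILGenerator il = dm.GetILGenerator();
                il.Emit(OpCodes.Ldarg_0);
                il.Emit(OpCodes.Ldarg_1);
                il.Emit(OpCodes.Add);
                il.Emit(OpCodes.Ret);
                return (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));
            }

            // Expression trees: describe the same method declaratively and compile it.
            static Func<int, int, int> BuildWithExpressions()
            {
                ParameterExpression a = Expression.Parameter(typeof(int), "a");
                ParameterExpression b = Expression.Parameter(typeof(int), "b");
                return Expression.Lambda<Func<int, int, int>>(Expression.Add(a, b), a, b).Compile();
            }
        }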

    Read the article

  • Which technology should I use to pop up a simple form in my add-in DLL?

    - by Decker
    I'm building an assembly that runs as an "add-on" to a vendor's Outlook add-in. When it is time for me to execute my "action", I have to put up a simple window with a few simple controls. The vendor's add-in provides me with the parent window's integer handle. I am able to put up a form pretty easily with WinForms, by adding a reference to System.Windows.Forms from my assembly, with the following code:

        FrmHistoryDisplay frm = new FrmHistoryDisplay();
        frm.ShowDialog(new ParentWindowWrapper(_parentWindowHandle));

    where ParentWindowWrapper is a shim class around the window handle I'm given:

        private class ParentWindowWrapper : IWin32Window
        {
            private int _parentWindowHandle;

            public ParentWindowWrapper(int parentWindowHandle)
            {
                _parentWindowHandle = parentWindowHandle;
            }

            public IntPtr Handle
            {
                get { return new IntPtr(_parentWindowHandle); }
            }
        }

    The Form's ShowDialog method takes an IWin32Window implementor to wrap the parent's window handle. This all works and seems simple enough. I was just wondering whether something similar can be done with a WPF window rather than a WinForms Form? Should I care?
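
    For comparison, a minimal sketch (not from the original post) of the WPF side: WPF has no ShowDialog overload that takes an IWin32Window, but WindowInteropHelper lets you set a raw HWND as the owner. "HistoryWindow" below is a hypothetical WPF Window standing in for the WinForms form.

        using System;
        using System.Windows.Interop;

        static class WpfDialogSketch
        {
            // Sketch: show a WPF window modally, owned by an unmanaged parent HWND
            // supplied by the vendor add-in. HistoryWindow is a hypothetical WPF Window.
            public static void ShowOwnedDialog(int parentWindowHandle)
            {
                var window = new HistoryWindow();
                new WindowInteropHelper(window).Owner = new IntPtr(parentWindowHandle);
                window.ShowDialog();
            }
        }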

    Read the article

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8 s to 1 s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk. The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix. Does anybody have any idea how I could detect that my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This will pick up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?

    Read the article

  • Problems finding classes in a namespace and testing that they extend the expected parent

    - by Matt
    So I am in the process of building a site in ASP.NET MVC, and along the way I am adding certain things to my Site.Master. I want to make sure that all of my model classes extend a certain base class that contains all of the pieces the Site.Master needs to be operable. I want to test that this assumption isn't broken (I believe this will save me time when I forget about it and can't figure out why a new combination isn't working). I wrote a test that I thought would help with this, but I am running into two problems. First, all of a sudden it isn't finding the one example model class I have so far in the LINQ call; I am admittedly still a bit new to LINQ. Second, I had it finding the class earlier, but I couldn't get it to verify that the class inherits from the base class. Here is the example test:

        [Test]
        public void AllModelClassesExtendAbstractViewModel()
        {
            var abstractViewModelType = typeof(AbstractViewModel);
            Assembly baseAssembly = Assembly.GetAssembly(abstractViewModelType);
            var modelTypes = baseAssembly.GetTypes()
                .Where(assemblyType => (assemblyType.Namespace.EndsWith("Models")
                                        && assemblyType.Name != "AbstractViewModel"))
                .Select(assemblyType => assemblyType);

            foreach (var modelType in modelTypes)
            {
                Assert.That(modelType.IsSubclassOf(abstractViewModelType), Is.True,
                    modelType.Name + " does not extend AbstractViewModel");
            }
        }
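
    One hedged aside (not part of the original post): Assembly.GetTypes() also returns types whose Namespace is null (for example the <Module> type or anonymous types), which makes a filter that dereferences Namespace unconditionally fragile. A null-safe version of the filter might look like this:

        // Sketch: null-safe type filter for the test above. Some types have a null
        // Namespace, so guard before calling EndsWith.
        var modelTypes = baseAssembly.GetTypes()
            .Where(t => t.Namespace != null
                        && t.Namespace.EndsWith("Models")
                        && t.IsClass
                        && t != typeof(AbstractViewModel));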

    Read the article

  • Signed and RequireAdministrator manifested executable being run from temp folder?

    - by Ian Boyd
    I manifested my executable to require administrator:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <!-- Disable Windows Vista UAC compatibility heuristics -->
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    And then I digitally signed it. But when I run the executable I noticed something odd: the name of the executable on the consent dialog changed from PingWarning.exe to pinxxxx.tmp, as though a temporary copy was made and that copy is being run. I dug out Process Monitor to see if anything creates a *.tmp file when I launch my executable, and something does: the Application Information service inside one particular svchost container intentionally copies my executable to the Windows temp folder and asks for user "consent" from there, giving an invalid filename. Once consent has been granted, the executable is run from its original location. The file is not copied to the temp folder if I do not digitally sign it. So my problem is the invalid filename appearing on the consent dialog when I digitally sign my executable, which has been manifested as requireAdministrator. What can I do?

    Read the article

  • C# 3.5 deserialization error - object reference not set

    - by BBR
    I am trying to deserialize an XML string in C# 3.5; the code below does work in C# 4.0. When I try to run the code in C# 3.5, I get an "Object reference not set to an instance of an object" exception when the code tries to initialize the XmlSerializer. Any help would be appreciated.

        string xml = "<boolean xmlns=\"http://schemas.microsoft.com/2003/10/serialization/\">false</boolean>";
        var xSerializer = new XmlSerializer(typeof(bool), null, null,
            new XmlRootAttribute("boolean"),
            "http://schemas.microsoft.com/2003/10/serialization/");
        using (var sr = new StringReader(xml))
        using (var xr = XmlReader.Create(sr))
        {
            var y = xSerializer.Deserialize(xr);
        }

        System.NullReferenceException was unhandled
          Message="Object reference not set to an instance of an object."
          Source="System.Xml"
          StackTrace:
               at System.Xml.Serialization.XmlSerializer..ctor(Type type, XmlAttributeOverrides overrides, Type[] extraTypes, XmlRootAttribute root, String defaultNamespace, String location, Evidence evidence)
               at System.Xml.Serialization.XmlSerializer..ctor(Type type, XmlAttributeOverrides overrides, Type[] extraTypes, XmlRootAttribute root, String defaultNamespace)
               ...
               at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
               at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
               at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
               at System.Threading.ThreadHelper.ThreadStart()
          InnerException:
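
    A hedged aside (not from the original post): the namespace in that XML is the one the WCF data-contract serializer uses for primitives, so one alternative worth trying under .NET 3.5 is DataContractSerializer instead of XmlSerializer. A minimal sketch:

        using System.IO;
        using System.Runtime.Serialization;
        using System.Xml;

        static class BooleanDeserializationSketch
        {
            public static bool Read()
            {
                // The <boolean> element in the 2003/10/serialization namespace is the format
                // DataContractSerializer itself produces for typeof(bool).
                string xml = "<boolean xmlns=\"http://schemas.microsoft.com/2003/10/serialization/\">false</boolean>";
                var serializer = new DataContractSerializer(typeof(bool));
                using (var sr = new StringReader(xml))
                using (var xr = XmlReader.Create(sr))
                {
                    return (bool)serializer.ReadObject(xr);
                }
            }
        }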

    Read the article

  • How does the PATH environment variable change my running executable from using msvcr90 to msvcr80?

    - by Runner
        #include <gtk/gtk.h>

        int main( int argc, char *argv[] )
        {
            GtkWidget *window;

            gtk_init (&argc, &argv);
            window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
            gtk_widget_show (window);
            gtk_main ();

            return 0;
        }

    I tried putting various versions of MSVCR80.dll under the same directory as the generated executable (built via CMake), but none matched. Is there a general solution for this kind of problem?

    UPDATE: Some answers recommend installing the VC++ redistributable, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm? Manifest file of the executable:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel>
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity>
            </dependentAssembly>
          </dependency>
        </assembly>

    It seems the manifest file says it should use MSVCR90, so why does it keep reporting that MSVCR80.dll is missing?

    FOUND: After spending several hours on it, I finally found it's caused by this entry in PATH:

        D:\MATLAB\R2007b\bin\win32

    After removing it, everything works fine. But why can that setting change my running executable from using msvcr90 to msvcr80?

    Read the article

  • Accessing required modules from other modules

    - by john smith
    I have a bare-bones Express application, exactly the one that is created with the express command. I have installed socket.io and attached it to my server, like this:

        var app = express(),
            server = http.createServer(app),
            io = io.listen(server);

        server.listen(8000);

    Now, I also have the routes file, which is wired up like this:

        app.get('/', routes.index);

    Inside this module I have the following function:

        exports.index = function(req, res){
            socket.emit('news', { message: "foo" });
        };

    This obviously leads to a 500 reference error, because the routes file is an exportable module and has no idea what the socket is, as it is located in the app.js file. Is there a way I can access this socket object from this, or any other, file? Please note that it is attached to the Express-generated app. Extra: what about getting/setting session data? Thanks in advance.

    Read the article

  • Importing into an Exported object with MEF

    - by Nathan W
    I'm sorry if this question has already been asked 100 times, but I'm really struggling to get it to work. Say I have three projects:

        Core.dll     - has common interfaces.
        Shell.exe    - loads all modules in the assembly folder; references Core.dll.
        ModuleA.dll  - exports the Name and Version of the module; references Core.dll.

    Shell.exe has an [Export] that contains a single instance of a third-party application that I need to inject into all loaded modules. So far the code that I have in Shell.exe is:

        static void Main(string[] args)
        {
            ThirdPartyApp map = new ThirdPartyApp();
            var ad = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var dircatalog = new DirectoryCatalog(".");
            var a = new AggregateCatalog(dircatalog, ad);
            // Not too sure what to do here.
        }

        class Test
        {
            [Export(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }

            [Import(typeof(IModule))]
            public IModule Module { get; set; }
        }

    I need to create an instance of Test and load Instance with map from the Main method, then load the Module from ModuleA.dll that is in the executing directory, then [Import] Instance into the loaded module. In ModuleA I have a class like this:

        [Export(typeof(IModule))]
        class Module : IModule
        {
            [Import(typeof(ThirdPartyApp))]
            public ThirdPartyApp Instance { get; set; }
        }

    I know I'm halfway there, I just don't know how to put it all together, mainly with loading up Test with an instance of map from Main. Could anyone help me with this?
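
    A hedged sketch (not from the original post) of one way Main could be wired up: publish the existing ThirdPartyApp instance into the container and let MEF satisfy the module's import. Class and interface names are taken from the post; the exact catalog setup may need adjusting.

        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;
        using System.Reflection;

        static class ShellCompositionSketch
        {
            public static IModule Compose(ThirdPartyApp map)
            {
                var catalog = new AggregateCatalog(
                    new AssemblyCatalog(Assembly.GetExecutingAssembly()),
                    new DirectoryCatalog("."));
                var container = new CompositionContainer(catalog);

                // Publish the existing ThirdPartyApp instance so modules can [Import] it.
                container.ComposeExportedValue(map);

                // Pull the module from ModuleA.dll; its ThirdPartyApp import is satisfied above.
                return container.GetExportedValue<IModule>();
            }
        }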

    Read the article

  • Run Visual Studio with Administrator Rights using app.manifest [ExecutionLevel]

    - by srk
    I need to change a key in the registry in order to restrict the user from using Task Manager, since this is a kiosk application. My code for changing the registry works perfectly under an administrator account, but my application is going to be run under a normal user account. When I tried to run my application under a normal user account, I got the error below:

        DisableTaskManager System.UnauthorizedAccessException: Access to the registry key
        'HKey_Current_User\Software\Mictrosoft\Windows\CurrentVersion\Policies\System' is denied.
           at Microsoft.Win32.RegistryKey.Win32Error(int32 errorcode, String str)

    So I need to run my application with full administrator privileges, for which I am using the app.manifest below. But somehow I am still getting the same error. How can I overcome this? Code in app.manifest:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <ms_asmv2:trustInfo xmlns:ms_asmv2="urn:schemas-microsoft-com:asm.v2">
            <ms_asmv2:security>
              <ms_asmv2:requestedPrivileges>
                <ms_asmv2:requestedExecutionLevel level="requireAdministrator" uiAccess="true">
                </ms_asmv2:requestedExecutionLevel>
              </ms_asmv2:requestedPrivileges>
            </ms_asmv2:security>
          </ms_asmv2:trustInfo>
        </assembly>
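
    A hedged aside (not from the original post): a requireAdministrator manifest only requests elevation; it cannot grant administrative rights to an account that does not have them. A quick runtime check of whether the process actually ended up elevated, before touching protected registry keys, might look like this:

        using System.Security.Principal;

        static class ElevationCheck
        {
            // Sketch: verify at runtime that the process actually has administrative rights
            // before attempting to write to protected registry keys.
            public static bool IsRunningAsAdministrator()
            {
                WindowsIdentity identity = WindowsIdentity.GetCurrent();
                var principal = new WindowsPrincipal(identity);
                return principal.IsInRole(WindowsBuiltInRole.Administrator);
            }
        }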

    Read the article

  • PyParsing: Is this correct use of setParseAction()?

    - by Rosarch
    I have strings like this:

        "MSE 2110, 3030, 4102"

    I would like to output:

        [("MSE", 2110), ("MSE", 3030), ("MSE", 4102)]

    This is my way of going about it, although I haven't quite gotten it yet:

        def makeCourseList(str, location, tokens):
            print "before: %s" % tokens
            for index, course_number in enumerate(tokens[1:]):
                tokens[index + 1] = (tokens[0][0], course_number)
            print "after: %s" % tokens

        course = Group(DEPT_CODE + COURSE_NUMBER)  # .setResultsName("Course")
        course_data = (course + ZeroOrMore(Suppress(',') + COURSE_NUMBER)).setParseAction(makeCourseList)

    This outputs:

        >>> course.parseString("CS 2110")
        ([(['CS', 2110], {})], {})
        >>> course_data.parseString("CS 2110, 4301, 2123, 1110")
        before: [['CS', 2110], 4301, 2123, 1110]
        after: [['CS', 2110], ('CS', 4301), ('CS', 2123), ('CS', 1110)]
        ([(['CS', 2110], {}), ('CS', 4301), ('CS', 2123), ('CS', 1110)], {})

    Is this the right way to do it, or am I totally off? Also, the output isn't quite correct: I want course_data to emit a list of course symbols that are all in the same format as each other. Right now the first course is different from the others (it has a {}, whereas the others don't).

    Read the article

  • In MongoDB, how can I replicate this simple query using map/reduce in ruby?

    - by Matthew Rathbone
    Using the regular MongoDB library in Ruby, I have the following query to find the average filesize across a set of 5001 documents:

        avg = 0
        total = collection.count()
        Rails.logger.info "#{total} asset creation stats in the system"
        collection.find().each {|row| avg += (row["filesize"] * (1/total.to_f)) if row["filesize"]}

    It's pretty simple, so I'm trying to do the same using map/reduce as a learning exercise. This is what I came up with:

        map = 'function(){ emit("filesizes", {size: this.filesize, num: 1}); }'
        reduce = 'function(k, vals){
            var result = {size: 0, num: 0};
            for(var x in vals) {
                var new_total = result.num + vals[x].num;
                result.num = new_total;
                result.size = result.size + (vals[x].size * (vals[x].num / new_total));
            }
            return result;
        }'
        @results = collection.map_reduce(map, reduce)

    However, the two queries come back with two different results! What am I doing wrong?

    Read the article

  • Installing a Windows Service from a separate GUI - how to install .config file along with it?

    - by Shaul
    I have written a GUI (call it MyGUI) for ClickOnce deployment on any given client site. That GUI installs and configures a Windows service (MyService), using the method described here by @Marc Gravell. Here's my code, run from inside MyGUI, which contains a reference to MyService:

        using (var inst = new AssemblyInstaller(typeof(MyService.Program).Assembly, new string[] { }))
        {
            IDictionary state = new Hashtable();
            inst.UseNewContext = true;
            try
            {
                if (uninstall)
                {
                    inst.Uninstall(state);
                }
                else
                {
                    inst.Install(state);
                    inst.Commit(state);
                }
            }
            catch
            {
                try { inst.Rollback(state); } catch { }
                throw;
            }
        }

    Take note of that first line: I'm grabbing the assembly for MyService and installing that. Now, the trouble is, the way I've done the deployment, I'm effectively referencing the service's EXE file from the GUI's app folder. So now the service fires up, starts looking for stuff in the MyService.config file, and can't find it, because the service is living in someone else's app folder, with only the GUI's MyGUI.config file present. So, how do I get MyService.config to be available to the service?
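
    A hedged aside (not part of the original post): if shipping the service's own config file next to the shared EXE is not workable, another option is for the service to open an explicitly named configuration file. A minimal sketch, with an assumed path:

        using System.Configuration;

        static class ServiceConfigSketch
        {
            // Sketch: load settings from an explicitly named config file instead of the
            // default <exe name>.config beside the executable. The path here is assumed.
            public static string ReadSetting(string key)
            {
                var map = new ExeConfigurationFileMap { ExeConfigFilename = @"C:\ProgramData\MyService\MyService.config" };
                Configuration config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
                KeyValueConfigurationElement element = config.AppSettings.Settings[key];
                return element != null ? element.Value : null;
            }
        }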

    Read the article

  • How to use NHibernate Validator + NHibernate component + DDL

    - by mynkow
    I just configured my NHibernate Validator, and NHibernate creates the DB schema. When I set a maximum length of 20 on some property of a class, that length appears in the database column; I am doing this in the NHibernate Validator XML file. The problem is that I have components and cannot figure out how to achieve this behaviour for them. The component is configured correctly in the Customer.hbm.xml file.

    EDIT: Well, I found that Hibernate Validator users had the same problem two years ago: http://opensource.atlassian.com/projects/hibernate/browse/HV-25. Is this still an issue for NHibernate Validator, or has it been fixed? If it is working, please tell me how.

        public class Customer
        {
            public virtual string Name { get; set; }
            public virtual Contact Contacts { get; }
        }

        public class Contact
        {
            public virtual string Address { get; set; }
        }

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Customer">
            <property name="Name">
              <length max="20"/>
            </property>
            <property name="Contacts">
              <notNull/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Contact">
            <property name="Address">
              <length max="50"/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

    Read the article

  • Conditional execution of EventTriggers in Silverlight 3

    - by Jason
    I'm currently working on the UI of a Silverlight application and need to be able to change the visual state of a control to one of two possible states, based on its current state, when handling the same event trigger. For example: I have a control that sits partially in a clipping path. When I click the visible part of the control I want to change the state to "Visible", and if I click it again when it is in its "Visible" state I want to change to the "Hidden" state. Example XAML:

        <i:Interaction.Triggers>
            <i:EventTrigger EventName="MouseLeftButtonUp">
                <ic:GoToStateAction StateName="Visible"/>
                <ic:GoToStateAction StateName="Hidden"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>

    Here "i" is "System.Windows.Interactivity;assembly=System.Windows.Interactivity" and "ic" is "Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions". I'm currently working in Expression Blend 3 and would prefer a XAML-only solution, but am not opposed to coding this if it is completely necessary. I have tried recording a change in the target state name in Blend, but this did not work. Any thoughts on this?

    Read the article

  • P/Invoke declarations should not be safe-critical

    - by Bobrovsky
    My code imports the following native methods:

        DeleteObject, GetFontData and SelectObject from gdi32.dll
        GetDC and ReleaseDC from user32.dll

    I want to run the code in full-trust and medium-trust environments (I am fine with exceptions being thrown when these imported methods are indirectly used in medium-trust environments). When I run Code Analysis on the code I get warnings like:

        CA5122 P/Invoke declarations should not be safe-critical.
        P/Invoke method 'GdiFont.DeleteObject(IntPtr)' is marked safe-critical. Since P/Invokes may only
        be called by critical code, this declaration should either be marked as security critical, or
        have its annotation removed entirely to avoid being misleading.

    Could someone explain to me (in layman's terms) what this warning really means? I tried putting these imports in a static SafeNativeMethods class as internal static methods, but this doesn't make the warnings go away. I didn't try to put them in NativeMethods because, after reading this article, I am unsure that it's the right way to go; I don't want my code to be completely unusable in medium-trust environments (I think this would be the consequence of moving the imports to NativeMethods). Honestly, I am pretty much confused about the real meaning of the warning and the consequences of the different options for suppressing it. Could someone shed some light on all this?

    EDIT: My code targets the .NET 2.0 framework. The assembly is marked with:

        [assembly: AllowPartiallyTrustedCallers]

    Methods are declared like this:

        [DllImport("gdi32")]
        internal static extern int DeleteObject(HANDLE hObject);
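
    A hedged sketch (not from the original post) of the shape the CA5122 message itself asks for: mark the P/Invoke declarations as security-critical rather than safe-critical. Whether that is compatible with the medium-trust goal described above is a separate question.

        using System;
        using System.Runtime.InteropServices;
        using System.Security;

        // Sketch: security-critical P/Invoke declarations, as the CA5122 text suggests.
        // Callers must themselves be [SecurityCritical] or [SecuritySafeCritical].
        internal static class NativeMethods
        {
            [SecurityCritical]
            [DllImport("gdi32.dll")]
            internal static extern int DeleteObject(IntPtr hObject);

            [SecurityCritical]
            [DllImport("user32.dll")]
            internal static extern IntPtr GetDC(IntPtr hWnd);

            [SecurityCritical]
            [DllImport("user32.dll")]
            internal static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
        }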

    Read the article

  • CSS-Friendly Menu adapter that emits the same markup as .NET 4.0

    - by Joe
    For .NET 2.x/3.x there exists a CSS-Friendly Adapter on CodePlex that emits the markup for an ASP.NET Menu control as a <ul>. The .NET 4.0 Menu control will also emit a <ul>, but the CSS class names are different from those emitted by the CSS-Friendly Adapter 1.0 on CodePlex. In the interest of having a single version of the CSS for .NET 2/3/4 sites, I want to create a version of the CSS-Friendly menu adapter that emits the same markup as the .NET 4.0 Menu control. Before doing so, I thought I'd ask here to see if it's already been done, so I don't reinvent the wheel. Anyone?

    Read the article

  • Ajax not working in Visual Studio 2005

    - by sachin
    I am trying to build an Ajax website, but my Ajax is not working. I checked my GAC and the System.Web.Extensions DLL is available. Why is it not working? I am also not getting any errors, and I have tried many things. I wrote the code below to test Ajax:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Register Assembly="System.Web.Extensions" Namespace="System.Web.UI" TagPrefix="asp" %>
        <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server">
                </cc1:ToolkitScriptManager>
                <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
                <cc1:CalendarExtender ID="CalendarExtender1" runat="server" TargetControlID="TextBox1">
                </cc1:CalendarExtender>
            </div>
            </form>
        </body>
        </html>

    JavaScript error that I got:

        1. Type is not defined
           http://localhost:1467/testnew/Default.aspx?_TSM_HiddenField_=ToolkitScriptManager1_HiddenField&_TSM_CombinedScripts_=%3b%3bAjaxControlToolkit%2c+Version%3d1.0.20229.20821%2c+Culture%3dneutral%2c+PublicKeyToken%3d28f01b0e84b6d53e%3aen-US%3ac5c982cc-4942-4683-9b48-c2c58277700f%3ae2e86ef9%3aa9a7729d%3a9ea3f0e2%3a9e8e87e9%3a1df13a87%3a4c9865be%3aba594826%3a507fcf1b%3ac7a4182e

    Read the article

  • Using reflection to find all linq2sql tables and ensure they match the database

    - by Jake Stevenson
    I'm trying to use reflection to automatically test that all my LINQ to SQL entities match the test database. I thought I'd do this by getting all the classes that inherit from DataContext from my assembly:

        var contexttypes = Assembly.GetAssembly(typeof(BaseRepository<,>)).GetTypes()
            .Where(t => t.IsSubclassOf(typeof(DataContext)));

        foreach (var contexttype in contexttypes)
        {
            var context = Activator.CreateInstance(contexttype);
            var tableProperties = contexttype.GetProperties()
                .Where(t => t.PropertyType.Name == typeof(ITable<>).Name);
            foreach (var propertyInfo in tableProperties)
            {
                var table = (propertyInfo.GetValue(context, null));
            }
        }

    So far so good: this loops through each ITable<T> in each DataContext in the project. If I debug the code, "table" is properly instantiated, and if I expand the results view in the debugger I can see actual data. BUT I can't figure out how to get my code to actually query that table. I'd really like to just be able to call table.FirstOrDefault() to get the top row out of each table and make sure the SQL fetch doesn't fail, but I can't cast the table to anything I can query. Any suggestions on how I can make this queryable? Just the ability to call .Count() would be enough for me to ensure the entities don't have anything that doesn't match the table columns.
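
    A hedged sketch (not from the original post) of one way to force a query without knowing the entity type at compile time: the runtime table objects are Table<TEntity>, which implements the non-generic IEnumerable, so simply starting an enumeration executes the SELECT.

        using System.Collections;
        using System.Data.Linq;
        using System.Linq;
        using System.Reflection;

        static class Linq2SqlSmokeTest
        {
            // Sketch: for every Table<T> property on a DataContext, run one query and
            // make sure LINQ to SQL can materialize at least the first row.
            public static void TouchAllTables(DataContext context)
            {
                var tableProperties = context.GetType().GetProperties()
                    .Where(p => p.PropertyType.IsGenericType
                                && p.PropertyType.GetGenericTypeDefinition() == typeof(Table<>));

                foreach (PropertyInfo propertyInfo in tableProperties)
                {
                    // Table<T> implements the non-generic IEnumerable; enumerating executes the SELECT.
                    var rows = (IEnumerable)propertyInfo.GetValue(context, null);
                    IEnumerator enumerator = rows.GetEnumerator();
                    enumerator.MoveNext(); // executes the query; fails if the mapping doesn't match the schema
                }
            }
        }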

    Read the article

  • MongoDB map/reduce counts

    - by ibz
    The output from MongoDB's map/reduce includes something like 'counts': {'input': I, 'emit': E, 'output': O}. I thought I clearly understood what those mean, until I hit a weird case which I can't explain. According to my understanding, counts.input is the number of rows that match the condition (as specified in query). If so, how is it possible that the following two queries have different results?

        db.mycollection.find({MY_CONDITION}).count()
        db.mycollection.mapReduce(SOME_MAP, SOME_REDUCE, {'query': {MY_CONDITION}}).counts.input

    I thought the two should always give the same result, independent of the map and reduce functions, as long as the same condition is used.

    Read the article
