Search Results

Search found 11822 results on 473 pages for 'external assembly'.

Page 84/473 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
    I found an issue with InternalsVisibleTo attribute usage. The idea of the InternalsVisibleTo attribute is to allow specific other assemblies to use the internal classes/methods of this assembly. To make it work you need to sign your assemblies, so if another assembly isn't specified in the main assembly and has an incorrect public key, it can't use the internal members. The issue is in Reflection.Emit type generation.

    For example, we have a CorpLibrary1 assembly and it has such a class:

        public class TestApi {
            internal virtual void DoSomething() {
                Console.WriteLine("Base DoSomething");
            }

            public void DoApiTest() {
                // some internal logic
                // ...
                // call internal method
                DoSomething();
            }
        }

    This assembly is marked with the following attribute to allow another assembly, CorpLibrary2, to create an inheritor of TestApi and override the behaviour of the DoSomething method:

        [assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")]

    The issue is that I'm able to override this internal DoSomething method and break the class logic. My steps to do it:

        1. Generate a new assembly at runtime via AssemblyBuilder.
        2. Get the AssemblyName from CorpLibrary1 and copy its public key to the new assembly.
        3. In the new assembly, generate a type that inherits TestApi.
        4. As the public key and the name of the generated assembly are the same as in InternalsVisibleTo, we can generate a new DoSomething method that overrides the internal method in the TestApi assembly.

    Then we have another assembly that isn't related to CorpLibrary1 and can't use its internal members. We have such test code in it:

        class Program {
            static void Main(string[] args) {
                var builder = new FakeBuilder<TestApi>(InjectBadCode, "DoSomething", true);
                TestApi fakeType = builder.CreateFake();
                fakeType.DoApiTest();
                // it will display:
                // Inject bad code
                // Base DoSomething
                Console.ReadLine();
            }

            public static void InjectBadCode() {
                Console.WriteLine("Inject bad code");
            }
        }

    And this FakeBuilder class has such code:

        /// <summary>
        /// Builder that will generate an inheritor for the specified type and will override the specified internal virtual method
        /// </summary>
        /// <typeparam name="TFakeType">Target type</typeparam>
        public class FakeBuilder<TFakeType> {
            private readonly Action _callback;
            private readonly Type _targetType;
            private readonly string _targetMethodName;
            private readonly string _slotName;
            private readonly bool _callBaseMethod;

            public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod) {
                int randomId = new Random((int)DateTime.Now.Ticks).Next();
                _slotName = string.Format("FakeSlot_{0}", randomId);
                _callback = callback;
                _targetType = typeof(TFakeType);
                _targetMethodName = targetMethodName;
                _callBaseMethod = callBaseMethod;
            }

            public TFakeType CreateFake() {
                // as CorpLibrary1 can't use code from unreferenced assemblies, we need to store this Action somewhere,
                // and a thread named data slot is not a bad place for that. It's not the best place as it won't work
                // in a multithreaded application, but it's just a sample
                LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName);
                Thread.SetData(slot, _callback);

                // then we generate a new assembly with the same name and public key as the target assembly trusts via the InternalsVisibleTo attribute
                var newTypeName = _targetType.Name + "Fake";
                var targetAssembly = Assembly.GetAssembly(_targetType);
                AssemblyName an = new AssemblyName();
                an.Name = GetFakeAssemblyName(targetAssembly);

                // copying the public key to the newly generated assembly
                var assemblyName = targetAssembly.GetName();
                an.SetPublicKey(assemblyName.GetPublicKey());
                an.SetPublicKeyToken(assemblyName.GetPublicKeyToken());

                AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave);
                ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true);

                // create an inheritor for the specified type
                TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType);

                // LambdaExpression.CompileToMethod can be used only with static methods, so we need to create another method that will call our Inject method;
                // we could do the same via ILGenerator, but expression trees are easier to use
                MethodInfo methodInfo = CreateMethodInfo(moduleBuilder);

                MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual);
                ILGenerator ilGenerator = methodBuilder.GetILGenerator();

                // call our static method that will call the inject method
                ilGenerator.EmitCall(OpCodes.Call, methodInfo, null);

                // in case we need it, emit a call to the base method
                if (_callBaseMethod) {
                    var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance);
                    // place 'this' on the stack
                    ilGenerator.Emit(OpCodes.Ldarg_0);
                    // call the base method
                    ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]);
                    // return
                    ilGenerator.Emit(OpCodes.Ret);
                }

                // generate the type, create an instance of it and return it to the caller
                Type cheatType = typeBuilder.CreateType();
                object type = Activator.CreateInstance(cheatType);
                return (TFakeType)type;
            }

            /// <summary>
            /// Get the name of the assembly from the InternalsVisibleTo AssemblyName
            /// </summary>
            private static string GetFakeAssemblyName(Assembly assembly) {
                var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute;
                if (internalsVisibleAttr == null) {
                    throw new InvalidOperationException("Assembly hasn't InternalVisibleTo attribute");
                }
                var ind = internalsVisibleAttr.AssemblyName.IndexOf(",");
                var name = internalsVisibleAttr.AssemblyName.Substring(0, ind);
                return name;
            }

            /// <summary>
            /// Generate such code:
            /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke();
            /// </summary>
            private LambdaExpression MakeStaticExpressionMethod() {
                var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public);
                var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public);
                var call = Expression.Call(allocateMethod, Expression.Constant(_slotName));
                var getCall = Expression.Call(getDataMethod, call);
                var convCall = Expression.Convert(getCall, typeof(Action));
                var invokExpr = Expression.Invoke(convCall);
                var lambda = Expression.Lambda(invokExpr);
                return lambda;
            }

            /// <summary>
            /// Generate a static class with one static function that will execute the Action from the thread named data slot
            /// </summary>
            private MethodInfo CreateMethodInfo(ModuleBuilder moduleBuilder) {
                var methodName = "_StaticTestMethod_" + _slotName;
                var className = "_StaticClass_" + _slotName;
                TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class);
                MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public);
                LambdaExpression expression = MakeStaticExpressionMethod();
                expression.CompileToMethod(methodBuilder);
                var type = typeBuilder.CreateType();
                return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public);
            }
        }

    A remark about the sample: as we need to execute code from another assembly, and CorpLibrary1 has no access to it, we need to store this delegate somewhere. Just for testing I stored it in a Thread named data slot. It won't work in multithreaded applications, but it's just a sample.

    I know that we can use Reflection to get private/internal members of any class, but with reflection alone we can't override them. This issue, however, allows anyone to override an internal class/method if that assembly has an InternalsVisibleTo attribute. I tested it on .NET 3.5 and 4 and it works on both of them. How is it possible to just copy the public key, without the private key, and use it at runtime? The whole sample can be found there - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo

    UPDATE 1: The test code in the Program and FakeBuilder classes has no access to the key.sn file, and that library isn't signed, so it has no public key at all. It just copies it from CorpLibrary1 by using Reflection.Emit.

    Read the article

  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PC computers. I'd like it to be 64-bit but possibly run as 32-bit as well. I have these kinds of questions:

    What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input: generally, how do you go about making your own 32-bit/64-bit operating system, and what are good resources with useful information about writing your own operating system for x86 computers? I don't care how old sources are as long as they are still relevant and useful to what I am doing.

    I know that I will want it to have kernel drivers that access peripheral hardware directly. Where should I look for advice and documentation on programming and understanding the interface to the peripheral hardware the operating system will communicate with? I will need to understand how the operating system receives input from and interacts with keyboards, mice, computer monitors, hard drives, USB, etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about.

    At this point I'm thinking that I'd like to implement the Linux system calls within my operating system so that programs that run on Linux can run on my operating system, and I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides and using the ELF format? What else?

    I am also interested in people's thoughts about why it might not be a good idea to make your own operating system, and why it is a good idea to make your own operating system. Thank you for any input.

    Read the article

  • .NET Reference "Copy Local" True / False Being Set Based on Contents of GAC

    - by D-Sect
    We had a very interesting problem with a WinForms project. It's been resolved. We know what happened, but we want to understand why it happened. This may help other people out in the future who have a similar problem.

    The WinForms project failed on 2 of our client's PCs. The error was an obscure kernel.dll error. The project ran fine on 3 other PCs. We found that a .DLL (log4net.dll - a very popular open-source logging library) was missing from our release folder. It was previously in our release folder. Why was it missing in this latest release? It was missing because I must have installed a program on my Dev box that used log4net.dll and it was added to the Global Assembly Cache. When I checked the solution's references for log4net.dll, they were changed to "copy local=FALSE". They must have changed automatically because log4net.dll was present in my GAC.

    Here's where my question starts: Why did my reference for log4net.dll get changed from COPY LOCAL = TRUE to COPY LOCAL = FALSE? I suspect it's because it was added to my GAC by another program. How can we prevent this from happening again? As it stands now, if I install a piece of software that uses a common library and it adds it to my GAC, then my SLNs that reference that DLL will change from Copy Local TRUE to FALSE.

    Read the article

  • Which Computer Organization & Architecture book is good for me?

    - by claws
    I'm always interested in learning the inner workings of things. I started with C programming, then learnt operating systems (from Stallings), then linkers & loaders, and then assembly language. After reading these I now want to go into a little more depth: computer architecture. I feel that makes everything clear.

    As per the SO archives these are the two good books: Computer Architecture: A Quantitative Approach, 4th Edition, and Computer Organization and Design, Fourth Edition, by David A. Patterson and John L. Hennessy. But I've browsed through the contents of these books and found that they don't exactly meet my needs. I want to learn more about caches, the Memory Management Unit, and the mapping between virtual memory & physical memory. I'm in no way interested in other ISAs like MIPS etc.; I'm an IA-32 and x86-64 fan and I want to stick to them. I'm not a hardware developer, so I don't want details like circuit diagrams or how the L1, L2 & L3 caches are implemented. I do want to know about parallel processing technologies like HyperThreading at the architecture level, but again I don't want to design them.

    I liked the table of contents of Computer Architecture: A Quantitative Approach, 4th Edition, but a quantitative approach? Seriously? I want to know the details of current technologies, and I don't want to spend time reading 200 pages of outdated old technologies (I experienced this while learning ASM).

    Read the article

  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts.

    As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but those internal registers are not generally available and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of?

    Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?

    Read the article

  • Learning about the low level

    - by Anoners
    I'm interested in learning more about the PC from a lower (machine) level. I graduated from a school which taught us concepts using the Java language, which abstracted that level away almost completely. As a result I only learned a bit from the one required assembly language course; in order to cram in ASM and quite a few details about architecture, it was hard to get a very deep picture of what is going on there.

    At work I focus on unix socket programming in C, so I'm much closer to the hardware now, but I feel I should learn a bit more about what streams really are, how memory management and paging work, what goes on when you call "paint()" on a graphics buffer, etc. I missed out on a lot of this and I'm looking for a good resource to get me started.

    I've heard a lot about the "Pink Book" by Peter Norton (Programmer's Guide to the IBM PC, Programmer's Guide to Inside the PC, etc.). It seems like this is on the right track, however the original is quite outdated and the newer ones have had conflicting reviews, with many people saying to stay away from them. I'm not sure what the SO crowd thinks about this book, or whether they have suggestions for similar books, online resources, etc. that may be good primers for this sort of thing. Any suggestions would be appreciated.

    Read the article

  • nasm/yasm arguments, linkage to C++

    - by arionik
    Hello everybody, I've got a question concerning nasm and its linkage to C++. I declare a little test function as

        extern "C" void __cdecl myTest( byte i1, byte i2, int stride, int *width );

    and I call it like this:

        byte i1 = 1, i2 = 2;
        int stride = 3, width = 4;
        myTest( i1, i2, stride, &width );

    The method only serves to debug assembly and have a look at how the stack pointer is used to get the arguments. Beyond that, the pointer argument's value shall be set to 7, to figure out how that works. This is implemented like this:

        global _myTest
        _myTest:
            mov eax, [esp+4]        ; 1
            mov ebx, [esp+8]        ; 2
            mov ecx, dword [esp+16] ; width
            mov edx, dword [esp+12] ; stride
            mov eax, dword [esp+16]
            mov dword [eax], 7
            ret

    and compiled via

        yasm -f win32 -g cv8 -m x86 -o "$(IntDir)\$(InputName).obj" "$(InputPath)"

    then linked to the C++ app. In debug mode everything works fine: the function is called a couple of times and works as expected, whereas in release mode the function works once, but subsequent program operations fail. It seems to me that something's wrong with stack/frame pointers, near/far, but I'm quite new to this subject and need a little help. Thanks in advance! a.

    Read the article

  • SAL and SAR by 0 errors

    - by Roy McAvoy
    I have discovered a bug in some assembly code I have been working with but can't figure out how to fix it. When shifting left by 0 the result ends up being 0 instead of just the number. The same applies when shifting to the right. Any and all help is much appreciated.

        function sal(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
          @1:
            sal n, 1
            loop @1
          end;
          sal := n;
        end;

        function sar(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
          @1:
            sar n, 1
            loop @1
          end;
          sar := n;
        end;

    I have tried to change them in the following way and it still does not work properly.

        function sal(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
            jcxz @done
          @1:
            sal n, 1
            loop @1
          @done:
          end;
          sal := n;
        end;

        function sar(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
            jcxz @done
          @1:
            sar n, 1
            loop @1
          @done:
          end;
          sar := n;
        end;
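
    For comparison, here is a plain C reference for the behaviour the two routines are aiming at (an arithmetic shift by k that leaves the value untouched when k is 0). This is only a test oracle to check results against, not a fix for the inline assembly itself, and note that right-shifting negative values in C is implementation-defined:

        #include <stdio.h>

        /* Reference semantics: shift n by k bits; k == 0 returns n unchanged. */
        static int sal_ref(int n, unsigned k) { return k ? n << k : n; }
        static int sar_ref(int n, unsigned k) { return k ? n >> k : n; }

        int main(void)
        {
            printf("%d %d %d\n", sal_ref(5, 0), sal_ref(5, 2), sar_ref(-8, 1));  /* expected: 5 20 -4 */
            return 0;
        }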

    Read the article

  • Read hexadecimal numbers from a file and change their representation style

    - by user576844
    I want to write a program changing the notation of all hexadecimal numbers found in an assembly source file from the traditional style (trailing h) to C style (0x prefix). I have started the coding part but am not sure how I can detect the hexadecimal numbers, change the style, and eventually save the result back to the file. This is what I have written so far:

        ## MIPS program
                .data
        fin:    .ascii ""       # filename for input
        msg0:   .asciiz "aaaa"
        msg1:   .asciiz "Please enter the input file name:"
        buffer: .asciiz ""

                .text
        #-----------------------
                li   $v0, 4
                la   $a0, msg1
                syscall

                li   $v0, 8
                la   $a0, fin
                li   $a1, 21
                syscall

                jal  fileRead           # read from file
                move $s1, $v0           # $t0 = total number of bytes
                li   $t0, 0             # loop counter
        loop:
                bge  $t0, $s1, end      # if end of file reached OR if there is an error in the file
                lb   $t5, buffer($t0)   # load next byte from file
                jal  checkhexa          # check for hexadecimal numbers
                addi $t0, $t0, 1        # increment loop counter
                j    loop
        end:
                jal  output
                jal  fileClose
                li   $v0, 10
                syscall

        fileRead:
                # Open file for reading
                li   $v0, 13            # system call for open file
                la   $a0, fin           # input file name
                li   $a1, 0             # flag for reading
                li   $a2, 0             # mode is ignored
                syscall                 # open a file
                move $s0, $v0           # save the file descriptor

                # reading from file just opened
                li   $v0, 14            # system call for reading from file
                move $a0, $s0           # file descriptor
                la   $a1, buffer        # address of buffer from which to read
                li   $a2, 100000        # hardcoded buffer length
                syscall                 # read from file
                jr   $ra

    Any help would be appreciated.
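
    The MIPS part aside, the text rewrite itself can be sketched in plain C roughly as below. It finds runs of hex digits that start with a decimal digit and end in 'h' or 'H' (the usual assembler form, e.g. 0FFh) and reprints them with a 0x prefix; comments, string literals and output buffer sizes are deliberately ignored, so treat it only as a starting point:

        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        /* Rewrite trailing-'h' hex literals (e.g. 0FFh) in C style (0x0FF). Sketch only. */
        static void convert_line(const char *in, char *out)
        {
            size_t i = 0;
            while (in[i]) {
                /* candidate literal: starts with a decimal digit at a word boundary */
                if (isdigit((unsigned char)in[i]) && (i == 0 || !isalnum((unsigned char)in[i - 1]))) {
                    size_t start = i;
                    while (isxdigit((unsigned char)in[i])) i++;
                    if ((in[i] == 'h' || in[i] == 'H') && !isalnum((unsigned char)in[i + 1])) {
                        out += sprintf(out, "0x%.*s", (int)(i - start), in + start);
                        i++;                                     /* skip the trailing 'h' */
                    } else {
                        memcpy(out, in + start, i - start);      /* not a hex literal, copy as-is */
                        out += i - start;
                    }
                    continue;
                }
                *out++ = in[i++];
            }
            *out = '\0';
        }

        int main(void)
        {
            char buf[256];
            convert_line("lb $t5, 0FFh  # load 0FFh", buf);
            puts(buf);   /* prints: lb $t5, 0x0FF  # load 0x0FF */
            return 0;
        }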

    Read the article

  • How can I know what this does?

    - by Dabor Troppe
    I got this piece of assembly code extracted from some piece of software, but unfortunately I don't know anything about assembler, and the only bits of assembler I ever touched were back on the Commodore Amiga with the 68000. Can anybody guide me on how I could understand this code without needing to learn assembler from scratch, or just tell me what it does? Is there any kind of "simulator" out there that I can run this on to see what it does?

        -[ObjSample Param1:andParam2:]:
        00000c79  pushl   %ebp
        00000c7a  movl    %esp,%ebp
        00000c7c  subl    $0x48,%esp
        00000c7f  movl    %ebx,0xf4(%ebp)
        00000c82  movl    %esi,0xf8(%ebp)
        00000c85  movl    %edi,0xfc(%ebp)
        00000c88  calll   0x00000c8d
        00000c8d  popl    %ebx
        00000c8e  cmpb    $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx)
        00000c95  jel     0x00000d47
        00000c9b  movb    $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx)
        00000ca2  movl    0x7dc0-0xc8d(%ebx),%eax
        00000ca8  movl    %eax,0x04(%esp)
        00000cac  movl    0x7df4-0xc8d(%ebx),%eax
        00000cb2  movl    %eax,(%esp)
        00000cb5  calll   _objc_msgSend
        00000cba  movl    0x7dbc-0xc8d(%ebx),%edx
        00000cc0  movl    %edx,0x04(%esp)
        00000cc4  movl    %eax,(%esp)
        00000cc7  calll   _objc_msgSend
        00000ccc  movl    %eax,0xe4(%ebp)
        00000ccf  movl    0x7db8-0xc8d(%ebx),%eax
        00000cd5  movl    %eax,0x04(%esp)
        00000cd9  movl    0xe4(%ebp),%eax
        00000cdc  movl    %eax,(%esp)
        00000cdf  calll   _objc_msgSend
        00000ce4  leal    (%eax,%eax),%edi
        00000ce7  movl    %edi,(%esp)
        00000cea  calll   _malloc
        00000cef  movl    %eax,%esi
        00000cf1  movl    %edi,0x08(%esp)
        00000cf5  movl    $-[ObjSample delegate],0x04(%esp)
        00000cfd  movl    %eax,(%esp)
        00000d00  calll   _memset
        00000d05  movl    $0x00000004,0x10(%esp)
        00000d0d  movl    %edi,0x0c(%esp)
        00000d11  movl    %esi,0x08(%esp)
        00000d15  movl    0x7db4-0xc8d(%ebx),%eax
        00000d1b  movl    %eax,0x04(%esp)
        00000d1f  movl    0xe4(%ebp),%eax
        00000d22  movl    %eax,(%esp)
        00000d25  calll   _objc_msgSend
        00000d2a  xorl    %edx,%edx
        00000d2c  movl    %edi,%eax
        00000d2e  shrl    $0x03,%eax
        00000d31  jmp     0x00000d34
        00000d33  incl    %edx
        00000d34  cmpl    %edx,%eax
        00000d36  ja      0x00000d33
        00000d38  movl    %esi,(%esp)
        00000d3b  calll   _free
        00000d40  movb    $0x01,_isAuthenticated-0xc8d(%ebx)
        00000d47  movzbl  _isAuthenticated-0xc8d(%ebx),%eax
        00000d4e  movl    0xf4(%ebp),%ebx
        00000d51  movl    0xf8(%ebp),%esi
        00000d54  movl    0xfc(%ebp),%edi
        00000d57  leave
        00000d58  ret

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!).

    So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.
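
    To get a feel for the footprint rather than answer the protocol question: a minimal one-direction sketch (bit-banging a single byte out on one pin with fixed timing, receiver assumed to sample mid-bit) should compile to well under 1k with avr-gcc. The pin, the bit time and the framing here are assumptions for illustration, not a worked protocol:

        #include <avr/io.h>
        #include <util/delay.h>      /* needs F_CPU defined, e.g. -DF_CPU=1200000UL */

        #define DATA_PIN PB0         /* assumption: the shared data wire sits on PB0 */
        #define BIT_US   100         /* assumption: 100 us per bit */

        /* Send one byte, MSB first: start bit (low), 8 data bits, then back to idle high. */
        static void wire_send(uint8_t b)
        {
            DDRB |= (1 << DATA_PIN);               /* drive the wire */
            PORTB &= ~(1 << DATA_PIN);             /* start bit */
            _delay_us(BIT_US);
            for (uint8_t i = 0; i < 8; i++) {
                if (b & 0x80) PORTB |= (1 << DATA_PIN);
                else          PORTB &= ~(1 << DATA_PIN);
                b <<= 1;
                _delay_us(BIT_US);
            }
            PORTB |= (1 << DATA_PIN);              /* idle high */
        }

        int main(void)
        {
            wire_send(0x5A);                       /* e.g. one ADC reading */
            for (;;) { }
        }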

    Read the article

  • using in-line asm to write a for loop with 2 comparisons

    - by aCuria
    I want to convert the for loop in the following code into assembly but I am not sure how to start. An explanation of how to do it and why it works would be appreciated. I am using VS2010, C++, writing for the x86. The code is as follows:

        for (n = 0; norm2 < 4.0 && n < N; ++n)
        {
            __asm {
                /// a*a - b*b + x
                fld   a               // a
                fmul  st(0), st(0)    // aa
                fld   b               // b aa
                fmul  st(0), st(0)    // bb aa
                fsub                  // (aa-bb)   // st(0) - st(1)
                fld   x               // x (aa-bb)
                fadd                  // (aa-bb+x)

                /// 2.0*a*b + y;
                fld   d               // d (aa-bb+x)
                fld   a               // d a (aa-bb+x)
                fmul                  // ad (aa-bb+x)
                fld   b               // b ad (aa-bb+x)
                fmul                  // abd (aa-bb+x)
                fld   y               // y adb (aa-bb+x)
                fadd                  // b:(adb+y) a:(aa-bb+x)

                fld   st(0)           // b b:(adb+y) a:(aa-bb+x)
                fmul  st(0), st(0)    // bb b:(adb+y) a:(aa-bb+x)
                fld   st(2)           // a bb b:(adb+y) a:(aa-bb+x)
                fmul  st(0), st(0)    // aa bb b:(adb+y) a:(aa-bb+x)
                fadd                  // aa+bb b:(adb+y) a:(aa-bb+x)

                fstp  norm2           // store aa+bb to norm2, st(0) is popped.
                fstp  b
                fstp  a
            }
        }
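
    For reference while converting, the FPU block corresponds to the plain C below. The variable names come from the comments, and the assumption that d holds 2.0 is taken from the "2.0*a*b + y" comment, so double-check it against the real code:

        /* Plain C reading of the FPU block above (assumes float variables and d == 2.0). */
        for (n = 0; norm2 < 4.0 && n < N; ++n) {
            float na = a * a - b * b + x;   /* a*a - b*b + x                  */
            float nb = d * a * b + y;       /* 2.0*a*b + y (d assumed == 2.0) */
            norm2 = na * na + nb * nb;      /* fstp norm2                     */
            b = nb;                         /* fstp b                         */
            a = na;                         /* fstp a                         */
        }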

    Read the article

  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC Framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:

        - eliminates web application server dependencies on the Global Assembly Cache;
        - allows me (the developer) to have control of what goes on inside my application;
        - enables the application to be deployed as a "package";
        - removes the application deployment burden from the server administrators.

    Now, here are my questions. From a security perspective, what are the advantages to using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat in order to suit Eclipse better):

        -product
          |
          |-parent
          |-core
          |-opt
          |-all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies.

    I am now trying to make the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. I however recently found out that you can use the source:aggregate goal in the source plugin to aggregate sources of the entire aggregate project. This is also true for the javadoc plugin, which also aggregates through the usage of the parent project.

    So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks

    Read the article

  • maven assemblies. Putting each dependency with transitive dependencies in own directory?

    - by jr
    I have a maven project which consists of a few modules. This is to be deployed on a client machine, will involve installing Tomcat, and will make use of NSIS for the installer. There is a separate application which monitors Tomcat and can restart it, perform updates, etc. So, I have the modules set up as follows:

        project
        +-- client        (all code, handlers, for the war)
        +-- client-common (shared code, shared between monitor and client)
        +-- client-web    (the war; basically just uses war, has applicationcontext, web.xml, etc.)
        +-- monitor       (the monitor application jar. Uses wrapper to run)

    So, I need to create an installer. I was planning on creating another module which would be the installer. This is where I would have the Tomcat directory, and I'd like maven to "assemble" everything and then run NSIS so I can create the final installer. However, I need to have the monitor jar file in a directory and then have all of the monitor's dependencies in a lib/ directory. The final directory structure should be:

        project-installer-directory/monitor/monitor-version.jar
        project-installer-directory/monitor/lib/monitor-dep-1.jar
        project-installer-directory/monitor/lib/monitor-dep-2.jar
        project-installer-directory/monitor/lib/monitor-dep-3.jar
        project-installer-directory/webapps/client-web.war

    where in the client-web\WEB-INF\lib directory we will have all of client-web's dependencies after it is exploded. That works, I have the .war file. What I am having problems with is getting the monitor module's dependencies independent of the dependencies of the client-web module. I tried to just create the installer module and make the monitor and client-web dependencies, but when I use dependencies-copy it gives me everything, which is not what I want. I'm leaning towards creating a new module called monitor-assembly or something to give me a zip file which contains the directory format I need, but that is yet another module. Can someone please help me with the correct way to accomplish this? Thanks!

    Read the article

  • Handling Types Defined in Plug-ins That Are No Longer Available

    - by Chris
    I am developing a .NET framework application that allows users to maintain and save "projects". A project can consist of components whose types are defined in the assemblies of the framework itself and/or in third-party assemblies that will be made available to the framework via a yet-to-be-built plug-in architecture. When a project is saved, it is simply binary-serialised to file. Projects are portable, so multiple users can load the same project into their own instances of the framework (just as different users may open the same MSWord document in their own local copies of MSWord). What's more, the plug-ins available to one user's framework might not be available to that of another. I need some way of ensuring that when a user attempts to open (i.e. deserialise) a project that includes a type whose defining assembly cannot be found (either because of a framework version incompatibility or the absence of a plug-in), the project still opens but the offending type is somehow substituted or omitted. Trouble is, the research I've done to date does not even hint at a suitable approach. Any ideas would be much appreciated, thanks.

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: Given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h
        ...  (repeated 13 times with minor differences in constants)

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication (multiply by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other idea how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
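
    A well-known branch-free alternative is the shift-and-mask spreading trick (the "interleave by binary magic numbers" pattern from the Bit Twiddling Hacks collection): spread the low half-word with a few shifts and masks, and gather it back the same way for the inverse. A sketch is below; it handles 16 input bits, which covers the 13-bit case, though whether it actually beats the compiler's branchy version here would still need measuring:

        #include <stdint.h>

        /* Spread the low 16 bits of x so that input bit i lands at output bit 2*i
           (i.e. ...abcdefghijklm becomes ...0a0b0c0d0e0f0g0h0i0j0k0l0m). */
        static uint32_t spread_bits(uint32_t x)
        {
            x &= 0x0000FFFFu;
            x = (x | (x << 8)) & 0x00FF00FFu;
            x = (x | (x << 4)) & 0x0F0F0F0Fu;
            x = (x | (x << 2)) & 0x33333333u;
            x = (x | (x << 1)) & 0x55555555u;
            return x;
        }

        /* Inverse transformation: keep the even-positioned bits and pack them together. */
        static uint32_t gather_bits(uint32_t x)
        {
            x &= 0x55555555u;
            x = (x | (x >> 1)) & 0x33333333u;
            x = (x | (x >> 2)) & 0x0F0F0F0Fu;
            x = (x | (x >> 4)) & 0x00FF00FFu;
            x = (x | (x >> 8)) & 0x0000FFFFu;
            return x;
        }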

    Read the article

  • Programming Environment for a Motorola 68000 in Linux

    - by Nick Presta
    Greetings all, I am taking a Structure and Application of Microcomputers course this semester and we're programming with the Motorola 68000 series CPU/board. The course syllabus suggests running something like Easy68K or Teesside Motorola 68000 Assembler/Emulator at home to test our programs. I told my prof I run x64 Linux and asked what sort of environment I would need to complete my coursework. He said that the easiest environment to use is a Windows XP 32bit VM with one of the two suggested applications installed, however, he doesn't really care what I use as long as I can test what I write at home. So I'm asking if there exists some sort of emulator or environment for Linux so I can test my code, and what sort of caveats I will run into by writing and testing my code in Linux. Also, I plan to do my editing in Vim, which probably isn't a problem, but I would like any insight into editors for 68000 assembly, if you have any. Thanks! EDIT: Just to clarify - I don't want to install Linux on the board at all - I want to program on my home machine, test the code locally, and then bring it onto the board for grading/running.

    Read the article

  • Question about CALL statement

    - by Bruce
    I have the following code in VC++:

        Func5() { StackWalk(); }
        Func4() { Func5(); }

    I am a beginner in x86 assembly language. I am trying to find out the starting address of Func5(). I get Func5()'s return address from its stack frame. Now, before this return address there should be a CALL instruction, so I extract the bytes before the return address. Sometimes it's a near call like E8 ff ff ff d8, so for that statement I subtract the offset 0x28 from the function's return address to get Func5()'s base address (where it resides in memory).

    The problem is I don't know how to calculate this for an indirect NEAR call. I have been trying to find out how to do it for some time now. I have extracted out the first 5 bytes before the return address and they are ff 75 08 ff d2. I think this stands for CALL ECX (ff d2), but I am not sure. I will be very grateful if someone can tell me what kind of CALL statement this is and how I can calculate the function's base address from this kind of call.
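
    For the direct near call case (opcode E8 with a 32-bit relative displacement) the address arithmetic can be written out roughly as the C below. It only covers E8; an indirect call such as the ff d2 form goes through a register or memory operand, so the callee's address is not encoded in the instruction bytes at all and would have to be recovered from the register's value at call time instead:

        #include <stdint.h>
        #include <string.h>

        /* Given the return address taken from the stack frame, recover the target of a
           direct near call (E8 rel32). Returns NULL for any other kind of call. */
        static const uint8_t *call_target_from_return_address(const uint8_t *return_address)
        {
            const uint8_t *call_site = return_address - 5;   /* E8 xx xx xx xx is 5 bytes long */
            if (call_site[0] != 0xE8)
                return NULL;                                 /* not a direct near call */

            int32_t rel32;
            memcpy(&rel32, call_site + 1, sizeof rel32);     /* little-endian signed displacement */

            /* target = address of the next instruction (the return address) + displacement */
            return return_address + rel32;
        }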

    Read the article

  • How is machine code understood by the machine

    - by Kraken
    I have a very naive question here, and I would like you to correct me on whatever wrong concepts I put out. The question is as follows: I have Ubuntu installed on my machine, and I write a helloWorld.c program in the C language. On the operating system I have a compiler installed; when I execute my helloWorld.c program, the OS schedules the compiler, which basically compiles my code into machine code that I eventually execute.

    Now, my kernel code is also written in C, so how does my machine interpret that code? Say my kernel code is helloWorld.c; wouldn't I require a compiler to compile that code too? Also, if I hardcode a compiler in maybe ROM or something, then what language is it written in? Assembly language? Let me know if I have made myself clear with the problem. Thanks.

    EDIT: By kernel code I mean the code for the operating system, i.e. operating system code. I guess it is written in C, right?

    Read the article

  • Coding with laptop and external screen - neck, back, comfort ...

    - by Xorty
    Guess I'm not the only one here coding on a laptop + external keyboard + external screen. I can't really decide between two setups.

    Option 1: Put the screen directly in front of my eyes (upright) and move the laptop to the side. That feels more comfortable, but then I can't see the 15" laptop very well since it's now quite far away; it feels unused.

    Option 2: Put the laptop in front of me and move the external monitor to the side. That feels like a more efficient use of space, but I am afraid that my neck/back might start hurting since I need to turn my head often.

    What do you prefer? Any good advice? Sacrificing health is definitely not an option here, which is why I'm worried and asking this silly question. Thanks

    Read the article

  • Silverlight Unit Testing Framework running tests in external class library

    - by Jonas Follesø
    I'm currently looking into different options for unit testing Silverlight applications. One of the frameworks available is the Silverlight Unit Test Framework from Microsoft (developed primarily by Jeff Wilcox, http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). One of the scenarios I'm looking into is running the same tests on both Silverlight 3 (PC) and Windows Phone 7. The Silverlight Unit Test Framework (SLUT) runs on both the PC and the phone. To prevent having to copy or link files, I would like to put my tests into a shared test library that can be loaded by either a WP7 application using SLUT or a Silverlight 3 application using SLUT. So my question is: will SLUT load unit tests defined in a referenced class library, or only in the executing assembly?

    Read the article

  • Chromebook, Crouton & “External Drive” is it possible to rename the folder?

    - by Cyril N.
    I have a Chromebook where I installed Crouton. I have plugged in an SD card and the Chromebook detects it as "External Drive". In my Ubuntu instance it's located at /media/removable/External Drive/, but this poses some problems for executing some applications I have installed on that external drive. In order to fix the problem, I need to remove all the spaces in the path, i.e. in "External Drive". My question is simple: is it possible to move/rename the mount name "External Drive" to something else, and to do that automatically at every mount/boot? Thank you for your help!

    Read the article

  • ERROR 2019 Linker Error Visual Studio

    - by Corrie Duck
    Hey, I hope someone can tell me how to fix this issue I am having: I keep getting an error LNK2019 from Visual Studio for the following file. Most of the functions have been removed, so excuse the empty variables etc.

    Error:

        error LNK2019: unresolved external symbol "void * __cdecl OpenOneDevice(void *,struct _SP_DEVICE_INTERFACE_DATA *,char *)" (?OpenOneDevice@@YAPAXPAXPAU_SP_DEVICE_INTERFACE_DATA@@PAD@Z) referenced in function _wmain c:\Users\K\documents\visual studio 2010\Projects\test2\test2\test2.obj test2

        #include "stdafx.h"
        #include <windows.h>
        #include <setupapi.h>

        SP_DEVICE_INTERFACE_DATA deviceInfoData;
        HDEVINFO hwDeviceInfo;
        HANDLE hOut;
        char *devName;

        //
        HANDLE OpenOneDevice(IN HDEVINFO hwDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName);

        //
        HANDLE OpenOneDevice(IN HDEVINFO HardwareDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName)
        {
            PSP_DEVICE_INTERFACE_DETAIL_DATA functionClassDeviceData = NULL;
            ULONG predictedLength = 0, requiredLength = 0;
            HANDLE hOut = INVALID_HANDLE_VALUE;

            SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, NULL, 0, &requiredLength, NULL);
            predictedLength = requiredLength;
            functionClassDeviceData = (PSP_DEVICE_INTERFACE_DETAIL_DATA)malloc(predictedLength);
            if (NULL == functionClassDeviceData)
            {
                return hOut;
            }
            functionClassDeviceData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
            if (!SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, functionClassDeviceData, predictedLength, &requiredLength, NULL))
            {
                free(functionClassDeviceData);
                return hOut;
            }
            //strcpy(devName, functionClassDeviceData->DevicePath);
            hOut = CreateFile(functionClassDeviceData->DevicePath,
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
            free(functionClassDeviceData);
            return hOut;
        }

        //
        int _tmain(int argc, _TCHAR* argv[])
        {
            hOut = OpenOneDevice(hwDeviceInfo, &deviceInfoData, devName);
            if (hOut != INVALID_HANDLE_VALUE)
            {
                // error report
            }
            return 0;
        }

    This has been driving me mad for hours. Any help appreciated.

    SOLVED, THANKS TO CHRIS :-) Add #pragma comment (lib, "Setupapi.lib"). Thanks
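
    The fix noted above, written out: the MSVC-specific pragma below (placed near the includes) asks the linker to pull in Setupapi.lib; the equivalent project setting is to add Setupapi.lib under Linker > Input > Additional Dependencies.

        #include <windows.h>
        #include <setupapi.h>

        // MSVC-specific: tells the linker to pull in Setupapi.lib so the SetupAPI
        // functions used above (e.g. SetupDiGetDeviceInterfaceDetail) can be resolved.
        #pragma comment(lib, "Setupapi.lib")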

    Read the article

  • weird performance in C++ (VC 2010)

    - by raicuandi
    Hello, I have this loop written in C++, that compiled with MSVC2010 takes a long time to run (300ms):

        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                if (buf[i*w+j] > 0) {
                    const int sy = max(0, i - hr);
                    const int ey = min(h, i + hr + 1);
                    const int sx = max(0, j - hr);
                    const int ex = min(w, j + hr + 1);
                    float val = 0;
                    for (int k = sy; k < ey; k++) {
                        for (int m = sx; m < ex; m++) {
                            val += original[k*w + m] * ds[k - i + hr][m - j + hr];
                        }
                    }
                    heat_map[i*w + j] = val;
                }
            }
        }

    It seemed a bit strange to me, so I did some tests and then changed a few bits to inline assembly (specifically, the code that sums "val"):

        for (int i = 0; i < h; i++) {
            for (int j = 0; j < w; j++) {
                if (buf[i*w+j] > 0) {
                    const int sy = max(0, i - hr);
                    const int ey = min(h, i + hr + 1);
                    const int sx = max(0, j - hr);
                    const int ex = min(w, j + hr + 1);
                    __asm { fldz }
                    for (int k = sy; k < ey; k++) {
                        for (int m = sx; m < ex; m++) {
                            float val = original[k*w + m] * ds[k - i + hr][m - j + hr];
                            __asm {
                                fld val
                                fadd
                            }
                        }
                    }
                    float val1;
                    __asm { fstp val1 }
                    heat_map[i*w + j] = val1;
                }
            }
        }

    Now it runs in half the time, 150ms. It does exactly the same thing, but why is it twice as quick? In both cases it was run in Release mode with optimizations on. Am I doing anything wrong in my original C++ code?

    Read the article
