Search Results

Search found 21004 results on 841 pages for 'load assembly'.


  • 32bit to 64bit inline assembly porting

    - by Simone Margaritelli
    I have a piece of C++ code (compiled with g++ under a GNU/Linux environment) that loads a function pointer (how it does that doesn't matter), pushes some arguments onto the stack with some inline assembly and then calls that function. The code looks like: unsigned long stack[] = { 1, 23, 33, 43 }; /* save all the registers and the stack pointer */ unsigned long esp; asm __volatile__ ( "pusha" ); asm __volatile__ ( "mov %%esp, %0" :"=m" (esp)); for( i = 0; i < sizeof(stack); i++ ){ unsigned long val = stack[i]; asm __volatile__ ( "push %0" :: "m"(val) ); } unsigned long ret = function_pointer(); /* restore registers and stack pointer */ asm __volatile__ ( "mov %0, %%esp" :: "m" (esp) ); asm __volatile__ ( "popa" ); I'd like to add some sort of #ifdef _LP64 // 64bit inline assembly #else // 32bit version as above example #endif But I don't know inline assembly for 64-bit machines. Could anyone help me? Thanks

    Read the article

  • Mac OS X Assembly Language Esoteria

    - by veryfoolish
    I've been playing around with assembly and object files in general on Mac OS X and was wondering if somebody could provide some edification. Specifically, I'm wondering what the extra code GCC generates when compiling the following C file actually does. I have a toy C program so I can comprehend the assembly output. int main() { int a = 5; int b = 5; int c = a + b; } Running this through gcc -S creates the following assembly: .text .globl _main _main: LFB2: pushq %rbp LCFI0: movq %rsp, %rbp LCFI1: movl $5, -4(%rbp) movl $5, -8(%rbp) movl -8(%rbp), %eax addl -4(%rbp), %eax movl %eax, -12(%rbp) leave ret LFE2: .section __TEXT,__eh_frame,coalesced,no_toc+strip_static_syms+live_support EH_frame1: .set L$set$0,LECIE1-LSCIE1 .long L$set$0 LSCIE1: .long 0x0 .byte 0x1 .ascii "zR\0" .byte 0x1 .byte 0x78 .byte 0x10 .byte 0x1 .byte 0x10 .byte 0xc .byte 0x7 .byte 0x8 .byte 0x90 .byte 0x1 .align 3 LECIE1: .globl _main.eh _main.eh: LSFDE1: .set L$set$1,LEFDE1-LASFDE1 .long L$set$1 LASFDE1: .long LASFDE1-EH_frame1 .quad LFB2-. .set L$set$2,LFE2-LFB2 .quad L$set$2 .byte 0x0 .byte 0x4 .set L$set$3,LCFI0-LFB2 .long L$set$3 .byte 0xe .byte 0x10 .byte 0x86 .byte 0x2 .byte 0x4 .set L$set$4,LCFI1-LCFI0 .long L$set$4 .byte 0xd .byte 0x6 .align 3 LEFDE1: .subsections_via_symbols The LCFI1 section seems to contain the actual logic for the program, but I'm not sure what the misc. other stuff is for... also, is there any scheme these labels are following? I'm sorry this is such a vague question. I'd appreciate anything, including being pointed to a resource where I can find out more about this. Thanks!

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
    When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services, even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times and ended up being a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>) is when assemblies actually load into a .NET process. There are a number of different ways that assemblies are loaded in .NET. When you create a typical project, assemblies usually come from: The Assembly reference list of the top level 'executable' project The Assembly references of referenced projects Dynamic loading at runtime via AppDomain/Reflection loading In addition, .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps or in some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code). Assembly Loading The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether an assembly reference lives in the top level project or in a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies. To check this out I ran a simple test: I have a utility assembly Westwind.Utilities, which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (a dependency on HttpContext.Current), this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library: You can see that the top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies, including System.Web. I then add a main program that accesses only a simple utility method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); } StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command?
    I'll wait here while you think about it… … … So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list, I get: We can see here that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference, even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib), there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities were loaded. If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled-in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words, the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that is actually referenced from your application, and any dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious, since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however, it's all taken care of by the compiler and linker figuring out what types and members are actually referenced, and including only those assemblies that are in fact referenced in your code or required by any of your dependencies. The good news here is that if you are referencing an assembly that depends on something like System.Web only in a few places that are never actually accessed by your code or by any dependent assembly code you are calling, that assembly is never loaded into memory! Some Hosting Environments pre-load Assemblies The load behavior can vary, however. In Console and desktop applications we have full control over assembly loading, so we see the core CLR behavior. However, other environments like ASP.NET will preload referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically, ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used. Understanding when Assemblies Load To clarify what I described in the first example and see it actually happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time, let's create a utility function to print out loaded assemblies to the console: public static void PrintAssemblies() { var assemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var assembly in assemblies) { Console.WriteLine(assembly.GetName()); } } Now let's look at the first scenario, where I have a class method that internally uses System.Web.
    In the first scenario, let's add a method to my main program like this: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); PrintAssemblies(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } UpdateFromRequest() internally accesses HttpContext.Current to read some information from the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example? No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else), System.Web is not loaded. .NET dynamically loads assemblies as the code that needs them is called. No code references the WebLogEntry() method, and so System.Web is never loaded. Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency now exists. Let's change the code to: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.WriteLine("--- Before:"); PrintAssemblies(); WebLogEntry(); Console.WriteLine("--- After:"); PrintAssemblies(); Console.ReadLine(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } Looking at the code now, when do you think System.Web will be loaded? Will the "Before" list include it? Yup, System.Web gets loaded, but only after it's actually referenced. In fact, until just before the call to UpdateFromRequest(), System.Web is not loaded - it only loads when the method is actually called and the executing code requires the reference. Moral of the Story So what have we learned - or maybe remembered again? Dependent Assembly References are not pre-loaded when an application starts (by default) Dependent Assemblies that are not referenced by executing code are never loaded Dependent Assemblies are just-in-time loaded when first referenced in code All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I've learned about this. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen, or assumed incorrectly that just having a reference automatically loads the assembly. The moral of the story for me is: trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web-specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process.
    IOW, System.Web only loads when I use that specific feature, and if I am using it, I clearly have to be running in a Web environment anyway. The alternative would be considerably uglier: pulling the WebLogEntry class out into another assembly and breaking up the logging code. In this case - definitely not worth it. So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs, and in most cases you can just rely on .NET to do the right thing. Sometimes, though, assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's a subject for a whole other post… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, CSharp.
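
    To make the article's experiment easy to reproduce, here is a minimal, self-contained C# sketch of the same idea. It is not the article's exact sample (which relies on the Westwind.Utilities library); instead it uses System.Xml.Linq as a stand-in dependency, and it assumes a classic .NET Framework console app where that type lives in its own assembly:

        using System;

        class LazyLoadDemo
        {
            static void Main()
            {
                Console.WriteLine("--- Before:");
                PrintAssemblies();

                TouchXml(); // first code path that references System.Xml.Linq

                Console.WriteLine("--- After:");
                PrintAssemblies();
                Console.ReadLine();
            }

            // From the article: list every assembly currently loaded in this AppDomain.
            static void PrintAssemblies()
            {
                foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
                    Console.WriteLine("  " + assembly.GetName());
            }

            // System.Xml.Linq only gets loaded when the JIT compiles this method,
            // i.e. just before its first call - not at process start.
            static void TouchXml()
            {
                var doc = System.Xml.Linq.XDocument.Parse("<root/>");
                Console.WriteLine(doc.Root.Name);
            }
        }

    Run it and System.Xml.Linq should show up only in the "After" list, mirroring the System.Web behavior described above.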

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored. Encoding table information As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: Flags specifying various properties of the class, including visibility. The name of the type. The namespace of the type. What type this type extends. The field list of this type. The method list of this type. How is all this data actually represented? Offset & RID encoding Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can't hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used. Coded tokens Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef 0x1: TypeRef 0x2: TypeSpec We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern: 0 3 2 1 0000 0011 0010 0001 The first two bits specify the table - TypeRef; the other bits specify the RID. Because we've used the first two bits, we've got to shift everything along two bits: 000000 1100 1000 This gives us a RID of 0xc8. 
    If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables. Lists The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef belonging to that type. If we were to specify every FieldDef and MethodDef individually then each TypeDef would be very large and a variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item; if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred by the field list and method list RIDs of the next row in the TypeDef table. Going back to the TypeDef If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags - always 4 bytes. Name - a #Strings heap offset. Namespace - a #Strings heap offset. Extends - a TypeDefOrRef coded token. FieldList - a single RID to the FieldDef table. MethodList - a single RID to the MethodDef table. So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
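
    As a small illustration of the decoding rule described above, here is a hedged C# sketch (my own, not code from the post) that unpacks a 2-byte TypeDefOrRef coded token the same way the 0x0321 example is worked through:

        using System;

        class CodedTokenDemo
        {
            // TypeDefOrRef: the 2 least significant bits select the table.
            static readonly string[] Tables = { "TypeDef", "TypeRef", "TypeSpec" };

            static void Decode(ushort codedToken)
            {
                int tableTag = codedToken & 0x3; // low 2 bits: which table
                int rid = codedToken >> 2;       // remaining 14 bits: the RID
                Console.WriteLine("0x{0:X4} -> {1}, RID 0x{2:X}",
                                  codedToken, Tables[tableTag], rid);
            }

            static void Main()
            {
                Decode(0x0321); // prints: 0x0321 -> TypeRef, RID 0xC8
            }
        }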

    Read the article

  • .NET load assembly error metabase config

    - by peter
    Hi, Yesterday I found out that the mailroot directory had moved to a different location on the C drive. I don't know how this happened, as no configs were changed; I'm using IIS7 with the IIS6 SMTP service on Windows 2008 R2 web edition. I wanted to restore it to the default c:\inetpub\mailroot location. So I opened system32/inetsrv/MetaBase.xml, and there in the node was the problem: BadMailDirectory, DropDirectory etc. were all pointing to the wrong location... I changed them back to the default c:\inetpub\mailroot. Later I discovered an ASP.NET application was throwing this error: An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I figured it had to do with my changes to the mailroot locations, so I gave ASP.NET access to the mailroot directory and it was working again... My question is, how is this possible? Why does ASP.NET require access to the mailroot directory to load assemblies? Thanks

    Read the article

  • Load balancing and HTTPS strategies

    - by Dan
    I am faced with the following problem: servers get saturated because the current load balancing strategy is based on the client IP, and some corporate clients access our servers from behind large proxies, so all their clients appear to our load balancer with the same IP. I think we are using some hardware load balancing device (I can investigate further if necessary). We need to maintain session affinity (the site is built in ASP), so all requests with the same IP get routed to the same node. Since all the communication goes over HTTPS, no request data (like a session Id) is available to the balancer as a client discriminator. Is there a way to use some other data besides the IP to distinguish between clients, and route clients to different nodes even when they come from the same IP? Note: I need to keep the traffic between the balancer and the nodes safe (encrypted).

    Read the article

  • Load balancing without a load balancer?

    - by Tom
    I have 4 nginx-powered image servers on their own subdomains which users would access at random. I decided to put them all behind a HAProxy load balancer to improve the reliability and to see the traffic statistics from a single location. It seemed like a no-brainer. Unfortunately, the move was a complete failure as the load balancer's 100mbit port was completely saturated with all requests now going through it. I was wondering what to do about this - I could get a port upgrade ($$) or return to 4 separate image servers that are randomly accessed. I thought about putting HAProxy on each image server which would in turn route to another image server if that server's nginx service was having trouble. What would you do? I would like to not have to spend too much additional money.

    Read the article

  • C2244 when trying to call the pow function from inline assembly

    - by schrödingers cat
    I would like to call the pow function from inline assembly. The problem is I'm getting error C2244: 'pow' : unable to match function definition to an existing declaration. I'm new to assembly so this may be a trivial question, but how do I resolve this? I guess it has something to do with the compiler not being able to properly resolve the overload of pow. The following code fragment is causing the error: do_POW: // push first argument to the stack sub esp, size value_type fld qword ptr [ecx] fstp qword ptr [esp] // push second argument to the stack sub esp, size value_type fld qword ptr [ecx - size value_type] fstp qword ptr [esp] // and pop fpu stack // call the pow function call pow sub ecx, size value_type fstp qword ptr [ecx] add esp, 2 * size value_type jmp loop_start

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general. Signature types There are several types of signatures in an assembly, all of which share a common base representation, and are all stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are: Method definition and method reference signatures. Field signatures Property signatures Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers. Generic type specifications. These represent a particular instantiation of a generic type. Generic method specifications. Similarly, these represent a particular instantiation of a generic method. All these signatures share the same underlying mechanism to represent a type Representing a type All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (ie not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well, for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicating custom modifiers. Some examples... To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets. A simple field: int intField; FIELD I4 A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type): T[] genArrayField FIELD SZARRAY VAR 0 An instance method signature (note how the number of parameters does not include the return type): instance string MyMethod(MyType, int&, bool[][]); HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN A generic type instantiation: MyGenericType<MyType, MyStruct> GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct} For more complicated examples, in the following C# type declaration: GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... 
    } the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob: GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0 And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR): TResult[] GenericMethod<TInput, TResult>( TInput, System.Converter<TInput, TResult>); GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1 As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
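
    The "compressed format" mentioned above for the TypeDefOrRef value in signatures is the standard ECMA-335 (II.23.2) compressed unsigned integer scheme. As a sketch of how those blob bytes come about - my own illustration, not code from the post - an encoder looks roughly like this in C#:

        using System;

        class CliCompression
        {
            // ECMA-335 II.23.2 compressed unsigned integers:
            //   0x00..0x7F         -> 1 byte:  0bbbbbbb
            //   0x80..0x3FFF       -> 2 bytes: 10bbbbbb bbbbbbbb
            //   0x4000..0x1FFFFFFF -> 4 bytes: 110bbbbb bbbbbbbb bbbbbbbb bbbbbbbb
            static byte[] CompressUInt(uint value)
            {
                if (value <= 0x7F)
                    return new[] { (byte)value };
                if (value <= 0x3FFF)
                    return new[] { (byte)(0x80 | (value >> 8)), (byte)value };
                if (value <= 0x1FFFFFFF)
                    return new[] { (byte)(0xC0 | (value >> 24)), (byte)(value >> 16),
                                   (byte)(value >> 8), (byte)value };
                throw new ArgumentOutOfRangeException("value");
            }

            static void Main()
            {
                Console.WriteLine(BitConverter.ToString(CompressUInt(0x03)));   // 03
                Console.WriteLine(BitConverter.ToString(CompressUInt(0x3FFF))); // BF-FF
                Console.WriteLine(BitConverter.ToString(CompressUInt(0x4000))); // C0-00-40-00
            }
        }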

    Read the article

  • Open source DNS-based incoming load balancing?

    - by sahil
    Hi, I am using a Linkproof device for incoming load balancing, which is based on DNS. For example, my Linkproof has 3 ISP links. It also acts as a DNS server for my sites, so when someone connects to me using my DNS name, my Linkproof gives them an IP from whichever of the 3 ISPs is responding fastest. So, is there any similar kind of open source solution for DNS-based incoming load balancing? sahil

    Read the article

  • Load balancing application servers with Alteon 2424-SSL

    - by antispam
    We are having problems with our load balancing configuration and we would like to clarify the situation. We need to load balance among four JavaEE web application servers. The servers are configured as host1 port 7001, host1 port 7002, host2 port 7001, host2 port 7002. Do any of you know if this is possible with the Nortel 2424-SSL application switch? Which would be the best configuration for it? (vips, ports, groups, services, ...) Thank you very much.

    Read the article

  • Assembly load and execute issue

    - by Jean Carlos Suárez Marranzini
    I'm trying to develop Assembly code that allows me to load and execute (by input of the user) 2 other Assembly .EXE programs. I'm having two problems: I don't seem to be able to assign the pathname to a valid register (or maybe the syntax is incorrect), and I need to be able to execute the other program after the first one (could be either) has started its execution. This is what I have so far: mov ax,cs ; moving code segment to data segment mov ds,ax mov ah,1h ; here I read from keyboard int 21h mov dl,al cmp al,'1' ; if 1 jump to LOADRUN1 JE LOADRUN1 popf cmp al,'2' ; if 2 jump to LOADRUN2 JE LOADRUN2 popf LOADRUN1: MOV AH,4BH MOV AL,00 LEA DX,[PROGNAME1] ; Not sure if it works INT 21H LOADRUN2: MOV AH,4BH MOV AL,00 LEA DX,[PROGNAME2] ; Not sure if it works INT 21H ; Here I define the bytes containing the pathnames PROGNAME1 db 'C:\Users\Usuario\NASM\Adding.exe',0 PROGNAME2 db 'C:\Users\Usuario\NASM\Substracting.exe',0 I just don't know how to start another program by input in the 'parent' program, after one is already executing. Thanks in advance for your help! Any additional information I'll be more than happy to provide. I'm using NASM 16-bit, on Windows 7 32-bit.

    Read the article

  • Cisco Catalyst 3550 + Alteon 184 Load-Balancing Issues...

    - by upkels
    I have just deployed a couple Cisco Catalyst 3550 switches, and a couple Alteon 184 Web Switches for load-balancing. I can ping all RIPs and VIPs to/from the Alteon. Topology Before: (server) <- (Alteon) <- (Internet) Topology Now: (server) <- (3550) <- Alteon <- (Internet) Cisco Port Configuration (Alteon Uplink Port): description LB_1_PORT_9_PRIMARY switchport access vlan 10 switchport mode access switchport nonegotiate speed 100 duplex full Alteon Port 9 Configuration (VLAN 10 WAN): >> Main# /c/port 9/cur Current Port 9 configuration: enabled pref fast, backup gig, PVID 10, BW Contract 1024 name UPLINK >> Main# /c/port 9/fast/cur Current Port 9 Fast link configuration: speed 100, mode full duplex, fctl none, auto off Cisco Configuration (Load-Balanced Servers Port): description LB_1_PORT_1_PRIMARY switchport access vlan 30 switchport mode access switchport nonegotiate speed 100 duplex full Alteon Port 1 Configuration (VLAN 30 LOAD-BALANCED LAN): >> Main# /c/port 1/cur Current Port 1 configuration: enabled pref fast, backup gig, PVID 30, BW Contract 1024 name LB_PORT_1 >> Main# /c/port 1/fast/cur Current Port 1 Fast link configuration: speed 100, mode full duplex, fctl both, auto on Each of my servers are on vlan 10 and 30, properly communicating. I have tried to turn on VLAN tagging on the Alteon, however it seems to cause all communications to stop working. When I tcpdump -i vlan30 on any of the webservers, I see normal ARP communications, and some STP communications, which may or may not be part of the problem: ... 15:00:51.035882 STP 802.1d, Config, Flags [none], bridge-id 801e.00:11:5c:62:fe:80.8041, length 42 15:00:51.493154 IP 10.1.1.254.33923 > 10.1.1.1.http: Flags [S], seq 707324510, win 8760, options [mss 1460], length 0 15:00:51.493336 IP 10.1.1.1.http > 10.1.1.254.33923: Flags [S.], seq 3981707623, ack 707324511, win 65535, options [mss 1460], len gth 0 15:00:51.493778 ARP, Request who-has 10.1.3.1 tell 10.1.3.254, length 46 etc... I'm not sure if I've provided enough information, so please let me know if any more is necessary. Thank you!

    Read the article

  • Maven assembly - Error reading assemblies

    - by Laurent
    Dear all, I have defined a custom jar-with-dependencies assembly descriptor. However, when I execute it with mvn assembly:assembly, I get: ... [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] javax/ already added, skipping [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] META-INF/maven/ already added, skipping [INFO] [assembly:assembly {execution: default-cli}] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. My jar-with-dependencies.xml is in src/main/resources/assemblies/. My assembly descriptor is the following: <?xml version='1.0' encoding='UTF-8'?> <assembly> <id>jar-with-dependencies</id> <formats> <format>jar</format> </formats> <dependencySets> <dependencySet> <scope>runtime</scope> <unpack>true</unpack> <unpackOptions> <excludes> <exclude>**/LICENSE*</exclude> <exclude>**/README*</exclude> </excludes> </unpackOptions> </dependencySet> </dependencySets> <fileSets> <fileSet> <directory>${project.build.outputDirectory}</directory> <outputDirectory>/</outputDirectory> </fileSet> <fileSet> <directory>src/main/resources/META-INF/services</directory> <outputDirectory>META-INF/services</outputDirectory> </fileSet> </fileSets> </assembly> And my project pom.xml is: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <executions> <execution> <id>jar-with-dependencies</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <descriptors> <descriptor>jar-with-dependencies.xml</descriptor> </descriptors> <archive> <manifest> <mainClass>org.my.app.HowTo</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> When mvn assembly:assembly is performed, the dependencies are unpacked and I get the previous error once unpacking has finished. Moreover, if I execute mvn -e assembly:assembly, it says that no descriptors have been found; it still tries to unpack the dependencies, and a JAR with dependencies is created, but it doesn't contain META-INF/services/* as specified in the descriptor: [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Error reading assemblies: No assembly descriptors found.
    at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Error reading assemblies: No assembly descriptors found. at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:356) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.plugin.assembly.io.AssemblyReadException: No assembly descriptors found. at org.apache.maven.plugin.assembly.io.DefaultAssemblyReader.readAssemblies(DefaultAssemblyReader.java:206) at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:352) ... 19 more I don't see my error. Does someone have a solution? Kind Regards, Laurent

    Read the article

  • Is there much difference between X86 Assembly language on Windows and Linux?

    - by Logan545
    I'm a complete beginner at Assembly, and my aim is to learn as much about it as I can so that one day I can reach expert level (I know I'm way off right now, but you never know). My only problem is this: I've got two books which both teach assembly, one on Linux and the other on Windows. They are Jeff Duntemann's Assembly Language Step By Step (the Linux one) and Introduction to 80x86 Assembly Language and Computer Architecture (the Windows version). If I want to get the best out of assembly, should I do this on Linux and Windows? Also, is the syntax the same on Windows and Linux, or will I have to teach myself again when learning on the other OS? (This is my main concern; I want to be able to use assembly on Windows and Linux.)

    Read the article

  • Serializing and Deserializing External Assembly in C#

    - by Heka
    I wrote a plugin system and I want to save/load the plugins' properties so that if the program is restarted they can continue working. I use binary serialization. The problem is they can be serialized but not deserialized. During deserialization, an "Unable to find assembly" exception is thrown. How can I restore the serialized data?
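
    One common way to attack this (a hedged sketch of a general technique, not necessarily the poster's fix): BinaryFormatter resolves types through normal assembly probing, which fails for plugin assemblies loaded from non-default locations, so a custom SerializationBinder can look the type up among the assemblies already loaded into the AppDomain instead:

        using System;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        sealed class PluginBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                // Plugins loaded via Assembly.LoadFrom are in the AppDomain
                // even though default probing cannot find them on disk.
                foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
                {
                    if (asm.FullName == assemblyName)
                    {
                        Type type = asm.GetType(typeName);
                        if (type != null) return type;
                    }
                }
                // Fall back to the default lookup.
                return Type.GetType(typeName + ", " + assemblyName);
            }
        }

        static class PluginState
        {
            public static object Load(Stream stream)
            {
                var formatter = new BinaryFormatter { Binder = new PluginBinder() };
                return formatter.Deserialize(stream);
            }
        }

    Hooking AppDomain.CurrentDomain.AssemblyResolve and returning the already-loaded plugin assembly is an equivalent alternative.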

    Read the article

  • create assembly from network location

    - by mjw06d
    The error I'm receiving: CREATE ASSEMBLY failed because it could not open the physical file "\\<server>\<folder>\<assembly>.dll": 5(Access is denied.). TSQL: exec sp_configure 'clr enabled', 1 reconfigure go create assembly <assemblyname> from '\\<server>\<folder>\<assembly>.dll' with permission_set = safe How can I create an assembly from a UNC path?

    Read the article

  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server with a high traffic website. Recently we moved from a 2 x 4 core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4 core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazingly fast. Today the server experienced a load average of 100+. Looking at top and sar, we noticed that (%sys) is very high - 50 to 70%. Disk utilization was less than 1%. We tried to reboot, but the problem persisted after the reboot. At any moment the server had at least 3-4 GB of free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar: 11:00:01 CPU %user %nice %system %iowait %steal %idle 11:10:01 all 21.60 0.00 66.38 0.03 0.00 11.99 We know that this is a traffic issue, but we do not know how to proceed further or where to look for a solution. Is there a way we can find where exactly those "66.38%" are used? Any suggestions would be appreciated.

    Read the article

  • Assembly: keep getting seg fault when working with the stack [migrated]

    - by user973917
    I'm trying to learn assembly and have found that I keep getting segfaults when trying to push/pop data off of the stack. I've read a few guides and know how the stack works and how to work with it, but I don't know why I keep getting the error. Can someone help? segment .data myvar: db "hello world", 0xA0, 0 myvarL: equ $-myvar segment .text global _start _start: push ebp mov ebp, esp push myvarL push myvar call _hworld _hworld: mov eax, 4 mov ebx, 1 mov ecx, [ebp+4] mov edx, [ebp+8] pop ebp int 0x80 ret I'm assuming that the +4 is 32 bits, then +8 is 64 bits. It isn't really clear to me why it's done this way in some of the guides I've read. I would assume that myvar is 13 bits?

    Read the article

  • Assembly as a First Programming Language?

    - by Anto
    How good an idea do you think it would be to teach people Assembly (some variant) as a first programming language? It would take a lot more effort than learning, for instance, Java or Python, but one would have a good understanding of the machine more or less from "programming day one" (compared to many higher level languages, at least). What do you think? Is it a realistic idea, at least for those who are ready to make the extra effort? What are the advantages and disadvantages? Note: I'm no teacher, just curious

    Read the article

  • Are there jobs which are oriented towards optimisation programming or assembly

    - by jokoon
    3D engine programmers have to care a little about execution speed, but what about the programmers at ATI and nVidia? How much do they need to optimize their driver applications? Are there jobs out there whose only purpose is execution speed and optimisation, or jobs for people who program only in assembly? Please, no flame war about "premature optimisation is the root of all evil"; I just want to know if such jobs exist. Maybe in security? In kernel programming? Where? Not at all?

    Read the article
