Search Results

Search found 9894 results on 396 pages for 'primary interop assembly'.

Page 64/396

  • Rename INDEX Column

    - by Lee
    Hey all, I have a database with around 40 tables and need to rename the ID column in every table. E.g. a USER table has a bunch of fields like user_id | user_username | user_password | etc. I want to rename the ID columns to just id, i.e. id | user_username | user_password | etc. But I keep getting MySQL errors on the ALTER TABLE command, e.g.

        ALTER TABLE database RENAME COLUMN user_id to id;

    plus many different variations. What's the best way to do this? Hope you can advise.


  • How to pass a const unsigned char * from c++ to c#

    - by tzup
    So I have a function in unmanaged C++ that gets called when some text happens to have "arrived":

        #using <MyParser.dll>
        ...
        void dump_body(const unsigned char *Body, int BodyLen)
        {
            // Need to pass the body to DumpBody, but as what type?
            ...
            MyParser::Parser::DumpBody(???);
        }

    DumpBody is a function defined in a C# DLL; what type should its one parameter be? Body holds an array of characters (text) of length BodyLen. There's obviously some marshalling to be done here, but I have no idea how. Please help.
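
    A minimal sketch of one common approach on the C# side, assuming DumpBody is free to take a raw pointer plus a length (the class and method names follow the question; everything else is illustrative):

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        namespace MyParser
        {
            public static class Parser
            {
                // The C++/CLI caller can pass the pointer and length directly.
                public static void DumpBody(IntPtr body, int bodyLen)
                {
                    byte[] buffer = new byte[bodyLen];
                    Marshal.Copy(body, buffer, 0, bodyLen); // copy the unmanaged bytes
                    Console.WriteLine(Encoding.ASCII.GetString(buffer));
                }
            }
        }

    From the C++/CLI side the call would then look something like MyParser::Parser::DumpBody(System::IntPtr((void*)Body), BodyLen). An alternative is declaring the parameter as array<System::Byte>^ and building the managed array in the C++/CLI wrapper.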


  • find out if there are hidden columns in excel

    - by ps
    Hi all, I need to know if there are hidden columns in an Excel sheet. I used the following, which worked fine, and then suddenly it stopped working; now it always returns false:

        bool.Parse(worksheet.PageSetup.Application.Columns.Hidden.ToString())

    TIA. Excel 2007, .NET 3.5.
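
    A hedged alternative sketch: rather than asking the whole Columns collection for one answer, walk the used range and test each column individually. This assumes a Microsoft.Office.Interop.Excel.Worksheet is in scope; the helper name is illustrative:

        using Excel = Microsoft.Office.Interop.Excel;

        static bool HasHiddenColumns(Excel.Worksheet worksheet)
        {
            foreach (Excel.Range column in worksheet.UsedRange.Columns)
            {
                // EntireColumn.Hidden is a plain bool for a single whole column.
                if ((bool)column.EntireColumn.Hidden)
                    return true;
            }
            return false;
        }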


  • Database PK-FK design for future-effective-date entries?

    - by Scott Balmos
    Ultimately I'm going to convert this into a Hibernate/JPA design, but I wanted to start out from purely a database perspective. We have various tables containing data that is future-effective-dated. Take an employee table with the following pseudo-definition:

        employee
            id INT AUTO_INCREMENT
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Very simplistic. But let's say Employee A has id = 1, effectiveFrom = 1/1/2011, effectiveTo = 1/1/2099. That employee is going to be changing jobs in the future, which would in theory create a new row, id = 2, with effectiveFrom = 7/1/2011, effectiveTo = 1/1/2099, and id = 1's effectiveTo updated to 6/30/2011. But now my program would have to go through every table that has a FK relationship to employee each night and update those FKs to reference the newly-effective employee entry.

    I have seen various postings in both pure SQL and Hibernate forums suggesting that I should have a separate employee_versions table, which is where all the effective-dated data would be stored, resulting in the updated pseudo-definition below:

        employee
            id INT AUTO_INCREMENT

        employee_versions
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id
            ... data fields ...
            effectiveFrom DATE
            effectiveTo DATE

        employee_reviews
            id INT AUTO_INCREMENT
            employee_id INT FK employee.id

    Then to get any actual data, one would have to select from employee_versions with the proper employee_id and date range. It feels rather unnatural to have this secondary "versions" table for each versioned entity. Anyone have any opinions or suggestions from your own prior work? Like I said, I'm taking this purely from a general SQL design standpoint first, before layering Hibernate on top. Thanks!


  • structure in .NET

    - by redpaladin
    My problem is very simple: I need to send a structure between a C program and a C# program. I made a struct in C#:

        public struct NetPoint
        {
            public float lat; // 4 bytes
            public float lon; // 4 bytes
            public int alt;   // 4 bytes
            public long time; // 8 bytes
        }

    The total size of the struct should be 20 bytes. When I take the size of this struct with

        System.Diagnostics.Debug.WriteLine("SizeOf(NetPoint)=" + System.Runtime.InteropServices.Marshal.SizeOf(new NetPoint()));

    the debug console shows: SizeOf(NetPoint)=24. But I expected 20 bytes. Why is there a difference? Thank you in advance.
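
    The extra 4 bytes are alignment padding: by default the 8-byte time field is aligned to an 8-byte boundary, so 4 pad bytes are inserted after alt and the struct rounds up to 24. A minimal sketch of the usual fix, forcing byte packing so the marshaled layout matches the 20-byte C layout:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        public struct NetPoint
        {
            public float lat;  // offset 0
            public float lon;  // offset 4
            public int alt;    // offset 8
            public long time;  // offset 12 (no padding with Pack = 1)
        }

        class Demo
        {
            static void Main()
            {
                // Prints 20 once the padding is gone.
                Console.WriteLine("SizeOf(NetPoint)=" + Marshal.SizeOf(typeof(NetPoint)));
            }
        }

    Note that the C side has to agree: if the C struct is compiled with default alignment it will also be padded to 24 bytes, so the packing (or explicit padding) needs to match on both ends.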


  • Creating a C# DLL and using it from unmanaged C++

    - by John
    I have a native (unmanaged) C++ application (using wxWidgets, for what it's worth). I'm considering a separate tool application to be written in C# which would contain WinForms-based dialogs. Putting some of those dialogs in a separate DLL would be useful, as I'd hope to be able to use them from my C++ app. But I have no idea how much messing about is needed to accomplish this. Is it particularly easy?
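
    One common route (a sketch of one option, not the only one) is exposing the C# types to native code through COM interop; the unmanaged app then creates them with CoCreateInstance and never hosts the CLR by hand. All names and GUIDs below are illustrative:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        [ComVisible(true)]
        [Guid("E4B2A7D1-1111-4AAA-8BBB-000000000001")] // illustrative GUID
        public interface IDialogLauncher
        {
            void ShowSettingsDialog();
        }

        [ComVisible(true)]
        [Guid("E4B2A7D1-1111-4AAA-8BBB-000000000002")] // illustrative GUID
        [ClassInterface(ClassInterfaceType.None)]
        public class DialogLauncher : IDialogLauncher
        {
            public void ShowSettingsDialog()
            {
                // SettingsForm is a hypothetical WinForms dialog in the same DLL.
                using (var form = new SettingsForm())
                    form.ShowDialog();
            }
        }

    The assembly is then registered with regasm, and the wxWidgets code calls it like any other COM object. The main alternatives are a thin C++/CLI wrapper DLL or hosting the CLR explicitly.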


  • Blittable Vs. Non-Blittable in IL

    - by Michael Covelli
    I'm trying to make sure that my managed-to-unmanaged calls are optimized. Is there a quick way to see, by looking at the IL, whether any non-blittable types have accidentally gotten into my P/Invoke calls? I tried just writing two unmanaged functions in a DLL, one that takes a bool (which is non-blittable) and one that takes ints. But I didn't see anything different when looking at the IL to let me know that it was doing something extra to marshal the bool.
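
    For what it's worth, a small sketch of the comparison (the DLL name is hypothetical; the declarations are ordinary P/Invoke). The default bool conversion is applied silently at run time, which would explain why nothing stood out in the IL; stating the marshaling explicitly is one way to make it visible:

        using System.Runtime.InteropServices;

        static class Native
        {
            // bool is non-blittable. Without an explicit MarshalAs, the IL looks
            // much like the int version; the 4-byte Win32 BOOL conversion happens
            // at run time.
            [DllImport("sample.dll")] // hypothetical DLL name
            public static extern void TakesBool(bool flag);

            // An explicit attribute shows up in ildasm as a marshal(bool)
            // modifier on the pinvokeimpl method signature.
            [DllImport("sample.dll")]
            public static extern void TakesBoolExplicit([MarshalAs(UnmanagedType.Bool)] bool flag);

            // int is blittable: no conversion, no marshal(...) modifier.
            [DllImport("sample.dll")]
            public static extern void TakesInt(int value);
        }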


  • Add 64 bit offset to a pointer

    - by Novox
    In F#, there's the NativePtr module, but it seems to support only 32-bit offsets for its add/get/set functions, just like System.IntPtr does. Is there a way to add a 64-bit offset to a native pointer (nativeptr<'a>) in F#? Of course I could convert all addresses to 64-bit integers, do normal integer operations and then convert the result back to nativeptr<'a>, but this would cost additional add and imul instructions. I really want the AGUs to perform the address calculations. For instance, using unsafe in C# you could do something like:

        void* ptr = Marshal.AllocHGlobal(...).ToPointer();
        int64 offset = ...;
        T* newAddr = (T*)ptr + offset; // T has to be an unmanaged type

    Well, actually you can't, because there is no "unmanaged" constraint for type parameters, but at least you can do general pointer arithmetic in a non-generic way. In F# we finally got the unmanaged constraint; but how do I do the pointer arithmetic?
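
    For reference, a minimal non-generic C# sketch of what the question describes: adding a 64-bit offset to a raw pointer is legal in unsafe code, and byte* arithmetic avoids any scaling multiply (the numbers are illustrative):

        using System;
        using System.Runtime.InteropServices;

        class PointerDemo
        {
            static unsafe void Main()
            {
                IntPtr block = Marshal.AllocHGlobal(1024);
                long offset = 512;                              // a 64-bit offset
                byte* addr = (byte*)block.ToPointer() + offset; // pointer + long is valid C#
                *addr = 42;
                Marshal.FreeHGlobal(block);
            }
        }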


  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
    I found an issue with InternalsVisibleTo attribute usage. The idea of the InternalsVisibleTo attribute is to allow certain other assemblies to use the internal classes/methods of this assembly. To make it work, you need to sign your assemblies. So if another assembly isn't listed in the main assembly, or if it has an incorrect public key, it can't use the internal members. But the issue is in Reflection.Emit type generation.

    For example, we have a CorpLibrary1 assembly and it has this class:

        public class TestApi
        {
            internal virtual void DoSomething()
            {
                Console.WriteLine("Base DoSomething");
            }

            public void DoApiTest()
            {
                // some internal logic
                // ...
                // call internal method
                DoSomething();
            }
        }

    The assembly is marked with this attribute to allow another assembly, CorpLibrary2, to inherit from TestApi and override the behaviour of the DoSomething method:

        [assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")]

    The issue is that I'm able to override this internal DoSomething method and break the class logic. My steps to do it:

        1. Generate a new assembly at runtime via AssemblyBuilder.
        2. Get the AssemblyName from CorpLibrary1 and copy its PublicKey to the new assembly.
        3. In the new assembly, generate a type that inherits TestApi.
        4. As the public key and name of the generated assembly are the same as in InternalsVisibleTo, we can generate a new DoSomething method that overrides the internal method in the TestApi assembly.

    Then we have another assembly that isn't related to CorpLibrary1 and can't use its internal members. We have this test code in it:

        class Program
        {
            static void Main(string[] args)
            {
                var builder = new FakeBuilder<TestApi>(InjectBadCode, "DoSomething", true);
                TestApi fakeType = builder.CreateFake();
                fakeType.DoApiTest();
                // it will display:
                //   Inject bad code
                //   Base DoSomething
                Console.ReadLine();
            }

            public static void InjectBadCode()
            {
                Console.WriteLine("Inject bad code");
            }
        }

    And the FakeBuilder class has this code:

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Reflection;
        using System.Reflection.Emit;
        using System.Runtime.CompilerServices;
        using System.Threading;

        /// <summary>
        /// Builder that generates an inheritor for the specified type and
        /// overrides the specified internal virtual method.
        /// </summary>
        /// <typeparam name="TFakeType">Target type</typeparam>
        public class FakeBuilder<TFakeType>
        {
            private readonly Action _callback;
            private readonly Type _targetType;
            private readonly string _targetMethodName;
            private readonly string _slotName;
            private readonly bool _callBaseMethod;

            public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod)
            {
                int randomId = new Random((int)DateTime.Now.Ticks).Next();
                _slotName = string.Format("FakeSlot_{0}", randomId);
                _callback = callback;
                _targetType = typeof(TFakeType);
                _targetMethodName = targetMethodName;
                _callBaseMethod = callBaseMethod;
            }

            public TFakeType CreateFake()
            {
                // As CorpLibrary1 can't use code from unreferenced assemblies, we need to
                // store this Action somewhere, and Thread is not a bad place for that.
                // It's not the best place, as it won't work in a multithreaded
                // application, but it's just a sample.
                LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName);
                Thread.SetData(slot, _callback);

                // Then we generate a new assembly with the same name and public key as
                // the target assembly trusts via the InternalsVisibleTo attribute.
                var newTypeName = _targetType.Name + "Fake";
                var targetAssembly = Assembly.GetAssembly(_targetType);
                AssemblyName an = new AssemblyName();
                an.Name = GetFakeAssemblyName(targetAssembly);

                // Copying the public key to the newly generated assembly.
                var assemblyName = targetAssembly.GetName();
                an.SetPublicKey(assemblyName.GetPublicKey());
                an.SetPublicKeyToken(assemblyName.GetPublicKeyToken());

                AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave);
                ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true);

                // Create an inheritor for the specified type.
                TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType);

                // LambdaExpression.CompileToMethod can be used only with static methods,
                // so we need to create another method that will call our Inject method.
                // We could do the same via ILGenerator, but expression trees are easier to use.
                MethodInfo methodInfo = CreateMethodInfo(moduleBuilder);

                MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual);
                ILGenerator ilGenerator = methodBuilder.GetILGenerator();

                // Call our static method that will call the inject method.
                ilGenerator.EmitCall(OpCodes.Call, methodInfo, null);

                // If needed, emit a call to the base method.
                if (_callBaseMethod)
                {
                    var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance);
                    // Place "this" on the stack.
                    ilGenerator.Emit(OpCodes.Ldarg_0);
                    // Call the base method.
                    ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]);
                }
                // Return.
                ilGenerator.Emit(OpCodes.Ret);

                // Generate the type, instantiate it and return it to the caller.
                Type cheatType = typeBuilder.CreateType();
                object type = Activator.CreateInstance(cheatType);
                return (TFakeType)type;
            }

            /// <summary>
            /// Get the name of the assembly from the InternalsVisibleTo AssemblyName.
            /// </summary>
            private static string GetFakeAssemblyName(Assembly assembly)
            {
                var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute;
                if (internalsVisibleAttr == null)
                {
                    throw new InvalidOperationException("Assembly hasn't InternalVisibleTo attribute");
                }
                var ind = internalsVisibleAttr.AssemblyName.IndexOf(",");
                var name = internalsVisibleAttr.AssemblyName.Substring(0, ind);
                return name;
            }

            /// <summary>
            /// Generate such code:
            /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke();
            /// </summary>
            private LambdaExpression MakeStaticExpressionMethod()
            {
                var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public);
                var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public);
                var call = Expression.Call(allocateMethod, Expression.Constant(_slotName));
                var getCall = Expression.Call(getDataMethod, call);
                var convCall = Expression.Convert(getCall, typeof(Action));
                var invokExpr = Expression.Invoke(convCall);
                var lambda = Expression.Lambda(invokExpr);
                return lambda;
            }

            /// <summary>
            /// Generate a static class with one static function that executes the
            /// Action from the Thread named data slot.
            /// </summary>
            private MethodInfo CreateMethodInfo(ModuleBuilder moduleBuilder)
            {
                var methodName = "_StaticTestMethod_" + _slotName;
                var className = "_StaticClass_" + _slotName;
                TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class);
                MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public);
                LambdaExpression expression = MakeStaticExpressionMethod();
                expression.CompileToMethod(methodBuilder);
                var type = typeBuilder.CreateType();
                return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public);
            }
        }

    Remarks about the sample: as we need to execute code from another assembly that CorpLibrary1 has no access to, we need to store the delegate somewhere. Just for testing, I stored it in a Thread named data slot. That won't work in multithreaded applications, but it's just a sample.

    I know that we can use Reflection to get the private/internal members of any class, but with Reflection we can't override them. This issue allows anyone to override an internal class/method if the assembly has an InternalsVisibleTo attribute. I tested it on .NET 3.5/4 and it works on both. How is it possible to just copy the public key, without the private key, and use it at runtime? The whole sample can be found here - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo

    UPDATE 1: The test code in the Program and FakeBuilder classes has no access to the key.snk file, and that library isn't signed, so it has no public key at all. It just copies the key from CorpLibrary1 by using Reflection.Emit.


  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PC computers. I'd like it to be 64-bit, but possibly run as 32-bit as well. I have these kinds of questions:

        1. What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input. Generally, how do you go about making your own 32-bit/64-bit operating system, and what are good resources with useful information about writing your own operating system for x86 computers? I don't care how old sources are as long as they are still relevant and useful to what I am doing.

        2. I know that I will want it to have kernel drivers that access peripheral hardware directly. Where should I look for advice and documentation on programming and understanding the interface to the peripheral hardware the operating system will communicate with? I will need to understand how the operating system will receive input from, and interact with, keyboards, mice, computer monitors, hard drives, USB, etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about.

        3. At this point I'm thinking that I'd like to implement the Linux system calls within my operating system, so that programs that run on Linux can run on it too. I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides and using the ELF format? What else?

    I am also interested in people's thoughts about why it might not be a good idea to make your own operating system, and why it might be. Thank you for any input.


  • .NET Reference "Copy Local" True / False Being Set Based on Contents of GAC

    - by D-Sect
    We had a very interesting problem with a WinForms project. It's been resolved, and we know what happened, but we want to understand why it happened. This may help other people who hit a similar problem in the future.

    The WinForms project failed on 2 of our client's PCs with an obscure kernel.dll error. The project ran fine on 3 other PCs. We found that a DLL (log4net.dll - a very popular open-source logging library) was missing from our release folder. It had previously been in our release folder, so why was it missing in this latest release? It was missing because I must have installed a program on my dev box that used log4net.dll, and it was added to the Global Assembly Cache. When I checked the solution's references for log4net.dll, they had changed to "Copy Local = FALSE". They must have changed automatically because log4net.dll was present in my GAC.

    Here's where my question starts: why did my reference for log4net.dll get changed from COPY LOCAL = TRUE to COPY LOCAL = FALSE? I suspect it's because it was added to my GAC by another program. How can we prevent this from happening again? As it stands now, if I install a piece of software that uses a common library and it adds that library to my GAC, then my solutions that reference that DLL will change from Copy Local TRUE to FALSE.


  • Which Computer Organization & Architecture book is good for me?

    - by claws
    I'm always interested in learning the inner workings of things. I started with C programming, then learnt operating systems (from Stallings), then linkers & loaders, and then assembly language. After reading these, I now want to go into a little more depth: computer architecture. I feel that makes everything clear.

    As per the SO archives, these are the two good books:

        Computer Architecture: A Quantitative Approach, 4th Edition
        Computer Organization and Design, Fourth Edition ~ David A. Patterson, John L. Hennessy

    But I've browsed through the contents of these books and found that they don't exactly meet my needs. I want to learn more about caches, the Memory Management Unit, and the mapping between virtual memory & physical memory. I'm in no way interested in other ISAs like MIPS etc.; I'm an IA-32 and x86-64 fan and I want to stick to them. I'm not a hardware developer, so I don't want details like circuit diagrams or how the L1, L2 & L3 caches are implemented. I want to know about parallel processing technologies like HyperThreading at the architecture level, but again, I don't want to design them. I liked the table of contents of Computer Architecture: A Quantitative Approach, 4th Edition, but a quantitative approach? Seriously? I want to know the details of current technologies, and I don't want to spend 200 pages reading about outdated old technologies (I experienced this while learning ASM).


  • P6 Architecture - Register renaming aside, does the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, while the internal register count has grown tremendously; but these internal registers are not generally available, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that, while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that I am unaware of which also optimize this away? Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?


  • Learning about the low level

    - by Anoners
    I'm interested in learning more about the PC from a lower (machine) level. I graduated from a school which taught us concepts using the Java language, which abstracted that level out almost completely. As a result, I only learned a bit from the one required assembly language course; in order to cram in ASM and quite a few details about architecture, it was hard to get a very deep picture of what is going on there. At work I focus on Unix socket programming in C, so I'm much closer to the hardware now, but I feel I should learn a bit more about what streams really are, how memory management and paging work, what goes on when you call "paint()" on a graphics buffer, etc. I missed out on a lot of this and I'm looking for a good resource to get me started. I've heard a lot about the "Pink Book" by Peter Norton (Programmer's Guide to the IBM PC, Programmer's Guide to Inside the PC, etc.). It seems like this is on the right track; however, the original is quite outdated and the newer editions have had conflicting reviews, with many people saying to stay away from them. I'm not sure what the SO crowd thinks about this book, or whether they have suggestions for similar books, online resources, etc. that may be good primers for this sort of thing. Any suggestions would be appreciated.


  • SAL and SAR by 0 errors

    - by Roy McAvoy
    I have discovered a bug in some assembly code I have been working with, but I can't figure out how to fix it. When shifting left by 0, the result ends up being 0 instead of just the number; the same applies when shifting to the right. Any and all help is much appreciated.

        function sal(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
          @1:
            sal n, 1
            loop @1
          end;
          sal := n;
        end;

        function sar(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
          @1:
            sar n, 1
            loop @1
          end;
          sar := n;
        end;

    I have tried changing them in the following way and it still does not work properly:

        function sal(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
            jcxz @done
          @1:
            sal n, 1
            loop @1
          @done:
          end;
          sal := n;
        end;

        function sar(n, k: integer): integer;
        begin
          asm
            cld
            mov cx, k
            jcxz @done
          @1:
            sar n, 1
            loop @1
          @done:
          end;
          sar := n;
        end;


  • Read from file hexadecimal number and change their representation style

    - by user576844
    I want to write a program changing the notation of all hexadecimal numbers found in an assembly source file from the traditional style (h suffix) to C style (0x prefix). I have started the coding part, but am not sure how I can detect the hexadecimal numbers, change the style, and save the result back to the file. This is what I have so far:

        ## MIPS program
                .data
        fin:    .ascii ""               # filename for input
        msg0:   .asciiz "aaaa"
        msg1:   .asciiz "Please enter the input file name:"
        buffer: .asciiz ""
                .text
        #-----------------------
                li   $v0, 4
                la   $a0, msg1
                syscall
                li   $v0, 8
                la   $a0, fin
                li   $a1, 21
                syscall
                jal  fileRead           # read from file
                move $s1, $v0           # $t0 = total number of bytes
                li   $t0, 0             # loop counter
        loop:   bge  $t0, $s1, end      # if end of file reached OR if there is an error in the file
                lb   $t5, buffer($t0)   # load next byte from file
                jal  checkhexa          # check for hexadecimal numbers
                addi $t0, $t0, 1        # increment loop counter
                j    loop
        end:    jal  output
                jal  fileClose
                li   $v0, 10
                syscall

        fileRead:                       # open file for reading
                li   $v0, 13            # system call for open file
                la   $a0, fin           # input file name
                li   $a1, 0             # flag for reading
                li   $a2, 0             # mode is ignored
                syscall                 # open a file
                move $s0, $v0           # save the file descriptor
                # reading from file just opened
                li   $v0, 14            # system call for reading from file
                move $a0, $s0           # file descriptor
                la   $a1, buffer        # address of buffer from which to read
                li   $a2, 100000        # hardcoded buffer length
                syscall                 # read from file
                jr   $ra

    Any help would be appreciated.


  • How can I know what this does?

    - by Dabor Troppe
    I got this piece of assembly code extracted from some piece of software, but unfortunately I don't know anything about assembler; the bits of assembler I touched were back on the Commodore Amiga with the 68000. Can anybody guide me on how I could understand this code without needing to learn assembler from scratch, or just tell me what it does? Is there any kind of "simulator" out there that I can run this on to see what it does?

        -[ObjSample Param1:andParam2:]:
        00000c79  pushl   %ebp
        00000c7a  movl    %esp,%ebp
        00000c7c  subl    $0x48,%esp
        00000c7f  movl    %ebx,0xf4(%ebp)
        00000c82  movl    %esi,0xf8(%ebp)
        00000c85  movl    %edi,0xfc(%ebp)
        00000c88  calll   0x00000c8d
        00000c8d  popl    %ebx
        00000c8e  cmpb    $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx)
        00000c95  jel     0x00000d47
        00000c9b  movb    $-[ObjSample delegate],_bDoOnce.26952-0xc8d(%ebx)
        00000ca2  movl    0x7dc0-0xc8d(%ebx),%eax
        00000ca8  movl    %eax,0x04(%esp)
        00000cac  movl    0x7df4-0xc8d(%ebx),%eax
        00000cb2  movl    %eax,(%esp)
        00000cb5  calll   _objc_msgSend
        00000cba  movl    0x7dbc-0xc8d(%ebx),%edx
        00000cc0  movl    %edx,0x04(%esp)
        00000cc4  movl    %eax,(%esp)
        00000cc7  calll   _objc_msgSend
        00000ccc  movl    %eax,0xe4(%ebp)
        00000ccf  movl    0x7db8-0xc8d(%ebx),%eax
        00000cd5  movl    %eax,0x04(%esp)
        00000cd9  movl    0xe4(%ebp),%eax
        00000cdc  movl    %eax,(%esp)
        00000cdf  calll   _objc_msgSend
        00000ce4  leal    (%eax,%eax),%edi
        00000ce7  movl    %edi,(%esp)
        00000cea  calll   _malloc
        00000cef  movl    %eax,%esi
        00000cf1  movl    %edi,0x08(%esp)
        00000cf5  movl    $-[ObjSample delegate],0x04(%esp)
        00000cfd  movl    %eax,(%esp)
        00000d00  calll   _memset
        00000d05  movl    $0x00000004,0x10(%esp)
        00000d0d  movl    %edi,0x0c(%esp)
        00000d11  movl    %esi,0x08(%esp)
        00000d15  movl    0x7db4-0xc8d(%ebx),%eax
        00000d1b  movl    %eax,0x04(%esp)
        00000d1f  movl    0xe4(%ebp),%eax
        00000d22  movl    %eax,(%esp)
        00000d25  calll   _objc_msgSend
        00000d2a  xorl    %edx,%edx
        00000d2c  movl    %edi,%eax
        00000d2e  shrl    $0x03,%eax
        00000d31  jmp     0x00000d34
        00000d33  incl    %edx
        00000d34  cmpl    %edx,%eax
        00000d36  ja      0x00000d33
        00000d38  movl    %esi,(%esp)
        00000d3b  calll   _free
        00000d40  movb    $0x01,_isAuthenticated-0xc8d(%ebx)
        00000d47  movzbl  _isAuthenticated-0xc8d(%ebx),%eax
        00000d4e  movl    0xf4(%ebp),%ebx
        00000d51  movl    0xf8(%ebp),%esi
        00000d54  movl    0xfc(%ebp),%edi
        00000d57  leave
        00000d58  ret


  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.)

    Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time; more than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.

    So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the Tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.)

    The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!).

    So far then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's the communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.


  • using in-line asm to write a for loop with 2 comparisons

    - by aCuria
    I want to convert the for loop in the following code into assembly, but I am not sure how to start. An explanation of how to do it and why it works would be appreciated. I am using VS2010, C++, writing for the x86. The code is as follows:

        for (n = 0; norm2 < 4.0 && n < N; ++n)
        {
            __asm {
                /// a*a - b*b + x
                fld  a               // a
                fmul st(0), st(0)    // aa
                fld  b               // b aa
                fmul st(0), st(0)    // bb aa
                fsub                 // (aa-bb)   st(0) - st(1)
                fld  x               // x (aa-bb)
                fadd                 // (aa-bb+x)
                /// 2.0*a*b + y;
                fld  d               // d (aa-bb+x)
                fld  a               // d a (aa-bb+x)
                fmul                 // ad (aa-bb+x)
                fld  b               // b ad (aa-bb+x)
                fmul                 // abd (aa-bb+x)
                fld  y               // y adb (aa-bb+x)
                fadd                 // b:(adb+y) a:(aa-bb+x)
                fld  st(0)           // b b:(adb+y) a:(aa-bb+x)
                fmul st(0), st(0)    // bb b:(adb+y) a:(aa-bb+x)
                fld  st(2)           // a bb b:(adb+y) a:(aa-bb+x)
                fmul st(0), st(0)    // aa bb b:(adb+y) a:(aa-bb+x)
                fadd                 // aa+bb b:(adb+y) a:(aa-bb+x)
                fstp norm2           // store aa+bb to norm2, st(0) is popped
                fstp b
                fstp a
            }
        }


  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:

        - Eliminates web application server dependencies on the Global Assembly Cache.
        - Allows me (the developer) to have control of what goes on inside my application.
        - Enables the application to be deployed as a "package".
        - Removes the application deployment burden from the server administrators.

    Now, here are my questions:

        - From a security perspective, what are the advantages of using the GAC vs. bin-deployment?
        - Is it possible to host multiple versions of the same DLL in the GAC?
        - Has anyone run into similar restrictions?


  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat, in order to suit Eclipse better):

        product
        |-- parent
        |-- core
        |-- opt
        |-- all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies.

    I am now trying to make the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar.

    However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. This is also true for the javadoc plugin, which likewise aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks.


  • maven assemblies. Putting each dependency with transitive dependencies in own directory?

    - by jr
    I have a maven project which consists of a few modules. It is to be deployed on a client machine, will involve installing Tomcat, and will make use of NSIS for the installer. There is a separate application which monitors Tomcat and can restart it, perform updates, etc. So I have the modules set up as follows:

        project
        +-- client         (all code, handlers, for the war)
        +-- client-common  (shared code, shared between monitor and client)
        +-- client-web     (the war; basically just has applicationContext, web.xml, etc.)
        +-- monitor        (the monitor application jar; uses wrapper to run)

    So, I need to create an installer. I was planning on creating another module which would be the installer. This is where I would have the Tomcat directory, and I'd like Maven to "assemble" everything and then run NSIS so I can create the final installer. However, I need to have the monitor jar file in a directory and all of the monitor's dependencies in a lib/ directory. The final directory structure should be:

        project-installer-directory/monitor/monitor-version.jar
        project-installer-directory/monitor/lib/monitor-dep-1.jar
        project-installer-directory/monitor/lib/monitor-dep-2.jar
        project-installer-directory/monitor/lib/monitor-dep-3.jar
        project-installer-directory/webapps/client-web.war

    In the client-web\WEB-INF\lib directory we will have all of client-web's dependencies after it is exploded. That works; I have the .war file. What I am having problems with is getting the monitor module's dependencies independent of the dependencies of the client-web module. I tried to just create the installer module and declare the monitor and client-web modules as dependencies, but when I use dependency:copy-dependencies it gives me everything, which is not what I want. I'm leaning towards creating a new module called monitor-assembly or something, to give me a zip file which contains the directory format I need, but that is yet another module. Can someone please help me with the correct way to accomplish this? Thanks!


  • Handling Types Defined in Plug-ins That Are No Longer Available

    - by Chris
    I am developing a .NET Framework application that allows users to maintain and save "projects". A project can consist of components whose types are defined in the assemblies of the framework itself and/or in third-party assemblies that will be made available to the framework via a yet-to-be-built plug-in architecture. When a project is saved, it is simply binary-serialised to file. Projects are portable, so multiple users can load the same project into their own instances of the framework (just as different users may open the same MS Word document in their own local copies of MS Word). What's more, the plug-ins available to one user's framework might not be available to another's. I need some way of ensuring that when a user attempts to open (i.e. deserialise) a project that includes a type whose defining assembly cannot be found (either because of a framework version incompatibility or the absence of a plug-in), the project still opens, but the offending type is somehow substituted or omitted. Trouble is, the research I've done to date does not even hint at a suitable approach. Any ideas would be much appreciated, thanks.
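
    For what it's worth, one direction to investigate (a hedged sketch, not a known-good solution): BinaryFormatter lets you hook type resolution with a custom SerializationBinder, so a missing plug-in type can be mapped to a placeholder. MissingComponent here is a hypothetical stand-in; a real one would need to implement ISerializable with a permissive (info, context) constructor so it can absorb whatever fields the stream contains:

        using System;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        class PluginTolerantBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                // Resolve normally when the defining assembly is present.
                Type resolved = Type.GetType(typeName + ", " + assemblyName, false);

                // Otherwise substitute the placeholder so the rest of the
                // project graph can still be deserialised.
                return resolved ?? typeof(MissingComponent);
            }
        }

        // Hypothetical placeholder; see the caveat about ISerializable above.
        [Serializable]
        class MissingComponent { }

        // Usage sketch:
        //   var formatter = new BinaryFormatter { Binder = new PluginTolerantBinder() };
        //   var project = formatter.Deserialize(stream);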


  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following (repeated 13 times with minor differences in the constants):

        test eax,1000h
        jne  0064F5EC
        or   edx,1000000h
        ...

    I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible.

        - Can I use some MMX/SSE instructions to process all bits at once?
        - Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.)
        - Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me?
        - Any other ideas on how to make it faster?
        - Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
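
    A branch-free alternative that answers the "process all bits at once" line of thought is the classic bit-spreading sequence (the "part1by1" half of Morton interleaving). A minimal sketch, written here in C# but with the same four masking steps working verbatim in C on a 32-bit unsigned integer:

        static uint SpreadBits(uint x)
        {
            x &= 0x00001FFF;                   // keep the 13 input bits
            x = (x | (x << 8)) & 0x00FF00FFu;  // move bits 8-12 up to 16-20
            x = (x | (x << 4)) & 0x0F0F0F0Fu;  // spread nibbles 8 bits apart
            x = (x | (x << 2)) & 0x33333333u;  // spread bit pairs 4 bits apart
            x = (x | (x << 1)) & 0x55555555u;  // spread single bits 2 bits apart
            return x;                          // bit i of the input is now bit 2*i
        }

    The inverse transformation (compacting every second bit back together) is the same sequence run in reverse order with right shifts.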

