Search Results

Search found 5543 results on 222 pages for 'legacy terms'.


  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains only static methods. Simplified example:

        class ABInterface {
            static methodA();
            static methodB();
            ...
            static methodZ();
        };

    The interface is used by component A so that its various methods can call ABInterface::methodA() in order to prepare some input data and then invoke the appropriate functions within component B. Now we are trying to redesign this interface for several reasons:

    - Extending our unit test coverage: we need to break this dependency between the components, and stubs/mocks are to be introduced.
    - The interface between these components has diverged from the original design (i.e., many newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system, and we want to avoid leaving many test-only artifacts in the production code. Performance is very important, and there should be no (or very minimal) degradation after the redesign. The code is OO in C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?
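
    A minimal sketch of one standard approach (names invented; shown in C# although the question's code is C++, since the shape carries over directly): put an abstract interface in front of the static facade, let a thin production adapter forward to the existing statics, and hand component A a stub or mock in tests.

        // Legacy static facade, standing in for the existing ABInterface.
        public static class ABInterface
        {
            public static void MethodA() { /* legacy implementation */ }
            public static void MethodB() { /* legacy implementation */ }
        }

        // The new seam: component A depends on this abstraction only.
        public interface IComponentB
        {
            void MethodA();
            void MethodB();
        }

        // Production adapter: forwards to the legacy statics, so runtime
        // behaviour (and the performance profile) is unchanged.
        public class ComponentBAdapter : IComponentB
        {
            public void MethodA() { ABInterface.MethodA(); }
            public void MethodB() { ABInterface.MethodB(); }
        }

        // Component A receives the dependency and is now unit-testable
        // with a mock IComponentB.
        public class ComponentA
        {
            private readonly IComponentB _b;
            public ComponentA(IComponentB b) { _b = b; }

            public void DoWork()
            {
                // ... prepare input data, then call into component B ...
                _b.MethodA();
            }
        }

    In C++ the equivalent is an abstract base class with pure virtual functions; the cost is one virtual dispatch per inter-component call, usually negligible next to the work the calls do, and avoidable in hot paths by templating component A on the interface type.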

    Read the article

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code. Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with, due to code dating back to 1984 or earlier being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself. Unfortunately the language this is written in doesn't have any ready-made refactoring tools, which makes it harder to push 'refactor now to increase productivity later', because short-term wins are the only wins paid attention to here... Has anyone else experienced this issue, whereby everybody agrees that we cannot keep adding new GOTOs that jump 2000 lines to a random section, but analysts continually insist on doing it 'just this one time' and management approves it? tl;dr: How can one address the issue of developers being pressured (forced) to continually add GOTO statements (by 'add', I mean GOTOs that jump to random sections many lines away) because it 'gets the feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

    Read the article

  • In the Aggregate: How Will We Maintain Legacy Systems?

    - by Jim G.
    NEW YORK - With a blast that made skyscrapers tremble, an 83-year-old steam pipe sent a powerful message that the miles of tubes, wires and iron beneath New York and other U.S. cities are getting older and could become dangerously unstable. (July 2007 story about a burst steam pipe in Manhattan)

    We've heard about software rot and technical debt. And we've heard from the likes of:

    - "Uncle Bob" Martin, who warned us about "the consequences of making a mess".
    - Michael C. Feathers, who gave us guidance in 'Working Effectively with Legacy Code'.

    So certainly the software engineering community is aware of these issues. But I feel like our aggregate society does not appreciate how these issues can plague working systems and applications. As Steve McConnell notes: "...Unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it." If this is true, and I believe that it is, then I fear that governments and businesses may defer regular maintenance and fortification against hackers until it is too late. [Much like NYC and the steam pipes.] My question: Do you share my concern? And if so, is there a way that we can avoid the software equivalent of NYC and the steam pipes?

    Read the article

  • How to make a legacy system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system with a central SQL Server database and possibly hundreds of client installations around the world, in different time zones. The application already interacts with the database using only the database server's date/time: every time stamp saved, in the database and on the client machines, is the database server's date/time at the moment it happened, never the client machine's time. So, when a client is about to display the date/time of something (such as a transaction) coming from this database, it needs to show the date/time converted to the local time zone.

    This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure such a thing exists, though. If it does, that would be perfect; if not, I need to figure out another way. This Delphi system (multiple projects) uses the SQL Server database through ADO components, VCL data-aware controls, and QuickReports (using data sources). So there are many places where the data goes directly from the database query to rendering on the screen, without any code in between that puts the data on the screen. In the end, when and how should I get the properly converted time? What is the proper way to ensure that I handle dates and times correctly in a legacy application?
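
    The usual pattern, sketched below, is to keep stored timestamps in UTC and convert only at the display edge. This is shown in C# with the .NET TimeZoneInfo API because the pattern is stack-agnostic; the helper name is invented, and Delphi and SQL Server have equivalents (for instance, SQL Server 2008 and later offer the datetimeoffset type and TODATETIMEOFFSET for server-side conversion).

        using System;

        static class DisplayTime
        {
            // Hypothetical helper: convert a UTC timestamp read from the
            // database into the local time of the machine displaying it.
            public static DateTime ToLocal(DateTime utcFromDb)
            {
                // Values read via data access layers often come back as
                // DateTimeKind.Unspecified, so mark the kind explicitly.
                DateTime utc = DateTime.SpecifyKind(utcFromDb, DateTimeKind.Utc);
                return TimeZoneInfo.ConvertTimeFromUtc(utc, TimeZoneInfo.Local);
            }
        }

    For the data-aware controls that render query results directly, the conversion has to happen either in the query itself (server-side) or in a formatting/display event, since no application code sits between the query and the screen.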

    Read the article

  • How to justify rewriting/revamping legacy software in a business case?

    - by sxthomson
    I work for a great little software company which makes good revenue from our main software package. The problem for me is that it's almost unmaintainable. It's written in Delphi 7 (it has been upgraded through various versions over time) and has been worked on by a lot of developers over the past 20 or so years. The software lacks any meaningful architecture: there's no object orientation whatsoever, horrible amounts of cyclic dependencies, and an over-reliance on global variables, to name just a few things. Another huge thing for me is that Delphi 7 does NOT support 64-bit. The problem is that my management team don't care about technical things; they want to know why they should care. Obviously that's expected, so what I'm asking for here is some guidance, or tales, or pitfalls about this kind of thing. One thing I would love to cover is the length of time taken to debug/write a feature in "legacy" code versus coherent, well-structured OO code. Does anyone know of any blog posts or the like where this is talked about? For us in the company this is a huge reason: despite being decent developers, we feel like writing a new feature is just piling more rubbish on top. On top of that, even for me, who has a decent understanding of the code, changing things is infuriating - a small change can have a ridiculous domino effect. Does anyone have any experiences they'd like to share?

    Read the article

  • How to fix legacy code that uses <string.h> unsafely?

    - by Snowbody
    We've got a bunch of legacy code, written in straight C (some of it K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Political constraints dictate that the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (sprintf(), strcpy(), etc.). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warnings recommend that we move to sprintf_s(), strcpy_s(), etc. However, these are Microsoft-created (and proprietary) functions and aren't available on (say) gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string library, I only want to go over the code once, and I'd rather not switch to C++ if we can help it.

    Read the article

  • How do you quantify competency in terms of time (years)?

    - by o.k.w
    While looking for a job via agencies some time ago, I kept getting questions from the recruitment agents, or in the application forms, like: how many years of experience do you have in Oracle, ASP.NET, J2EE, etc., etc., etc.? At first I answered faithfully: 5 years, 7 years, 2 years, none, a few months, and so on. Then I thought: I could be doing something shallow for 7 years and not be competent at it, simply because I was just doing minor support for a legacy system running SQL 2000 which required 10 days of my time in each of the past 7 years. Eventually I declined to answer such questions. I wonder why they are still asked. Anyone who just graduated with a computer science degree can claim 3 to 4 years of experience in anything they 'touched' in the curriculum, which to me can be equivalent to zero or 10 years depending on how you look at it. It might have held true decades ago, when programming and IT skills were of a very different nature. I might be wrong, but I really doubt 'time' or 'years' is a good gauge of competency or experience anymore. Any opinions or rebuttals are welcome!

    Read the article

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices, either with or without the quotes, you end up with results like this: none of these pages resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our Creative Commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements, or b) our own internal links to the question, but not the actual question itself. The actual page with the title "Custom UIView and UIViewController best practices?" does exist at this URL: http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices ...and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices?

    - We know that Google contains this page in its index.
    - Our search terms match the title of the question.
    - Stack Overflow has much higher PageRank than the other sites that are mirroring this question under Creative Commons.

    I don't get it. What are we doing wrong here?

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been taught OOP as the standard way to develop software in university, and they are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes which only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code:

    - It's easy to start thinking of everything in terms of the database.
    - Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented operation when you think about request handlers, threads, processes, CPU cores and CPU operations...

    I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?
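
    One concrete before/after that captures the shift (a hedged sketch with invented names, in C#): move behaviour onto the object that owns the data, so callers ask the object to do something rather than doing something to it.

        // Procedural style: the "Manager" reaches into a passive data bag.
        public class Invoice
        {
            public decimal Subtotal;
            public decimal TaxRate;
        }

        public static class InvoiceManager
        {
            public static decimal GetTotal(Invoice i)
            {
                return i.Subtotal * (1 + i.TaxRate);
            }
        }

        // Object-oriented style: the data and the behaviour that needs it
        // live together, so the fields no longer have to be exposed at all.
        public class InvoiceOO
        {
            private readonly decimal _subtotal;
            private readonly decimal _taxRate;

            public InvoiceOO(decimal subtotal, decimal taxRate)
            {
                _subtotal = subtotal;
                _taxRate = taxRate;
            }

            public decimal Total
            {
                get { return _subtotal * (1 + _taxRate); }
            }
        }

    A useful smell test: if a class named *Manager grows while the classes it manages stay as bags of public fields, behaviour is probably on the wrong side of the boundary.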

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far, so I would love to hear any and all opinions on this. However, I'll be honest and say that I have been thinking of this in terms of what is most profitable. I know how this may sound either way, but this is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed, but I'm going into this with the thought of making something out of it, both financially and for my own interest.

    I know that iOS gets a lot of attention on this front, but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether in the actual development process and programming (though I've heard conflicting reports on this, such as how easy or difficult it is to handle different screen resolutions across devices) or in the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, might that be the best place to start, in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or of differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly, but at least in concept)? Would it be that despite fewer users on the platform, there's more exposure due to less competition, so that may translate to better success at sales? Plus, I know MS is in it for the long haul, so I'm not too fearful of something like webOS going away. Obviously RIM isn't very popular nowadays, but I read a recent article that says BlackBerry actually has the apps that make the most money; any thoughts on that? http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/ Again, this is all I've heard or known about, so if there's anything to add or correct here, please do.

    In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now, and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for, and I'm open to trying an Android, but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy; I'm so close to just getting an iPhone 5. When I say this has affected my phone choice, I mean I'd want to carry the apps I write with me, so that I can pull my phone out to show people without having to carry around a second device. That's why I'd like my personal phone to match the main platform I'm developing for. Of course, I will likely expand to other platforms if I achieve any decent success, but the one I target now would serve well as the phone I carry around and use as a marketing tool, in a sense, showing people my apps if the opportunity presents itself.

    So what's the best mobile platform to choose, especially in regard to being the most lucrative? As said previously, this will influence my personal phone choice greatly. Thanks in advance, and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out, but making some revenue is key among them. For some background, I have done software development and know programming language concepts, so I'm not entirely new to this, and I do get the notion of becoming familiar with the underlying concepts so the skill carries across a variety of languages; I'm currently just having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • Marshalling a big-endian byte collection into a struct in order to pull out values

    - by Pat
    There is an insightful question about reading a C/C++ data structure in C# from a byte array, but I cannot get the code to work for my collection of big-endian (network byte order) bytes. (EDIT: Note that my real struct has more than just one field.) Is there a way to marshal the bytes into a big-endian version of the structure and then pull out the values in the endianness of the framework (that of the host, which is usually little-endian)? This should summarize what I'm looking for (LE = little-endian, BE = big-endian):

        void Main()
        {
            var leBytes = new byte[] {1, 0};
            var beBytes = new byte[] {0, 1};
            Foo fooLe = ByteArrayToStructure<Foo>(leBytes);
            Foo fooBe = ByteArrayToStructureBigEndian<Foo>(beBytes);
            Assert.AreEqual(fooLe, fooBe);
        }

        [StructLayout(LayoutKind.Explicit, Size=2)]
        public struct Foo
        {
            [FieldOffset(0)]
            public ushort firstUshort;
        }

        T ByteArrayToStructure<T>(byte[] bytes) where T : struct
        {
            GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
            T stuff = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
            handle.Free();
            return stuff;
        }

        T ByteArrayToStructureBigEndian<T>(byte[] bytes) where T : struct
        {
            ???
        }
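
    One way to fill in the missing method (a sketch, assuming every field of T is a primitive such as ushort or int; nested structs and arrays would need recursion, and the input array is modified in place): reverse each field's bytes so the big-endian input matches the host's layout, then reuse the existing little-endian path.

        using System;
        using System.Reflection;
        using System.Runtime.InteropServices;

        T ByteArrayToStructureBigEndian<T>(byte[] bytes) where T : struct
        {
            // On a big-endian host the bytes already match; nothing to do.
            if (BitConverter.IsLittleEndian)
            {
                foreach (FieldInfo field in typeof(T).GetFields(
                             BindingFlags.Public | BindingFlags.Instance))
                {
                    // Swap each field's bytes in place using the offsets
                    // declared via [FieldOffset] / StructLayout.
                    int offset = Marshal.OffsetOf(typeof(T), field.Name).ToInt32();
                    int size = Marshal.SizeOf(field.FieldType);
                    if (size > 1)
                        Array.Reverse(bytes, offset, size);
                }
            }
            return ByteArrayToStructure<T>(bytes);
        }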

    Read the article

  • Why is the software world full of status codes?

    - by David V McKay
    Why did programmers ever start using status codes? I mean, I guess I could imagine this might be useful back in the days when a text string was an expensive resource. WAYYY back then. But even after we had megabytes of memory to work with, we continued to use them. What possible advantage could there be for obfuscating the meaning of an error message or status message behind a status code?

    Read the article

  • Converting C source to C++

    - by Barry Kelly
    How would you go about converting a reasonably large (300K), fairly mature C codebase to C++? The kind of C I have in mind is split into files roughly corresponding to modules (i.e., less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e., module-level) tests. I have in mind a general strategy:

    1. Compile everything in C++'s C subset and get that working.
    2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
    3. Convert the huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
    4. Now, approach the project as an ill-factored OO application: write unit tests where dependencies are tractable, and decompose into separate classes where they are not. The goal is to move from one working program to another at each transformation.

    Obviously, this would be quite a bit of work. Are there any case studies or war stories out there on this kind of translation? Alternative strategies? Other useful advice? Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option. Note 2: the source is nearly 20 years old and has perhaps 30% code churn ((lines modified + added) / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals is to increase maintainability. [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. The point of adding this condition is to weed out the "leave it in C" answers.]

    Read the article

  • How to handle window closed in the middle of a long running operation gracefully?

    - by Marek
    We have the following method, called directly from the UI thread:

        void DoLengthyProcessing()
        {
            DoStuff();
            var items = DoMoreStuff();
            // do even more stuff - 200 lines of code trimmed
            this.someControl.PrepareForBigThing(); // someControl is a big user control
            // additional 100 lines of code that access this.someControl
            this.someControl.Finish(items);
        }

    Many of the called methods call Application.DoEvents(), and they do so many times (do not ask me why; this is black magic written by black magic programmers and it cannot be changed because everyone is scared of what the impact would be), and there is also an operation running on a background thread involved in the processing. As a result, the window is not fully unresponsive and can be closed manually during the processing. The Dispose method of the form "releases" the someControl variable by setting it to null. Consequently, if the user closes the window during the lengthy process, a null reference exception is thrown. How do we handle this gracefully, without just catching and logging the exception caused by disposal? The options we have considered:

    - Assigning the someControl instance to a temporary variable at the beginning of the method does not help, because the control contains many subcontrols with a similar disposal scheme: disposal sets them to null too, and this causes null reference exceptions in other places.
    - Putting if (this.IsDisposed) return; before every access to the someControl variable makes the already nasty long method even longer and unreadable.
    - In the Closing event, just record that a close was requested and only hide the window, then dispose of it at the end of the lengthy operation. This is not very viable because many other methods are involved (think 20K LOC for a single control) that would need to handle this mechanism as well.

    How do we most effectively handle window disposal (by user action) in the middle of this kind of processing?
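
    For what it's worth, the third option can be kept fairly contained when the form itself intercepts the close (a sketch with invented names; it assumes the lengthy work is driven from this form): cancel the real close while work is in flight, hide the window instead, and close for real when the operation finishes, so the dispose path never runs mid-operation.

        using System;
        using System.Windows.Forms;

        public class LengthyForm : Form
        {
            private bool _working;        // true while the lengthy operation runs
            private bool _closeRequested; // user tried to close during the work

            protected override void OnFormClosing(FormClosingEventArgs e)
            {
                if (_working && e.CloseReason == CloseReason.UserClosing)
                {
                    // Don't let controls be disposed mid-operation:
                    // pretend to close by hiding the window.
                    e.Cancel = true;
                    _closeRequested = true;
                    Hide();
                }
                base.OnFormClosing(e);
            }

            public void DoLengthyProcessing()
            {
                _working = true;
                try
                {
                    // ... the existing long-running code goes here ...
                }
                finally
                {
                    _working = false;
                    if (_closeRequested)
                        Close(); // now the normal dispose path runs safely
                }
            }
        }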

    Read the article

  • How to take a collection of bytes and pull typed values out of it?

    - by Pat
    Say I have a collection of bytes:

        var bytes = new byte[] {0, 1, 2, 3, 4, 5, 6, 7};

    and I want to pull out a defined value from the bytes as a managed type, e.g. a ushort. What is a simple way to define what types reside at what location in the collection, and to pull out those values? One (ugly) way is to use System.BitConverter and a Queue or a byte[] with an index, and simply iterate through, e.g.:

        int index = 0;
        ushort first = System.BitConverter.ToUInt16(bytes, index);
        index += 2; // size of a ushort
        int second = System.BitConverter.ToInt32(bytes, index);
        index += 4;
        ...

    This method gets very, very tedious when you deal with a lot of these structures! I know that there is System.Runtime.InteropServices.StructLayoutAttribute, which allows me to define the locations of types inside a struct or class, but there doesn't seem to be a way to import the collection of bytes into that struct. If I could somehow overlay the struct on the collection of bytes and pull out the values, that would be ideal. E.g.:

        Foo foo = (Foo)bytes; // doesn't work because I'd need to implement the implicit operator
        ushort first = foo.first;
        int second = foo.second;
        ...

        [StructLayout(LayoutKind.Explicit, Size=FOO_SIZE)]
        public struct Foo
        {
            [FieldOffset(0)]
            public ushort first;

            [FieldOffset(2)]
            public int second;
        }

    Any thoughts on how to achieve this?
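
    The overlay can be done by pinning the array and letting the marshaller reinterpret that memory using the FieldOffset attributes (a sketch; with the Foo above, FOO_SIZE would be a const equal to 6):

        using System.Runtime.InteropServices;

        static T ByteArrayToStructure<T>(byte[] bytes) where T : struct
        {
            // Pin the managed array so it has a stable address, then ask the
            // marshaller to interpret that memory as T via its FieldOffsets.
            GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
            try
            {
                return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
            }
            finally
            {
                handle.Free();
            }
        }

        // Usage:
        //   Foo foo = ByteArrayToStructure<Foo>(bytes);
        //   ushort first = foo.first;  // bytes 0-1
        //   int second   = foo.second; // bytes 2-5

    Note this reads the bytes in host (usually little-endian) order; for network byte order, see the big-endian variant discussed in the marshalling question above.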

    Read the article

  • How can a web app identify, via code, whether a click came from another web app?

    - by Diego
    Hi, we have a web application that users use to take online reports from our ERP system's data, and we have another web application that is used by our teachers and employees. We can't change the ERP web app because it's a closed DLL, so we built some extended functionality in our custom internal web app, and we want to put this functionality on the "menu" of the ERP web app. I need to integrate the two applications in the following way: when I click the menu item in the ERP web app, I want our internal web app to verify that the click came from the ERP web app and was not just typed into the URL bar. Is this possible?
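
    The usual low-tech check is the HTTP Referer header (a sketch in ASP.NET WebForms, which is an assumption since the question doesn't name the internal app's stack; the host name is invented). Note the header can be spoofed or stripped by proxies, so a signed token appended to the menu link's URL is the more robust design:

        // Somewhere early in the internal app's page handler (System.Web assumed):
        Uri referrer = Request.UrlReferrer; // null if no Referer header was sent
        bool cameFromErp = referrer != null
            && referrer.Host.Equals("erp.example.local",   // hypothetical ERP host
                                    StringComparison.OrdinalIgnoreCase);
        if (!cameFromErp)
        {
            Response.StatusCode = 403; // reject direct/typed-in visits
            Response.End();
        }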

    Read the article

  • Use of malloc() and free() in C++

    - by Matt H
    Is there any reason to use malloc and free in C++ over their more modern counterparts? Occasionally I see this, and I can't see why some people do it. Are there any advantages/disadvantages, or is there no real difference, except that it's just better to use C++ constructs in C++?

    Read the article

  • C++0x implementation guesstimates?

    - by dsimcha
    The C++0x standard is on its way to being complete. Until now, I've dabbled in C++, but avoided learning it thoroughly because it seems like it's missing a lot of modern features that I've been spoiled by in other languages. However, I'd be very interested in C++0x, which addresses a lot of my complaints. Any guesstimates, after the standard is ratified, as to how long it will take for major compiler vendors to provide reasonably complete, production-quality implementations? Will it happen soon enough to reverse the decline in C++'s popularity, or is it too little, too late? Do you believe that C++0x will become "the C++" within a few years, or do you believe that most people will stick to the earlier standard in practice and C++0x will be somewhat of a bastard stepchild, kind of like C99?

    Read the article

  • Pre-written SQL queries to be converted to Rails-style models

    - by Hoornet
    I am a Rails newbie and would really appreciate it if someone converted these SQL queries into complete Rails models. I know it's a lot to ask, but I can't just use find_by_sql for all of them. Or can I? These are the queries (they run on MS SQL):

        1) SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND
           FROM STANJA_NA_DAN_POSTAVKA
           INNER JOIN STANJA_NA_DAN ON (STANJA_NA_DAN.STA_ID = STANJA_NA_DAN_POSTAVKA.STA_ID)
           WHERE ((OSE_ID = 10)
             AND (STANJA_NA_DAN_POSTAVKA.STP_DATE >= {d '2010-03-30'})
             AND (STANJA_NA_DAN_POSTAVKA.STP_DATE <= {d '2010-03-30'}))

        2) SELECT ZIGI_OBDELANI.OSE_ID, ZIGI_OBDELANI.DOG_ID AS DOG_ID,
                  ZIGI_OBDELANI.ZIO_DATUM AS DATUM,
                  ZIGI_PRICETEK.ZIG_TIME_D AS ZIG_PRICETEK,
                  ZIGI_KONEC.ZIG_TIME_D AS ZIG_KONEC
           FROM (ZIGI_OBDELANI
           INNER JOIN ZIGI ZIGI_PRICETEK ON ZIGI_OBDELANI.ZIG_ID_PRICETEK = ZIGI_PRICETEK.ZIG_ID)
           INNER JOIN ZIGI ZIGI_KONEC ON ZIGI_OBDELANI.ZIG_ID_KONEC = ZIGI_KONEC.ZIG_ID
           WHERE (ZIGI_OBDELANI.OSE_ID = 10)
             AND (ZIGI_OBDELANI.ZIO_DATUM >= {d '2010-03-30'})
             AND (ZIGI_OBDELANI.ZIO_DATUM <= {d '2010-03-30'})
             AND (ZIGI_PRICETEK.ZIG_VELJAVEN < 0)
             AND (ZIGI_KONEC.ZIG_VELJAVEN < 0)
           ORDER BY ZIGI_OBDELANI.OSE_ID, ZIGI_PRICETEK.ZIG_TIME ASC

        3) SELECT STA_ID, SUM(STP_TIME) AS SUM_STP_TIME, COUNT(STA_ID)
           FROM STANJA_NA_DAN_POSTAVKA
           WHERE ((STP_DATE >= {d '2010-03-30'}) AND (STP_DATE <= {d '2010-03-30'})
             AND (STA_ID = 3) AND (OSE_ID = 10))
           GROUP BY STA_ID

        4) SELECT DATUM, TDN_ID, TDN_OPIS, URN_OPIS, MOZNI_PROBLEMI, PRIHOD, ODHOD,
                  OBVEZNOST, ZAKLJUCEVANJE_DATUM
           FROM OBRACUNAJ_DAN
           WHERE ((OSE_ID = 10) AND (DATUM >= {d '2010-02-28'}) AND (DATUM <= {d '2010-03-30'}))
           ORDER BY DATUM

    These queries report daily working hours, and I got them as-is. I also got the database with them, which (as you can see from the SQL) does not follow Rails conventions. Two notes: 1) Things like STP_DATE >= {d '2010-03-30'} are of course dates (in Slovenian date notation) and will be replaced with variables, so that the user can choose a date-from and a date-to. 2) All of this data will be shown on the same page in one table, so maybe it should all go in one model? Or many? So, can someone help me? This is for my work, it's my first project, I'm a Rails newbie, and the bosses are getting impatient (quite loud, actually). Thank you very, very much!

    Read the article

  • Converting a table-based layout into a div/CSS-based one

    - by nimo9367
    I'm supposed to rewrite the UI for a rather large web application. The thing is that the layout is completely based on tables, and if I could somehow, semi-automatically, convert the tables into divs, it would save me a huge amount of time. What are the best practices when doing something like this? Is this a good idea at all? The application uses layout files (containing something similar to helpers) that are parsed into HTML at runtime, and the application itself also outputs HTML in specific places. So the work will consist of converting these helpers and the HTML-output code within the application.

    Read the article

  • Docbook: Centralized glossary, where each document includes only terms which appear in it?

    - by DanM
    Trying to figure out if this (or something similar) is possible. I'm working with a collection of technical documents, all written in DocBook. The documents each contain many acronyms, technical terms and other jargon, so we need to include a glossary with each of them. The ideal situation would be this: I have a central glossary.xml file which contains a glossentry item (or similar) for each such term; then, each of the documents uses that glossary file, but only prints out the terms which appear IN that document. So, each document has its own glossary printed at the end, but the actual glossary entries are stored centrally. Is that doable?
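
    For what it's worth, the DocBook XSL stylesheets support this workflow through the glossary.collection parameter (sketched here from the stylesheet documentation; the file names are hypothetical): each document carries an empty automatic glossary, and only the terms actually referenced in that document are copied in from the shared file.

        <!-- In each document: an empty glossary that the stylesheets fill in. -->
        <glossary role="auto"/>

        <!-- In the text, mark up terms where they occur: -->
        <para>Data is retrieved over the <glossterm>API</glossterm>.</para>

    At build time the shared glossary file is named via a stylesheet parameter, e.g.:

        xsltproc --stringparam glossary.collection /path/to/glossary.xml \
                 --param glossterm.auto.link 1 \
                 docbook-xsl/html/docbook.xsl mydoc.xml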

    Read the article

  • Just one client bound to address and port: does it make a difference broadcast versus unicast in terms of overhead?

    - by chrisapotek
    Scenario: I am implementing failover for a network node, so my idea is to have the master node listen on a broadcast IP address and port. If the master node fails, another failover node will start listening on this broadcast address (and port) and take over. Question: My concern is that I will be using a broadcast IP address for just a single node: the master. The failover node only binds if the master fails; in other words, almost never. In terms of network traffic overhead, is it bad to talk to a single node through a broadcast address, or is the network somehow smart enough to know that nobody else is listening to this broadcast address, treating it as unicast in terms of overhead? My concern is that I will be flooding my network with packets sent to this broadcast address even though I am really talking to just a single node (the master). But I can't use unicast, because the failover node has to be able to pick up the master's stream quickly and transparently if the master fails.
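
    For reference, the listener each node would run looks roughly like this (a C# sketch; the port is invented, and ReuseAddress is set so a takeover node can bind without waiting for the old socket to drain):

        using System.Net;
        using System.Net.Sockets;

        // Bind to the shared port so datagrams sent to the broadcast
        // address:port pair are received once this node is the listener.
        var listener = new UdpClient();
        listener.Client.SetSocketOption(SocketOptionLevel.Socket,
                                        SocketOptionName.ReuseAddress, true);
        listener.Client.Bind(new IPEndPoint(IPAddress.Any, 5000)); // hypothetical port

        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] datagram = listener.Receive(ref remote); // blocks until a packet arrives

    The overhead question itself is decided by the network, not the code: broadcast frames are flooded to every port in the L2 segment regardless of how many nodes are bound to the socket.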

    Read the article

  • What is New in ASP.NET 4.0 Code Access Security

    - by Xiaohong
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications when hosting multiple Web sites. ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET ships with predefined trust levels and policy files that you can assign to applications, and you can also assign custom trust levels and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent one website from affecting or harming another. As the .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security has introduced several changes and improvements.

    The main change in ASP.NET 4.0 CAS

    In ASP.NET 4.0 partial trust applications, the application domain can have a default partial trust permission set, as opposed to being full-trust. The permission set name is defined in the new permissionSetName attribute of <trust />, which is used to initialize the application domain. By default the permissionSetName attribute value is "ASP.Net", which is the name of the permission set you can find in all predefined partial trust configuration files:

        <trust level="Something" permissionSetName="ASP.Net" />

    This is the new ASP.NET 4.0 CAS model. For compatibility, ASP.NET 4.0 also supports the legacy CAS model, where the application domain still has the full trust permission set. You can specify the new legacyCasModel attribute on the <trust /> element to indicate whether the legacy CAS model is enabled. By default legacyCasModel is false, which means the new 4.0 CAS model is the default:

        <trust level="Something" legacyCasModel="true|false" />

    In the .NET FX 4.0 Config directory there are two sets of predefined partial trust config files, one per CAS model; the files named legacy.XYZ.config are for the legacy model:

        New CAS model:             Legacy CAS model:
        web_hightrust.config       legacy.web_hightrust.config
        web_mediumtrust.config     legacy.web_mediumtrust.config
        web_lowtrust.config        legacy.web_lowtrust.config
        web_minimaltrust.config    legacy.web_minimaltrust.config

    [Figure: the permission set granted to code for a partial trust application under the new CAS model, using predefined partial trust levels and policy files.]

    Benefits that come with the new CAS model:

    - You can lock down a machine by making all managed code no-execute by default (e.g., setting the MyComputer zone to have no managed execution code permissions), while it remains possible to configure ASP.NET web applications to run as either full trust or partial trust.
    - A UNC share doesn't require full trust with CASPOL at machine-level CAS policy.

    Side effect of the new CAS model: the processRequestInApplicationTrust attribute is deprecated, since the application domain always has a partial trust permission set in the new model.

    In the ASP.NET 4.0 legacy CAS model (or the ASP.NET 2.0 CAS model), even though you assign a partial trust level to an application, the application domain still has the full trust permission set. [Figure: the permission set granted to code for a partial trust application under the legacy CAS model.]

    What $AppDirUrl$, $CodeGen$ and $Gac$ represent:

    - $AppDirUrl$: the application's virtual root directory. This allows permissions to be applied to code located in the application's bin directory. For example, if a virtual directory is mapped to C:\YourWebApp, then $AppDirUrl$ equates to C:\YourWebApp.
    - $CodeGen$: the directory that contains dynamically generated assemblies (for example, the result of .aspx page compiles). This can be configured per application and defaults to %windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files. $CodeGen$ allows permissions to be applied to dynamically generated assemblies.
    - $Gac$: any assembly installed in the computer's global assembly cache (GAC). This allows permissions to be granted to strong-named assemblies loaded from the GAC by the Web application.

    The new customization of CAS policy in the ASP.NET 4.0 CAS model

    1. Define which named permission set in the partial trust configuration files is used. By default, the permission set assigned at application domain initialization time is the "ASP.Net" named permission set found in all predefined partial trust configuration files. However, ASP.NET 4.0 lets you set the permissionSetName attribute to pick which named permission set in a partial trust configuration file initializes the application domain. Example: add an "ASP.Net_2" named permission set to a partial trust configuration file:

        <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net_2">
          <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" />
          <IPermission class="ReflectionPermission" version="1" Flags="RestrictedMemberAccess" />
          <IPermission class="SecurityPermission" version="1" Flags="Execution, ControlThread, ControlPrincipal, RemotingConfiguration" />
        </PermissionSet>

    Then use the "ASP.Net_2" named permission set for the application domain:

        <trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net_2" />

    2. Define a custom set of full trust assemblies for an application. Using the new fullTrustAssemblies element, you can elevate a set of partial trust assemblies to full trust at the machine, site or application level:

        <fullTrustAssemblies>
          <add assemblyName="MyAssembly" version="1.1.2.3" publicKey="hex_char_representation_of_key_blob" />
        </fullTrustAssemblies>

    3. Define <CodeGroup /> policy in partial trust configuration files. The new CAS model retains the ability for developers to optionally define <CodeGroup /> elements with membership conditions and assigned permission sets. The specific restriction in the new model is that evaluating custom policies can produce only one of two outcomes: either an assembly is granted full trust, or it is granted the partial trust permission set currently associated with the running application domain. It is not possible to use custom policies to create additional custom partial trust permission sets. When parsing the partial trust configuration file:

    - Assemblies that match code groups associated with PermissionSet="FullTrust" run at full trust.
    - Assemblies that match code groups associated with PermissionSet="Nothing" result in a PolicyError being thrown from the CLR. This is acceptable, since it gives administrators a way to do a blanket deny of managed code, followed by selectively defining policy in a <CodeGroup /> that re-adds the assemblies allowed to run.
    - Assemblies that match code groups associated with other permission sets are interpreted to mean the assembly should run at the permission set of the appdomain. This means that even though a developer could syntactically define additional "flavors" of partial trust in an ASP.NET partial trust configuration file, those "flavors" are always ignored.

    Example: grant full trust in a <CodeGroup /> to strong-named assemblies in the partial trust config files:

        <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="Nothing">
          <IMembershipCondition class="AllMembershipCondition" version="1" />
          <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="FullTrust"
                     Name="My_Strong_Name"
                     Description="This code group grants signed code full trust.">
            <IMembershipCondition class="StrongNameMembershipCondition" version="1"
                                  PublicKeyBlob="hex_char_representation_of_key_blob" />
          </CodeGroup>
          <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">
            <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$AppDirUrl$/*" />
          </CodeGroup>
          <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">
            <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$CodeGen$/*" />
          </CodeGroup>
        </CodeGroup>

    4. Customize CAS policy at runtime. The new CAS model lets you customize CAS policy at runtime with a custom HostSecurityPolicyResolver that overrides the ASP.NET code access security policy. Example: use a custom host security policy resolver to resolve a partial trust web application's bin-folder MyTrustedAssembly.dll to full trust at runtime. You can create a custom host security policy resolver, compile it into a strong-named assembly MyCustomResolver.dll and deploy it in the GAC:

        public class MyCustomResolver : HostSecurityPolicyResolver
        {
            public override HostSecurityPolicyResults ResolvePolicy(Evidence evidence)
            {
                IEnumerator hostEvidence = evidence.GetHostEnumerator();
                while (hostEvidence.MoveNext())
                {
                    object hostEvidenceObject = hostEvidence.Current;
                    if (hostEvidenceObject is System.Security.Policy.Url)
                    {
                        string assemblyName = hostEvidenceObject.ToString();
                        if (assemblyName.Contains("MyTrustedAssembly.dll"))
                            return HostSecurityPolicyResults.FullTrust;
                    }
                }
                // default fall-through
                return HostSecurityPolicyResults.DefaultPolicy;
            }
        }

    Because ASP.NET accesses the custom HostSecurityPolicyResolver during application domain initialization, and a custom policy resolver requires full trust, you can also add the custom policy resolver to <fullTrustAssemblies />, or deploy it in the GAC. You also need to configure the custom HostSecurityPolicyResolver instance by adding the hostSecurityPolicyResolverType attribute to the <trust /> element:

        <trust level="Something" legacyCasModel="false"
               hostSecurityPolicyResolverType="MyCustomResolver, MyCustomResolver"
               permissionSetName="ASP.Net" />

    Note: if policy for an assembly is defined both in a <CodeGroup /> and via hostSecurityPolicyResolverType, hostSecurityPolicyResolverType wins. If an assembly is added to <fullTrustAssemblies />, it has full trust no matter what policy is in <CodeGroup /> or the hostSecurityPolicyResolverType.

    Other changes in ASP.NET 4.0 CAS

    - The new transparency model introduced in .NET Framework 4.0 is used. Dynamically compiled assemblies generated by ASP.NET change accordingly: in the new CAS model they are marked security-transparent level 2, using the Framework 4.0 security transparency rules, meaning partial trust code is treated as completely transparent with stricter enforcement; in the legacy CAS model they are marked security-transparent level 1, using the Framework 2.0 rules, for compatibility. Most ASP.NET product runtime assemblies are likewise marked security-transparent level 2, making code SecurityTransparent by default unless the SecurityCritical or SecuritySafeCritical attribute is specified. See "Security Changes in the .NET Framework 4" for more information about these security attributes.
    - Conditional APTCA is supported. If an assembly is marked with the conditional APTCA attribute to allow partially trusted callers, and you want to make the assembly both visible and accessible to partial trust code in your web application, you must add a reference to the assembly in the partialTrustVisibleAssemblies section:

        <partialTrustVisibleAssemblies>
          <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
        </partialTrustVisibleAssemblies>

      Most ASP.NET product runtime assemblies are also marked as conditional APTCA, to prevent use of ASP.NET APIs in partial trust environments such as WinForms or WPF UI controls hosted in Internet Explorer.

    Differences between the new CAS model and the legacy CAS model

    ASP.NET 4.0 legacy CAS model:

    - ASP.NET partial trust appdomains have the full trust permission set.
    - Multiple different permission sets in a single appdomain are allowed in ASP.NET partial trust configuration files.
    - Code groups and machine CAS policy are honored.
    - The processRequestInApplicationTrust attribute is still honored.

    Configuration for the legacy model:

        <trust level="Something" legacyCasModel="true"></trust>
        <partialTrustVisibleAssemblies>
          <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
        </partialTrustVisibleAssemblies>

    ASP.NET 4.0 new CAS model:

    - ASP.NET now runs in homogeneous application domains: only full trust, or the appdomain's partial trust grant set, are allowable permission sets. It is no longer possible to define arbitrary permission sets that get assigned to different assemblies. If an application currently depends on fine-tuning the partial trust permission set via the ASP.NET partial trust configuration file, this is no longer possible.
    - The processRequestInApplicationTrust attribute is deprecated.
    - Dynamically compiled assemblies output by ASP.NET build providers are explicitly marked as transparent.
    - ASP.NET partial trust grant sets are independent of any enterprise, machine, or user CAS policy levels.
    - There is a simplified model for locking down web servers so that only trusted managed web applications run. Machine policy that used to always grant full trust to managed code (based on membership conditions) can instead be configured using the new ASP.NET 4.0 full-trust assembly configuration section, which requires explicitly listing each assembly, as opposed to using membership conditions. Alternatively, the membership condition(s) used in machine policy can be re-defined in a <CodeGroup /> within ASP.NET's partial trust configuration file to grant full trust.

    Configuration for the new model:

        <trust level="Something" legacyCasModel="false"
               permissionSetName="ASP.Net"
               hostSecurityPolicyResolverType=".NET type string"></trust>
        <fullTrustAssemblies>
          <add assemblyName="MyAssembly" version="1.0.0.0" publicKey="hex_char_representation_of_key_blob" />
        </fullTrustAssemblies>
        <partialTrustVisibleAssemblies>
          <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />
        </partialTrustVisibleAssemblies>

    Hope this post is helpful for better understanding ASP.NET 4.0 CAS. Xiaohong Tang, ASP.NET QA Team
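
    As an aside on the hex_char_representation_of_key_blob placeholders used throughout: the public key blob of a strong-named assembly can be printed with the Strong Name tool that ships with the SDK, e.g.:

        rem Display the public key (and token) of a strong-named assembly
        sn -Tp MyAssembly.dll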

    Read the article
