Search Results

Search found 1234 results on 50 pages for 'language agnostic'.

Page 46/50

  • Should CSS be listed on your resume under Languages?

    - by Sandeepan Nath
    I have some doubts about whether CSS should be put under Languages or not. Wikipedia says Cascading Style Sheets (CSS) is a style sheet language... but do people write CSS under the Languages section of the resume, along with PHP, etc.? Similarly, what about HTML? I have some doubt here and I don't want to sound like someone who is not aware of the trends. Just to give an example, currently I have the following languages, frameworks, technologies, etc. listed under the "Technical Expertise" section of my resume - Technical Expertise * Languages - Proficient - PHP 5, JavaScript, HTML ?*, CSS ?*, Sass ?*. Beginner - Linux Bash. * Databases - MySQL 5. * Technologies - AJAX. * Frameworks/Libraries - Symfony, jQuery. * CMSes - WordPress. Although my domain is web development/design, I welcome domain-agnostic answers which can provide some generic ideas/reasoning. I have seen a lot of people messing up these sections (even more seriously than my doubts :) ), putting things under the wrong sub-headings and thus putting a big question mark on their understanding of those things. I don't know much about XML, Comet technology, etc. Considering those are included too, what things should be put under Languages? E.g. should CSS be put under Languages? Please give some reasoning to support your views. Where should the others (XML, Comet, cURL, etc.) be put? I welcome some examples of how you put it. Or do you have an additional Keywords section where you write all the unsortables? Considering a set of standards like the W3C standards, do you have a Standards sub-heading? I guess I have put the contents of the other sections okay, but do let me know about your ideas and reasoning. After all, I understand there may not be a single answer to this, but let's see what the trend is. Thanks. Update: Further, do you mention design patterns you have used? Web services, etc.? Where do you mention SOAP, XML, etc.?

    Read the article

  • Engine Rendering pipeline : Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the application layer in Objective-C and a separate, self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer. It is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do: void RendererGLES20::render(Model * model) { // Set a bunch of uniforms glUniformMatrix4fv(.......); // Enable specific attributes, can be many glEnableVertexAttribArray(......); // Set a bunch of vertex attribute pointers: glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords); // Now actually Draw the geometry glDrawArrays(GL_TRIANGLES, 0, m->vertexCount); // After drawing, disable any vertex attributes: glDisableVertexAttribArray(.......); } As you can see this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attributes, etc. In other words, I would have to change the RendererGLES20 render source code just to incorporate the new shader. Is there any way to make the shader object totally generic? What if I just want to change the shader object and not worry about recompiling the game source? Is there any way to make the renderer agnostic of uniforms and attributes, etc.? Even though we need to pass data to uniforms, what is the best place to do that? The Model class? Is the model class aware of shader-specific uniforms and attributes? The following shows the Actor class: class Actor : public ISceneNode { ModelController * model; AIController * AI; }; Model controller class: class ModelController { class IShader * shader; int textureId; vec4 tint; float alpha; struct Vertex * vertexArray; }; The Shader class just contains the shader object, compiling and linking sub-routines, etc. In the game logic class I am actually rendering the object: void GameLogic::update(float dt) { IRenderer * renderer = g_application->GetRenderer(); Actor * a = GetActor(id); renderer->render(a->model); } Please note that even though Actor extends ISceneNode, I haven't started implementing the SceneGraph yet. I will do that as soon as I resolve this issue. Any ideas on how to improve this? Related design patterns, etc.? Thank you for reading the question.
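
    One common way out of this rigidity - sketched below as an illustration rather than as the questioner's design - is to move the per-shader parameters into a generic name/value block owned by the material (the ModelController here) and have the renderer apply that block by looking uniform locations up by name just before glDrawArrays. The ShaderParams/UniformValue names are made up for the sketch; vertex attributes and error handling are left out, and standard GLES2 headers are assumed.

        // Sketch only: a name->value parameter block that keeps RendererGLES20
        // agnostic of shader-specific uniforms. Assumes standard GLES2 headers.
        #include <GLES2/gl2.h>
        #include <map>
        #include <string>

        struct UniformValue {                       // kept trivially simple for the sketch
            enum Type { Float, Vec4, Mat4 } type;
            float data[16];
        };

        class ShaderParams {
        public:
            void set(const std::string &name, const UniformValue &v) { values_[name] = v; }

            // Called by the renderer just before glDrawArrays; the renderer never
            // needs to know which uniforms a particular shader declares.
            void apply(GLuint program) const {
                for (const auto &kv : values_) {
                    GLint loc = glGetUniformLocation(program, kv.first.c_str());
                    if (loc < 0) continue;          // this shader doesn't use the parameter
                    switch (kv.second.type) {
                        case UniformValue::Float: glUniform1fv(loc, 1, kv.second.data); break;
                        case UniformValue::Vec4:  glUniform4fv(loc, 1, kv.second.data); break;
                        case UniformValue::Mat4:  glUniformMatrix4fv(loc, 1, GL_FALSE, kv.second.data); break;
                    }
                }
            }

        private:
            std::map<std::string, UniformValue> values_;
        };

    The same lookup-by-name idea extends to vertex attributes via glGetAttribLocation, so adding a ripple shader becomes a data change (new shader source plus parameter values) instead of an edit to RendererGLES20::render.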

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows: float Q_rsqrt( float number ) { long i; float x2, y; const float threehalfs = 1.5F; x2 = number * 0.5F; y = number; i = * ( long * ) &y; // evil floating point bit level hacking i = 0x5f3759df - ( i >> 1 ); // what the f***? y = * ( float * ) &i; y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed return y; } To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale: For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game Programming book ever devoting the first two chapters to "Now just what is a variable anyway?" In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection. Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL. Thanks for everything! Before anyone gets on my case about it, Fast inverse square root was from Quake III Arena, not Doom 3!
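
    For readers stopped cold by that snippet, here is a hedged restatement of what it does, with the steps named (same published magic constant; memcpy replaces the pointer-cast type punning, which is undefined behaviour in standard C++):

        // The quoted Q_rsqrt, spelled out: approximate 1/sqrt(x) from the float's
        // bit pattern, then refine the guess with one Newton-Raphson step.
        #include <cstdint>
        #include <cstring>

        float rsqrt_explained(float x) {
            std::uint32_t i;
            std::memcpy(&i, &x, sizeof i);       // reinterpret the float's bits as an integer
            i = 0x5f3759df - (i >> 1);           // crude 1/sqrt estimate via the exponent bits
            float y;
            std::memcpy(&y, &i, sizeof y);       // back to float
            y = y * (1.5f - 0.5f * x * y * y);   // one Newton-Raphson iteration sharpens it
            return y;
        }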

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist & support this kind of thing? My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities that my module encapsulates would be disconnected from the rest of the application because they belonged to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be impossible to do practically. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nice with Entity Framework (if at all), though. I like the idea of building on proven patterns & practices to encapsulate established, reusable functionality. I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures & ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for application development outside of the module, I would want to use Entity Framework and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?

    Read the article

  • Finding Tools Guidance in OUM

    - by user716869
    OUM is not tool – specific. However, it does include tool guidance.  Tool guidance in OUM includes: a mention of a tool that could be used to complete a specific task(s) templates created with a specific tool example work products in a specific tool links to tool resources Tool Supplemental Guides So how do you find all this helpful tool information? Start at the lowest level first – the Task Overview.  Even though the task overviews are written tool-agnostic, they sometimes mention suggestions, or examples of a tool that might be used to complete the task.  More specific tool information can be found in the Task Overview, Templates and Tools section.  In some cases, the tool used to create the template (for example, Microsoft Word, Powerpoint, Project and Visio) is useful. The Templates and Tools section also provides more specific tool guidance, such as links to: White Papers Viewlets Example Work Products Additional Resources Tool Supplemental Guides If you’re more interested in seeing what tools might be helpful in general for your project or to see if there is any tool guidance for a specific tool that your project is committed to using, go to the Supplemental Guidance page in OUM.  This page is available from the Method Navigation pull down located in the header of almost every OUM page. When you open the Supplemental Guidance page, the first thing you see is a table index of everything that is included on the page.  At the top of the right column are all the Tool Supplemental Guides available in OUM.  Use the index to navigate to any of the guides. Next in the right column is Discipline/Industry/View Resources and Samples.  Use the index to navigate to any of these topics and see what’s available and more specifically, if there is any tool guidance available.  For example, if you navigate to the Cloud Resources, you will find a link to the IT Strategies from Oracle page that provides information for Cloud Practitioner Guides, Cloud Reference Architectures and Cloud White Papers, including the Cloud Candidate Selection Tool and Cloud Computing Maturity Model. The section for Method Tool and Technique Cross References can take you to the Task to Tool Cross Reference.  This page provides a task listing with possible helpful tools and links to more information regarding the tools.  By no means is this tool guidance all inclusive.  You can use other tools not mentioned in OUM to complete an OUM task. The Method Tool and Technique Cross References can also take you to the various Technique pages (Index and Cross References).  While techniques are not necessarily “tools,” they can certainly provide valuable assistance in completing tasks. In the Other Resources section of the Supplemental Guidance page, you find links to the viewlets and white papers that are included within OUM.

    Read the article

  • Protobuf design patterns

    - by Monster Truck
    I am evaluating Google Protocol Buffers for a Java-based service (but am expecting language-agnostic patterns). I have two questions. The first is a broad, general question: what patterns are we seeing people use? Said patterns being related to class organization (e.g., messages per .proto file, packaging, and distribution) and message definition (e.g., repeated fields vs. repeated encapsulated fields*), etc. There is very little information of this sort on the Google Protobuf help pages and public blogs, while there is a ton of information for established protocols such as XML. I also have specific questions about the following two different patterns: (1) Represent messages in .proto files, package them as a separate jar, and ship it to target consumers of the service -- which is basically the default approach, I guess. (2) Do the same, but also include hand-crafted wrappers (not sub-classes!) around each message that implement a contract supporting at least these two methods (T is the wrapper class, V is the message class; using generics but simplified syntax for brevity): public V toProtobufMessage() { V.Builder builder = V.newBuilder(); for (Item item : getItemList()) { builder.addItem(item); } return builder.setAmountPayable(getAmountPayable()). setShippingAddress(getShippingAddress()). build(); } public static T fromProtobufMessage(V message_) { return new T(message_.getShippingAddress(), message_.getItemList(), message_.getAmountPayable()); } One advantage I see with (2) is that I can hide away the complexities introduced by V.newBuilder().addField().build() and add some meaningful methods such as isOpenForTrade() or isAddressInFreeDeliveryZone() etc. in my wrappers. The second advantage I see with (2) is that my clients deal with immutable objects (something I can enforce in the wrapper class). One disadvantage I see with (2) is that I duplicate code and have to sync up my wrapper classes with the .proto files. Does anyone have better techniques or further critiques of either of the two approaches? *By encapsulating a repeated field I mean messages such as this one: message ItemList { repeated Item item = 1; } message CustomerInvoice { required ShippingAddress address = 1; required ItemList itemList = 2; required double amountPayable = 3; } instead of messages such as this one: message CustomerInvoice { required ShippingAddress address = 1; repeated Item item = 2; required double amountPayable = 3; } I like the latter but am happy to hear arguments against it.

    Read the article

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features. It contains tables, data and stored procedures. Here is what I have tried so far: I tried "tasks" - "generate scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring 2012 as the source machine and 2008 as the target machine, I was presented with a list of tables which I could copy. I selected all my tables (300+), and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'". I also tried the "Copy Database Wizard", which claimed to be able to copy "from 2000 or later to 2005 or later". It has two modes: 1) "detach and attach", which failed with error: Message: Index was outside the bounds of the array. StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value) ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName() 2) SQL Management Object Method which failed with error "Cannot read property IsFileStream.This property is not available on SQL Server 7.0."

    Read the article

  • Burn bootable iso image to USB stick using dd: Won't boot (despite USB first in boot sequence)

    - by Nicolas Raoul
    I have installed Ubuntu on a Lenovo Thinkpad R500 2732, and I must update the BIOS. On the Lenovo website, I am offered this: BIOS Update Bootable CD for Windows 7 (32-bit, 64-bit), Vista (32-bit, 64-bit), XP - ThinkPad R500 I guess a bootable CD that would do a BIOS update is indeed what I need. (still wondering why it says "Windows" though... if it is bootable should not it be OS-agnostic?) Not wanting to waste a CD, I copied the image to my USB stick: sudo dd if=/home/nico/7yuj40uc.iso of=/dev/sdb1 bs=1M And rebooted, after making sure USB is first in the boot sequence. PROBLEM: It does not boot. Did I forget one step? Details about the iso image (readme): ls -lh 7yuj40uc.iso 25M file 7yuj40uc.iso /home/nico/7yuj40uc.iso: # ISO 9660 CD-ROM filesystem data '7YUJ40US ' (bootable) (Scroll to the right: it says "bootable") UNetbootin does not work because it is not a Linux image. Some people on the Internet advise to copy the content of the ISO and do other steps. This ISO has zero ISO content so it would not work. If I mount the ISO, I can see it contains zero files.

    Read the article

  • How do I know what hardware to buy to meet my needs?

    - by Darth Android
    While Stack Exchange does not permit shopping recommendations, it doesn't provide any general advice to consider when buying hardware. So, instead of just telling those that ask what to buy that it's not allowed, let's tell them how to figure out what they need. When looking forward to build a computer, how do I know what to buy? How do I find out if a given CPU will be enough for a certain game or application that I want to run? How do I find out if a given graphics card will be enough for a certain game or application? What is important when looking at motherboards? How much memory do I need? How do I know how much wattage I need for a power supply? What size case do I need? What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc... What "gotchas" do I need to be on the lookout for? Please keep responses generation-agnostic to ensure they will be helpful to our future users. :)

    Read the article

  • What do I need to consider when buying hardware to meet my needs?

    - by Darth Android
    I'm looking to build a new computer from the ground up. I'm not sure what to look out for and need guidance and help on how to pick the hardware needed to construct my new rig. How do I know what to buy? How do I find out if a given CPU will be enough for a certain game or application that I want to run? How do I find out if a given graphics card will be enough for a certain game or application? What is important when looking at motherboards? How much memory do I need? How do I know how much wattage I need for a power supply? What size case do I need? What relevant standards do I need to read up on and be aware of? PCI, PCIe, SATA, USB 2.0, USB 3.0, etc... What "gotchas" do I need to be on the lookout for? Please keep responses generation-agnostic to ensure they will be helpful to our future users. While Stack Exchange does not permit shopping recommendations, it doesn't provide any general advice to consider when buying hardware. So, instead of just telling those that ask what to buy that it's not allowed, let's tell them how to figure out what they need. This question was Super User Question of the Week #20 Read the June 20, 2011 blog entry for more details or submit your own Question of the Week.

    Read the article

  • Modifying value of "Rating" column within Explorer for arbitrary file types

    - by Fake Name
    Basically, I have a large body of assorted media (text, images, Flash files, archives, folders, etc...) and I'm attempting to organize it. Windows Explorer has a rating column, but there seems to be no way to modify the rating of the files short of opening them in their type-specific software (e.g. Media Player or Photo Viewer). However, this does not work when the file is of an unsupported type (.rar, .swf ...), or a directory. I'd be more than willing to consider a file-manager replacement (I've already looked at quite a few, Directory Opus, Total Commander, etc...), or even a solution that stores the rating metadata in a hidden file in each folder, or a separate database. The one really critical requirement is the ability to sort by rating while being filetype-agnostic. Basically, is there any way to categorize a large collection of assorted files by rating that will work with any file type, including directories? Ideally, there would be an easy way to add arbitrary columns to Windows Explorer and edit them directly. However, there seems to be no way to do this. The rating column is the next best thing.

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading the haproxy (with -sf) I have noticed that appsession isn't 100% effective, it would appear that sometimes it doesn't always 'request-learn'. I also tried to add a cookie JSESSION prefix to balance in case request-learn didn't take. Unfortunately it would present scenarios where the prefix would list svr2 but it was balanced to a different server. I am assuming it's because the appsession table takes first then sticks on that before using the cookie parameter. I have not tested with using cookie as an inserted option (not prefix on existing cookie) but I am thinking it would yield similar results. My question is: Which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall through? Also as a follow up - is it not recommended to use both in the same backend? Cookie as I understand takes less memory resources, is agnostic to reloads and has way better reliability of persistence. Appsession I assume takes less cpu resource, since it's reading not writing. (Bonus Question: is there a way to inspect appsession/cookie table map? socket show table doesn't show anything except stick-tables) Many thanks in advance, -Nick

    Read the article

  • hand coding a parser

    - by John Leidegren
    For all you compiler gurus, I wanna write a recursive descent parser and I wanna do it with just code. No generating lexers and parsers from some other grammar, and don't tell me to read the dragon book, I'll come around to that eventually. I wanna get into the gritty details of implementing a lexer and parser for a reasonably simple language, say CSS. And I wanna do this right. This will probably end up being a series of questions, but right now I'm starting with a lexer. Tokenization rules for CSS can be found here. I find myself writing code like this (hopefully you can infer the rest from this snippet): public CssToken ReadNext() { int val; while ((val = _reader.Read()) != -1) { var c = (char)val; switch (_stack.Top) { case ParserState.Init: if (c == ' ') { continue; // ignore } else if (c == '.') { _stack.Transition(ParserState.SubIdent, ParserState.Init); } break; case ParserState.SubIdent: if (c == '-') { _token.Append(c); } _stack.Transition(ParserState.SubNMBegin); break; What is this called? And how far off am I from something reasonably well understood? I'm trying to balance something which is fair in terms of efficiency and easy to work with; using a stack to implement some kind of state machine is working quite well, but I'm unsure how to continue like this. What I have is an input stream, from which I can read one character at a time. I don't do any lookahead right now, I just read the character and then, depending on the current state, try to do something with it. I'd really like to get into the mindset of writing reusable snippets of code. This Transition method is currently the means to do that: it will pop the current state off the stack and then push the arguments in reverse order. That way, when I write Transition(ParserState.SubIdent, ParserState.Init) it will "call" a subroutine SubIdent which will, when complete, return to the Init state. The parser will be implemented in much the same way. Currently, having everything in a single big method like this allows me to easily return a token when I find one, but it also forces me to keep everything in one single big method. Is there a nice way to split these tokenization rules into separate methods? Any input/advice on the matter would be greatly appreciated!
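
    For what it's worth, the usual name for this is a hand-written scanner driven by an explicit state machine; once each rule becomes its own method (with the call stack replacing the explicit state stack), it reads like the lexing counterpart of a recursive descent parser. The snippet below is only an illustration of that split - written in C++ since the idea is language-agnostic, with made-up names and a deliberately simplified ident rule rather than the real CSS tokenization grammar:

        #include <cctype>
        #include <istream>
        #include <string>

        struct CssToken { std::string kind, text; };

        class CssLexer {
        public:
            explicit CssLexer(std::istream &in) : in_(in) {}

            // Dispatch on the first significant character; each rule is its own method.
            CssToken readNext() {
                skipWhitespace();
                int c = in_.peek();
                if (c == EOF)  return CssToken{"eof", ""};
                if (c == '.')  return readClassSelector();
                if (std::isalpha(c) || c == '-') return readIdent("ident");
                in_.get();
                return CssToken{"delim", std::string(1, static_cast<char>(c))};
            }

        private:
            void skipWhitespace() { while (std::isspace(in_.peek())) in_.get(); }

            // '.' followed by an identifier, e.g. ".nav-bar"
            CssToken readClassSelector() {
                in_.get();                          // consume the '.'
                return readIdent("class");
            }

            // ident := [-a-zA-Z0-9_]+  (much looser than the CSS spec, for brevity)
            CssToken readIdent(const std::string &kind) {
                std::string text;
                while (std::isalnum(in_.peek()) || in_.peek() == '-' || in_.peek() == '_')
                    text += static_cast<char>(in_.get());
                return CssToken{kind, text};
            }

            std::istream &in_;
        };

    Each method consumes exactly one rule and returns, so the reusable pieces fall out naturally and there is no single giant switch to maintain.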

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this books as well as most questions on legacy code I found so far are concerned with the case of inheriting code as-is. But in this case I actually have access to the original developer and we do have some time for an orderly hand-over. Some background on the piece of code I will be inheriting: It's functioning: There are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future. Undocumented: There is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well-understood, because I have been writing against its API (as a black-box) for years. Only higher-level integration tests: There are only integration tests testing proper interaction with other components via the API (again, black-box). Very low-level, optimized for speed: Because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records). Concurrent and lock-free: While I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity. Large codebase: This particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me. Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic. I was wondering how the time until his departure would best be spent. Here are a couple of ideas: Get everything to build on my machine: Even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while, so this should probably be the first order of business. More tests: While I would like more class-level unit tests so that when I will be making changes, any bugs I introduce can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies). What to document: I think for starters it would be best to focus documentation on those areas in the code that would otherwise be difficult to understand e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that have been out in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I) How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions accompanied by some prose would be best. Who to document: I was wondering what would be better, to have him write the documentation or have him explain it to me, so I can write the documentation. I am afraid, that things that are obvious to him but not me would otherwise not be covered properly. Refactoring using pair-programming: This might not be possible to do due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he was still around to provide input on why things are the way they are. 
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Adventures in MVVM &ndash; ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM In this post, I am going to explore how I prefer to attach ViewModels to my Views.  I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works along with some examples.  Some History My approach to View-First ViewModel creation has evolved over time.  I have constructed ViewModels in code-behind.  I have instantiated ViewModels in the resources sectoin of the view. I have used Prism to resolve ViewModels via Dependency Injection. I have created attached properties that use Dependency Injection containers underneath.  Of all these approaches, I continue to find issues either in composability, blendability or maintainability.  Laurent Bugnion came up with a pretty good approach in MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues.  John paired up with Glen Block to make the ViewModelLocator more generic by using MEF to compose ViewModels.  It is a great approach, but I don’t like baking in specific resolution technologies into the ViewModelSupport project. I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels.  I have come up with my own version of the ViewModelLocator that is both generic and container agnostic.  The solution is blendable, configurable and simple to use.  Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.Create, Lookup Tables, new, whatever. How to use the locator 1. Create a class to contain your resolution configuration: public class YourViewModelResolver: IViewModelResolver { private YourFavoriteContainer container = new YourFavoriteContainer(); public YourViewModelResolver() { // Configure your container } public object Resolve(string viewModelName) { return container.Resolve(viewModelName); } } Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance. 2. Create your ViewModelLocator with your custom resolver in App.xaml: <VMS:ViewModelLocator x:Key="ViewModelLocator"> <VMS:ViewModelLocator.Resolver> <local:YourViewModelResolver /> </VMS:ViewModelLocator.Resolver> </VMS:ViewModelLocator> 3. Hook up your data context whenever you want a ViewModel (WPF): <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}"> This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it. 4. What about Silverlight? Good question.  You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing: <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}"> But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer 6 times when it binds.  While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times.  If this gets in your way, the solution (as pointed out by John), is to use an IndexConverter (instantiated in App.xaml and also included in the project): <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}"> It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.  
Conclusion This approach works really well (I suppose I am a bit biased).  It allows for composability from any mechanisim you choose.  It is blendable (consider serving up different objects in Design Mode if you wish... or different constructors… whatever makes sense to you).  It works in Cider.  It is configurable.  It is flexible.  It is the best way I have found to manage View-First ViewModel hookups.  Thanks to the guys mentioned in this article for getting me to something I love using.  Enjoy.

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services.  But as usual, you lack the time and resources that you need in order to develop the services properly.  So you googled or bing’ed, found this blog post, and began crying in gratitude.  Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about.  So just close your eyes and count to 3 … now open them … and here it is…. Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand.  The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API.  And that legacy API is built in the image of the silo applications.  Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping. The Straight and Narrow Path This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory.  Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide.  Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch.  According to Lafe, building their service inventory is a three-phase process: Model a business process.  This requires intense collaboration between the IT and business wings of the organization, of course.  The FBI uses IBM Websphere tools to model the process with BPMN. Identify candidate services to facilitate the business process. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process.  The FBI uses ActiveVOS for orchestration services. The 12 Step Program to End Your Legacy API Addiction Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint.  For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing.  Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services.  Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat.  (Astute readers will note that Erl’s diagram, restricted to analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.) 
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems. 
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Is it bad practice to make an iterator that is aware of its own end

    - by aaronman
    For some background on why I am asking this question, here is an example. In Python, the function chain chains an arbitrary number of ranges together and makes them into one without making copies. Here is a link in case you don't understand it. I decided I would implement chain in C++ using variadic templates. As far as I can tell, the only way to make an iterator for chain that will successfully go to the next container is for each iterator to know about the end of the container (I thought of a sort of hack where, when != is called against the end, it will know to go to the next container, but the first way seemed easier, safer and more versatile). My question is whether there is anything inherently wrong with an iterator knowing about its own end. My code is in C++, but this can be language-agnostic since many languages have iterators. #ifndef CHAIN_HPP #define CHAIN_HPP #include "iterator_range.hpp" namespace iter { template <typename ... Containers> struct chain_iter; template <typename Container> struct chain_iter<Container> { private: using Iterator = decltype(((Container*)nullptr)->begin()); Iterator begin; const Iterator end;//never really used but kept it for consistency public: chain_iter(Container & container, bool is_end=false) : begin(container.begin()),end(container.end()) { if(is_end) begin = container.end(); } chain_iter & operator++() { ++begin; return *this; } auto operator*()->decltype(*begin) { return *begin; } bool operator!=(const chain_iter & rhs) const{ return this->begin != rhs.begin; } }; template <typename Container, typename ... Containers> struct chain_iter<Container,Containers...> { private: using Iterator = decltype(((Container*)nullptr)->begin()); Iterator begin; const Iterator end; bool end_reached = false; chain_iter<Containers...> next_iter; public: chain_iter(Container & container, Containers& ... rest, bool is_end=false) : begin(container.begin()), end(container.end()), next_iter(rest...,is_end) { if(is_end) begin = container.end(); } chain_iter & operator++() { if (begin == end) { ++next_iter; } else { ++begin; } return *this; } auto operator*()->decltype(*begin) { if (begin == end) { return *next_iter; } else { return *begin; } } bool operator !=(const chain_iter & rhs) const { if (begin == end) { return this->next_iter != rhs.next_iter; } else return this->begin != rhs.begin; } }; template <typename ... Containers> iterator_range<chain_iter<Containers...>> chain(Containers& ... containers) { auto begin = chain_iter<Containers...>(containers...); auto end = chain_iter<Containers...>(containers...,true); return iterator_range<chain_iter<Containers...>>(begin,end); } } #endif //CHAIN_HPP
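
    To make the trade-off concrete, here is a stripped-down, self-contained illustration (two containers only, hypothetical names, not the chain_iter above) of why the sub-range's end ends up inside the iterator: operator++ needs it to know when to hop to the next container, while comparison stays a simple position check within the active sub-range.

        #include <iostream>
        #include <iterator>
        #include <vector>

        // Assumes the first range is non-empty, to keep the sketch short.
        template <typename It>
        class chain2_iter {
        public:
            chain2_iter(It cur, It end1, It begin2, bool in_first)
                : cur_(cur), end1_(end1), begin2_(begin2), in_first_(in_first) {}

            chain2_iter &operator++() {
                ++cur_;
                if (in_first_ && cur_ == end1_) {   // the "aware of its own end" step
                    cur_ = begin2_;
                    in_first_ = false;
                }
                return *this;
            }
            typename std::iterator_traits<It>::reference operator*() const { return *cur_; }
            bool operator!=(const chain2_iter &rhs) const {
                // Only compare positions within the same sub-range.
                return in_first_ != rhs.in_first_ || cur_ != rhs.cur_;
            }

        private:
            It cur_, end1_, begin2_;
            bool in_first_;
        };

        int main() {
            std::vector<int> a = {1, 2}, b = {3, 4};
            chain2_iter<std::vector<int>::iterator> it(a.begin(), a.end(), b.begin(), true);
            chain2_iter<std::vector<int>::iterator> end(b.end(), a.end(), b.begin(), false);
            for (; it != end; ++it) std::cout << *it << ' ';   // prints: 1 2 3 4
            std::cout << '\n';
        }

    Adapters that skip or splice ranges (filtering iterators, for example) generally need the same knowledge, since operator++ cannot otherwise tell when to stop or move on.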

    Read the article

  • Philosophy of [WebInvoke(ResponseFormat = WebMessageFormat.Json)]

    - by Mikey Cee
    Hi everyone, I'm writing what I'm referring to as a POJ (Plain Old JSON) WCF web service - one that takes and emits standard JSON with none of the crap that ASP.NET Ajax likes to add to it. It seems that there are three steps to accomplish this: change the binding to webHttpBinding in the endpoint's tag; decorate the method with [WebInvoke(ResponseFormat = WebMessageFormat.Json)]; and add an incantation of [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] to the service contract. This is all working OK for me - I can pass in and am being returned nice plain JSON. If I remove the WebInvoke attribute, then I get XML returned instead, so it is certainly doing what it is supposed to do. But it strikes me as odd that the option to specify JSON output appears here and not in the configuration file. Say I wanted to expose my method as an XML endpoint too - how would I do this? Currently the only way I can see would be to have a second method that does exactly the same thing but does not have WebMessageFormat.Json specified. Then rinse and repeat for every method in my service? Yuck. Specifying that the output should be serialized to JSON in the attribute seems to be completely contrary to the philosophy of WCF, where the service is implemented in a transport- and encoding-agnostic manner, leaving the nasty details of how the data will be moved around to the configuration file. Is there a better way of doing what I want to do? Or are we stuck with this awkward attribute? Or do I just not understand WCF deeply enough?

    Read the article

  • Visual Studio 2010 64-bit COM Interop Issue

    - by Adam Driscoll
    I am trying to add a VC6 COM DLL to our VS2010RC C# solution. The DLL was compiled with the VC6 tools to create an x86 version and was compiled with the VC7 Cross-platform tools to generate a VC7 DLL. The x86 version of the assembly works fine as long as the consuming C# project's platform is set to x86. It doesn't matter whether the x64 or the x86 version of the DLL is actually registered. It works with both. If the platform is set to 'Any CPU' I receive a BadImageFormatException on the load of the Interop.<name>.dll. As for the x64 version, I cannot even get the project to build. I receive the tlbimp error: TlbImp : error TI0000: A single valid machine type compatible with the input type library must be specified. Has anyone seen this issue? EDIT: I've done a lot more digging into this issue and think this may be a Visual Studio bug. I have a clean solution. I bring in my COM assembly with language agnostic 'Any CPU' selected. The process architecture of the resulting Interop DLL is x86 rather than MSIL. May have to make the Interop by hand for now to get this to work. If anyone has another suggestion let me know.

    Read the article

  • Metaprogramming - self explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). Also I want more than your average wiki page, and something language-agnostic or preferably with Java examples. Do you know of such resources that will allow me to efficiently put all of it into practice (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the usual flow of bad decisions - experience - good decisions)? EDIT: Something along the lines of this example from the Pragmatic Programmer: ...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number. For example, the following input would draw a rectangle: P 2 # select pen 2 D # pen down W 2 # draw west 2cm N 1 # then north 1 E 2 # then east 2 S 1 # then back south U # pen up Thank you!
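
    As a concrete starting point, the quoted drawing language is small enough that a throwaway interpreter fits on a page, and writing one is a good first exercise for this kind of material. The sketch below (C++, my own names, with the actual drawing reduced to printing pen positions) is only meant to show the shape of such an interpreter, not a canonical solution:

        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            std::string line;
            int pen = 0, x = 0, y = 0;
            bool down = false;
            while (std::getline(std::cin, line)) {
                std::istringstream in(line.substr(0, line.find('#')));   // strip "# ..." comments
                char cmd;
                int arg = 0;
                if (!(in >> cmd)) continue;                              // blank line
                in >> arg;                                               // optional single number
                switch (cmd) {
                    case 'P': pen = arg;    break;   // select pen
                    case 'D': down = true;  break;   // pen down
                    case 'U': down = false; break;   // pen up
                    case 'N': y += arg;     break;   // north
                    case 'S': y -= arg;     break;   // south
                    case 'E': x += arg;     break;   // east
                    case 'W': x -= arg;     break;   // west
                    default:
                        std::cerr << "unknown command: " << cmd << '\n';
                        return 1;
                }
                if (down) std::cout << "pen " << pen << " now at (" << x << ", " << y << ")\n";
            }
        }

    Feeding it the rectangle program above echoes the pen position after each pen-down step; swapping the print statement for real drawing calls, or generating such little programs from data, is where the metaprogramming angle starts.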

    Read the article

  • Bit reversal of an integer, ignoring integer size and endianness

    - by ??O?????
    Given an integer typedef: typedef unsigned int TYPE; or typedef unsigned long TYPE; I have the following code to reverse the bits of an integer: TYPE max_bit= (TYPE)-1; void reverse_int_setup() { TYPE bits= (TYPE)max_bit; while (bits <<= 1) max_bit= bits; } TYPE reverse_int(TYPE arg) { TYPE bit_setter= 1, bit_tester= max_bit, result= 0; for (result= 0; bit_tester; bit_tester>>= 1, bit_setter<<= 1) if (arg & bit_tester) result|= bit_setter; return result; } One just needs to first run reverse_int_setup(), which stores an integer with only the highest bit turned on; then any call to reverse_int(arg) returns arg with its bits reversed (to be used as a key to a binary tree, taken from an increasing counter, but that's more or less irrelevant). Is there a platform-agnostic way to get, at compile time, the value that max_bit holds after the call to reverse_int_setup()? Otherwise, is there an algorithm you consider better/leaner than the one I have for reverse_int()? Thanks.
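
    For the compile-time part, the portable trick is usually CHAR_BIT from <climits>: the width of an unsigned type is sizeof(TYPE) * CHAR_BIT bits, so the top bit is a constant expression and reverse_int_setup() disappears. A sketch (it assumes TYPE has no padding bits, which holds for the unsigned types on mainstream platforms):

        #include <climits>

        typedef unsigned int TYPE;

        // Highest bit of TYPE, computed at compile time instead of by reverse_int_setup().
        static const TYPE max_bit = (TYPE)1 << (sizeof(TYPE) * CHAR_BIT - 1);

        TYPE reverse_int(TYPE arg) {
            TYPE bit_setter = 1, bit_tester = max_bit, result = 0;
            for (; bit_tester; bit_tester >>= 1, bit_setter <<= 1)
                if (arg & bit_tester)
                    result |= bit_setter;
            return result;
        }

    The loop itself is fine for occasional use; if reversal ever shows up in a profile, the usual leaner alternatives are a byte-wise lookup table or the shift-and-mask swap of adjacent bit groups.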

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose Javascript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client side storage, the application has to check to see if the database is initialized before starting up the ORM since it inspects the DB tables to construct the classes. My problem is that this sequence of operations (and if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer and results in some complex call graphs and whatnot. So, I ask these questions: How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function? Do you have any general strategies or links for async/event driven programming and keeping the call graph simple and understandable? Do you have any suggestions for Javascript ORMs/meta object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic? Am I just a big newb and should be able to work this out with ease? Thanks folks!

    Read the article

  • Binding XML in Silverlight without Nominal Classes

    - by AnthonyWJones
    Let's say I have a simple chunk of XML:- <root> <item forename="Fred" surname="Flintstone" /> <item forename="Barney" surname="Rubble" /> </root> Having fetched this XML in Silverlight I would like to bind it with XAML of this ilk:- <ListBox x:Name="ItemList" Style="{StaticResource Items}"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBox Text="{Binding Forename}" /> <TextBox Text="{Binding Surname}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Now I can bind simply enough with LINQ to XML and a nominal class:- public class Person { public string Forename {get; set;} public string Surname {get; set;} } So here is the question: can it be done without this class? IOW, coupling between the Silverlight code and the input XML is limited to the XAML only; other source code is agnostic to the set of attributes on the item element. Edit: The use of XSD is suggested, but ultimately it amounts to the same thing: an XSD-generated class. Edit: An anonymous class doesn't work; Silverlight can't bind to them. Edit: This needs to be two-way: the user needs to be able to edit the values and these values end up in the XML. (Changed original TextBlock to TextBox in the sample above.)

    Read the article
