Search Results

Search found 16054 results on 643 pages for 'reference architecture'.


  • Execute remote Lua Script

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script) and store it on the server. Then he can use an API to execute the script. I was thinking that the API could be a web service. So my questions are: I need high performance to execute the script, so my first choice was a Lua script. Does anyone have another suggestion? Because I need high performance, I was wondering whether a web service is the best solution. Maybe I could create a TCP/IP Windows service that handles the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have a concurrency problem. My scripts will query a database; I will use Tokyo Cabinet or Tokyo Tyrant. I think Tokyo Tyrant is the only solution because I will have many requests. For performance, do I need connection pooling? Is there any way to share variables between web service requests? To make the web service or the Windows service I was thinking of using C++. Can someone help with these questions? Thanks

    Read the article

  • REST application, Transactions, Cache drop

    - by Julian Davchev
    Hi, I am building a REST API in PHP with a memcache layer on top for caching all resources. After some reading and experience, it turns out it's best when documents are as simple as possible, mainly due to dropping cache sequences. So if there are 'building' and 'room' entities, for the 'room' document I would only place the id of the 'building' and not its whole data. Then on the API client side I would merge data as needed. The problem comes when I need to update/insert (in most cases more than one table). I update one resource, but on the second update the system fails or whatever, and there are database inconsistencies. I see several solutions:
    1. Implement REST transactions, which I find wrong and complex, as the idea is to be stateless and easy.
    2. On update/insert actions pass more complex data (not single entities) so I can force transactions at the API level. But this makes it awkward to keep the GET document structure the same as the PUT document structure, and again it somehow makes drop sequences complex.
    Any pointers are more than welcome. Cheers

    Read the article

  • Best Database Change Control Methodologies

    - by SnapJag
    As a database architect, developer, and consultant, there are many questions I can answer. One that I was asked recently, though, and still can't answer well, is: "What are some of the best methods or techniques to keep database changes documented and organized, yet able to roll out effectively, in either a single-developer or multi-developer environment?" This may involve stored procedures and other object scripts, but especially schemas: from documentation, to the new physical update scripts, to rollout, and then full circle. There are applications to make this happen, but they require schema hooks and overhead. I would rather learn about techniques that work without a lot of extra third-party involvement.

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly weird one. The ECMA specification simply states that it:

        constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type.

    There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj:

        newobj instance void IncrementableClass::.ctor()

    and for value types, you need to use initobj:

        .locals init (
            valuetype IncrementableStruct s1
        )
        ldloca 0
        initobj IncrementableStruct

    But, for a generic method, we need a consistent method that would work equally well for reference or value types.

    Activator.CreateInstance<T>

    To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error:

        newobj instance void !!0::.ctor()
        [IL]: Error: Unable to resolve token.

    This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter):

        .method private static !!0 CreateInstance<.ctor T>() {
            call !!0 [mscorlib]System.Activator::CreateInstance<!!0>()
            ret
        }

    Going further: compiler enhancements

    Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above:

        private static T CreateInstance() where T : new() {
            return new T();
        }

    what you actually get is this (edited slightly for space & clarity):

        .method private static !!T CreateInstance<.ctor T>() {
            .locals init (
                [0] !!T CS$0$0000,
                [1] !!T CS$0$0001
            )
        DetectValueType:
            ldloca.s 0
            initobj !!T
            ldloc.0
            box !!T
            brfalse.s CreateInstance
        CreateValueType:
            ldloca.s 1
            initobj !!T
            ldloc.1
            ret
        CreateInstance:
            call !!0 [mscorlib]System.Activator::CreateInstance<T>()
            ret
        }

    What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, let's dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory: initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work?

    First, the stack transition for value types:

        ldloca.s 0   // &[!!T(uninitialized)]
        initobj !!T  //
        ldloc.0      // !!T
        box !!T      // O[!!T]
        brfalse.s    // branch not taken

    When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues on to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null.

        ldloca.s 0   // &[!!T(null)]
        initobj !!T  //
        ldloc.0      // null
        box !!T      // null
        brfalse.s    // branch taken

    Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called, which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and it also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types.

    Next...

    That concludes the initial premise of the Subterranean IL series: to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.

    Read the article

  • How are interrupts handled by dual processor machines?

    - by jeffD
    I have an idea of how interrupts are handled by a dual-core CPU, but I was wondering how interrupt handling is implemented on a board with more than one physical processor. Is any of the interrupt responsibility determined by the physical board's configuration? Each processor must be able to handle some types of interrupts, like disk I/O. Or is there some circuitry to manage and dispatch interrupts to the appropriate processor? My guess is that the scheme must be processor-neutral, so that any processor and core can run the interrupt handler. If a core is waiting on a disk read, will that core be the one to run the interrupt handler when the disk is ready?

    Read the article

  • Best Practices Question on using an ObjectDataSource in asp.net

    - by Lill Lansey
    ASP.NET, C#, VS 2008, SQL Server 2005. I am filling a DataTable in the data access layer with data from a SQL Server stored procedure. Best-practices question: is it OK to pass the DataTable to the business layer and use the DataTable from the business layer for an ObjectDataSource in the presentation layer, or should I transfer the data in the DataTable into a List and use the List for the ObjectDataSource in the presentation layer? If I should transfer the data to a List, should that be done in the data access layer or the business layer? Does it make a difference if the data needs to be edited before being displayed?
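
    If the List route is chosen, the mapping itself is small. A minimal sketch (the Customer type and column names are hypothetical) of a business-layer helper that projects the DataTable returned by the DAL into a typed list that an ObjectDataSource can bind to:

        using System;
        using System.Collections.Generic;
        using System.Data;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class CustomerMapper
        {
            // Project each DataRow into a POCO that the presentation layer can bind to.
            public static List<Customer> ToCustomerList(DataTable table)
            {
                var customers = new List<Customer>();
                foreach (DataRow row in table.Rows)
                {
                    customers.Add(new Customer
                    {
                        Id = Convert.ToInt32(row["CustomerId"]),
                        Name = Convert.ToString(row["Name"])
                    });
                }
                return customers;
            }
        }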

    Read the article

  • Does my API design violate RESTful principles?

    - by peta
    Hello everybody, I'm currently designing (or trying to design) a RESTful API for a social network, but I'm not sure if my current approach still accords with RESTful principles. I'd be glad if some brighter heads could give me some tips. Suppose the following URI represents the name field of a user account:
        people/{UserID}/profile/fields/name
    But there are almost a hundred possible fields, so I want the client to create its own field views or use predefined ones. Let's suppose that the following URI represents a predefined field view that includes the fields "name", "age", "gender":
        utils/views/field-views/myFieldView
    And because field views are a kind of higher-level logic, I don't want to mix support for field views into the "people/{UserID}/profile/fields" resource. Instead I want to do the following:
        utils/views/field-views/myFieldView/{UserID}
    Though Leonard Richardson & Sam Ruby state in their book "RESTful Web Services" that a RESTful design is somehow like an "extreme object oriented" approach, I think that my approach is object oriented and therefore accords with RESTful principles. Or am I wrong? If not: are such "object oriented" approaches generally encouraged when used with care and in order to avoid query-based REST-RPC hybrids? Thanks for your feedback in advance, peta

    Read the article

  • How can we find out how much memory a processor supports?

    - by Zia ur Rahman
    I have just started assembly language programming, and in the first lecture our teacher told us about the Intel 8080 and Intel 8085 and said there was 64K of memory with these processors. Now I want to know how we find this amount of memory for a specific processor. For example, I have a 1.8 GHz processor; how can I find out the amount of memory that can be used with it? What I am trying to ask is: tell me the method by which we can find out this amount of memory.

    Read the article

  • Do you know any alternative to NDepend for architects?

    - by ifesdjeen
    Hi! Do you know any software similar to NDepend? I got it just recently and found it very useful. It helped me a lot, but for now I don't have the possibility to buy the professional version. So, is there any alternative (maybe open source)? Preferably free, but not necessarily; maybe with a price a bit more fitting for a single developer rather than a team. Requirements for this software (so far):
    - Build dependency diagrams
    - Retrieve code metrics
    - Display comments coverage

    Read the article

  • design business class for unit test

    - by Mauro Destro
    I'm trying to clean up my code to make it unit-testable. I know that unit tests should be created while coding, but... I have to do it now, with the code completed. My business class is full of methods with a similar implementation, like:
        var rep = new NHrepository<ModelClass1>(Session);
        rep.Where(x => x.Field1 == 1).ToList();
    The first error (from my point of view) is that I shouldn't use "new" but should instead use DI and add an INHrepository modelClass1Repository to the ctor parameters. What if my class has two or more repositories for different model classes? Must each one be a ctor parameter? Or is the business class perhaps not built according to the separation of concerns principle?
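
    A minimal sketch of the constructor-injection shape described above, with hypothetical interface and model names: the business class receives repository abstractions instead of newing them up, so a unit test can pass in a fake for each one.

        using System.Collections.Generic;
        using System.Linq;

        public class ModelClass1 { public int Field1 { get; set; } }
        public class ModelClass2 { public string Name { get; set; } }

        // Abstraction over the NHibernate repository so tests can substitute a fake.
        public interface INHRepository<T>
        {
            IQueryable<T> Query();
        }

        public class BusinessClass
        {
            private readonly INHRepository<ModelClass1> modelClass1Repository;
            private readonly INHRepository<ModelClass2> modelClass2Repository;

            // Each repository the class depends on is declared as a ctor parameter.
            public BusinessClass(
                INHRepository<ModelClass1> modelClass1Repository,
                INHRepository<ModelClass2> modelClass2Repository)
            {
                this.modelClass1Repository = modelClass1Repository;
                this.modelClass2Repository = modelClass2Repository;
            }

            public List<ModelClass1> GetByField1(int value)
            {
                return modelClass1Repository.Query()
                                            .Where(x => x.Field1 == value)
                                            .ToList();
            }
        }

    If the ctor starts to accumulate many repositories, that is often a hint that the class covers more than one concern and could be split.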

    Read the article

  • Usage of open source libraries in high governance and risk-averse large organizations (banks, financ

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source dependencies (and also tools)? Many staff I've encountered have little or no exposure to open source/open systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. I'm also interested in any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Architectural advice on connecting multiple diverse sites into a single community.

    - by Aleksandar
    Hi SO, I've been given the task of connecting multiple sites belonging to the same client into a single network, so I would like some architectural advice on connecting these sites into a single community. The sites include:
    1. An Invision Power Board forum (the most important site)
    2. 3 custom-made CMSs (changes to the code are allowed)
    3. 1 Drupal site
    4. 3-4 WordPress blogs
    The requirements are as follows:
    1. Connect all users of all sites into a single administrable entity, with the ability to change permissions, ban users, etc.
    2. Later on, based on this implementation, I have to implement "Facebook-like" chat, which will be available to all users regardless of where they log in.
    I have a few ideas in mind on how to go about this, but I would like to hear from people with more experience and expertise than myself. Cheers!

    Read the article

  • 16-bit processor, memory addressing and memory cells

    - by Zia ur Rahman
    Suppose the accumulator register of the processor is 16 bits wide; we can then call this processor a 16-bit processor, that is, a processor that supports 16-bit addressing. Now my question is: how can we calculate the number of memory cells that can be addressed with 16-bit addressing? According to my calculation, 2 to the power 16 becomes 65055, which means the memory has 65055 cells. Now if we take 1 KB = 1000 bytes, this becomes 65055/1000 = 65.055, which means that 65 kilobytes of memory can be used with a processor having 16-bit addressing. If instead we take 1 KB = 1024 bytes, this becomes 65055/1024 = 63.5, meaning that 63 kilobytes of memory can be used with this processor. But people say that 64 kilobytes of memory can be used. Now tell me, am I right or wrong, and why am I wrong? Why do people say that 64 KB of memory can be used with a processor having 16-bit addressing?
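
    For reference, the arithmetic worked through (note that 2 to the power 16 is 65,536, not 65,055):

        2^16           = 65,536 addressable cells
        65,536 / 1,000 = 65.536  (about 65.5 KB if 1 KB = 1,000 bytes)
        65,536 / 1,024 = 64      (exactly 64 KB if 1 KB = 1,024 bytes)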

    Read the article

  • Entity Framework in n-layered application - Lazy loading vs. Eager loading patterns

    - by Marconline
    Hi all. This question doesn't let me sleep; I've been trying to find a solution for a year but... still nothing has come to mind. Probably you can help me, because I think this is a very common issue. I have an n-layered application: presentation layer, business logic layer, model layer. Suppose for simplicity that my application contains, in the presentation layer, a form that allows a user to search for a customer. The user fills in the filters through the UI and clicks a button. Something happens, and the request arrives from the presentation layer at a method like CustomerSearch(CustomerFilter myFilter). The business logic layer keeps it simple: it creates a query on the model and gets back results. Now the question: how do you face the problem of loading data? I mean, the business logic layer doesn't know that that particular method will be invoked just by that form, so I think it doesn't know whether the requesting form needs just the Customer objects back, or the Customer objects with the linked Order entities. Let me explain better: our form just wants to list customers, searching by surname. It has nothing to do with orders. So the business logic query will be something like:
        (from c in ctx.CustomerSet where c.Name.Contains(strQry) select c).ToList();
    and this works correctly. Two days later your boss asks you to add a form that lets you search for customers like the other one, but you also need to show the total count of orders created by each customer. Now I'd like to reuse that query and add the piece of logic that attaches (includes) the orders and gets that back. How would you handle this request? Here is the best idea I've had so far; I'd like to hear from you: my CustomerSearch method in the BLL doesn't create the query directly but passes through private extension methods that compose the ObjectQuery, like:
        private ObjectQuery<Customer> SearchCustomers(this ObjectQuery<Customer> qry, CustomerFilter myFilter)
    and
        private ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry)
    but this doesn't convince me, as it seems too complex. Thanks, Marco
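
    For what it's worth, a rough sketch of that composable-query idea (the Customer entity and the CustomerFilter shape are taken from the question; the surname property and class names are assumptions, and in C# extension methods must live in a static class, so they are declared internal static rather than private):

        using System.Data.Objects;
        using System.Linq;

        public class CustomerFilter
        {
            public string Surname { get; set; }
        }

        internal static class CustomerQueryExtensions
        {
            // Apply the search filter. The cast back to ObjectQuery<Customer> relies on
            // LINQ to Entities queries being ObjectQuery instances under the covers.
            internal static ObjectQuery<Customer> SearchCustomers(
                this ObjectQuery<Customer> qry, CustomerFilter myFilter)
            {
                return (ObjectQuery<Customer>)qry.Where(c => c.Name.Contains(myFilter.Surname));
            }

            // Eager-load the Orders navigation property only when the caller needs it.
            internal static ObjectQuery<Customer> IncludeOrders(this ObjectQuery<Customer> qry)
            {
                return qry.Include("Orders");
            }
        }

        // In the BLL (sketch):
        //   var customers  = ctx.CustomerSet.SearchCustomers(filter).ToList();
        //   var withOrders = ctx.CustomerSet.IncludeOrders().SearchCustomers(filter).ToList();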

    Read the article

  • Validation in n-tier asp.net mvc applications

    - by sTodorov
    Dear Stack Overflow gurus, I am looking for some practical/theoretical information regarding best practices for validation in n-tier ASP.NET MVC applications. I am working on a .NET application divided into the following layers:
    - UI: MVC 3
    - BLL: all business rules; decoupled from the data access and UI layers through interfaces
    - DAL: data access with the repository pattern, EF4 and POCOs
    Now, I am looking for a nice, clean and transparent way to specify my validation rules. Here are some thoughts on the matter so far: UI validation should only be responsible for user input and its validity; BLL validation should handle the validity of the data with regard to the application's business rules. My main concern is how to bind the BLL and UI validation in the most efficient way. One thing I would like to avoid is having the UI check a collection of validation results and manually add errors to the ModelState. Furthermore, I do not want to pass the ModelState to the BLL to be populated there. I will appreciate any thoughts on the matter. P.S. Should this question be marked as a discussion?

    Read the article

  • How to implement List, Set, and Map in a null-free design?

    - by Pyrolistical
    It's great when you can return a null/empty object in most cases to avoid nulls, but what about Collection-like objects? In Java, Map returns null if the key in get(key) is not found in the map. The best way I can think of to avoid nulls in this situation is to return an Entry<T> object, which is either an EmptyEntry<T> or contains the value T. Sure, we avoid the null, but now you can get a class cast exception if you don't check whether it's an EmptyEntry<T>. Is there a better way to avoid nulls in Map's get(K)? And for argument's sake, let's say this language doesn't even have null, so don't say "just use nulls".
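
    One way to picture the idea (sketched in C# purely as an illustration, with hypothetical names): make the lookup return an explicit "maybe" value whose presence is checked through its own API, so there is neither a null nor a cast to get wrong.

        using System.Collections.Generic;

        public struct Maybe<T>
        {
            private readonly bool hasValue;
            private readonly T value;

            private Maybe(T value)
            {
                this.value = value;
                this.hasValue = true;
            }

            public static Maybe<T> Some(T value) { return new Maybe<T>(value); }
            public static Maybe<T> None { get { return new Maybe<T>(); } }

            public bool HasValue { get { return hasValue; } }

            // The caller supplies the fallback; no null and no cast involved.
            public T GetOrElse(T fallback) { return hasValue ? value : fallback; }
        }

        public static class MapExtensions
        {
            // Null-free lookup: absence is represented by Maybe<TValue>.None.
            public static Maybe<TValue> Find<TKey, TValue>(
                this IDictionary<TKey, TValue> map, TKey key)
            {
                TValue value;
                return map.TryGetValue(key, out value)
                    ? Maybe<TValue>.Some(value)
                    : Maybe<TValue>.None;
            }
        }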

    Read the article

  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a 3rd party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they hit our application servers. Quick diagram (proxies are in place due to security requirements):
        [ACE load balancer] - [2 proxies] - [Application servers]
    or maybe
        [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application servers]
    My idea was to set up the load balancers in active-passive mode, to force all messages to use one proxy, and then both proxies would hit a second load balancer, also configured active-passive, to hit one application server. Whilst the above is not ideal, it does give me resilience, and once a message is in my app servers, I enter a stateless world and load balance across both nodes of my cluster. However, I am concerned that even a single proxy could send messages out of order; perhaps if 2 messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open source proxy (mod_proxy?) that can be easily configured to just pass messages through, and that guarantees to send the messages through in the order they are received? If so, which one, and links to how I should configure it would be great. In fact, any links to articles on avoiding out-of-order messages using hardware would be gratefully received. Thanks. PS: for those that are interested, the app is a Java Spring Integration application, currently on an application server.

    Read the article

  • How to leverage concurrency checking with EF 4.0 POCO Self Tracking Entities in a N-Tier scenario?

    - by Mark Lindell
    I'm using VS2010 RC with the POCO self-tracking T4 templates. In my WCF update service method I am using something similar to the following:
        using (var context = new MyContext())
        {
            context.MyObjects.ApplyChanges(myObject);
            context.SaveChanges();
        }
    This works fine until I set ConcurrencyMode=Fixed on the entity, and then I get an exception. It appears as if the context does not know about the previous values, as the SQL statement is using the changed entity's value in the WHERE clause. What is the correct approach when using ConcurrencyMode=Fixed?
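
    For context, a rough sketch (not a full answer, and the resolution policy shown is only one option) of how the conflict surfaces on the service side once ConcurrencyMode=Fixed is in effect: SaveChanges throws an OptimisticConcurrencyException, which the service can catch and either report back as a fault or resolve before retrying. MyContext/MyObjects are the names from the question; the method name is hypothetical.

        using System.Data;          // OptimisticConcurrencyException
        using System.Data.Objects;  // RefreshMode

        public void UpdateMyObject(MyObject myObject)
        {
            using (var context = new MyContext())
            {
                context.MyObjects.ApplyChanges(myObject);
                try
                {
                    context.SaveChanges();
                }
                catch (OptimisticConcurrencyException)
                {
                    // Another caller changed the row after this entity's original values
                    // were captured. One possible policy: keep the client's values
                    // (last writer wins) and retry the save once.
                    context.Refresh(RefreshMode.ClientWins, myObject);
                    context.SaveChanges();
                }
            }
        }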

    Read the article

  • Application Architect vs. Systems Architect vs. Enterprise Architect?

    - by iaman00b
    So many buzzwords. Not sure if I need to start playing BS Bingo or not, and I'm not trying to be cynical, but I've heard many people with these various titles. There never seems to be a clear delineation between the three, or there's a lot of domain crossover between them. Actually, another one I've seen while looking around here on Stack Overflow is "Solutions Architect" as well, but that one doesn't seem to be so prevalent in other places. There are questions here and there with vague answers, but I'd like definitive answers to this. Please assume I'm still relatively new to software stuff and that I'm trying to map out a career path. Oh, and please be gentle folks; this most definitely is not a duplicate question. Neither is it an aggregate. So kindly leave it alone. Xp

    Read the article

  • MVC architectural question - Where should payment processing go?

    - by Keltex
    This question is related to my ASP.NET MVC 2 development, but it could apply to any MVC environment; it is a question of where the logic should go. Let's say I have a controller that takes an online payment, such as in a shopping cart application, and I have the method that accepts the customer's credit card information:
        public class CartController : Controller
        {
            CartRepository cartRepository = new CartRepository();

            [HttpPost]
            public ActionResult Payment(PaymentViewModel rec)
            {
                if (!ModelState.IsValid)
                {
                    return View(rec);
                }
                // process payment here
                return RedirectToAction("Receipt");
            }
        }
    At the "process payment here" comment, where should the payment processing be handled:
    - In the controller?
    - By the repository?
    - Someplace else?
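
    As an illustration of the "someplace else" option only (all of the added names here are hypothetical): the controller hands the card details to an injected payment service and deals just with the result, keeping gateway-specific logic out of both the controller and the repository.

        using System.Web.Mvc;

        public interface IPaymentService
        {
            PaymentResult Charge(PaymentViewModel rec);
        }

        public class PaymentResult
        {
            public bool Succeeded { get; set; }
            public string Error { get; set; }
        }

        public class CartController : Controller
        {
            private readonly IPaymentService paymentService;

            public CartController(IPaymentService paymentService)
            {
                this.paymentService = paymentService;
            }

            [HttpPost]
            public ActionResult Payment(PaymentViewModel rec)
            {
                if (!ModelState.IsValid)
                {
                    return View(rec);
                }

                var result = paymentService.Charge(rec);
                if (!result.Succeeded)
                {
                    // Surface the gateway error back on the payment form.
                    ModelState.AddModelError("", result.Error);
                    return View(rec);
                }

                return RedirectToAction("Receipt");
            }
        }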

    Read the article
