Search Results

Search found 22716 results on 909 pages for 'network architecture'.

Page 216/909 | < Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >

  • Rewriting Live TCP Streams

    - by user213060
    I want to rewrite TCP/IP streams. Ettercap's etterfilter command lets you perform simple live replacements of TCP/IP data based on fixed strings or regexes (example: http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833). I would like to rewrite streams with my own filter program instead of just simple string replacements. Does anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something? Thanks!
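
    One way to experiment with custom rewriting is a small user-space TCP proxy that sits between client and server and passes each chunk of data through your own filter function. The sketch below is in Java with made-up names (rewrite(), the host and port numbers are placeholders), and unlike Ettercap it is not transparent interception: you have to point the client at the proxy or redirect traffic to it yourself.

        import java.io.*;
        import java.net.*;

        // Minimal rewriting TCP proxy sketch: listens locally, connects to the real
        // server, and pipes bytes both ways, running one direction through a filter.
        public class RewritingProxy {
            // Hypothetical filter: replace with your own rewriting logic.
            static byte[] rewrite(byte[] data, int len) {
                return new String(data, 0, len).replace("foo", "bar").getBytes();
            }

            public static void main(String[] args) throws IOException {
                try (ServerSocket listener = new ServerSocket(8080)) {
                    while (true) {
                        Socket client = listener.accept();
                        Socket server = new Socket("real-server.example.com", 80);
                        pump(client.getInputStream(), server.getOutputStream(), true);  // client -> server (rewritten)
                        pump(server.getInputStream(), client.getOutputStream(), false); // server -> client (unchanged)
                    }
                }
            }

            // Copies one direction of the stream on its own thread.
            static void pump(InputStream in, OutputStream out, boolean filter) {
                new Thread(() -> {
                    byte[] buf = new byte[4096];
                    try {
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            byte[] data = filter ? rewrite(buf, n) : java.util.Arrays.copyOf(buf, n);
                            out.write(data);
                            out.flush();
                        }
                    } catch (IOException ignored) {
                        // connection closed; let the thread end
                    }
                }).start();
            }
        }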

    Read the article

  • How to avoid injecting dependencies into an object so that it can pass them on?

    - by Pheter
    I am interested in applying dependency injection to my current project, which uses the MVC pattern. My controllers call the models and therefore need to inject the dependencies into the models. To do this, the controller must have the dependencies (such as a database object) in the first place. The controller doesn't itself use some of these dependencies (such as the database object), so I feel it shouldn't be given them. However, it has to have them if it is to inject them into the model objects. How can I avoid having dependencies injected into an object just so that it can pass them on? Doing so feels wrong and can result in many dependencies being injected into an object. Edit: I am using PHP.
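
    To make the pass-through problem concrete, here is a minimal sketch (in Java rather than PHP, with hypothetical names) of a controller that receives a database object only so it can hand it to the model it creates. One common way out is to inject something that already knows how to build the model (a factory, or the container itself), so the controller never sees the database dependency at all.

        // Hypothetical illustration of the "pass-through dependency" smell:
        // the controller never uses the Database itself, it only forwards it.
        class Database { /* connection details omitted */ }

        class OrderModel {
            private final Database db;
            OrderModel(Database db) { this.db = db; }
            // ... queries using db ...
        }

        class OrderController {
            private final Database db; // held only to pass on

            OrderController(Database db) {
                this.db = db;
            }

            OrderModel loadModel() {
                return new OrderModel(db); // the controller's only use of db
            }
        }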

    Read the article

  • Writing to Socket outputStream w/o closing it

    - by Eyal
    Hi, I'd like to write some messages to the server. Each time, just for the transmitting, I'm closing the outputStream and reopening it when I have to send the next message:

        os.write(msgBytes);
        os.write("\r\n".getBytes());
        os.flush();
        os.close();

    How can I keep this Socket's OutputStream, os, open and still be able to send the messages? Thanks.
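
    A minimal sketch of the usual pattern: on a Socket, closing the OutputStream also closes the socket itself, so flush after each message and close only when the whole conversation is over. The class and method names here (MessageSender, sendMessage, host/port) are illustrative.

        import java.io.OutputStream;
        import java.net.Socket;

        public class MessageSender implements AutoCloseable {
            private final Socket socket;
            private final OutputStream os;

            public MessageSender(String host, int port) throws Exception {
                socket = new Socket(host, port);
                os = socket.getOutputStream(); // obtained once, kept open
            }

            // Send a message without closing the stream; flush() pushes it out.
            public void sendMessage(byte[] msgBytes) throws Exception {
                os.write(msgBytes);
                os.write("\r\n".getBytes());
                os.flush(); // do NOT call os.close() here; that would close the socket
            }

            @Override
            public void close() throws Exception {
                socket.close(); // close only when all messages have been sent
            }
        }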

    Read the article

  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same object that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently.

    Instead, we'd like the first request to proxy through to the origin server, the rest of the requests to queue at the Squid layer, and then all be fulfilled by Squid once the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it.

    We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g. the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter, to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out.

    Thanks for looking over my noob question! Oliver
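
    For what it's worth, the behavior described here is often called "collapsed forwarding." If your Squid version supports it (the directive exists in the 2.6/2.7 line and again from 3.5 onward, but was dropped in between), the configuration is a one-liner; treat this as a sketch to check against your version's documentation rather than a guaranteed fix.

        # squid.conf: merge concurrent misses for the same URL into a single
        # request to the origin server (availability depends on Squid version)
        collapsed_forwarding on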

    Read the article

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, vs. each repository having its own unique interface and set of methods. For example, a generic IRepository interface might look like this (taken from this answer):

        public interface IRepository : IDisposable
        {
            T[] GetAll<T>();
            T[] GetAll<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter);
            T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
            void Delete<T>(T entity);
            void Add<T>(T entity);
            int SaveChanges();
            DbTransaction BeginTransaction();
        }

    Each repository would implement this interface (e.g. CustomerRepository : IRepository, ProductRepository : IRepository, etc.). The alternative that we've followed in prior projects would be:

        public interface IInvoiceRepository : IDisposable
        {
            EntityCollection<InvoiceEntity> GetAllInvoices(int accountId);
            EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate);
            InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated);
            InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); // unique
            InvoiceEntity CreateInvoice();
            InvoiceLineEntity CreateInvoiceLine();
            void SaveChanges(InvoiceEntity invoice);            // handles inserts or updates
            void DeleteInvoice(InvoiceEntity invoice);
            void DeleteInvoiceLine(InvoiceLineEntity invoiceLine);
        }

    In the second case, the expressions (LINQ or otherwise) would be entirely contained in the repository implementation; whoever is implementing the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Wouldn't this mean easy-to-mess-up LINQ code is being duplicated in many cases?

    For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc.). That seems much cleaner than writing the following in multiple places:

        rep.GetSingle(x => x.AccountId == someId && x.InvoiceDate == someDate.Date);

    The only disadvantage I see to the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the service classes. What am I missing?

    Read the article

  • How can I use a complex type or multi-type class? Is it a generic collection?

    - by programmerist
    I need a complex return type. I have four possible return types, so COMPLEXTYPE must be able to stand in for Company, Muayene, Radyoloji, and Satis, because I return different data depending on a switch case. How can I do this? Maybe I need generic collections? Here is the code:

        public class GenoTipController
        {
            public COMPLEXTYPE Generate(DataModelType modeltype)
            {
                _Company company = null;
                _Muayene muayene = null;
                _Radyoloji radyoloji = null;
                _Satis satis = null;

                switch (modeltype)
                {
                    case DataModelType.Radyoloji:
                        radyoloji = new Radyoloji();
                        return radyoloji;
                    case DataModelType.Satis:
                        satis = new Satis();
                        return satis;
                    case DataModelType.Muayene:
                        muayene = new Muayene();
                        return muayene;
                    case DataModelType.Company:
                        company = new Company();
                        return company;
                    default:
                        break;
                }
            }
        }

        public class CompanyView
        {
            public static List GetPersonel()
            {
                GenoTipController controller = new GenoTipController();
                _Company company = controller.Generate(DataModelType.Company);
                return company.GetPersonel();
            }
        }

        public enum DataModelType { Radyoloji, Satis, Muayene, Company }

    Read the article

  • How to choose programming language for projects?

    - by bdhar
    This is a question I constantly encounter when I attend technical forums, discussions, or interviews. There is a similar article, but it focuses on business merits as well. What I am looking for is a guide (not a checklist like this one, which is abstract and not very accurate) that helps an architect choose the programming language with which to implement a requirement. Is there a book or article available for this purpose?

    Read the article

  • What kind of data processing problems would CUDA help with?

    - by Chris McCauley
    Hi, I've worked on many data-matching problems, and very often they boil down to running many instances of CPU-intensive algorithms, such as Hamming or edit distance, quickly and in parallel. Is this the kind of thing that CUDA would be useful for? What kinds of data processing problems have you solved with it? Is there really an uplift over a standard quad-core Intel desktop? Chris

    Read the article

  • How can I interrupt a ServerSocket accept() method?

    - by lukeo05
    Hi, in my main thread I have a while(listening) loop which calls accept() on my ServerSocket object, then starts a new client thread and adds it to a Collection when a new client is accepted. I also have an Admin thread which I want to use to issue commands, like 'exit', which should shut down all the client threads, shut the Admin thread itself down, and shut down the main thread by setting listening to false. However, the accept() call in the while(listening) loop blocks, and there doesn't seem to be any way to interrupt it, so the while condition cannot be checked again and the program cannot exit! Is there a better way to do this? Or some way to interrupt the blocking method? Thanks!
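
    Two common ways to break out of a blocked accept(), sketched below with illustrative names: either close the ServerSocket from the admin thread (the blocked accept() then throws an exception), or set a timeout with setSoTimeout() so accept() wakes up periodically and re-checks the flag.

        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class Listener implements Runnable {
            private final ServerSocket serverSocket;
            private volatile boolean listening = true;

            public Listener(int port) throws Exception {
                serverSocket = new ServerSocket(port);
                serverSocket.setSoTimeout(1000); // accept() wakes up every second
            }

            @Override
            public void run() {
                while (listening) {
                    try {
                        Socket client = serverSocket.accept();
                        // hand client off to a new client thread here
                    } catch (SocketTimeoutException e) {
                        // no connection within the timeout; loop and re-check listening
                    } catch (Exception e) {
                        // thrown if another thread called shutdown() and closed the socket
                        break;
                    }
                }
            }

            // Called from the Admin thread on 'exit'.
            public void shutdown() throws Exception {
                listening = false;
                serverSocket.close(); // unblocks a pending accept() immediately
            }
        }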

    Read the article

  • Static libraries, dynamic libraries, DLLs, entry points, headers ... how to get out of this alive?

    - by tunnuz
    Hello, I recently had to program C++ under Windows for a university project, and I'm pretty confused about the static and dynamic library system: what the compiler needs, what the linker needs, how to build a library ... is there any good document about this out there? I'm pretty confused about the *nix library system as well (dylibs, the ar tool, how to compile them ...). Can you point me to a review document about current library techniques on the various architectures? Note: due to my poor knowledge this message could contain wrong concepts; feel free to edit it. Thank you. Feel free to add more references; I will add them to the summary.

    References: since most of you posted *nix- or Windows-specific references, I will summarize the best ones here. I will mark the Wikipedia one as the accepted answer, because it is a good starting point (and has references inside too) for getting introduced to this stuff.

        Program Library Howto (Unix)
        Dynamic-Link Libraries (from MSDN) (Windows)
        DLL Information (StackOverflow) (Windows)
        Programming in C (Unix)
        An Overview of Compiling and Linking (Windows)

    Read the article

  • Business entity: private instance VS single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order. The entity is used in multiple views, and each view handles a different domain or use case in the application: as an example, one manages orders while another digs into one order and displays additional data. If I use nHibernate (or any other ORM) and use one session/dataContext per view (or per DB action), I end up getting two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same. Why would you go for a single instance per entity vs. private instances per view or per use case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level: in the example above, I'd allow the user to change order details and give them the flexibility to not save the change. Here, synchronisation between views/use cases happens at the data-persistence level. What would your argument be?
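
    To make the "single instance per entity" option concrete, here is a minimal sketch (in Java, with hypothetical names) of a shared registry that hands every view the same Order instance for a given id, loading it only on the first request; the per-view alternative is simply to skip the registry and load a fresh instance in each view.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.function.Function;

        // Hypothetical shared-instance registry (a simple identity map).
        // All views asking for the same id receive the same Order object,
        // so change notifications and extra non-persistent state are shared.
        class OrderRegistry {
            private final Map<Long, Order> cache = new ConcurrentHashMap<>();
            private final Function<Long, Order> loader; // e.g. a call into the ORM session

            OrderRegistry(Function<Long, Order> loader) {
                this.loader = loader;
            }

            Order get(long orderId) {
                return cache.computeIfAbsent(orderId, loader);
            }
        }

        class Order {
            long id;
            // order details, change notification, etc.
        }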

    Read the article

  • Store html form on client or server while building?

    - by Brett
    Hi, I have a fairly complex HTML form enhanced via jQuery. It has multiple tabs, each containing things like an HTML form builder, uploads, and descriptions. There is a lot of data, and as the user flicks around the various tabs I'm thinking of posting the data to the server. For example, the form builder has about 10 properties for each field; as the user flicks between the various fields, an AJAX request saves the current values, then loads a new set for the field just clicked on. When the user hits save, the idea is that on the server all these bits and pieces come together and become the live version (I may store them as a temp version while the user is working away). So I guess my question is: for complex forms where the same fields are re-used within the one form, does anyone attempt to save all this data locally and upload it in one hit, or do most of you do little AJAX posts and compile it all when the final save button is hit? b

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?
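
    For comparison, here is a rough sketch of what the DAO in question might look like on top of JDO; the class and method names are illustrative, not from the original post. How the PersistenceManager lifecycle, fetch groups, and lazy loading interact with such a wrapper once the Book is handed back to the business layer is exactly the leak being asked about.

        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        // Hypothetical DAO wrapping the JDO calls for Book (defined above).
        public class BookDao {
            private final PersistenceManagerFactory pmf;

            public BookDao(PersistenceManagerFactory pmf) {
                this.pmf = pmf;
            }

            public Book getById(Object bookId) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    return pm.getObjectById(Book.class, bookId);
                } finally {
                    pm.close();
                }
            }

            public void save(Book book) {
                PersistenceManager pm = pmf.getPersistenceManager();
                try {
                    pm.makePersistent(book); // also persists the BookDescription map entries
                } finally {
                    pm.close();
                }
            }
        }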

    Read the article

  • FluentValidation + s#arp

    - by csetzkorn
    Hi, did someone implement something like this: http://www.jeremyskinner.co.uk/2010/02/22/using-fluentvalidation-with-an-ioc-container/ in S#arp? Thanks. Christian

    PS: I have made a start at using FluentValidation in S#arp. I have implemented a validator factory:

        public class ResolveType
        {
            private static IWindsorContainer _windsorContainer;

            public static void Initialize(IWindsorContainer windsorContainer)
            {
                _windsorContainer = windsorContainer;
            }

            public static object Of(Type type)
            {
                return _windsorContainer.Resolve(type);
            }
        }

        public class CastleWindsorValidatorFactory : ValidatorFactoryBase
        {
            public override IValidator CreateInstance(Type validatorType)
            {
                return ResolveType.Of(validatorType) as IValidator;
            }
        }

    I think I will use services which can be used by the controllers etc.:

        public class UserValidator : AbstractValidator<User>
        {
            private readonly IUserRepository UserRepository;

            public UserValidator(IUserRepository UserRepository)
            {
                Check.Require(UserRepository != null, "UserRepository may not be null");
                this.UserRepository = UserRepository;
                RuleFor(user => user.Email).NotEmpty();
                RuleFor(user => user.FirstName).NotEmpty();
                RuleFor(user => user.LastName).NotEmpty();
            }
        }

        public interface IUserService
        {
            User CreateUser(User User);
        }

        public class UserService : IUserService
        {
            private readonly IUserRepository UserRepository;
            private readonly UserValidator UserValidator;

            public UserService(IUserRepository UserRepository)
            {
                Check.Require(UserRepository != null, "UserRepository may not be null");
                this.UserRepository = UserRepository;
                this.UserValidator = new UserValidator(UserRepository);
            }

            public User CreateUser(User User)
            {
                UserValidator.Validate(User);
                ...
            }
        }

    Instead of putting concrete validators in the service, I would like to use the above factory somehow. Where and how do I register it in S#arp (Global.asax)? I believe S#arp is geared towards the NHibernate validator; how do I deregister that? Thanks. Best wishes, Christian

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I try to stick to best practices from here and there to build my own working practices while observing conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another environment's automatically. I don't know about any of you, but as far as I'm concerned, I have real difficulty sticking to only one way of doing things and then developing. I mean, I learn something new every day while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, which proved to be very practical for transactional programming in process systems. It seems less practical for desktop applications, since a desktop application has plenty of variables to consider that you don't have to care about in transactional programming, where you're playing only with information data.

    As for my current project, I have: groups, organizational units, and users, which are all considered entries in Active Directory. This looks like a good candidate for generics, an approach also taken by Bart De Smet's LINQ to AD on CodePlex. He has a DirectorySource<T>, and to manage, say, groups, he instantiates a source with the proper type:

        var groups = new DirectorySource<Group>();

    This seems to be a very good way of doing it. Still, I seem to go from one pattern to another and can't stick strictly to one. While I'm aware that one must not stay with only one way of doing things, since each pattern offers certain advantages while also showing disadvantages under some usage conditions, I seem to want to develop with both patterns: a singleton Façade class with underlying factories that represent the subsystems: GroupsFactory, UsersFactory, OrganizationalUnitsFactory. Each of the factories offers the possible operations for its respective entity (group, user, OU).

    To make a very long story short, I often have plenty of ideas while developing, and this causes me some trouble, as I go from one idea to another, feeling completely lost after a while. I understand the advantages and disadvantages, and I have no trouble choosing one pattern over another depending on the situation. Nevertheless, when it comes to programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good. That is because I can't stand not doing something "perfect" the first time. The roles I play within the project are both project manager and programmer, and I am more comfortable in the project-manager, architectural, and analytical roles than in the developer's. Do any of you have good working habits to observe in project development? Thanks to you all! =)

    Read the article

  • Calling stored procedures and performance strategy

    - by Costa
    Hi, I find myself in a situation where I have to choose between either creating a new stored procedure in the database and writing the middle-layer code for it, losing some precious development time (the procedure is also likely to contain some joins), or using two existing stored procedures. The problem with the second approach is that I am making two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks

    Read the article

  • Considerations when architecting an application using Dependency Injection

    - by Dan Bryant
    I've begun experimenting with dependency injection (in particular, MEF) for one of my projects, which has a number of different extensibility points. I'm starting to get a feel for what I can do with MEF, but I'd like to hear from others who have more experience with the technology. A few specific cases: My main use case at the moment is exposing various singleton-like services that my extensions make use of. My Framework assembly exposes service interfaces and my Engine assembly contains concrete implementations. This works well, but I may not want to allow all of my extensions to have access to all of my services. Is there a good way within MEF to limit which particular imports I allow a newly instantiated extension to resolve? This particular application has extension objects that I repeatedly instantiate. I can import multiple types of Controllers and Machines, which are instantiated in different combinations for a Project. I couldn't find a good way to do this with MEF, so I'm doing my own type discovery and instantiation. Is there a good way to do this within MEF or other DI frameworks? I welcome input on any other things to watch out for or surprising capabilities you've discovered that have changed the way you architect.

    Read the article

  • Do you use an architectural framework for Flex/AIR development?

    - by Christophe Herreman
    Given that Flex is still a relatively young technology, there are already a bunch of architectural frameworks available for Flex/AIR (and Flash) development, the main ones being Cairngorm and PureMVC. The number of architectural frameworks is remarkable compared to other technologies. I was wondering how many of you use an architectural framework for Flex development. If so, why, or why not if you don't use any? To share my own experience and point of view: I have used Cairngorm (and ARP for Flash development) on a variety of projects and found that at times, we needed to write extra code just to fit into the framework, which obviously didn't feel right. Although I haven't used PureMVC on many occasions, I have the same gut feeling after looking at the example applications. Architectural frameworks equal religion in some way. Most followers are convinced that their framework is THE framework and are not open, or are very skeptical, when it comes to using other frameworks. (I also find myself hesitant and skeptical about checking out new frameworks, but that is mainly because I would rather wait until the hype is over.) In conclusion, I'm thinking that it is better to have a sound knowledge of patterns and practices that you can apply in your application instead of choosing a framework and sticking to it. There is simply no right or wrong, and I don't believe that there will ever be a framework that is considered the holy grail.

    Read the article

  • To Wrap or Not to Wrap: Wrapping Data Access in a Service Facade

    - by PureCognition
    For a while now, my team and I have been wrapping our data access layer in a web service facade (using WCF) and calling it from the business logic layer. Alternatively, we could simply use the repository pattern, where the business logic layer consumes the data access layer locally through an interface, and at any point in time we can switch things out for it to hit a service instead (if necessary). The question is: when is it a good time to wrap the data access layer in a service facade, and when isn't it? Right now, it seems like the main advantage is that other applications can consume the service, but if they are internal applications written in .NET then they can just consume the .NET assembly instead. Are there other advantages to having the DAL wrapped in a service that I am unaware of?

    Read the article

  • Sending series of images to display like a movie on iPhone

    - by unknownthreat
    Allow me to elaborate. On the server, we will have a program that takes data from the iPhone, processes it, and produces a series of images. Each time an image is generated, it is sent back to be displayed on the iPhone. I have done all of the above using UDP, OpenGL, and such. It works: the images are transferred to the iPhone and can be displayed, but it is slow. The image resolution is around 320 x 420 and we send the image pixel by pixel. This naive implementation leads to a slow framerate; I see around 2-3 frames per second. There are also some UDP packets dropped, which is expected. Is there any sort of compression method available for something like this? Are there any other methods that could make this better? NOTE: please don't just write "compression" as an answer, because we are aware that we will need to do it in some way.

    Read the article
