Search Results

Search found 324 results on 13 pages for 'robin orheden'.

  • is there any Open Source solution for Failover of incoming Traffic?

    - by sahil
    Hi, we have two ISPs, and both ISPs NAT their public IPs to the same web server IP. I want failover for incoming traffic; is there any open source solution? Can I do it by running two name servers, one on each ISP link? As far as I know, the primary and secondary name servers are queried in a round-robin fashion as long as both are live, and once one name server becomes unreachable, only the other will answer. So if I'm right, I think I can get incoming failover by running two name servers in my office. Waiting for your valuable response. Thanking you, Sahil
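
    A note on the client side of this: most browsers and TCP clients will try each address returned for a round-robin A record and fall through to the next one if the first is dead, though only after a timeout, and resolvers cache records for the TTL. A minimal Python sketch of that fallback behaviour, with the hostname as a placeholder:

        import socket

        def connect_with_fallback(host, port=80, timeout=5):
            # Resolve the name (round-robin DNS returns every A record) and try
            # each address in turn until one accepts the connection.
            last_error = None
            for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
                try:
                    return socket.create_connection(sockaddr[:2], timeout=timeout)
                except OSError as err:
                    last_error = err
            raise last_error

        # e.g. connect_with_fallback("www.example.com") keeps the site reachable
        # as long as at least one ISP address still answers.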

    Read the article

  • http server connectivity puzzle

    - by jpmartins
    I have been seeing a strange connection issue in the production environment. The setup has two IBM HTTP Servers (IHS) and a network IP load balancer in front of them (round-robin). One moment the system is working fine; the next, requests stop arriving at the IHS. A telnet directly to port 80 of the IHS connects successfully, but a connection to port 80 through the load balancer's IP fails! The puzzle deepens: the network admins say the load balancer is working fine. When we finally reboot the IHS servers, requests start flowing again... The situation has happened three times in the last month and no obvious pattern was found. Any debug ideas?
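
    One low-effort way to narrow this down the next time it happens is to log connectivity to the load-balancer VIP and to each IHS box in parallel, so you can see whether the VIP alone stops answering while the backends stay reachable. A rough Python sketch; the addresses are placeholders for the VIP and the two IHS servers:

        import socket, time, datetime

        TARGETS = {"lb-vip": "10.0.0.1", "ihs1": "10.0.0.11", "ihs2": "10.0.0.12"}  # placeholder addresses

        def probe(ip, port=80, timeout=5):
            # A plain TCP connect, roughly what the manual telnet test does.
            try:
                socket.create_connection((ip, port), timeout=timeout).close()
                return "ok"
            except OSError as err:
                return "FAIL (%s)" % err

        while True:
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            print(stamp, {name: probe(ip) for name, ip in TARGETS.items()})
            time.sleep(30)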

    Read the article

  • Munin Aggregate Graphs from several servers

    - by Sparsh Gupta
    I am using DNS round-robin load balancing and have divided my total traffic across multiple servers. Each server does around 300-400 req/second, but I am interested in an aggregate graph telling me the TOTAL requests per second served by our architecture. Is there any way I can do this? Right now each graph in Munin is a separate graph, as each one depicts a single server. I am using the configuration below, which doesn't work for me; does this configuration have errors?

    [TRAFFIC.AGGREGATED]
    update no
    requests.graph_title nGinx requests
    requests.graph_vlabel nGinx requests per second
    requests.draw LINE2
    requests.graph_args --base 1000
    requests.graph_category nginx
    requests.label req/sec
    requests.type DERIVE
    requests.min 0
    requests.graph_order output
    requests.output.sum \
        lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
        lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
        lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year they had one server hosted with one company. We recently acquired another server in a different location with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication - I plan on using a master-master replication setup, and rsync to sync the scripts and files - however I am at a standstill about how to configure the failover. Ideally I would like the two machines to accept requests, like round-robin DNS, however if one machine goes down I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks
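
    Before failing traffic over to the second site, it also helps to confirm that replication on the target master is actually caught up. A rough sketch using the PyMySQL driver (an assumption; any MySQL client works), to be run against each server of the master-master pair:

        import pymysql  # assumed driver: pip install pymysql

        def replication_healthy(host, user, password):
            # Ask the server for its replication status and make sure both the
            # IO and SQL threads are running before sending writes its way.
            conn = pymysql.connect(host=host, user=user, password=password)
            try:
                with conn.cursor(pymysql.cursors.DictCursor) as cur:
                    cur.execute("SHOW SLAVE STATUS")
                    status = cur.fetchone()
            finally:
                conn.close()
            return bool(status) and status["Slave_IO_Running"] == "Yes" \
                and status["Slave_SQL_Running"] == "Yes"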

    Read the article

  • Want to set up SASL/TLS authentication

    - by Naval
    I want to send mail from remote clients through my server (CentOS 5, 64-bit). For this I need SASL auth, but I have no idea what changes I have to make on the server and the clients. To make things clearer: my server's hostname/IP is test02.s80.in/176.67.172.209. I want to authenticate the remote clients vps2.smail.info and vps1.smail.info so they can deliver mail. Please help me with a systematic way to set up SASL/TLS authentication for these clients. I am using DNS load balancing (round-robin MX record lookup) for load balancing.
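
    Once the MTA on test02.s80.in is set up for SASL over TLS, the setup can be verified end to end from one of the client VPSes with a short Python script; the submission port, account name and addresses below are placeholders for whatever is configured on the server:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "relayuser@s80.in"          # placeholder sender
        msg["To"] = "someone@example.com"         # placeholder recipient
        msg["Subject"] = "SASL/TLS relay test"
        msg.set_content("test message sent via authenticated relay")

        with smtplib.SMTP("test02.s80.in", 587, timeout=30) as smtp:
            smtp.starttls()                        # upgrade the session to TLS first
            smtp.login("relayuser", "secret")      # SASL authentication over the TLS channel
            smtp.send_message(msg)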

    Read the article

  • Ways to do simple failover with one server and two IPs

    - by CrassHoppr
    The setup is one server (Windows 2008) at one location with two incoming connections. As the server has to interface with various on-site devices, and will have a small number of incoming connections, a data center is not an option, and instead cable/dsl connections must be used. The goal is that users visit https://service.site.com and are sent to either the primary IP address or a secondary IP if the primary is down. I've seen advice to use round robin DNS for this, but caching an IP for a downed interface is something I'd like to avoid. Is something like this possible with these constraints?
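
    With a single server and two WAN links, one workaround for the caching concern is to publish only the primary address with a very low TTL and have an external monitor swap the record to the secondary when the primary stops answering. A rough Python sketch of the monitor loop; the addresses are placeholders and update_dns stands in for whatever API the DNS provider exposes:

        import socket, time

        PRIMARY = "203.0.113.10"      # placeholder: primary WAN address
        SECONDARY = "198.51.100.20"   # placeholder: secondary WAN address

        def is_up(ip, port=443, timeout=5):
            try:
                socket.create_connection((ip, port), timeout=timeout).close()
                return True
            except OSError:
                return False

        def monitor(update_dns, interval=60):
            # update_dns(ip) is a hypothetical callback that repoints the
            # service.site.com A record (published with a short TTL) via the
            # DNS provider's API.
            while True:
                update_dns(PRIMARY if is_up(PRIMARY) else SECONDARY)
                time.sleep(interval)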

    Read the article

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'server A') IRC 'network', and thanks to the generosity of some friends I have been given a second server ('server B') that I can run an IRCd on in order to provide redundancy in case server A crashes. This is fine; I can set up round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to 'fail over' in case of a server failure? E.g., Server A starts off running the services, but suddenly crashes; Server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services on Server A). One solution that comes to mind is to write a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is in the channel. If it is, then all is well; if not, fail over. I would prefer not to have to code this myself though. We are currently using Unreal IRCd and Anope services on Linux.

    Read the article

  • SharePoint 2010 MySites - Simple explanation needed!

    - by Chris W
    I've been playing around with the 2010 beta for a couple of weeks, experimenting with topology options etc. I think I've got myself totally confused as to how it works, so if there are any SharePoint experts out there who can explain things in simple terms for me, I'd appreciate it! I want to set up a farm with 3 servers providing the content & MySites. I presume that the way to do this is to load balance or DNS round-robin traffic between the 3 servers. The bit where I'm confused is that the My Site Settings page asks for a specific My Site Host, hence all MySite traffic will be pushed to a single server even though we have 3 in the farm. If this host fails, I presume MySites will be unavailable. Is this right? How do I configure it so that access to MySites is load balanced across the 3 servers in the farm?

    Read the article

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got 2 nginx EC2 instances pointing to 2 Unicorn EC2 instances in a round-robin load balanced configuration. The two nginx instances are behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, which is in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?

    Read the article

  • Doesn't DNS diversity negatively affect performance? Why/how?

    - by cnst
    If you look at the press releases of various orgs that run the internet, you can see them praise the fact that they now run root server X in city Y, as if that magically makes everyone in city Y get all the relevant resolutions from the local server X, instead of going 200 ms across oceans and continents for resolutions. Similarly, the zones of some geographical domain names, like .ru, are mirrored not just within Europe but also, for example, in Hong Kong, which is no less than about 300 ms away from central Europe, since the traffic often crosses two oceans each way. Doesn't all of this negatively affect DNS performance? Isn't it more of a liability to have a diverse pool of geodispersed authoritative servers, especially if your target audience is quite geographically concentrated? Perhaps a better question is: are there any DNS resolvers that use something better than naive round robin for choosing which authoritative server to contact?
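
    On the last question: a resolver does not have to pick authoritative servers blindly; it can time its queries and keep preferring the server that answers fastest, the behaviour usually described as SRTT-based selection. A rough illustration of the idea in Python using the dnspython package (an assumed dependency):

        import time
        import dns.message
        import dns.query  # dnspython, assumed installed

        def fastest_auth_server(qname, server_ips, timeout=2.0):
            # Probe each authoritative server once and keep the lowest-latency
            # one; real resolvers smooth these timings over many queries.
            query = dns.message.make_query(qname, "A")
            timings = {}
            for ip in server_ips:
                start = time.monotonic()
                try:
                    dns.query.udp(query, ip, timeout=timeout)
                    timings[ip] = time.monotonic() - start
                except Exception:
                    timings[ip] = float("inf")   # unreachable or timed out
            return min(timings, key=timings.get)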

    Read the article

  • What is the best way to auto failover to backup WAN link for web server

    - by user66735
    Hi, I am looking for the best way to ensure my server (application) remains available to all my users (on the web/LAN/WAN) when my primary ISP link fails. My server is behind a firewall on which both my primary & secondary links land. I have already assigned multiple IPs (both ISPs' static IPs) to the 'A' record (host.example.com) in DNS. However, in a round-robin scenario, is there a way I can ensure that my web users will never see a "cannot display web page" error? What are the better methods to achieve this?

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a Wordpress website with Nginx and Memcached. I have simple DNS round-robin balancing with A records pointing to both web servers. I've noticed the following entries in both web servers' access logs:

    192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
    192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
    192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
    192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
    192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin for Wordpress, pointing to the loopback address (127.0.0.1:11211) on each Wordpress installation. Is this because the web server is trying to access content that is cached on the other web server? Shall I add the IPs of both web servers to the W3 plugin on each site (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure if this is related to Memcached or maybe a configuration issue on the server itself? Regards

    Read the article

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per day per IP. My plan is to create a bunch of EC2 instances with elastic IPs to sidestep the limitation. I'm familiar with EC2 and am just interested in the configuration of the proxies and a software load balancer. I think I want to run a simple TCP Proxy on each instance and a software load balancer on the machine I will be requesting from. Something that allows the following to return a response from a different IP (round robin, availability, doesn't really matter..) eg. curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port Could anyone recommend a combination of software or even a link to an article that details a pleasing way to pull it off? (My client won't be curl but is proxy aware.. I'll be making the requests from a Ruby script..)
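
    As an illustration of the idea, the round-robin selection can also live in the requesting script itself rather than in a separate load balancer, by cycling through the proxy endpoints per request. A small Python sketch (the actual client here would be Ruby; the proxy addresses are placeholders), using the requests package:

        import itertools
        import requests  # assumed installed

        # Placeholder endpoints, one per EC2 instance / elastic IP.
        PROXIES = ["http://10.0.0.11:3128", "http://10.0.0.12:3128", "http://10.0.0.13:3128"]
        rotation = itertools.cycle(PROXIES)

        def fetch(url):
            # Each call goes out through the next proxy in the rotation, so the
            # rate-limited API sees a different source address per request.
            proxy = next(rotation)
            return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

        # e.g. fetch("http://www.bbc.co.uk") mirrors the curl -x example above.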

    Read the article

  • Who's Talking about Oracle ADF Essentials 11.1.2.3: News & Blogs?

    - by Dana Singleterry
    With the recent release of Oracle ADF Essentials - the free core of Oracle ADF - numerous online news sources, developers, Oracle ACEs, and Oracle PMs have been furiously blogging and writing articles about the news with excitement. Here is some of the messaging, all in one place for your review.

    News coverage on Oracle ADF Essentials 11.1.2.3:
    - Computerworld, ITworld and InfoWorld: Oracle releases free ADF Essentials
    - eWEEK: Oracle Launches Free Version of Application Development Framework
    - IT Business Edge: Oracle Starts to Embrace App Servers
    - CMSWire: Oracle Debuts Free Version of its ADF Application Building Tools
    - InfoQ: Oracle Launches Free Version of Application Development Framework
    - Computer Business Review: Oracle unveils Application Development Framework Essentials
    - The Register: Oracle woos open sourcers with free Java web framework

    Blog entries on Oracle ADF Essentials 11.1.2.3:
    - Oracle ADF Core Functionality Now Available for Free - Presenting Oracle ADF Essentials, by the JDeveloper PMs Blog
    - ADF Essentials - Available for free and certified on GlassFish!, by delabassee
    - JDeveloper 11.1.2.3.0 is out together with Oracle ADF Essentials, by Timo Hahn
    - ADF Essentials (A Free Version) Released, by Chad Thompson
    - ADF Essentials - Quick Technical Review, by Andrejus Baranovskis
    - Develop and Deploy ADF applications free of charge using the new ADF Essentials, by Lucas Jellema
    - Free! ADF Essentials!, by Angus Myles
    - Oracle ADF Essentials, by Stijn Haus
    - Free Version of Oracle ADF Framework available, by Robin Muller-Bady
    - ADF Essentials Release, by Markus Klenke
    - Free version of Oracle ADF - ADF Essentials, by Emilio Petrangeli
    - Oracle ADF Essentials - finally free, by Jakub Pawlowski
    - Oracle ADF Essentials, a Free Version of ADF, by Jake Kuramot

    Read the article

  • The Retail Week Conference 2012 - Interview with Paul Dickson

    - by user801960
    Recently we attended the Retail Week Conference at the Hilton London Metropole Hotel in London. The conference proved to be an inspirational meeting of retail minds, and the insight gained from both the speakers and the other delegates was invaluable. In particular we enjoyed hearing from Charlie Mayfield, Chairman at John Lewis Partnership, about understanding how the consumer is viewing the ever-changing world of retail; a session on how to encourage brand-loyal multichannel activities from Robin Terrell of House of Fraser with Alan White of the N Brown Group, Vince Russell from The Cloud and Lucy Neville-Rolfe from Tesco; and a fascinating session from Tim Steiner, Chief Executive of Ocado, about how the business makes it as easy as possible for consumers to shop on its various platforms, which included some surprising usage statistics. Oracle's own Vice President of Retail, Paul Dickson, also held a session with Richard Pennycook, Group Finance Director at Morrisons, about the role of technology in accelerating and supporting the business strategy. Morrisons' 'Evolve' programme takes a little-and-often approach to updating its technology infrastructure to spread cost and keep the adoption process gentle for staff, and the session explored how the process works and how Oracle's technology underpins the programme to optimise their operations using actionable insight. We had a quick chat with Paul Dickson at the session to get his thoughts on the programme - the video is below. We also filmed the whole presentation, so keep checking back on this blog if you're interested in seeing it.

    Read the article

  • Webinar, June 27: Application Intelligence and Connected Devices

    - by terrencebarr
    Oracle and Beecham Research have recently conducted a market survey on the use of connected devices for M2M & Internet of Things (IoT) applications and new trends. On June 27, 9 am ET, the first session in this webinar series addresses intelligence in connected devices. Join Peter Utzschneider from Oracle and Robin Duke-Woolley of Beecham Research as they discuss the findings from this survey and the implications for the M2M & IoT connected devices market:
    - What are the key business drivers of your connected devices program?
    - To what extent do you expect the intelligence required for M2M & IoT applications to change?
    - Would these changes occur at the network edge, at the data center, or both?
    - What are the impacts of these changes on ISVs and device manufacturers?
    - What are the opportunities for other M2M & IoT players?

    To attend, please register for free or click on the image. Cheers, – Terrence Filed under: Embedded Tagged: Connected, devices, iot, Java Embedded, Java ME Embedded, M2M, webinar

    Read the article

  • EVENT RECAP: Oracle Health Sciences Conference

    - by cwarticki
    Monaco served as an intense location for this year's Oracle Health Sciences User Group conference. It was a "Grand Prix" event with nearly 200 attendees from all over the world. In a country famous for high-performance race cars, luxury superyachts and lifestyles of the rich & famous, the conference was very Ellison-esque. I think the superyachts were being paired with Exadata. The OSHUG staff were fantastic. Robin and Taylor (pictured left) from Drohan Management took care of all the details and were wonderful to get to know. I met with some real Oracle loyalists, including Stan Sachar, I.T. Manager for Westat and the Focus Group co-chair for Admin Configuration Mgmt (ACM). Westat was an early adopter of Oracle Clinical for clinical trial projects, with installations in 1997-98. I had a chance to talk with Stan during the reception, and he is an Oracle advocate and evangelist; he has invested his career in using Oracle products. (Stan Sachar pictured right with Dick Wolnick from Oracle, on left.) I also met with Mirco Becker from Grunenthal GmbH. He's been working with the Argus product for over 6 years and is a big user of Oracle Support. Mirco attended my support best practices session and was actively engaged, asking several questions. He's excited to adopt those best practices and work more efficiently and effectively with Support. Finally, I thank the many who attended my session. I admit the beautiful weather and the view of the ocean were a distraction, but nonetheless my mission was to provide you with all the necessary support resources for Health Sciences users. You will find a copy of my presentation on the OSHUG website. Bon voyage, Monaco. Thanks for the memories. I'll see everyone next year in Miami. -Chris Warticki, Global Customer Management

    Read the article

  • SOA Cloud and Service Technology Symposium December 4-5th 2013 in Mexico

    - by JuergenKress
    Do you want to attend the SOA, Cloud and Service Technology Symposium, December 4-5th 2013 in Mexico? Please feel free to use the promotional code "Q14CB324" for a 50% discount. Here are the conference presentations from partners and Oracle:
    - "Cloud Service Brokers" - Jürgen Kress, Oracle; Rolando Carrasco, S&P Solutions
    - "Fast Data - Delivering High-Velocity and Volume Big Data Business Value in Real Time" - Robin Smith, Oracle; Robert Greene, Oracle
    - "Unlocking the Value of Big Data" - Raul Goycoolea Seoane, Oracle
    - "Modeling Business Process Architecture on BPMN 2.0 and Decomposing it to Service Inventory" - Jorge Heredia, Itehl Consulting
    - "BPM and Dynamic/Adaptive Case Management - Friends or Foes?" - Manas Deb, Oracle
    - "Building SOA and MDM Solutions to Enable Cloud Adoption" - Luis Weir, HCL; John Dunn, HCL
    - "Secure Applications in the Cloud: Security & Privacy Patterns and Mechanisms" - Ricardo Puttini, University of Brasília; Anderson Nascimento, University of Brasília
    - "SOA, Data Grids, Mobile and Clouds - Where Next for SOA?" - Matt Brasier, C2B2 Consulting LTD
    - "Achieving Greater Responsiveness with BPM" - Andre Boaventura, Oracle

    Do you want to meet the Oracle team at the conference? Please send us a message on Twitter @soacommunity. Do you want to network at the conference? Please use the #soacommunity hashtag. For details and registration please visit the conference website. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: SOA Symposium, Thomas Erl, Service Technology Symposium, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:
    - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    - Read from the shard in parallel or sequentially, and cache resultsets
    - Update, delete a record from the shard
    - Insert records quickly in the shard with a round-robin load
    - Reset the cache

    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/

    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.

    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally. So in essence, the tables need to have the same schema across the databases.

    Read the article

  • get text from a certain <tr> tag

    - by WideBlade
    Is there a way to get the text in a dynamic way from a certain <tr> tag in the page? e.g. I've a page with a <tr> with the value "a1". I'd like to get only the text from this <tr> tag, and echo it into the page. is this possible? here is the HTML: <html><tr id='ieconn2' > <td><table width='100%'><tr><td valign='top'><table width='100%'><tr><td><script type="text/javascript"><!-- google_ad_client = "pub-4503439170693445"; /* 300x250, created 7/21/10 */ google_ad_slot = "7608120147"; google_ad_width = 300; google_ad_height = 250; //--> </script> <script type="text/javascript" src="http://pagead2.googlesyndication.com/pagead/show_ads.js"> </script><br>When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job.<br><br><b>Source: </b>CBS <br>&nbsp;</td></tr><tr><td><b>There are no foreign summaries for this episode:</b> <a href='/edit/shows/3918/episode_foreign_summary/?eid=1065002553&season=6'>Contribute</a></td></tr><tr><td><b>English Recap Available: </b> <a href='/How_I_Met_Your_Mother/episodes/1065002553?show_recap=1'>View Here</a></td></tr></table></td><td valign='top' width='250'><div align='left'> <img alt='How I Met Your Mother season 6 episode 13' src="http://images.tvrage.com/screencaps/20/3918/1065002553.jpg" width="248" border='0' > </div><div align='center'><a href='/How_I_Met_Your_Mother/episodes/1065002553?gallery=1'>6 gallery images</a></div></td></tr></table></td></tr><tr> <td background='/_layout_v3/buttons/title.jpg' height='39' width='631' align='center'> <table width='100%' cellpadding='0' cellspacing='0' style='margin: 1px 1px 1px 1px;'> <tr> <td align='left' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" width='90'>&nbsp;<span style='font-size: 15px; font-weight: bold; color: black; padding-left: 8px;' id='iehide3'><img src='/_layout_v3/misc/minus.gif' width='26'></span></td> <td align='center' style='cursor: pointer;' onclick="SwitchHeader('ieconn3','iehide3','26')" ><h5 class='nospace'>Sponsored Links</h5><a name=''></a></td> <td align='left' width='90' >&nbsp;</td></tr></table></td> </tr></html> All I want to get is this text: "When Marshall and Lily fear they will never get pregnant, they see a specialist who can hopefully help move the process along. Meanwhile, Robin starts her new job. "
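
    Server-side, any HTML parser can pull the text out of a row selected by its id attribute. A short Python sketch with BeautifulSoup (an assumed dependency; the page above has id='ieconn2' on the row that holds the summary):

        from bs4 import BeautifulSoup  # assumed installed: pip install beautifulsoup4

        def tr_text(html, tr_id):
            # Parse the page, drop the <script> blocks, and return the visible
            # text inside the <tr> with the given id.
            soup = BeautifulSoup(html, "html.parser")
            for script in soup.find_all("script"):
                script.decompose()
            row = soup.find("tr", id=tr_id)
            return row.get_text(" ", strip=True) if row else None

        # e.g. tr_text(page_source, "ieconn2") yields the text inside that row
        # (the episode summary plus the surrounding link labels).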

    Read the article

  • Which powerful laptop, with UK keyboard and 8gb ram

    - by RobinL
    I've been searching high and low for high-spec laptops compatible with Ubuntu. There is surprisingly little coherent information on the topic, considering the number of people who apparently want a good laptop running an open source operating system, so I thought you may have some advice. My requirements:
    - has at least 8 GB of RAM
    - is compatible with Ubuntu
    - has a UK keyboard and charger
    - does not cost the Earth

    Which would you go for? Does anyone have good experience with high-end laptops running Ubuntu? Here is some background research: the Samsung Series 7 looks great, but has various problems on Ubuntu, including poor battery life, a touchpad that does not work, and a graphics card that is not fully supported and sucks power when it does work (see [here] and [here], for example). Other options on the [wish list] include: the sensible [Acer] (possibly the no. 1 choice, but not sure about graphics card compatibility or battery), a nice-looking [HP Pavilion dv6-6c56ea], which also has incompatibility issues (see [here] and [here] and check ubuntuforums), and another [Acer] which may be best due to its simplicity and cheapness. Other sub-questions: didn't Dell offer Ubuntu support for decent laptops (above 6 GB of RAM their offerings are scarce)? What about pre-installed options such as those provided by System76? If it weren't for the UK keyboard and charger, I'd probably go for this [amazing-looking] [machine]. Many thanks for any advice. P.S. Apologies for the lack of hyperlinks; I'm a noob so I'm only allowed 2 :( All 10 links are available here though for the interested reader :) Robin

    Read the article

  • Wednesday at Oracle OpenWorld 2012 - Must See Session: “Event-Driven Patterns and Best Practices: Even More Important with Big Data”

    - by Lionel Dubreuil
    Don’t miss this “CON8636 - Event-Driven Patterns and Best Practices: Even More Important with Big Data“ session: Speakers: Faisal Nazir - Senior Solutions Architect, Motorola Shinichiro Takahashi - Senior Manager, Service Platform Department, NTT DOCOMO, INC. Robin Smith - Product Management/Strategy Director - Oracle Event Processing, Oracle Date: Wednesday, Oct 3 Time: 10:15 AM - 11:15 AM Location: Moscone South - 310 As the demand for big data analytics and integration grows across all industries, this session focuses on the role of the Oracle event-driven solution platform in delivering vital real-time integrated analysis intelligence to the data streams consumed and emitted from these large distributed data stores. Objectives for this session are to: Increase awareness of Oracle Event Processing, showcasing tight alignment with big data solutions Highlight emerging usage patterns in relation to streaming event data and distributed data stores Show a significant Oracle competitive advantage over IBM solutions advertised in this domain Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin:0cm; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:10.0pt; font-family:"Calibri","sans-serif";}

    Read the article

  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique of my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over HTTP binding, JSON, and REST for internal apps as well as 3rd-party integration. I've got a rough idea below that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build this out quickly. Here's the current idea for our structure:

    Front-end WCF routing service (spread across multiple IIS servers via hardware load balancer)
    - Load balancing of the services behind the routing is handled within the routing service, probably round-robin
    - One of the services will be a token service
    - Multiple bindings per service exposed to address JSON, REST, and whatever else comes up later
    - All in/out is handled via POCO DTOs
    - Use Unity to scan for what services are available and expose them

    Front-end services (behind the routing service) that do nothing more than expose the API and do DTO <-> Entity conversion
    - Unity injects the service implementation to allow mocking
    - AutoMapper for DTO/Entity conversion
    - Invoke WF services where a response is required immediately
    - Queue to the ESB for async WF -- the ESB will invoke the WF later

    Business logic WF layer
    - Exposes the same API as the front-end services
    - Implements the business logic
    - Wraps transaction context where needed
    - Calls out to composite/atomic services

    Composite/atomic services
    - Exposed as WCF
    - One service per back-end system
    - Standard atomic CRUD operations plus composite operations
    - Supports transaction context

    The questions I have are: Is the separation of concerns outlined above beneficial? The current thought is that each layer below is its own project, except the back-end stuff, where each system gets one project. The project has a ServiceHost and all the services are under a services folder. Interfaces live in a separate project at each layer. DTOs and Entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and to overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?

    Read the article

  • Dividing a Video into Frames and Sending Frames to Streams

    - by Amit Kumar
    I have to implement a "demux" that divides up a video stream and sends each frame to one of multiple output streams in a round-robin fashion. I am trying to implement the demux as follows. The video stream contains one frame after another and is exposed via a Java InputStream. Each frame has a frame header followed by the image data. The demux needs to read the frame header to know the size of the image data. The image data can then be redirected from the input video stream to one of the output streams (Java OutputStream). My problem is how to implement this redirection: that is, connect the InputStream to an OutputStream to send N bytes (where N is the size of the image data), then disconnect and connect to another OutputStream. I have looked at the interface of PipedInputStream etc., but they do not seem to support this kind of disconnection.
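
    The core of the redirection is just a bounded copy: read the header, learn N, copy exactly N bytes from the input to whichever output is next in the rotation, then move on; no piped streams are needed. A sketch of that loop in Python (the Java version has the same shape, reading into a byte[] buffer from the InputStream); the header layout here, a 4-byte big-endian length at the start of an 8-byte header, is an assumption standing in for the real frame header format:

        import struct
        from itertools import cycle

        def demux(video_in, outputs, header_size=8, chunk=64 * 1024):
            # Round-robin demux: per frame, read the fixed-size header, take the
            # image-data length from it (assumed: first 4 bytes, big-endian),
            # then copy exactly that many bytes to the next output stream.
            rotation = cycle(outputs)
            while True:
                header = video_in.read(header_size)
                if len(header) < header_size:
                    break                       # end of stream
                (length,) = struct.unpack(">I", header[:4])
                out = next(rotation)
                remaining = length
                while remaining:
                    data = video_in.read(min(remaining, chunk))
                    if not data:
                        raise EOFError("truncated frame")
                    out.write(data)
                    remaining -= len(data)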

    Read the article
