Search Results

Search found 27695 results on 1108 pages for 'window services'.


  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

    - Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This will allow us to get a visual sense for how often we are thwarted in our attempts to fill the pipe due to congestion control versus the peer not being able to receive any more data.
    - Identify whether slow start or congestion avoidance predominates, by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.
    - Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

        # dtrace -s tcp_window.d
        dtrace: script 'tcp_window.d' matched 10 probes
        ^C

        cwnd 80 10.175.96.92
                 value  ------------- Distribution ------------- count
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         6
                  8192 |                                         18
                 16384 |                                         36
                 32768 |@                                        79
                 65536 |@                                        155
                131072 |@                                        199
                262144 |@@@                                      400
                524288 |@@@@@@                                   798
               1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
               2097152 |                                         0

        ssthresh 80 10.175.96.92
                 value  ------------- Distribution ------------- count
             268435456 |                                         0
             536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
            1073741824 |                                         0

        unacked 80 10.175.96.92
                 value  ------------- Distribution ------------- count
                    -1 |                                         0
                     0 |                                         1
                     1 |                                         0
                     2 |                                         0
                     4 |                                         0
                     8 |                                         0
                    16 |                                         0
                    32 |                                         0
                    64 |                                         0
                   128 |                                         0
                   256 |                                         3
                   512 |                                         0
                  1024 |                                         0
                  2048 |                                         4
                  4096 |                                         9
                  8192 |                                         21
                 16384 |                                         36
                 32768 |@                                        78
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  5391
                131072 |                                         0

        swnd 80 10.175.96.92
                 value  ------------- Distribution ------------- count
                 32768 |                                         0
                 65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
                131072 |                                         0

    Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

    - Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.
    - Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.
    - The congestion window distribution peaks in the 1048576-2097152 range, while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer, but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

    So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion control nor the peer receive window have limited throughput.
    Here's the script:

        #!/usr/sbin/dtrace -s

        tcp:::send
        / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
        {
                @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd);
                @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_cwnd_ssthresh);
                @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
                @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
                    quantize((args[4]->tcp_window)*(1 << args[3]->tcps_snd_ws));
        }

    One surprise here is that slow start is still in operation - one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold.

    A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance, i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • Can I use Unity with XMonad?

    - by geniass
    I've been using Fedora for a while, but I recently decided to upgrade my Ubuntu installation from 12.04 to 12.10. While I was using Fedora, I found the xmonad tiling window manager, which I sort of got working with Xfce. I really like xmonad and I now automatically use the keyboard shortcuts without even thinking, so I would like to use xmonad with Unity. Now, I have seen several blog posts etc. that explain how to set the whole thing up in 12.04, with Unity 2D. But now Unity 2D has been removed, and the old methods no longer apply. I have read that Unity is actually just a Compiz plugin, and that the WM can't be replaced. Does no version of Unity have a replaceable WM? I've also tried MATE (the GNOME 2 fork) with xmonad on 12.10 and that is totally broken for me, but that's for another question.

    Read the article

  • How to persistently export an environment variable before starting compiz

    - by Dykam
    A few months ago compiz suddenly stopped working. That is, redrawing became so slow that it was more than noticeable: it took 5 seconds to redraw a chat window. Ever since, I've been using metacity instead, but I've found myself missing some plugins badly. I found the following solution: export __GL_YIELD="NOTHING"; compiz --replace This works fine; everything is fast again with compiz. But how do I make sure this variable is always set whenever I run compiz? I'm using the standard nVidia drivers; I failed to get the open source ones working.

    Read the article

  • Recommendations for taskbar

    - by user210551
    I recently updated to Ubuntu 13.10 and now tint2 has stopped working. Better yet: it works, I just cannot see it, as the whole bar is now transparent. Since tint2 is no longer maintained I doubt it will be updated, hence I'm now looking into alternatives, if there are any. In short: I'm looking for a simple taskbar that sits at the bottom of the screen and that lists applications per window (no grouping!). I don't need fancy animations or widgets/features. Any ideas?

    Read the article

  • How does one get Bluetile on 12.04?

    - by JKN
    I've been working on getting Bluetile working on 12.04 so I can start down the path of tiling window managers, but I have been having limited success. I have read the (only 7!) other posts on Bluetile and tried the website, but unfortunately nothing seems to address Precise thoroughly enough for me to get it working. I have tried getting GNOME running on my machine via apt-get and following basic instructions from there, but without success. The apt-get route for Bluetile also fails (on selecting the gnome-bluetile session option, Unity ends up opening anyway in a buggy, unstable way). I also messed around with specifying a custom xsession for lightdm to look at and setting the window-manager to bluetile, but again without success - somehow I ended up in Unity again. Apologies if this question is too vague - I am new to really using Linux systems, so sometimes I don't know what to ask or look for. Thanks!

    Read the article

  • How do I focus an "urgent" application?

    - by pydave
    Sometimes applications will wiggle in the launcher and have a blue triangle. (See compiz config settings for Unity.) How can I set a keyboard command to bring that application to the front in Ubuntu 11.10? I had it set in 11.04, but I don't remember how. This is useful when programs open in the background or if you activate them from another app. (I have an alias to send files to a single gvim instance from Terminal instead of opening them in their own window. When gvim gets the file, it wiggles but doesn't gain focus.)

    Read the article

  • [Natty] Disable new invisible border feature? (ruins compiz grid)

    - by Ike
    a new "feature" was added recently that adds an invisible border around the windows to grab for resizing (although i thought the new resize grip solved any issues). this annoys me because it destroys the usefulness of the grid plugin of compiz.. i'm not sure if the border is part of compiz or gnome, but i'd like to know how to disable it. i couldn't find any options in ccsm or the window settings in gnome. see the screenshot to see how much waste is causes. these windows should match up instead of having blank space surrounding all of them.

    Read the article

  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one that is allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that in Fig.A and Fig.B. The difference between the two is that in Fig.A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are made from the "Orchestrating layer" only). In Fig.B the application is also composed of services, but one service calls another service. Does the architecture in Fig.B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture in Fig.A be considered superior to the one in Fig.B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.
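    As a hedged sketch of the difference between the two figures (which aren't reproduced here; all names are invented), the question reduces to where the composition logic lives:

        // Fig.A style: only the orchestration layer knows about both services.
        public interface IInventoryService { int Reserve(string sku, int qty); }
        public interface IBillingService   { void Charge(string customer, decimal amount); }

        public class OrderOrchestrator
        {
            private readonly IInventoryService inventory;
            private readonly IBillingService billing;

            public OrderOrchestrator(IInventoryService inventory, IBillingService billing)
            {
                this.inventory = inventory;
                this.billing = billing;
            }

            // The composition lives here; the two services never reference each
            // other, so either can be swapped without touching the other.
            public void PlaceOrder(string customer, string sku, int qty, decimal price)
            {
                inventory.Reserve(sku, qty);
                billing.Charge(customer, price * qty);
            }
        }

        // Fig.B style would instead have the inventory service hold its own
        // IBillingService reference and call it directly: still legal SOA, but
        // inventory now carries a hard dependency on billing.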

    Read the article

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF Services as part of an API for 3rd parties to access data within a product I work on. There are currently a set of services exposed as "classic" .net Web Services and I need to emulate the behaviour of these, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc) and return a session token (represented as a GUID), which is then passed in on calls to any other method (There's also a ReleaseAuthenticationToken method, no guesses needed as to what that does!). What I want to do is implement multiple WCF services, such as: ProductData UserData and have both of these services share a common implementation of Acquire/Release. From the base project that is created by VS2k8, it would appear I will start with, per service: public class ServiceName : IServiceName { } public interface IServiceName { } Therefore my questions would be: Will WCF tolerate me adding a base class to this, public class ServiceName : ServiceBase, IServiceName, or does the fact that there's an interface involved mean that won't work? If "No it won't work" to Question 1, could I change IServiceName so it extends another interface, IServiceBase, thus forcing the presence of Acquire/Release methods, but then having to supply the implementation in each service. Are 1 and 2 both really bad ideas and there's actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?
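    On the first question: WCF places no restriction on the service class's base type, so a shared base class alongside the per-service contract interface does work. A minimal sketch of that shape follows; ServiceBase, IProductData and the method bodies are illustrative guesses, not code from the product in question:

        using System;
        using System.ServiceModel;

        // Shared implementation, inherited by every concrete service.
        public abstract class ServiceBase
        {
            public Guid AcquireAuthenticationToken(string username, string password)
            {
                // ... validate credentials, create a session row, etc. ...
                return Guid.NewGuid();
            }

            public void ReleaseAuthenticationToken(Guid token)
            {
                // ... tear the session down ...
            }
        }

        [ServiceContract]
        public interface IProductData
        {
            [OperationContract]
            Guid AcquireAuthenticationToken(string username, string password);

            [OperationContract]
            void ReleaseAuthenticationToken(Guid token);

            [OperationContract]
            string GetProduct(Guid token, int productId);
        }

        // The inherited public methods satisfy the contract's Acquire/Release
        // operations; only service-specific methods are implemented here.
        public class ProductData : ServiceBase, IProductData
        {
            public string GetProduct(Guid token, int productId)
            {
                // ... check token, fetch data ...
                return "product " + productId;
            }
        }

    The interface approach from the second question (IServiceName extending an IServiceBase) also works, since WCF supports contract inheritance, and the two combine nicely: the base interface forces Acquire/Release onto every contract while the base class supplies their single implementation.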

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. ‘Pure’ SOA involves the modeling of an organisation’s processes, the so called ‘Top Down’ approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’ (Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the project’s existence probably need to be questioned.

    There are a number of things we can be sure of:

    · An organisation will have a number of disparate IT systems
    · Some of these systems will have redundant data and functionality
    · Integration will give considerable ROI
    · Integration will already be under way
    · Services will already exist in the organisation
    · These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with far reaching consequences for all the LOB systems that are current. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration): we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’, company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is described by clever salespeople today, creates an example of the oft dreaded ‘tight coupling’ between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals. If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, and now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this. The biggest one is the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
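    To make the virtualization idea above concrete, here is a minimal hand-rolled sketch (all type names invented; this illustrates the pattern, not Managed Services Engine code). The virtual service exposes the analyst's modeled contract and translates calls onto whatever adapter service exists today:

        // The modeled, stable contract the business analysts own.
        public interface ICustomerModel
        {
            string GetCustomerName(string customerNumber);
        }

        // The current technical implementation (an "adapter" service).
        public interface ILegacyCrmAdapter
        {
            string FetchName(int internalId);
        }

        public class VirtualCustomerService : ICustomerModel
        {
            private readonly ILegacyCrmAdapter adapter;

            public VirtualCustomerService(ILegacyCrmAdapter adapter)
            {
                this.adapter = adapter;
            }

            public string GetCustomerName(string customerNumber)
            {
                // The transformation layer: map the modeled identifier onto the
                // implementation's key. Remodeling changes this mapping, not callers.
                return adapter.FetchName(int.Parse(customerNumber));
            }
        }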

    Read the article

  • wpf - window to be tight round user control

    - by Andy Clarke
    Hi, I've got a user control that I'm loading into a Window dynamically. I wanted to set the Window so that it didn't have a fixed size, and expected the window to resize accordingly depending on the UserControl. However it doesn't - can anyone assist please? I've made a very basic example - I've cut out the dynamic bits and just put a UserControl in a Window. What do I need to do to get the window to be tight around the UserControl? Thanks, Andy

        <UserControl x:Class="WpfApplication1.UserControl1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Height="300" Width="300" Background="LightBlue">
            <Grid>
            </Grid>
        </UserControl>

        <Window x:Class="WpfApplication1.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:WpfApplication1="clr-namespace:WpfApplication1"
            Title="Window1" >
            <Grid>
                <WpfApplication1:UserControl1>
                </WpfApplication1:UserControl1>
            </Grid>
        </Window>
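    A hedged suggestion (not from the original thread): the usual fix is WPF's SizeToContent, combined with removing the hard-coded Width/Height from the UserControl so it can report its natural size. The same setting can be applied from code-behind:

        // Sketch: make the window shrink-wrap whatever content it hosts.
        // Assumes Window1 itself declares no explicit Width/Height.
        public partial class Window1 : System.Windows.Window
        {
            public Window1()
            {
                InitializeComponent();
                // Size the window to the measured size of the hosted UserControl.
                SizeToContent = System.Windows.SizeToContent.WidthAndHeight;
            }
        }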

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides the very nice functionality for client specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
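    As the poster observes, a DataService binds to a single data source, so out of the box the answer appears to be no; the usual workaround is to do the fan-out beneath the service yourself. A hedged sketch of that merge step (connection handling and the query delegate are assumptions, not WCF Data Services API):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ShardedRepository
        {
            private readonly string[] connectionStrings;

            public ShardedRepository(params string[] connectionStrings)
            {
                this.connectionStrings = connectionStrings;
            }

            // Runs the same query against every identically-schemed database
            // and concatenates the results, so callers see "one" database.
            public List<T> QueryAll<T>(Func<string, IEnumerable<T>> queryOne)
            {
                return connectionStrings.SelectMany(queryOne).ToList();
            }
        }

    The catch is that ordering and paging ($orderby, $top and friends) have to be re-applied after the merge, since each database only sorts its own slice.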

    Read the article

  • Help me re-center a ModalPopup within an iframe when the iframe's parent window scrolls

    - by Cory Larson
    I have a web page with an iframe in it (I don't like it, but that's the way it has to be). It's not a cross-domain iframe, so there's nothing to worry about there. I have written a jQuery extension that does the centering of a ModalPopup (which is called from an overridden AjaxControlToolkit.ModalPopupBehavior._layout method) based on the width and height of the iframe's parent, so that it looks centered even though the iframe is not in the center of the page. It's a lot of tricky stuff, especially since the web page I added the iframe to runs in quirks mode. Now, I've also overridden AjaxControlToolkit.ModalPopupBehavior._attachPopup, so that when the parent window resizes or scrolls, the ModalPopup inside of the iframe recenters itself relative to the new size of the parent window. However, the same code that attaches the popup to the parent window's resize event does NOT work for the parent window's scroll event. See the code below, and the comments:

        AjaxControlToolkit.ModalPopupBehavior.prototype._attachPopup = function() {
            /// <summary>
            /// Attach the event handlers for the popup to the PARENT window
            /// </summary>
            if (this._DropShadow && !this._dropShadowBehavior) {
                this._dropShadowBehavior = $create(AjaxControlToolkit.DropShadowBehavior,
                    {}, null, null, this._popupElement);
            }
            if (this._dragHandleElement && !this._dragBehavior) {
                this._dragBehavior = $create(AjaxControlToolkit.FloatingBehavior,
                    {"handle" : this._dragHandleElement}, null, null, this._foregroundElement);
            }
            $addHandler(parent.window, 'resize', this._resizeHandler); // <== This WORKS
            $addHandler(parent.window, 'scroll', this._scrollHandler); // <== This DOES NOT work
            this._windowHandlersAttached = true;
        }

    Can anybody explain to me why the resize event works while the scroll event doesn't? Any suggestions or alternatives out there to help me out? I am working with jQuery, so if I can use something besides the $addHandler method from MS that'd be fine. Be aware that I also have to override the _detachPopup function to remove the handler, so I need to take that into account. Thanks!

    Read the article

  • Opening URL in New Tab doesn't work in existing, programmatically-opened New Window (Firefox)

    - by seth
    I am building a web app, for myself, to control some servers on my home network, and discovered what I think is very odd behavior in Firefox. If you open a pop-up, via javascript, in Firefox, is it then impossible to open a new tab, via javascript, in that pop-up? If not impossible, how do you do it? Given a clean, default Firefox 3.6.3 installation... If I open a page in Firefox and then call var my_window = window.open('http://www.google.com','_blank','top=10'); a brand new "pop-up" window opens. However, if instead I call var my_window = window.open('http://www.google.com'); I get a new tab. HOWEVER... If I call the first version var my_window = window.open('http://www.google.com','_blank','top=10'); and then in the new "pop-up" that opens, I call var my_window = window.open('http://www.google.com'); it opens a new tab in the original window, not a new tab in the pop-up. This seems very odd, and not intuitive at all. Why would the call in the pop-up open a tab in the "parent" window?

    Read the article

  • What to keep in mind when creating your own custom web services API?

    - by John Conde
    I have created a website which allows users to sign up for, and use, an online service. To help promote the website we will have resellers who will be offering their own branded services through us. The initial plan is to allow resellers to place registration, login, and lost password forms on their own websites and use an API created by us to handle these requests. I have begun outlining how I expect the API to work (and have started documenting it as well) and I want to make sure I get it right, or as close to right as I can, from the beginning, as I know once you have declared a public API you want to avoid changing that API at all costs. So far I have decided:

    - To have the user pass their account credentials with each request
    - To require SSL for all requests

    What else should I be keeping in mind?
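    One more hedged suggestion: bake an explicit version field and a stable, machine-readable error vocabulary into the contract from day one, since those are the parts that hurt most to retrofit. A sketch of such a request envelope (all names invented):

        // Common envelope carried by every API call.
        public abstract class ApiRequest
        {
            public string Username { get; set; }   // credentials sent per request, over SSL
            public string Password { get; set; }
            public int ApiVersion { get; set; }    // lets the server keep old behavior alive
        }

        public class RegisterUserRequest : ApiRequest
        {
            public string Email { get; set; }
            public string DisplayName { get; set; }
        }

        public class ApiResponse
        {
            public bool Success { get; set; }
            public string ErrorCode { get; set; }  // stable codes; human text kept separate
            public string ErrorMessage { get; set; }
        }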

    Read the article

  • Best practice when working with web services that return objects?

    - by Raj
    Hello, I'm currently working with web services that return objects such as a list of files, e.g. a File array. I wanted to know whether it's best practice to bind this type of object directly to my front end code, for example a repeater/listview, or whether to first parse it into my own list of "file class" objects, e.g. customFiles[]. If the web service changes then it will break my front end code; however if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to create the same classes from a web service, so I wanted to know what the best practice is for this type of work. Thanks, Raj.
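    A hedged sketch of the second option: keep the generated proxy type out of the UI and funnel it through one mapping method, so a service change breaks a single file rather than every binding. ServiceFile below stands in for whatever type the service reference actually generates:

        using System.Collections.Generic;
        using System.Linq;

        // Stand-in for the generated proxy type from the web service.
        public class ServiceFile
        {
            public string FileName { get; set; }
            public long Length { get; set; }
        }

        // The UI binds only to this.
        public class CustomFile
        {
            public string Name { get; set; }
            public long SizeBytes { get; set; }
        }

        public static class FileMapper
        {
            // The only place in the codebase that knows about the wire type.
            public static IList<CustomFile> ToCustomFiles(IEnumerable<ServiceFile> wireFiles)
            {
                return wireFiles
                    .Select(f => new CustomFile { Name = f.FileName, SizeBytes = f.Length })
                    .ToList();
            }
        }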

    Read the article

  • Security for web services only used from a Silverlight application?

    - by Lasse V. Karlsen
    I have googled a bit for how I should handle security in a web service application when the application is basically the data repository for a Silverlight application, but have gotten inconclusive results. The Silverlight application is not supposed to have its own user authentication, since it will be reachable only through a web application that the user has already authenticated against. As such, I was thinking I could simply add a parameter to the SL application that is a cookie-type value, with a certain lifetime, linked to the user in the database. The SL application would then have to pass this value alongside other parameters to the web services. Since the web service is hopefully going to be a generic web service endpoint with few methods, adding an extra parameter at this level will not be a problem. But am I supposed to roll this system on my own? It sounds to me as though this isn't exactly a new feature that nobody has considered before, so what are my options?
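    A hedged sketch of the cookie-style token scheme described (names and in-memory storage are illustrative; per the question, a real system would link tokens to users in the database):

        using System;
        using System.Collections.Concurrent;

        public static class SessionTokens
        {
            private static readonly ConcurrentDictionary<Guid, DateTime> tokens =
                new ConcurrentDictionary<Guid, DateTime>();

            // Issued by the already-authenticated web application and passed
            // to the Silverlight client as an initialization parameter.
            public static Guid Issue(TimeSpan lifetime)
            {
                var token = Guid.NewGuid();
                tokens[token] = DateTime.UtcNow.Add(lifetime);
                return token;
            }

            // Checked by every web service method before doing any work.
            public static bool IsValid(Guid token)
            {
                DateTime expires;
                return tokens.TryGetValue(token, out expires) && expires > DateTime.UtcNow;
            }
        }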

    Read the article

  • Are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture ?

    - by snowflake
    I consider a self-described / auto-descriptive service a good thing in a SOA architecture, since (almost) everything you need to know to call the service is present in the service contract (such as a WSDL). An example of a non-self-described service, for me, is Facebook Query Language (FQL, http://wiki.developers.facebook.com/index.php/FQL), or any web service exchanging an XML flow in a single String parameter, which is then parsed and processed. The latter seem technically more decoupled, since technically you can switch implementations without technical impact on the caller, handling compatibility between implementations/versions at a business level. On the other side, having no strong interface (it is diluted into the service and its version) makes the service tightly coupled to the existing implementation (more difficulty interchanging the service and ensuring perfect compatibility). This question is related to http://stackoverflow.com/questions/2503071/how-to-implement-loose-coupling-with-a-soa-architecture So, are self-described / auto-descriptive services loosely or tightly coupled in a SOA architecture? What are the impacts regarding ESBs? Any pointer will be appreciated.
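    As a hedged illustration of the two styles being contrasted (all type names invented), compare a contract that describes its own messages with a generic string-passing endpoint:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class CustomerDto
        {
            [DataMember] public string Email { get; set; }
            [DataMember] public string Name { get; set; }
        }

        // Self-described: the WSDL exposes the operation and message shapes.
        [ServiceContract]
        public interface ICustomerLookup
        {
            [OperationContract]
            CustomerDto FindByEmail(string email);
        }

        // Not self-described: one String in, one String out. The real contract
        // lives in out-of-band documentation of the XML payload, so callers are
        // technically insulated but semantically bound to the implementation.
        [ServiceContract]
        public interface IGenericEndpoint
        {
            [OperationContract]
            string Execute(string xmlRequest);
        }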

    Read the article

  • How to open popup behind main window (HTML,jQuery)

    - by sara.ma
    I'm new. I have popup code that opens a popup window when the user clicks anywhere in the HTML page:

        (function () {
            document.onclick = function () {
                var sUrl = "http://URL.com";
                if (typeof daily_capping == "undefined") var daily_capping = 10;
                if (typeof capping_minutes == "undefined") var capping_minutes = 60;
                if (document.cookie.indexOf("_popwin=") === -1) {
                    var ads2day = document.cookie.split("_popwinDaily=")[1];
                    ads2day = typeof ads2day == "undefined" ? 0 : parseInt(ads2day.split(";")[0]);
                    if (ads2day < daily_capping) {
                        var isMSIE = navigator.userAgent.indexOf("MSIE") != -1 ? !0 : !1,
                            _parent = self, sOptions, popunder;
                        if (top != self) try { top.document.location.toString() && (_parent = top) } catch (err) {}
                        sOptions = "toolbar=no,scrollbars=yes,location=yes,statusbar=yes,menubar=no,resizable=1,width=" +
                            screen.width.toString() + ",height=" + (screen.height - 20).toString() +
                            ",screenX=0,screenY=0,left=0,top=0",
                        popunder = _parent.window.open(sUrl, "rhpop", sOptions);
                        if (popunder) {
                            popunder.blur();
                            if (isMSIE) {
                                window.focus();
                                try { opener.window.focus() } catch (err) {}
                            } else popunder.init = function (e) {
                                with(e)(function () {
                                    if (typeof window.mozPaintCount != "undefined" || typeof navigator.webkitGetUserMedia == "function") {
                                        var e = window.open("about:blank");
                                        e.close()
                                    }
                                    try { opener.window.focus() } catch (t) {}
                                })()
                            }, popunder.params = { url: sUrl }, popunder.init(popunder)
                        }
                        var now = new Date,
                            popDaily = (new Date(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), 23, 59, 59)).toGMTString();
                        document.cookie = "_popwinDaily=" + (ads2day + 1) + ";expires=" + popDaily + ";path=/";
                        var popInterval = new Date;
                        popInterval.setTime(popInterval.getTime() + capping_minutes * 60 * 1e3),
                        document.cookie = "_popwin=1;expires=" + popInterval.toGMTString() + ";path=/"
                    }
                }
            }
        })();

    But the popup opens on top. Is it possible to make it open behind the main page? Is there any lighter popup code for this purpose? Thanks guys.

    Read the article

  • How to show a window that acts like a popup menu?

    - by Edwin
    Hi, when window A is shown, I want to show another non-modal popup window B, but:

    1. I don't want window A to become inactive due to window B becoming the front window;
    2. I want that, when window B is focused, I can pull down a combo box control on window A with one click (generally you have to click twice: once to move the focus to window A and a second time to pull down the combo box).

    As you can see, the window B that I want is something like a more usable popup window, like a popup menu (which is less of an obstacle than a general non-modal window when you want it to get out of the way by clicking on any other part of the parent window). Am I clear on my question? Thank you.

    Read the article

  • Can a SQL Server 2008 database support both REST and SOAP web services within two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service. You then deploy or publish this using Visual Studio 2010 on one website. Now, using the same database, you build a REST web service in a different solution. You deploy this on another website. Can you consume the endpoints and/or .svc file of both the SOAP and REST web services, though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days on it, I'd like to make sure. Also, is there a performance hit to the database if it is serving both SOAP and REST at the same time? Again, I don't see why it would matter, but I must make sure. Thanks.

    Read the article
