Search Results

Search found 21072 results on 843 pages for 'thin client'.


  • With a browser, how do I know which decimal separator the client uses?

    - by Quassnoi
    I'm developing a web application. I need to display some decimal data correctly so that it can be copied and pasted into a certain GUI application that is not under my control. The GUI application is locale sensitive and accepts only the decimal separator that is set in the system. I can guess the decimal separator from Accept-Language and the guess will be correct in 95% of cases, but sometimes it fails. Is there any way to do it on the server side (preferably, so that I can collect statistics), or on the client side? Update: The whole point of the task is doing it automatically. In fact, this webapp is a kind of online interface to a legacy GUI which helps to fill in the forms correctly. Most of its users have no idea what a decimal separator is. The Accept-Language solution is implemented and works, but I'd like to improve it. Update 2: I need to retrieve a very specific setting: the decimal separator set in Control Panel / Regional and Language Options / Regional Options / Customize. I deal with four kinds of operating systems: Russian Windows with a comma as a DS (80%); English Windows with a period as a DS (15%); Russian Windows with a period as a DS to make poorly written English applications work (4%); English Windows with a comma as a DS to make poorly written Russian applications work (1%). All 100% of clients are in Russia and the legacy application deals with Russian government-issued forms, so asking for a country will yield 100% Russian Federation, and GeoIP will yield 80% Russian Federation and 20% other, incorrect answers.

    Read the article

  • Can the Subversion client (svn) dereference symbolic links as if they were files?

    - by Ryan B. Lynch
    I have a directory on a Linux system that mostly contains symlinks to files on a different filesystem. I'd like to add the directory to a Subversion repository, dereferencing the symlinks in the process (treating them as the files they point to, rather than as links). Generally, I'd like to be able to handle any working-copy operations with this behavior, but the 'svn add' command is where it starts, I think. The SVN client utility doesn't appear to have any options related to symlink dereferencing in the working copy. I didn't find any references to this in the manual (http://svnbook.red-bean.com/en/1.5/index.html), either. I found a poster on the SVN users mailing list who asked the same question but never received an answer, here: http://markmail.org/message/ngchfnzlmm43yj7h (That poster ended up using hard links instead of symlinks. That technique is not an option in my case, because the real underlying files reside on a separate filesystem.) I'm using Subversion v1.6.1 on Fedora 11. For what it's worth, I know that there are alternative tools/techniques that could help approximate this behavior, but I have to discard them for various reasons. I've already considered [and dust-binned] these possibilities:
    - a "union" mount, merging all of the directories containing the real files, with the SVN working-copy directory as the "top" layer in the union;
    - copying/moving the real files to the same filesystem as the SVN working copy, and using hardlinks instead of symlinks;
    - non-SVN version control systems.
    These were all neat ideas, and I'm sure they are good solutions to other problems, but they won't work given the constraints of this environment and situation.

    Read the article

  • Can an application affect TCP retransmits?

    - by sipwiz
    I'm troubleshooting some communications issues and in the network traces I am occasionally coming across TCP sequence errors. One example I've got is:
    1. Server to Client: Seq=3174, Len=50
    2. Client to Server: Ack=3224
    3. Server to Client: Seq=3224, Len=50
    4. Client to Server: Ack=3224
    5. Server to Client: Seq=3274, Len=10
    6. Client to Server: Ack=3224, SLE=3274, SRE=3284
    Packets 4 & 5 are recorded in the trace (which is from a router in between the client and server) at almost exactly the same time, so they most likely crossed in transit. The TCP session has got out of sync, with the client missing the last two transmissions from the server. Those two packets should have been retransmitted but they weren't; the next entry in the trace is a RST packet from the client 24 seconds after packet 6. My question is: what could be responsible for the failure to retransmit the server data from packets 3 & 5? I would assume that the retransmit happens at the operating-system level, but is there any way the application could influence it and stop it being sent? A thread blocking or being put to sleep, or something like that?
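
    For what it's worth, retransmission is handled entirely by the OS TCP stack, so an application thread that is blocked or asleep should not by itself stop a retransmit; about the only teardown behaviour an application controls directly is whether its close is graceful (FIN) or abortive (RST), for example via the linger option. A minimal C# sketch of the latter, with a hypothetical host name and port:

        using System;
        using System.Net.Sockets;

        class AbortiveCloseExample
        {
            static void Main()
            {
                using (var client = new TcpClient("server.example.com", 5000))
                {
                    // ... application-level send/receive happens here ...

                    // A linger timeout of zero makes Close() abortive: the stack sends a RST
                    // and discards any unacknowledged data instead of retransmitting it.
                    client.LingerState = new LingerOption(true, 0);
                    client.Close();
                }
            }
        }

    Whether something like that is happening here would have to be confirmed from the server application's socket handling.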

    Read the article

  • Dynamically generate client-side HTML form control using JavaScript and server-side Python code in Google App Engine

    - by gisc
    I have the following client-side front-end HTML using Jinja2 template engine: {% for record in result %} <textarea name="remark">{{ record.remark }}</textarea> <input type="submit" name="approve" value="Approve" /> {% endfor %} Thus the HTML may show more than 1 set of textarea and submit button. The back-end Python code retrieves a variable number of records from a gql query using the model, and pass this to the Jinja2 template in result. When a submit button is clicked, it triggers the post method to update the record: def post(self): if self.request.get('approve'): updated_remark = self.request.get('remark') record.remark = db.Text(updated_remark) record.put() However, in some instances, the record updated is NOT the one that correspond to the submit button clicked (eg if a user clicks on record 1 submit, record 2 remark gets updated, but not record 1). I gather that this is due to the duplicate attribute name remark. I can possibly use JavaScript/jQuery to generate different attribute names. The question is, how do I code the back-end Python to get the (variable number of) names generated by the JavaScript? Thanks.

    Read the article

  • ASP.NET GridView "Client-Side Confirmation when Deleting" stopped working on IE - how come?

    - by tarnold
    A few months ago I programmed an ASP.NET GridView with a custom "Delete" LinkButton and client-side JavaScript confirmation according to this MSDN article: http://msdn.microsoft.com/en-us/library/bb428868.aspx (published in April 2007), or e.g. http://stackoverflow.com/questions/218733/javascript-before-aspbuttonfield-click The code looks like this: <ItemTemplate> <asp:LinkButton ID="deleteLinkButton" runat="server" Text="Delete" OnCommand="deleteLinkButtonButton_Command" CommandName='<%# Eval("id") %>' OnClientClick='<%# Eval("id", "return confirm(\"Delete Id {0}?\")") %>' /> </ItemTemplate> Surprisingly, "Cancel" no longer works with my IE (Version: 6.0.2900.2180.xpsp_sp2_qfe.080814-1242) - it always deletes the row. With Opera (Version 9.62) it still works as expected and as described in the MSDN article. More surprisingly, on a fellow worker's machine with the same IE version, it still works ("Cancel" will not delete the row). The generated code looks like <a onclick="return confirm(...);" href="javascript:__doPostBack('...')">. As confirm(...) returns false on "Cancel", I expect the __doPostBack in the href not to be fired. Are there any strange IE settings I might accidentally have changed? What else could be the cause of this weird behaviour? Or is this a "please reinstall WinXP" issue?
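
    One variation that sidesteps the escaped double quotes in the data-binding expression is to wire up OnClientClick in code-behind instead, for example in the GridView's RowDataBound handler. A sketch, assuming a grid named clientsGridView whose rows expose an "id" field (these names are illustrative, not from the article):

        protected void clientsGridView_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;

            // Find the LinkButton declared in the ItemTemplate and attach the confirm.
            var deleteButton = (LinkButton)e.Row.FindControl("deleteLinkButton");
            var id = DataBinder.Eval(e.Row.DataItem, "id");

            // Single quotes avoid the \" escaping used in the declarative version.
            deleteButton.OnClientClick = string.Format("return confirm('Delete Id {0}?');", id);
        }

    Whether that changes the IE behaviour would need testing, but it at least takes the quoting out of the equation.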

    Read the article

  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please. In my page code-behind classes I am using private, class-level objects and variables (a list, a single Client, or simply an int id) to temporarily hold data coming from the database or a class library. The object is used temporarily to catch the data and then to return it, pass it to some function, or bind it to a control. 1st: Can this approach do any harm? I couldn't analyze it myself, but one thought was that such shared variables might have their data replaced when multiple users send requests at the same time. 2nd: Please also comment on using such variables in the BLL (to hold data coming from the DAL/database). In this example a new object of the BLL class will be created every time. Here is sample code:
        public class ClientManager
        {
            Client objclient = new Client();              // used in 1st and 2nd method
            List<Client> clientlist = new List<Client>(); // used in 3rd and 4th method
            ClientRepository objclientRep = new ClientRepository();

            public List<Client> GetClients()
            {
                return clientlist = objclientRep.GetClients();
            }
            public List<Client> SearchClients(string Keyword)
            {
                return clientlist = objclientRep.SearchClients(Keyword);
            }
            public Client GetaClient(int ClientId)
            {
                return objclient = objclientRep.GetaClient(ClientId);
            }
            public Client GetClientDetailForConfirmOrder(int UserId)
            {
                return objclientRep.GetClientDetailForConfirmOrder(UserId);
            }
        }
    I am really thankful to you for sparing the time and paying kind attention.
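
    For what it's worth, ASP.NET normally creates a new page (and here a new ClientManager) instance per request, so instance fields like these are not shared between users; the overwrite risk described above appears mainly if the fields are made static or a single instance is cached and reused. A sketch of the same class written without held state, so there is nothing for concurrent requests to overwrite (same hypothetical ClientRepository as in the question):

        public class ClientManager
        {
            private readonly ClientRepository _repository = new ClientRepository();

            // Each method returns its own result instead of routing it through a
            // shared field, so callers cannot interfere with one another.
            public List<Client> GetClients()
            {
                return _repository.GetClients();
            }

            public List<Client> SearchClients(string keyword)
            {
                return _repository.SearchClients(keyword);
            }

            public Client GetaClient(int clientId)
            {
                return _repository.GetaClient(clientId);
            }
        }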

    Read the article

  • Validating JSPs and HTML Forms, Server-side or Client-side, or both?

    - by CitadelCSAlum
    I am aware that I can Google "HTML Form Validation" and would get a billion tutorials. I am well aware that I can use simple JavaScript to validate form input, but I have been told that this is not necessarily an efficient method. I have also heard that it is a best practice to validate on both the client and server side. OK! Well, what exactly does this mean besides writing code in both places? Does it mean I do some of it with JavaScript and the rest with servlets, or does it mean that I write identical validation on both sides? My real question is: can anybody give me insight and direction as to how to go about validating my HTML forms? I am using JSPs and servlets and I have tons of form validation to do. I have already done minor form validation with regex in Java, but want to figure out whether I'm on the right track before I write any more code. Only productive answers please; if I wanted negative feedback on how inexperienced I am, I would have gone to Reddit. Thanks!

    Read the article

  • Beginner Question on traversing an EF4 model

    - by user564577
    I have a basic EF4 model with two entities. I'm using Self-Tracking Entities. Client has a ClientAddresses relationship on clientkey = clientkey (1 to many). How do I get a list/collection of client entities (STE) and their addresses (STE), but only the ones where they live in a particular state, or some such filter on the address? This seems to filter and bring back clients but doesn't bring back addresses:
        var j = from client in context.Clients
                where client.ClientAddresses.All(c => c.ZIP == "80923")
                select client;
    I can't get the following to create the addresses, because ClientAddresses is an IEnumerable and it needs a TrackableCollection:
        var query = from t1 in context.Clients
                    join t2 in context.ClientAddresses on t1.ClientKey equals t2.ClientKey
                    where t2.ZIP == "80923"
                    select new Client
                    {
                        FirstName = t1.FirstName,
                        LastName = t1.LastName,
                        IsEnabled = t1.IsEnabled,
                        ClientKey = t1.ClientKey,
                        ChangeUser = t1.ChangeUser,
                        ChangeDate = t1.ChangeDate,
                        ClientAddresses = from a in t1.ClientAddresses
                                          select new ClientAddress
                                          {
                                              AddressKey = a.AddressKey,
                                              AddressLine1 = a.AddressLine1,
                                              AddressLine2 = a.AddressLine2,
                                              AddressTypeCode = a.AddressTypeCode,
                                              City = a.City,
                                              ClientKey = a.ClientKey,
                                              State = a.State,
                                              ZIP = a.ZIP
                                          }
                    };
    Any pointers would be appreciated. Thanks. Edit: This seems to work:
        var j = from client in context.Clients.Include("ClientAddresses")
                where client.ClientAddresses.Any(c => c.ZIP == "80923")
                select client;
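
    A side note, offered as a sketch rather than a definitive answer: Include("ClientAddresses") always loads all addresses of each matching client, since EF4 cannot filter the children inside an Include. If only the matching addresses are wanted, one common workaround is to project the client and the filtered addresses separately (the context and entity names below are taken from the question):

        // Clients that have at least one address in the given ZIP, each paired
        // with only the addresses that matched the filter.
        var filtered = (from c in context.Clients
                        where c.ClientAddresses.Any(a => a.ZIP == "80923")
                        select new
                        {
                            Client = c,
                            MatchingAddresses = c.ClientAddresses.Where(a => a.ZIP == "80923")
                        }).ToList();

    With Self-Tracking Entities the anonymous type is of course not itself trackable, so this only helps when a filtered read is acceptable.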

    Read the article

  • Entity Framework LINQ Query using Custom C# Class Method - Once yes, once no - because executing on the client or in SQL?

    - by BrooklynDev
    I have two Entity Framework 4 LINQ queries I wrote that make use of a custom class method; one works and one does not. The custom method is:
        public static DateTime GetLastReadToDate(string fbaUsername, Discussion discussion)
        {
            return (discussion.DiscussionUserReads
                        .Where(dur => dur.User.aspnet_User.UserName == fbaUsername)
                        .FirstOrDefault()
                    ?? new DiscussionUserRead { ReadToDate = DateTime.Now.AddYears(-99) }).ReadToDate;
        }
    The LINQ query that works uses a from after a from, the equivalent of SelectMany():
        from g in oc.Users.Where(u => u.aspnet_User.UserName == fbaUsername).First().Groups
        from d in g.Discussions
        select new
        {
            UnReadPostCount = d.Posts.Where(p => p.CreatedDate > DiscussionRepository.GetLastReadToDate(fbaUsername, p.Discussion)).Count()
        };
    The query that does not work is more like a regular select:
        from d in oc.Discussions
        where d.Group.Name == "Student"
        select new
        {
            UnReadPostCount = d.Posts.Where(p => p.CreatedDate > DiscussionRepository.GetLastReadToDate(fbaUsername, p.Discussion)).Count(),
        };
    The error I get is: LINQ to Entities does not recognize the method 'System.DateTime GetLastReadToDate(System.String, Discussion)' method, and this method cannot be translated into a store expression. My question is: why am I able to use my custom GetLastReadToDate() method in the first query and not the second? I suppose this has something to do with what gets executed on the db server and what gets executed on the client? The two queries seem to use the GetLastReadToDate() method very similarly, though, so I'm wondering why it would work for the first and not the second, and most importantly whether there's a way to factor common query syntax like what's in GetLastReadToDate() into a separate location so it can be reused in several different LINQ queries. Please note all these queries share the same object context.
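
    A possible explanation, hedged since it can't be verified from the excerpt alone: in the first query, oc.Users.Where(...).First() executes immediately and returns a materialized User, so the subsequent froms over .Groups and .Discussions run as LINQ to Objects in memory, where any C# method can be called; the second query stays entirely in LINQ to Entities, which has to translate every call to SQL and cannot translate GetLastReadToDate(). One way to keep the logic translatable is to inline it as a sub-expression, with the "very old" default date captured in a local variable:

        var minDate = DateTime.Now.AddYears(-99);   // stand-in for the method's fallback value

        var query =
            from d in oc.Discussions
            where d.Group.Name == "Student"
            select new
            {
                UnReadPostCount = d.Posts.Count(p =>
                    p.CreatedDate >
                        (d.DiscussionUserReads
                           .Where(dur => dur.User.aspnet_User.UserName == fbaUsername)
                           .Select(dur => (DateTime?)dur.ReadToDate)
                           .FirstOrDefault() ?? minDate))
            };

    If the expression really must live in one shared place, wrapping it in an Expression<Func<...>> and composing it with something like LINQKit's AsExpandable() is the usual route, but that is a separate library rather than plain EF4.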

    Read the article

  • JPA: persisting object, parent is ok but child not updated

    - by James.Elsey
    Hello, I have my domain object, Client, I've got a form on my JSP that is pre-populated with its data, I can take in amended values, and persist the object. Client has an abstract entity called MarketResearch, which is then extended by one of three more concrete sub-classes. I have a form to pre-populate some MarketResearch data, but when I make changes and try to persist the Client, it doesn't get saved, can someone give me some pointers on where I've gone wrong? My 3 domain classes are as follows (removed accessors etc) public class Client extends NamedEntity { @OneToOne @JoinColumn(name = "MARKET_RESEARCH_ID") private MarketResearch marketResearch; ... } @Inheritance(strategy = InheritanceType.JOINED) public abstract class MarketResearch extends AbstractEntity { ... } @Entity(name="MARKETRESEARCHLG") public class MarketResearchLocalGovernment extends MarketResearch { @Column(name = "CURRENT_HR_SYSTEM") private String currentHRSystem; ... } This is how I'm persisting public void persistClient(Client client) { if (client.getId() != null) { getJpaTemplate().merge(client); getJpaTemplate().flush(); } else { getJpaTemplate().persist(client); } } To summarize, if I change something on the parent object, it persists, but if I change something on the child object it doesn't. Have I missed something blatantly obvious? Thanks

    Read the article

  • Issue with the Entity Manager and phpunit in Symfony 2

    - by rgazelot
    I have an issue with my Entity Manager in phpunit. This is my test:
        public function testValidChangeEmail()
        {
            $client = self::createAuthClient('user','password');
            $crawler = $client->request('GET', '/user/edit/30');
            $crawler = $client->submit($crawler->selectButton('submit')->form(array(
                'form[email]' => '[email protected]',
            )));
            /*
             * With this em, this works perfectly
             * $em = $client->getContainer()->get('doctrine.orm.entity_manager');
             */
            $user = self::$em->getRepository('MyBundle:User')->findUser('[email protected]');
            die(var_dump($user->getEmail()));
        }
    And this is my WebTestCase, which extends the original WebTestCase:
        class WebTestCase extends BaseWebTestCase
        {
            static protected $container;
            static protected $em;

            static protected function createClient(array $options = array(), array $server = array())
            {
                $client = parent::createClient($options, $server);
                self::$em = $client->getContainer()->get('doctrine.orm.entity_manager');
                self::$container = $client->getContainer();
                return $client;
            }

            protected function createAuthClient($user, $pass)
            {
                return self::createClient(array(), array(
                    'PHP_AUTH_USER' => $user,
                    'PHP_AUTH_PW' => $pass,
                ));
            }
        }
    As you can see, I replace self::$em when I create my client. My issue: in my test, die() gives me the old email and not the new email ([email protected]) that was submitted in the test. However, in my database the [email protected] is correctly saved. When I retrieve my user from the database, I use self::$em. If I use the $em from the comment instead, I retrieve the right new email. I don't understand why, in my WebTestCase, I can't get the new data through the stored Entity Manager...

    Read the article

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously there seem to be two ways it can be done.
    1.
        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }
    2.
        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }
    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.
    3.
        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }
    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?
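
    For completeness, a fourth variant worth considering is wrapping the existing Begin/End pair with Task.Factory.FromAsync; it keeps the Task-based model of option 3 without starting a new thread or blocking on task.Result while the call is in flight. A sketch against the same WcfClient proxy from the question:

        public Task<string> SearchAsync(string input)
        {
            WcfClient client = new WcfClient();

            // Ties the APM Begin/End pair to a Task without occupying a thread pool
            // thread for the duration of the service call.
            return Task<string>.Factory.FromAsync(
                client.BegindoSearch,
                client.EnddoSearch,
                input,
                null);
        }

        // Usage: attach a continuation instead of blocking on .Result.
        // SearchAsync("input").ContinueWith(t => Console.WriteLine(t.Result));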

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Unfortunately, options like the above are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that allows one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact, the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism as a built-in, always-on plugin! OK, let me give you an example: Imagine we have a bunch of users defined in your OS, e.g. we have a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user. And there's a problem right there: MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this is a password in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL. So he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable his access in both MySQL and the OS should he decide that joro's out of the "nice" list. Now MySQL 5.5 to the rescue: Imagine that the smart DBA has created a MySQL server plugin that checks whether the name of the user logging in is a valid and enabled OS account and whether the password supplied to the mysql client matches the OS password, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command: CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server. So you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use a non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if, of course, the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login or not. If it is, the login process will continue as usual, while if it's not, the login will get rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
    Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have 1-to-1 mapping between your external user directory and your mysql user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html
    Primary new sections:
    - Pluggable authentication
    - Proxy users
    - Client plugin C API functions
    Revised sections:
    - New PROXY privilege
    - New proxies_priv grant table
    - Passwords might be external
    - New external_user and proxy_user system variables
    - New --default-auth and --plugin-dir mysql options
    - New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
    - CREATE USER has IDENTIFIED WITH clause to specify auth plugin
    - GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin
    - The data structure for writing client plugins

    Read the article

  • Implementing a chat program and thus involving the majority of networking concepts [closed]

    - by Anisha Kaul
    - Logging the chat messages on the client side.
    - Registration of ALL clients on the server on their start-up.
    - A client should be able to add another client to his list for chatting.
    - The server should be able to switch between clients on the basis of FCFS (multithreading).
    - When a client logs in from the other side, its friend client should be able to see it online.
    Now, to add to this, there can be things like sharing text/voice/video files etc., but then the focus will mostly be on compression. With the chat program, my intention is to learn the majority of "networking" concepts. What else can be implemented (in this chat program) to brush up my "networking" concepts?
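
    As an illustration of a few of those concepts (a listening socket, one thread per client, broadcasting presence and messages), here is a minimal C# sketch of a multi-client chat server; the port number and the line-based "first line is the client's name" framing are arbitrary choices for the sketch, not requirements from the question:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        class ChatServer
        {
            // Registered clients, keyed by the name each client announces on connect.
            static readonly ConcurrentDictionary<string, StreamWriter> Clients =
                new ConcurrentDictionary<string, StreamWriter>();

            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start();
                while (true)
                {
                    TcpClient socket = listener.AcceptTcpClient();   // blocks until a client connects
                    new Thread(() => Handle(socket)).Start();        // one thread per client
                }
            }

            static void Handle(TcpClient socket)
            {
                using (var stream = socket.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true })
                {
                    string name = reader.ReadLine();                 // first line: the client registers its name
                    Clients[name] = writer;
                    Broadcast(name + " is online");                  // presence notification

                    string line;
                    while ((line = reader.ReadLine()) != null)       // remaining lines: chat messages
                        Broadcast(name + ": " + line);

                    StreamWriter removed;
                    Clients.TryRemove(name, out removed);
                    Broadcast(name + " went offline");
                }
            }

            static void Broadcast(string message)
            {
                foreach (StreamWriter client in Clients.Values)
                {
                    try { client.WriteLine(message); }
                    catch (IOException) { /* receiver disconnected; ignore for this sketch */ }
                }
            }
        }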

    Read the article

  • Async CTP (C# 5): How to make WCF work with Async CTP

    - by javarg
    If you have recently downloaded the new Async CTP you will notice that WCF uses the Async Pattern and the Event-based Async Pattern in order to expose asynchronous operations. In order to make your service compatible with the new Async/Await pattern, try using an extension method similar to the following:
        public static class ServiceExtensions
        {
            public static Task<DateTime> GetDateTimeTaskAsync(this Service1Client client)
            {
                return Task.Factory.FromAsync<DateTime>(
                    client.BeginGetDateTime(null, null),
                    ar => client.EndGetDateTime(ar));
            }
        }
    The previous code snippet adds an extension method to the GetDateTime method of the Service1Client WCF proxy. Then use it like this (remember to bring the extension method's namespace into scope in order to use it):
        var client = new Service1Client();
        var dt = await client.GetDateTimeTaskAsync();
    Replace the proxy's type and operation name with the ones you want to await.

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment: Private data (do not want to send or store sensitive data off-site) High dollar investment in current infrastructure Applications currently running well, but may need additional periodic capacity Current applications not designed in a stateless fashion In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization. Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed. 1. Client-Centric Hybrid Patterns In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client such as in the web scenario. 2. System-Centric Hybrid Patterns Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the service Bus or the Access Control Services (ACS) capabilities. Code calls from a series of in-process client application. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract. 
There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application. Connection resiliency - Applications on-premise normally have low-latency and good connection properties, something you’re not always guaranteed in a distributed and hybrid application. Whether a centralized client or a distributed one, the code should be able to handle extended retry logic. Authorization and Access - In a single authorization environment like a Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this with  using The Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.  Consistency and Concurrency - When you have a Relational Database Management System (RDBMS), Consistency and Concurrency are part of the design. In a Service Architecture, you need to plan for sequential message handling and lifecycle. Resources: How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx  General Architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx   

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility on the computing resource, we can provision and deploy an application or service with multiple instances over multiple machines. With the increment of the service instances, how to balance the incoming message and workload would become a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances, I would like call them dispatcher mode and pulling mode.   Dispatcher Mode The dispatcher mode introduces a role which takes the responsible to find the best service instance to process the request. The image below describes the sharp of this mode. There are four clients communicate with the service through the underlying transportation. For example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there’s a dispatcher listening on this URL and try to retrieve all messages. When a message came in, the dispatcher will find a proper service instance to process it. There are three mechanism to find the instance: Round-robin: Dispatcher will always send the message to the next instance. For example, if the dispatcher sent the message to instance 2, then the next message will be sent to instance 3, regardless if instance 3 is busy or not at that moment. Random: Dispatcher will find a service instance randomly, and same as the round-robin mode it regardless if the instance is busy or not. Sticky: Dispatcher will send all related messages to the same service instance. This approach always being used if the service methods are state-ful or session-ful. But as you can see, all of these approaches are not really load balanced. The clients will send messages at any time, and each message might take different process duration on the server side. This means in some cases, some of the service instances are very busy while others are almost idle. For example, if we were using round-robin mode, it could be happened that most of the simple task messages were passed to instance 1 while the complex ones were sent to instance 3, even though instance 1 should be idle. This brings some problem in our architecture. The first one is that, the response to the clients might be longer than it should be. As it’s shown in the figure above, message 6 and 9 can be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 since the dispatcher and round-robin mode. Secondly, if there are many requests came from the clients in a very short period, service instances might be filled by tons of pending tasks and some instances might be crashed. Third, if we are using some cloud platform to host our service instances, for example the Windows Azure, the computing resource is billed by service deployment period instead of the actual CPU usage. This means if any service instance is idle it is wasting our money! Last one, the dispatcher would be the bottleneck of our system since all incoming messages must be routed by the dispatcher. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balance. If we wants more capacity, we have to scale-up, or buy a hardware load balance which is very expensive, as well as scaling-out the service instances. Pulling Mode Pulling mode doesn’t need a dispatcher to route the messages. All service instances are listening to the same transport and try to retrieve the next proper message to process if they are idle. 
Since there is no dispatcher in pulling mode, it requires some features on the transportation. The transportation must support multiple client connection and server listening. HTTP and TCP doesn’t allow multiple clients are listening on the same address and port, so it cannot be used in pulling mode directly. All messages in the transportation must be FIFO, which means the old message must be received before the new one. Message selection would be a plus on the transportation. This means both service and client can specify some selection criteria and just receive some specified kinds of messages. This feature is not mandatory but would be very useful when implementing the request reply and duplex WCF channel modes. Otherwise we must have a memory dictionary to store the reply messages. I will explain more about this in the following articles. Message bus, or the message queue would be best candidate as the transportation when using the pulling mode. First, it allows multiple application to listen on the same queue, and it’s FIFO. Some of the message bus also support the message selection, such as TIBCO EMS, RabbitMQ. Some others provide in memory dictionary which can store the reply messages, for example the Redis. The principle of pulling mode is to let the service instances self-managed. This means each instance will try to retrieve the next pending incoming message if they finished the current task. This gives us more benefit and can solve the problems we met with in the dispatcher mode. The incoming message will be received to the best instance to process, which means this will be very balanced. And it will not happen that some instances are busy while other are idle, since the idle one will retrieve more tasks to make them busy. Since all instances are try their best to be busy we can use less instances than dispatcher mode, which more cost effective. Since there’s no dispatcher in the system, there is no bottleneck. When we introduced more service instances, in dispatcher mode we have to change something to let the dispatcher know the new instances. But in pulling mode since all service instance are self-managed, there no extra change at all. If there are many incoming messages, since the message bus can queue them in the transportation, service instances would not be crashed. All above are the benefits using the pulling mode, but it will introduce some problem as well. The process tracking and debugging become more difficult. Since the service instances are self-managed, we cannot know which instance will process the message. So we need more information to support debug and track. Real-time response may not be supported. All service instances will process the next message after the current one has done, if we have some real-time request this may not be a good solution. Compare with the Pros and Cons above, the pulling mode would a better solution for the distributed system architecture. Because what we need more is the scalability, cost-effect and the self-management.   WCF and WCF Transport Extensibility Windows Communication Foundation (WCF) is a framework for building service-oriented applications. In the .NET world WCF is the best way to implement the service. In this series I’m going to demonstrate how to implement the pulling mode on top of a message bus by extending the WCF. I don’t want to deep into every related field in WCF but will highlight its transport extensibility. 
When we implemented an RPC foundation there are many aspects we need to deal with, for example the message encoding, encryption, authentication and message sending and receiving. In WCF, each aspect is represented by a channel. A message will be passed through all necessary channels and finally send to the underlying transportation. And on the other side the message will be received from the transport and though the same channels until the business logic. This mode is called “Channel Stack” in WCF, and the last channel in the channel stack must always be a transport channel, which takes the responsible for sending and receiving the messages. As we are going to implement the WCF over message bus and implement the pulling mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus. Before we deep into the transport channel, let’s have a look on the message exchange patterns that WCF defines. Message exchange pattern (MEP) defines how client and service exchange the messages over the transportation. WCF defines 3 basic MEPs which are datagram, Request-Reply and Duplex. Datagram: Also known as one-way, or fire-forgot mode. The message sent from the client to the service, and no need any reply from the service. The client doesn’t care about the message result at all. Request-Reply: Very common used pattern. The client send the request message to the service and wait until the reply message comes from the service. Duplex: The client sent message to the service, when the service processing the message it can callback to the client. When callback the service would be like a client while the client would be like a service. In WCF, each MEP represent some channels associated. MEP Channels Datagram IInputChannel, IOutputChannel Request-Reply IRequestChannel, IReplyChannel Duplex IDuplexChannel And the channels are created by ChannelListener on the server side, and ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement. The TransportBindingElement is created by the Binding, which can be defined as a new binding or from a custom binding. For more information about the transport channel mode, please refer to the MSDN document. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP. After investigated the WCF transport architecture, channel mode and MEP, we finally identified what we should do to extend our message bus based transport layer. They are: Binding: (Optional) Defines the channel elements in the channel stack and added our transport binding element at the bottom of the stack. But we can use the build-in CustomBinding as well. TransportBindingElement: Defines which MEP is supported in our transport and create the related ChannelListener and ChannelFactory. This also defines the scheme of the endpoint if using this transport. ChannelListener: Create the server side channel based on the MEP it’s. We can have one ChannelListener to create channels for all supported MEPs, or we can have ChannelListener for each MEP. In this series I will use the second approach. ChannelFactory: Create the client side channel based on the MEP it’s. We can have one ChannelFactory to create channels for all supported MEPs, or we can have ChannelFactory for each MEP. In this series I will use the second approach. 
Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension works we also need to implement some scaffold stuff. For example we need some classes to send and receive message though out message bus. We also need some codes to read and write the WCF message, etc.. These are not necessary but would be very useful in our example.   Message Bus There is only one thing remained before we can begin to implement our scaling-out support WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And I have said before we can use any message bus production if it’s satisfied with our requests. Here I would like to introduce an interface to separate the message bus from the WCF. This allows us to implement the bus operations by any kinds bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain one by one. The SendRequest method takes the responsible for sending the request message into the bus. The parameters description are: message: The WCF message content. fromClient: Indicates if this message was came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kinds of channel was created, which will be explained in the following articles. to: The channel ID that this message should be received. In Request-Reply and Duplex MEP this is necessary since the reply message must be received by the channel which sent the related request message. The SendReply method takes the responsible for sending the reply message. It’s very similar as the previous one but no “from” parameter. This is because it’s no need to reply a reply message again in any MEPs. The Receive method takes the responsible for waiting for a incoming message, includes the request message and specified reply message. It returned a BusMessage object, which contains some information about the channel information. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install the TIBCO EMS or any other message bus products, I will implement an in process memory bus. This bus is only for test and sample purpose. It can only be used if the service and client are in the same process. Very straightforward. 
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also setup a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall, no dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using neither TCP1723 nor GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly. We have the issues no matter if there are active incoming connections or not. I'm not sure what my next debugging step would be; any suggestions? EDIT: The event log on the server has the following warning from RasMan: A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets. Obviously this points to GRE being a potential problem. But seeing as I have other clients connectiong without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I'm suspecting this might be a red herring. EDIT: Here are the anonymized events logged by the client in chronological order: CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are: Dial-in User = XXX\YYY VpnStrategy = PPTP DataEncryption = Require PrerequisiteEntry = AutoLogon = No UseRasCredentials = Yes Authentication Type = CHAP/MS-CHAPv2 Ipv4DefaultGateway = No Ipv4AddressAssignment = By Server Ipv4DNSServerAssignment = By Server Ipv6DefaultGateway = Yes Ipv6AddressAssignment = By Server Ipv6DNSServerAssignment = By Server IpDnsFlags = Register primary domain suffix IpNBTEnabled = Yes UseFlags = Private Connection ConnectOnWinlogon = No. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY. 
CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806. Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request" While the server shows the incoming client requests, but apparently without replying: Given that this is GRE traffic, I think rules out the GRE traffic being blocked. Question is, why doesn't the server reply? This is the Configuration Request the server receives from the non functioning client (meaning no response is sent to the client request): And this is the Configuration Request the server receives from the working client: To me they seem identical, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.

    Read the article

  • Sun Ray Hardware Last Order Dates & Extension of Premier Support for Desktop Virtualization Software

    - by Adam Hawley
    In light of the recent announcement to end new feature development for Oracle Virtual Desktop Infrastructure Software (VDI), Oracle Sun Ray Software (SRS), Oracle Virtual Desktop Client (OVDC) Software, and Oracle Sun Ray Client hardware (3, 3i, and 3 Plus), there have been questions and concerns regarding what this means for customers with new or existing deployments. The following updates clarify some of these commonly asked questions.
    Extension of Premier Support for Software
    Though there will be no new feature additions to these products, customers will have access to maintenance update releases for Oracle Virtual Desktop Infrastructure and Sun Ray Software, including Oracle Virtual Desktop Client and Sun Ray Operating Software (SROS), until Premier Support ends. To ensure that customer investments in these products are protected, Oracle Premier Support for these products has been extended by 3 years to the following dates:
    - Sun Ray Software - November 2017
    - Oracle Virtual Desktop Infrastructure - March 2017
    Note that OVDC support is also extended to the above dates, since OVDC is licensed by default as part of the SRS and VDI products. As a reminder, this only affects the products listed above. Oracle Secure Global Desktop and Oracle VM VirtualBox will continue to be enhanced with new features from time to time and, as a result, they are not affected by the changes detailed in this message. The extension of support means that customers under a support contract will still be able to file service requests through Oracle Support, and Oracle will continue to provide the utmost level of support to our customers, as expected, until the published Premier Support end date. Following the end of Premier Support, Sustaining Support remains for an 'indefinite' period of time.
    Sun Ray 3 Series Clients - Last Order Dates
    For Sun Ray Client hardware, customers can continue to purchase Sun Ray Client devices until the following last order dates:
    Product             | Marketing Part Number        | Last Order Date    | Last Ship Date
    Sun Ray 3 Plus      | TC3-P0Z-00, TC3-PTZ-00 (TAA) | September 13, 2013 | February 28, 2014
    Sun Ray 3 Client    | TC3-00Z-00                   | February 28, 2014  | August 31, 2014
    Sun Ray 3i Client   | TC3-I0Z-00                   | February 28, 2014  | August 31, 2014
    Payflex Smart Cards | X1403A-N, X1404A-N           | February 28, 2014  | August 31, 2014
    Note the difference in the Last Order Date for the Sun Ray 3 Plus (September 13, 2013) compared to the other products, which have a Last Order Date of February 28, 2014. The rapidly approaching date for the Sun Ray 3 Plus is due to a supplier phasing out production of a key component of the 3 Plus. Given September 13 is unfortunately quite soon, we strongly encourage you to place your last-time buy as soon as possible to maximize Oracle's ability to fulfill your order. Keep in mind you can schedule shipments to be delivered as late as the end of February 2014, but the last day to order is September 13, 2013. Customers wishing to purchase other models - Sun Ray 3 Clients and/or Sun Ray 3i Clients - have additional time (until February 28, 2014) to assess their needs and to allow fulfillment of last-time orders. Please note that availability of supply cannot be absolutely guaranteed up to the last order dates, and we strongly recommend placing last-time buys as early as possible. Warranty replacements for Sun Ray Client hardware for customers covered by Oracle Hardware Systems Support contracts will be available beyond last order dates, per Oracle's policy found on Oracle.com here.
Per that policy, Oracle intends to provide replacement hardware for up to 5 years beyond the last ship date, but hardware may not be available beyond the 5 year period after the last ship date for reasons beyond Oracle's control. In any case, by design, Sun Ray Clients have an extremely long lifespan  and mean time between failures (MTBF) - much longer than PCs, and over the years we have continued to see first- and second generations of Sun Rays still in daily use.  This is no different for the Sun Ray 3, 3i, and 3 Plus.   Because of this, and in addition to Oracle's continued support for SRS, VDI, and SROS, Sun Ray and Oracle VDI deployments can continue to expand and exist as a viable solution for some time in the future. Continued Availability of Product Licenses and Support Oracle will continue to offer all existing software licenses, and software and hardware support including: Product licenses and Premier Support for Sun Ray Software and Oracle Virtual Desktop Infrastructure Premier Support for Operating Systems (for Sun Ray Operating Software maintenance upgrades/support)  Premier Support for Systems (for Sun Ray Operating Software maintenance upgrades/support and hardware warranty) Support renewals For More Information For more information, please refer to the following documents for specific dates and policies associated with the support of these products: Document 1478170.1 - Oracle Desktop Virtualization Software and Hardware Lifetime Support Schedule Document 1450710.1 - Sun Ray Client Hardware Lifetime schedule Document 1568808.1 - Document Support Policies for Discontinued Oracle Virtual Desktop Infrastructure, Sun Ray Software and Hardware and Oracle Virtual Desktop Client Development For Sales Orders and Questions Please contact your Oracle Sales Representative or Saurabh Vijay ([email protected])

    Read the article

  • WebLogic JDBC Use of Oracle Wallet for SSL

    - by Steve Felts
    Introduction

    Secure Sockets Layer (SSL) can be used to secure the connection between the middle-tier “client”, WebLogic Server (WLS) in this case, and the Oracle database server. Data between WLS and the database can be encrypted. The server can be authenticated, so you have proof that the database can be trusted, by validating a certificate from the server. The client can be authenticated, so that the database only accepts connections from clients that it trusts.

    Similar to the discussion in an earlier article about using the Oracle wallet for database credentials, the Oracle wallet can also be used with SSL to store the keys and certificates. By using it correctly, clear-text passwords can be eliminated from the JDBC configuration, and client/server configuration can be simplified by sharing the wallet across multiple datasources.

    There is a very good Oracle Technical White Paper on using SSL with the Oracle thin driver at http://www.oracle.com/technetwork/database/enterprise-edition/wp-oracle-jdbc-thin-ssl-130128.pdf [LINK1]. The link http://www.oracle.com/technetwork/middleware/weblogic/index-087556.html [LINK2] describes how to use WebLogic Server with Oracle JDBC Driver SSL. This article is a guide to the steps needed for the various available options; use the links above for details.

    SSL from the driver to the database server is basically turned on by specifying a protocol of “tcps” in the URL. However, there is a fair amount of setup needed. Also remember that SSL adds a performance overhead.

    Creating the wallets

    The common use cases are:
    1. “data encryption and server-only authentication”, requiring just a trust store, or
    2. “data encryption and authentication of both tiers” (client and server), requiring a trust store and a key store.

    It is recommended to use the auto-login wallet type so that clear-text passwords are not needed in the datasource configuration to open the wallet. The store type for an auto-login wallet is “SSO” (Single Sign On), not “JKS” or “PKCS12” as in [LINK2]. The file name is “cwallet.sso”.

    Wallets are created using the orapki tool. They need to be created based on the usage (encryption and/or authentication). This is discussed in detail in [LINK1] in Appendix B, or in the Advanced Security Administrator’s Guide of the Database documentation.

    Database Server Configuration

    It is necessary to update the sqlnet.ora and listener.ora files with the directory location of the wallet using WALLET_LOCATION. These files also indicate whether or not SSL_CLIENT_AUTHENTICATION is being used (true or false).

    The Oracle Listener must also be configured to use the TCPS protocol. The recommended port is 2484.

        LISTENER =
          (ADDRESS_LIST=
            (ADDRESS=(PROTOCOL=tcps)(HOST=servername)(PORT=2484)))

    WebLogic Server Classpath

    The WebLogic Server CLASSPATH must have three additional security files. The files that need to be added to the WLS CLASSPATH are:

        $MW_HOME/modules/com.oracle.osdt_cert_1.0.0.0.jar
        $MW_HOME/modules/com.oracle.osdt_core_1.0.0.0.jar
        $MW_HOME/modules/com.oracle.oraclepki_1.0.0.0.jar

    One way to do this is to add them to the PRE_CLASSPATH environment variable for use with the standard WebLogic scripts.

    Setting the Oracle Security Provider

    It’s necessary to enable the Oracle PKI provider on the client side. This can either be done statically, by updating the java.security file under the JRE, or dynamically, by setting it in a WLS startup class using

        java.security.Security.insertProviderAt(new oracle.security.pki.OraclePKIProvider(), 3);

    See the full example of the startup class in [LINK2].

    Datasource Configuration

    When creating a WLS datasource, set the PROTOCOL in the URL to tcps, as in the following.

        jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=host)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=myservice)))

    For encryption and server authentication, use the datasource connection properties:
    - javax.net.ssl.trustStore=location of wallet file on the client
    - javax.net.ssl.trustStoreType=”SSO”

    For client authentication, use the datasource connection properties:
    - javax.net.ssl.keyStore=location of wallet file on the client
    - javax.net.ssl.keyStoreType=”SSO”

    Note that the driver connection properties for the wallet require a file name, not a directory name.

    Active GridLink ONS over SSL

    For completeness, there is another SSL usage for WLS datasources. The communication with the Oracle Notification Service (ONS), for load balancing information and node up/down events, can also use SSL.

    Create an auto-login wallet and use the wallet on the client and server. The following is a sample sequence to create a test wallet for use with ONS.

        orapki wallet create -wallet ons -auto_login -pwd ONS_Wallet
        orapki wallet add -wallet ons -dn "CN=ons_test,C=US" -keysize 1024 -self_signed -validity 9999 -pwd ONS_Wallet
        orapki wallet export -wallet ons -dn "CN=ons_test,C=US" -cert ons/cert.txt -pwd ONS_Wallet

    On the database server side, it’s necessary to define the wallet file directory in the file $CRS_HOME/opmn/conf/ons.config and run onsctl stop/start.

    When configuring an Active GridLink datasource, the connection to the ONS must be defined. In addition to the host and port, the wallet file directory must be specified. By not giving a password, an SSO wallet is assumed.

    Summary

    To use SSL with the Oracle thin driver without any clear-text passwords, use an SSO Oracle Wallet. SSL support in the Oracle thin driver is available starting in 10g Release 2.
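    The following is a minimal sketch of the dynamic approach: a WLS startup class that registers the Oracle PKI provider. It assumes the oracle.security.pki.OraclePKIProvider class from the oraclepki jar listed above is on the server classpath; the class name is made up for illustration, and the full, authoritative example is in [LINK2].

        // Hypothetical WLS startup class; register it as a startup class in the WLS console.
        // It inserts the Oracle PKI provider so SSO wallets can be opened without a
        // clear-text password.
        import java.security.Security;
        import oracle.security.pki.OraclePKIProvider;

        public class OraclePKIStartup {
            public static void main(String[] args) {
                // Position 3 matches the recommendation in [LINK2].
                Security.insertProviderAt(new OraclePKIProvider(), 3);
                System.out.println("OraclePKIProvider registered at position 3");
            }
        }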

    Read the article

  • ScriptAlias makes requests match too many Location blocks. What is going on?

    - by brain99
    We wish to restrict access on our development server to those users who have a valid SSL client certificate. We are running Apache 2.2.16 on Debian 6. However, for some sections (mainly git-http, set up with gitolite on https://my.server/git/) we need an exception, since many git clients don't support SSL client certificates. I have succeeded in requiring client cert authentication for the server, and in adding exceptions for some locations. However, it seems this does not work for git. The current setup is as follows:

        SSLCACertificateFile ssl-certs/client-ca-certs.crt
        <Location />
            SSLVerifyClient require
            SSLVerifyDepth 2
        </Location>
        # this works
        <Location /foo>
            SSLVerifyClient none
        </Location>
        # this does not
        <Location /git>
            SSLVerifyClient none
        </Location>

    I have also tried an alternative solution, with the same results:

        # require authentication everywhere except /git and /foo
        <LocationMatch "^/(?!git|foo)">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    In both these cases, a user without a client certificate can perfectly access my.server/foo/, but not my.server/git/ (access is refused because no valid client certificate is given). If I disable SSL client certificate authentication completely, my.server/git/ works OK.

    The ScriptAlias problem

    Gitolite is set up using the ScriptAlias directive. I have found that the problem occurs with any similar ScriptAlias:

        # Gitolite
        ScriptAlias /git/ /path/to/gitolite-shell/
        ScriptAlias /gitmob/ /path/to/gitolite-shell/
        # My test
        ScriptAlias /test/ /path/to/test/script/

    Note that /path/to/test/script is a file, not a directory; the same goes for /path/to/gitolite-shell/. My test script simply prints out the environment, super simple:

        #!/usr/bin/perl
        print "Content-type:text/plain\n\n";
        print "TEST\n";
        @keys = sort(keys %ENV);
        foreach (@keys) {
            print "$_ => $ENV{$_}\n";
        }

    It seems that if I go to https://my.server/test/someLocation, any SSLVerifyClient directives are applied which are in Location blocks that match /test/someLocation or just /someLocation. If I have the following config:

        <LocationMatch "^/f">
            SSLVerifyClient require
            SSLVerifyDepth 2
        </LocationMatch>

    then the following URL requires a client certificate: https://my.server/test/foo. However, the following URL does not: https://my.server/test/somethingElse/foo

    Note that this only seems to apply to SSL configuration. The following has no effect whatsoever on https://my.server/test/foo:

        <LocationMatch "^/f">
            Order allow,deny
            Deny from all
        </LocationMatch>

    However, it does block access to https://my.server/foo. This presents a major problem for cases where I have some project running at https://my.server/project (which has to require SSL client certificate authorization), and there is a git repository for that project at https://my.server/git/project which cannot require an SSL client certificate. Since the /git/project URL also gets matched against /project Location blocks, such a configuration seems impossible given my current findings.

    Question: Why is this happening, and how do I solve my problem? In the end, I want to require SSL client certificate authorization for the whole server except for /git and /someLocation, with as minimal configuration as possible (so I don't have to modify the configuration each time something new is deployed or a new git repository is added).

    Note: I rewrote my question (instead of just adding more updates at the bottom) to take into account my new findings and hopefully make this more clear.

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
    The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like:

        @WebSocketEndpoint("/endpoint")
        public class MyEndpoint {
            public void receiveTextMessage(String message) {
                . . .
            }
        }

    A method whose first parameter is of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example:

        public void receiveTextMessage(String message, Session session) {
            . . .
        }

    The return type is void, and that means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as:

        public String receiveTextMessage(String message) {
            String response = . . .
            . . .
            return response;
        }

    In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send the response using one of the sendXXX methods in Session, when and if needed.

        public void receiveTextMessage(String message, Session session) {
            . . .
            RemoteEndpoint remote = session.getRemote();
            remote.sendString(...);
            . . .
            remote.sendString(...);
            . . .
            remote.sendString(...);
        }

    This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as:

        websocket.send(myTextField.value);

    where myTextField is a text field in the web page.

    A binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by one of the subclasses of ArrayBufferView listed below:

    Int8Array (signed 8-bit integer or char)
    Uint8Array (unsigned 8-bit integer or unsigned char)
    Int16Array (signed 16-bit integer or short)
    Uint16Array (unsigned 16-bit integer or unsigned short)
    Int32Array (signed 32-bit integer or int)
    Uint32Array (unsigned 32-bit integer or unsigned int)
    Float32Array (signed 32-bit float or float)
    Float64Array (signed 64-bit float or double)

    WebSocket can send binary data using an ArrayBuffer with a view defined by a subclass of ArrayBufferView, or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as:

        blob = new Blob([myField2.value]);
        websocket.send(blob);

    where myField2 is a text field in the web page. The WebSocket client can send the message using ArrayBuffer as:

        var buffer = new ArrayBuffer(10);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = i;
        }
        websocket.send(buffer);

    A concrete implementation of receiving the binary message may look like:

        @WebSocketMessage
        public void echoBinary(ByteBuffer data, Session session) throws IOException {
            System.out.println("echoBinary: " + data);
            for (byte b : data.array()) {
                System.out.print(b);
            }
            session.getRemote().sendBytes(data);
        }

    This method is just printing the binary data for verification, but you may actually be storing it in a database or converting it to an image or something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you:

    JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    TOTD #183 - Getting Started with WebSocket in GlassFish
    TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark

    Subsequent blogs will discuss the following topics (not necessarily in that order) ...

    Error handling
    Custom payloads using encoder/decoder
    Interface-driven WebSocket endpoint
    Java client API
    Client and Server configuration
    Security
    Subprotocols
    Extensions
    Other topics from the API
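    Putting the pieces above together, here is a minimal sketch of an endpoint that echoes both text and binary frames back to the sender. It uses the draft JSR 356 annotations and the Session/RemoteEndpoint calls exactly as shown in this post; the endpoint path and class name are made up, and the WebSocket API import statements are omitted because package names may differ between draft builds.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        // WebSocket annotation, Session, and RemoteEndpoint imports follow the
        // draft API used in this post and are omitted here.

        // Hypothetical echo endpoint combining the text and binary handlers shown above.
        @WebSocketEndpoint("/echo")
        public class EchoEndpoint {

            // Invoked for text frames; the payload is mapped to the String parameter.
            @WebSocketMessage
            public void echoText(String message, Session session) throws IOException {
                session.getRemote().sendString(message);
            }

            // Invoked for binary frames; the payload is mapped to the ByteBuffer parameter.
            @WebSocketMessage
            public void echoBinary(ByteBuffer data, Session session) throws IOException {
                // See TYRUS-51 before returning binary data via the method return type instead.
                session.getRemote().sendBytes(data);
            }
        }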

    Read the article

  • when should a database table be broken into multiple tables with relations?

    - by GSto
    I have an application that needs to store client data, and part of that is some data about their employer as well. Assuming that a client can only have one employer, and that the chance of people having identical employer data is slim to none, which schema would make more sense to use?

    Schema 1

        Client Table:
        -------------------
        id int
        name varchar(255),
        email varchar(255),
        address varchar(255),
        city varchar(255),
        state char(2),
        zip varchar(16),
        employer_name varchar(255),
        employer_phone varchar(255),
        employer_address varchar(255),
        employer_city varchar(255),
        employer_state char(2),
        employer_zip varchar(16)

    Schema 2

        Client Table
        ------------------
        id int
        name varchar(255),
        email varchar(255),
        address varchar(255),
        city varchar(255),
        state char(2),
        zip varchar(16),

        Employer Table
        ---------------------
        id int
        name varchar(255),
        phone varchar(255),
        address varchar(255),
        city varchar(255),
        state char(2),
        zip varchar(16)
        patient_id int

    Part of me thinks that since these are clearly two different 'objects' in the real world, separating them out into two different tables makes sense. However, since a client will always have an employer, I'm also not seeing any real benefits to separating them out, and it would make querying data about clients more complex. Is there any benefit / reason for creating two tables in a situation like this one instead of one?
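    For concreteness, here is a small, hypothetical JDBC sketch of what "querying data about clients" looks like under each schema. Table and column names are taken from the question; the JDBC URL and the client id are placeholders.

        import java.sql.*;

        public class ClientQueries {
            // Schema 1: employer data lives on the client row, so no join is needed.
            static final String SINGLE_TABLE =
                "SELECT name, employer_name, employer_city FROM client WHERE id = ?";

            // Schema 2: employer data requires a join on the foreign key.
            static final String TWO_TABLES =
                "SELECT c.name, e.name AS employer_name, e.city AS employer_city " +
                "FROM client c JOIN employer e ON e.patient_id = c.id WHERE c.id = ?";

            public static void main(String[] args) throws SQLException {
                // Placeholder connection string; substitute a real driver URL.
                try (Connection con = DriverManager.getConnection("jdbc:yourdb://localhost/demo");
                     PreparedStatement ps = con.prepareStatement(TWO_TABLES)) {
                    ps.setInt(1, 42);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("name") + " works at "
                                    + rs.getString("employer_name"));
                        }
                    }
                }
            }
        }

    Either schema answers the same question; the trade-off being weighed is whether the join (and the extra table to maintain) buys anything, given the stated one-to-one relationship.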

    Read the article

  • How to do validation on both client and server side for a service which is a stored procedure (returns a complex type)

    - by Tai
    Hi, I am doing Silverlight 4. In my database, I have a stored procedure (having two parameters) which returns rows (with extra fields), so I have to make a complex type for those rows in my Models and make a service with a function import that calls that stored procedure. RIA Services will automatically create a matching Entity (for the complex type) and an operation for me. However, I don't know how to validate the parameters of the operation on both the client and server side. For example, a parameter must be an integer only (and greater than 10), or a datetime only. Below is my XAML code. I am using the DomainDataSource control and don't know how to validate the two parameters; there are two TextBoxes that let the user type in the parameter values. Please help me, thank you.

        <riaControls:DomainDataSource AutoLoad="False" d:DesignData="{d:DesignInstance my1:USPFinancialAccountHistory, CreateList=true}" Height="0" LoadedData="uSPFinancialAccountHistoryDomainDataSource_LoadedData" Name="uSPFinancialAccountHistoryDomainDataSource" QueryName="GetFinancialAccountHistoryQuery" Width="0" Margin="0,0,705,32">
          <riaControls:DomainDataSource.DomainContext>
            <my:USPFinancialAccountHistoryContext />
          </riaControls:DomainDataSource.DomainContext>
          <riaControls:DomainDataSource.QueryParameters>
            <riaControls:Parameter ParameterName="fiscalYear" Value="{Binding ElementName=fiscalYearTextBox, Path=Text}" />
            <riaControls:Parameter ParameterName="fiscalPeriod" Value="{Binding ElementName=fiscalPeriodTextBox, Path=Text}" />
          </riaControls:DomainDataSource.QueryParameters>
        </riaControls:DomainDataSource>
        <StackPanel Height="30" HorizontalAlignment="Left" Orientation="Horizontal" VerticalAlignment="Top">
          <sdk:Label Content="Fiscal Year:" Margin="3" VerticalAlignment="Center" />
          <TextBox Name="fiscalYearTextBox" Width="60" />
          <sdk:Label Content="Fiscal Period:" Margin="3" VerticalAlignment="Center" />
          <TextBox Name="fiscalPeriodTextBox" Width="60" />
          <Button Command="{Binding Path=LoadCommand, ElementName=uSPFinancialAccountHistoryDomainDataSource}" Content="Load" Margin="3" Name="uSPFinancialAccountHistoryDomainDataSourceLoadButton" />
        </StackPanel>
        <telerik:RadGridView ItemsSource="{Binding ElementName=uSPFinancialAccountHistoryDomainDataSource, Path=Data}" Name="uSPFinancialAccountHistoryRadGridView" Grid.Row="1" IsReadOnly="True" DataLoadMode="Asynchronous" AutoGenerateColumns="False" ShowGroupPanel="False">
          <telerik:RadGridView.Columns>
            <telerik:GridViewDataColumn Header="Account Number" DataMemberBinding="{Binding AccountNumber}"/>
            <telerik:GridViewDataColumn Header="Department Number" DataMemberBinding="{Binding DepartmentNumber}"/>
            <telerik:GridViewDataColumn Header="Period code" DataMemberBinding="{Binding PeriodCode}" />
            <telerik:GridViewDataColumn Header="Total Debit" DataMemberBinding="{Binding TotalDebit}" DataFormatString="{}{0:C2}"/>
            <telerik:GridViewDataColumn Header="Total Credit" DataMemberBinding="{Binding TotalCredit}" DataFormatString="{}{0:C2}"/>
            <telerik:GridViewDataColumn Header="Period Total" DataMemberBinding="{Binding PeriodTotal}" DataFormatString="{}{0:C2}"/>
            <telerik:GridViewDataColumn Header="Year To Date" DataMemberBinding="{Binding YearToDate}" DataFormatString="{}{0:C2}"/>
          </telerik:RadGridView.Columns>
        </telerik:RadGridView>

    Read the article

  • Why is Apache seg faulting?

    - by Jamie Howard
    We have a production server that seems to seg fault a few times every day. The fault is picked up by Apache and logged in the error log, but there seems to be no traffic around the time. If it's a request generating the fault, then it looks like it happens before any other logging is made, so I can't see how it's happening, which makes it very hard to debug. Our setup is Linux 64-bit CentOS 5.3. Apache is loaded with the following modules:

        apachectl -t -D DUMP_MODULES | more
        Loaded Modules:
        core_module (static) mpm_prefork_module (static) http_module (static) so_module (static) auth_basic_module (shared) auth_digest_module (shared) authn_file_module (shared) authn_alias_module (shared) authn_anon_module (shared) authn_dbm_module (shared) authn_default_module (shared) authz_host_module (shared) authz_user_module (shared) authz_owner_module (shared) authz_groupfile_module (shared) authz_dbm_module (shared) authz_default_module (shared) ldap_module (shared) authnz_ldap_module (shared) include_module (shared) log_config_module (shared) logio_module (shared) env_module (shared) ext_filter_module (shared) mime_magic_module (shared) expires_module (shared) deflate_module (shared) headers_module (shared) usertrack_module (shared) setenvif_module (shared) mime_module (shared) dav_module (shared) status_module (shared) autoindex_module (shared) info_module (shared) dav_fs_module (shared) vhost_alias_module (shared) negotiation_module (shared) dir_module (shared) actions_module (shared) speling_module (shared) userdir_module (shared) alias_module (shared) rewrite_module (shared) proxy_module (shared) proxy_balancer_module (shared) proxy_ftp_module (shared) proxy_http_module (shared) proxy_connect_module (shared) cache_module (shared) suexec_module (shared) disk_cache_module (shared) file_cache_module (shared) mem_cache_module (shared) cgi_module (shared) version_module (shared) security2_module (shared) unique_id_module (shared) fcgid_module (shared) php5_module (shared) proxy_ajp_module (shared) ssl_module (shared)

    Here's an excerpt from the Apache error log:

        [Mon Mar 15 06:39:25 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 07:41:31 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:24:16 2010] [error] [client 67.19.250.146] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:43:46 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:54:02 2010] [error] [client 74.208.123.71] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 09:09:51 2010] [notice] child pid 2138 exit signal Segmentation fault (11), possible coredump in /tmp
        [Mon Mar 15 09:45:27 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 09:49:05 2010] [error] [client 190.12.113.196] File does not exist: /var/www/vhosts/default/htdocs/phpMyAdmin
        [Mon Mar 15 09:49:06 2010] [error] [client 190.12.113.196] File does not exist: /var/www/vhosts/default/htdocs/PMA

    And the access log around the same time (09:09:51):

        213.246.222.74 - - [15/Mar/2010:08:43:46 +0000] "GET /" 400 561 "-" "-"
        208.80.193.28 - - [15/Mar/2010:08:52:20 +0000] "GET / HTTP/1.0" 301 313 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; SU 2.009)"
        74.208.123.71 - - [15/Mar/2010:08:54:02 +0000] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 298 "-" "-"
        81.149.146.231 - - [15/Mar/2010:09:15:18 +0000] "GET /zabbix/ HTTP/1.1" 200 3565 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_2; en-us) AppleWebKit/531.21.8 (KHTML, like Gecko) Version/4.0.4 Safari/531.21.10"
        81.158.71.196 - - [15/Mar/2010:09:16:06 +0000] "GET / HTTP/1.1" 301 313 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.0.18) Gecko/2010020219 Firefox/3.0.18"
        213.246.222.74 - - [15/Mar/2010:09:45:27 +0000] "GET /" 400 561 "-" "-"
        213.246.222.74 - - [15/Mar/2010:09:45:27 +0000] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 298 "-" "-"
        190.12.113.196 - - [15/Mar/2010:09:49:05 +0000] "GET /phpMyAdmin/main.php HTTP/1.0" 404 295 "-" "-"

    So, as you can see, there's no access logged around the time of the fault!! How annoying :s

    I enabled core dumps and here is the backtrace:

        #0 0x00007f9c8c8a858b in memcpy () from /lib64/libc.so.6
        No symbol table info available.
        #1 0x00007f9c8cfb066d in apr_pstrcat (a=<value optimized out>) at strings/apr_strings.c:165
            cp = 0x1fa6b "\205¦H\211¦t`¦\003"
            argp = 0x7f9c9ad790e8 "Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Re"...
            res = 0x0
            saved_lengths = {129643, 2, 43, 140310399395576, 0, 140310394592712}
            nargs = <value optimized out>
            len = <value optimized out>
            adummy = {{gp_offset = 16, fp_offset = 32668, overflow_arg_area = 0x7fff968a0ec0, reg_save_area = 0x7fff968a0de0}}
        #2 0x00007f9c8cfb1bf9 in apr_table_merge (t=0x7f9c8f83b148, key=0x7f9c85a465fe "Vary", val=0x7f9c9ad99070 "Referer, Referer, Referer, Referer, Referer") at tables/apr_tables.c:688
            next_elt = (apr_table_entry_t *) 0x7f9c8f83b270
            end_elt = (apr_table_entry_t *) 0x7f9c8f83b270
            checksum = <value optimized out>
            hash = 22
        #3 0x00007f9c85a42cfa in ?? () from /etc/httpd/modules/mod_rewrite.so
        No symbol table info available.
        #4 0x00007f9c85a44022 in ?? () from /etc/httpd/modules/mod_rewrite.so
        No symbol table info available.
        #5 0x00007f9c8e87bd1a in ap_run_fixups () from /usr/sbin/httpd
        No symbol table info available.
        #6 0x00007f9c8e88e8f8 in ap_process_request () from /usr/sbin/httpd
        No symbol table info available.
        #7 0x00007f9c8e88bb40 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #8 0x00007f9c8e887ca2 in ap_run_process_connection () from /usr/sbin/httpd
        No symbol table info available.
        #9 0x00007f9c8e892849 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #10 0x00007f9c8e892ada in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #11 0x00007f9c8e892b90 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #12 0x00007f9c8e89387b in ap_mpm_run () from /usr/sbin/httpd
        No symbol table info available.
        #13 0x00007f9c8e86de48 in main () from /usr/sbin/httpd
        No symbol table info available.

    Can anyone shed any light on how to move forward with this? I can confirm that the server is operational and doesn't appear to be misbehaving - the failures are so infrequent that I haven't seen it do one while making a request myself. Really appreciate any help! Cheers!

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >