Search Results

Search found 21091 results on 844 pages for 'dsl client'.

Page 125/844 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • What is the best way to deliver the OS (Linux) and web application to the client?

    - by Fernando Costa
    After a year programming a web-based business management system, I find my idea split into two different ways of doing what I'm doing. I will try to explain in the following lines. First I will describe my environment:
    Web server: Apache, nginx
    Programming languages: PHP, Shell Script, JavaScript, SQL
    Database: MySQL
    Operating system: Linux, UNIX (all distros; works on Windows if configured manually)
    Authentication server: FreeRADIUS
    First situation: I have my application running on the environment just described. As my application is a SaaS app, I have my own server to run it all, and customers pay to use it as a service accessed through a web browser.
    Second situation: The same as before, but with one big difference: everything (the whole environment) is installed on the customer's premises, so I need to encrypt all my code (this includes the PHP and shell scripts). I think this second situation is more difficult, but I would like to hear it from different points of view.

    Read the article

  • How to set up an easy-to-use proxy for the whole system with WinXP client and server?

    - by Pekka
    I am working intensively with a colleague in the Canary Islands. We speak through Live Messenger and work together using RDP software. She has frequent problems with connections to certain big-name and small-name sites (among others live.com, google.com, gmx.de), very likely caused by the Spanish provider (the connections simply time out; this has been going on for weeks already). I have been thinking about setting up my computer as a proxy to make these connections work. I have a DSL connection and am behind a NAT-capable router that I control. Does anybody know a simple, "one-click" way to transport ALL network traffic through a remote proxy, without having to set proxy settings for each application that uses the Internet? VPN is not an option: my firewall supposedly supports protocol 47 and the like, but I have never succeeded in getting an incoming VPN connection to work. I can, however, redirect normal traffic using NAT. A VPN solution that does not need exotic protocols would also be an option.

    Read the article

  • Server 2003 and XP Client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router, a Windows 2003 R2 Server router with all the latest updates, will drop packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved, only fully public IP addresses. No firewalls are running either; all have been disabled. There are no packet filters on any interfaces anywhere either. I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 x64 R2 system (running Virtual Server 2005, as I don't have an Intel VT compatible chip yet). The edge router can access any external HTTP site just fine, no issues. However, the Windows XP machine is only able to access certain sites.
    These work: www.google.com, www.txstate.edu, www.workintexas.com, www.thedailywtf.com.
    These don't: www.yahoo.com, www.utexas.edu, en.wikipedia.org, slashdot.org, www.bing.com.
    I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n, and that connection replicates the issue as well.
    The network setup, with my statically assigned IP block x.x.x.168/29:
    DSL Modem ----- PPPoE connection ----- x.x.x.169 [EdgeRouter]
    [EdgeRouter] x.x.x.170 ----- virtual Ethernet ----- x.x.x.174 [Test2]
    Test2's default gateway is x.x.x.170, and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box), everything works just fine... I'm at my wit's end; I have no idea what's causing this.
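    (A sketch of the netcat test described above, not from the original post; the host name is one of the failing sites, and the HTTP/1.1 request line and extra headers are my additions:)
        printf 'GET / HTTP/1.1\r\nHost: www.yahoo.com\r\nConnection: close\r\n\r\n' | nc www.yahoo.com 80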

    Read the article

  • How to connect 2 routers (Asmax and D-link)

    - by piobyz
    I just bought a new router, a D-Link DSL-2641B, and want to connect it to another one, provided by my ISP, an Asmax AR 804MP. Previously I had a Linksys WRT350N, and there was no problem: with an Ethernet cable plugged into one of the LAN ports on the Asmax and the INTERNET (RJ45) port on the Linksys, the connection used the PPPoE protocol and worked OK. The D-Link has a DSL (RJ11) port (which I don't want to use as an Asmax replacement, since there is a separate Ethernet cable with a TV plugged into the Asmax that I don't want to configure from scratch on the D-Link). How should I connect my new D-Link to work with the Asmax? Via the DSL port? Via one of the LAN ports (in which case I should probably change the purpose of that port in the config, I guess)? I tried connecting the D-Link both ways:
    LAN (Asmax) to LAN (D-Link)
    LAN (Asmax) to DSL (D-Link) (using an RJ11-RJ45 cable)
    I hope there is some setting in the D-Link's config that I overlooked. I haven't tried to see what's in the Asmax's config, but I guess I don't need to change anything there, since the Linksys worked just fine? The only difference I see is that the D-Link has an RJ11 DSL port as WAN, and the Linksys has RJ45 (called INTERNET by them) as its main WAN port.

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers, living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided with the Terms and Conditions of site usage to display on the site. These terms, which must be agreed to in order to access the site, include my (or any site visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of these clauses refer to things I have previously had to do as a legitimate part of my job and would expect to have to do again. When I've raised similar issues previously, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project, which would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to future issues with the business owner and is not something I'm necessarily happy with (the deliberate breaking of a legal contract). Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • What level/format of access should be given to a client to the issue tracking system?

    - by dukeofgaming
    So, I used to think it would be a good idea to give the customer access to the issue tracking system, but now I've seen that it creates less-than-ideal situations, such as:
    The customer judging progress solely on ticket count
    Developers being discouraged from adding issues, to avoid the customer thinking there is less progress
    The customer appointing people on their side to add issues who don't always do a good job (lots of duplicate issues, insufficient information to reproduce, and other things that distract people from doing their real job)
    However, I think customers should have access to some indicators or proof that progress is being made, as well as the right to report bugs. So, what would be the ideal solution to this situation, especially for getting out of (or improving) the first situation described?

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1. Following the instructions at http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2, I adapted the instructions for the appclient script on Linux by adding these lines to the appclient shell script:
        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH
    This, however, is not working. On running the application client (using GlassFish's Web Start), I get the error:
        WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList"
    Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink, or does anyone have ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) There Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place." I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.
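    (A small diagnostic sketch, not from the original post: verify that the class named in the MARSHAL warning is actually inside the jar added to APPCPATH; the path is the one from the question.)
        unzip -l "$AS_INSTALL/lib/eclipselink-1.1.1.jar" | grep 'indirection/IndirectList'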

    Read the article

  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (start OpenVPN) I receive the following:
        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
    Server IP: 161.53.X.X, internal network: 10.0.0.0/8. What do I need to do?
    Client configuration:
        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3
    Server configuration:
        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it to reach other private subnets behind
        # the server. Remember that these private subnets will also need to know to
        # route the OpenVPN client address pool (10.8.0.0/255.255.255.0) back to the
        # OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6
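    (A hedged sketch based on the error text above, not part of the original post: with a parameterless server-bridge on a tap setup, the pushed route has no gateway to use, so one option is a route-gateway directive in the client config or pushed from the server; the explicit address below is an assumption.)
        route-gateway dhcp        # take the gateway from the DHCP lease obtained over the bridge
        # or, with an explicit gateway address:
        # route-gateway 10.0.0.1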

    Read the article

  • Office 2010 Client - Should I go with 32 bit or 64 bit?


    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands (which, on a site like this, usually indicates that it has been hacked). Searching for the compromised site on Google, we got a bunch of results which link to hope.php with various query strings that seem to generate different groups of SEO terms, like so (the second result from the top is legitimate; all the rest are not).
    Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA
    And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5
    Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious as to its origin. Where could I go to find more info about something like this?
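    (A sketch of the decoding step described above, not from the original post; run it offline on a copy of the file, never on the live server, and paste the base64 payload from hope.php in place of the placeholder.)
        php -r 'echo gzuncompress(base64_decode("PASTE_THE_BASE64_PAYLOAD_HERE"));'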

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    At a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken.
    In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300 s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection.
    To our surprise, the idle but alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right?
    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node, and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
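    (A sketch of inspecting and setting the kernel knobs discussed above, not from the original post; the values are the ones quoted in the question. Note that these settings only take effect on sockets that have actually enabled SO_KEEPALIVE, which is worth confirming for the application in question.)
        sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
        sysctl -w net.ipv4.tcp_keepalive_time=300 net.ipv4.tcp_keepalive_intvl=180 net.ipv4.tcp_keepalive_probes=10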

    Read the article

  • It's here, it's here! OJEC 1.1 is here -- Oracle Java ME Embedded Client

    - by hinkmond
    Download it now! OJEC 1.1 has just been released and will run on your Linux/ARM, Linux/x86, or Linux/MIPS device. Get it before supplies run out! (Not really, it's software. How can it run out?) See: Download OJEC 1.1 now! Here's a quote: Oracle offers Java ME Embedded products to meet your specific Java Technology needs. Java ME is targeted for use on devices where a feature-rich Java platform is desirable, despite lacking the resources necessary to run the full Java SE Embedded environment. Try it on your Raspberry Pi! (If you can find where to buy a Raspberry Pi that is. Now that's something that can run out) Hinkmond

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is divided into single, dual, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching.
    Question 1: Does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or may only one client's request be processed at any one time, with that request divided up across up to 3 cores depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.
    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?
    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • How can I get CloudFlare to cache images whose paths are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON sent by a PHP script. The JavaScript gallery calls the PHP backend, and it replies with complex JSON in which images are specified as object fields. These fields do not necessarily include full URLs, merely a path to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I learn whether they are being cached or not, and how can I make sure that they will be cached rather than treated as some sort of dynamic content?
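    (A hedged way to check, not from the original question: if CloudFlare is handling a given image URL, its response typically carries a CF-Cache-Status header with values such as HIT or MISS. The URL below is a placeholder.)
        curl -sI 'http://www.example.com/gallery/images/photo1.jpg' | grep -iE 'cf-cache-status|cache-control'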

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP/AD-integrated DNS, each on its own subnet. All connect back to the datacenter through a 3 Mbps WAN. The ERP server runs in the datacenter and is accessed by clients at all sites.
    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites?
    Example: Client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.
    Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    DNS at site A returns 192.1.1.5
    Client 1 pulls the update from 192.1.1.5
    Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    DNS at site B returns 192.1.2.5
    Client 2 pulls the update from 192.1.2.5
    Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!

    Read the article

  • Obtaining Second WAN - Want to keep

    - by ndavis
    My business is growing so I've decided to purchase a second DSL line, to separate two departments to allow for more bandwidth usage. Previously I had a network printer that served both departments. With the addition of the second DSL line, I'm potentially going to have two separate LANs. How can I set it up so that the new LAN still has access to the Network Printer, but ensure that they are still using the new DSL line?

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administer everything related to their clients. Companies would buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database?
    Separate app code base and database per company
    One app code base but a separate database per company
    One app code base and one database
    The decision involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well), and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one code base, but for the database I really don't know. What do you think is the best option?

    Read the article

  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    This is an underpaid employee writing who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 MBit/s downstream and 1 MBit/s upstream), and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 GBit/s in both directions, if I'm not mistaken). This is what I've set up (note I'm not making use of the second DSL connection at all):
        ETH0 ETH1 [ SWITCH ]---[LINUX DEBIAN ROUTER]---[DSL MODEM 1]---[INTERNET] | | | PC1 | | PC2 | ... ...
    Then my boss asked me if it was somehow possible to get 32 MBit/s downstream and 2 MBit/s upstream. At that time I replied "no" without thinking too much about it. Now I've just had the following idea:
        ETH1 ETH0 ETH0 ,---[DSL MODEM 1 (NON-STATIC IP)]---, ,---, ETH0 [ SWITCH ]---[LINUX DEBIAN ROUTER] [INTERNET] [LINUX DEBIAN SERVER]---[INTERNET] | | | '---[ DSL MODEM 2 (STATIC IP) ]---' '---' PC1 | | ETH2 ETH0 PC2 | ... ...
    But I have absolutely no clue how to implement that. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the Internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!
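    (Not part of the original question, just a hedged sketch of the masquerading/balancing side on the Debian router; the interface names ppp0/ppp1 are assumptions for the two PPPoE uplinks. This balances connections across the two lines rather than bonding them, so a single download still tops out at 16 MBit/s; true aggregation would indeed need a tunnel terminated on the dedicated server, as in the second diagram.)
        # per-flow load balancing over both uplinks
        ip route add default scope global nexthop dev ppp0 weight 1 nexthop dev ppp1 weight 1
        # NAT the LAN out of whichever uplink a given flow is routed through
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o ppp1 -j MASQUERADE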

    Read the article

  • Configuring two subnets with two NICs: access from a NAS to the Internet

    - by archipestre
    I am having trouble configuring my NAS. I have a DSL router with WiFi (192.168.1.1) in my flatmate's room. In my room I have a server with two NICs:
    1) wlan0 (192.168.1.2), which connects to the DSL router via wireless
    2) em1 (192.168.0.1), which connects to the NAS (192.168.0.20) with a crossover cable
    I have Fedora 17 and I have enabled packet forwarding. My IP configuration is as follows:
        WLAN0  inet 192.168.0.1  netmask 255.255.255.0  broadcast 192.168.0.255
        EM1    inet 192.168.1.2  netmask 255.255.255.0  broadcast 192.168.1.255
    My routing table looks like:
        Destination   Gateway       Genmask         Flags  Metric  Ref  Use  Iface
        0.0.0.0       192.168.1.1   0.0.0.0         UG     0       0    0    wlan0
        192.168.0.0   0.0.0.0       255.255.255.0   U      0       0    0    em1
        192.168.1.0   0.0.0.0       255.255.255.0   U      0       0    0    wlan0
    I have enabled a static route on the DSL router:
        Status   Network Destination   Subnet Mask     Interface   Gateway
        Active   192.168.0.0           255.255.255.0   LAN         192.168.1.2
    From my server I can ping the DSL router and the NAS. From the NAS I can ping both NICs of the server. However, the NAS is unable to ping the DSL router or any address on the Internet. Any idea what is wrong? Thank you in advance.
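    (A hedged workaround sketch, not part of the original question: if the DSL router only NATs its own 192.168.1.0/24 subnet to the Internet, masquerading the NAS subnet behind the server's wireless address would hide 192.168.0.x sources from it entirely. The addresses and the wlan0 name are taken from the description above.)
        iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o wlan0 -j MASQUERADE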

    Read the article

  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and ...

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent.Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on
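    (A rough sketch of the exchange the article describes, not taken from it: an HTTP POST with a JSON body to an ASP.NET script service, which answers with JSON; the URL, method name, and payload are placeholders.)
        curl -s -X POST 'http://example.com/ProductService.asmx/GetProducts' \
             -H 'Content-Type: application/json' \
             -d '{"category":"books"}'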

    Read the article

  • Dynamic loaded libraries and shared global symbols

    - by phlipsy
    Since I observed some strange behavior of global variables in my dynamically loaded libraries, I wrote the following test. At first we need a statically linked library. The header test.hpp:
        #ifndef __BASE_HPP
        #define __BASE_HPP
        #include <iostream>
        class test {
        private:
          int value;
        public:
          test(int value) : value(value) {
            std::cout << "test::test(int) : value = " << value << std::endl;
          }
          ~test() {
            std::cout << "test::~test() : value = " << value << std::endl;
          }
          int get_value() const { return value; }
          void set_value(int new_value) { value = new_value; }
        };
        extern test global_test;
        #endif // __BASE_HPP
    and the source test.cpp:
        #include "base.hpp"
        test global_test = test(1);
    Then I wrote a dynamically loaded library, library.cpp:
        #include "base.hpp"
        extern "C" {
          test* get_global_test() {
            return &global_test;
          }
        }
    and a client program loading this library, client.cpp:
        #include <iostream>
        #include <dlfcn.h>
        #include "base.hpp"
        typedef test* get_global_test_t();
        int main() {
          global_test.set_value(2); // global_test from libbase.a
          std::cout << "client: " << global_test.get_value() << std::endl;
          void* handle = dlopen("./liblibrary.so", RTLD_LAZY);
          if (handle == NULL) {
            std::cout << dlerror() << std::endl;
            return 1;
          }
          get_global_test_t* get_global_test = NULL;
          void* func = dlsym(handle, "get_global_test");
          if (func == NULL) {
            std::cout << dlerror() << std::endl;
            return 1;
          }
          else
            get_global_test = reinterpret_cast<get_global_test_t*>(func);
          test* t = get_global_test(); // global_test from liblibrary.so
          std::cout << "liblibrary.so: " << t->get_value() << std::endl;
          std::cout << "client: " << global_test.get_value() << std::endl;
          dlclose(handle);
          return 0;
        }
    Now I compile the statically linked library with
        g++ -Wall -g -c base.cpp
        ar rcs libbase.a base.o
    the dynamically loaded library with
        g++ -Wall -g -fPIC -shared library.cpp libbase.a -o liblibrary.so
    and the client with
        g++ -Wall -g -ldl client.cpp libbase.a -o client
    Now I observe: the client and the dynamically loaded library possess different versions of the variable global_test. But in my project I'm using cmake. The build script looks like this:
        CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
        PROJECT(globaltest)
        ADD_LIBRARY(base STATIC base.cpp)
        ADD_LIBRARY(library MODULE library.cpp)
        TARGET_LINK_LIBRARIES(library base)
        ADD_EXECUTABLE(client client.cpp)
        TARGET_LINK_LIBRARIES(client base dl)
    Analyzing the created makefiles, I found that cmake builds the client with
        g++ -Wall -g -ldl -rdynamic client.cpp libbase.a -o client
    This ends up in a slightly different but fatal behavior: the global_test of the client and the dynamically loaded library are the same, but it will be destroyed two times at the end of the program. Am I using cmake in a wrong way? Is it possible that the client and the dynamically loaded library use the same global_test, but without this double destruction problem?
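    (Not part of the original question, just a hedged sketch of how to see the duplication: built this way, both the executable and the shared object carry their own copy of the symbol; the file names are the ones from the question.)
        nm -C client | grep global_test
        nm -C liblibrary.so | grep global_test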

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM server that I am rewriting to use C# .NET 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects. This has been done and is working successfully regarding the implementation of the interfaces, and all of the calls are correctly being made from the old clients to the new .NET objects. However, I'm having problems obtaining the identity of the calling user from the DCOM client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server:
        [DllImport("ole32.dll")]
        static extern int CoImpersonateClient();

        [DllImport("ole32.dll")]
        static extern int CoRevertToSelf();

        private string CallingUser
        {
            get
            {
                string sCallingUser = null;
                if (CoImpersonateClient() == 0)
                {
                    WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
                    if (wp != null)
                    {
                        WindowsIdentity wi = wp.Identity as WindowsIdentity;
                        if (wi != null && !string.IsNullOrEmpty(wi.Name))
                            sCallingUser = wi.Name;
                    }
                    if (CoRevertToSelf() != 0)
                        ReportWin32Error("CoRevertToSelf");
                }
                else
                    ReportWin32Error("CoImpersonateClient");
                return sCallingUser;
            }
        }

        private static void ReportWin32Error(string sFailingCall)
        {
            Win32Exception ex = new Win32Exception();
            Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
        }
    When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after 3 or 4 different users have successfully made calls (and it varies, so I can't be more specific), further users seem to be identified as users who made earlier calls. What I have noticed is that the first few users have their DCOM calls handled on their own threads (i.e. all calls from a particular client are handled by a single unique thread), and then subsequent users are handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread. To illustrate:
    User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
    User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
    User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
    User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)
    As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry. Is there something that I am doing wrong? Are there any caveats or restrictions on using impersonation in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do? All help would be greatly appreciated. Regards, Andrew

    Read the article
