Search Results

Search found 20761 results on 831 pages for 'chef client'.

Page 121/831 | < Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >

  • Apache mod_wsgi error: ImportError: No module named django.core.handlers.wsgi

    - by bigmac
    I am using Python 2.7 with mod_python 3.3.1 and mod_wsgi 3.3. I get an Internal Server Error and this stack trace in the Apache logs:
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] import django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] ImportError: No module named django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] mod_wsgi (pid=4463): Target WSGI script '/home/one/codebase/campman/wsgi_handler.py' cannot be loaded as Python module.
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] mod_wsgi (pid=4463): Exception occurred processing WSGI script '/home/one/codebase/campman/wsgi_handler.py'.
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] Traceback (most recent call last):
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] File "/home/one/codebase/campman/wsgi_handler.py", line 13, in <module>
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] import django.core.handlers.wsgi
        [Thu Apr 21 10:25:37 2011] [error] [client 83.244.243.242] ImportError: No module named django.core.handlers.wsgi
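
    A common cause of this particular ImportError is that the Python interpreter embedded by mod_wsgi cannot see Django on its sys.path (for example, Django was installed into a different Python or into a virtualenv that the script never adds to the path). Purely as a hedged sketch, with the paths below assumed from the log rather than known, a wsgi_handler.py for a Django release of that era usually looks something like this:
        # Hypothetical wsgi_handler.py sketch for an early-Django project under mod_wsgi.
        # The paths and settings module name are assumptions based on the log, not known values.
        import os
        import sys

        # Make sure the interpreter embedded by mod_wsgi can find the project and Django.
        sys.path.insert(0, '/home/one/codebase')            # parent of the project package
        sys.path.insert(0, '/home/one/codebase/campman')    # the project itself
        # If Django lives in a virtualenv, its site-packages must be added too, e.g.:
        # sys.path.insert(0, '/home/one/venv/lib/python2.7/site-packages')

        os.environ['DJANGO_SETTINGS_MODULE'] = 'campman.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()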

    Read the article

  • What privilege level is required on a Windows client workstation on an ActiveDomain to break file locks?

    - by Mike Burton
    I'm not sure if I should be asking this here or on StackOverflow, but here goes: I'm part of a team maintaining a document management application, and I'm trying to figure out Windows file locking permissions. We use a utility somebody downloaded years ago called psunlock to remotely close all locks on a file. We recently discovered that this does not work across different domains on our VPN. A little bit of digging led me to the Samba manual's discussion of file locking. I still don't really "get it", though. Does anyone have any insight into how locking and breaking locks on files works in a network context? My thinking is that privileges are required both on the file appliance and on the client workstations which hold the locks. Is that accurate? Can anyone give a more specific version? Ideally I'm looking for something along the lines of "A user must have privilege level X in order to break locks held from a client workstation." In practice I'd be happy with a link to a good white paper on the subject.

    Read the article

  • What's the best BitTorrent client for 12.10, excluding uTorrent?

    - by Brenton Horne
    The reason I've excluded uTorrent is that uTorrent server doesn't appear to work for me; the details of those difficulties are in the question "How do I install uTorrent?". Running it through Wine is an annoying workaround, because whenever I run it that way I have to manually relocate the downloaded files, as they are hard to find in the location uTorrent saves them to by default. I'd appreciate a reasonably detailed compare-and-contrast answer.

    Read the article

  • RESTful Java-based web services in JSON + HTML5 and JavaScript, no templates (JSP/JSF/FreeMarker), aka fat/thick client

    - by Ismail Marmoush
    I have this idea of building a website that serves JSON data through a RESTful services framework, and will not use any template engines like JSP/JSF/FreeMarker: just pure HTML5 and JavaScript libraries. What do you think of the pros and cons of such a design? For elaboration and brainstorming, a friend of mine raised the following concerns:
    - Sounds like GWT.
    - This way you won't have any control over your service API; for example, say you want to charge the user per request, how will you handle it?
    - How will you control your design and themes?
    - What about the first request the browser makes? Not easy with this.
    - All of the user's requests will come with an "Accept" header of "application/json", so how will you separate a browser from an abuser? This way all of your public APIs will be used abusively by third-party apps, and you won't be able to lock them down, since you won't be able to block a normal user's browser.
    - We won't use compiled HTML anyway, but maybe something like FreeMarker; in that case you won't expose any of your JSON resources to an unauthorized user, but you will expose all the HTML, since any browser can access it. All the well-known first-class services do this.
    - Can you send me links to what you've read?
    - Keep in mind DOM-based XSS; it will be a nightmare, of course, if what you say is applicable.
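
    Not the Java stack the question is about, but the pattern itself (a static HTML5/JS shell plus a JSON-only REST back end, with no server-side templates) is easy to sketch in any language. A minimal illustration in Python/Flask, with all routes and data invented for the example:
        # Illustrative sketch only: static front end + JSON-only API, no server-side templating.
        from flask import Flask, jsonify, send_from_directory

        app = Flask(__name__, static_folder="static")

        # The only HTML the server ever returns is a static shell; JavaScript renders the rest.
        @app.route("/")
        def index():
            return send_from_directory(app.static_folder, "index.html")

        # Everything else is data for the browser-side code to consume.
        @app.route("/api/items")
        def items():
            return jsonify({"items": [{"id": 1, "name": "example"}, {"id": 2, "name": "another"}]})

        if __name__ == "__main__":
            app.run(debug=True)
    The concerns listed above (per-request billing, telling a browser from an abusive API consumer) then have to be solved at the API layer, for example with authentication tokens, since HTML no longer mediates access to the data.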

    Read the article

  • Clients with multiple proxy and multithreading callbacks

    - by enzom83
    I created a sessionful web service using WCF, and in particular I used the NetTcpBinding binding. In addition to methods to initiate and terminate a session, other methods allow the client to send one or more tasks to be performed (the results are returned via callback, so the service is duplex), and they also allow the client to query the status of the service. Assuming you activate the same service on multiple endpoints, and assuming that the client knows these endpoints (for example, it could maintain a List of endpoints), the client should connect with one or more replicas of the same service. The client periodically updates the status of each service, so when it needs to perform a new task (the task is submitted by the user via the UI), it selects the service currently least loaded and sends the task to it. Periodically, the client also runs a maintenance procedure in order to disconnect from one or more overloaded services and to connect with new services. I created a client proxy using the svcutil tool. I would like each proxy to be usable simultaneously by different threads; for example, in addition to the thread that submits the tasks using a proxy, there are also the following two threads which act periodically: a thread that periodically sends a request to the service in order to obtain the updated state; a thread that periodically selects a proxy to close and instantiates a new proxy to replace the closed one. To achieve this, is it sufficient to create an array of proxies and manage their opening and closing in separate threads? I think I read that the proxy method calls are thread-safe, so I would not need to take a lock before requesting updates from the service. However, when the maintenance procedure (which runs on its own thread) decides to close a proxy, should I take a lock? Finally, each proxy is also associated with an object that implements the callback interface for the service: are the callbacks (invoked on the client) executed on different threads on the client? I would like to wrap the management of the proxies in one or more classes so that it can then be easily managed within a WPF application.
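
    The locking question at the end can be sketched independently of WCF: calls through a proxy that is itself thread-safe need no extra lock, while swapping a proxy out for a new one is a write that should be guarded so no thread calls through a half-closed object. A hedged, WCF-free illustration of that pattern in Python (all class and endpoint names are invented):
        # Illustration only (not WCF): guard proxy replacement with a lock while
        # worker threads keep calling through whatever proxy is current.
        import threading

        class FakeProxy:
            """Stand-in for a generated service proxy; assumed thread-safe for calls."""
            def __init__(self, endpoint):
                self.endpoint = endpoint
            def get_status(self):
                return "status from " + self.endpoint
            def close(self):
                pass

        class ProxyHolder:
            def __init__(self, endpoint):
                self._lock = threading.Lock()
                self._proxy = FakeProxy(endpoint)

            def call(self, fn):
                # Grab a reference under the lock, then call outside it so a slow
                # service call never blocks the maintenance thread.
                with self._lock:
                    proxy = self._proxy
                return fn(proxy)

            def replace(self, new_endpoint):
                # The maintenance thread swaps proxies atomically, then closes the old one.
                with self._lock:
                    old, self._proxy = self._proxy, FakeProxy(new_endpoint)
                old.close()

        holder = ProxyHolder("net.tcp://serverA/service")
        worker = threading.Thread(target=lambda: [holder.call(lambda p: p.get_status()) for _ in range(5)])
        worker.start()
        holder.replace("net.tcp://serverB/service")   # safe even while the worker is calling
        worker.join()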

    Read the article

  • Problem connecting to ISP server using xl2tpd as client (Ubuntu Server 13.04)

    - by Deon Pretorius
    I have followed guides found on Google and the Ubuntu support pages and can get the xl2tpd connection up, but only under the following conditions:
    1. The ADSL modem is configured and connected to the ISP, or
    2. The ADSL modem is in bridge mode and I already have an existing PPPoE connection established.
    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?
    /etc/ppp/options.l2tpd.axxess:
        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)
    /etc/ppp/pap-secrets:
        # *   password
        (name used for ppp connection as above)   *   (ppp password supplied by isp)
    /etc/xl2tpd/xl2tpd.conf:
        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets   ; * Where our challenge secrets are
        access control = yes                   ; * Refuse connections without IP match
        debug tunnel = yes
        [lac axxess]
        lns = 196.30.121.50                    ; * Who is our LNS?
        redial = yes                           ; * Redial if disconnected?
        redial timeout = 5                     ; * Wait n seconds between redials
        max redials = 5                        ; * Give up after n consecutive failures
        hidden bit = yes                       ; * Use hidden AVP's?
        length bit = yes                       ; * Use length bit in payload?
        require pap = yes                      ; * Require PAP auth. by peer
        require chap = no                      ; * Require CHAP auth. by peer
        refuse chap = yes                      ; * Refuse CHAP authentication
        require authentication = yes           ; * Require peer to authenticate
        name = BLA85003@axxess                 ; * Report this as our hostname
        ppp debug = yes                        ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess   ; * ppp options file for this lac
    /etc/xl2tpd/l2tp-secrets:
        # Secrets for authenticating l2tp tunnels
        # us       them        secret
        # *        marko       blah2
        # zeus     marko       blah
        # *        *           interop
        *          vzb_l2tp    (*** secret supplied by isp)
                   ^ ISP server host name
    Any help will be greatly appreciated.

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one a VM), with the host providing VMware Zimbra and the other (the VM) running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.
    Host: Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra
    - 8GB of RAM
    - On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
    - Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
    - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)
    Guest: Windows Small Business Server 2011
    - 4GB of RAM
    - Host-equivalent CPU allocation
    - VDI file for OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID
    For some reason, while running, the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*

    Read the article

  • What is the best way to deliver an OS (Linux) and a web application to a client?

    - by Fernando Costa
    After a year programming a web-based business management system, my idea has split into two different ways of delivering it. I will try to explain in the following lines. First, I will describe my environment:
    Web server: Apache, nginx
    Programming languages: PHP, Shell Script, JavaScript, SQL
    Database: MySQL
    Operating system: Linux, UNIX (all distros) (works on Windows if manually configured)
    Authentication server: FreeRADIUS
    First situation: I have my application running on the environment I just described; as my application is a SaaS app, I have my own server to run it all, and customers pay to use it as a service accessed through a web browser.
    Second situation: The same as before, but with one big difference: everything (the environment) is installed at the customer's site, so I need to encrypt/obfuscate all my code (including the PHP and shell scripts). I think this second situation is more difficult, but I would like to hear different points of view.

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided with the Terms and Conditions of site usage to display on the site. These terms, which must be agreed to in order to access the site, include my (or any site visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of these clauses refer to things I have had to do previously as a legitimate part of my job and would expect to have to do again. When I've raised similar issues previously, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project, which would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to possible future issues with the business owner and is not something I'm necessarily happy with (the deliberate breaking of a legal contract). Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • What level/format of access should be given to a client to the issue tracking system?

    - by dukeofgaming
    So, I used to think it would be a good idea to give the customer access to the issue tracking system, but now I've seen that it creates less-than-ideal situations, like:
    - The customer judging progress solely on ticket count
    - Developers being told not to add issues, to avoid the customer thinking there is less progress
    - The customer appointing people on their side to add issues who don't always do a good job (lots of duplicate issues, insufficient information to reproduce, and other things that distract people from doing their real jobs)
    However, I think customers should have access to some indicators or proof that progress is being made, as well as a right to report bugs. So, what would be the ideal solution to this situation? Especially, how do we get out of, or improve, the first situation described?

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions on: http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2 I adapted the instructions for the appclient script on Linux by adding the lines: APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar export APPCPATH to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's Web Start), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList" Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: https://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 where Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place". I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (start OpenVPN) I receive the following:
        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
    Server IP: 161.53.X.X; internal network: 10.0.0.0/8. What do I need to do?
    Client configuration:
        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3
    Server configuration:
        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6

    Read the article

  • Office 2010 Client – Should I go with 32 bit or 64 bit?


    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands (which, on a site like this, usually indicates that it has been hacked). Searching for the compromised site on Google, we got a bunch of results that link to hope.php with various query strings and that seem to generate different groups of SEO terms; of those results, the second from the top is legitimate and all the rest are not. Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5 Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious as to its origin. Where could I go to find more info about something like this?
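
    PHP's gzuncompress() is plain zlib decompression, so a payload like this can usually be unpacked for inspection without ever executing it. A small sketch, where payload_b64 is a placeholder for the base64 string copied out of hope.php:
        # Unpack an eval(gzuncompress(base64_decode('...'))) payload without running it.
        import base64
        import zlib

        payload_b64 = "eJw..."   # hypothetical placeholder: paste the string from the file here

        decoded = base64.b64decode(payload_b64)
        plaintext = zlib.decompress(decoded)   # PHP gzuncompress() is zlib decompression
        print(plaintext.decode("utf-8", errors="replace"))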

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken. In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300,intvl=180,probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes, the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here? If the keepalive settings on the server are time=300,intvl=180,probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right? One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection so we think it affects all TCP connections.
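
    For reference, TCP keepalive can also be enabled per socket by the application itself rather than system-wide; the kernel's tcp_keepalive_* settings only apply to sockets that have SO_KEEPALIVE turned on, which is one reason a capture can show no keepalives even though the sysctls are set. A minimal Linux sketch in Python, mirroring the time=300, intvl=180, probes=10 settings discussed above (hostname is a placeholder):
        # Per-socket TCP keepalive on Linux, mirroring time=300, intvl=180, probes=10.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable keepalive on this socket
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)   # idle seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 180)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # unanswered probes before the connection drops
        sock.connect(("teradata-node.example.com", 1025))               # placeholder hostname; port from the question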

    Read the article

  • when to upgrade server to include more cores, versus more processors, versus additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad- (and so on) processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return the result to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching.
    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores, depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.
    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?
    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?
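
    The difference between "one request spread across cores" and "one core per request" is easy to see empirically: a single-threaded CPU-bound job uses one core no matter how many exist, while a pool of worker processes uses roughly one core per worker. A toy sketch of the second model (Python, invented numbers, purely illustrative):
        # Toy illustration: three CPU-bound "math crunching" requests handled by three worker processes.
        import multiprocessing
        import time

        def crunch(n):
            # Stand-in for one client's math-crunching request (pure CPU work).
            total = 0
            for i in range(n):
                total += i * i
            return total

        if __name__ == "__main__":
            requests = [5_000_000] * 3                       # three simultaneous client requests
            start = time.time()
            with multiprocessing.Pool(processes=3) as pool:  # roughly one worker per spare core
                results = pool.map(crunch, requests)
            print("3 requests on 3 workers took %.2f s" % (time.time() - start))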

    Read the article

  • How to cache on CloudFlare images whose URLs are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets a list of images from JSON sent by a PHP script. The JavaScript gallery calls the PHP backend, which replies with complex JSON in which the images are specified as object fields. These fields don't necessarily include full URLs, merely a path to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I learn whether they are cached or not, and make sure that they will be cached and not treated as some sort of dynamic content?
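
    CloudFlare reports its caching decision in the CF-Cache-Status response header (values like HIT, MISS, or EXPIRED; the header may be absent or marked as dynamic when the response is not being cached), so one way to check is to request an image URL directly and inspect the headers. A small sketch using the Python requests library, with a placeholder URL standing in for a real image path from the JSON:
        # Check whether CloudFlare is caching a given image URL (placeholder URL below).
        import requests

        url = "https://example.com/galleries/2013/photo1.jpg"
        resp = requests.get(url)

        print("CF-Cache-Status:", resp.headers.get("CF-Cache-Status"))  # e.g. HIT / MISS / EXPIRED
        print("Cache-Control:  ", resp.headers.get("Cache-Control"))    # origin caching headers CloudFlare respects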

    Read the article

  • It's here, it's here! OJEC 1.1 is here -- Oracle Java ME Embedded Client

    - by hinkmond
    Download it now! OJEC 1.1 has just been released and will run on your Linux/ARM, Linux/x86, or Linux/MIPS device. Get it before supplies run out! (Not really, it's software. How can it run out?) See: Download OJEC 1.1 now! Here's a quote: Oracle offers Java ME Embedded products to meet your specific Java Technology needs. Java ME is targeted for use on devices where a feature-rich Java platform is desirable, despite lacking the resources necessary to run the full Java SE Embedded environment. Try it on your Raspberry Pi! (If you can find where to buy a Raspberry Pi that is. Now that's something that can run out) Hinkmond

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:
    - 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet
    - All connect back to the datacenter through a 3 Mbps WAN
    - ERP server running in the datacenter, accessed by clients at all sites
    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example (sketched below): client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.
    - Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    - DNS at site A returns 192.1.1.5
    - Client 1 pulls the update from 192.1.1.5
    - Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    - DNS at site B returns 192.1.2.5
    - Client 2 pulls the update from 192.1.2.5
    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
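
    The idea being asked about (the same hostname resolving to a different, site-local address at each site) means the update script only ever resolves one name and pulls from whatever the local DNS returns. Purely as an illustration, with the hostname and share path made up:
        # Illustration of the split-DNS idea: resolve one name, pull from whatever the local DNS returns.
        import shutil
        import socket

        UPDATE_HOST = "erpupdate"   # hypothetical name; each site's DNS would return its own DC for it

        def pull_update(destination="C:/ERP/client"):
            server_ip = socket.gethostbyname(UPDATE_HOST)    # site A -> 192.1.1.5, site B -> 192.1.2.5
            source = rf"\\{server_ip}\erpupdate\client"      # hypothetical update share on the local DC
            print("Pulling update from", source)
            shutil.copytree(source, destination, dirs_exist_ok=True)

        if __name__ == "__main__":
            pull_update()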

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administer everything related to their clients. Companies would buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database?
    1. Separate app codebase and database per company
    2. One app codebase but a separate database per company
    3. One app codebase and one database
    The decision involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well), and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one codebase, but for the database I really don't know. What do you think is the best option?
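
    For the third option (one codebase, one shared database), the usual safeguard against company X seeing company Y's data is to put a tenant id on every table and scope every query by it; in Rails this is typically expressed with associations or default scopes. A minimal, framework-free sketch of the idea (SQLite and invented table names, purely for self-containment):
        # Multi-tenant scoping sketch for the shared-database option:
        # every row carries company_id, and every query filters on it.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, company_id INTEGER NOT NULL, name TEXT)")
        conn.executemany(
            "INSERT INTO clients (company_id, name) VALUES (?, ?)",
            [(1, "Acme contact"), (1, "Acme lead"), (2, "Globex contact")],
        )

        def clients_for_company(company_id):
            # The company_id filter is the entire isolation story in this model:
            # apply it on every read and write, derived from the session, never from user input.
            return conn.execute(
                "SELECT id, name FROM clients WHERE company_id = ?", (company_id,)
            ).fetchall()

        print(clients_for_company(1))   # only company 1's rows
        print(clients_for_company(2))   # only company 2's rows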

    Read the article

  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and ...

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent.Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on

    Read the article

  • Dynamically loaded libraries and shared global symbols

    - by phlipsy
    Since I observed some strange behavior of global variables in my dynamically loaded libraries, I wrote the following test. First we need a statically linked library. The header base.hpp:
        #ifndef __BASE_HPP
        #define __BASE_HPP
        #include <iostream>
        class test {
        private:
          int value;
        public:
          test(int value) : value(value) { std::cout << "test::test(int) : value = " << value << std::endl; }
          ~test() { std::cout << "test::~test() : value = " << value << std::endl; }
          int get_value() const { return value; }
          void set_value(int new_value) { value = new_value; }
        };
        extern test global_test;
        #endif // __BASE_HPP
    and the source base.cpp:
        #include "base.hpp"
        test global_test = test(1);
    Then I wrote a dynamically loaded library, library.cpp:
        #include "base.hpp"
        extern "C" {
          test* get_global_test() { return &global_test; }
        }
    and a client program loading this library, client.cpp:
        #include <iostream>
        #include <dlfcn.h>
        #include "base.hpp"
        typedef test* get_global_test_t();
        int main() {
          global_test.set_value(2); // global_test from libbase.a
          std::cout << "client: " << global_test.get_value() << std::endl;
          void* handle = dlopen("./liblibrary.so", RTLD_LAZY);
          if (handle == NULL) { std::cout << dlerror() << std::endl; return 1; }
          get_global_test_t* get_global_test = NULL;
          void* func = dlsym(handle, "get_global_test");
          if (func == NULL) { std::cout << dlerror() << std::endl; return 1; }
          else get_global_test = reinterpret_cast<get_global_test_t*>(func);
          test* t = get_global_test(); // global_test from liblibrary.so
          std::cout << "liblibrary.so: " << t->get_value() << std::endl;
          std::cout << "client: " << global_test.get_value() << std::endl;
          dlclose(handle);
          return 0;
        }
    Now I compile the statically linked library with
        g++ -Wall -g -c base.cpp
        ar rcs libbase.a base.o
    the dynamically loaded library with
        g++ -Wall -g -fPIC -shared library.cpp libbase.a -o liblibrary.so
    and the client with
        g++ -Wall -g -ldl client.cpp libbase.a -o client
    Now I observe: the client and the dynamically loaded library possess different versions of the variable global_test. But in my project I'm using CMake. The build script looks like this:
        CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
        PROJECT(globaltest)
        ADD_LIBRARY(base STATIC base.cpp)
        ADD_LIBRARY(library MODULE library.cpp)
        TARGET_LINK_LIBRARIES(library base)
        ADD_EXECUTABLE(client client.cpp)
        TARGET_LINK_LIBRARIES(client base dl)
    Analyzing the generated makefiles, I found that CMake builds the client with
        g++ -Wall -g -ldl -rdynamic client.cpp libbase.a -o client
    This ends up in a slightly different but fatal behavior: the global_test of the client and the dynamically loaded library are the same, but it is destroyed twice at the end of the program. Am I using CMake in a wrong way? Is it possible for the client and the dynamically loaded library to use the same global_test but without this double-destruction problem?

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi, I have an old Win32 C++ DCOM Server that I am rewriting to use C# .NET 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects. This has been done and is working successfully as far as the interfaces are concerned, and all of the calls are correctly being made from the old clients to the new .NET objects. However, I'm having problems obtaining the identity of the calling user from the DCOM client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server:
        [DllImport("ole32.dll")]
        static extern int CoImpersonateClient();

        [DllImport("ole32.dll")]
        static extern int CoRevertToSelf();

        private string CallingUser
        {
            get
            {
                string sCallingUser = null;
                if (CoImpersonateClient() == 0)
                {
                    WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
                    if (wp != null)
                    {
                        WindowsIdentity wi = wp.Identity as WindowsIdentity;
                        if (wi != null && !string.IsNullOrEmpty(wi.Name))
                            sCallingUser = wi.Name;
                    }
                    if (CoRevertToSelf() != 0)
                        ReportWin32Error("CoRevertToSelf");
                }
                else
                    ReportWin32Error("CoImpersonateClient");
                return sCallingUser;
            }
        }

        private static void ReportWin32Error(string sFailingCall)
        {
            Win32Exception ex = new Win32Exception();
            Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
        }
    When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after 3 or 4 different users have successfully made calls (and it varies, so I can't be more specific), further users seem to be identified as users who had made earlier calls. What I have noticed is that the first few users have their DCOM calls handled on their own threads (i.e. all calls from a particular client are handled by a single unique thread), and then subsequent users are being handled by the same threads as the earlier users, and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread. To illustrate:
    - User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
    - User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
    - User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
    - User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)
    As you can see in this illustration, calls from clients Harry and Bob are being handled on thread 3, and the server is identifying the calling client as Harry. Is there something that I am doing wrong? Are there any caveats or restrictions on using impersonation in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do? All help would be greatly appreciated. Regards, Andrew

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >