Search Results

Search found 22173 results on 887 pages for 'concerned client'.

  • Problem connecting to ISP server using xl2tpd as client (Ubuntu Server 13.04)

    - by Deon Pretorius
    I have followed guides found on Google and Ubuntu support pages and can get an xl2tpd connection up, but only under the following conditions:

    1. The ADSL modem is configured and connected to the ISP, or
    2. The ADSL modem is in bridge mode and I have an existing PPPoE connection established.

    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, so the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)

    /etc/ppp/pap-secrets:

        # *  password
        (name used for ppp connection as above) * (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets  ; * Where our challenge secrets are
        access control = yes                  ; * Refuse connections without IP match
        debug tunnel = yes

        [lac axxess]
        lns = 196.30.121.50           ; * Who is our LNS?
        redial = yes                  ; * Redial if disconnected?
        redial timeout = 5            ; * Wait n seconds between redials
        max redials = 5               ; * Give up after n consecutive failures
        hidden bit = yes              ; * Use hidden AVP's?
        length bit = yes              ; * Use length bit in payload?
        require pap = yes             ; * Require PAP auth. by peer
        require chap = no             ; * Require CHAP auth. by peer
        refuse chap = yes             ; * Refuse CHAP authentication
        require authentication = yes  ; * Require peer to authenticate
        name = BLA85003@axxess        ; * Report this as our hostname
        ppp debug = yes               ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess  ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

        # Secrets for authenticating l2tp tunnels
        # us      them      secret
        # *       marko     blah2
        # zeus    marko     blah
        # *       *         interop
        *         vzb_l2tp  (*** secret supplied by isp)
                  ^ isp server host name

    Any help will be greatly appreciated.

    Read the article

  • My client's solution of a Windows SBS 2011 VM in VirtualBox on an Ubuntu host is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.

    Host:

    - Ubuntu Desktop Edition 10.04 (I know -- again, not my choice), running VMware Zimbra
    - 8 GB of RAM
    - On-board RAID1 of two 320 GB Seagate Barracuda drives for the OS
    - Software RAID5 of four 500 GB WD Caviar Black drives on mdadm for bulk storage (sorry, I don't know the model #)
    - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (I don't suspect this as the bottleneck)

    Guest:

    - Windows Small Business Server 2011
    - 4 GB of RAM
    - Host-equivalent CPU allocation
    - VDI file for the OS hosted on the on-board RAID; VDI file for storage hosted on the on-board RAID

    For some reason the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*

    Read the article

  • What is the best way to deliver the OS (Linux) and web application to a client?

    - by Fernando Costa
    After a year programming a web-based business management system, my idea has split into two different ways of delivering what I'm doing. I will try to explain in the following lines.

    First, my environment:

    - Web server: Apache, nginx
    - Programming languages: PHP, shell script, JavaScript, SQL
    - Database: MySQL
    - Operating system: Linux, UNIX (all distros; works on Windows if configured manually)
    - Authentication server: FreeRADIUS

    First situation: I have my application running in the environment I just described. As my application is a SaaS app, I run it all on my own server and customers pay to use it as a service accessed through a web browser.

    Second situation: The same as before, but with one big difference -- the whole environment is installed at the customer's site, so I need to encrypt or obfuscate all my code (including the PHP and shell scripts). I think this situation is the more difficult one, but I would like to hear it from different points of view.

    Read the article

  • What level/format of access should a client be given to the issue tracking system?

    - by dukeofgaming
    So, I used to think it would be a good idea to give the customer access to the issue tracking system, but now I've seen that it creates less-than-ideal situations, like:

    - The customer judging progress solely on ticket count
    - Developers avoiding adding issues so the customer doesn't get the impression there is less progress
    - The customer appointing people on their side to add issues who don't always do a good job (lots of duplicate issues, insufficient information to reproduce, and other things that distract people from doing their real job)

    However, I think customers should have access to some indicators or proof that progress is being made, as well as a right to report bugs. So what would be the ideal solution to this situation -- especially getting out of, or improving, the first situation described?

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers, living or dead, blah blah blah, etc. [Allegedly]

    As part of a website I've built, I've now been provided with the Terms and Conditions of site usage to display on the site. These terms -- which must be agreed to in order to access the site -- require my (or any visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of them cover things I have previously had to do as a legitimate part of my job and would expect to have to do again. When I've raised similar issues before, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do.

    So, what do I do? Abiding by the terms would mean I could no longer work on the project, which would cause issues with my employer and with the owner of the business the site is being created for. Ignoring them could lead to future issues with the business owner, and the deliberate breaking of a legal contract is not something I'm comfortable with. Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions at http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2

    I adapted the instructions for the appclient script on Linux by adding these lines to the appclient shell script:

        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH

    This, however, is not working. On running the application client (using GlassFish's Java Web Start), I get the error:

        WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList"

    Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink, or does anyone have ideas on what I might be doing wrong? I found this bug report: https://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 in which Tim Quinn (tjquinn) said:

        App client container support for persistence is not yet in place

    I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar.

    Thanks in advance,
    Nick

    Read the article

  • OpenVPN route missing

    - by dajuric
    I can connect to an OpenVPN server from Windows without any problems, but when I try to connect from Ubuntu 12.04 (i.e. start OpenVPN), I receive the following:

        OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options

    Server IP: 161.53.X.X
    Internal network: 10.0.0.0/8

    What do I need to do?

    Client configuration:

        client
        dev tap
        proto udp
        remote 161.53.X.X 1194
        resolv-retry infinite
        nobind
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server configuration:

        local 161.53.X.X
        port 1194
        proto udp
        dev tap
        dev-node OpenVPN
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        # DHCP leases addresses to clients
        server-bridge
        # Push routes to the client to allow it
        # to reach other private subnets behind
        # the server. Remember that these
        # private subnets will also need
        # to know to route the OpenVPN client
        # address pool (10.8.0.0/255.255.255.0)
        # back to the OpenVPN server.
        push "route 10.0.0.1 255.255.0.0"
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        verb 6

    Read the article

  • Office 2010 Client - Should I go with 32-bit or 64-bit?


    Read the article

  • Client website compromised, found a strange .php file. Any ideas?

    - by Kevin Strong
    I do support work for a web development company, and today I found a suspicious file called "hope.php" on the website of one of our clients. It contained several eval(gzuncompress(base64_decode('....'))) commands, which on a site like this usually indicates that it has been hacked. Searching Google for the compromised site, we got a bunch of results linking to hope.php with various query strings that seem to generate different groups of SEO terms (of the results returned, only the second from the top is legitimate; all the rest are not).

    Here is the source of "hope.php": http://pastebin.com/7Ss4NjfA

    And here is the decoded version I got by replacing the eval()s with echo(): http://pastebin.com/m31Ys7q5

    Any ideas where this came from or what it is doing? I've of course already removed the file from the server, but I've never seen code like this, so I'm rather curious as to its origin. Where could I go to find more info about something like this?
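
    For anyone unpacking a similar payload, the layers can also be decoded offline instead of editing the PHP: PHP's base64_decode() and gzuncompress() correspond to Python's base64 and zlib modules. A minimal sketch (the base64 blob itself is elided here, as it is in the original file):

        import base64
        import zlib

        def decode_layer(blob: str) -> str:
            # PHP's gzuncompress() is plain zlib decompression, so each
            # eval(gzuncompress(base64_decode('...'))) layer can be unpacked
            # without ever executing the attacker's code.
            return zlib.decompress(base64.b64decode(blob)).decode("utf-8", "replace")

        # blob = "...."  # paste the base64 string from hope.php here
        # print(decode_layer(blob))  # repeat on the output until plain PHP appears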

    Read the article

  • TCP keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    At a customer site, the network team added a firewall between the client and the server, and idle connections are now disconnected after about 40 minutes of idle time. The network people say the firewall doesn't have any idle-connection timeout, but the fact is that the idle connections get broken.

    To get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, using tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300 s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, the server would abort the connection after 10 probes.

    To our surprise, the idle-but-alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server. What could be happening here?

    If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server sends keepalive probes every 300 seconds and leaves the connection alone; if the client is dead, it sends one probe after 300 seconds, then 9 more probes every 180 seconds, before killing the connection. Am I right?

    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved.

    The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
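
    One detail worth checking, not stated in the original post: the tcp_keepalive_* sysctls only take effect on sockets that have explicitly enabled SO_KEEPALIVE, which would be consistent with Wireshark seeing no probes at all. A minimal Python sketch for testing the behavior from a throwaway client, overriding the three settings per socket (the hostname is a placeholder; the port is the Teradata port from the question):

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # required, or no probes are ever sent
        # Per-socket overrides of the system-wide sysctls (Linux only):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)   # seconds idle before the first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 180)  # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # failed probes before the connection drops
        s.connect(("server.example.com", 1025))  # then leave the socket idle and watch Wireshark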

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is divided into single-, dual-, quad-, etc., processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching.

    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 on each of the 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, but that request divided across up to 3 cores, depending on whether the math-crunching process can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here (see the toy model sketched after these questions).

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should I keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of client requests processed per time interval? Is the choice limited by RAM, for example (when you need more RAM than the box can handle, it's time to add another server), or by something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?
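
    A hedged illustration, not from the original post: if each math-crunching worker is single-threaded, the first interpretation of Question 1 holds -- roughly one request in flight per core at a time. A toy model, assuming 3 worker processes for the 3 non-Apache cores (the one-second sleep merely stands in for the CPU-bound math):

        import multiprocessing as mp
        import os
        import time

        def crunch(request_id):
            # Stand-in for the CPU-bound middle tier: one request occupies one worker.
            time.sleep(1)
            return request_id, os.getpid()

        if __name__ == "__main__":
            start = time.time()
            with mp.Pool(processes=3) as pool:        # the 3 cores left after Apache
                results = pool.map(crunch, range(6))  # 6 requests, handled 3 at a time
            print(results)
            print(f"elapsed: {time.time() - start:.1f}s")  # ~2s rather than 6s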

    Read the article

  • How can I get CloudFlare to cache images whose paths are served to the client as JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON sent by a PHP script. The JavaScript gallery calls the PHP back end, which replies with a complex JSON document in which images are specified as object fields. These fields don't necessarily include full URLs -- merely a path to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I learn whether they are cached or not, and how can I make sure they are cached rather than treated as some sort of dynamic content?
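
    One way to check, not from the original post: CloudFlare reports its caching decision in a CF-Cache-Status response header (HIT, MISS, EXPIRED, or DYNAMIC for content it declines to cache), so a HEAD request against an image URL shows whether it is being served from cache. A small sketch with a placeholder URL:

        import urllib.request

        def cache_status(url):
            # CloudFlare adds CF-Cache-Status to responses it has proxied;
            # DYNAMIC (or no header at all) means the object is not being cached.
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                return resp.headers.get("CF-Cache-Status"), resp.headers.get("Cache-Control")

        print(cache_status("https://example.com/galleries/img/001.jpg"))  # placeholder URL

    If the status comes back DYNAMIC, sending an explicit Cache-Control header from the PHP side, or adding a CloudFlare page rule that matches the image paths, are the usual levers.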

    Read the article

  • It's here, it's here! OJEC 1.1 is here -- Oracle Java ME Embedded Client

    - by hinkmond
    Download it now! OJEC 1.1 has just been released and will run on your Linux/ARM, Linux/x86, or Linux/MIPS device. Get it before supplies run out! (Not really, it's software. How can it run out?) See: Download OJEC 1.1 now! Here's a quote:

        Oracle offers Java ME Embedded products to meet your specific Java Technology needs. Java ME is targeted for use on devices where a feature-rich Java platform is desirable, despite lacking the resources necessary to run the full Java SE Embedded environment.

    Try it on your Raspberry Pi! (If you can find anywhere to buy a Raspberry Pi, that is. Now that's something that can run out.)

    Hinkmond

    Read the article

  • Pulling application updates from the closest server?

    - by Mike Morris
    Setup:

    - 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet; all connect back to the datacenter through a 3 Mbps WAN
    - ERP server running in the datacenter, accessed by clients at all sites

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works.

    The client has an auto-update feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not patches but a full version of the client, even for minor upgrades (bad design). As of the most recent patch, you can configure the clients to pull from a different server -- unfortunately, the same one for all clients.

    Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites do? Example:

    Client 1 is at site A; client 2 is at site B. Each runs the software, and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

    1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    2. DNS at site A returns 192.1.1.5
    3. Client 1 pulls the update from 192.1.1.5
    4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    5. DNS at site B returns 192.1.2.5
    6. Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation -- I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
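
    For what it's worth, the pattern the example describes is usually called split-horizon DNS: since each site's DC already runs DNS, each one can host its own record for the update name, answering with that site's IP. A quick client-side check that each site resolves the name differently (ERPUPDATE and the addresses are the hypothetical values from the question):

        import socket

        # Resolves ERPUPDATE against this machine's configured (site-local) DNS
        # server; run at each site to confirm the answers differ as intended.
        print(socket.gethostbyname("ERPUPDATE"))  # expect 192.1.1.5 at site A, 192.1.2.5 at site B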

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administrate all things related to their clients. Companies buy a subscription, and their users access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database?

    1. Separate app code base and database per company
    2. One app code base but a separate database per company
    3. One app code base and one database

    The decision involves security (e.g. a user from company X should not see any data from company Y), performance (supposing it becomes successful, it should perform well), and scalability (again, if successful, it should perform well while also being easy for me to manage all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the single code base, but for the database I really don't know. What do you think is the best option?
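
    As a concrete illustration of what option 3 demands (sketched here in Python/SQLite rather than Rails, so the table and column names are made up): every tenant-owned row carries a company key, and every query must filter on it -- the security risk of the shared database is precisely any query that forgets to.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE clients (id INTEGER, company_id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO clients VALUES (?, ?, ?)",
                         [(1, 10, "Acme contact"), (2, 20, "Globex contact")])

        def clients_for(company_id):
            # The tenant filter is mandatory on every query; in Rails this is
            # typically enforced with a scope or association off the current
            # company rather than written by hand each time.
            return conn.execute("SELECT id, name FROM clients WHERE company_id = ?",
                                (company_id,)).fetchall()

        print(clients_for(10))  # [(1, 'Acme contact')] -- company 20's data stays invisible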

    Read the article

  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and ...

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code, typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent.

    Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on

    Read the article

  • Dynamically loaded libraries and shared global symbols

    - by phlipsy
    Since I observed some strange behavior of global variables in my dynamically loaded libraries, I wrote the following test.

    First we need a statically linked library. The header base.hpp:

        #ifndef __BASE_HPP
        #define __BASE_HPP

        #include <iostream>

        class test {
        private:
            int value;
        public:
            test(int value) : value(value) {
                std::cout << "test::test(int) : value = " << value << std::endl;
            }

            ~test() {
                std::cout << "test::~test() : value = " << value << std::endl;
            }

            int get_value() const { return value; }
            void set_value(int new_value) { value = new_value; }
        };

        extern test global_test;

        #endif // __BASE_HPP

    and the source base.cpp:

        #include "base.hpp"

        test global_test = test(1);

    Then I wrote a dynamically loaded library, library.cpp:

        #include "base.hpp"

        extern "C" {
            test* get_global_test() {
                return &global_test;
            }
        }

    and a client program loading this library, client.cpp:

        #include <iostream>
        #include <dlfcn.h>
        #include "base.hpp"

        typedef test* get_global_test_t();

        int main() {
            global_test.set_value(2); // global_test from libbase.a
            std::cout << "client: " << global_test.get_value() << std::endl;

            void* handle = dlopen("./liblibrary.so", RTLD_LAZY);
            if (handle == NULL) {
                std::cout << dlerror() << std::endl;
                return 1;
            }

            get_global_test_t* get_global_test = NULL;
            void* func = dlsym(handle, "get_global_test");
            if (func == NULL) {
                std::cout << dlerror() << std::endl;
                return 1;
            } else
                get_global_test = reinterpret_cast<get_global_test_t*>(func);

            test* t = get_global_test(); // global_test from liblibrary.so
            std::cout << "liblibrary.so: " << t->get_value() << std::endl;
            std::cout << "client: " << global_test.get_value() << std::endl;

            dlclose(handle);
            return 0;
        }

    Now I compile the statically linked library with

        g++ -Wall -g -c base.cpp
        ar rcs libbase.a base.o

    the dynamically loaded library with

        g++ -Wall -g -fPIC -shared library.cpp libbase.a -o liblibrary.so

    and the client with

        g++ -Wall -g -ldl client.cpp libbase.a -o client

    Now I observe: the client and the dynamically loaded library possess different copies of the variable global_test.

    But in my project I'm using CMake. The build script looks like this:

        CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
        PROJECT(globaltest)

        ADD_LIBRARY(base STATIC base.cpp)

        ADD_LIBRARY(library MODULE library.cpp)
        TARGET_LINK_LIBRARIES(library base)

        ADD_EXECUTABLE(client client.cpp)
        TARGET_LINK_LIBRARIES(client base dl)

    Analyzing the created makefiles, I found that CMake builds the client with

        g++ -Wall -g -ldl -rdynamic client.cpp libbase.a -o client

    This ends up in a slightly different but fatal behavior: the global_test of the client and of the dynamically loaded library are the same, but it is destroyed twice at the end of the program.

    Am I using CMake in a wrong way? Is it possible for the client and the dynamically loaded library to use the same global_test, but without this double-destruction problem?

    Read the article

  • How can I reliably check client identity whilst making DCOM calls to a C# .Net 3.5 Server?

    - by pionium
    Hi,

    I have an old Win32 C++ DCOM server that I am rewriting in C# .NET 3.5. The client applications sit on remote XP machines and are also written in C++. These clients must remain unchanged, hence I must implement the interfaces on new .NET objects. This has been done and is working successfully as far as the interfaces are concerned: all of the calls are correctly being made from the old clients to the new .NET objects.

    However, I'm having problems obtaining the identity of the calling user from the DCOM client. In order to try to identify the user who instigated the DCOM call, I have the following code on the server:

        [DllImport("ole32.dll")]
        static extern int CoImpersonateClient();

        [DllImport("ole32.dll")]
        static extern int CoRevertToSelf();

        private string CallingUser
        {
            get
            {
                string sCallingUser = null;
                if (CoImpersonateClient() == 0)
                {
                    WindowsPrincipal wp = System.Threading.Thread.CurrentPrincipal as WindowsPrincipal;
                    if (wp != null)
                    {
                        WindowsIdentity wi = wp.Identity as WindowsIdentity;
                        if (wi != null && !string.IsNullOrEmpty(wi.Name))
                            sCallingUser = wi.Name;
                    }
                    if (CoRevertToSelf() != 0)
                        ReportWin32Error("CoRevertToSelf");
                }
                else
                    ReportWin32Error("CoImpersonateClient");
                return sCallingUser;
            }
        }

        private static void ReportWin32Error(string sFailingCall)
        {
            Win32Exception ex = new Win32Exception();
            Logger.Write("Call to " + sFailingCall + " FAILED: " + ex.Message);
        }

    When I get the CallingUser property, the value returned the first few times is correct and the correct user name is identified. However, after 3 or 4 different users have successfully made calls (it varies, so I can't be more specific), further users are identified as users who made earlier calls.

    What I have noticed is that the first few users have their DCOM calls handled on their own threads (i.e. all calls from a particular client are handled by a single unique thread), and then subsequent users are handled by the same threads as the earlier users -- and after the call to CoImpersonateClient(), the CurrentPrincipal matches that of the initial user of that thread.

    To illustrate:

    - User Tom makes DCOM calls which are handled by thread 1 (CurrentPrincipal correctly identifies Tom)
    - User Dick makes DCOM calls which are handled by thread 2 (CurrentPrincipal correctly identifies Dick)
    - User Harry makes DCOM calls which are handled by thread 3 (CurrentPrincipal correctly identifies Harry)
    - User Bob makes DCOM calls which are handled by thread 3 (CurrentPrincipal incorrectly identifies him as Harry)

    As you can see, calls from clients Harry and Bob are both handled on thread 3, and the server identifies the calling client as Harry.

    Is there something that I am doing wrong? Are there any caveats or restrictions on using impersonation in this way? Is there a better or different way that I can RELIABLY achieve what I am trying to do?

    All help would be greatly appreciated.

    Regards,
    Andrew

    Read the article

  • How do I make a SignalR client do something when it receives a message?

    - by Ben
    I want to do something when a client receives a message via a SignalR hub. I've put together a basic chat app, but I want people to be able to "like" a chat comment. I'm thinking the way to do this is to find the chat message on the client's page and update it using JavaScript. In the meantime, to prove the concept, I just want to make an alert pop up on the client machine saying that another user likes the comment. Trouble is, I'm not sure where to put it. (I am struggling to find SignalR documentation, to be honest.) I can't get my head round what is calling what here.

    My ChatHub class is as follows:

        public class ChatHub : Hub
        {
            public void Send(string name, string message)
            {
                // Call the broadcastMessage method to update clients.
                Clients.All.broadcastMessage(name, message);
            }
        }

    And my JavaScript is (note: the original snippet was missing its final closing brackets, restored here):

        $(function () {
            // Declare a proxy to reference the hub.
            var chat = $.connection.chatHub;

            // Create a function that the hub can call to broadcast messages.
            chat.client.broadcastMessage = function (name, message) {
                // Html encode display name and message.
                var encodedName = $('<div />').text(name).html();
                var encodedMsg = $('<div />').text(message).html();

                // Add the message to the page.
                var divContent = $('#discussion').html();
                $('#discussion').html('<div class="container">' +
                    '<div class="content">' +
                    '<p class="username">' + encodedName + '</p>' +
                    '<p class="message">' + encodedMsg + '</p>' +
                    '</div>' +
                    '<div class="slideout">' +
                    '<div class="clickme" onclick="slideMenu(this)"></div>' +
                    '<div class="slidebutton"><img id="imgid" onclick="likeButtonClick(this)" src="Images/like.png" /></div>' +
                    '<div class="slidebutton"><img onclick="commentButtonClick(this)" src="Images/comment.png" /></div>' +
                    '<div class="slidebutton" style="margin-right:0px"><img onclick="redcardButtonClick(this)" src="Images/redcard.png" /></div>' +
                    '</div>' +
                    '</div>' + divContent);
            };

            // Set initial focus to message input box.
            $('#message').focus();

            // Start the connection.
            $.connection.hub.start().done(function () {
                $('#sendmessage').click(function () {
                    // Call the Send method on the hub.
                    chat.server.send($('#lblUser').html(), $('#message').val());

                    // Clear text box and reset focus for next comment.
                    $('#message').val('').focus();
                });
            });
        });

    Read the article
