Search Results

Search found 21072 results on 843 pages for 'thin client'.

Page 2/843 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Design pattern for client/server sessions?

    - by nonot1
    Are there any common patterns or general guidance I can learn from for how to design a client/server system where both the client and server must maintain some kind of per-client session state? I've found any number of libraries that can help with some of the plumbing, but it's the overall design I'm wondering about. Open issues in my mind: How to structure the client/server communication so that both bidirectional synchronous and asynchronous requests are possible? The server side needs to spawn a couple of per-connected-client, session-long helper processes. How to manage that? How to manage the mapping from a given client (and any of its requests) to server state and helper process instances, in the face of multiple clients and intermittent network connectivity? Most communication can be simple blocking request/reply, but some will be long-running processing tasks that the client will want to keep tabs on. To the extent that it matters, the platform is Linux/C/C++. Not web based. Just an existing thick-client software app being modified to talk to backend servers for some tasks.
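    A language-agnostic sketch of one common shape for the mapping question, written in Ruby only for brevity: key everything by a client/session id rather than by socket, and hang the protocol state, helper-process pids and long-running job handles off that id so a reconnecting client can be re-attached. All names below are hypothetical, not a prescribed design.

        require 'securerandom'

        Session = Struct.new(:client_id, :helper_pids, :jobs, :last_seen)

        class SessionRegistry
          def initialize
            @sessions = {}
          end

          # Called on connect and on reconnect; the client presents its id,
          # or is handed a fresh one on first contact.
          def attach(client_id = SecureRandom.uuid)
            @sessions[client_id] ||= Session.new(client_id, [], {}, Time.now)
          end

          # Spawn the per-client, session-long helpers once per session,
          # not once per TCP connection.
          def ensure_helpers(session, command)
            session.helper_pids << Process.spawn(command) if session.helper_pids.empty?
          end

          # Long-running work is tracked by a job id the client can poll
          # over a later (possibly different) connection.
          def start_job(session, job_id)
            session.jobs[job_id] = :running
            job_id
          end
        end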

    Read the article

  • IT merger - self-sufficient site with domain controller vs. thin-client outpost with access to terminal server

    - by imagodei
    SITUATION: A larger company acquires a smaller one. IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). The mail solution is a hosted Exchange. There are approx. 35 desktops and laptops in the company. IT infrastructure unification: There are 2 IT merging proposals. 1.) Replace the old servers, install a Win Server 2008 domain controller, and set up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location in the larger company. Similarly with the backup - it remains local and, if needed, is replicated to a centralized location. Licensing is managed by the smaller company. 2.) All servers are moved to a centralized location in the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a Terminal server at the same central location. Citrix solutions will be used. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line to ensure near 100% availability is needed. Licensing is mainly managed by the larger company. Only specialized software for PCs that will not be virtualized is managed by the smaller company. I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project that runs on the thin server (1.3.1) on an Ubuntu server. I ssh to the server and start thin with the command 'thin start -C config/thin.yml', using the following thin.yml: port: 3000 log: log/thin.log timeout: 30 chdir: /home/byht/56platform/dev/tracker environment: production servers: 1 daemonize: true After thin starts successfully, I visit the project and it works well. Then I close the terminal. I can still visit the pages that have already been visited, but when I visit pages that had not been visited before closing the ssh terminal, a "500" error appears on the page. I didn't find any error messages in the log file. I have tried starting thin with nohup and with sudo, but neither helped. If I sign in to the Ubuntu server locally, the problem disappears. But I need to sign in to the server over ssh to start thin when I'm at home.

    Read the article

  • Handling packet impersonation in a client-server model online game

    - by TheDespite
    I am designing a server-client model game library/engine. How do I handle possible impersonation in frequent update packets - and should I even bother? In my current design anyone could copy a packet from someone else and modify it to execute any non-critical action for another client. I am currently compressing all datagrams, so that adds just a tad of security. Edit: One way I thought about was to send a unique "key" to the verified client every x_time; the client then has to add that key to all of its update packets until a new key is sent. Edit 2: I should have mentioned that I am not concerned about whether the actions described in the packet are available to the client at the time - this is all checked by the server, which I thought was obvious. I am only concerned about someone sending packets for another client.
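    Building on the Edit above: rather than embedding the key itself in each packet, a common refinement is to have the client append a keyed MAC of the packet body, so the key never travels with the data and copied packets cannot be altered. A minimal Ruby sketch of the idea; class and field names are hypothetical and the 8-byte tag length is arbitrary.

        require 'openssl'
        require 'securerandom'

        class PacketSigner
          def initialize(session_key)
            @session_key = session_key   # issued to the verified client over a trusted channel
          end

          # Append a truncated HMAC-SHA256 tag to the payload before sending.
          def sign(payload)
            payload + OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), @session_key, payload)[0, 8]
          end

          # Recompute the tag server-side; return nil for forged or tampered packets.
          def verify(packet)
            payload, tag = packet[0...-8], packet[-8..-1]
            expected = OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA256'), @session_key, payload)[0, 8]
            expected == tag ? payload : nil   # prefer a constant-time compare in production
          end
        end

        signer = PacketSigner.new(SecureRandom.random_bytes(32))
        wire_packet = signer.sign("move:12,7")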

    Read the article

  • SMTP Client implementation [on hold]

    - by orif
    I'm implementing an SMTP client. What should the client do once it has already sent the "." at the end of the mail but didn't receive "250 Ok"? This is what the conversation between the client and server looks like: Server Response: 220 www.sample.com ESMTP Postfix Client Sending : HELO domain.com Server Response: 250 Hello domain.com Client Sending : MAIL FROM: <[email protected]> Server Response: 250 Ok Client Sending : RCPT TO: <[email protected]> Server Response: 250 Ok Client Sending : DATA Server Response: 354 End data with <CR><LF>.<CR><LF> Client Sending : Subject: Example Message Client Sending : From: [email protected] Client Sending : To: [email protected] Client Sending : Client Sending : TEST MAIL Client Sending : Client Sending : . Server Response: 250 Ok: queued as 23411 Client Sending : QUIT I'm not sure what I should do if the client sends "." and doesn't receive the 250 Ok - because of a possible network error. Was the "." sent or not? Should the client resend the mail - and maybe duplicate the item - or not, and risk losing an important mail item? Thank you.
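    This is the classic at-least-once versus at-most-once trade-off: the safer convention is to resend and let the receiving side de-duplicate, which works if the retry carries the same Message-ID as the first attempt. A minimal Ruby sketch of that approach using Net::SMTP; the host and addresses are placeholders, and the retry count is arbitrary.

        require 'net/smtp'
        require 'securerandom'

        # Generate the Message-ID once, so a retry after a lost "250 Ok"
        # resends the *same* message and duplicates can be detected downstream.
        message_id = "<#{SecureRandom.uuid}@example.com>"
        msg = "Subject: Example Message\r\n" \
              "From: sender@example.com\r\n" \
              "To: recipient@example.com\r\n" \
              "Message-ID: #{message_id}\r\n" \
              "\r\n" \
              "TEST MAIL\r\n"

        attempts = 0
        begin
          attempts += 1
          Net::SMTP.start('smtp.example.com', 25) do |smtp|
            smtp.send_message(msg, 'sender@example.com', 'recipient@example.com')
          end
        rescue Net::ReadTimeout, Errno::ECONNRESET, IOError
          raise unless attempts < 3
          retry   # may produce a duplicate, but never silently loses the mail
        end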

    Read the article

  • Ruby on Rails deployment, on "thin" server with a lot of attachments

    - by Horace Ho
    A lot of PDFs are stored inside MySQL, as a BLOB field for each PDF file. The average file size is 500K. The Rails app will stream the :binary data as a file download when a user clicks the download link. Assuming a maximum of 5 users downloading 5 PDFs concurrently, what kind of deployment setup parameters should I be aware of? E.g. for the case of thin: thin start --servers 3 Is --servers 3 good enough (or is 5 or more needed) for the above example? The 2nd question is whether 'thin' is a capable solution at all. Thanks!

    Read the article

  • Networking Client Server Packet logic (How they communicate)

    - by Trixmix
    I want to know the logic behind server-client communication through packets for a real-time game. For example, the server sends x packets, then the client receives x packets and processes them. Basically, what is the process for keeping the client and server in sync and able to receive and send packets? A more in-depth example of what I want to know, for the client: step 1, wait for a packet; step 2, read x packets; step 3, process x packets; step 4, send x packets; and so on. I need to know the very basic outline of the communication. The big questions are: 1) Do I send and read packets all at one time, i.e. loop through the incoming packet list and read them all, or one per server loop, or what? 2) In what order should I do things, i.e. first receive, then read, then process, then send, etc.? 3) As asked above, a step-by-step outline of what the server/client should do. Thanks!
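    A common answer is a fixed-rate tick loop: each tick, drain everything that has arrived, process it, then send one update per client. A minimal Ruby/UDP sketch of that structure; the port, payloads and tick rate are arbitrary choices, not part of any particular engine.

        require 'socket'

        socket = UDPSocket.new
        socket.bind('0.0.0.0', 9000)
        clients = {}   # [ip, port] => latest input received from that client

        loop do
          # 1) read everything that arrived since the last tick (non-blocking)
          begin
            loop do
              data, (_, port, _, ip) = socket.recvfrom_nonblock(1024)
              clients[[ip, port]] = data
            end
          rescue IO::WaitReadable
            # nothing left to read this tick
          end

          # 2) process the collected inputs and advance the simulation here

          # 3) send one state update per known client
          clients.each_key do |ip, port|
            socket.send('state-snapshot', 0, ip, port)
          end

          sleep 0.05   # ~20 ticks per second
        end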

    Read the article

  • Thin web server - single or multiple instances per IP address:port?

    - by wchrisjohnson
    I'm deploying a rack/sinatra/web socket app onto several servers and will use thin as the web server (http://code.macournoyer.com/thin/). There are almost no views to show, so I am not front-ending it with a traditional web server like Apache or nginx. In general, you see thin started and the underlying config file for it has the number of server instances to start, say 3, and the port to start with, say 5000. So, in my example, when thin starts, it starts up three instances on a range of ports, starting on port 5000. If I have a series of virtual machines, say 3, 6, 9, etc. that I treat as a cluster, would/should I choose to start a single thin instance on each VM, or multiple instances on each VM? Why? Thanks - Chris

    Read the article

  • Client-Server Networking Between PHP Client and Java Server

    - by Muhammad Yasir
    Hi there, I have a university project which is already 99% complete. It consists of two parts - a website (PHP) and a desktop application (Java). People have accounts on the website and want to query different information regarding their accounts. They send an SMS, which is received by the desktop application; it queries the website's database (MySQL) and sends the reply accordingly. This part is working superbly. The problem is that sometimes the website needs to instruct the desktop application to send a specific SMS to a particular number. There seemed to be no way other than putting all the load on the DB server... This is how I made it work: the website puts SMS jobs in a specific table, and the Java application polls this table again and again; if it finds a job, it executes it. Even this part is working correctly, but unfortunately polling the DB like this is not acceptable to my university. :( The other approach I could think of is a client-server one. I tried making a Java server and a PHP client for it, so that whenever an SMS is to be sent, the website opens a socket connection to the desktop application and sends two strings (cell # and SMS message). Unfortunately I am unable to get this working. I was able to make a Java server which works fine when connected to by a Java client; similarly my PHP client connects correctly to a PHP server. But when I try to cross them, they start hating each other... PHP shows no error, but Java throws a StreamCorruptedException when it tries to read the header of the input stream. Could someone please tell me what I can try to make the PHP client and Java server work together? Or if the said purpose can be achieved by another means, how? Regards, Yasir

    Read the article

  • RMagick error in Rails, with AMCharts

    - by Elliot
    Hi Everyone, I'm using AMCharts and rails. AMCharts uses the Image Magic lib to export an image of the chart. In rails this is done with the gem, RMagic. In a controller this is implemented with the following controller method: def export width = params[:width].to_i height = params[:height].to_i data = {} img = Magick::Image.new(width, height) height.times do |y| row = params["r#{y}"].split(',') row.size.times do |r| pixel = row[r].to_s.split(':') pixel[0] = pixel[0].to_s.rjust(6, '0') if pixel.size == 2 pixel[1].to_i.times do (data[y] ||= []) << pixel[0] end else (data[y] ||= []) << pixel[0] end end width.times do |x| img.pixel_color(x, y, "##{data[y][x]}") end end img.format = "PNG" send_data(img.to_blob , :disposition => 'inline', :type => 'image/png', :filename => "chart.png?#{rand(99999999).to_i}") end When the controller is accessed however, I receive this error in the page: The change you wanted was rejected. Maybe you tried to change something you didn't have access to. And this error in the logs (its running on heroku btw): ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.0.1) lib/thin/connection.rb:80:in `pre_process' thin (1.0.1) lib/thin/connection.rb:78:in `catch' thin (1.0.1) lib/thin/connection.rb:78:in `pre_process' thin (1.0.1) lib/thin/connection.rb:57:in `process' thin (1.0.1) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run' thin (1.0.1) lib/thin/backends/base.rb:57:in `start' thin (1.0.1) lib/thin/server.rb:150:in `start' thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start' thin (1.0.1) lib/thin/runner.rb:173:in `send' thin (1.0.1) lib/thin/runner.rb:173:in `run_command' thin (1.0.1) lib/thin/runner.rb:139:in `run!' thin (1.0.1) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 Rendering /disk1/home/slugs/149903_609c236_eb4f/mnt/public/422.html (422 Unprocessable Entity) Anyone have any idea what's going on here?
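    The ActionController::InvalidAuthenticityToken points at Rails' CSRF protection: the POST coming from the Flash chart exporter carries no authenticity token, so Rails rejects the request before the export action ever runs. A common workaround is to exempt just that action from the check; a minimal Rails 2.x-style sketch, with the controller name assumed (only the export action comes from the question):

        class ChartsController < ApplicationController
          # Skip the CSRF check only for the AMCharts export POST; everything
          # else keeps the default protect_from_forgery behaviour.
          skip_before_filter :verify_authenticity_token, :only => :export

          def export
            # ... existing RMagick export code from the question ...
          end
        end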

    Read the article

  • Typus not working on Heroku (error 500) - what am I doing wrong?

    - by Victor P
    Has anyone used Typus (an admin plugin for Rails) on Heroku? http://intraducibles.com/projects/typus/install I followed the instructions, and on my local machine (Rails 2.3.5) it works fine, but when I deploy to Heroku it crashes. What am I doing wrong? The log: Logfile created on Mon Mar 29 18:14:06 -0700 2010 Processing TypusController#dashboard (for 190.196.113.93 at 2010-03-29 18:14:07) [GET] Parameters: {"action"=>"dashboard", "controller"=>"typus"} ActiveRecord::StatementInvalid (PGError: ERROR: relation "typus_users" does not exist : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"typus_users"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum ): vendor/plugins/typus/app/controllers/typus_controller.rb:128:in `verify_typus_users_table_schema' /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.0.1) lib/thin/connection.rb:80:in `pre_process' thin (1.0.1) lib/thin/connection.rb:78:in `catch' thin (1.0.1) lib/thin/connection.rb:78:in `pre_process' thin (1.0.1) lib/thin/connection.rb:57:in `process' thin (1.0.1) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run' thin (1.0.1) lib/thin/backends/base.rb:57:in `start' thin (1.0.1) lib/thin/server.rb:150:in `start' thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start' thin (1.0.1) lib/thin/runner.rb:173:in `send' thin (1.0.1) lib/thin/runner.rb:173:in `run_command' thin (1.0.1) lib/thin/runner.rb:139:in `run!' thin (1.0.1) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 Rendering /disk1/home/slugs/157361_3469154_60b1/mnt/public/500.html (500 Internal Server Error)

    Read the article

  • Heroku Rails Internal Server Error

    - by Ryan Max
    Hello. I got a 500 Internal Server error when I try to deploy my Rails app on Heroku. It works fine on my local machine, so I'm not sure what's wrong here. It seems to be something with the "sessions" on the home controller. Here is my log: ==> production.log <== # Logfile created on Sun May 09 17:35:59 -0700 2010 Processing HomeController#index (for 76.169.212.8 at 2010-05-09 17:36:00) [GET] ActiveRecord::StatementInvalid (PGError: ERROR: relation "sessions" does not exist : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"sessions"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum ): lib/authenticated_system.rb:106:in `login_from_session' lib/authenticated_system.rb:12:in `current_user' lib/authenticated_system.rb:6:in `logged_in?' lib/authenticated_system.rb:35:in `authorized?' lib/authenticated_system.rb:53:in `login_required' /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.2.7) lib/thin/connection.rb:76:in `pre_process' thin (1.2.7) lib/thin/connection.rb:74:in `catch' thin (1.2.7) lib/thin/connection.rb:74:in `pre_process' thin (1.2.7) lib/thin/connection.rb:57:in `process' thin (1.2.7) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.10) lib/eventmachine.rb:256:in `run_machine' eventmachine (0.12.10) lib/eventmachine.rb:256:in `run' thin (1.2.7) lib/thin/backends/base.rb:57:in `start' thin (1.2.7) lib/thin/server.rb:156:in `start' thin (1.2.7) lib/thin/controllers/controller.rb:80:in `start' thin (1.2.7) lib/thin/runner.rb:177:in `send' thin (1.2.7) lib/thin/runner.rb:177:in `run_command' thin (1.2.7) lib/thin/runner.rb:143:in `run!' thin (1.2.7) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 Rendering /disk1/home/slugs/155328_f2d3c00_845e/mnt/public/500.html (500 Internal Server Error) And here is my home_controller.rb: class HomeController < ApplicationController before_filter :login_required def index @user = current_user @user.profile ||= Profile.new @profile = @user.profile end end Does it have something to do with the way my routes are set up? Or is it my authentication? (I am using Restful Authentication with Bort.)

    Read the article

  • Can't install thin by using rubygems on Ubuntu 9.10

    - by skyfive
    How can I fix this error, and install thin or other gems? $ sudo gem install thin Building native extensions. This could take a while... ERROR: Error installing thin: ERROR: Failed to build gem native extension. /usr/bin/ruby1.9.1 extconf.rb checking for rb_trap_immediate in ruby.h,rubysig.h... *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.9.1 /usr/lib/ruby/1.9.1/mkmf.rb:362:in `try_do': The complier failed to generate an executable file. (RuntimeError) You have to install development tools first. from /usr/lib/ruby/1.9.1/mkmf.rb:425:in `try_compile' from /usr/lib/ruby/1.9.1/mkmf.rb:543:in `try_var' from /usr/lib/ruby/1.9.1/mkmf.rb:791:in `block in have_var' from /usr/lib/ruby/1.9.1/mkmf.rb:668:in `block in checking_for' from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block (2 levels) in postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open' from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block in postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open' from /usr/lib/ruby/1.9.1/mkmf.rb:270:in `postpone' from /usr/lib/ruby/1.9.1/mkmf.rb:667:in `checking_for' from /usr/lib/ruby/1.9.1/mkmf.rb:790:in `have_var' from extconf.rb:16:in `' Gem files will remain installed in /var/lib/gems/1.9.1/gems/eventmachine-0.12.10 for inspection. Results logged to /var/lib/gems/1.9.1/gems/eventmachine-0.12.10/ext/gem_make.out Addtional Infomation as below $ cat /etc/issue Ubuntu 9.10 \n \l $ dpkg -l | grep ruby ii libreadline-ruby1.9.1 1.9.1.243-2 Readline interface for Ruby 1.9.1 ii libruby1.9.1 1.9.1.243-2 Libraries necessary to run Ruby 1.9.1 ii ruby1.9.1 1.9.1.243-2 Interpreter of object-oriented scripting lan ii ruby1.9.1-dev 1.9.1.243-2 Header files for compiling extension modules ii rubygems1.9.1 1.3.5-1ubuntu2 package management framework for Ruby librar $ ruby -v ruby 1.9.1p243 (2009-07-16 revision 24175) [x86_64-linux] $ gem list *** LOCAL GEMS *** rack (1.1.0) sinatra (1.0)

    Read the article

  • intelligent thin start with port alias for bash

    - by seaofclouds
    I would like a single alias (ts) which starts my local development server. The script should test for an open port starting at 3000 and use the first available port. Additionally, some sites require a rackup file, making -R config.ru necessary, so the script should check the current directory for a config.ru file and add that option if present. Currently, to start my local development environment, I run: alias ts="thin -R config.ru -p 3000 start" Often I need to run several servers to test different sites, so I've created additional aliases: alias ts1="thin -R config.ru -p 3001 start"
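    One way to get there with a single alias is to point ts at a small Ruby helper that probes for the first free port and only adds -R when a config.ru is present. A sketch, assuming the helper is saved as ~/bin/ts-helper and wired up with alias ts="ruby ~/bin/ts-helper" (both names are made up):

        require 'socket'

        # Find the first port from 3000 upward that nothing is listening on.
        def port_free?(port)
          TCPServer.new('127.0.0.1', port).close
          true
        rescue Errno::EADDRINUSE
          false
        end

        port = 3000
        port += 1 until port_free?(port)

        # Build the same command the existing aliases use, adding -R only when needed.
        args = ['thin', '-p', port.to_s, 'start']
        args.insert(1, '-R', 'config.ru') if File.exist?('config.ru')
        exec(*args)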

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

    Recently, I got a new project assignment that requires to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I am using OpenVPN since more than 5 years within my company's network I was quite curious about their solution and how it would actually be different from OpenVPN. Well, short version: It is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary). WatchGuard Firebox SSL - About dialog Borrowing some files from a Windows client installation Initially, I didn't know about the product, so therefore I went through the installation on Windows 8. No obstacles (and no restart despite installation of TAP device drivers!) here and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated from both parties - customer and me. Of course, this whole client package and my long year approved and stable installation ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this is years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files: Application folder below user profile with configuration and certificate files From there we are going to borrow four files, namely: ca.crt client.crt client.ovpn client.pem and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'm going to describe why a little bit further down this article. I know that you can do that! Feedback in the comment section is appreciated. Configuration of OpenVPN (console) Depending on your distribution the following steps might be a little different but in general you should be able to get the important information from it. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's what it is necessary to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement: $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome Just to be on the safe side. The four above mentioned files from your Windows machine could be copied anywhere but either you place them below your own user directory or you put them (as root) below the default directory: /etc/openvpn At this stage you would be able to do a test run already. Just in case, run the following command and check the output (it's the similar information you would get from the 'View Logs...' context menu entry in Windows: $ sudo openvpn --config client.ovpn Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client. 
Remote server and user authentication to establish the VPN Please complete the test run and see whether all went well. You can disconnect pressing Ctrl+C. Simplifying your life - authentication file In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without having the necessity to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf' which is the default extension on Linux systems for OpenVPN, and then open the client configuration file in order to extend an existing line. $ sudo mv client.ovpn client.conf $ sudo nano client.conf You should have a similar content to this one here: dev tunclientproto tcp-clientca ca.crtcert client.crtkey client.pemtls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"remote-cert-eku "TLS Web Server Authentication"remote 1.2.3.4 443persist-keypersist-tunverb 3mute 20keepalive 10 60cipher AES-256-CBCauth SHA1float 1reneg-sec 3660nobindmute-replay-warningsauth-user-pass auth.txt Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is marked in red and we have to create a new authentication file 'auth.txt'. You can give the directive 'auth-user-pass' any file name you'd like to. Due to my existing OpenVPN infrastructure my setup differs completely from the above written content but for sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt' $ sudo nano auth.txt and just put two lines of information in it - username on the first, and password on the second line, like so: myvpnusernameverysecretpassword Store the file, change permissions, and call openvpn with your configuration file again: $ sudo chmod 0600 auth.txt $ sudo openvpn --config client.conf This should now work without being prompted to enter username and password. In case that you placed your files below the system-wide location /etc/openvpn you can operate your VPNs also via service command like so: $ sudo service openvpn start client $ sudo service openvpn stop client Using Network Manager For newer Linux users or the ones with 'console-phobia' I'm going to describe now how to use Network Manager to setup the OpenVPN client. For this move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs... which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'. Network connections overview in Ubuntu Click on 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...' Choose connection type to import VPN configuration Now you navigate to your folder where you put the client files from the Windows system and you open the 'client.ovpn' file. 
Next, on the tab 'VPN' proceed with the following steps (directives from the configuration file are referred): General Check the IP address of Gateway ('remote' - we used 1.2.3.4 in this setup) Authentication Change Type to 'Password with Certificates (TLS)' ('auth-pass-user') Enter User name to access your client keys (Auth Name: myvpnusername) Enter Password (Auth Password: verysecretpassword) and choose your password handling Browse for your User Certificate ('cert' - should be pre-selected with client.crt) Browse for your CA Certificate ('ca' - should be filled as ca.crt) Specify your Private Key ('key' - here: client.pem) Then click on the 'Advanced...' button and check the following values: Use custom gateway port: 443 (second value of 'remote' directive) Check the selected value of Cipher ('cipher') Check HMAC Authentication ('auth') Enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote') Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry on your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require some additional attention. Advanced topic: routing As stated above, I'm running the 'WatchGuard client for Linux' on my head-less server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side there are two possibilities to enable that: Proper routing on both sides of the connection which enables both-direction access, or Network masquerading on the 'client side' of the connection Following, I'm going to describe the second option a little bit more in detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to setup a Linux gateway/router' - just use Google. OK, back to the actual modifications. First, we need to have some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we established the OpenVPN channel, like so: $ sudo tail -n20 /var/log/syslog Or if your system is quite busy with logging, like so: $ sudo less /var/log/syslog | grep ovpn The output should contain PUSH received message similar to the following one: Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0' The interesting part for us is the route command which I highlighted already in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase your the netmask. {loadposition content_adsense} After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly. 
I created a shell script to take care of those steps:

    #!/bin/sh -e
    IPTABLES=/sbin/iptables
    DEV_LAN=eth0
    DEV_VPNS=tun+
    VPN=192.168.1.0/24

    $IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
    $IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
    $IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE

I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only. Simplifying your life - automatic connect on boot Now that the client connection works flawlessly and the routing and iptables configuration is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly. $ sudo nano /etc/default/openvpn It should have content similar to this:

    # This is the configuration file for /etc/init.d/openvpn
    #
    # Start only these VPNs automatically via init script.
    # Allowed values are "all", "none" or space separated list of
    # names of the VPNs. If empty, "all" is assumed.
    # The VPN name refers to the VPN configuration file name.
    # i.e. "home" would be /etc/openvpn/home.conf
    #AUTOSTART="all"
    #AUTOSTART="none"
    #AUTOSTART="home office"
    #
    # ... more information which remains unmodified ...

With the OpenVPN client configuration as described above you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established, too. You can easily test this configuration without a reboot, like so: $ sudo service openvpn restart Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi

    Read the article

  • Best approach to designing multi-client applications

    - by Tomh
    Hi, I was wondering how you start out when you need to design a multi-client project where multiple clients can interact with a server. Specifically, how do you go about dealing with the different states and with message handling, and how do you start designing for and considering all these cases? For example, a video webchat application where it is possible that you call another client while that client is already in a call, or is stuck in a modal dialog such that the calling dialog does not come through.
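    A common starting point for the state/message problem is to make every client-session state explicit and route every incoming message through a single transition table, so awkward cases like "callee already in a call" or "callee blocked by a modal dialog" become ordinary transitions rather than special cases. A small illustrative Ruby sketch; the states and messages are invented for the webchat example and are not a complete protocol.

        class CallSession
          def initialize
            @state = :idle
          end

          # Every message is considered from every state; unexpected combinations
          # fall through to :ignored instead of crashing or being lost.
          def handle(message)
            case [@state, message]
            when [:idle,    :incoming_call] then @state = :ringing
            when [:ringing, :accept]        then @state = :in_call
            when [:ringing, :reject]        then @state = :idle
            when [:in_call, :incoming_call] then :busy_signal   # already in a call
            when [:in_call, :hang_up]       then @state = :idle
            else                                 :ignored       # e.g. UI blocked by a modal dialog; let a timeout clean up
            end
          end
        end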

    Read the article

  • Thin & Sinatra not taking port

    - by NekoNova
    I'm having problems settig up my application using Thin and Sinatra. I have created a development-config.ru file that contains the following settings: # This is a rack configuration file to fire up the Sinatra application. # This allows better control and configuration as we are using the modular # approach here for controlling our application. # # Extend the Ruby load path with the root of the API and the lib folder # so that we can automatically include all our own custom classes. This makes # the requiring of files a bit cleaner and easier to maintain. # This is basically what rails does as well. # We also store the root of the API in the ENV settings to ensure we have # always access to the root of the API when building paths. ENV['API_ROOT'] = File.dirname(__FILE__) $:.unshift ENV['API_ROOT'] $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'lib')) $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'db')) # Now we can require all the gems used for the entire API by simpling requiring # them here. We can also include the classes that we have defined inside the lib # folder. require 'rubygems' require 'bundler' # Run Bundler to setup our gems properly. This will install all the missing gems on # the system and ensure that the deployment environment is ready to run. Bundler.require # To make the loading easier for the application, we will now automatically load all # models that have been defined inside the lib folder. This ensures that we do not need # to load them anymore anywhere else in our application, as the models will be known to # ruby everywhere. Dir.glob(File.join(ENV['API_ROOT'], 'lib', '**', '*.rb')).each{|file| require file} # Now we will configure the Sinatra application so that we can fire up the entire API. # This requires some detailed settings like whether logging is allowed, the port to be # used and some folder locations. require 'sinatra' require 'app' set :logging, true set :dump_errors, true set :port, 3001 set :views, "#{ENV['API_ROOT']}/views" set :public_folder, "#{ENV['API_ROOT']}/public" set :environment, :test # Start up the Sinatra application with all the settings that we have defined. run App.new This is based upon the information I found on the Sinatra website. However, the problem is that I cannot get the application running on port 3001. If I use thin start -R development-config.ru it runs on port 3000. If I use rackup config-development.ru it runs on port 9696. However I never see Sinatra kick in or run over port 3000. My application looks like this: # Author : Arne De Herdt # Email : # This is the actuall application that will be running under Sinatra # to serve the requests for the billing middleware API. # We use the modular approach here to allow control when deploying # the application using Capistrano. require 'sinatra/base' require 'logger' require 'savon' require 'billcrux' class App < Sinatra::Base # This action responds to POST requests on the URI '/billcrux/register' # and is responsible for handeling registration requests with the # BillCrux payment system. # The post "/billcrux/register" do # do stuff end end Can someone tell me what I am doing wrong?
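    A likely explanation: when the app is booted through rackup or thin start -R, the port is owned by the Rack server, so Sinatra's set :port is ignored - thin falls back to its default of 3000 and rackup to its own default. Either pass the port on the command line, e.g. thin start -R development-config.ru -p 3001, or let Sinatra boot its own server so the setting is honoured. A minimal sketch of the second option, assuming the same App class as above:

        # at the bottom of development-config.ru
        if $0 == __FILE__
          App.run!(:port => 3001)   # Sinatra starts Thin itself and honours the port setting
        else
          run App.new               # under `thin start -R ... -p 3001` the -p flag decides
        end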

    Read the article

  • Unobtrusive Client Side Validation with Dynamic Contents in ASP.NET MVC 3

    - by imran_ku07
        Introduction:          A while ago, I blogged about how to perform client side validation for dynamic contents in ASP.NET MVC 2 at here. Using the approach given in that blog, you can easily validate your dynamic ajax contents at client side. ASP.NET MVC 3 also supports unobtrusive client side validation in addition to ASP.NET MVC 2 client side validation for backward compatibility. I feel it is worth to rewrite that blog post for ASP.NET MVC 3 unobtrusive client side validation. In this article I will show you how to do this.       Description:           I am going to use the same example presented at here. Create a new ASP.NET MVC 3 application. Then just open HomeController.cs and add the following code,   public ActionResult CreateUser() { return View(); } [HttpPost] public ActionResult CreateUserPrevious(UserInformation u) { return View("CreateUserInformation", u); } [HttpPost] public ActionResult CreateUserInformation(UserInformation u) { if(ModelState.IsValid) return View("CreateUserCompanyInformation"); return View("CreateUserInformation"); } [HttpPost] public ActionResult CreateUserCompanyInformation(UserCompanyInformation uc, UserInformation ui) { if (ModelState.IsValid) return Content("Thank you for submitting your information"); return View("CreateUserCompanyInformation"); }             Next create a CreateUser view and add the following lines,   <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> CreateUser </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div id="dynamicData"> <%Html.RenderPartial("CreateUserInformation"); %> </div> </asp:Content>             Next create a CreateUserInformation partial view and add the following lines,   <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserInformation", "Home")) { %> <table id="table1"> <tr style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Information </td> </tr> <tr> <td> First Name </td> <td> <%=Html.TextBoxFor(a => a.FirstName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.FirstName)%> </td> </tr> <tr> <td> Last Name </td> <td> <%=Html.TextBoxFor(a => a.LastName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.LastName)%> </td> </tr> <tr> <td> Email </td> <td> <%=Html.TextBoxFor(a => a.Email)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Email)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="submit" name="userInformation" value="Next"/> </td> </tr> </table> <%} %> <script type="text/javascript"> $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>             Next create a CreateUserCompanyInformation partial view and add the following lines,   <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<UnobtrusiveValidationWithDynamicContents.Models.UserCompanyInformation>" %> <% Html.EnableClientValidation(); %> <%using (Html.BeginForm("CreateUserCompanyInformation", "Home")) { %> <table id="table1"> <tr 
style="background-color:#E8EEF4;font-weight:bold"> <td colspan="3" align="center"> User Company Information </td> </tr> <tr> <td> Company Name </td> <td> <%=Html.TextBoxFor(a => a.CompanyName)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyName)%> </td> </tr> <tr> <td> Company Address </td> <td> <%=Html.TextBoxFor(a => a.CompanyAddress)%> </td> <td> <%=Html.ValidationMessageFor(a => a.CompanyAddress)%> </td> </tr> <tr> <td> Designation </td> <td> <%=Html.TextBoxFor(a => a.Designation)%> </td> <td> <%=Html.ValidationMessageFor(a => a.Designation)%> </td> </tr> <tr> <td colspan="3" align="center"> <input type="button" id="prevButton" value="Previous"/>   <input type="submit" name="userCompanyInformation" value="Next"/> <%=Html.Hidden("FirstName")%> <%=Html.Hidden("LastName")%> <%=Html.Hidden("Email")%> </td> </tr> </table> <%} %> <script type="text/javascript"> $("#prevButton").click(function () { $.post('<%= Url.Action("CreateUserPrevious")%>', $($("form")[0]).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); }); $("form").submit(function (e) { if ($(this).valid()) { $.post('<%= Url.Action("CreateUserCompanyInformation")%>', $(this).serialize(), function (data) { $("#dynamicData").html(data); $.validator.unobtrusive.parse($("#dynamicData")); }); } e.preventDefault(); }); </script>             Next create a new class file UserInformation.cs inside Model folder and add the following code,   public class UserInformation { public int Id { get; set; } [Required(ErrorMessage = "First Name is required")] [StringLength(10, ErrorMessage = "First Name max length is 10")] public string FirstName { get; set; } [Required(ErrorMessage = "Last Name is required")] [StringLength(10, ErrorMessage = "Last Name max length is 10")] public string LastName { get; set; } [Required(ErrorMessage = "Email is required")] [RegularExpression(@"^\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*$", ErrorMessage = "Email Format is wrong")] public string Email { get; set; } }             Next create a new class file UserCompanyInformation.cs inside Model folder and add the following code,    public class UserCompanyInformation { public int UserId { get; set; } [Required(ErrorMessage = "Company Name is required")] [StringLength(10, ErrorMessage = "Company Name max length is 10")] public string CompanyName { get; set; } [Required(ErrorMessage = "CompanyAddress is required")] [StringLength(50, ErrorMessage = "Company Address max length is 50")] public string CompanyAddress { get; set; } [Required(ErrorMessage = "Designation is required")] [StringLength(50, ErrorMessage = "Designation max length is 10")] public string Designation { get; set; } }            Next add the necessary script files in Site.Master,   <script src="<%= Url.Content("~/Scripts/jquery-1.4.4.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.min.js")%>" type="text/javascript"></script> <script src="<%= Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")%>" type="text/javascript"></script>            Now run this application. You will get the same behavior as described in this article. The key important feature to note here is the $.validator.unobtrusive.parse method, which is used by ASP.NET MVC 3 unobtrusive client side validation to initialize jQuery validation plug-in to start the client side validation process. 
Another important method to note here is the jQuery.valid method which return true if the form is valid and return false if the form is not valid .       Summary:          There may be several occasions when you need to load your HTML contents dynamically. These dynamic HTML contents may also include some input elements and you need to perform some client side validation for these input elements before posting thier values to server. In this article I shows you how you can enable client side validation for dynamic input elements in ASP.NET MVC 3. I am also attaching a sample application. Hopefully you will enjoy this article too.   SyntaxHighlighter.all()

    Read the article

  • Demonstrate bad code to client?

    - by jtiger
    I have a new client who has asked me to redesign their website, an ASP.NET WebForms application that was developed by another consultant. It seemed straightforward (it never is), but I took a look at the code to make sure I knew what I was in for. This application was not written well. At all. It is extremely vulnerable to SQL injection attacks, business logic is spread throughout the entire application, there is a lot of duplication, and there is dead-end code that does nothing. On top of that, it keeps throwing exceptions that are being smothered, so it all appears to be running smoothly. My job is simply to update the HTML and CSS, but much of the HTML is being generated in business logic, and it would be a nightmare for me to sort everything out. My estimates for the redesign were longer than the client was aiming for, and they are asking why it takes so long. How can I explain to my client just how bad this code is? In their mind the application is running great and the redesign should be a quick one-off. It's my word against the previous consultant's, so how can I give simple, concrete examples that a non-technical client would understand?

    Read the article

  • Technology Choice for a Client Application [on hold]

    - by AK_
    Not sure this is the right place to ask... I'm involved in the development of a new system, and we are now moving past the demo stage. We need to build a proper client application. The platform we care most about is Windows, for now at least, but we would love to support other platforms, as long as it's free :-). Or at least very cheap. We anticipate two kinds of users: occasional users, coming mostly from the web, and professionals, who would probably require more features and better performance and would probably prefer a native client. Our server exposes two APIs: a SOAP API (WCF behind the scenes) that supports 100% of the functionality, and a small, very fast UDP + binary API that duplicates some of the functionality and exists for performance in certain real-time scenarios. Our team is mostly proficient in .NET, C#, and C++ development, and rather familiar with web development (HTML, JavaScript). We are probably intending to develop two clients (one per user profile): a web app and a native app. Architecturally, we would like to have as many common components as possible. We would like to have several layers - Communication, Client Model, Client Logic - shared by both clients. We would also like to be able to add features to both clients where only the actual UI has to be built twice and the rest is shared. We are looking at several technologies: WPF + Silverlight, pure HTML, Flash/Flex (AIR?), Java (JavaFX?), and we are considering poking at WinRT (or whatever the proper name is). The question is: which technology would you recommend and why? And what advantages or disadvantages would it have with regard to our requirements?

    Read the article

  • ltsp-build-client error

    - by sat
    I am facing some issues while building a thin client using ltsp-build-client; it reports an error. The error is: I: Retrieving Release E: Failed getting release file file://root/ISO/ubuntu-12.04.1-desktop-i386.iso/dists/squeeze/Release error: LTSP client installation ended abnormally My command is: ltsp-build-client --mirror file://root/ISO/ubuntu-12.04.1-desktop-i386.iso --security-mirror none --accept-unsigned-packages I am referring to this URL: http://wiki.debian.org/LTSP/Howto. How can I solve this error?

    Read the article

  • WCF client encrypt message to JAVA WS using username_token with message protection client policy

    - by Alex
    I am trying to create a WCF client APP that is consuming a JAVA WS that uses username_token with message protection client policy. There is a private key that is installed on the server and a public certificate file was exported from the JKS keystore file. I have installed the public key into certificate store via MMC under Personal certificates. I am trying to create a binding that will encrypt the message and pass the username as part of the payload. I have been researching and trying the different configurations for about a day now. I found a similar situation on the msdn forum: http://social.msdn.microsoft.com/Forums/en/wcf/thread/ce4b1bf5-8357-4e15-beb7-2e71b27d7415 This is the configuration that I am using in my app.config <customBinding> <binding name="certbinding"> <security authenticationMode="UserNameOverTransport"> <secureConversationBootstrap /> </security> <httpsTransport requireClientCertificate="true" /> </binding> </customBinding> <endpoint address="https://localhost:8443/ZZZService?wsdl" binding="customBinding" bindingConfiguration="cbinding" contract="XXX.YYYPortType" name="ServiceEndPointCfg" /> And this is the client code that I am using: EndpointAddress endpointAddress = new EndpointAddress(url + "?wsdl"); P6.WCF.Project.ProjectPortTypeClient proxy = new P6.WCF.Project.ProjectPortTypeClient("ServiceEndPointCfg", endpointAddress); proxy.ClientCredentials.UserName.UserName = UserName; proxy.ClientCredentials.ClientCertificate.SetCertificate(StoreLocation.CurrentUser, StoreName.My, X509FindType.FindByThumbprint, "67 87 ba 28 80 a6 27 f8 01 a6 53 2f 4a 43 3b 47 3e 88 5a c1"); var projects = proxy.ReadProjects(readProjects); This is the .NET CLient error I get: Error Log: Invalid security information. On the Java WS side I trace the log : SEVERE: Encryption is enabled but there is no encrypted key in the request. I traced the SOAP headers and payload and did confirm the encrypted key is not there. Headers: {expect=[100-continue], content-type=[text/xml; charset=utf-8], connection=[Keep-Alive], host=[localhost:8443], Content-Length=[731], vsdebuggercausalitydata=[uIDPo6hC1kng3ehImoceZNpAjXsAAAAAUBpXWdHrtkSTXPWB7oOvGZwi7MLEYUZKuRTz1XkJ3soACQAA], SOAPAction=[""], Content-Type=[text/xml; charset=utf-8]} Payload: <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"><s:Header><o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"><o:UsernameToken u:Id="uuid-5809743b-d6e1-41a3-bc7c-66eba0a00998-1"><o:Username>admin</o:Username><o:Password>admin</o:Password></o:UsernameToken></o:Security></s:Header><s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><ReadProjects xmlns="http://xmlns.dev.com/WS/Project/V1"><Field>ObjectId</Field><Filter>Id='WS-Demo'</Filter></ReadProjects></s:Body></s:Envelope> I have also tryed some other bindings but with no success: <basicHttpBinding> <binding name="basicHttp"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="Certificate"/> </security> </binding> </basicHttpBinding> <wsHttpBinding> <binding name="wsBinding"> <security mode="Message"> <message clientCredentialType="UserName" negotiateServiceCredential="false" /> </security> </binding> </wsHttpBinding> Your help will be greatly aprreciatted! Thanks!

    Read the article

  • *Client* scalability for large numbers of remote web service calls

    - by Yuriy
    Hey guys, I was wondering if you could share best practices and common mistakes when it comes to making large numbers of time-sensitive web service calls. In my case, I have a SOAP-based and an XML-RPC-based web service to which I'm constantly making calls. I predict that this will soon become an issue as the number of calls per second grows. On a higher level, I was thinking of batching those calls and submitting them to the web services every 100 ms. Could you share what else works? On the lower-level side of things, I use the Apache XML-RPC client and the standard javax.xml.soap.* packages for my client implementations. Are you aware of any client-scalability-related tricks/tips/warnings with these packages? Thanks in advance, Yuriy

    Read the article

  • Web Service Client Java

    - by zeckk
    I generated a Java web service client from here -- http://api.search.live.net/search.wsdl. I want to run a search and list the returned values. How can I show the results? My code is: import java.rmi.RemoteException; import com.microsoft.schemas.LiveSearch._2008._03.Search.*; public class searchtry { public static void main(String[] args) throws RemoteException { LiveSearchPortTypeProxy client = new LiveSearchPortTypeProxy(); SearchRequest request = new SearchRequest(); SearchRequestType1 type1 = new SearchRequestType1(); request.setAppId("*****************"); // Windows Live gave this id for using that service request.setSources(new SourceType[]{SourceType.Web}); request.setQuery("Java"); type1.setParameters(request); SearchResponseType0 answer = client.search(type1); System.out.println(answer.toString()); } }

    Read the article

  • Sequence for authentication on a decoupled client?

    - by A T
    Using a sequence diagram and example code, could you explain to me how authentication works when the client is completely separated from the server? I.e. you haven't generated any of the client using a server-side template engine; rather, you are communicating using REST (SOAP xor HTTP) xor RPC (XML xor JSON) with JavaScript on the client side. Specifically, I would like to know the sequence of: authenticating using basic auth (user + pass) with "my" server; authenticating using OAuth2, e.g. with Facebook - first with Facebook's server, then whatever extra steps are needed for "my" server - and how it could be implemented. (Feel free to use pseudo-code [like below] or [preferably] a simple prototype using BackboneJS, AngularJS, EmberJS, BatmanJS, AgilityJS, SammyJS xor ActiveJS.) if cookie.status in [Expired, Tampered, Wrong IP, Invalid, Not Found]: try auth(user,pass): if user is in my db: try authenticate(user,pass) if successful: login user # give session-cookie here? else: present user with "auth failed" msg else if user not in db: redirect to "edit-profile" page PS: I have written an example (editable) auth sequence diagram, based on Facebook's documentation.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >