Search Results

Search found 20761 results on 831 pages for 'chef client'.


  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context

    In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager, structured roughly as a set of parallel rows of queued actions (illustrated with an image in the original post). So, for instance, if the result of examining an object should make the character say hello, walk a bit, and then sit down, I simply run the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (one entire row in that illustration) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together, so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed.

    Problem

    Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem, but I'm an absolute beginner when it comes to networking and I have a few questions. For the sake of simplicity in this discussion, we can abstract the action manager component to simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of this data structure synchronized between all players? Besides the core question, I have a few other concerns related to it (i.e. all possible implications of the same problem above):

    If I use a server/client architecture (with one of the players acting as both a server and a client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group?

    What about packet losses and the like? The game is deterministic, but any discrepancy in the sequence of actions executed on one client could lead to inconsistent states of the world. How do I safeguard against that sort of problem?

    What if I add too many actions at once - won't that cause problems for the connection? Any way to alleviate that?
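
    A common starting point for questions like this is to make the arrangement fully server-authoritative: clients never mutate their local action list directly; they send the serialized group to the server, which stamps it with a global sequence number and broadcasts it to everyone, sender included. Below is a minimal Python sketch of that idea; the line-based JSON wire format, the class and method names, and the use of TCP (which sidesteps packet loss and reordering at the cost of latency) are all illustrative assumptions, not anything from the original post:

        import json
        import socket
        import threading

        class ActionServer:
            """Sketch of a server-authoritative action-group broadcaster."""

            def __init__(self, host="0.0.0.0", port=5000):
                self.seq = 0                      # authoritative global order
                self.clients = []
                self.lock = threading.Lock()
                self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                self.sock.bind((host, port))
                self.sock.listen()

            def serve_forever(self):
                while True:
                    conn, _addr = self.sock.accept()
                    with self.lock:
                        self.clients.append(conn)
                    threading.Thread(target=self._recv_loop, args=(conn,),
                                     daemon=True).start()

            def _recv_loop(self, conn):
                # One JSON-encoded action group per line, e.g.
                # ["Talk:Guybrush:Hello there!", "Walk:Guybrush:300,300"]
                for line in conn.makefile("r"):
                    group = json.loads(line)
                    with self.lock:
                        self.seq += 1
                        msg = json.dumps({"seq": self.seq,
                                          "group": group}) + "\n"
                        for c in self.clients:      # sender included, so all
                            c.sendall(msg.encode()) # peers see the same order

    Each client then applies a group only when its seq is exactly one more than the last group applied, buffering anything that arrives early, which guarantees every peer executes the same groups in the same order.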

    Read the article

  • Is there a way to hide text from descriptions in Google search results

    - by Linda H
    The first line of text on all of our client's product pages is "Download hi-res images", which of course isn't what we'd want in the description when people search for their products. Is there any way to hide this text/link so that Google and the others just ignore it and go on into the text description below? I suppose we could use a meta-description, but the client isn't very good at computers and it's such a small site it seems silly.
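
    For what it's worth, a per-page meta description is the standard way to suggest snippet text to search engines; nothing below is specific to this site, and the content string is a made-up example of what a product page's <head> could carry:

        <meta name="description" content="Product name: specifications, pricing and hi-res photos.">

    Google does not guarantee it will use this text, but it often prefers it over the first visible line of body text.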

    Read the article

  • Eclipse 3.7 Indigo available: GIT support, WindowBuilder, M2Eclipse, and 62 updated projects

    Eclipse 3.7 Indigo available: GIT support, WindowBuilder, M2Eclipse, and 62 updated projects. A new version of Eclipse is available, under the name Eclipse Indigo. Many additions have been made, the most significant of which are certainly: EGit 1.0 (a GIT client), WindowBuilder (a tool for building GUIs), M2E (the Maven client), and more than 62 projects that have been updated. To accompany this release, numerous free events (Eclipse DemoCamps Indigo) are being organized around the world. In France, three Eclipse DemoCamps Indigo will take place in...

    Read the article

  • OpenJDK default options to always use the server VM

    - by montrealmike
    I got a warning message:

        jvm uses the client vm, make sure to run java with the server vm for best performance by adding -server to the command line

    In fact, when I run java -version I get:

        OpenJDK Runtime Environment (IcedTea7 2.3.2) (7u7-2.3.2a-0ubuntu0.12.04.1)
        OpenJDK Client VM (build 23.2-b09, mixed mode, sharing)

    How does one go about changing OpenJDK's defaults to always start under the server VM?
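
    One approach commonly suggested for this (described here as an assumption about this particular build, not something verified against it): the java launcher picks the default VM from the jvm.cfg file shipped with the JRE - on 32-bit Ubuntu something like /usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/jvm.cfg - and the first entry listed wins, so moving -server to the top makes it the default:

        # jvm.cfg lists the known VMs; the first line is the default
        -server KNOWN
        -client KNOWN

    A less invasive alternative is a shell alias such as alias java='java -server', which leaves the JRE untouched.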

    Read the article

  • Does it make sense to implement OAuth for a 2 party system?

    - by nbv4
    I'm under the impression that OAuth is for authentication between three parties. Does it make sense to implement OAuth in a context where there is just a client and a server? We have a server and a client (HTML/JavaScript). Currently we authenticate via the normal "post credentials to server, get a cookie, use cookie to authenticate all subsequent requests" method. Would implementing OAuth be a benefit in this situation?
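
    For a concrete sense of what two-party OAuth looks like, OAuth 2.0's resource-owner password credentials grant is essentially the cookie flow with a token in place of the session. A rough Python sketch - the endpoints and field values are illustrative placeholders, not from any real API:

        import requests  # third-party HTTP library, assumed available

        # Step 1: exchange the user's credentials for a bearer token.
        resp = requests.post("https://api.example.com/oauth/token",
                             data={"grant_type": "password",
                                   "username": "alice",
                                   "password": "secret",
                                   "client_id": "my-web-app"})
        token = resp.json()["access_token"]

        # Step 2: authenticate subsequent requests with the token
        # instead of a cookie.
        me = requests.get("https://api.example.com/me",
                          headers={"Authorization": "Bearer " + token})
        print(me.json())

    The practical benefit over cookies is mostly uniformity - the same scheme works for browsers and non-browser clients alike - rather than extra security.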

    Read the article

  • Cloud Infrastructure has a new standard

    - by macoracle
    I have been working for more than two years now in the DMTF working group tasked with creating a cloud management standard. That work has culminated in today's release by the DMTF of the Cloud Infrastructure Management Interface (CIMI) version 1.0. CIMI is a single interface that a cloud consumer can use to manage their cloud infrastructure in multiple clouds. As CIMI is adopted by cloud vendors, you will no longer need to adapt client code to each of the proprietary interfaces from these multiple vendors. Unlike a de facto standard, where typically one vendor has change control over the interface and everyone else has to reverse engineer its inner workings, CIMI is a de jure standard under the change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements, and contributed APIs from multiple vendors. These vendors have products shipping today, and as a result CIMI has a strong foundation in real-world experience.

    What does CIMI allow? CIMI is both a model for the resources in the cloud (computing, storage, networking) and a RESTful protocol binding to HTTP. This means that to create a Machine (guest VM), for example, the client creates a "document" that represents the Machine resource and sends it to the server using HTTP. CIMI allows the resources to be encoded in either JavaScript Object Notation (JSON) or the eXtensible Markup Language (XML). CIMI provides a model for the resources that can be mapped to any existing cloud infrastructure offering on the market. There are some features in CIMI that may not be supported by every cloud, but CIMI also supports the discovery of which features are implemented. This means that you can still have a client that works across multiple clouds and is able to take full advantage of the features in each of them.

    Isn't it too early for a standard? A key feature of a successful standard is that it allows compatible extensions to occur within the core framework of the interface itself. CIMI's feature discovery (through metadata) is used to convey to the client that additional, possibly vendor-specific features have been implemented. As multiple vendors implement such features, they become candidates to be added to future versions of CIMI. Thus innovation can continue in the cloud space without being slowed down by a lowest-common-denominator type of specification. Since CIMI was developed in the open by dozens of stakeholders who are already implementing infrastructure clouds, I expect to see CIMI adopted by these same companies and others over the next year or two. Cloud customers who can see the benefit of this standard should start asking their cloud vendors to show a CIMI implementation in their roadmap. For more information on CIMI and the DMTF's other cloud efforts, go to: http://dmtf.org/cloud
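
    To make the "create a Machine by sending a document" idea concrete, here is a rough Python sketch of what such a request could look like. The endpoint paths, credentials, and template references are illustrative assumptions based on the general shape of CIMI's MachineCreate resource, not excerpts from the spec:

        import requests  # third-party HTTP library, assumed available

        cloud = "https://cloud.example.com/cimi"  # hypothetical CIMI provider

        machine_create = {
            "resourceURI": "http://schemas.dmtf.org/cimi/1/MachineCreate",
            "name": "demo-vm",
            "machineTemplate": {
                "machineConfig": {"href": cloud + "/machine_configs/small"},
                "machineImage": {"href": cloud + "/machine_images/linux"},
            },
        }

        # POST the JSON "document"; the server answers with the new
        # Machine resource, which the client can then query or delete.
        resp = requests.post(cloud + "/machines", json=machine_create,
                             auth=("user", "secret"))
        resp.raise_for_status()
        print(resp.json()["id"])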

    Read the article

  • Working out costs to implement WCAG 2.0 (AA) site

    - by Sixfoot Studio
    Hi, I've run our client's site through a WCAG 2.0 validator, which returned 415 tasks that need to be worked through to make the site WCAG 2.0 compliant. For the most part I can make a rough estimate of how long a task will take, but there are tasks I have never had to do before, which I am not sure how to cost. Does anyone have a rough guide on what to charge a client to convert their site to a compliant WCAG 2.0 (AA) site? Many thanks

    Read the article

  • TransportWithMessageCredential & Service Bus – Introduction

    - by Michael Stephenson
    Recently we have been working on a project using the Windows Azure Service Bus to expose line-of-business applications. One of the topics we discussed a lot was the security aspects of the solution. Most of the samples you see for Windows Azure Service Bus use a shared secret with the Access Control Service (ACS) to protect the service bus endpoint, but one of the problems we found with this scenario is that any claims resulting from credentials supplied by the client are not passed through to the service listening on the service bus endpoint. As an example, we had originally hoped to give two different clients their own shared secret key, with the issuer for each indicating which client it was. If the claims had flowed to the listening service, we could then have checked that a message sent by client one was of a type it is allowed to send. Unfortunately this claim isn't flowed to the listening service, so we were unable to implement this scenario.

    We had also seen samples that talk about changing the relayClientAuthenticationType attribute, which allows you to authenticate the client within the service itself rather than with ACS. While this was interesting, it wasn't exactly what we wanted: by removing the step where access to the relay endpoint is protected by authentication against ACS, anyone could send messages via the service bus to the on-premise listening service, which would then authenticate clients. In our scenario we certainly didn't want to allow clients to skip the ACS authentication step, because this could open up two attack opportunities. The first would allow an attacker to send messages through to our on-premise servers and potentially cause a denial-of-service situation. The second is the same kind of attack: by running lots of messages through the service bus, even messages that were then rejected, the attacker would cause us to incur per-message charges on our Windows Azure account.

    The correct way to implement our desired scenario is to combine one of the common options for authenticating against ACS (so the service bus endpoint cannot be accessed by an unauthenticated caller) with the normal WCF security features, using the TransportWithMessageCredential security option. Looking around, I could not find any guidance on how to implement this correctly, so on the back of setting this up I decided to write a couple of articles walking through some of the common scenarios you may be interested in. These are available at the following links:

    Walkthrough - Combining shared secret and username token
    Walkthrough - Combining shared secret and certificates

    Read the article

  • Google plans to integrate QuickOffice into Chrome OS; the tool is already available on Chromebook in a developer build

    Editing Quickoffice documents in Chrome will soon be possible: Google is porting its office tools to the browser thanks to Native Client. After integrating Quickoffice into Google Apps, the search giant is working on porting its suite of mobile office tools to Chrome OS and Chrome. The company has reportedly unveiled in recent days the new Chromebook Pixel, with a version of Chrome OS that includes part of the Quickoffice applications. The port of Quickoffice to Chrome was made possible by the use of Native Client. Native Client is a sandbox technology that allows applications written in C/C++ to run inside a browser...

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database and apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change:

        sp_cdc_change_job @job_type='cleanup', @retention=7200

    The value 7200 is the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
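
    The cleanup job's retention value lives in msdb.dbo.cdc_jobs, so one way to verify the change - sketched here in Python with pyodbc; the connection string and server name are placeholders - is simply to read it back:

        import pyodbc  # assumes a SQL Server ODBC driver is installed

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=msdb;Trusted_Connection=yes")

        # cdc_jobs holds one row per capture/cleanup job; retention is in minutes.
        row = conn.execute(
            "SELECT retention FROM msdb.dbo.cdc_jobs WHERE job_type = 'cleanup'"
        ).fetchone()
        print(row.retention, "minutes =", row.retention / 60 / 24, "days")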

    Read the article

  • GZipped Images - Is It Worth It?

    - by charlie
    Most image formats are already compressed. But in fact, if I take an image and compress (gzip) it, and then compare the compressed file to the uncompressed one, there is a difference in size, even if not such a dramatic one. The question is: is it worth gzipping images? The content size flushed down to the client's browser will be smaller, but there will be some client-side overhead when decompressing it. Please advise.
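
    A quick way to measure whether gzip buys anything for a particular image - a small Python sketch; the file path is a placeholder:

        import gzip

        # Compare the on-disk size of an already-compressed format
        # (JPEG, PNG, GIF) with its gzipped size.
        with open("photo.jpg", "rb") as f:  # placeholder path
            data = f.read()

        compressed = gzip.compress(data, compresslevel=9)
        saving = 100 * (1 - len(compressed) / len(data))
        print(f"{len(data)} -> {len(compressed)} bytes ({saving:.1f}% saved)")

    For JPEG or PNG the saving is typically a few percent at best, which is why web servers are usually configured to gzip text responses (HTML, CSS, JS) and pass image files through untouched.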

    Read the article

  • JavaOne 2011 - Moscow and Hyderabad Editions

    - by Cassandra Clark
    Connect with Java developers at JavaOne - JavaOne will be held in Moscow, April 12-13, 2011 and again in Hyderabad, May 10-11, 2011. Enjoy two days of technical content and hands-on learning focused on Java and next-generation development trends and technologies, including rich enterprise applications (REAs), service-oriented architecture (SOA), and the database.

    JavaOne Moscow tracks:
    - Java EE, Enterprise Computing, and the Cloud
    - Java SE, Client Side Technologies, and Rich User Experiences
    - Java ME, Mobile, and Embedded

    JavaOne Hyderabad tracks:
    - Core Java Platform
    - Java EE, Enterprise Computing, and the Cloud
    - Java SE, Client Side Technologies, and Rich User Experiences
    - Java ME, Mobile, and Embedded

    Register Now for JavaOne Moscow!
    Register Now for JavaOne Hyderabad!

    Read the article

  • Techniques to prevent non-official clients in network gaming?

    - by UpTheCreek
    In multi-player network games, what techniques exist to try to ensure that users are connecting with the official client application, and not some hacked client app? I realise there is probably no sure-fire way to do this, but rather I'm interested in techniques that can be employed to mitigate the problem. I'm especially interested in any techniques that can be used for web based games, but I imagine most can be applied generally. Thank you!
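
    One frequently cited mitigation - it raises the bar rather than solving the problem, since any secret shipped with a client can eventually be extracted - is a challenge-response handshake: the server sends a random nonce, and the client must return an HMAC of that nonce keyed with a secret baked into the official build. A minimal Python sketch (the secret and framing are placeholders, not from any particular game):

        import hashlib
        import hmac
        import os

        # Placeholder secret; a determined attacker can extract this
        # from the client binary, which is why this only raises the bar.
        SECRET = b"baked-into-the-official-client"

        def make_challenge():
            """Server side: issue a random nonce per login attempt."""
            return os.urandom(16)

        def answer_challenge(nonce):
            """Client side: prove possession of the build secret."""
            return hmac.new(SECRET, nonce, hashlib.sha256).hexdigest()

        def verify(nonce, answer):
            """Server side: constant-time comparison of the response."""
            expected = hmac.new(SECRET, nonce, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, answer)

        nonce = make_challenge()
        assert verify(nonce, answer_challenge(nonce))

    For web-based games, where the client source is fully visible, server-side sanity checking of every client message tends to matter far more than any client-identity scheme.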

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other 'public API' questions like this one: Should a website use its own public API?; second, sorry for my English. You can find the question summarized at the bottom.

    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be accessed through the public API. Because of this, I was thinking about making the whole website AJAX-driven. There would be parts of the API limited to my website (domain) only, like login and registration. There would be only an INTERFACE on the client side, which would use the public and private API to make that interface work. The website would be ONLY CLIENT SIDE - well, I mean, the website would only use AJAX to consume the API.

    How do I imagine this? The website would be like a mobile application: the application only sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g. login).

    My thoughts:

    Pros:
    - The whole website is built with JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope so).
    - Anyone can use the data of my website to make their own cool things (is this a con or a pro?).
    - The public API is always in use, so I can see if there are any errors.

    Cons:
    - Without JavaScript the website is unusable.
    - Bad actors can easily load the server by requesting too much data (say 10,000 requests per second), but this can be countered by rate limiting with some PHP code and logging (a sketch of the idea follows below).
    - Probably much more work.

    So the question in a few words: should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (E.g. Facebook - yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
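
    The rate-limiting idea from the cons list, sketched in Python for brevity (the post itself would do this in PHP; the window size and threshold are arbitrary placeholders):

        import time
        from collections import defaultdict, deque

        WINDOW = 60   # seconds
        LIMIT = 100   # requests allowed per window, per client

        hits = defaultdict(deque)

        def allow(client_ip):
            """Sliding-window limiter: reject requests beyond LIMIT per WINDOW."""
            now = time.time()
            q = hits[client_ip]
            while q and q[0] <= now - WINDOW:
                q.popleft()          # forget hits that left the window
            if len(q) >= LIMIT:
                return False
            q.append(now)
            return True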

    Read the article

  • Single API Architecture

    - by user1901686
    When people refer to an architecture that involves a single service API that all clients talk to (a client can be an iPad app, etc.), what is the "client" for the web app? Is it:

    A) the web browser itself - the entire app is written in HTML/CSS/JavaScript, AJAX calls to the service fetch data, and changes are made through JavaScript; or

    B) an MVC-like stack on a server, only instead of the controllers calling the model layer directly, they call the service API, which returns models that are used to render the traditional views; or

    C) something else?

    Read the article

  • google analytics statistics

    - by colmcq
    I am compiling a report for a client using google analytics. I have observed that the client has unusually good page view times (5 mins) and excellent bounce rates (<25%). I need to reference research data that validates my assertion that these figures are excellent compared to an industry standard (the industry is ecommerce and gaming). Can you direct me to any published research data that specifies normal bounce rates and page view times for this industry?

    Read the article

  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. To understand this question better, please read that question and look at the approach discussed.

    One of the people who answered that question said the approach is good, but - if I understood him correctly - that there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection. For example, instead of this:

        class SomeClass {
            // downright exposing the concrete collection
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection() { return someCollection; }
        }

    Or this:

        class SomeClass {
            // encapsulating the collection, but inflating the class' public interface
            Thing[] someCollection;
            // class functionality omitted
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection - in a different class, dedicated to this
        class SomeClass {
            CollectionWrapper someCollection;
            CollectionWrapper getCollection() { return someCollection; }
        }

        class CollectionWrapper {
            Thing[] someCollection;
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection; CollectionWrapper does this instead. For example, if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but client code stays the same. Also, CollectionWrapper can hide certain things from client code - for example, it can disallow mutation of the collection by not having the methods setThing and removeThing.

    This approach to decoupling client code from the concrete data structure seems IMHO pretty good. Is this approach common? What are its downfalls? Is it used in practice?

    Read the article

  • pxe boot fails with message: no DEFAULT or UI configuration directive found

    - by spockaroo
    I am trying to PXE-boot a machine (client), and in the process I am setting up a TFTP server that this machine can boot off. On the server, which runs Ubuntu 10.10, I have set up dhcp, dns, nfs, and tftp-hpa servers. All the servers/daemons start fine. I tested the tftp server by using a tftp client and downloading a file that the server directory hosts.

    My /etc/xinetd.d/tftp looks like this:

        service tftp
        {
            disable         = no
            socket_type     = dgram
            wait            = yes
            user            = nobody
            server          = /usr/sbin/in.tftpd
            server_args     = -v -s /var/lib/tftpboot
            only_from       = 10.1.0.0/24
            interface       = 10.1.0.1
        }

    My /etc/default/tftpd-hpa looks like this:

        RUN_DAEMON="yes"
        OPTIONS="-l -s /var/lib/tftpboot"
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"

    My /var/lib/tftpboot/ directory contains:

        initrd.img-2.6.35-25-generic-pae
        vmlinuz-2.6.35-25-generic-pae
        pxelinux.0
        pxelinux.cfg/default

    I did:

        sudo chmod 644 /var/lib/tftpboot/pxelinux.cfg/default
        chmod 755 /var/lib/tftpboot/initrd.img-2.6.35-25-generic-pae
        chmod 755 /var/lib/tftpboot/vmlinuz-2.6.35-25-generic-pae

    /var/lib/tftpboot/pxelinux.cfg/default has the following contents:

        SERIAL 0 19200 0
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    I copied /var/lib/tftpboot/pxelinux.0 from /usr/lib/syslinux/ after installing the package syslinux-common. Also, for completeness, /etc/dhcp3/dhcpd.conf contains the following lines (relevant to this interface):

        subnet 10.1.0.0 netmask 255.255.255.0 {
            range 10.1.0.100 10.1.0.240;
            option routers 10.1.0.1;
            option broadcast-address 10.1.0.255;
            option domain-name-servers 10.1.0.1;
            filename "pxelinux.0";
        }

    When I boot the client machine and watch the output over the serial port, I notice that the client requests an IP address from the server and gets it. Then I see TFTP being displayed - indicating that it is trying to connect to the TFTP server. This succeeds, and I see TFTP.|, which returns immediately, displaying the following message:

        PXELINUX 4.01 debian-20100714  Copyright (C) 1994-2010 H. Peter Anvin et al
        No DEFAULT or UI configuration directive found!
        boot:

    /var/log/syslog shows:

        Feb 20 15:24:05 ch in.tftpd[2821]: tftp: client does not accept options

    What option is it talking about in the syslog? I assume it is referring to OPTIONS or TFTP_OPTIONS, but what am I doing wrong?
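
    For what it's worth, the "No DEFAULT or UI configuration directive found!" message usually means exactly what it says: the config file defines a label but never tells PXELINUX which label to boot. Assuming that is the issue here, adding a DEFAULT directive naming the existing label should get past the boot: prompt:

        SERIAL 0 19200 0
        DEFAULT linux
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    The "client does not accept options" line in syslog is, in most setups, harmless: it just records that the PXE firmware declined TFTP option negotiation.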

    Read the article

  • dhcp-snooping option 82 drops valid dhcp requests on 2610 series Procurve switches

    - by kce
    We are slowly starting to implement dhcp-snooping on our HP ProCurve 2610 series switches, all running the R.11.72 firmware. I'm seeing some strange behavior where dhcp-request or dhcp-renew packets are dropped, when originating from "downstream" switches, due to "untrusted relay information from client". The full error:

        Received untrusted relay information from client <mac-address> on port <port-number>

    In more detail, we have a 48-port HP 2610 (Switch A) and a 24-port HP 2610 (Switch B). Switch B is "downstream" of Switch A by virtue of a DSL connection to one of Switch A's ports. The DHCP server is connected to Switch A. The relevant bits are as follows:

    Switch A:

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1 168
        interface 25
           name "Server"
           dhcp-snooping trust
           exit

    Switch B:

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1
        interface Trk1
           dhcp-snooping trust
           exit

    The switches are set to trust BOTH the port the authorized DHCP server is attached to and its IP address. This is all well and good for the clients attached to Switch A, but the clients attached to Switch B get denied due to the "untrusted relay information" error. This is odd for a few reasons: 1) dhcp-relay is not configured on either switch, and 2) the Layer-3 network here is flat, all one subnet. DHCP packets should not have a modified option 82 attribute. dhcp-relay does appear to be enabled by default, however:

        SWITCH A# show dhcp-relay

         DHCP Relay Agent         : Enabled
         Option 82                : Disabled
         Response validation      : Disabled
         Option 82 handle policy  : append
         Remote ID                : mac

         Client Requests     Server Responses
         Valid     Dropped   Valid     Dropped
         --------- --------- --------- ---------
         0         0         0         0

        SWITCH B# show dhcp-relay

         DHCP Relay Agent         : Enabled
         Option 82                : Disabled
         Response validation      : Disabled
         Option 82 handle policy  : append
         Remote ID                : mac

         Client Requests     Server Responses
         Valid     Dropped   Valid     Dropped
         --------- --------- --------- ---------
         40156     0         0         0

    And interestingly enough, the dhcp-relay agent seems very busy on Switch B - but why? As far as I can tell, there is no reason why DHCP requests need a relay with this topology. Furthermore, I can't tell why the upstream switch is dropping legitimate DHCP requests for untrusted relay information when the relay agent in question (on Switch B) isn't modifying the option 82 attributes anyway.

    Adding no dhcp-snooping option 82 on Switch A allows the DHCP traffic from Switch B to be approved by Switch A, by virtue of just turning off that feature. What are the repercussions of not validating option-82-modified DHCP traffic? If I disable option 82 on all my "upstream" switches, will they pass DHCP traffic from any downstream switch regardless of that traffic's legitimacy?

    This behavior is client-operating-system agnostic; I see it with both Windows and Linux clients. Our DHCP servers are either Windows Server 2003 or Windows Server 2008 R2 machines, and I see this behavior regardless of the DHCP server's operating system. Can anyone shed some light on what's happening here and give me some recommendations on how I should proceed with configuring the option 82 setting? I feel like I just haven't completely grokked dhcp-relaying and option 82 attributes.

    Read the article

  • Multiple subnets on isc-dhcp-server using ddns with bind9

    - by legioxi
    On my network I have two subnets:

        10.100.1.0/24 - Wired/wireless
        10.100.7.0/24 - VPN

    Both subnets are served by isc-dhcp-server running on a Debian VM. This same VM runs bind9 for my DNS. isc-dhcp-server is configured to use DDNS and update bind9 with hosts/IPs. Everything runs great until a device drops off the wired/wireless network and pops onto the VPN. When connecting on the VPN, a DHCP lease is handed out on the new subnet, but DDNS does not update bind9. Since the device has A/TXT/PTR records, it appears isc-dhcp-server won't switch them to the new IP. The logs show:

    Connect to wireless:

        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' A
        Nov 6 20:55:13 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': adding an RR at 'demo-iphone.internal.mydomain.com' TXT
        Nov 6 20:55:13 core-server dhcpd: DHCPACK on 10.100.1.160 to FF:FF:FF:FF:FF:FF (demo-iphone) via eth0
        Nov 6 20:55:13 core-server dhcpd: Added new forward map from demo-iphone.internal.mydomain.com to 10.100.1.160
        Nov 6 20:55:13 core-server dhcpd: Added reverse map from 160.49.21.172.in-addr.arpa. to demo-iphone.internal.mydomain.com

    Switch to VPN:

        Nov 6 20:56:34 core-server dhcpd: DHCPOFFER on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com: 'name not in use' prerequisite not satisfied (YXDOMAIN)
        Nov 6 20:56:34 core-server dhcpd: DHCPREQUEST for 10.100.7.101 (10.100.1.2) from BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server dhcpd: DHCPACK on 10.100.7.101 to BB:BB:BB:BB:BB:BB (demo-iphone) via 10.100.7.0
        Nov 6 20:56:34 core-server named[2417]: client 127.0.0.1#57697: updating zone 'internal.mydomain.com/IN': update unsuccessful: demo-iphone.internal.mydomain.com/TXT: 'RRset exists (value dependent)' prerequisite not satisfied (NXRRSET)
        Nov 6 20:56:34 core-server dhcpd: Forward map from demo-iphone.internal.mydomain.com to 10.100.7.101 FAILED: Has an address record but no DHCID, not mine.

    One thing to note is that the MAC of the device when connecting via VPN is the MAC of my Cisco ASA5512X, not the actual device; the ASA is relaying the DHCP request from the VPN client to the VM running isc-dhcp-server. Is there a way to get DDNS working in this scenario?

    Read the article

  • Tutorial for configuring OpenVPN [on hold]

    - by user2699451
    I have been through 10+ tutorials on setting up OpenVPN, and each tutorial gives a different problem... Does anyone know of a decent and helpful website/tutorial I could go to to get it set up? I have been battling through it for almost 2 months now. Yes, I have also bugged forums.openvpn, but I think I have "reached my post limit" with them. I have to configure it remotely via ssh.

    UPDATE: okay, I have been asked to be more clear on the topic. I followed this tutorial (as an example) - http://www.servermom.com/how-to-build-openvpn-server-on-centos-6-x/732/ - and had no issues setting up, etc., except that when I boot into Windows and run the OpenVPN GUI client, it connects and gives this error:

        WARNING: Bad encapsulated packet length from peer (21331), which must be > 0 and <= 1576 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition could also indicate a possible active attack on the TCP link -- [Attemping restart...]

    Here is my server config:

        port 1194                 #- port
        proto udp                 #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        reneg-sec 0
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /usr/lib64/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login  #- Co$
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf  #- Uncomment$
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 5 30
        comp-lzo
        persist-key
        persist-tun
        status 1194.log
        verb 3

    and my client config:

        client
        dev tun
        proto udp
        remote [server ip] 1194   # - Your server IP and OpenVPN Port
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1450
        persist-key
        persist-tun
        ca ca.crt
        auth-user-pass
        comp-lzo
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Thu Oct 31 11:51:29 2013 OpenVPN 2.0.9 Win32-MinGW [SSL] [LZO] built on Oct 1 2006
        Thu Oct 31 11:51:44 2013 IMPORTANT: OpenVPN's default port number is now 1194, based on an official port number assignment by IANA. OpenVPN 2.0-beta16 and earlier used 5000 as the default port.
        Thu Oct 31 11:51:44 2013 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Thu Oct 31 11:51:44 2013 LZO compression initialized
        Thu Oct 31 11:51:44 2013 Control Channel MTU parms [ L:1576 D:140 EF:40 EB:0 ET:0 EL:0 ]
        Thu Oct 31 11:51:44 2013 Data Channel MTU parms [ L:1576 D:1450 EF:44 EB:135 ET:32 EL:0 AF:3/1 ]
        Thu Oct 31 11:51:44 2013 Local Options hash (VER=V4): '2547efd2'
        Thu Oct 31 11:51:44 2013 Expected Remote Options hash (VER=V4): '77cf0943'
        Thu Oct 31 11:51:44 2013 Attempting to establish TCP connection with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCP connection established with x.x.x.x:1194
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link local: [undef]
        Thu Oct 31 11:51:44 2013 TCPv4_CLIENT link remote: x.x.x.x:1194

    after this it just hangs, nothing happens. So I don't know what I am doing wrong, but I am getting a bit impatient, and on each forum where I post this I get stupid/unrelated/unhelpful answers...

    Read the article

  • OpenVPN not sending traffic to internet?

    - by coleifer
    I've set up openvpn on my Pi and am running into a small issue. I can connect to the VPN server and ping it just fine, and I can also connect to other machines on my local network. However, when connected to the VPN, I am unable to reach the outside world (either by name lookup or IP). Here are the details.

    On the server, the tun0 interface:

        tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
              inet 10.8.0.1  netmask 255.255.255.255  destination 10.8.0.2
              unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
              RX packets 0  bytes 0 (0.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 0  bytes 0 (0.0 B)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    I can ping it just fine:

        # ping -c 3 10.8.0.1
        PING 10.8.0.1 (10.8.0.1) 56(84) bytes of data.
        64 bytes from 10.8.0.1: icmp_seq=1 ttl=64 time=0.159 ms
        64 bytes from 10.8.0.1: icmp_seq=2 ttl=64 time=0.155 ms
        64 bytes from 10.8.0.1: icmp_seq=3 ttl=64 time=0.156 ms

        --- 10.8.0.1 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2002ms

    Routing table:

        # ip route show
        default via 192.168.1.1 dev eth0  metric 204
        10.8.0.0/24 via 10.8.0.2 dev tun0
        10.8.0.2 dev tun0  proto kernel  scope link  src 10.8.0.1
        192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.6  metric 204

    I also have IP traffic forwarding enabled:

        net.ipv4.ip_forward = 1

    I do not have any custom iptables rules (that I'm aware of).

    On the client, I can connect to the VPN. Here is my tun0:

        tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
              inet 10.8.0.6  netmask 255.255.255.255  destination 10.8.0.5
              unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
              RX packets 0  bytes 0 (0.0 B)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 21  bytes 1527 (1.4 KiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    And on the client I can ping it:

        sudo ping -c 3 10.8.0.6
        PING 10.8.0.6 (10.8.0.6) 56(84) bytes of data.
        64 bytes from 10.8.0.6: icmp_seq=1 ttl=64 time=0.035 ms
        64 bytes from 10.8.0.6: icmp_seq=2 ttl=64 time=0.026 ms
        64 bytes from 10.8.0.6: icmp_seq=3 ttl=64 time=0.032 ms

        --- 10.8.0.6 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 1998ms
        rtt min/avg/max/mdev = 0.026/0.031/0.035/0.003 ms

    I can ssh from the client into another server on my LAN (192.168.1.x), but I cannot reach anything outside my LAN. Some of the server logs are at the bottom of this gist: https://gist.github.com/coleifer/6ef95c3008f130249933/edit

    I am frankly out of ideas! I don't think it's my client, because both my laptop and my phone (which has an OpenVPN client) exhibit the same behavior. I had OpenVPN installed on this Pi before using Debian and it worked, so I don't think it's my router - but of course anything is possible.
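
    A likely culprit, offered as an educated guess rather than a diagnosis: VPN clients can reach LAN hosts because those hosts route replies back through the Pi, but the internet gateway knows nothing about 10.8.0.0/24, so replies from the outside world never come back. The usual fix is to NAT the VPN subnet as it leaves the server - assuming eth0 is the Pi's outbound interface:

        # Masquerade VPN-subnet traffic behind the server's own address
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    If that rule fixes it, persist it with your distribution's usual mechanism (e.g. an iptables-save dump restored at boot).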

    Read the article

  • Using curl -s in *nix command line not working for some reason

    - by JM4
    I am trying to install composer (though to be honest I really have no idea how it fully works, and the documentation seems to be quite poor) on my MediaTemple DV machine, using their instructions (http://getcomposer.org/doc/00-intro.md). Trying to install globally using:

        $ curl -s https://getcomposer.org/installer | php

    My command line (again, using PuTTY and logged into my server as root) thinks for a second, then sets up for the next prompt. I run a simple ls -l to check for the file it should have downloaded - with no luck. Any idea what could be causing the issue? I have tested and do in fact have curl installed.

    UPDATE 1

    Based on the first answer, the verbose response is:

        $ curl -vs https://getcomposer.org/installer | php
        * About to connect() to getcomposer.org port 443
        *   Trying 37.59.4.156... connected
        * Connected to getcomposer.org (37.59.4.156) port 443
        * successfully set certificate verify locations:
        *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
            CApath: none
        * SSLv2, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: /C=CH/CN=dl.packagist.org/[email protected]
        *   start date: 2012-07-07 23:25:35 GMT
        *   expire date: 2013-07-10 02:55:12 GMT
        * SSL: certificate subject name 'dl.packagist.org' does not match target host name 'getcomposer.org'
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
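
    Incidentally, the verbose output already surfaces the failure: the -s flag was silently swallowing a certificate-name mismatch ('dl.packagist.org' vs 'getcomposer.org'), so curl aborted and piped nothing to php. Replacing -s with -sS keeps the progress meter off but still prints errors to stderr:

        # -S/--show-error reports failures even in silent mode
        $ curl -sS https://getcomposer.org/installer | php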

    Read the article

  • Looking for NTP Server Software for Windows

    - by Simon
    I'm looking for a, preferably free, NTP server for Windows Server 2003/2008. We have already tried the built-in Windows time server, but our tests showed that it is not very accurate; we see time differences of up to 500 ms. The max time difference we can allow for our application is ~100 ms.

    We have already used the Meinberg NTPd for Windows. It works great except for one big issue: if there is a network connection problem between the client and server, the NTP server ends up in a panic state and won't give the client a new time until we restart the NTP service. This is a big issue which has caused us some trouble. It was working fine for months until there was a network problem we didn't notice; we only noticed it after a week, when the time difference was already 30 sec on the clients. So please suggest some alternative NTP servers for Windows. I did Google, but I get a lot of unrelated search results.

    Edit: So far the ntpd Windows version was very accurate and I'd like to stick with it. The only problem is the "panic state" after a network disconnect. Maybe someone here knows what causes this and how to fix it.

    Also, I forgot to mention that we have a server/client setup like this:

        Server1 -- Server2 -- Server3 -- Client1 -- Client2 -- Client3

    So Server2 gets its time from Server1, Server3 gets its time from Server2, and the clients get their time from Server3. Also, there are clients connected directly to Server2. It is important that all servers and clients have the exact same time (within ~100 ms).

    Now there was a network problem with Server3 and its clients. The servers run the ntpd port for Windows, which acts as both NTP server and client. The clients have Dimension4 as NTP client. After the network problem, the error message in D4 was something like this (off the top of my head; I don't have the exact error message):

        Server response: The server is in a panic state (could not sync clock)

    I read through the ntpd docs, and the only mention of "panic" is when the time difference is 10000 seconds, which will cause the ntpd server to exit, but this was not the case. Also, there is a "-g" command-line switch to disable the panic exit, but it is already set by default. Any ideas what could cause the panic state and how to get rid of it next time?

    Read the article
