Search Results

Search found 361 results on 15 pages for 'inter'.

Page 3/15

  • Problem with write query

    - by phenevo
    Hi, I've got a collection of geo objects in a database. There are four tables: Countries, Regions, Provinces and Cities. Cities have, inter alia, a ProvinceCode; Provinces have, inter alia, a RegionCode; Regions have, inter alia, a CountryCode. And there is a fifth table, Descriptions, with the columns ObjectCode, ObjectType (country, region, province, city) and Description. How do I get, from the Descriptions table, all descriptions of objects that are in a given country?
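    One way to write it, sketched below as plain SQL. The column names CountryCode, RegionCode, ProvinceCode and CityCode are assumptions based on the description above, and @CountryCode stands for the country being filtered on; adjust both to the real schema.

        SELECT d.Description
        FROM Descriptions d
        WHERE (d.ObjectType = 'country'  AND d.ObjectCode = @CountryCode)
           OR (d.ObjectType = 'region'   AND d.ObjectCode IN
                 (SELECT r.RegionCode FROM Regions r WHERE r.CountryCode = @CountryCode))
           OR (d.ObjectType = 'province' AND d.ObjectCode IN
                 (SELECT p.ProvinceCode FROM Provinces p
                  JOIN Regions r ON p.RegionCode = r.RegionCode
                  WHERE r.CountryCode = @CountryCode))
           OR (d.ObjectType = 'city'     AND d.ObjectCode IN
                 (SELECT c.CityCode FROM Cities c
                  JOIN Provinces p ON c.ProvinceCode = p.ProvinceCode
                  JOIN Regions r   ON p.RegionCode   = r.RegionCode
                  WHERE r.CountryCode = @CountryCode));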

    Read the article

  • this pointer to base class constructor?

    - by Rolle
    I want to implement a derived class that should also implement an interface that has a function the base class can call. The following gives a warning, as it is not safe to pass a this pointer to the base class constructor:

        struct IInterface {
            virtual void FuncToCall() = 0;
        };

        struct Base {
            Base(IInterface* inter) { m_inter = inter; }
            void SomeFunc() { m_inter->FuncToCall(); }
            IInterface* m_inter;
        };

        struct Derived : Base, IInterface {
            Derived() : Base(this) {}
            void FuncToCall() {}
        };

    What is the best way around this? I need to supply the interface as an argument to the base constructor, as it is not always the derived class that is the interface; sometimes it may be a totally different class. I could add a function to the base class, SetInterface(IInterface* inter), but I would like to avoid that.
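    One common pattern, sketched below: passing this up to the base constructor is in practice fine as long as Base only stores the pointer and never calls through it until construction has finished, so the warning (MSVC's C4355, for example) can be treated as informational under this specific design. This is just an illustration of that rule, not the only possible solution:

        struct IInterface {
            virtual void FuncToCall() = 0;
            virtual ~IInterface() {}
        };

        struct Base {
            // Store only -- never call m_inter->... from inside this constructor,
            // because the Derived part (and its overrides) does not exist yet.
            explicit Base(IInterface* inter) : m_inter(inter) {}

            // Safe: by the time SomeFunc() runs, the Derived object is fully constructed.
            void SomeFunc() { m_inter->FuncToCall(); }

            IInterface* m_inter;
        };

        struct Derived : Base, IInterface {
            Derived() : Base(this) {}   // benign under the "store only" rule above
            void FuncToCall() override {}
        };

    If the base ever needs to call the interface from inside its own constructor, this pattern does not apply and a two-phase approach (construct, then attach the interface) is the safer route.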

    Read the article

  • Haproxy ACL for balance on URL request

    - by Elgreco08
    I'm using Ubuntu with HAProxy version 1.4.13. It is load balancing two subdomains:

        app1.domain.com
        app2.domain.com

    Now I want to be able to use ACLs to send requests to the right backends based on the requested URL. For example:

        http://app1.domain.com/path/games/index.php  should be sent to backend1
        http://app1.domain.com/path/photos/index.php should be sent to backend2
        http://app2.domain.com/path/mail/index.php   should be sent to backend3
        http://app2.domain.com/path/wazap/index.php  should be sent to backend4

    I used the following ACLs:

        frontend http-farm
            bind 0.0.0.0:80
            acl app1web hdr_beg(host) -i app1   # for http://app1.domain.com
            acl app2web hdr_beg(host) -i app2   # for http://app2.domain.com
            acl msg-url-1 url_reg ^\/path/games/.*
            acl msg-url-2 url_reg ^\/path/photos/.*
            acl msg-url-3 url_reg ^\/path/mail/.*
            acl msg-url-4 url_reg ^\/path/wazap/.*
            use_backend games if msg-url-1 app1web
            use_backend photos if msg-url-2 app2web
            use_backend mail if .....

        backend games
            option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app1.domain.com
            option forwardfor
            balance roundrobin
            server appsrv-1 192.168.1.10:80 check inter 2000 fall 3
            server appsrv-2 192.168.1.11:80 check inter 2000 fall 3

        backend photos
            option httpchk GET /alive.php HTTP/1.1\r\nHost:\ app2.domain.com
            option forwardfor
            balance roundrobin
            server appsrv-1 192.168.1.13:80 check inter 2000 fall 3
            server appsrv-2 192.168.1.14:80 check inter 2000 fall 3
        ....

    Since the paths mail, photos, etc. will be application pools on IIS, I want to monitor whether they are alive, and if a pool does not respond HAProxy should stop serving it. My problem is most likely the regular expression in the ACL:

        acl msg-url-4 url_reg ^\/path/wazap/.*

    What should I change in the ACL to make it work? Thanks for any hints.
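    A sketch of one way to simplify the matching, assuming the paths are exactly as listed above: path_beg matches on the path portion only and avoids the regex escaping issues, and each use_backend line ANDs the host ACL with the path ACL. Backend names are kept from the config above; wazap is assumed to be a fourth backend defined the same way as the others.

        frontend http-farm
            bind 0.0.0.0:80
            acl app1web    hdr_beg(host) -i app1
            acl app2web    hdr_beg(host) -i app2
            acl url-games  path_beg -i /path/games
            acl url-photos path_beg -i /path/photos
            acl url-mail   path_beg -i /path/mail
            acl url-wazap  path_beg -i /path/wazap
            use_backend games  if app1web url-games
            use_backend photos if app1web url-photos
            use_backend mail   if app2web url-mail
            use_backend wazap  if app2web url-wazap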

    Read the article

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names, and I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple server lines, and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5.

    I came up with this working haproxy.cfg. As you can see, I had to set a different hostname for each health check, because the health check ignores http-send-name-header Host. I would have preferred to use variables or some other method and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various other ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config so that traffic also flows by hostname. The following is the current setup, assuming:

        my Apache webserver is running at examplecom:8001
        my bunch of app servers are at 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep the same behaviour as above, but in addition, if the hostname is myspecialtopleveldomain<dot>com, traffic should flow to example<dot>com:8001. ~B
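    A sketch of the usual way to do this, splitting the listen section into a frontend with a host ACL plus two backends. The section names, the placeholder address for the Apache box, and the assumption that the HAProxy build in use supports ACLs are all illustrative:

        frontend www
            bind 0.0.0.0:80
            mode http
            capture cookie vgnvisitor= len 32
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apacheweb if is_special
            default_backend appservers

        backend apacheweb
            mode http
            server apache1 example.com:8001 check inter 2000 fall 3

        backend appservers
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3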

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy-pass calls to the HAProxy listeners fast_thin and slow_thin (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance over 6 Thin servers (127.0.0.1:3000 .. 3005). Static files like /blog are currently fine. The chain is: nginx on port 80 - HAProxy on 3100 and 3200 - Thin on 3000 .. 3005 - and then Rails. Here is /etc/nginx/nginx.conf:

        user nginx;
        worker_processes 2;
        pid /var/run/nginx.pid;
        events { worker_connections 1024; }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            include /etc/nginx/conf.d/*.conf;
        }

    then /etc/nginx/conf.d/default.conf:

        upstream fast_thin { server 127.0.0.1:3100; }
        upstream slow_thin { server 127.0.0.1:3200; }
        server {
            listen 80;
            server_name www.gitwatcher.com;
            rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
        }
        server {
            listen 80;
            server_name gitwatcher.com;
            access_log /var/www/gitwatcher/log/access.log;
            error_log /var/www/gitwatcher/log/error.log;
            root /var/www/gitwatcher/public;
            # index index.html;
            location /about { proxy_pass http://fast_thin; break; }
            location /trends { proxy_pass http://slow_thin; break; }
            location /categories { proxy_pass http://slow_thin; break; }
            location /signout { proxy_pass http://slow_thin; break; }
            location /auth/github { proxy_pass http://slow_thin; break; }
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
                if (-f $request_filename.html) { rewrite (.*) $1.html break; }
                if (!-f $request_filename) { proxy_pass http://slow_thin; break; }
            }
        }

    then the HAProxy config file /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            nbproc 1 # number of processing cores

        defaults
            log global
            retries 3
            maxconn 2000
            contimeout 5000
            mode http
            clitimeout 60000      # maximum inactivity time on the client side
            srvtimeout 30000      # maximum inactivity time on the server side
            timeout connect 4000  # maximum time to wait for a connection attempt to a server to succeed
            option httplog
            option dontlognull
            option redispatch
            option httpclose      # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
            option abortonclose   # enable early dropping of aborted requests from pending queue
            option httpchk        # enable HTTP protocol to check on servers health
            option forwardfor     # enable insert of X-Forwarded-For headers
            balance roundrobin    # each server is used in turns, according to assigned weight
            stats enable          # enable web-stats at /haproxy?stats
            stats auth haproxy:pr0xystats  # force HTTP Auth to view stats
            stats refresh 5s      # refresh rate of stats page

        listen rails_proxy 127.0.0.1:3100
            # - equal weights on all servers
            # - maxconn will queue requests at HAProxy if limit is reached
            # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
            # - check health every 20000 microseconds
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000

        listen slow_proxy 127.0.0.1:3200
            # cluster for slow requests, lower the queues, check less frequently
            server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

        ---
        chdir: /var/www/gitwatcher
        environment: production
        address: 0.0.0.0
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 100
        require: []
        wait: 30
        servers: 6
        daemonize: true

    If I look at the open listening ports, I get the following:

        root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
        nginx     834    root  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        nginx     835   nginx  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        nginx     837   nginx  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        haproxy  1908 haproxy  4u  IPv4  11699  0t0  TCP localhost:3100 (LISTEN)
        haproxy  1908 haproxy  6u  IPv4  11701  0t0  TCP localhost:3200 (LISTEN)

    and iptables -L gives me the following:

        root@fullness:/var/www/gitwatcher# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source      destination
        ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
        ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:22222
        ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:http
        ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:https
        ACCEPT     all  --  anywhere    anywhere
        DROP       all  --  anywhere    anywhere

        Chain FORWARD (policy ACCEPT)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination
        ACCEPT     all  --  anywhere    anywhere

    Any help?
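    One detail worth noting in the lsof output above: no thin process appears as listening on 3000-3005, and HAProxy answering 503 for every request is exactly what happens when all backend servers are marked down. A quick way to verify, sketched below with generic commands (paths and stats credentials taken from the configs above; adapt as needed):

        # are the Thin instances actually up and listening?
        sudo lsof -iTCP -sTCP:LISTEN -P | grep ':300[0-5]'

        # if nothing is listed, (re)start the cluster defined in the Thin config
        sudo thin restart -C /etc/thin/gitwatcher.yml

        # then ask HAProxy what it thinks of its backends (stats is enabled in the defaults)
        curl -u haproxy:pr0xystats 'http://127.0.0.1:3100/haproxy?stats'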

    Read the article

  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want it to redirect to /something/something on my server, but the URL should still show www.company.com. Is this possible in HAProxy?

        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000
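    A sketch of one approach for HAProxy versions of that era: reqrep/reqirep rewrite the request line before it is sent to the server, so the browser never sees the change (newer HAProxy releases express the same idea with http-request set-path). The regex below only rewrites requests whose path is exactly "/", and /something/something is the placeholder path from the question:

        backend new_marketing_server
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            # rewrite "GET / HTTP/1.x" to "GET /something/something HTTP/1.x"
            reqirep ^([^\ ]*)\ /\ (HTTP.*)$ \1\ /something/something\ \2
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000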

    Read the article

  • algorithm diagram

    - by tunl
    This is the max-searching algorithm diagram. So I wonder how I can draw a diagram for the recursion in the Towers of Hanoi program:

        package tunl;

        public class TowersApp {
            static int n = 3;

            public static void main(String[] args) {
                TowersApp.doTowers(3, 'A', 'B', 'C');
            }

            public static void doTowers(int n, char from, char inter, char to) {
                if (n == 1) {
                    System.out.println("disk 1 from " + from + " to " + to);
                } else {
                    doTowers(n - 1, from, to, inter);
                    System.out.println("disk " + n + " from " + from + " to " + to);
                    doTowers(n - 1, inter, from, to);
                }
            }
        }

    I can't draw it. Can anyone help me?
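    A sketch of the call tree for doTowers(3, 'A', 'B', 'C'): each node is one call (arguments shown as from, inter, to), its left child is the first recursive call, its right child the second, and the println fires between the two children.

        doTowers(3, A,B,C)              prints "disk 3 from A to C" between its two calls
            doTowers(2, A,C,B)          prints "disk 2 from A to B" between its two calls
                doTowers(1, A,B,C)      prints "disk 1 from A to C"
                doTowers(1, C,A,B)      prints "disk 1 from C to B"
            doTowers(2, B,A,C)          prints "disk 2 from B to C" between its two calls
                doTowers(1, B,C,A)      prints "disk 1 from B to A"
                doTowers(1, A,B,C)      prints "disk 1 from A to C"

    Reading the prints in execution order (left child, own print, right child) gives the program's output: disk 1 A to C, disk 2 A to B, disk 1 C to B, disk 3 A to C, disk 1 B to A, disk 2 B to C, disk 1 A to C.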

    Read the article

  • Pointers to Derived Class Objects Losing vfptr

    - by duckworthd
    To begin, I am trying to write a run-of-the-mill, simple ray tracer. In my ray tracer, I have multiple types of geometries in the world, all derived from a base class called "SceneObject". I've included its header here.

        /** Interface for all objects that will appear in a scene */
        class SceneObject {
        public:
            mat4 M, M_inv;
            Color c;

            SceneObject();
            ~SceneObject();

            /** The transformation matrix to be applied to all points of this object.
                Identity leaves the object in world frame. */
            void setMatrix(mat4 M);
            void setMatrix(MatrixStack mStack);
            void getMatrix(mat4& M);

            /** The color of the object */
            void setColor(Color c);
            void getColor(Color& c);

            /** Alter one portion of the color, leaving the rest as they were. */
            void setDiffuse(vec3 rgb);
            void setSpecular(vec3 rgb);
            void setEmission(vec3 rgb);
            void setAmbient(vec3 rgb);
            void setShininess(double s);

            /** Fills 'inter' with information regarding an intersection between this
                object and 'ray'. Ray should be in world frame. */
            virtual void intersect(Intersection& inter, Ray ray) = 0;

            /** Returns a copy of this SceneObject */
            virtual SceneObject* clone() = 0;

            /** Print information regarding this SceneObject for debugging */
            virtual void print() = 0;
        };

    As you can see, I've included a couple of virtual functions to be implemented elsewhere. In this case, I have only two derived classes -- Sphere and Triangle, both of which implement the missing member functions. Finally, I have a Parser class, which is full of static methods that do the actual "ray tracing" part. Here are a couple of snippets of the relevant portions:

        void Parser::trace(Camera cam, Scene scene, string outputFile, int maxDepth) {
            int width = cam.getNumXPixels();
            int height = cam.getNumYPixels();
            vector<vector<vec3>> colors;
            colors.clear();
            for (int i = 0; i < width; i++) {
                vector<vec3> ys;
                for (int j = 0; j < height; j++) {
                    Intersection intrsct;
                    Ray ray;
                    cam.getRay(ray, i, j);
                    vec3 color;
                    printf("Obtaining color for Ray[%d,%d]\n", i, j);
                    getColor(color, scene, ray, maxDepth);
                    ys.push_back(color);
                }
                colors.push_back(ys);
            }
            printImage(colors, width, height, outputFile);
        }

        void Parser::getColor(vec3& color, Scene scene, Ray ray, int numBounces) {
            Intersection inter;
            scene.intersect(inter, ray);
            if (inter.isIntersecting()) {
                Color c;
                inter.getColor(c);
                c.getAmbient(color);
            } else {
                color = vec3(0,0,0);
            }
        }

    Right now, I've forgone the true ray tracing part and instead simply return the color of the first object hit, if any. As you have no doubt noticed, the only way the program knows that a ray has intersected an object is through Scene::intersect(), which I also include:

        void Scene::intersect(Intersection& i, Ray r) {
            Intersection result;
            result.setDistance(numeric_limits<double>::infinity());
            result.setIsIntersecting(false);
            double oldDist;
            result.getDistance(oldDist);

            /* Cycle through all objects, making result the closest one */
            for (int ind = 0; ind < objects.size(); ind++) {
                SceneObject* thisObj = objects[ind];
                Intersection betterIntersect;
                thisObj->intersect(betterIntersect, r);
                double newDist;
                betterIntersect.getDistance(newDist);
                if (newDist < oldDist) {
                    result = betterIntersect;
                    oldDist = newDist;
                }
            }
            i = result;
        }

    Alright, now for the problem. I begin by creating a scene and filling it with objects outside of the Parser::trace() method. For some odd reason, the ray is cast for i = j = 0 and everything works wonderfully. However, by the time the second ray is cast, all of the objects stored in my Scene no longer recognize their vfptrs!
    I stepped through the code with a debugger and found that the vfptr information for all the objects is lost somewhere between the end of getColor() and the continuation of the loop. However, if I change the arguments of getColor() to use a Scene& instead of a Scene, then no loss occurs. What crazy voodoo is this?
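    A plausible explanation, sketched under the assumption (the Scene class itself is not shown above) that Scene stores raw SceneObject* pointers and deletes them in its destructor: Parser::trace() and getColor() take Scene by value, so each call makes a shallow copy; when the temporary copy is destroyed at the end of the call it deletes the very objects the original Scene still points to, and from then on the original holds dangling pointers whose vtable pointers the runtime has already scrubbed. Passing by reference avoids the copy, which matches the observed behaviour; the longer-term fix is to give Scene proper copy semantics (or store smart pointers).

        // Assumed shape of Scene, for illustration only.
        class Scene {
            std::vector<SceneObject*> objects;
        public:
            ~Scene() {                                   // owns and frees the objects
                for (size_t i = 0; i < objects.size(); ++i) delete objects[i];
            }
            // No user-defined copy constructor => shallow copy => two Scenes "own"
            // the same pointers after a pass-by-value, and the first destructor wins.
        };

        // Cheapest fix: never copy the scene.
        void Parser::getColor(vec3& color, Scene& scene, Ray ray, int numBounces);
        void Parser::trace(Camera cam, Scene& scene, string outputFile, int maxDepth);

        // Or make copying safe by deep-copying through the existing virtual clone():
        Scene::Scene(const Scene& other) {
            objects.reserve(other.objects.size());
            for (size_t i = 0; i < other.objects.size(); ++i)
                objects.push_back(other.objects[i]->clone());
        }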

    Read the article

  • Problem with accessing the DOM with jQuery

    - by Tristan
    Hello, I changed the tree of my JSON-P output, and now I cannot access my object anymore. Here's my output:

        jsonp1271634374310(
            {"Inter-Medias": {"name":"Inter-Medias","idGSP":"14","average":"80","services":"8.86"} }
        );

    And here's my jQuery script:

        success: function(data, textStatus, XMLHttpRequest){
            widget = data.name;
            widget += data.average ;
            ....

    I know one level is missing, but if I try data.Inter-Medias.name or data.name.name, it still doesn't work. Any idea, please? I will also have cases with multiple objects and want to display all of them; how do I do that?

        for (i=0; i < data.length; i++)

    Does that look right? Bonus question: I understand that function(data, textStatus, XMLHttpRequest) is triggered on success, and that data is what the script received in response to its request, but I have no idea what textStatus or XMLHttpRequest are here, or why. Thank you.
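    A short sketch of the two pieces, based on the output shown above: the top-level key "Inter-Medias" contains a hyphen, so dot notation cannot reach it and bracket notation is needed; and because the response is a plain object rather than an array, it has no .length, so iterating over several objects is done over its keys (here with jQuery's $.each):

        success: function(data, textStatus, jqXHR) {
            // one known key: bracket notation, because of the hyphen
            var widget = data["Inter-Medias"].name;
            widget += data["Inter-Medias"].average;

            // several top-level objects: iterate the keys
            $.each(data, function(key, obj) {
                console.log(key, obj.name, obj.average);
            });
        }

    The last two callback arguments are simply what jQuery passes to every success handler: textStatus is a string describing the result (e.g. "success") and the third argument is the XHR object for the request; they can be ignored if not needed.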

    Read the article

  • Azure Blobs - ArgumentNullException when calling UploadFile()

    - by Ariel
    I'm getting the following exception when trying to upload a file with the following code:

        string encodedUrl = "videos/Sample.mp4";
        CloudBlockBlob encodedVideoBlob = blobClient.GetBlockBlobReference(encodedUrl);
        Log(string.Format("Got blob reference for {0}", encodedUrl), EventLogEntryType.Information);
        encodedVideoBlob.Properties.ContentType = contentType;
        encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);

    I see the "Got blob reference" message, so I assume the reference resolves correctly.

        Void Run()
        C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs (40)
        System.ArgumentNullException: Value cannot be null.
        Parameter name: value
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFile(String fileName, BlobRequestOptions options)
           at EncoderWorkerRole.WorkerRole.ProcessJobOutput(IJob job, String videoBlobToEncodeUrl) in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 144
           at EncoderWorkerRole.WorkerRole.Run() in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 40

    Interestingly, I'm running that same snippet from an on-premises server, i.e. outside of Azure, and it works correctly. Ideas welcome, thanks!

    Read the article

  • New Endpoint options that enable additional application patterns

    - by kaleidoscope
    Two communication-related capabilities, (a) inter-role communication and (b) external endpoints on worker roles, enable new application patterns in Windows Azure-hosted services.

    Inter-role communication - A common application pattern enabled by this is client-server, where the server could be an application such as a database or a memory cache.

    External endpoints on worker roles - A common application type enabled by this is a self-hosted, Internet-exposed service, such as a custom application server.

    For further details click on the following link: http://blogs.msdn.com/windowsazure/archive/2009/11/24/new-endpoint-options-enable-additional-application-patterns.aspx

    Tinu, O

    Read the article

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        *** Warning: inter-library dependencies are not known to be supported.
        *** All declared inter-library dependencies are being dropped.
        *** Warning: libtool could not satisfy all declared inter-library
        *** dependencies of module libphp5.  Therefore, libtool will create
        *** a static module, that should work as long as the dlopening
        *** application is linked with the -dlopen flag.
        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!

    Read the article

  • openvpn WARNING: No server certificate verification method has been enabled

    - by tmedtcom
    I tried to install openvpn on debian squeez (server) and connect from my fedora 17 as (client). Here is my configuration: server configuration ###cat server.conf # Serveur TCP ** proto tcp** port 1194 dev tun # Cles et certificats ca /etc/openvpn/easy-rsa/keys/ca.crt cert /etc/openvpn/easy-rsa/keys/server.crt key /etc/openvpn/easy-rsa/keys/server.key dh /etc/openvpn/easy-rsa/keys/dh1024.pem # Reseau #Adresse virtuel du reseau vpn server 192.170.70.0 255.255.255.0 #Cette ligne ajoute sur le client la route du reseau vers le serveur push "route 192.168.1.0 255.255.255.0" #Creer une route du server vers l'interface tun. #route 192.170.70.0 255.255.255.0 # Securite keepalive 10 120 #type d'encryptage des données **cipher AES-128-CBC** #activation de la compression comp-lzo #nombre maximum de clients autorisés max-clients 10 #pas d'utilisateur et groupe particuliers pour l'utilisation du VPN user nobody group nogroup #pour rendre la connexion persistante persist-key persist-tun #Log d'etat d'OpenVPN status /var/log/openvpn-status.log #logs openvpnlog /var/log/openvpn.log log-append /var/log/openvpn.log #niveau de verbosité verb 5 ###cat client.conf # Client client dev tun [COLOR="Red"]proto tcp-client[/COLOR] remote <my server wan IP> 1194 resolv-retry infinite **cipher AES-128-CBC** # Cles ca ca.crt cert client.crt key client.key # Securite nobind persist-key persist-tun comp-lzo verb 3 Message from the host client (fedora 17) in the log file / var / log / messages: Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> Starting VPN service 'openvpn'... Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' started (org.freedesktop.NetworkManager.openvpn), PID 7470 Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' appeared; activating connections Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN plugin state changed: starting (3) Dec 6 21:56:01 GlobalTIC NetworkManager[691]: <info> VPN connection 'Connexion VPN 1' (Connect) reply received. Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: OpenVPN 2.2.2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] built on Sep 5 2012 Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]:[COLOR="Red"][U][B] WARNING: No server certificate verification method has been enabled.[/B][/U][/COLOR] See http://openvpn.net/howto.html#mitm for more info. 
Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]:[COLOR="Red"] WARNING: file '/home/login/client/client.key' is group or others accessible[/COLOR] Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link local: [undef] Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link remote: [COLOR="Red"]<my server wan IP>[/COLOR]:1194 Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: [COLOR="Red"]read UDPv4 [ECONNREFUSED]: Connection refused (code=111)[/COLOR] Dec 6 21:56:03 GlobalTIC nm-openvpn[7472]: [COLOR="Red"]read UDPv4[/COLOR] [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:07 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:15 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:31 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:41 GlobalTIC NetworkManager[691]: <warn> VPN connection 'Connexion VPN 1' (IP Conf[/CODE] ifconfig on server host(debian): ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:16:21:ac inet addr:192.168.1.6 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe16:21ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9059 errors:0 dropped:0 overruns:0 frame:0 TX packets:5660 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:919427 (897.8 KiB) TX bytes:1273891 (1.2 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.170.70.1 P-t-P:192.170.70.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ifconfig on the client host (fedora 17) as0t0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.0.1 netmask 255.255.252.0 destination 5.5.0.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.4.1 netmask 255.255.252.0 destination 5.5.4.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t2: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.8.1 netmask 255.255.252.0 destination 5.5.8.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t3: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.12.1 netmask 255.255.252.0 destination 5.5.12.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 **p255p1**: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 
fe80::21d:baff:fe20:b7e6 prefixlen 64 scopeid 0x20<link> ether 00:1d:ba:20:b7:e6 txqueuelen 1000 (Ethernet) RX packets 4842070 bytes 3579798184 (3.3 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3996158 bytes 2436442882 (2.2 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device interrupt 16 p255p1 is label for eth0 interface and on the server : root@hoteserver:/etc/openvpn# tree . +-- client ¦** +-- ca.crt ¦** +-- client.conf ¦** +-- client.crt ¦** +-- client.csr ¦** +-- client.key ¦** +-- client.ovpn ¦* ¦** +-- easy-rsa ¦** +-- build-ca ¦** +-- build-dh ¦** +-- build-inter ¦** +-- build-key ¦** +-- build-key-pass ¦** +-- build-key-pkcs12 ¦** +-- build-key-server ¦** +-- build-req ¦** +-- build-req-pass ¦** +-- clean-all ¦** +-- inherit-inter ¦** +-- keys ¦** ¦** +-- 01.pem ¦** ¦** +-- 02.pem ¦** ¦** +-- ca.crt ¦** ¦** +-- ca.key ¦** ¦** +-- client.crt ¦** ¦** +-- client.csr ¦** ¦** +-- client.key ¦** ¦** +-- dh1024.pem ¦** ¦** +-- index.txt ¦** ¦** +-- index.txt.attr ¦** ¦** +-- index.txt.attr.old ¦** ¦** +-- index.txt.old ¦** ¦** +-- serial ¦** ¦** +-- serial.old ¦** ¦** +-- server.crt ¦** ¦** +-- server.csr ¦** ¦** +-- server.key ¦** +-- list-crl ¦** +-- Makefile ¦** +-- openssl-0.9.6.cnf.gz ¦** +-- openssl.cnf ¦** +-- pkitool ¦** +-- README.gz ¦** +-- revoke-full ¦** +-- sign-req ¦** +-- vars ¦** +-- whichopensslcnf +-- openvpn.log +-- openvpn-status.log +-- server.conf +-- update-resolv-conf on the client: [login@hoteclient openvpn]$ tree . |-- easy-rsa | |-- 1.0 | | |-- build-ca | | |-- build-dh | | |-- build-inter | | |-- build-key | | |-- build-key-pass | | |-- build-key-pkcs12 | | |-- build-key-server | | |-- build-req | | |-- build-req-pass | | |-- clean-all | | |-- list-crl | | |-- make-crl | | |-- openssl.cnf | | |-- README | | |-- revoke-crt | | |-- revoke-full | | |-- sign-req | | `-- vars | `-- 2.0 | |-- build-ca | |-- build-dh | |-- build-inter | |-- build-key | |-- build-key-pass | |-- build-key-pkcs12 | |-- build-key-server | |-- build-req | |-- build-req-pass | |-- clean-all | |-- inherit-inter | |-- keys [error opening dir] | |-- list-crl | |-- Makefile | |-- openssl-0.9.6.cnf | |-- openssl-0.9.8.cnf | |-- openssl-1.0.0.cnf | |-- pkitool | |-- README | |-- revoke-full | |-- sign-req | |-- vars | `-- whichopensslcnf |-- keys -> ./easy-rsa/2.0/keys/ `-- server.conf the problem source is cipher AES-128-CBC ,proto tcp-client or UDP or the interface p255p1 on fedora17 or file authentification ta.key is not found ????

    Read the article

  • Clustered Graphs Visualization Techniques

    - by jameszhao00
    I need to visualize a relatively large graph (6K nodes, 8K edges) that has the following properties:

    - Distinct clusters, approximately 50-100 nodes per cluster, with moderate interconnectivity at the cluster level
    - Minimal interconnectivity between clusters (5-10 inter-cluster edges per cluster)

    Let global edge overlap be the edge overlap caused by directly visualizing a graph with Clusters = {A, B, C, D, E} and Edges = {the pentagram of those clusters, which is non-planar, by the way, and will definitely generate edge overlap if you draw it out directly}. Let local edge overlap be the same, but where {A, B, C, D, E} are just nodes.

    I need to visualize graphs with the above in a way that satisfies the following requirements:

    - No global edge overlap (i.e. edge overlap caused by inter-cluster edges is not okay)
    - Local edge overlap within a cluster is fine

    Anyone have thoughts on how to best visualize a graph with the requirements above? One solution I've come up with to deal with the global edge overlap is to make sure that a cluster A can only have at most one direct edge to another cluster (B) during visualization. Any additional inter-cluster edges between clusters A - C, A - D, ... are disconnected, and additional nodes/edges A - A_C, C - C_A, A - A_D, D - D_A, ... are created instead. Anyone have any thoughts?
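    One practical starting point, sketched below with Graphviz (an assumption on my part, not a tool named in the question): nodes can be grouped into cluster subgraphs, and with compound=true an edge can be clipped to the cluster borders via lhead/ltail, so each pair of clusters can be drawn with a single representative connecting edge, much like the A - A_C style rewiring described above.

        digraph g {
            compound=true;                       // allow edges clipped at cluster borders
            subgraph cluster_A { label="A"; a1; a2; a3; a1 -> a2 -> a3; }
            subgraph cluster_B { label="B"; b1; b2; b1 -> b2; }
            // one representative edge per cluster pair instead of many node-to-node edges
            a1 -> b1 [ltail=cluster_A, lhead=cluster_B];
        }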

    Read the article

  • haproxy - pass original / remote ip in tcp mode

    - by Vito Botta
    I've got HAProxy set up with keepalived for load balancing and IP failover of a Percona cluster, and since it works great I'd like to use the same load balancing / failover for another service/daemon. I've configured HAProxy this way:

        listen my_service 0.0.0.0:4567
            mode tcp
            balance leastconn
            option tcpka
            contimeout 500000
            clitimeout 500000
            srvtimeout 500000
            server host1 xxx.xxx.xxx.xx1:4567 check port 4567 inter 5000 rise 3 fall 3
            server host2 xxx.xxx.xxx.xx2:4567 check port 4567 inter 5000 rise 3 fall 3

    The load balancing works fine, but the service sees the IP of the load balancer instead of the actual IPs of the clients. In http mode it's quite easy to have HAProxy pass along the remote IP, but how do I do it in tcp mode? This is critical due to the nature of the service I need to load balance. Thanks! Vito
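    In plain TCP mode there is no HTTP header to carry the client address, so the usual options are the PROXY protocol (if the backend daemon can be taught to parse it) or transparent proxying at the kernel level. A minimal sketch of the PROXY-protocol route, assuming HAProxy 1.5 or newer and a backend service that understands the protocol; everything else is copied from the config above:

        listen my_service 0.0.0.0:4567
            mode tcp
            balance leastconn
            option tcpka
            server host1 xxx.xxx.xxx.xx1:4567 check port 4567 inter 5000 rise 3 fall 3 send-proxy
            server host2 xxx.xxx.xxx.xx2:4567 check port 4567 inter 5000 rise 3 fall 3 send-proxy

    If the backend cannot speak the PROXY protocol, the alternative is transparent binding (usesrc clientip), which requires TPROXY support in the kernel and an HAProxy build with it enabled.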

    Read the article

  • How to set up VLAN network

    - by Paddington
    I'm changing my network from having every device on one flat network to using VLANs. My problem is that we already have a lot of devices on this network (192.168.20.0/24). From theory, I read that each VLAN has to be a different subnet, and that I then need to configure virtual interfaces on my Cisco router to cater for inter-VLAN routing.

    1) How can I segment this network with minimum downtime for the devices already on the network?

    2) Can I just create VLANs and leave all of them in the same layer 3 network so that they can get out of the network (I am not too concerned about inter-VLAN routing), or do I have to create subnets, which means reconfiguring the existing devices (something I do not want)?
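    For reference, a minimal router-on-a-stick sketch of what the inter-VLAN routing piece usually looks like on a Cisco router; the interface name, the VLAN IDs and the 192.168.30.0/24 subnet are made up for illustration, with the existing 192.168.20.0/24 kept as one of the VLANs:

        interface GigabitEthernet0/0.20
         encapsulation dot1Q 20
         ip address 192.168.20.1 255.255.255.0
        !
        interface GigabitEthernet0/0.30
         encapsulation dot1Q 30
         ip address 192.168.30.1 255.255.255.0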

    Read the article

  • First Step Towards Rapid Enterprise Application Deployment

    - by Antoinette O'Sullivan
    Take Oracle VM Server for x86 training as a first step towards deploying enterprise applications rapidly. You have a choice between the following instructor-led training:

    Oracle VM with Oracle VM Server for x86 1-day Seminar. Take this course from your own desk on one of the 300 events on the schedule. This seminar tells you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to sustain the deployment of highly configurable, inter-connected virtual machines.

    Oracle VM Administration: Oracle VM Server for x86 3-day hands-on course. This course teaches you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86. You learn how to deploy and manage highly configurable, inter-connected virtual machines. The course teaches you how to install and configure Oracle VM Server for x86, as well as the details of network and storage configuration, pool and repository creation, and virtual machine management. Take this course from your own desk on one of the 450 events on the schedule. You can also take this course in an Oracle classroom at one of the following events:

        Location                    Date              Delivery Language
        Istanbul, Turkey            12 November 2012  Turkish
        Wellington, New Zealand     10 December 2012  English
        Roseville, United States    19 November 2012  English
        Warsaw, Poland              17 October 2012   Polish
        Paris, France               17 October 2012   French
        Paris, France               21 November 2012  French
        Dusseldorf, Germany         5 November 2012   German

    For more information on Oracle's Virtualization courses see http://oracle.com/education/vm

    Read the article

  • Java security alert released (please update Java)

    - by OTN-J Master
    On August 30 (US time), Oracle released a security alert for the Java vulnerability CVE-2012-4681. The post below is based on an article by Java Community Lead Tori Wieldt (source: the Java Source blog, author: Tori Wieldt).

    The release addresses CVE-2012-4681 together with three related issues: CVE-2012-1682, CVE-2012-3136 and CVE-2012-0547. Users are encouraged to update to the latest Java SE release as soon as possible.

    The updated releases can be downloaded here:
    http://www.oracle.com/technetwork/jp/java/javase/downloads/index.html

    End users can get the latest JRE from http://java.com, and Windows users can also update through Java Automatic Update.

    References:
    Oracle Security Alert for CVE-2012-4681
    Change to Java SE 7 and Java SE 6 Update Release Numbers

    Read the article

  • Confused by Perl grep function

    - by titaniumdecoy
    I don't understand the last line of this function from Programming Perl, 3rd edition.

    Here's how you might write a function that does a kind of set intersection by returning a list of keys occurring in all the hashes passed to it:

        @common = inter( \%foo, \%bar, \%joe );
        sub inter {
            my %seen;
            for my $href (@_) {
                while (my $k = each %$href) {
                    $seen{$k}++;
                }
            }
            return grep { $seen{$_} == @_ } keys %seen;
        }

    I understand that %seen is a hash which maps each key to the number of times it was encountered in any of the hashes provided to the function.
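    A short sketch of what that grep is doing: inside the numeric comparison, @_ is evaluated in scalar context, so it yields the number of hash references passed to inter(); grep then keeps only the keys whose count equals that number, i.e. the keys present in every hash. For example:

        my %foo = (a => 1, b => 1, c => 1);
        my %bar = (a => 1, c => 1);
        my %joe = (a => 1, c => 1, d => 1);

        # After the loops: %seen = (a => 3, b => 1, c => 3, d => 1)
        # Three hash refs were passed, so @_ is 3 in scalar context;
        # grep keeps the keys with $seen{$_} == 3:
        my @common = inter( \%foo, \%bar, \%joe );   # the keys a and c, in whatever order keys %seen returns them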

    Read the article
