Search Results

Search found 303 results on 13 pages for 'topology'.

Page 9/13 | < Previous Page | 5 6 7 8 9 10 11 12 13  | Next Page >

  • Redirecting or routing all traffic to OpenVPN on a Mac OS X client

    - by sdr56p
    I have configured an OpenVPN (2.2.1) server on an Ubuntu virtual machine in the Amazon Elastic Compute Cloud. The server is up and running. I have installed OpenVPN (2.2.1) on a Mac OS X (10.8.2) client and I am using the openvpn2 binary to connect (as opposed to other clients like Tunnelblick or Viscosity). I can connect with the client and successfully ping or ssh the server through the tunnel. However, I can't redirect all Internet traffic through the VPN even if I use the push "redirect-gateway def1 bypass-dhcp" option in the server.conf configuration. When I connect to the server with these settings, I get a successful connection, but then an endless series of error messages: "write UDPv4: No route to host (code=65)". Traffic routing seems to be compromised because I am not able to access anything anymore, not even the OpenVPN server (by pinging 10.8.0.1, for instance). This is beyond me. I am finding little help on the web and don't know what to try next. I don't think it is a problem of forwarding the traffic on the server since, first, I have also taken care of that and, second, I can't even ping the VPN server locally through the tunnel (or ping anything at all, for that matter). Thank you for your help.

    Here is the server.conf file:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert ec2-server.crt
        key ec2-server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    And the client.conf file:

        client
        dev tun
        proto udp
        remote servername.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert Toto5.crt
        key Toto5.key
        ns-cert-type server
        comp-lzo
        verb 3

    Here is the connection log with the error messages:

        $ sudo openvpn2 --config client.conf
        Wed Mar 13 22:58:22 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:22 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:22 2013 LZO compression initialized
        Wed Mar 13 22:58:22 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:22 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:22 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:22 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:22 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:22 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:22 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:22 2013 TLS: Initial packet from 54.234.43.171:1194, sid=ffbaf343 d0c1a266
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:22 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:23 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:58:25 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:58:25 2013 PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:58:25 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:58:25 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:58:25 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:58:25 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:25 2013 Initialization Sequence Completed
        ^CWed Mar 13 22:58:30 2013 event_wait : Interrupted system call (code=4)
        Wed Mar 13 22:58:30 2013 TCP/UDP: Closing socket
        Wed Mar 13 22:58:30 2013 /sbin/route delete -net 10.8.0.0 10.8.0.5 255.255.255.0
        delete net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:30 2013 Closing TUN/TAP interface
        Wed Mar 13 22:58:30 2013 SIGINT[hard,] received, process exiting

        toto5:ttntec2 Dominic$ sudo openvpn2 --config client.conf --remote ec2-54-234-43-171.compute-1.amazonaws.com
        Wed Mar 13 22:58:57 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:57 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:57 2013 LZO compression initialized
        Wed Mar 13 22:58:57 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:57 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:57 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:57 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:57 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:57 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:57 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:57 2013 TLS: Initial packet from 54.234.43.171:1194, sid=a0d75468 ec26de14
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:58 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:59:00 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:59:00 2013 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:59:00 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:59:00 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:59:00 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 54.234.43.171 0.0.0.0 255.255.255.255
        add net 54.234.43.171: gateway 0.0.0.0
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 0.0.0.0 10.8.0.5 128.0.0.0
        add net 0.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 128.0.0.0 10.8.0.5 128.0.0.0
        add net 128.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 Initialization Sequence Completed
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        ...

    The routing table after a connection WITHOUT the push redirect-gateway (traffic is not redirected to the VPN and everything works fine; I can ping or ssh the OpenVPN server and access all other Internet resources through my default gateway):

        Destination         Gateway             Flags    Refs  Use   Netif  Expire
        default             user148-1.wireless  UGSc     50    0     en1
        10.8/24             10.8.0.5            UGSc     2     7     tun0
        10.8.0.5            10.8.0.6            UH       3     2     tun0
        127                 localhost           UCS      0     0     lo0
        localhost           localhost           UH       6     6692  lo0
        client.openvpn.net  client.openvpn.net  UH       3     18    lo0
        142.1.148/22        link#5              UCS      2     0     en1
        user148-1.wireless  0:90:b:27:10:71     UHLWIir  50    0     en1    76
        user150-173.wirele  localhost           UHS      0     0     lo0
        142.1.151.255       ff:ff:ff:ff:ff:ff   UHLWbI   0     2     en1
        169.254             link#5              UCS      1     0     en1
        169.254.255.255     0:90:b:27:10:71     UHLSWi   0     0     en1    71

    The routing table after a connection with the push redirect-gateway option enabled as in the server.conf file above (all Internet traffic should be redirected into the VPN tunnel, but nothing works; I can't access any Internet resources at all):

        Destination         Gateway             Flags    Refs  Use   Netif  Expire
        0/1                 10.8.0.5            UGSc     1     0     tun0
        default             user148-1.wireless  UGSc     7     0     en1
        10.8/24             10.8.0.5            UGSc     0     0     tun0
        10.8.0.5            10.8.0.6            UHr      6     0     tun0
        54.234.43.171/32    0.0.0.0             UGSc     1     0     en1
        127                 localhost           UCS      0     0     lo0
        localhost           localhost           UH       3     6698  lo0
        client.openvpn.net  client.openvpn.net  UH       0     27    lo0
        128.0/1             10.8.0.5            UGSc     2     0     tun0
        142.1.148/22        link#5              UCS      1     0     en1
        user148-1.wireless  0:90:b:27:10:71     UHLWIir  1     0     en1    833
        user150-173.wirele  localhost           UHS      0     0     lo0
        169.254             link#5              UCS      1     0     en1
        169.254.255.255     0:90:b:27:10:71     UHLSW    0     0     en1
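    It may help to see what "redirect-gateway def1" is supposed to produce. The def1 trick avoids touching the existing default route: it adds 0.0.0.0/1 and 128.0.0.0/1 (which together cover all of IPv4 and, being more specific, win over the /0 default), plus a /32 host route so the encrypted UDP packets themselves still leave via the physical interface. In the second routing table above, that /32 route to the VPN server ended up with gateway 0.0.0.0 rather than the wireless gateway, which would fit the "write UDPv4: No route to host" errors. A minimal sketch of the lookup logic (standard library only; the addresses are taken from the logs above):

        import ipaddress

        # The two half-routes added by "redirect-gateway def1". Each /1 is more
        # specific than the /0 default, so they win without deleting it.
        half_a = ipaddress.ip_network("0.0.0.0/1")
        half_b = ipaddress.ip_network("128.0.0.0/1")

        vpn_server = ipaddress.ip_address("54.234.43.171")
        vpn_gateway = ipaddress.ip_address("10.8.0.5")

        def next_hop(dst, host_route_gateway):
            # Longest prefix wins, as in the kernel's route lookup.
            if dst == vpn_server:
                return host_route_gateway     # the /32 route added by def1
            if dst in half_a or dst in half_b:
                return vpn_gateway            # one of the two /1 halves -> tunnel
            return "original default gateway" # never reached for IPv4

        print(next_hop(ipaddress.ip_address("8.8.8.8"), "0.0.0.0"))  # 10.8.0.5 (tunnel)
        print(next_hop(vpn_server, "0.0.0.0"))   # 0.0.0.0 -- an unusable next hop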

    Read the article

  • Determining physical location of data on a disc

    - by Synetech
    Does anybody know of a way to find out where, physically, on a CD or DVD a given piece of data would be located? I am trying to watch a DVD at the moment, and am about half-way through, but it keeps dying at a specific spot in the film, presumably because of a scratch. I have a repair kit, but I don’t know where to focus my repair because there are several scuffs and scratches on the disc and I have no way of knowing which one is causing the issue. Obviously, cleaning all of them is inadvisable because not only does it waste the consumable materials in the kit, but not all of them are a problem, and by working them, some may become unreadable.

    Moreover, just because I am half-way through the movie does not mean that it would be half-way from the hub to the edge, for several reasons:

    - Discs hold more data towards the outer edge than the inner edge (circles are more mathematically complicated than rectangles)
    - The disc is not completely filled up (and even if it were, the movie itself wouldn’t be using it all; there are extras and such)
    - Because in this particular case it is a commercial DVD, it is also dual-layer, which further complicates manual determination

    As such, I am trying to find a program that can let me identify a file (or part thereof), cluster, etc. and show me a picture of where on the CD/DVD it would be located. That way, I can look at the disc and fix any scratches that correspond to that distance from the hub. For example, such an image might indicate where on a disc a couple of files or a range of clusters would be located, so by looking for anomalies in those areas (rotating as necessary), the correct one can be identified. I’m sure it can be done, since at least one form of copy protection (DPM) uses it, and DVD-lab Pro includes a “DVD Topology” feature to do this.
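    Because the spiral is written at roughly constant linear density, the radius of a given sector can be estimated from the fraction of the disc's data that precedes it: the area consumed grows linearly with sectors, so the radius grows as a square root. A back-of-envelope sketch (the 24 mm / 58 mm program-area radii are nominal single-layer values and are assumptions; dual layers complicate this exactly as noted above):

        import math

        def sector_radius_mm(sector, total_sectors, r_inner=24.0, r_outer=58.0):
            # Constant linear density: swept area is proportional to sectors
            # read, so radius grows as a square root, not linearly.
            frac = sector / total_sectors
            return math.sqrt(r_inner ** 2 + frac * (r_outer ** 2 - r_inner ** 2))

        print(round(sector_radius_mm(50, 100), 1))  # half the data sits at ~44.4 mm
        # ...whereas the radial midpoint of the 24-58 mm band would be 41.0 mm.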

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. (I used a simple, and technically incorrect, diagram in a previous question to illustrate what I mean.) We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom on buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:

        Quad-Core Intel Xeon X3323 2.5GHz (2x3MB Cache) 1333MHz FSB
        16GB DDR2 667MHz

    But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs?

        2x Dual-Core AMD Opteron 2218 2.6GHz (2MB Cache) 1000MHz HT
        16GB DDR2 667MHz

    Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often because they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange from our source SBS 2003 server, I'm getting these warnings:

        Event 5020: "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]"
        Event 5006: "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]", for the Public folder, First storage group and Recovery storage group.

    I followed the TechNet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions. If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails give an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so there isn't really anything lost. Can I just remove these folders and let them be created anew?

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these onto one Drobo Thunderbolt drive array attached to a Mac mini server (running 10.9 Server) using Time Machine Server. My question is: will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network rather than on Ethernet (usually connected via 802.11n until people upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit Ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac mini server is connected directly to the top-level switch.

    Update: Failing information from someone who has done this in practice, I suppose my question is really around how switches work. If three or four people are backing up simultaneously, and two other (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?
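    As a back-of-envelope check on the saturation worry, the arithmetic can be sketched out; all the numbers below (especially the per-client hourly delta) are assumptions meant to be replaced with real measurements:

        GIGABIT_MIB_S = 1000 / 8 / 1.048576   # ~119 MiB/s raw gigabit capacity
        CLIENTS = 20
        HOURLY_DELTA_MIB = 500                # assumed average change per client
        WINDOW_S = 3600                       # Time Machine runs hourly

        # Smoothed demand if changes trickle in evenly across the hour:
        avg = CLIENTS * HOURLY_DELTA_MIB / WINDOW_S
        print(f"smoothed demand: {avg:.1f} MiB/s of {GIGABIT_MIB_S:.0f} MiB/s")

        # Worst case: several clients stream into the server port at once.
        for n in (2, 4, 8):
            print(f"{n} clients at once -> ~{GIGABIT_MIB_S / n:.0f} MiB/s each")

    On the switch question: a non-blocking gigabit switch forwards each port pair independently, so two users exchanging a file contend with the backup streams only where their paths share a link, i.e. the server's single port and the uplinks between the desk-cluster switches and the top-level switch.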

    Read the article

  • Troubleshoot port forwarding. Could it be ISP blocking incoming connections?

    - by Gravy
    Had a new Axis IP camera delivered yesterday. Plugged it into a Cisco E2400 wireless router but I'm having problems. Example topology:

        WAN IP:       10.10.10.10 (example)
        Cisco router: 192.168.1.1
        Axis camera:  192.168.1.10:80

    Port forwarding rules set up on the router:

        External Port: 999
        Internal Port: 80
        Protocol:      TCP & UDP
        Device IP:     192.168.1.10:80
        Enabled:       True

    Trying to connect from within the LAN to 192.168.1.1:80 in a browser: works properly. Trying to connect from within the LAN to 10.10.10.10:999 in a browser: works properly. Trying to connect from outside the LAN (e.g. via 3G or another ISP) to 10.10.10.10:999 in a browser: doesn't work. I get the following errors from different machines/browsers:

        Safari could not open the page because the server stopped responding (iOS)
        The server at xx.xx.xx.xx is taking too long to respond. (Firefox)

    This problem is not just for the Axis camera; I am also having similar problems connecting to my NAS drive. After using a web-based port scanning tool, it appears as though port 999 is closed. Not certain why, when I have set up port forwarding within the router. Any troubleshooting suggestions to help me determine whether the problem is with my Cisco settings/firewall or whether it could be my ISP blocking incoming connection requests? Many thanks
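    One way to narrow down "router firewall vs. ISP filtering" is to watch how an external connection attempt fails: a plain TCP connect that times out means the packets were silently dropped somewhere along the way, while an immediate refusal means the router answered but nothing accepted the forward. A quick sketch (run from a machine outside the LAN; the address and port are the example values above):

        import socket

        def check_port(host, port, timeout=5):
            """Try a plain TCP connect; distinguishes 'refused' from 'filtered'."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return "open (something accepted the connection)"
            except socket.timeout:
                return "filtered (no answer at all: firewall or ISP drop)"
            except ConnectionRefusedError:
                return "closed (router replied, but nothing is forwarding)"

        print(check_port("10.10.10.10", 999))  # the example WAN IP and port above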

    Read the article

  • processing of Group Policy failed only on 2008 Servers and Name Resolution failure on the current domain controller

    - by Ken Wolfrom
    Spent the last 3 months doing an upgrade from a 2003 domain to a 2008R2 domain. Our last DC was rebuilt (5 total) and brought back online. After it was put online, some 2008 and 2008R2 servers (10 now) started getting these errors in the event logs:

        Description: The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
        a) Name Resolution failure on the current domain controller.
        b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive (\directory\shared) on an affected server, they get this error: "There are currently no logon servers available to service the logon request." This is only affecting the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK, and logins work: we are able to log in with domain\user at the console and via RDP. We can log onto an affected machine, get to \\domainname\sysvol, and see the GPOs. We have checked the replication topology of the domain and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, removed it from the domain, and waited 24 hours, but the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior. Bottom line: we have some servers where the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine: HTTP, TCP calls, SQL and Oracle DBs connect and process. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers in a 2008R2 domain.
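    Since cause (a) in the event text is name resolution, one scripted check is to verify the SRV records that logon and GPO processing depend on, as seen from an affected server. A sketch using the third-party dnspython package (the domain name is a placeholder; substitute the real AD DNS name):

        import dns.resolver  # third-party: pip install dnspython (>= 2.0 for resolve())

        DOMAIN = "domain.local"   # placeholder

        # Domain controllers advertise themselves via SRV records; the
        # "no logon servers available" error often traces back to these.
        for name in (f"_ldap._tcp.{DOMAIN}", f"_kerberos._tcp.{DOMAIN}"):
            try:
                for rec in dns.resolver.resolve(name, "SRV"):
                    print(name, "->", rec.target, rec.port)
            except Exception as exc:
                print(name, "FAILED:", exc)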

    Read the article

  • Network outage caused by SMC8013WG Cable Modem/Router ?

    - by mkocubinski
    At work, we have a basic Class C network. The gateway/router is an SMC8013WG (stock Comcast commercial cable modem), plus a simple unmanaged switch (HP ProCurve 1400 24G). The SMC8013WG is our default gateway as well as our DHCP server. Periodically, I'd say almost every other day, the entire network will just stop responding. I won't be able to ping/see the gateway, any computers on our local network, or anything on the Internet. The only way to fix this is to unplug the Comcast cable modem, wait, and plug it back in. This unfailingly fixes the problem. But this doesn't make much sense to me: shouldn't the network still be fine locally, since everything is plugged into the switch anyway? Why would resetting the router fix this? Can anyone suggest anything to check in order to narrow this problem down? Just to be clear, here is the basic topology:

        { Internet } -- (12.345.67.89) Comcast Cable Modem (192.168.1.1) -- Switch -- 192.168.1.2-254

    P.S. Our IT guy is in about 3 hours a day every other week or so, so... we're kind of on our own most of the time.

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little daunting (there is a lot there), but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

    Entry to the Platform

        Object        Finder                                      SDK
        odiInstance   odiInstance (groovy variable for console)   OdiInstance

    Topology Objects

        Object                               Finder                               SDK
        Technology                           IOdiTechnologyFinder                 OdiTechnology
        Context                              IOdiContextFinder                    OdiContext
        Logical Schema                       IOdiLogicalSchemaFinder              OdiLogicalSchema
        Data Server                          IOdiDataServerFinder                 OdiDataServer
        Physical Schema                      IOdiPhysicalSchemaFinder             OdiPhysicalSchema
        Logical Schema to Physical Mapping   IOdiContextualSchemaMappingFinder    OdiContextualSchemaMapping
        Logical Agent                        IOdiLogicalAgentFinder               OdiLogicalAgent
        Physical Agent                       IOdiPhysicalAgentFinder              OdiPhysicalAgent
        Logical Agent to Physical Mapping    IOdiContextualAgentMappingFinder     OdiContextualAgentMapping
        Master Repository                    IOdiMasterRepositoryInfoFinder       OdiMasterRepositoryInfo
        Work Repository                      IOdiWorkRepositoryInfoFinder         OdiWorkRepositoryInfo

    Project Objects

        Object          Finder                     SDK
        Project         IOdiProjectFinder          OdiProject
        Folder          IOdiFolderFinder           OdiFolder
        Interface       IOdiInterfaceFinder        OdiInterface
        Package         IOdiPackageFinder          OdiPackage
        Procedure       IOdiUserProcedureFinder    OdiUserProcedure
        User Function   IOdiUserFunctionFinder     OdiUserFunction
        Variable        IOdiVariableFinder         OdiVariable
        Sequence        IOdiSequenceFinder         OdiSequence
        KM              IOdiKMFinder               OdiKM

    Load Plans and Scenarios

        Object                          Finder                     SDK
        Load Plan                       IOdiLoadPlanFinder         OdiLoadPlan
        Load Plan and Scenario Folder   IOdiScenarioFolderFinder   OdiScenarioFolder

    Model Objects

        Object      Finder                 SDK
        Model       IOdiModelFinder        OdiModel
        Sub Model   IOdiSubModel           OdiSubModel
        DataStore   IOdiDataStoreFinder    OdiDataStore
        Column      IOdiColumnFinder       OdiColumn
        Key         IOdiKeyFinder          OdiKey
        Condition   IOdiConditionFinder    OdiCondition

    Operator Objects

        Object           Finder                    SDK
        Session Folder   IOdiSessionFolderFinder   OdiSessionFolder
        Session          IOdiSessionFinder         OdiSession
        Schedule                                   OdiSchedule

    How to Create an Object? Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        project = new OdiProject("Project For Demo", "PROJECT_DEMO")
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Update an Object? This update example uses the methods on the OdiProject object to change the name of the project created above; it is then persisted.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.domain.project.finder.IOdiProjectFinder;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
        project = prjFinder.findByCode("PROJECT_DEMO");
        project.setName("A Demo Project");
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Delete an Object? Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

        import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
        import oracle.odi.domain.runtime.session.OdiSession;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
        sessc = sessFinder.findAll();
        sessItr = sessc.iterator()
        while (sessItr.hasNext()) {
          sess = (OdiSession) sessItr.next()
          odiInstance.getTransactionalEntityManager().remove(sess)
        }
        tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!

    Read the article

  • MySql Connector/NET 6.7.4 GA has been released

    - by fernando
    MySQL Connector/Net 6.7.4, a new version of the all-managed .NET driver for MySQL, has been released. This is the GA release and is feature complete. It is recommended for production environments and is appropriate for use with MySQL server versions 5.0-5.7. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site).

    The 6.7 version of MySQL Connector/Net brings the following new features:

    - WinRT Connector.
    - Load Balancing support.
    - Entity Framework 5.0 support.
    - Memcached support for the InnoDB Memcached plugin.
    - This version also splits the product in two: from now on, starting with version 6.7, Connector/NET will include only the former Connector/NET ADO.NET driver, Entity Framework and ASP.NET providers (core libraries of MySql.Data, MySql.Data.Entity & MySql.Web), while all the former product's Visual Studio integration (design support, IntelliSense, debugger) is available as part of the MySQL Windows Installer under the name "MySQL for Visual Studio".

    WinRT Connector
    Now you can write MySQL data access apps in the Windows Runtime (aka Store Apps) using the familiar API of Connector/NET for .NET.

    Load Balancing Support
    Now you can set up a Replication or Cluster configuration in the backend, and Connector/NET will balance the load of queries among all the servers making up the backend topology.

    Entity Framework 5.0
    Connector/NET is now compatible with EF 5, including special features of EF 5 like spatial types.

    Memcached
    Just set up the InnoDB memcached plugin and use Connector/NET's new APIs to establish a client to a MySQL 5.6 server's memcached daemon.

    Bug fixes included in this release:

    - Fix for Entity Framework when it inserts data having Identity columns (Oracle bug #16494585).
    - Fix for Connector/NET cannot read data from a MySQL table using UTF-16/UTF-32 (MySQL bug #69169, Oracle bug #16776818).
    - Fix for malformed query in Entity Framework when eager loading due to multiple projections (MySQL bug #67183, Oracle bug #16872852).
    - Fix for database objects with 'dbo' prefix when using automatic migrations in Entity Framework 5.0 (Oracle bug #16909439).
    - Fix for bug IIS application pool reset worker process causes website to crash (Oracle bug #16909237, MySQL bug #67665).
    - Fix for bug Error in LINQ to Entities query when using Distinct().Count() (MySQL bug #68513, Oracle bug #16950146).
    - Fix for occasionally returning no data when the socket connection is slow, interrupted or delayed (MySQL bug #69039, Oracle bug #16950212).
    - Fix for ConstraintException when filling a datatable (MySQL bug #65065, Oracle bug #16952323).
    - Fix for Data Provider is not found after uninstalling MySQL for Visual Studio (Oracle bug #16973456).
    - Fix for nested SQL generated for LINQ to Entities query with Take and Order by (MySQL bug #65723, Oracle bug #16973939).

    The documentation is available at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html

    Enjoy and thanks for the support!

    -- Fernando Gonzalez Sanchez | Software Engineer | Oracle
    MySQL Windows Experience Team, Connector/NET
    Guadalajara | Jalisco | Mexico

    Read the article

  • How to move complete SharePoint Server 2007 from one box to another

    - by DipeshBhanani
    It was my first onsite client assignment on SharePoint. The client had a one-server production environment and wanted to upgrade the topology to a completely new SharePoint farm of three servers. So the task was to move the whole MOSS 2007 stack to the new server environment without impacting data. The last three scary words, "... without impacting data ...", were actually putting pressure on my head. Moreover, the SSP had to be moved as well, because additional information had been added for users apart from the AD import.

    I thought I only had to do a backup and restore; it appeared pretty easy at first thought. Just because of those scary words, I checked the internet for guidance related to this scenario, and I couldn't find anything except the general guidance on moving servers on the Microsoft TechNet site. I promised myself I would start blogging with this post if I was successful in this task. Well, it took me a long time to write this, but I finally made it. I hope it will be useful to everyone looking into SharePoint server movement.

    Before beginning the restoration, make sure that there is no difference in the SharePoint versions at the source and destination servers. Also check whether the state of the SharePoint installation at the time of backup and restore is the same (e.g. SharePoint-related service packs and patches, if any).

    The main tasks of the server movement are as follows:

    1. Backup all the databases
    2. Install and configure SharePoint in the new environment
    3. Deploy all solutions (WSP files) globally to the destination server, to install the features attached to the solutions
    4. Install all the custom features
    5. Deploy/copy custom pages/files which were later added to the "12 hive" folder
    6. Restore the SSP
    7. Restore My Site
    8. Restore the other web applications

    Tasks 3 to 5 are for making sure that we have configured the environment well enough for the web applications to be restored successfully. The main and most complex task was restoring the SSP. I started restoring the SSP through Central Admin. After a while, the restoration status was updated to "unsuccessful". "Damn it, what went wrong?" I thought, looking at the error detail down the page. I can't remember the error message, but I corrected it and restored again.

    Actually, once you fail restoring the SSP, until and unless you clean up all the related stuff properly, your restoration will fail again and again. I wanted to find the actual reason, so I cleaned, restored, cleaned, restored... I tried almost 5-6 times and finally succeeded. I realized how pleasant it is to see the word "Successful" on the screen. Without wasting more of your reading time, let me write out all the detailed steps for restoring the SSP:

    1. Delete the SSP with the following STSADM command:
       stsadm -o deletessp -title <SSP name> -deletedatabases -force
       e.g.: stsadm -o deletessp -title SharedServices1 -deletedatabases -force
    2. Check and delete the web application associated with the SSP, if it exists, and remove its link.
    3. Check and remove the "Alternate Access Mapping" associated with the SSP, if it exists.
    4. Check and delete the IIS site as well as the application pool associated with the SSP, if they exist.
    5. Stop the following services:
       - Office SharePoint Server Search
       - Windows SharePoint Services Search
       - Windows SharePoint Services Help Search
    6. Delete all the databases associated/related to the SSP from SQL Server.
    7. Reset IIS.
    8. Start the three services above again.
    9. Restore the new SSP.

    After the SSP restoration, everything else completed very smoothly without any more issues. I made a few modifications to the sites for the change of server name and, finally, the new environment was ready.

    Read the article

  • Enterprise Integration: Can Companies Afford It?

    - by Ralph Wheaton
    Each year, my company holds a global sales conference where employees and partners from around the world come together to collaborate, share knowledge and ideas, and learn about future plans. As a member of the professional services division, several of us had been asked to make a presentation, an elevator pitch in 3 minutes or less, that relates to a success we have worked on or directly relates to our tag (that is, our primary technology focus). Mine happens to be Enterprise Integration as it relates to Business Intelligence. I found it rather difficult to present that pitch in a short amount of time and had to pare it down. At any rate, in just a little over 3 minutes, this is the presentation I submitted. Here is a link to the full presentation video in WMV format.

    Many companies today subscribe to a buy-versus-build mentality in an attempt to drive down costs and improve time to implementation. Sometimes this makes sense, especially as it relates to specialized software or software that performs a small number of tasks extremely well. However, if not carefully considered or planned out, this oftentimes leads to multiple disparate systems with silos of data or multiple versions of the same data. For instance, client data (contact information, addresses, phone numbers, opportunities, sales) stored in your CRM system may not play well with Accounts Receivables. Employee data may be stored across multiple systems such as HR, Time Entry and Payroll. Other data (such as member data) may not originate internally, but be provided by multiple outside sources in multiple formats. And to top it all off, some data may have to be manually entered into multiple systems to keep it all synchronized. When left to grow out of control like this, overall performance is lacking, stability is questionable and maintenance is frequent and costly. Worse yet, in many cases, this topology, this hodgepodge of data, creates a reporting nightmare. Decision makers are forced to try to put together pieces of the puzzle, attempting to find the information they need, wading through multiple systems to find what they think is the single version of the truth. More often than not, they find they are missing pieces, pieces that may be crucial to growing the business rather than closing the business.

    An enterprise integration solution tackles this by standardizing how data is mastered and moved across applications. Master data owners are defined to establish single sources of data (such as the CRM system owning client data). Other systems subscribe to the master data, and changes are replicated to subscribers as they are made. This can be one-way (no changes are allowed on the subscriber systems) or bi-directional. But at all times, the master data owner is current, or up to date. And all data, whether internal or external, uses the same processes and methods to move from one place to another, leveraging the same validations, lookups and transformations enterprise-wide, eliminating inconsistencies and siloed data. Once implemented, an enterprise integration solution improves performance and stability by reducing the number of moving parts and eliminating inconsistent data. Overall maintenance costs are mitigated by reducing touch points, or the number of places that require modification when a business rule is changed or another data element is added. Most importantly, however, decision makers can now easily extract and piece together the information they need to grow their business, improve customer satisfaction and so on.
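    As a toy illustration of the master-data-owner pattern just described (entirely hypothetical names, no specific MDM product implied):

        class MasterDataOwner:
            """The single source of truth for one data domain (e.g. the CRM
            owns client data); changes flow one way to subscribing systems."""

            def __init__(self, domain):
                self.domain, self.records, self.subscribers = domain, {}, []

            def subscribe(self, system):
                self.subscribers.append(system)

            def update(self, key, value):
                self.records[key] = value
                for system in self.subscribers:    # replicate as changes are made
                    system.replicate(self.domain, key, value)

        class Subscriber:
            def __init__(self, name):
                self.name, self.copy = name, {}

            def replicate(self, domain, key, value):
                self.copy[(domain, key)] = value   # read-only copy; no local edits

        crm = MasterDataOwner("client")
        ar = Subscriber("accounts-receivable")
        crm.subscribe(ar)
        crm.update("ACME", {"phone": "555-0100"})
        print(ar.copy)   # AR now sees the same client record the CRM owns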
    So, in implementing an enterprise integration solution, companies can position themselves for the future, allowing for an easy transition to data marts, data warehousing and, ultimately, business intelligence. Along this path, companies can achieve growth in size, intelligence and complexity. Truly, the question is not whether companies can afford to implement an enterprise integration solution, but whether they can afford not to.

    Ralph Wheaton
    Microsoft Certified Technology Specialist
    Microsoft Certified Professional Developer
    Microsoft VTS-P BizTalk, .Net

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-30

    - by Bob Rhubart
    Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware | Manish Palaparthy
    Manish Palaparthy examines some of Oracle's mobile applications, and takes a look at the underlying technology.

    Master Data Management: A Foundation for Big Data Analysis | Manouj Tahiliani
    "Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data." — Manouj Tahiliani

    Architect Day: Boston - Agenda Update
    Here's the latest updated information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. OTN Architect Day: Boston is being held on Wednesday September 12, 2012, 8:00 a.m. – 5:00 p.m., at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803.

    Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira
    The seamless integration between Oracle Coherence and Oracle WebLogic Server "provides a comprehensive environment to develop applications without the complexity of extra Java code to manage cache as a dependency," explains Ricardo Ferreira, "since Oracle provides a DI (Dependency Injection) mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache." Ricardo shows you how to configure ActiveCache in WebLogic and your Java EE application.

    Cloud Infrastructure has a new standard from the DMTF
    "Unlike a de facto standard where typically one vendor has change control over the interface, and everyone else has to reverse engineer the inner workings of it, [Cloud Infrastructure Management Interface (CIMI)] is a de jure standard that is under change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience."

    Oracle GoldenGate 11g Release Launch Webcast - September 12
    The new release of Oracle GoldenGate 11g is now available for major databases and platforms. Register for this webcast and live Q&A with product experts to learn about the solution's new features. September 12, 2012, 8:00 AM and 10:00 AM PT. Speakers: Doug Reid (Director, Product Management, Oracle GoldenGate), Irem Radzik (Director, Product Marketing, Oracle Data Integration Products)

    Thought for the Day
    "[When] asking skilled architects ... what they do when confronted with highly complex problems ... [they] would most likely answer, 'Just use Common Sense.' [A] better expression than 'common sense' is 'contextual sense', a knowledge of what is reasonable within a given context. Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem ..." — Eberhardt Rechtin (January 16, 1926 - April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posts will probably be focused on OGG. One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. (I will delve into topology and deployment in a later blog.)

    The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP-type applications.

    If your database has these features, whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution, then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user whose data was overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user. Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and more foolproof. And I'll cover these in my next blog.
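    For illustration only, here is the decision logic of the two routines sketched in plain Python rather than OGG parameter syntax; the column names are made up:

        import datetime as dt

        def resolve_host_based(existing, incoming, trusted_host="SystemA"):
            """Host-based: rows originating from the trusted system always win."""
            return incoming if incoming["origin"] == trusted_host else existing

        def resolve_timestamp_based(existing, incoming):
            """Timestamp-based: the newer row wins. Requires every table to
            carry a last-updated column maintained on insert and update."""
            return incoming if incoming["updated_at"] > existing["updated_at"] else existing

        old = {"origin": "SystemB", "updated_at": dt.datetime(2013, 1, 1), "city": "Oslo"}
        new = {"origin": "SystemA", "updated_at": dt.datetime(2012, 1, 1), "city": "Bergen"}

        print(resolve_host_based(old, new)["city"])       # Bergen: SystemA wins
        print(resolve_timestamp_based(old, new)["city"])  # Oslo: older incoming loses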

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.

    It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there was a good chance you could fix it before being alerted to the problem by the users of the system.

    There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.

    Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor. It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.

    Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created.

    In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Setting article properties for a publication using RMO in C# .NET

    - by Pavan Kumar
    I am using transactional replication with push subscriptions. I am developing a UI for replication using RMO in C#.NET between different instances of the same database within the same machine, holding a similar schema and structure. I am using a single-subscriber, multiple-publisher topology. During creation of the publication I want to set a few article properties, such as "Keep the existing object unchanged" and "Allow schema changes at subscriber" to false, and "Copy foreign key constraints" and "Copy check constraints" to true. How do I set the article properties using RMO in C#.NET? I am using Visual Studio 2008 SP1. I also want to know how to select all the objects, including tables, views and stored procedures, for publishing at one stretch. I could do it for one table, but I want to select all the tables at one stretch. This is the code snippet I used for selecting a single table for publishing:

        TransArticle ta = new TransArticle();
        ta.Name = "Article_1";
        ta.PublicationName = "TransReplication_DB2";
        ta.DatabaseName = "DB2";
        ta.SourceObjectName = "person";
        ta.SourceObjectOwner = "dbo";
        ta.ConnectionContext = conn;
        ta.Create();

    Read the article

  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on StackOverflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.

    B) Each broker will have 2 listeners and 1 publisher. Example figure:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is Clustering the Correct Approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern, and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the Correct Approach? My question is, does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster". I found these two informative, but seemingly conflicting, posts:

    Says EXAMPLE MODEL A is/should be implicitly supported:
    http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers
    "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says EXAMPLE MODEL A IS NOT supported:
    http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message
    "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post,
    Gene
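    To make EXAMPLE MODEL A concrete, here is a small plain-Python simulation of the desired behaviour (no JMS library involved; it just demonstrates demand-based forwarding between two peer brokers):

        class Broker:
            def __init__(self, name):
                self.name, self.peers, self.subs = name, [], []  # subs: (predicate, consumer)

            def subscribe(self, predicate, consumer):
                self.subs.append((predicate, consumer))

            def wants(self, msg):
                # A broker "demands" a message if any local subscription matches.
                return any(pred(msg) for pred, _ in self.subs)

            def publish(self, msg, from_peer=None):
                for pred, consumer in self.subs:
                    if pred(msg):
                        consumer(msg)
                for peer in self.peers:
                    # The key property of the model: forward only if the peer
                    # actually has a matching subscription.
                    if peer is not from_peer and peer.wants(msg):
                        peer.publish(msg, from_peer=self)

        a, b = Broker("A"), Broker("B")
        a.peers.append(b); b.peers.append(a)
        b.subscribe(lambda m: m.get("proto") == "smtp", lambda m: print("B1 got", m))

        a.publish({"proto": "smtp", "host": "example.com"})  # crosses over to B
        a.publish({"proto": "http"})                         # never leaves A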

    Read the article

  • Why would I need a using statement to Library B extn methods, if they're used in Library A & its Li

    - by Greg
    Hi, I have:

    - Main Program Class - uses Library A
    - Library A - has partial classes which mix in methods from Library B
    - Library B - mix-in methods & interfaces

    Why would I need a using statement for Library B just to get its extension methods working in the main class? That is, given that it's Library B that defines the classes that will be extended.

    EDIT - excerpt from code:

        // *** PROGRAM ***
        using TopologyDAL;
        using Topology;   // *** THIS WAS NEEDED TO GET EXTN METHODS APPEARING ***

        class Program
        {
            static void Main(string[] args)
            {
                var context = new Model1Container();
                Node myNode;
                // ** trying to get myNode mixin methods to appear seems to need using line to point to Library B ***
            }
        }

        // ** LIBRARY A
        namespace TopologyDAL
        {
            public partial class Node
            {
                // Auto generated from EF
            }

            public partial class Node : INode<int>   // to add extension methods from Library B
            {
                public int Key
            }
        }

        // ** LIBRARY B
        namespace ToplogyLibrary
        {
            public static class NodeExtns
            {
                public static void FromNodeMixin<T>(this INode<T> node)
                {
                    // XXXX
                }
            }

            public interface INode<T>
            {
                // Properties
                T Key { get; }
                // Methods
            }
        }

    Read the article

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (blue nodes below) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a Directed Acyclic Graph (DAG) where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot are going to be user defined and might be very messy. (Example diagram not shown.)

    On that structure, I want to perform the following operations:

    1. find all items (sinks) below a particular node (all items in Europe)
    2. find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com)
    3. find all nodes that lie below all of a set of nodes (intersection: goyish brown foods)

    The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there. However, is there a faster approach? Remembering the nodes I already passed through probably helps avoiding unnecessary repetition, but are there more optimizations? (See the sketch below.)

    How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to determine at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach? The graph traversal algorithms listed at Wikipedia all seem to be concerned with either finding a particular node or the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
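    For the first operation, here is a sketch of the memoised traversal idea (a hypothetical DAG as an adjacency dict; the cache is what avoids re-walking shared sub-categories):

        from functools import lru_cache

        # Toy DAG: categories point down toward items; sinks have no outgoing edges.
        children = {
            "Europe": ["Germany", "France"],
            "Germany": ["item1", "item2"],
            "France": ["item2", "item3"],
            "item1": [], "item2": [], "item3": [],
        }

        @lru_cache(maxsize=None)
        def sinks_below(node):
            """All items reachable below `node`. Caching each node's answer
            means shared sub-categories (item2 here) are expanded only once,
            the 'remember the nodes I already passed through' optimisation."""
            kids = children[node]
            if not kids:
                return frozenset([node])
            return frozenset().union(*(sinks_below(k) for k in kids))

        print(sorted(sinks_below("Europe")))   # ['item1', 'item2', 'item3']

    The third operation then falls out as a set intersection: sinks_below(a) & sinks_below(b).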

    Read the article

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes, but let's ignore those complexities for now and assume the network is relatively static.

    Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host: both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (at least that their error is of the same magnitude as the latencies involved), how can I calculate the one-way latency? (See the sketch below.)

    In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.

    Added: After considering the comment below, the second portion of the question is probably better off on Server Fault.
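    A sketch of why the four timestamps of a single probe cannot separate one-way latency from clock offset: the standard NTP-style estimator has to assume the path is symmetric, so any asymmetry shows up as apparent clock error. The numbers below are made up for illustration:

        # t0/t3 are read on the local clock, t1/t2 on the remote clock,
        # which may be offset by an unknown theta.
        def ntp_estimates(t0, t1, t2, t3):
            rtt = (t3 - t0) - (t2 - t1)            # total wire time, offset-free
            offset = ((t1 - t0) + (t2 - t3)) / 2   # remote clock offset, ASSUMING
            return rtt, offset                     # equal latency both ways

        # True situation: forward takes 30 ms, return 10 ms, remote clock +5 ms.
        t0 = 0.0
        t1 = t0 + 0.030 + 0.005
        t2 = t1 + 0.001                            # 1 ms of server processing
        t3 = t0 + 0.030 + 0.001 + 0.010

        rtt, offset = ntp_estimates(t0, t1, t2, t3)
        print(f"rtt={rtt*1000:.0f} ms, estimated offset={offset*1000:.0f} ms (true: 5 ms)")
        # Prints rtt=40 ms, offset=15 ms: half the 20 ms asymmetry leaked into
        # the offset estimate, so one-way delay is not recoverable from this alone.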

    Read the article

  • Exchange 2010 setup /prepareAD fails to run

    - by MadBoy
    I've tried installing Exchange 2010 on Windows Server 2008 R2 (the only domain controller, an all-in-one system). I ran setup.exe /prepareAD and setup /prepareSchema and it worked fine the first time. Unfortunately, the Hub Transport role then failed to install, which (at least from what I've read) is related to IPv6 being disabled - some say disabling it helped them, while for others enabling it helped. I had disabled IPv6 the proper way, via the registry entry, but it still errored out. So I managed to uninstall everything (renaming some old registry entries left by the failed Hub Transport role) and tried to reinstall Exchange after rebooting the server. Unfortunately, running setup /prepareAD now gives an error:

        D:\>setup /PrepareAD

        Welcome to Microsoft Exchange Server 2010 Unattended Setup

        By continuing the installation process, you agree to the license terms of
        Microsoft Exchange Server 2010. If you don't accept these license terms,
        please cancel the installation. To review these license terms, please go to
        http://go.microsoft.com/fwlink/?LinkId=150127&clcid=0x409/

        Press any key to cancel setup................
        No key presses were detected. Setup will continue.

        Preparing Exchange Setup

            Copying Setup Files                 ......................... COMPLETED

        No server roles will be installed

        Performing Microsoft Exchange Server Prerequisite Check

            Organization Checks                 ......................... COMPLETED

        Setup is going to prepare the organization for Exchange 2010 by using
        'Setup /PrepareAD'. No Exchange 2007 server roles have been detected in
        this topology. After this operation, you will not be able to install any
        Exchange 2007 server roles.

        Configuring Microsoft Exchange Server

            Organization Preparation            ......................... FAILED
            The following error was generated when "$error.Clear();
            buildToBuildUpgrade -ExsetDataAtom -AtomName OrgLevelCt
            -DomainController $RoleDomainController" was run: "An error occurred
            with error code '2147504140' and message 'The data type can't be
            converted to or from a native Active Directory data type.'.".

        The Exchange Server setup operation did not complete. Visit
        http://support.microsoft.com and enter the Error ID to find more information.

        Exchange Server setup encountered an error.

    Unfortunately, if I rerun the setup it complains that setup /prepareAD needs to be run first. Basically, all that works now is setup /PrepareSchema; setup /PrepareDomain complains that prepareAD wasn't done. For full information, I'm also attaching the error I had before I uninstalled everything and tried again:

        Hub Transport Role                      FAILED
        Error: The following error was generated when "$error.Clear();
        install-ExsetdataAtom -AtomName SharedMachineSettings
        -DomainController $RoleDomainController" was run: "An error occurred with
        error code '2147950640' and message 'There is no such object on the
        server.'.".
        An error occurred with error code '2147950640' and message 'There is no
        such object on the server.'.

    Read the article

  • Encounter error "Linux ip -6 addr add failed" while setting up OpenVPN client

    - by Mickel
    I am trying to set up my router to use OpenVPN and have gotten quite far (I think), but something seems to be missing and I am not sure what. Here is my configuration for the client:

        client
        dev tun
        proto udp
        remote ovpn.azirevpn.net 1194
        remote-random
        resolv-retry infinite
        auth-user-pass /tmp/password.txt
        nobind
        persist-key
        persist-tun
        ca /tmp/AzireVPN.ca.crt
        remote-cert-tls server
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Nov 8 15:45:13 rc_service: httpd 15776:notify_rc start_vpnclient1
        Nov 8 15:45:14 openvpn[27196]: OpenVPN 2.3.2 arm-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Nov 1 2013
        Nov 8 15:45:14 openvpn[27196]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Nov 8 15:45:14 openvpn[27196]: Socket Buffers: R=[116736->131072] S=[116736->131072]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link local: [undef]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link remote: [AF_INET]178.132.75.14:1194
        Nov 8 15:45:14 openvpn[27202]: TLS: Initial packet from [AF_INET]178.132.75.14:1194, sid=44d80db5 8b36adf9
        Nov 8 15:45:14 openvpn[27202]: WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=1, C=RU, ST=Moscow, L=Moscow, O=Azire Networks, OU=VPN, CN=Azire Networks, name=Azire Networks, [email protected]
        Nov 8 15:45:14 openvpn[27202]: Validating certificate key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has key usage 00a0, expects 00a0
        Nov 8 15:45:14 openvpn[27202]: VERIFY KU OK
        Nov 8 15:45:14 openvpn[27202]: Validating certificate extended key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
        Nov 8 15:45:14 openvpn[27202]: VERIFY EKU OK
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=0, C=RU, ST=Moscow, L=Moscow, O=AzireVPN, OU=VPN, CN=ovpn, name=ovpn, [email protected]
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
        Nov 8 15:45:15 openvpn[27202]: [ovpn] Peer Connection Initiated with [AF_INET]178.132.75.14:1194
        Nov 8 15:45:17 openvpn[27202]: SENT CONTROL [ovpn]: 'PUSH_REQUEST' (status=1)
        Nov 8 15:45:17 openvpn[27202]: PUSH: Received control message: 'PUSH_REPLY,ifconfig-ipv6 2a03:8600:1001:4010::101f/64 2a03:8600:1001:4010::1,route-ipv6 2000::/3 2A03:8600:1001:4010::1,redirect-gateway def1 bypass-dhcp,dhcp-option DNS 194.1.247.30,tun-ipv6,route-gateway 178.132.77.1,topology subnet,ping 3,ping-restart 15,ifconfig 178.132.77.33 255.255.255.192'
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: timers and/or timeouts modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ifconfig/up options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route-related options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP device tun0 opened
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP TX queue length set to 100
        Nov 8 15:45:17 openvpn[27202]: do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
        Nov 8 15:45:17 openvpn[27202]: /usr/sbin/ip link set dev tun0 up mtu 1500
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip addr add dev tun0 178.132.77.33/26 broadcast 178.132.77.63
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip -6 addr add 2a03:8600:1001:4010::101f/64 dev tun0
        Nov 8 15:45:18 openvpn[27202]: Linux ip -6 addr add failed: external program exited with error status: 254
        Nov 8 15:45:18 openvpn[27202]: Exiting due to fatal error

    Any ideas are most welcome!

    Read the article

  • dhcp-snooping option 82 drops valid dhcp requests on 2610 series Procurve switches

    - by kce
    We are slowly starting to implement dhcp-snooping on our HP ProCurve 2610 series switches, all running the R.11.72 firmware. I'm seeing some strange behavior where dhcp-request or dhcp-renew packets are dropped when originating from "downstream" switches, due to "untrusted relay information from client". The full error:

        Received untrusted relay information from client <mac-address> on port <port-number>

    In more detail, we have a 48-port HP 2610 (Switch A) and a 24-port HP 2610 (Switch B). Switch B is "downstream" of Switch A by virtue of a DSL connection to one of Switch A's ports. The DHCP server is connected to Switch A. The relevant bits are as follows:

    Switch A

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1 168
        interface 25
           name "Server"
           dhcp-snooping trust
           exit

    Switch B

        dhcp-snooping
        dhcp-snooping authorized-server 192.168.0.254
        dhcp-snooping vlan 1
        interface Trk1
           dhcp-snooping trust
           exit

    The switches are set to trust BOTH the port the authorized DHCP server is attached to and its IP address. This is all well and good for the clients attached to Switch A, but the clients attached to Switch B get denied due to the "untrusted relay information" error. This is odd for a few reasons: 1) dhcp-relay is not configured on either switch, and 2) the Layer-3 network here is flat - same subnet - so DHCP packets should not have a modified option 82 attribute. dhcp-relay does appear to be enabled by default, however:

        SWITCH A# show dhcp-relay

         DHCP Relay Agent         : Enabled
         Option 82                : Disabled
         Response validation      : Disabled
         Option 82 handle policy  : append
         Remote ID                : mac

         Client Requests         Server Responses
         Valid      Dropped      Valid      Dropped
         ---------- ----------   ---------- ----------
         0          0            0          0

        SWITCH B# show dhcp-relay

         DHCP Relay Agent         : Enabled
         Option 82                : Disabled
         Response validation      : Disabled
         Option 82 handle policy  : append
         Remote ID                : mac

         Client Requests         Server Responses
         Valid      Dropped      Valid      Dropped
         ---------- ----------   ---------- ----------
         40156      0            0          0

    And interestingly enough, the dhcp-relay agent seems very busy on Switch B - but why? As far as I can tell there is no reason why dhcp requests need a relay with this topology. And furthermore, I can't tell why the upstream switch is dropping legitimate dhcp requests for untrusted relay information when the relay agent in question (on Switch B) isn't modifying the option 82 attributes anyway. Adding no dhcp-snooping option 82 on Switch A allows the dhcp traffic from Switch B to be approved by Switch A, by virtue of just turning off that feature. What are the repercussions of not validating option-82-modified dhcp traffic? If I disable option 82 on all my "upstream" switches, will they pass dhcp traffic from any downstream switch regardless of that traffic's legitimacy? This behavior is client-operating-system agnostic: I see it with both Windows and Linux clients. Our DHCP servers are either Windows Server 2003 or Windows Server 2008 R2 machines, and I see this behavior regardless of the DHCP servers' operating system. Can anyone shed some light on what's happening here and give me some recommendations on how I should proceed with configuring the option 82 setting? I feel like I just haven't completely grokked dhcp-relaying and option 82 attributes.
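    For what it's worth, a less drastic alternative than disabling option 82 validation chassis-wide would be to extend dhcp-snooping trust to the inter-switch link itself, so that option-82-tagged requests arriving from Switch B's relay agent are accepted on that one port only. This is a sketch based on the same dhcp-snooping trust command already applied to the server port above; the port number is a placeholder, and I haven't verified this against the R.11.72 firmware:

        SWITCH-A(config)# interface <port-to-switch-B>
        SWITCH-A(eth-<port>)# dhcp-snooping trust
        SWITCH-A(eth-<port>)# exit

    That keeps snooping active on all client-facing edge ports and confines the trust decision to the one link whose far side (Switch B) is under your control, rather than accepting unvalidated option 82 traffic everywhere.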

    Read the article

  • How can I connect DLNA devices through NAT?

    - by Bob
    I have a Windows 7 PC running Serviio as a DLNA server. I have a Samsung I9100G running Skifta as a DLNA renderer (client). My network topology: [diagram omitted]

    At the moment, I can connect and watch my videos fine if the phone is on router #2. The server is on a wired network with #2. Router #1 is 192.168.1.1, router #2 is 192.168.2.1 (192.168.1.2) and router #3 is 192.168.3.1 (192.168.1.3). In other words, each router has its own subnet, using NAT - their "modem" ports are connected to "LAN" ports on modem/router #1. What I want to do is be able to connect to the DLNA server when the renderer is connected to router #1 or #3 - #1 is on the WAN side of #2, while #3 is even further separated. I'll settle for just #1 working, though. Normally, I would just forward the appropriate ports and everything would work fine. However, DLNA (apparently) uses UPnP, which I am unfamiliar with. I tried enabling UPnP on router #2, but that did not seem to change anything. It's a Belkin F5D7230-4 6000 - there are reported issues with UPnP on the F5D7230-4 7000. UPnP is already enabled on router #1, a Billion BiPAC 7700N. I've also tried the built-in DLNA renderer/server/controller on my phone, Samsung AllShare. It can see the server on router #2 and browse files, but has issues playing or downloading them. It also can't see the server on the other two networks. I'm currently using Skifta's "local" mode; "remote" mode requires an account, which I don't really want to create if not necessary. Is it possible to do what I'm trying to do? If no, are there workarounds? If yes, how do I do it? Is my server the issue? The renderer (client)? The router(s)? My method? I can change just about anything except the routers.

    Read the article

  • Bladecenter-E Power Module fault

    - by Lihnjo
    We have a problem on an IBM BladeCenter-E.

    Critical Events:
        Power module 2 is off. DC fault.
        Power module 4 is off. DC fault.

    Warnings and System Events:
        Insufficient chassis power to support redundancy

    What is the best solution for this problem? Thanks.

        AMM Service Data
        Help SPAPP Capture Available 10/13/2010 17:03:47 1090347 bytes
        Time: 11/19/2012 11:02:31
        UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540
        MAC Address xx:xx:xx:xx:xx:xx

        MM Information
         Name: nnnnn
         Contact: aaa, bbb, ccc, England
         Location: [email protected]
         IP address: 111.222.333.444

        Date Time Information
         GMT offset: +1:00 - Central Europe Time (Western Europe, Algeria, Nigeria, Angola)
         Adjust for DST: Yes
         NTP: Enabled
         NTP Hostname/IP: 111.222.333.444

        System Health: Critical

        System Status Summary
         One or more monitored parameters are abnormal.
         Critical Events
          Power module 2 is off. DC fault.
          Power module 4 is off. DC fault.
         Warnings and System Events
          Insufficient chassis power to support redundancy

        CHASSIS (BladeCenter-E) in CHASSIS slot: 01
         TopoPath is "CHASSIS[1]".
         Description    : BladeCenter-E
         Width          : 1
         Sub Type       : BladeCenter (BC)
         Power Mode     : 220 v
         KVM Owner      : CHASSIS[1]/BLADE[9]
         MT Owner       : CHASSIS[1]/MGMT_MOD[1]
         Component Type : CHASSIS
         Inventory:
          VPD ID: 336 (decimal)
          POS ID EXT: 0 (decimal)
          POS ID: 8 (decimal)
          Machine Type/Model: 86773RG
          Machine Serial Number: 99ZL816
          Part Number: 39R8561
          FRU Number: 39R8563
          FRU Serial Number: YK109174W1HV
          Manufacturer ID: IBM
          Hardware Revision: 3 (decimal)
          Manufacture Date: 18 (wk), 07 (yr)
          UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540 (hex)
          Type Code: 97 (decimal)
          Sub-type Code: 0 (decimal)
          IANA Num: 336 (decimal)
          Product ID: 8 (decimal)
          Manufacturer Sub ID: FOXC
         Environment data:
          Type: POWER_USAGE,  Unit: WATTS,  Reading: 0xa,  Sensor Label: Midplane,  Sensor ID: 0x0

        MGMT MOD (Advanced Management Module) in MGMT_MOD slot: 01
         TopoPath is "CHASSIS[1]/MGMT_MOD[1]".
         Description    : Advanced Management Module
         Name           : kant
         Width          : 1
         Component Role : Primary
         Component Type : MGMT MOD
         Insert Time    : 28050132
         Inventory:
          VPD ID: 288 (decimal)
          POS ID EXT: 0 (decimal)
          POS ID: 4 (decimal)
          Part Number: 39Y9659
          FRU Number: 39Y9661
          FRU Serial Number: YK11836CE2RC
          Manufacturer ID: IBM
          Hardware Revision: 4 (decimal)
          Manufacture Date: 50 (wk), 06 (yr)
          UUID: 1D95 9937 8CA5 11DB 9499 0014 5EDF 1C98 (hex)
          Type Code: 81 (decimal)
          Sub-type Code: 1 (decimal)
          IANA Num: 20301 (decimal)
          Product ID: 65 (decimal)
          Manufacturer Sub ID: ASUS
         Firmware data:
          Type         : AMM firmware
          Build ID     : BPET50P
          File Name    : CNETCMUS.PKT
          Release Date : 03/26/2010
          Release Level: 50
          Revision - Major: 80
         Port info:
          Topology Path ID : 1
          Label            : External
          Phy Orientation  : EXTERNAL
          Port Number      : 1
          Type             : MGT
          Physical Medium  : Copper
          Number of Link Interfaces : 1
           Link Ifc ID Number          : 1
           Link Ifc Transport Protocol : ENET
           Link Ifc Addr Type          : MAC
           Link Ifc Burned-in Addr     : xx:xx:xx:xx:xx:xx
           Link Ifc Admin Addr         : 00:00:00:00:00:00
           Link Ifc Addr in use        : xx:xx:xx:xx:xx:xx
         Configuration behaviors: Save Only
         Environment data:
          Type: TEMPERATURE,  Unit: DEGREES_C,  Reading: 38.00,  Sensor Label: MM Ambient,  Sensor ID: 0x0
          Type: VOLTAGE,  Unit: VOLTS,  Reading: +4.81,   Sensor Label: +5V,    Sensor ID: 0x1b
          Type: VOLTAGE,  Unit: VOLTS,  Reading: +3.26,   Sensor Label: +3.3V,  Sensor ID: 0x19
          Type: VOLTAGE,  Unit: VOLTS,  Reading: +11.97,  Sensor Label: +12V,   Sensor ID: 0x16
          Type: VOLTAGE,  Unit: VOLTS,  Reading: -4.88,   Sensor Label: -5V,    Sensor ID: 0x1e
          Type: VOLTAGE,  Unit: VOLTS,  Reading: +2.47,   Sensor Label: +2.5V,  Sensor ID: 0x18
          Type: VOLTAGE,  Unit: VOLTS,  Reading: +1.76,   Sensor Label: +1.8V,  Sensor ID: 0x15
          Type: POWER_USAGE,  Unit: WATTS,  Reading: 0x19,  Sensor Label: kant,  Sensor ID: 0x0

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13  | Next Page >