Search Results

Search found 22716 results on 909 pages for 'network architecture'.


  • How to launch a Windows service network process to listen on a localhost port that is visible to logged-in users

    - by rwired
    Here's the code (in a standard TService in Delphi):

        const
          ProcessExe = 'MyNetApp.exe';

        function RunService: Boolean;
        var
          StartInfo: TStartupInfo;
          ProcInfo: TProcessInformation;
          CreateOK: Boolean;
        begin
          CreateOK := False;
          FillChar(StartInfo, SizeOf(TStartupInfo), #0);
          FillChar(ProcInfo, SizeOf(TProcessInformation), #0);
          StartInfo.cb := SizeOf(TStartupInfo);
          CreateOK := CreateProcess(nil, PChar(ProcessExe), nil, nil, False,
            CREATE_NEW_PROCESS_GROUP + NORMAL_PRIORITY_CLASS,
            nil, PChar(InstallDir), StartInfo, ProcInfo);
          CloseHandle(ProcInfo.hProcess);
          CloseHandle(ProcInfo.hThread);
          Result := CreateOK;
        end;

        procedure TService1.ServiceExecute(Sender: TService);
        const
          IntervalsBetweenRuns = 4; // number of IntTimes between checks
          IntTime = 250;            // ms
        var
          Count: SmallInt;
        begin
          Count := IntervalsBetweenRuns; // first time, run immediately
          while not Terminated do
          begin
            Inc(Count);
            if Count >= IntervalsBetweenRuns then
            begin
              Count := 0;
              // We check whether the process is running; if not, we run it.
              // That's all there is to it. If ProcessExe crashes, this
              // service host will just rerun it.
              if ProcessExists(ProcessExe) = 0 then
                RunService;
            end;
            Sleep(IntTime);
            ServiceThread.ProcessRequests(False);
          end;
        end;

    MyNetApp.exe is a SOCKS5 proxy listening on port 9870. Users configure their browser to use this proxy, which acts as a secure tunnel/anonymizer. All works perfectly fine on 2000/XP/2003, but on Vista/Win7 with UAC the service runs in Session 0 under LocalSystem, and port 9870 doesn't show up in netstat for the logged-in user or Administrator. UAC seems to be getting in my way. Is there something I can do with the SECURITY_ATTRIBUTES or CreateProcess, or something I can do with CreateProcessAsUser or impersonation, to ensure that a network socket opened by a service is available to logged-in users on the system? (Note: this app is for mass deployment; I don't have access to user credentials, and I require the user to elevate their privileges to install the service on Vista/Win7.)
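    Before pointing at UAC, it can help to confirm from an interactive session whether anything is actually accepting connections on the port. A minimal probe, as a sketch only (the host and port are the values from the post; any language with sockets would do):

        import socket

        def port_is_listening(host="127.0.0.1", port=9870, timeout=2.0):
            """Return True if something accepts a TCP connection on host:port."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # True means the proxy is reachable from this session, regardless
        # of what netstat shows.
        print(port_is_listening())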

    Read the article

  • Sharp Architecture: Accessing Validation Results

    - by nabeelfarid
    I am exploring Sharp Architecture and I would like to know how to access the validation results after calling Entity.IsValid(). I have two scenarios:

    1) If entity.IsValid() returns false, I would like to add the errors to the ModelState.AddModelError() collection in my controller. E.g. in the Northwind sample we have an EmployeesController.Create() action; when we do employee.IsValid(), how can I get access to the errors?

        public ActionResult Create(Employee employee)
        {
            if (ViewData.ModelState.IsValid && employee.IsValid())
            {
                employeeRepository.SaveOrUpdate(employee);
            }
            // ....
        }

    [I already know that when an action method is called, the model binder enforces validation rules (NHibernate Validator attributes) as it parses incoming values and tries to assign them to the model object, and if it can't parse the incoming values it registers those as errors in ModelState for each model object property. But what if I have some custom validation? That's why we do ModelState.IsValid first.]

    2) In my test methods I would like to test the NHibernate validation rules as well. I can call entity.IsValid(), but that only returns true/false. I would like to assert against the actual error, not just true/false.

    In my previous projects, I normally use a wrapper service layer for repositories: instead of calling repository methods directly from the controller, controllers call service-layer methods, which in turn call repository methods. All my custom validation rules reside in the service layer, and its methods throw a custom exception with a NameValueCollection of errors which I can easily add to ModelState in my controller. This way I can also easily implement sophisticated business rules in the service layer. I know Sharp Architecture also provides a service-layer project. But what I am interested in, and my next question is: how can I use NHibernate Validators to implement sophisticated custom business rules (not just null, empty, range, etc.) and make Entity.IsValid() verify those rules too?
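    For comparison, the service-layer pattern described in the last paragraph can be sketched in language-neutral terms. A sketch in Python (all names are hypothetical; this illustrates the pattern, not Sharp Architecture's actual API):

        class ValidationError(Exception):
            """Raised by the service layer with a field -> message map."""
            def __init__(self, errors):
                super().__init__("validation failed")
                self.errors = errors  # e.g. {"FirstName": "may not be empty"}

        class EmployeeService:
            def __init__(self, repository):
                self.repository = repository

            def save(self, employee):
                errors = {}
                # Custom business rules live here, next to the simple ones.
                if not employee.get("FirstName"):
                    errors["FirstName"] = "may not be null or empty"
                if errors:
                    raise ValidationError(errors)
                self.repository.save_or_update(employee)

        # Controller side: copy the error map into model state.
        # try:
        #     service.save(employee)
        # except ValidationError as e:
        #     for field, message in e.errors.items():
        #         model_state.add_model_error(field, message)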

    Read the article

  • Python libusb pyusb "mach-o, but wrong architecture"

    - by Jon
    I am having some trouble with the pyusb module. I have narrowed the problem down to a single line, and have created a small example script to replicate the error.

        #!/usr/bin/env python
        """
        This module was created to isolate the problem in the pyusb package.

        Operating system: Mac OS 10.6.3
        Python version: 2.6.4
        libusb 1.0.8 has been successfully installed using: sudo port install libusb

        I have also tried modifying /opt/local/etc/macports/macports.conf to
        force the i386 architecture instead of x86_64.
        """
        from ctypes import *
        import ctypes.util

        libname = ctypes.util.find_library('usb-1.0')
        print 'libname: ', libname
        l = CDLL(libname, RTLD_GLOBAL)

        # RESULT:
        # libname:  /usr/local/lib/libusb-1.0.dylib
        # Traceback (most recent call last):
        #   File "./pyusb_problem.py", line 7, in <module>
        #     l = CDLL(libname, RTLD_GLOBAL)
        #   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ctypes/__init__.py", line 353, in __init__
        #     self._handle = _dlopen(self._name, mode)
        # OSError: dlopen(/usr/local/lib/libusb-1.0.dylib, 10): no suitable image found. Did find:
        #   /usr/local/lib/libusb-1.0.dylib: mach-o, but wrong architecture
        # End of File

    This same script runs successfully on Ubuntu 10.04. I have tried building the libusb module (directly from source and through MacPorts) for 32-bit (i386) instead of x86_64 (the default on OS 10.6), but I receive the same error. Thank you in advance for your help!
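    One way to see both sides of the mismatch is to compare the interpreter's own architecture with the architectures baked into the dylib. A diagnostic sketch (written in modern Python 3 syntax; it relies on the standard macOS file tool and the usb-1.0 library name from the script):

        import ctypes.util
        import platform
        import subprocess

        # Architecture this Python process is actually running as.
        print(platform.machine(), platform.architecture())

        # Architectures contained in the dylib that ctypes locates.
        lib = ctypes.util.find_library("usb-1.0")
        print(subprocess.check_output(["file", lib], text=True))

    If the two reports disagree (e.g. a 64-bit interpreter and an i386-only dylib, or vice versa), dlopen will fail exactly as in the traceback.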

    Read the article

  • Sharp Architecture question

    - by csetzkorn
    I am trying to get my head around Sharp Architecture and follow the tutorial. I am using this code:

        using Bla.Core;
        using System.Collections.Generic;
        using Bla.Core.DataInterfaces;
        using System.Web.Mvc;
        using SharpArch.Core;
        using SharpArch.Web;
        using Bla.Web;

        namespace Bla.Web.Controllers
        {
            public class UsersController
            {
                public UsersController(IUserRepository userRepository)
                {
                    Check.Require(userRepository != null, "userRepository may not be null");
                    this.userRepository = userRepository;
                }

                public ActionResult ListStaffMembersMatching(string filter)
                {
                    List<User> matchingUsers = userRepository.FindAllMatching(filter);
                    return View("ListUsersMatchingFilter", matchingUsers);
                }

                private readonly IUserRepository userRepository;
            }
        }

    I get this error:

        The name 'View' does not exist in the current context

    I have used all the correct using statements and referenced the assemblies as far as I can see. The views live in Bla.Web in this architecture. Can anyone see the problem? Thanks, Christian

    Read the article

  • Web application architecture and application servers?

    - by seanieb
    Hi, I'm building a web application, and I need an architecture that allows me to run it on two servers. The application scrapes information from other sites, both periodically and on input from the end user. To do this I'm using PHP+curl to scrape the information, and PHP or Python to parse it and store the results in a MySQL DB. Then I will use Python to run some algorithms on the data; this will also happen both periodically and on input from the end user. I'm going to cache some of the results in the MySQL DB, and sometimes, if the result is specific to the user, skip storing the data and just serve it to the user. I'm thinking of using PHP for the website front end on a separate web server, and running the PHP spider, MySQL DB and Python on another server.

    As you can see, I'm fairly clueless. I'm familiar with PHP, MySQL and the basics of Python, but bringing this all together with something more complex than a cron job is new to me. How do I go about implementing this? What framework(s) should I use? Is MVC a good architecture for this? (I'm new to MVC, architectures, etc.) Is CakePHP a good solution? If so, will I be able to control and monitor the Python code using it?
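    As a rough illustration of the periodic-scrape piece described above, here is a sketch only; the URL, credentials and table are hypothetical, and it assumes the requests and mysql-connector-python packages:

        import time
        import requests
        import mysql.connector

        def scrape_once(conn):
            # Fetch a page and store the raw body for later parsing/algorithms.
            html = requests.get("https://example.com/data", timeout=10).text
            cur = conn.cursor()
            cur.execute("INSERT INTO pages (fetched_at, body) VALUES (NOW(), %s)",
                        (html,))
            conn.commit()

        conn = mysql.connector.connect(user="app", password="secret",
                                       database="scraper")
        while True:
            scrape_once(conn)
            time.sleep(3600)  # hourly, playing the role of the cron job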

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client.

    Here are some scenarios:

    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and only then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?).

    Would you use two stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?
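    The first scenario above (the server mints the id before the record exists locally) can be sketched independently of ember-data. The endpoint and field names below are hypothetical, and the sketch assumes the Python requests package:

        import requests

        def create_record(payload, local_store):
            # 1. Commit to the remote REST store first so the server assigns the id.
            resp = requests.post("https://api.example.com/records",
                                 json=payload, timeout=10)
            resp.raise_for_status()
            record = resp.json()  # canonical record, including the server-minted id
            # 2. Only then create it in the local store, keyed by that id.
            local_store[record["id"]] = record
            return record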

    Read the article

  • Make an ARM architecture C library on Mac

    - by gamegamelife
    I'm trying to make my own C library on a Mac and include it in my iPhone program. The C code is simple, like this:

        /* math.h */
        int myPow2(int);

        /* math.c */
        #include "math.h"

        int myPow2(int num)
        {
            return num * num;
        }

    I searched for how to make the C library file (.a or .lib, etc.); it seems I need to use the gcc compiler (is there another method?), so I used these commands:

        gcc -c math.c -o math.o
        ar rcs libmath.a math.o

    and included the result in the iPhone project. Now there is a problem when building the Xcode iPhone project:

        file was built for unsupported file format which is not the architecture being linked

    I found some pages discussing the problem, but no details on how to make the i386/ARM architecture library, and I finally used these commands to do it:

        gcc -arch i386 -c math.c -o math.o
        /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1 -c math.c -o math.o

    I don't know if this method is correct, or whether there is another method to do it?
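    Whichever toolchain ends up being right, it is easy to check which architectures actually landed in the artifacts. A small sketch that shells out to the standard macOS lipo tool (file names taken from the commands above):

        import subprocess

        # Print the architectures contained in each build artifact.
        for artifact in ["math.o", "libmath.a"]:
            result = subprocess.run(["lipo", "-info", artifact],
                                    capture_output=True, text=True)
            print(result.stdout.strip() or result.stderr.strip())

    If the library only reports i386 (or only x86_64), the linker error when targeting an ARM device is expected.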

    Read the article

  • MacBook Pro Wireless Reconnecting

    - by A Student at a University
    I'm using a WPA2 EAP network. I'm sitting next to the access point. The connection keeps dropping and taking ~10 seconds to reconnect. My other devices are staying online. What's causing it? syslog:

        01:21:10 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:21:10 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed reboot -> renew
        01:21:10 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX)
        01:21:10 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:21:10 NetworkManager[XX40]: <info> domain name 'server.domain.tld'
        01:21:10 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:33:30 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:33:30 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:33:30 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started
        01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
        01:35:14 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded
        01:35:14 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
        01:35:14 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> 4-way handshake
        01:35:14 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:35:17 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
        01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> disconnected
        01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:26 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
        01:35:26 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected
        01:35:29 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 8 -> 3 (reason 11)
        01:35:32 NetworkManager[XX40]: <info> (eth1): deactivating device (reason: 11).
        01:35:32 NetworkManager[XX40]: <info> (eth1): canceled DHCP transaction, DHCP client pid XX27
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) starting connection 'Auto XXXXXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 3 -> 4 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): access point 'Auto XXXXXXXXXX' has security, but secrets are required.
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 6 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 6 -> 4 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled...
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete.
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting...
        01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0)
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): connection 'Auto XXXXXXXXXX' has security, and secrets exist. No new secrets needed.
        01:35:32 NetworkManager[XX40]: <info> Config: added 'ssid' value 'XXXXXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'scan_ssid' value '1'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'key_mgmt' value 'WPA-EAP'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'password' value '<omitted>'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'eap' value 'PEAP'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'fragment_size' value 'XXX0'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'phase2' value 'auth=MSCHAPV2'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'ca_cert' value '/etc/ssl/certs/Equifax_Secure_CA.pem'
        01:35:32 NetworkManager[XX40]: <info> Config: added 'identity' value 'XXXXXXX'
        01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete.
        01:35:32 NetworkManager[XX40]: <info> Config: set interface ap_scan to 1
        01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning
        01:35:36 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> associated
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
        01:35:36 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded
        01:35:36 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake
        01:35:36 wpa_supplicant[XX60]: WPA: Could not find AP from the scan results
        01:35:36 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=]
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1/wireless) Stage 2 of 5 (Device Configure) successful. Connected to wireless network 'XXXXXXXXXX'.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) scheduled.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) started...
        01:35:36 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 7 (reason 0)
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Beginning DHCPv4 transaction (timeout in 45 seconds)
        01:35:36 NetworkManager[XX40]: <info> dhclient started with pid XX87
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) complete.
        01:35:36 dhclient: Internet Systems Consortium DHCP Client VXXX.XXX.XXX
        01:35:36 dhclient: Copyright 2004-2009 Internet Systems Consortium.
        01:35:36 dhclient: All rights reserved.
        01:35:36 dhclient: For info, please visit https://www.isc.org/software/dhcp/
        01:35:36 dhclient:
        01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed nbi -> preinit
        01:35:36 dhclient: Listening on LPF/eth1/XX:XX:XX:XX:XX:XX
        01:35:36 dhclient: Sending on LPF/eth1/XX:XX:XX:XX:XX:XX
        01:35:36 dhclient: Sending on Socket/fallback
        01:35:36 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:35:36 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX
        01:35:36 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds.
        01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed preinit -> reboot
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) scheduled...
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) started...
        01:35:36 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX
        01:35:36 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX)
        01:35:36 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX'
        01:35:36 NetworkManager[XX40]: <info> domain name 'server.domain.tld'
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) scheduled...
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) complete.
        01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) started...
        01:35:37 NetworkManager[XX40]: <info> (eth1): device state change: 7 -> 8 (reason 0)
        01:35:37 NetworkManager[XX40]: <info> (eth1): roamed from BSSID XX:XX:XX:XX:XX:XX (XXXXXXXXXX) to XX:XX:XX:XX:XX:XX (XXXXXXXXX)
        01:35:37 NetworkManager[XX40]: <info> Policy set 'Auto XXXXXXXXXX' (eth1) as default for IPv4 routing and DNS.
        01:35:37 NetworkManager[XX40]: <info> Activation (eth1) successful, device activated.
        01:35:37 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) complete.
        01:35:43 wpa_supplicant[XX60]: Trying to associate with XX:XX:XX:XX:XX:XX (SSID='XXXXXXXXXX' freq=2412 MHz)
        01:35:43 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> associating
        01:35:43 wpa_supplicant[XX60]: Association request to the driver failed
        01:35:46 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associating -> associated
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake
        01:35:46 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP]
        01:35:46 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=]
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake
        01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:40:47 wpa_supplicant[XX60]: WPA: Group rekeying completed with XX:XX:XX:XX:XX:XX [GTK=TKIP]
        01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> group handshake
        01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed
        01:50:19 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67
        01:50:19 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX

    Read the article

  • Oracle Social Network Developer Challenge: Bezzotech

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog.

    I've covered all the entries we had for the Oracle Social Network Developer Challenge - the winners, Dimitri and Martin, HarQen, TEAM Informatics and John Sim from Fishbowl Solutions - and today, I'm giving you bonus coverage. Friend of the 'Lab, Bex Huff (@bex) from Bezzotech (@bezzotech), had an interesting OpenWorld. He rebounded from an allergic reaction to finish his entry, Honey Badger, only to have his other OpenWorld commitments make him unable to present his work. Still, he did a bunch of work, and I want to make sure everyone knows about the Honey Badger. If you're wondering about the name, it's a meme; "honey badger don't care."

    Bex tackled a common problem with social tools by adding game mechanics to create an incentive for people to keep their profiles updated. He used a Hot-or-Not style comparison app that poses expertise questions and awards a badge to the winner. Questions are based on whatever attributes the business wants to emphasize. The goal is to find the mavens in an organization and give them praise and recognition, ideally creating an incentive for everyone to raise their games. In his own words:

    There is a real information-quality problem in social networks. In last year's keynote, Larry Ellison demonstrated how to use the social network to track down resources that have the skill sets needed for specific projects. But how well would that work in real life? People usually fill in the basic profile information, but they rarely update their profiles with the latest news items, projects, customers, or skills. It's a pain. Or, put another way, when was the last time you updated your LinkedIn profile?

    Enter the Honey Badger! This is an example of a comparator app that gamifies the way people keep their profiles updated, which ensures higher-quality data in the social network. An administrator comes up with a series of important questions: Who is a better communicator? Who is a better Java programmer? Who is a better team player? And people would have a space in their profile to give a justification as to why they have these skills.

    The second part of the app is the comparator. It randomly shows two people, their names, and their justification for why they have these skills. You click on one of them to "vote" for them; then on the next page you see the results from the previous match, and get two new people to vote on. Anybody with a winning score wins a "Honey Badge" to be displayed on their profile page, which proudly states that their peers agree that this person has those skills. Once a badge is won, it will be jealously guarded. The longer you go without updating your profile, the more likely it is that you will lose your badge. This "loss aversion" is well known in psychology, and is a strong incentive for people to keep their profiles up to date. If a user sees their rank drop from 90% to 60%, they will find the time to update their justification!

    Unfortunately, during the hackathon we were not allowed to modify the schema to allow for additional fields such as "justification." So this hack is limited to just the one basic question: who is the bigger Honey Badger?
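    The comparator mechanic Bex describes boils down to a pairwise vote with a win-ratio threshold. A toy sketch, with all names and the threshold made up for illustration:

        import random

        profiles = {"alice": {"wins": 0, "votes": 0},
                    "bob":   {"wins": 0, "votes": 0},
                    "carol": {"wins": 0, "votes": 0}}

        def next_match():
            # Randomly show two people to vote on.
            return random.sample(sorted(profiles), 2)

        def record_vote(winner, loser):
            profiles[winner]["wins"] += 1
            for name in (winner, loser):
                profiles[name]["votes"] += 1

        def has_badge(name, threshold=0.6):
            # A "Honey Badge" goes to anyone whose peers agree often enough.
            p = profiles[name]
            return p["votes"] > 0 and p["wins"] / p["votes"] >= threshold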
    Here are some shots of the Honey Badger application. [Screenshot gallery omitted.]

    Thanks to Bex and everyone for participating in our challenge. Despite very little time to promote this event, we had a great turnout and creative and useful entries. The amount of work required to put together these final entries was significant, especially during a conference, and the judges and all of us involved were impressed at how much work everyone was able to do. Congrats to everyone; pat yourselves on the back. Stay tuned if you're interested in challenges like these. We'll likely be running similar events in the not-so-distant future.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA", or something similar. Then I can choose the best fit from technologies which range from full on-premises computing to IaaS, PaaS or SaaS.

    I also deal in abstraction layers: on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting: laying in whatever technology or vendor best fixes the problem.

    To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and the business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS: you merely point out what the needs are, research the vendor and present the findings (and the bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated: you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on.

    IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, runtimes, scale patterns, tools and much more to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another.

    Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers, all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in. A toy sketch of the "declarative intent" idea follows the resource list below.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:

    - Sign Up for Windows Azure
    - Add Windows Azure to a RightScale Account
    - Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)

    - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
    - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress)

    Baby Bear Level - Marketing:

    - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
    - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
    - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines
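    As promised above, here is a deliberately simplified sketch of the "document your intent, let the tooling stamp it out" idea. The structure is invented for illustration and is not RightScale's actual template format:

        # Declarative intent: what should exist, not how to build it.
        solution = {
            "name": "three-tier-web",
            "servers": [
                {"role": "web", "count": 2, "ports": [80]},
                {"role": "app", "count": 2, "ports": [8080]},
                {"role": "db",  "count": 1, "ports": [3306]},
            ],
            "targets": ["on-premises", "windows-azure"],  # same intent, many clouds
        }

        def deploy(spec, target):
            # The tooling owns the build steps; the spec only records intent.
            for server in spec["servers"]:
                print(f"{target}: provisioning {server['count']} x {server['role']}")

        for target in solution["targets"]:
            deploy(solution, target)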

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason), then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or to a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site: the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure), you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA, and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
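    As a footnote to the IaaS section above, the "detect the failure and re-route" step can start as a simple health probe that picks the active site. A sketch with hypothetical URLs (it assumes the Python requests package; the actual DNS or load-balancer flip is environment-specific):

        import requests

        PRIMARY = "https://app-east.example.com/health"
        SECONDARY = "https://app-west.example.com/health"

        def healthy(url):
            try:
                return requests.get(url, timeout=5).status_code == 200
            except requests.RequestException:
                return False

        active = PRIMARY if healthy(PRIMARY) else SECONDARY
        # Next step (not shown): flip a DNS entry or load-balancer alias to `active`.
        print("serving from:", active)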

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service-oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. According to Anne Thomas Manes (2010), the following benefits are achieved when governance is consistently applied across systems:

    - Governance makes sure that services conform to standard interface patterns and common data-modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system.
    - Governance defines development standards based on proven design principles and patterns that promote reuse and composition.
    - Governance provides developers a set of proven design principles, standards and practices that promote the reduction of system-wide component dependencies. By following these guidelines, individual components will be easier to maintain.

    For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I personally would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach. The top layer will be defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects; this layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer will be defined by the system designer or a solution architect assigned to a project: they will define what design patterns will be used in a solution and its naming conventions, as well as outline how the system will function based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT-governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, and the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules, based on how they are interpreted by the local community. To further define my example: the United States government defines that marijuana is illegal. Each individual state has the option to interpret this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules: a police officer will arrest anyone in the state of Florida for having this drug on them as they walk down the street, but in California, if a person has a medical prescription for the drug, they will not get arrested.

    Reference: Thomas Manes, Anne (2010). Understanding SOA Governance. http://www.soamag.com/I40/0610-2.php

    Read the article

  • Can't connect to DeploymentShare$ from the PC attempting MDT deployment, but other PCs on the network can

    - by Moman10
    I am in the process of setting up MDT and have run across a problem. MDT is installed on a Windows 2012 server, MDT version 6.2.5019.0, using WDS as well. Active Directory domain; the server is up to date and on the network. I boot up the PC, it gets an address from DHCP, pulls down the LiteTouchPE_x64.wim image and goes into the MS Solution Accelerators screen. The Processing Bootstrap Settings box comes up and processes for a couple of seconds, then goes away; it sits there for another minute or so and then gives the error:

        A connection to the deployment share (\\Acme-MDT\DeploymentShare$) could not be made.
        Can not reach the DeployRoot.
        Possible Cause: Network Routing error or Network Configuration Error.

    I can then retry or cancel. I have seen this error online, but so far nothing that helps fix it; it seems to be an issue with the FQDN. I verified that I am getting an IP address and that I can successfully ping the MDT server if I use the FQDN, but not by its A record of Acme-MDT. I tried manually mapping the network share using net use: it works if I use the FQDN, but fails with error 53, "Network path not found", if I just use the A record of Acme-MDT. Here is the net use command I'm using:

        net use * \\Acme-MDT\DeploymentShare$ /u:Domain\Administrator

    It gives "System error 53, Network path not found" (and doesn't prompt for a password), but if I use the FQDN of \\Acme-MDT.domain.com\DeploymentShare$ it works fine to map the drive. I guess the problem is that when it tries to load the image, it starts from \\Acme-MDT\DeploymentShare$, and I need it to start from \\Acme-MDT.domain.com\DeploymentShare$, but I'm not sure how to get it to do that. I've put the fully qualified path in CustomSettings.ini and Bootstrap.ini, updated the deployment share, regenerated the boot image and replaced the boot WIM in WDS. Or, if someone has an idea as to why it's acting this way and knows a way around it - the end result is what matters! :) I did verify in DNS that Acme-MDT is there, with the proper IP, and I can successfully use the net use command to map this drive from a couple of other computers that are already on the network. I am assuming it has something to do with that computer not already being part of the domain, but I'm honestly at a loss as to how to fix it. Any ideas are appreciated; thanks in advance for your help!
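    One quick way to reproduce the symptom outside of MDT is to compare how the short name and the FQDN resolve from the client. A sketch (host names taken from the post):

        import socket

        for name in ["Acme-MDT", "Acme-MDT.domain.com"]:
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as err:
                print(name, "-> resolution failed:", err)

        # If only the short name fails here too, the problem is likely the
        # client's DNS suffix search list rather than MDT itself.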

    Read the article

  • tc rules block traffic from some hosts on the network

    - by user139430
    I have a problem I cannot solve. The script which sets the traffic-shaping rules is blocking traffic from some hosts. If I remove all the rules, then it works. I cannot understand why. Here is my script:

        #!/bin/sh
        cmdTC=/sbin/tc

        rateLANDl="60mbit"
        ceilLANDl="60mbit"
        rateLANUl="40mbit"
        ceilLANUl="40mbit"
        quantLAN="1514"

        # Nowadays the bandwidth limit is set to 100mbit.
        # We divide it into 60mbit download and 40mbit upload bands.
        rateHiDl="30mbit"
        ceilHiDl="60mbit"
        rateHiUl="20mbit"
        ceilHiUl="40mbit"
        quantHi="1514"

        rateLoDl="30mbit"
        ceilLoDl="60mbit"
        rateLoUl="20mbit"
        ceilLoUl="40mbit"
        quantLo="1514"

        devNIF=eth0
        devFIF=ifb0

        modprobe ifb
        ip link set $devFIF up 2>/dev/null
        #exit 0

        ################################################################
        # Remove disciplines from network and fake interfaces
        ################################################################
        $cmdTC qdisc del dev $devNIF root 2>/dev/null
        $cmdTC qdisc del dev $devFIF root 2>/dev/null
        $cmdTC qdisc del dev $devNIF ingress 2>/dev/null

        if [ "$1" = "down" ]; then
            exit 0
        fi

        ################################################################
        # Create disciplines for network interface
        ################################################################
        $cmdTC qdisc add dev $devNIF root handle 1:0 htb default 12

        # Create classes for network interface
        $cmdTC class add dev $devNIF parent 1:0 classid 1:1 htb rate ${rateLANDl} ceil ${ceilLANDl} quantum ${quantLAN}
        $cmdTC class add dev $devNIF parent 1:1 classid 1:11 htb rate ${rateHiDl} ceil ${ceilHiDl} quantum ${quantHi}
        $cmdTC class add dev $devNIF parent 1:1 classid 1:12 htb rate ${rateLoDl} ceil ${ceilLoDl} quantum ${quantLo}
        $cmdTC qdisc add dev $devNIF parent 1:11 handle 111: sfq perturb 10
        $cmdTC qdisc add dev $devNIF parent 1:12 handle 112: sfq perturb 10

        # Create filters for network interface
        $cmdTC filter add dev $devNIF protocol all parent 1:0 u32 match ip dst 10.252.2.0/24 flowid 1:11
        $cmdTC filter add dev $devNIF protocol all parent 111: handle 111 flow hash keys dst divisor 1024 baseclass 1:11
        $cmdTC filter add dev $devNIF protocol all parent 112: handle 112 flow hash keys dst divisor 1024 baseclass 1:12

        ################################################################
        # Create disciplines for fake interface
        ################################################################
        $cmdTC qdisc add dev $devFIF root handle 1:0 htb default 12

        # Create classes for fake interface
        $cmdTC class add dev $devFIF parent 1:0 classid 1:1 htb rate ${rateLANUl} ceil ${ceilLANUl} quantum ${quantLAN}
        $cmdTC class add dev $devFIF parent 1:1 classid 1:11 htb rate ${rateHiUl} ceil ${ceilHiUl} quantum ${quantHi}
        $cmdTC class add dev $devFIF parent 1:1 classid 1:12 htb rate ${rateLoUl} ceil ${ceilLoUl} quantum ${quantLo}
        $cmdTC qdisc add dev $devFIF parent 1:11 handle 111: sfq perturb 10
        $cmdTC qdisc add dev $devFIF parent 1:12 handle 112: sfq perturb 10

        # Create filters for fake interface
        $cmdTC filter add dev $devFIF protocol all parent 1:0 u32 match ip src 10.252.2.0/24 flowid 1:11
        $cmdTC filter add dev $devFIF protocol all parent 111: handle 111 flow hash keys src divisor 1024 baseclass 1:11
        $cmdTC filter add dev $devFIF protocol all parent 112: handle 112 flow hash keys src divisor 1024 baseclass 1:12

        ################################################################
        # Create redirect disciplines from network to fake interface
        ################################################################
        $cmdTC qdisc add dev $devNIF handle ffff:0 ingress
        $cmdTC filter add dev $devNIF parent ffff:0 protocol all u32 match u32 0 0 action mirred egress redirect dev $devFIF

    Here is my /etc/modules:

        loop
        ifb
        ppp_mppe
        nf_conntrack_pptp
        nt_conntrack_proto_gre
        nf_nat_pptp
        nf_nat_proto_gre

    The system is: Linux wall 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Read the article

  • Network switching issues with MacOS 10.7?

    - by Denis
    I'm having a weird problem and hope somebody can give me a tip on which way to dig. I'm using a MacBook Pro with Lion 10.7.3 both at my workplace and at home.

    At work, we have a domain-based network with 802.1x authorization (more than 400 computers), and I connect to it using an Ethernet cable. The IP range is 10.10.2.*. All network settings are set up automatically by DHCP. Also, in the Users & Groups settings, I have a Network Account Server set up for my work domain server - it is available only from the corporate network.

    At home, I have an ADSL router that shares the Internet connection over WiFi in NAT mode, and I connect over WiFi. The router gives out addresses from the 192.168.1.* range, and all settings are also set up by the router's DHCP.

    So, my problem is the following. When I come back home from the office, I open my MacBook and AirPort automatically connects to my WiFi network. After this, for about one minute, I am able to browse sites and ping hosts successfully. But after that minute, the network connection breaks down: all pings return timeouts, and traceroute to google.com stops at 192.168.1.1 (which is my router). This lasts for 3-4 minutes, after which the network connection is automatically repaired and all pings go smoothly again. At the same time, while my MacBook is returning timeouts, I can successfully ping any host from my wife's MacBook - so this doesn't look like a router issue. When I come to the office, I don't have any issues, and the Internet connection is available and stable moments after the Ethernet cable is plugged in.

    Does anybody have any clues about this? What should I monitor, and which settings should I look at to resolve this issue? Please ask for whatever additional information I should provide. Hoping for good advice, and thanks in advance!
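    To pin down exactly when the outages start and end, a tiny connectivity logger can run during the failure window. A sketch (the gateway address is the one from the post; -t is macOS ping's timeout flag):

        import subprocess
        import time

        GATEWAY = "192.168.1.1"

        while True:
            ok = subprocess.call(["ping", "-c", "1", "-t", "2", GATEWAY],
                                 stdout=subprocess.DEVNULL,
                                 stderr=subprocess.DEVNULL) == 0
            print(time.strftime("%H:%M:%S"), "up" if ok else "DOWN")
            time.sleep(5)

    Correlating these timestamps with the system log should show whether the drops line up with DHCP renewals, mDNSResponder activity, or AirPort roaming events.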

    Read the article

  • Week 21: FY10 in the Rear View Mirror

    - by sandra.haan
    FY10 is coming to a close, and before we dive into FY11 we thought we would take a walk down memory lane and reminisce about some of our favorite Oracle PartnerNetwork activities.

    June 2009 brought One Red Network to partners, offering access to the same virtual kickoff environment used by Oracle employees. It was a new way to deliver valuable content to key stakeholders (and without the 100+ degree temperatures). Speaking of hot, Oracle also announced in June new licensing options for our ISV partners. This model enables an even broader community of ISVs to build, deploy and manage SaaS applications on the same platform.

    While some people took the summer off, the OPN Program team was working away to deliver a brand new partner program - Oracle PartnerNetwork Specialized - at Oracle OpenWorld in October. Specialized. Recognized. Preferred. If you haven't gotten the message yet, we may need an emergency crew to pull you out from under that rock you've been hiding under. But seriously, the announcement at the OPN Forum drew a big crowd, and our FY11 event is shaping up to be just as exciting.

    OPN Specialized was announced in October, and we opened our doors for enrollment in December 2009. To mark our grand opening we held our first-ever social webcast, allowing partners from around the world to interact with us live throughout the day. We had a lot of great conversations and really enjoyed the chance to speak with so many of you.

    After a short holiday break we were back at it - just a small announcement - Oracle's acquisition of Sun. In case you missed it, there was a short field report from Ted Bereswill, SVP North America Alliances & Channels, on the partner events to support the announcement. And while we're announcing things - did we mention that both Ted Bereswill and Judson Althoff were named Channel Chiefs by CRN? Not only do we have a couple of Channel Chiefs, but Oracle also won the Partner Program 5 Star Programs Award, and took top honors at the CRN Channel Champion Awards for Financial Factors/Financial Performance in the category of Data and Information Management, and at the Xchange Solution Provider event in March 2010. We actually caught up with Judson at this event for a quick recap of our participation.

    But awards aside, let's not forget our main focus in FY10, and that is Specialization. In April we announced that we had over 35 Specializations available for partners, and a plan to deliver even more in FY11.

    We are just days away from the end of FY10, but we hope you enjoyed our walk down memory lane. We are already planning lots of activity for our partners in FY11, starting with our Partner Kickoff event on June 29th. Join us to hear the vision and strategy for FY11 and interact with regional A&C leaders. We look forward to talking with you then.

    The OPN Communications Team

    Read the article

  • PPTP VPN connects via NM but goes down during SSH connection

    - by Andrea Olivato
    I set up a PPTP VPN connection via Network Manager, and it connects correctly (I see the lock near the notification icon and the message "VPN connection has been successfully..."). As soon as I try to perform any SSH connection via the established tunnel, the connection itself goes down with the message "VPN connection failed". The SSH connection always fails at:

        debug1: SSH2_MSG_KEXINIT sent

    I've looked into the system logs, and this is the log:

        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> Starting VPN service 'pptp'...
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' started (org.freedesktop.NetworkManager.pptp), PID 7093
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' appeared; activating connections
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: init (1)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: starting (3)
        Dec 12 12:25:00 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (Connect) reply received.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP4 Config Get) reply received from old-style plugin.
        Dec 12 12:25:05 ushuaia NetworkManager[1155]: <info> VPN Gateway: 5.98.141.210
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN connection 'Redation' (IP Config Get) complete.
        Dec 12 12:25:06 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: started (4)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopping (5)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state changed: stopped (6)
        Dec 12 12:25:14 ushuaia NetworkManager[1155]: <info> VPN plugin state change reason: 0
        Dec 12 12:25:15 ushuaia NetworkManager[1155]: <warn> error disconnecting VPN: Could not process the request because no VPN connection was active.
        Dec 12 12:25:20 ushuaia NetworkManager[1155]: <info> VPN service 'pptp' disappeared

    Please note that the same VPN is configured on my colleagues' Windows 7 machines and works without problems when they use PuTTY to connect via SSH.

    Read the article

  • Problem getting GOBI 2000 HS to work

    - by Zypher
    I've been trying to get my integrated GOBI WWAN card to work under 10.10 for a while now. I was able to get Network Manager to see the card after installing the gobi-loader package, and I was able to set up the connection, but I cannot establish a connection to Verizon. Below is the output from /var/log/daemon.log as I try to connect:

        Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Quiting due to inactivity
        Oct 19 14:29:42 gbeech-x201 AptDaemon: INFO: Shutdown was requested
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) starting connection 'Verizon connection'
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 3 -> 4 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 6 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) scheduled...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) started...
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 6 -> 4 (reason 0)
        Oct 19 14:33:45 gbeech-x201 NetworkManager[1105]: <info> Activation (ttyUSB0) Stage 1 of 5 (Device Prepare) complete.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> CDMA connection failed: (32) No service
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 4 -> 9 (reason 0)
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Marking connection 'Verizon connection' invalid.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <warn> Activation (ttyUSB0) failed.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): device state change: 9 -> 3 (reason 0)
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> (ttyUSB0): deactivating device (reason: 0).
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.
        Oct 19 14:34:46 gbeech-x201 NetworkManager[1105]: <info> Policy set 'Auto SO-GUEST' (wlan0) as default for IPv4 routing and DNS.

    Read the article

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. The worst part is, it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me. While this machine was physically connected to the network, no other machine could connect to the outside internet. While things were broken, running arp showed our gateway's IP address (10.0.1.1) listed as "invalid". Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as owning the gateway's IP address? There's nothing at all in syslog during the time it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update ** It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2 ** When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context ** 10.0.x.x is the main subnet. 10.0.1.1 is the main internet gateway. 10.0.1.44 is a printer. 10.0.2.* devices are all laptops/workstations. I have no idea what's using the 192.168.x.x subnet -- your guesses are at least as good as mine. A VM on a workstation? A misconfigured WAP? Somebody re-sharing wifi? A machine that failed to DHCP? The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3. Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
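    One way to catch the machine that is answering for the gateway's address is ARP duplicate-address detection, run from a healthy host on the same segment while the suspect server is plugged in (a sketch using the iputils arping common on Linux; the interface name is an assumption):

        # DAD mode: probe for 10.0.1.1 and report every MAC that answers
        sudo arping -D -c 4 -I eth0 10.0.1.1

    Any responder whose MAC is not the real gateway's (c0:c1:c0:77:25:8e in the working capture above) is the culprit.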

    Read the article

  • Proper network configuration for a KVM guest to be on the same networks at the host

    - by Steve Madsen
    I am running a Debian Linux server on Lenny. Within it, I am running another Lenny instance using KVM. Both servers are externally available, with public IPs, as well as a second interface with private IPs for the LAN. Everything works fine, except that the VM sees all network traffic as originating from the host server. I suspect this might have something to do with the iptables-based firewall I'm running on the host. What I'd like to figure out is how to properly configure the host's networking so that all of these requirements are met:

        - Both host and VMs have 2 network interfaces (public and private).
        - Both host and VMs can be independently firewalled.
        - Ideally, VM traffic does not have to traverse the host firewall.
        - VMs see real remote IP addresses, not the host's.

    Currently, the host's network interfaces are configured as bridges. eth0 and eth1 do not have IP addresses assigned to them, but br0 and br1 do. /etc/network/interfaces on the host:

        # The primary network interface
        auto br1
        iface br1 inet static
            address 24.123.138.34
            netmask 255.255.255.248
            network 24.123.138.32
            broadcast 24.123.138.39
            gateway 24.123.138.33
            bridge_ports eth1
            bridge_stp off

        auto br1:0
        iface br1:0 inet static
            address 24.123.138.36
            netmask 255.255.255.248
            network 24.123.138.32
            broadcast 24.123.138.39

        # Internal network
        auto br0
        iface br0 inet static
            address 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            bridge_ports eth0
            bridge_stp off

    This is the libvirt/qemu configuration file for the VM:

        <domain type='kvm'>
          <name>apps</name>
          <uuid>636b6620-0949-bc88-3197-37153b88772e</uuid>
          <memory>393216</memory>
          <currentMemory>393216</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='i686' machine='pc'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='cdrom'>
              <target dev='hdc' bus='ide'/>
              <readonly/>
            </disk>
            <disk type='file' device='disk'>
              <source file='/raid/kvm-images/apps.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
            <interface type='bridge'>
              <mac address='54:52:00:27:5e:02'/>
              <source bridge='br0'/>
              <model type='virtio'/>
            </interface>
            <interface type='bridge'>
              <mac address='54:52:00:40:cc:7f'/>
              <source bridge='br1'/>
              <model type='virtio'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target port='0'/>
            </console>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
          </devices>
        </domain>

    Along with the rest of my firewall rules, the firewalling script includes this command to pass packets destined for a KVM guest:

        # Allow bridged packets to pass (for KVM guests).
        iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    (Not applicable to this question, but a side effect of my bridging configuration appears to be that I can't ever shut down cleanly. The kernel eventually tells me "unregister_netdevice: waiting for br1 to become free" and I have to hard-reset the system. Maybe a sign I've done something dumb?)
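    The reason bridged frames hit the host's FORWARD chain at all is the kernel's bridge-netfilter feature; the physdev rule above is the usual workaround. If the goal is for guest traffic to bypass the host firewall entirely (so the VM also sees real source addresses rather than anything rewritten by the host's NAT rules), a hedged sketch is to switch bridge netfilter off; these sysctls exist on Lenny-era kernels with the bridge module loaded:

        # Stop handing bridged frames to iptables/ip6tables/arptables
        sysctl -w net.bridge.bridge-nf-call-iptables=0
        sysctl -w net.bridge.bridge-nf-call-ip6tables=0
        sysctl -w net.bridge.bridge-nf-call-arptables=0
        # Persist across reboots
        echo "net.bridge.bridge-nf-call-iptables = 0" >> /etc/sysctl.conf

    With bridge netfilter off, the host firewall no longer filters guest traffic, so each VM must run its own firewall, which matches the "independently firewalled" requirement.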

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc., but our network admins have some fairly strict security settings in place that squashed this idea: they use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X be assigned two IP addresses, so this didn't work. There may have been a way around these issues in the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running, and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entries for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that should be simple. Any alternate ideas out there?

    Step 1: Enable ARP filtering on all interfaces:

        # sysctl -w net.ipv4.conf.all.arp_filter=1
        # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf

    From the file networking/ip-sysctl.txt in the Linux kernel docs:

        arp_filter - BOOLEAN
        1 - Allows you to have multiple network interfaces on the same
            subnet, and have the ARPs for each interface be answered based
            on whether or not the kernel would route a packet from the
            ARP'd IP out that interface (therefore you must use source
            based routing for this to work). In other words it allows
            control of which cards (usually 1) will respond to an arp
            request.
        0 - (default) The kernel can respond to arp requests with addresses
            from other interfaces. This may seem wrong but it usually makes
            sense, because it increases the chance of successful
            communication. IP addresses are owned by the complete host on
            Linux, not by particular interfaces. Only for more complex
            setups like load-balancing, does this behaviour cause problems.

        arp_filter for the interface will be enabled if at least one of
        conf/{all,interface}/arp_filter is set to TRUE, it will be disabled
        otherwise

    Step 2: Implement source-based routing. I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables:

        ... top of file omitted ...
        1 eth0
        2 eth1

    Define the routes for these two tables:

        # ip route add default via 10.0.0.1 table eth0
        # ip route add default via 10.0.0.1 table eth1
        # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0
        # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1

    Define the rules for when to use the new routing tables:

        # ip rule add from 10.0.0.100 table eth0
        # ip rule add from 10.0.0.101 table eth1

    The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this:

        # ip route add default via 10.0.0.1 dev eth0
        # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100
        # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101

    And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.
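    To sanity-check the policy routing before trusting it, here is a short verification sketch (table names match those defined above):

        # Confirm the rules are installed
        ip rule show
        # Inspect each per-interface table
        ip route show table eth0
        ip route show table eth1
        # Ask the kernel which path a packet sourced from each address takes
        ip route get 8.8.8.8 from 10.0.0.100
        ip route get 8.8.8.8 from 10.0.0.101

    Each "ip route get ... from" line should name a different outgoing device; if both name eth0, the rules are not matching.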

    Read the article

  • Authenticating your windows domain users in the cloud

    - by cibrax
    Moving to the cloud can represent a big challenge for many organizations when it comes to reusing existing infrastructure. For applications that drive existing business processes in the organization, reusing IT assets like Active Directory represents a good part of that challenge: for example, a new mobile web application that sales representatives can use to interact with an existing CRM system in the organization. In the case of Windows Azure, the Access Control Service (ACS) already provides some integration with ADFS through WS-Federation. That means any organization can create a new trust relationship between the STS running in the ACS and the STS running in ADFS. As the following image illustrates, the ADFS running in the organization has to be exposed outside the network boundaries somehow to talk to the ACS. This is usually accomplished through an ADFS proxy running in a DMZ. This is the official story for authenticating existing domain users with the ACS. Getting an ADFS up and running in the organization, which talks to a proxy and also trusts the ACS, can be a painful experience. It basically requires advanced knowledge of ADFS and exhaustive testing to get everything right. However, if you want to get an infrastructure ready for authenticating your domain users in the cloud in a matter of minutes, you will probably want to take a look at the sample I wrote for talking to an existing Active Directory using a regular WCF service through the Service Bus Relay Binding. You can use WCF's ability to self-host the authentication service within any program running in the domain (typically a Windows service). The service does not require opening any inbound port, as it opens an outbound connection to the cloud through the Relay Service. In addition, the service is protected from being invoked by any unauthorized party by the ACS, which acts as a firewall between any client and the service. In that way, we can get a very safe solution up and running almost immediately. To make the solution even more convenient, I implemented an STS in the cloud that internally invokes the service running on premises for authenticating the users. Any existing web application in the cloud can just establish a trust relationship with this STS and authenticate the users via the WS-Federation passive profile with regular HTTP calls, which makes this very attractive for mobile web applications, for example. This is how the WCF service running on premises looks:

        [ServiceBehavior(Namespace = "http://agilesight.com/active_directory/agent")]
        public class ProxyService : IAuthenticationService
        {
            IUserFinder userFinder;
            IUserAuthenticator userAuthenticator;

            public ProxyService()
                : this(new UserFinder(), new UserAuthenticator())
            {
            }

            public ProxyService(IUserFinder userFinder, IUserAuthenticator userAuthenticator)
            {
                this.userFinder = userFinder;
                this.userAuthenticator = userAuthenticator;
            }

            public AuthenticationResponse Authenticate(AuthenticationRequest request)
            {
                if (userAuthenticator.Authenticate(request.Username, request.Password))
                {
                    return new AuthenticationResponse
                    {
                        Result = true,
                        Attributes = this.userFinder.GetAttributes(request.Username)
                    };
                }

                return new AuthenticationResponse { Result = false };
            }
        }

    Two external dependencies are used by this service, one for authenticating users (IUserAuthenticator) and one for retrieving user attributes from the user's directory (IUserFinder). The UserAuthenticator implementation is just a wrapper around the LogonUser Win32 API.
    The UserFinder implementation relies on Directory Services in .NET for searching the user attributes in an existing directory service like Active Directory or the local user store:

        public UserAttribute[] GetAttributes(string username)
        {
            var attributes = new List<UserAttribute>();

            var identity = UserPrincipal.FindByIdentity(
                new PrincipalContext(this.contextType, this.server, this.container),
                IdentityType.SamAccountName,
                username);
            if (identity != null)
            {
                var groups = identity.GetGroups();
                foreach (var group in groups)
                {
                    attributes.Add(new UserAttribute { Name = "Group", Value = group.Name });
                }
                if (!string.IsNullOrEmpty(identity.DisplayName))
                    attributes.Add(new UserAttribute { Name = "DisplayName", Value = identity.DisplayName });
                if (!string.IsNullOrEmpty(identity.EmailAddress))
                    attributes.Add(new UserAttribute { Name = "EmailAddress", Value = identity.EmailAddress });
            }

            return attributes.ToArray();
        }

    As you can see, the code is simple and uses the existing infrastructure in Azure to simplify a problem that looks very complex at first glance with ADFS. All the source code for this sample is available to download (or change) from this GitHub repository: https://github.com/AgileSight/ActiveDirectoryForCloud
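    For reference, here is a minimal sketch of what such a LogonUser wrapper could look like. This is an assumption based on the description above, not the author's actual IUserAuthenticator code; the P/Invoke signature and constants come from the Win32 API:

        using System;
        using System.Runtime.InteropServices;

        public class UserAuthenticator : IUserAuthenticator
        {
            // Win32 logon type/provider constants (see the LogonUser docs)
            const int LOGON32_LOGON_NETWORK = 3;
            const int LOGON32_PROVIDER_DEFAULT = 0;

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool LogonUser(string lpszUsername, string lpszDomain,
                string lpszPassword, int dwLogonType, int dwLogonProvider,
                out IntPtr phToken);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            public bool Authenticate(string username, string password)
            {
                IntPtr token = IntPtr.Zero;
                try
                {
                    // A network logon validates credentials without creating an
                    // interactive session. "." targets the local machine; pass
                    // your AD domain name instead for domain accounts.
                    return LogonUser(username, ".", password,
                        LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, out token);
                }
                finally
                {
                    if (token != IntPtr.Zero)
                        CloseHandle(token);
                }
            }
        }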

    Read the article

  • W006.15 isp or T-mobile Network Error.

    - by Mike Koerner
    I just bought a new T-Mobile G2. I wanted to turn on Wi-Fi calling but kept receiving "W006.15 isp or T-mobile Network Error" and being asked to register. After doing some research, I looked at my cable modem/router logs and configuration: there were fragmented packets originating from the phone, and the incoming fragmented packets were being blocked. I disabled the "Block fragmented packets" setting on the router and the phone was able to connect for Wi-Fi calling.
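    If you suspect the same issue on your own router, a hedged way to confirm fragment blocking from a Linux computer on the same Wi-Fi network (the packet sizes and target host are assumptions) is:

        # 1472 bytes of payload fits a 1500-byte MTU, so this should pass
        ping -M do -s 1472 8.8.8.8
        # 1600 bytes forces IP fragmentation; if the router drops fragments,
        # these pings get no replies
        ping -s 1600 8.8.8.8

    Wi-Fi calling runs over an IPsec tunnel, and its key-exchange packets are often large enough to be fragmented, which is why a fragment filter can break registration.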

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >