Search Results

Search found 14989 results on 600 pages for 'street address'.


  • CMOTECH D-50 modem installation in Ubuntu 12.04

    - by Ricardo
    I have recently upgraded from 10.04 to 12.04. I had a 3G D-50 modem from CMOTECH installed. The program for Debian is provided by a Swedish company (ice.net). Usually, after some mumbo jumbo of installing the requested libg++ libraries, you can install it; it ran in Ubuntu 8.04, 9.04, 10.04 and, as far as I know, in 11.04. Right after I upgraded, clicking on the ice.net icon still worked. However, I noticed that the D-50 USB modem was never mounted as USB and didn't show up in the Launchpad or workspace (as when you plug in a USB memory stick or another HD). I moved the icon around in the launcher, and ever since, when I click on the ice.net icon the same message appears: "please plug in your modem". The modem works (I've tested it in Windows since then) and it blinks blue (a sign that it works and picks up a signal). If I type lsusb, I see that Ubuntu sees it on bus 006:

        Bus 006 Device 003: ID 16d8:6803 CMOTECH Co., Ltd. CNU-680 CDMA EV-DO modem

    I've tried wvdial without success. How can I get the D-50 USB modem mounted as in the previous versions of Ubuntu? Any help will be much appreciated. Best, Ricardo
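
    For context, the CNU-680 is a CDMA EV-DO device that exposes a serial port rather than mass storage, so it will never show up in the Launchpad the way a memory stick does - that part is expected. As a hedged sketch of the dial-up route, assuming the usbserial binding and the device node below (with #777 as the conventional CDMA dial string; the ice.net credentials are placeholders):

        # bind the generic serial driver to the modem's USB IDs (IDs taken from the lsusb output above)
        sudo modprobe usbserial vendor=0x16d8 product=0x6803

    and a minimal /etc/wvdial.conf:

        [Dialer Defaults]
        Modem = /dev/ttyUSB0
        Baud = 460800
        Init1 = ATZ
        Phone = #777
        Username = <your ice.net username>
        Password = <your ice.net password>
        Stupid Mode = 1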

    Read the article

  • How to route Hyper-V VM traffic through the host VPN

    - by Random
    I'm using Windows 8.1 Pro with Hyper-V. I have several VMs for development, all of them connected with the host via an Internal adapter using the network 192.168.10.0/24, where 192.168.10.1 is my host's Hyper-V internal NIC address. When I'm not in my office I use a 3G USB dongle and a dial-up VPN connection. I would like to route traffic from all existing and future VMs through the VPN. In the best scenario, traffic would be routed only partially - just to the local company network 10.1.1.0/24. I don't want to use sharing because I'm switching between WiFi, the USB 3G dongle and VPN. Moving to other virtualization is also not an option for me.
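
    As a sketch of the routing half of what's being asked (not a full answer): the company subnet can be pinned to the VPN with a static route on the host, and the host has to be allowed to forward packets arriving from the internal vSwitch. The interface index 23 below is hypothetical - find the VPN adapter's index with route print:

        :: persistent route for the company subnet via the VPN interface (index 23 is hypothetical)
        route -p add 10.1.1.0 mask 255.255.255.0 0.0.0.0 if 23
        :: allow the host to forward packets from the 192.168.10.0/24 vSwitch (takes effect after a reboot)
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f

    The VMs would then need 192.168.10.1 as their gateway; whether the VPN endpoint accepts forwarded traffic is a separate question.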

    Read the article

  • Fix Google Reader Lag by Blocking Google Plus Button

    - by Jason Fitzpatrick
    Chrome: Many Google Reader fans have noticed, since the upgrades last month, that the service is unbearably slow. Speed things up by blocking the Google Plus button. Ever since the upgrade from the old Google Reader interface to the new integrated-with-Google-Plus interface, many Google Reader users have reported a painfully long lag between reading entries in Reader. Previously, hitting a keyboard shortcut or arrow button to move through the new stories was instant, with no noticeable lag. After the upgrade, a lag of 3-5 seconds per individual story became common (we experienced this annoying lag around the How-To Geek office immediately after the upgrade). One of the theories was that the addition of the Google Plus button to every article was causing memory issues. Geeks Are Sexy tested the theory by using AdBlock to block this address: plusone.google.com/u/0/_/+1/fastbutton. While people were reporting great success with that move (and you may find it works great too), we didn't have any luck. What did work for us was installing Chromeblock and, while visiting reader.google.com, clicking on the ChromeBlock toolbar button and blocking Google +1. After that, the 3-5 second lag vanished and browsing articles was as snappy as it had been. Hit up the link below to grab a copy of Chromeblock.
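
    For anyone who wants the manual equivalent of that block, it corresponds to a custom filter rule like the one below (standard Adblock Plus filter syntax; whether your particular blocker honors it is worth verifying):

        ||plusone.google.com/u/0/_/+1/fastbutton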

    Read the article

  • ArchBeat Link-o-Rama for 11/16/2011

    - by Bob Rhubart
    Size, Failure, and Optimization | Roger Sessions
    The slide deck from Roger Sessions' keynote address at the 2nd IT Architect Regional Conference in Bogota, Colombia.

    Webcast: Oracle Business Intelligence Mobile
    Event date: Tuesday, November 29, 2011. Time: 9 a.m. PT / 12 noon ET. Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI & Analytics Practice, Ascentt).

    Live Webinar: Solutions for MySQL High Availability (November 29)
    Tune into this webcast to learn how MySQL's High Availability solution can help you minimize downtime and ensure business continuity.

    Domain-Driven Design: Useful Models for Complex Problems | Eric Evans (@ericevans0)
    Eric Evans' slide deck from the recent IASA event in Spain.

    Oracle Hardware goes social
    Introducing the Oracle Hardware Social Media Hub - the new Facebook meeting place for the global hardware community. The hub now features a pioneering Q&A app called Oracle Ask the Expert, where you can ask questions and engage with Oracle experts.

    Review: WebLogic Server 11g Administration Handbook by S. Alapati
    Dr. Frank Munz, author of "Middleware and Cloud Computing", reviews the new WebLogic book by Sam Alapati and offers a quick overview of a couple of other new titles.

    SOA All the Time; Architects in AZ; Clearing Info Integration hurdles
    This week on the Architect Home Page on OTN.

    Read the article

  • VPN to Buffalo WHR-HP-G300N produces a Connection Error 807

    - by Darius
    My friend has a modem/router from Clear, and I have sent him a Buffalo router to put between his Clear device and the network. I walked him through establishing a VPN in DD-WRT, but when I try to VPN in I get VPN Connection Error 807. I am out of ideas on how to solve this. Any suggestions?

        Clear modem WAN:        xx.xx.xx.xx
        Clear modem NATs to:    192.168.15.XXX
        Clear modem DHCP:       192.168.15.2 - 192.168.15.2 (the range is limited to that ONE IP address)
        Clear modem DMZ:        192.168.15.2
        (the LAN of the Clear modem is 192.168.15.XXX)
        DD-WRT IP:              192.168.4.1
        Port forward:           1723 to 192.168.4.1
        PPTP server listens on  192.168.4.1

    Where is the problem with this setup?
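
    One thing worth checking: error 807 indicates the PPTP connection was interrupted, and PPTP needs GRE (IP protocol 47) forwarded in addition to TCP 1723. A hedged sketch of DD-WRT firewall rules covering both, assuming vlan2 is the WAN interface on this model:

        # PPTP control channel
        iptables -t nat -A PREROUTING -i vlan2 -p tcp --dport 1723 -j DNAT --to 192.168.4.1
        # GRE, the PPTP data channel (IP protocol 47)
        iptables -t nat -A PREROUTING -i vlan2 -p 47 -j DNAT --to 192.168.4.1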

    Read the article

  • How can I disable ipv6 on Ubuntu Server 8.04?

    - by Boden
    I'm trying to run Dell OMSA on Ubuntu 8.04. However, it's binding to ipv6 and not to an ipv4 address. I can't seem to figure out how to change this behavior. So, since I don't need ipv6 support, I'd like to just disable it and see if that clears things up. I've tried blacklisting ipv6 in /etc/modprobe.d/blacklist (blacklist ipv6), and turning it off in /etc/modprobe.d/aliases (alias net-pf-10 off). I'm seeing both solutions recommended in forums and blogs, but neither works.
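
    For reference, the module-alias recipe as it is usually quoted for 8.04 uses both aliases together, followed by a reboot and a check - a sketch only, with no guarantee it unblocks OMSA:

        # /etc/modprobe.d/aliases
        alias net-pf-10 off
        alias ipv6 off

        # after a reboot, verify:
        lsmod | grep ipv6    # should print nothing
        ip -6 addr           # should list no addresses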

    Read the article

  • Use Web Browser (Google Chrome or Firefox) to Open Web Sites from a List in a File

    - by MMA
    I have a (text) file containing a list of web site addresses. I know that I can open the file in an editor and then copy-paste the addresses into the browser address bar. That means if I have ten addresses in that file, I will need ten copy-paste operations, which is tedious. But is there a way to ask the browser (preferably Google Chrome or Firefox) to open all of them in one go, in different tabs? I am using Ubuntu, if that is related in any way to the solution you provide.
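
    A sketch of the usual shell approach, assuming one address per line in a file called urls.txt - both browsers treat additional URL arguments as new tabs in an existing window:

        # open every address as a new tab
        while read -r url; do
            firefox -new-tab "$url"    # or: google-chrome "$url"
        done < urls.txt

        # or simply pass them all at once:
        firefox $(cat urls.txt)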

    Read the article

  • supervise apache with daemontools

    - by perlwle
    I am trying to set up daemontools for two Apaches on one server: an Apache 2.2 listening on port 80 that proxies requests to a second Apache 1.3 listening on port 8888. The ./run scripts are as follows:

        #!/bin/sh
        # apache 1.3
        exec /apache_1_3/apache/bin/httpd -F

        #!/bin/sh
        # apache 2.2
        exec /apache_2_2/apache/bin/httpd -D FOREGROUND

    daemontools monitors both Apaches fine. However, if I stop Apache 2.2 (using svc -t or apachectl), Apache 1.3 logs the following error in error_log:

        [crit] (98)Address already in use: make_sock: could not bind to port 8888

    I have to manually run apachectl stop on Apache 1.3 to stop the error message from clobbering the log file. There was no such problem before using daemontools. Any idea why this is happening?
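
    For readers unfamiliar with the commands mentioned above, the relevant daemontools distinction (the service directory names here are assumptions):

        svc -t /service/apache22    # send TERM; supervise restarts the process
        svc -d /service/apache22    # bring it down and keep it down
        svc -u /service/apache22    # bring it back up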

    Read the article

  • In what year in history were computers first used to store porn? [closed]

    - by Emil H
    Of course this sounds like a joke question, but it's meant seriously. I remember being told by an old system administrator back in the early nineties about people asking about good FTPs for porn, and that they would, as a joke, always tell them to connect to 127.0.0.1. They would come back saying that there was a lot of porn at that address, but that oddly enough it seemed like they already had it all. Point being, it seems like it's been around for quite a while. Anyway: considering that a considerable portion of the internet is devoted to porn these days, it would be interesting to know if someone has any kind of idea as to when and where the phenomenon first arose. There must be some mention of this in old hacker folklore? (Changed to CW to emphasize that this isn't about rep, but about genuine curiosity. :)

    Read the article

  • Bridge virtual machines out WLAN interface

    - by Thomas
    It seems that my WLAN card's (Intel 5100 AGN) firmware doesn't allow "spoofing" MAC addresses. This has the side effect of destroying the ability to bridge out my virtual machines on that interface. Apparently this is a common thing with WLAN cards. I can see the incoming traffic just fine in my virtual machines, but their DHCP queries don't get bridged out of the WLAN card. It works perfectly well when using the wired ethernet port. Is there a workaround for this? MAC-NAT or something? I don't want to route my virtual machines out to the Internet, because I don't want my host OS to even have an IP address. I'm using Linux and KVM for virtualization.
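
    One commonly cited workaround in the MAC-NAT direction is ebtables MAC rewriting, sketched below for a single VM - the interface name, both MAC addresses and the VM's IP are assumptions, and return traffic has to be mapped back per VM, which is why this scales poorly:

        # rewrite the VM's source MAC to the WLAN card's own MAC on the way out
        ebtables -t nat -A POSTROUTING -o wlan0 -j snat --to-src 00:16:ea:12:34:56
        # map frames destined for the VM's IP back to the VM's MAC on the way in
        ebtables -t nat -A PREROUTING -i wlan0 -p IPv4 --ip-dst 192.168.1.50 -j dnat --to-dst 52:54:00:ab:cd:ef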

    Read the article

  • DSL PPPoE connection not working?

    - by Mussnoon
    I use a wired PPPoE connection to connect to the Internet. What I need to do on Windows to connect is put in the static IP address, gateway, subnet mask and DNS servers for my LAN card, then create a dialer for a PPPoE connection, put in my user name, the service name and the password, and "dial" this connection. And it works fine. On Ubuntu 10.04, however, I have tried setting things up in a similar fashion: I put in all the static addresses for the "automatic" wired connection, then put in the user name, service name and password for a "DSL" connection. It worked for a while, then stopped. I have tried putting all the details within the DSL configuration dialog; the same thing happened - it worked for a while, then stopped. I have tried deleting the ethernet connection and keeping only the DSL one with all the numbers put in place; again the same thing happened - it worked for a while, then stopped. Each time it connected, it connected randomly, after a few tries, and either stopped working within a few minutes or after I had rebooted. I have deleted and remade the connection dozens of times - even with different names - but nothing seems to work. I have also tried pppoeconf from the terminal; that didn't work either. I have checked /var/log/kern.log, but nothing changes in the file when I try to connect. I have also checked /sbin/route, but gedit can't even open it (it says it can't figure out the character encoding...). The "connection established" notification pops up from the top right corner, the same way as when the computer is actually connected to a network. Can anyone figure out what's wrong and how it can be solved?
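
    Two small diagnostic notes that follow from the description above: pppd logs to the system log rather than kern.log, and /sbin/route is a binary to run, not a text file to open. A sketch:

        sudo pon dsl-provider    # bring the pppoeconf-created connection up by hand
        plog                     # show pppd's recent log lines (they land in syslog, not kern.log)
        /sbin/route -n           # print the routing table instead of opening the binary in gedit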

    Read the article

  • Active X Control issue on Terminal Server 2003

    - by Saif Khan
    I have a security camera system that can be viewed remotely via a web browser. It works excellently, but only with IE 6 and up, and it requires an ActiveX control, ERViewer.ocx. Some users need to view the cameras via Windows Terminal Server, but when they open the link to the DVR they get the prompt to install the ActiveX control, and then the browser crashes when they try to install it. I logged in as admin and got the same issue. I called the DVR's tech support, but they have no idea - in other words, the usual useless tech support. Here is what I get in the error log:

        Faulting application iexplore.exe, version 7.0.6000.16735, faulting module ERViewer.ocx, version 1.6.0.8, fault address 0x000064d7.

    I am sure it could be some kind of permission issue with getting an OCX to run in IE. What else can I tweak?
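
    One avenue worth trying, sketched with assumed paths: copy ERViewer.ocx from a machine where the control already works and register it by hand, sidestepping the in-browser install that crashes:

        rem assumed source and destination; the OCX may live elsewhere on the working machine
        copy ERViewer.ocx %SystemRoot%\system32\
        regsvr32 %SystemRoot%\system32\ERViewer.ocx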

    Read the article

  • GWB | 30 in 60 Update – Enrique is almost there!

    - by Staff of Geeks
    We are very close to having our first blogger reach 30 posts: Enrique Lima. Stuart Brierley is over the hump with 16 posts, and Dave Campbell and Eric Nelson are definitely in the running. If you don't know what I am talking about, we are running a contest for our bloggers: anyone who blogs on Geekswithblogs and creates 30 posts from May 15th to July 13th will receive a custom Geekswithblogs.net t-shirt with their URL on the back. This could be their Geekswithblogs.net address or their custom domain. It is definitely not too late to get started, and with TechEd or WWDC right around the corner, there is definitely a lot to talk about.

    Current standings:

        Enrique Lima (28 posts) - http://geekswithblogs.net/enriquelima
        StuartBrierley (16 posts) - http://geekswithblogs.net/StuartBrierley
        Dave Campbell (12 posts) - http://geekswithblogs.net/WynApseTechnicalMusings
        Eric Nelson (10 posts) - http://geekswithblogs.net/iupdateable
        Christopher House (10 posts) - http://geekswithblogs.net/13DaysaWeek
        mbcrump (7 posts) - http://geekswithblogs.net/mbcrump
        Chris Williams (6 posts) - http://geekswithblogs.net/cwilliams
        Michael Stephenson (5 posts) - http://geekswithblogs.net/michaelstephenson
        Steve Michelotti (5 posts) - http://geekswithblogs.net/michelotti
        Liam McLennan (5 posts) - http://geekswithblogs.net/liammclennan

    Follow us on Twitter: @StaffOfGeeks

    Technorati tags: Geekswithblogs, 30 in 60, Standings

    Read the article

  • IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in

    - by RSolberg
    I just launched my first IIS FTP site, following many of the tutorials from IIS.NET. I'm using IIS users and permissions rather than anonymous and/or basic. This is what I'm seeing while trying to establish the connection:

        Status:   Resolving address of ftp.mydomain.com
        Status:   Connecting to ###.###.##.###:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFTPUser
        Response: 331 Password required for MyFTPUser.
        Command:  PASS ********************
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server

    Read the article

  • Debuild fails to make package for bluelog-1.04

    - by Dean Howell
    When trying to build a package for bluelog, debuild gives several errors. In the past, I've used checkinstall to quickly build crude packages. I am now trying to do it the right way and upload to a PPA. Bluelog can be found here: http://www.digifail.com/software/bluelog.shtml. Here is the output from debuild:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
        dpkg-buildpackage: source package bluelog
        dpkg-buildpackage: source version 1.0.4-0ubuntu1
        dpkg-buildpackage: source changed by Dean Howell <dean@unknown>
        dpkg-source --before-build bluelog
        dpkg-buildpackage: host architecture amd64
        fakeroot debian/rules clean
        dh clean
        dh_testdir
        dh_auto_clean
        make[1]: Entering directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        rm -rf bluelog www/cgi-bin/* *.o *.txt *.log *.gz *.cgi
        make[1]: Leaving directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        dh_clean
        dpkg-source -b bluelog
        dpkg-source: warning: Version number suggests Ubuntu changes, but Maintainer: does not have Ubuntu address
        dpkg-source: warning: Version number suggests Ubuntu changes, but there is no XSBC-Original-Maintainer field
        dpkg-source: info: using source format `3.0 (quilt)'
        dpkg-source: info: building bluelog using existing ./bluelog_1.0.4.orig.tar.gz
        dpkg-source: error: cannot represent change to bluelog/Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog: binary file contents changed
        dpkg-source: error: add Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog in debian/source/include-binaries if you want to store the modified binary in the debian tarball
        dpkg-source: error: unrepresentable changes to source
        dpkg-buildpackage: error: dpkg-source -b bluelog gave error exit status 2

    Read the article

  • Running 64 bit Ubuntu distribution from 32 bit Ubuntu

    - by csg
    Related to the question "How do I run qemu with 64bit processor on a 64bit machine?", I'm trying to run the latest Ubuntu 11.10 64-bit distribution under Ubuntu 11.04 32-bit, using qemu on a Core 2 Duo (64-bit CPU) machine, with the following qemu parameters and no success:

        qemu -cpu (kvm64|core2duo|qemu64) -boot d -cdrom ubuntu-11.10-desktop-amd64.iso
        qemu-system-x86_64 -boot d -cdrom ubuntu-11.10-desktop-amd64.iso

    The error under qemu is: "This kernel requires an x86-64 CPU, but only detected an i686 CPU. Unable to boot - please use a kernel appropriate for your CPU." Isn't qemu supposed to emulate a 64-bit machine? I think I'm missing something, but I can't figure it out. Here is my uname -m:

        i686

    Here is my /proc/cpuinfo:

        processor       : 1
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26GHz
        stepping        : 6
        cpu MHz         : 800.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 2
        core id         : 1
        cpu cores       : 2
        apicid          : 1
        initial apicid  : 1
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 4522.45
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:
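
    For what it's worth: the plain qemu binary emulates a 32-bit CPU, which matches the error, while qemu-system-x86_64 does full emulation and can boot a 64-bit guest even on a 32-bit host kernel (slowly - KVM acceleration of a 64-bit guest needs a 64-bit host kernel, so pure emulation is the only route on this i686 install). A sanity check along those lines:

        egrep -c '(vmx|svm)' /proc/cpuinfo    # >0 means VT-x/AMD-V is present (vmx appears in the flags above)
        qemu-system-x86_64 -m 1024 -boot d -cdrom ubuntu-11.10-desktop-amd64.iso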

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working at a company using Perforce and am making way for distributed version control with Mercurial. I've had success importing Perforce history using the perfarce extension (quite a suitable name - I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works:

    1. In Perforce, create a "client", which is kind of a description of what you will be constantly updating/checking out. This can only address one branch at a time (trunk or other).
    2. Once you do this, run hg clone p4://<server>/<client_name>
    3. Go to .hg/hgrc and add the Perforce path line: perforce = p4://<server>/<client_name>
    4. Work normally with the code under Mercurial; do hg pull perforce to sync up and hg push to export a changelist.

    What I'd like to be able to do is have a Perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch, it ends up on the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch. Even if I have no merging history, it would be sweet enough to be able to preserve it. There are also other plugins for CVCSs that let you integrate Mercurial, but AFAIK the Subversion one has the same problem. I don't think there is a straight-through way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
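
    To make the goal concrete, the [paths] section the question is reaching for might look like this - the server and client names are hypothetical, and nothing here by itself maps a pull onto a named Mercurial branch:

        [paths]
        perforce    = p4://p4server:1666/client_trunk
        perforce-R5 = p4://p4server:1666/client_R5

    With that in place, hg pull perforce-R5 at least pulls from the right Perforce client; landing the result on Mercurial's R5 branch is the part that still needs hook/script glue.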

    Read the article

  • Domain connection shows as "unauthenticated"

    - by gareth89
    I have seen various questions about this problem floating around, but either the circumstances aren't the same or the solution doesn't work, so I thought I would post it to see if anybody has any suggestions. Various domain PCs and laptops appear to randomly show the connection name "lewis.local 2 (Unauthenticated)" - lewis.local being our domain - with an exclamation mark where the network-type logo is normally shown. This also appears to happen every time we connect via VPN. Our setup is: 2 servers, both running Windows Server 2003 R2 (x32); the main server has AD, DNS and DHCP installed; IPv4 on approx. 30 client machines (some wired, some wireless). If anybody has any thoughts on solutions I would appreciate it. I have tried removing all but the AD server role and resetting all of the systems - nothing. It doesn't prevent anything from working; it works like a domain connection most of the time, however it is getting frustrating! Also, I don't know if it could have anything to do with it, but the DHCP server seems to have quite a long lead time issuing IP addresses to clients.

    Read the article

  • WSDL-world vs CLR-world – some differences

    - by nmarun
    A change in mindset is required when switching between a typical CLR application and a web service application. There are some things in a CLR environment that just don't add up in a WSDL arena (and vice versa). I'm listing some of them here. When I say WSDL-world, I'm mostly talking with respect to a WCF service and/or a web service.

    No (direct) method overloading: You definitely can have overloaded methods in, say, a console application, but when it comes to a WCF / web services application, you need to adorn these overloaded methods with a special attribute so the service knows which specific method to invoke. When you're working with WCF, use the Name property of the OperationContract attribute to provide unique names.

        [OperationContract(Name = "AddInt")]
        int Add(int arg1, int arg2);

        [OperationContract(Name = "AddDouble")]
        double Add(double arg1, double arg2);

    By default, the proxy generates the code for this as:

        [System.ServiceModel.OperationContractAttribute(
            Action="http://tempuri.org/ILearnWcfService/AddInt",
            ReplyAction="http://tempuri.org/ILearnWcfService/AddIntResponse")]
        int AddInt(int arg1, int arg2);

        [System.ServiceModel.OperationContractAttribute(
            Action="http://tempuri.org/ILearnWcfServiceExtend/AddDouble",
            ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/AddDoubleResponse")]
        double AddDouble(double arg1, double arg2);

    With web services, though, the story is slightly different. Even after setting the MessageName property of the WebMethod attribute, the proxy does not change the name of the method; only the underlying SOAP message changes.

        [WebMethod]
        public string HelloGalaxy()
        {
            return "Hello Milky Way!";
        }

        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public string HelloGalaxy(string galaxyName)
        {
            return string.Format("Hello {0}!", galaxyName);
        }

    The one thing you need to remember is to set the WebServiceBinding accordingly:

        [WebServiceBinding(ConformsTo = WsiProfiles.None)]

    The proxy is:

        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloGalaxy",
            RequestNamespace="http://tempuri.org/",
            ResponseNamespace="http://tempuri.org/",
            Use=System.Web.Services.Description.SoapBindingUse.Literal,
            ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        public string HelloGalaxy()

        [System.Web.Services.WebMethodAttribute(MessageName="HelloGalaxy1")]
        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://tempuri.org/HelloAnyGalaxy",
            RequestElementName="HelloAnyGalaxy",
            RequestNamespace="http://tempuri.org/",
            ResponseElementName="HelloAnyGalaxyResponse",
            ResponseNamespace="http://tempuri.org/",
            Use=System.Web.Services.Description.SoapBindingUse.Literal,
            ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        [return: System.Xml.Serialization.XmlElementAttribute("HelloAnyGalaxyResult")]
        public string HelloGalaxy(string galaxyName)

    You see the calling method name is the same in the proxy; however, the SOAP message that gets generated is different.

    Using interchangeable data types: See details on this here.

    Type visibility: In a CLR-based application, if you mark a field as private, well, we all know it's 'private'. Coming to the WSDL side of things, in a web service, private fields and web methods will not get generated in the proxy. In WCF, however, all your operation contracts will be public, as they get implemented from an interface. Even in case your ServiceContract interface is declared internal/private, you will see it as a public interface in the proxy. This is because type visibility is a CLR concept and has no bearing on WCF. Also, if a private field has the [DataMember] attribute in a data contract, it will get emitted in the proxy class as a public property for the very same reason.

        [DataContract]
        public struct Person
        {
            [DataMember]
            private int _x;

            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }

            [DataMember]
            public string Header { get; set; }
        }

    See, the '_x' field is a private member with the [DataMember] attribute, but the proxy class shows it as below:

        [System.Runtime.Serialization.DataMemberAttribute()]
        public int _x {
            get {
                return this._xField;
            }
            set {
                if ((this._xField.Equals(value) != true)) {
                    this._xField = value;
                    this.RaisePropertyChanged("_x");
                }
            }
        }

    Passing derived types to web methods / operation contracts: Once again, in a CLR application, I can have a derived class be passed as a parameter where a base class is expected. I have the following set up for my WCF service:

        [DataContract]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name="FirstName")]
            public string FName { get; set; }

            [DataMember]
            public string Header { get; set; }
        }

        [DataContract]
        public class Manager : Employee
        {
            [DataMember]
            private int _x;
        }

        // service contract
        [OperationContract]
        Manager SaveManager(Employee employee);

        // in my calling code
        Manager manager = new Manager {_x = 1, FirstName = "abc"};
        manager = LearnWcfServiceClient.SaveManager(manager);

    The above will throw an exception (screenshot omitted) that says, in short, that a Manager type was found where an Employee type was expected!

    Hierarchy flattening of interfaces in WCF: See details on this here. In the CLR world, you'll see the entire hierarchy as is. That's another difference.

    Using ref parameters: You can use ref for parameters, but the operation contract should not be one-way (that gives an error when you do an update service reference); it's arguably bad programming anyway - create a return object that is composed of everything you need! This one kind of stumped me. Not sure why I tried this, but you can pass parameters prefixed with the ref keyword (terms and conditions apply). The main issue is this: how would we know the changes that were made to a 'ref' input parameter are returned back from the service and updated in the local variable? It turns out both web services and WCF make this tracking happen by passing the input parameter in the response SOAP. This way, when the deserializer does its magic, it maps all the elements of the response XML, thereby updating our local variable. Here's what I'm talking about:

        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public string HelloGalaxy(ref string galaxyName)
        {
            string output = string.Format("Hello {0}", galaxyName);
            if (galaxyName == "Andromeda")
            {
                galaxyName = string.Format("{0} (2.5 million light-years away)", galaxyName);
            }
            return output;
        }

    This is how the request and response look in soapUI (screenshots omitted). As I said above, the behavior is quite similar for WCF as well. But the catch comes when you have one-way web methods / operation contracts. If you have an operation contract whose return type is void, is marked one-way, and has ref parameters, then you'll get an error message when you try to reference such a service.

        [OperationContract(Name = "Sum", IsOneWay = true)]
        void Sum(ref double arg1, ref double arg2);

        public void Sum(ref double arg1, ref double arg2)
        {
            arg1 += arg2;
        }

    This is what I got when I did an update to my service reference (screenshot omitted). Makes sense, because a OneWay operation is... one-way - there's no returning from this operation. You can also have a one-way web method:

        [SoapDocumentMethod(OneWay = true)]
        [WebMethod(MessageName = "HelloAnyGalaxy")]
        public void HelloGalaxy(ref string galaxyName)

    This will throw an exception message similar to the one above when you try to update your web service reference. In the CLR space, there's no such concept of a 'one-way' street! Yes, there's void, but you very well can have ref parameters returned through such a method. Just a point here: although the ref/out concept sounds cool, it generally is a code smell. The better approach is to always return an object that is composed of everything you need returned from a method. These are some of the differences that we need to bear in mind when dealing with services, which are different from our daily 'CLR' life.

    Read the article

  • IIS displaying page differently when localhost is used in URL vs. hostname

    - by maik
    I'm having (yet another) strange problem with IIS. When I view an ASPX page I've designed on my local machine by browsing to http://localhost/page.aspx, the page looks as expected (and looks the same in IE, Firefox and Chrome). If I change localhost to my_hostname, the page is rendered with a disabled vertical scroll bar. I first noticed the behavior when I published my site to our live server and saw the same discrepancy. After beating my head against the wall, I tried what I described above and was able to duplicate my "problem". So with that, I turn to you guys. This wouldn't really be an issue (save for the cross-browser inconsistency) except that it screws up an "absolute"ly positioned <div>, moving it partway off the screen instead of it being centered like it should be (and is, when viewed any other way except in IE when the address is anything but localhost).

    Read the article

  • ISA Server Route Add Question

    - by Kip
    Hi all, I have a situation where I have an ISA 2006 server (on Win2k3) that has internal and externally facing NICs. All works fine, but I need to add a couple of routes for the following reasons: our monitoring software is on a different network, and our terminal server is on a different network. Currently, access to the internet through this proxy server from the terminal server fails. Also, monitoring of the ISA server via a remote monitor, or by the installed agent talking to the remote monitor (BMC), fails. The default enterprise rule on ISA blocks the traffic, as I believe it doesn't trust/know about those networks. Here is my routing table (screenshot omitted). I need to add a couple of addresses, this one being the main one: 192.168.245.137 / mask 255.255.255.192 / gateway 192.168.245.129. But I can't get it to work. Routing is not my strong point, and at the moment I have no one else available to help. Can you offer any assistance? Please ask if you need more info.
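
    One detail that may explain the failure: route add expects a network address, and 192.168.245.137 masked with 255.255.255.192 gives 192.168.245.128, so Windows will reject .137 as a destination for that mask. The command would therefore be (with -p making it survive reboots):

        route -p add 192.168.245.128 mask 255.255.255.192 192.168.245.129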

    Read the article

  • BIND - zone not loaded due to errors

    - by Johan Barelds
    After upgrading from Ubuntu 8.04 to 10.04, my DNS isn't working properly anymore. I keep getting this error when I run named-checkzone example.com /var/cache/bind/example.com.zone.db:

        zone example.com/IN: NS 'mx002a.example.com' has no address records (A or AAAA)
        zone example.com/IN: not loaded due to errors.

    In /var/cache/bind/example.com.db:

        $TTL 3D
        @ IN SOA mx002a.example.com. chantra.example.com. (
                200608081 ; serial, todays date + todays serial #
                8H        ; refresh, seconds
                2H        ; retry, seconds
                4W        ; expire, seconds
                1D )      ; minimum, seconds
        ;
        ;
        mx002a.example.com  IN A  192.168.85.19
        example.com.        IN NS mx002a.example.com.
        mx001 60            IN A  192.168.85.17
        mx001 60            IN A  192.168.85.18

    Read the article

  • RHEL/CentOS vs. Ubuntu (possibly other Debian-based systems) in handling duplicate IPs in the same subnet

    - by johnshen64
    This has bothered me for quite a while, but I never found out why, or how to change the behavior. IP duplicates can be caused by typos or DHCP errors, etc., but they do occur from time to time. In RPM-based systems such as CentOS, the old server with the duplicate IP wins, and the new server gets an error bringing up the NIC (IP address already in use). This is somewhat harmless, because we can just fix the system that is coming up. Ubuntu, on the other hand, happily grabs the used IP for itself and leaves the old server/device without a valid IP. This is the more dangerous behavior because it causes outages. What I want is to change the Ubuntu behavior to that of CentOS/RHEL, so I would appreciate any help.

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same cloud service

    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default so we can connect to them and check the connections to the SQL instances; this is for demo purposes only and not actually required.

    Step 1 - Provision VMs and configure ports. Provision VM1, named DemoVM1, then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 above (example portal screenshots omitted). After provisioning the two VMs, the default port configurations are as shown (screenshots omitted).

    Step 2 - Verify/confirm the TCP port used by the database engine. By default, the port will be 1433 - this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above - this also ensures the VMs complete sysprep and configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine; in this case, 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4 and 5 for the second VM, DemoVM2.

    Step 3 - Remote PowerShell to DemoVM1:

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall:

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
        netsh advfirewall firewall show rule name=DemoVM1Port

    Output:

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 - Now connect from DemoVM2 to the DB instance in DemoVM1 (screenshot omitted).

    Step 6 - Close port 1433 in the Windows firewall:

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 - Try to connect from DemoVM2 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM2 to VM1.

    Scenario 2: VMs provisioned in different cloud services

    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default, for demo purposes only; it is not actually needed.

    Step 1 - Provision new VM3, named DemoVM3 (example portal screenshots omitted). After provisioning is complete, the default port configurations are as shown (screenshots omitted).

    Step 2 - Add a public port to VM1 to connect to from VM3's DB instance. Since VM3 and VM1 are not connected through the same cloud service, we will need to specify the full DNS address, which includes the public port, when connecting between the machines. We shall add a public port - 57000 in this case - that is linked to private port 1433 and will be used later to connect to the DB instance.

    Step 3 - Remote PowerShell to DemoVM1:

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 - Open port 1433 in the Windows firewall:

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
        netsh advfirewall firewall show rule name=DemoVM1Port

    Output:

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 - Now connect from DemoVM3 to the DB instance in DemoVM1. RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 - Close port 1433 in the Windows firewall:

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port
        No rules match the specified criteria.

    Step 7 - Try to connect from DemoVM3 to the DB instance in DemoVM1. Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM3 to VM1.

    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References: SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • How do I make my USB Bluetooth dongle work in Ubuntu 11.04? (Can't init device hci0: Connection timed out (110)) [closed]

    - by MaikoID
    I have a USB Bluetooth dongle:

        root@maiko-cce-lin:~# lsusb | grep Bluetooth
        Bus 001 Device 007: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)

    but it isn't working properly: it hardly ever works, and when it does, it stops working on my next reboot. What I've tried: It isn't software blocked:

        root@maiko-cce-lin:~# rfkill list
        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no

    The device is recognized by hciconfig:

        root@maiko-cce-lin:~# hciconfig -a
        hci0:  Type: BR/EDR  Bus: USB
               BD Address: 00:1F:81:00:01:1C  ACL MTU: 1021:4  SCO MTU: 180:1
               DOWN
               RX bytes:330 acl:0 sco:0 events:8 errors:0
               TX bytes:24 acl:0 sco:0 commands:30 errors:22
               Features: 0xff 0x3e 0x09 0x76 0x80 0x01 0x00 0x80
               Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
               Link policy:
               Link mode: SLAVE ACCEPT

    but I can't bring the hci interface up, and I don't understand why:

        root@maiko-cce-lin:~# hciconfig hci0 up
        Can't init device hci0: Connection timed out (110)

    The hcitool command doesn't show any device:

        root@maiko-cce-lin:~# hcitool dev
        Devices:

    I've also tried restarting the bluetooth service with the command below and running all the previous commands again, without success:

        root@maiko-cce-lin:~# service bluetooth restart
         * Stopping bluetooth    [ OK ]
         * Starting bluetooth    [ OK ]

    The dongle works if you disconnect it from USB, wait a few seconds and connect it again, so there must be a better solution for it (a solution not involving physically removing the dongle!).
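
    Since a physical replug fixes it, a software replug may as well: the device can be unbound and rebound through sysfs. The port ID 1-4 below is hypothetical - match yours by comparing ls /sys/bus/usb/devices with lsusb -t:

        echo -n '1-4' > /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo -n '1-4' > /sys/bus/usb/drivers/usb/bind
        hciconfig hci0 up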

    Read the article
