Search Results

Search found 11077 results on 444 pages for 'no such ip'.


  • Error number 13 - Remote access svn with dav_svn failing

    - by C. Ross
    I'm getting the following error on my svn repository:

    <D:error> <C:error/> <m:human-readable errcode="13"> Could not open the requested SVN filesystem </m:human-readable> </D:error>

    I've followed the instructions from How-To Geek and the Ubuntu community page, but without success. I've even given the repository 777 permissions.

    <Location /svn/myProject>
      # Uncomment this to enable the repository
      DAV svn
      # Set this to the path to your repository
      SVNPath /svn/myProject
      # Comments
      # Comments
      # Comments
      AuthType Basic
      AuthName "My Subversion Repository"
      AuthUserFile /etc/apache2/dav_svn.passwd
      # More Comments
    </Location>

    The permissions follow:

    drwxrwsrwx 6 www-data webdev 4096 2010-02-11 22:02 /svn/myProject

    and svnadmin validates the directory:

    $ svnadmin verify /svn/myProject/
    * Verified revision 0.

    I'm accessing the repository at http://ipAddress/svn/myProject

    Edit: The Apache error log says:

    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] (20014)Internal error: Can't open file '/svn/myProject/format': Permission denied
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not fetch resource information.  [500, #0]
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]
    [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem  [500, #13]

    even though I confirmed that this file is ugo readable and writable. What am I doing wrong?

    Read the article

  • Do connection string DNS lookups get cached?

    - by joshcomley
    Suppose the following: I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network. I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network. Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2. So, I update the DNS for database.mywebsite.com to point to 222.222.2.2. Will all the applications and computers running them have cached the old resolved IP address? I'm assuming they will have. Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.
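
    Whether the old address lingers depends mostly on caching outside the application: the OS resolver cache generally honours the record's TTL, and some runtimes add their own cache on top (the asker's apps are .NET, but the JVM, for instance, caches successful lookups according to the networkaddress.cache.ttl security property). A minimal Java sketch, reusing the hostname from the question, that simply re-resolves the name so you can watch when a DNS change becomes visible to a client:

    ```java
    import java.net.InetAddress;
    import java.security.Security;

    public class DnsCheck {
        public static void main(String[] args) throws Exception {
            // Bound JVM-level caching of successful lookups to 60 seconds.
            // OS-level and upstream DNS caches are separate from this setting.
            Security.setProperty("networkaddress.cache.ttl", "60");

            String host = "database.mywebsite.com"; // hostname taken from the question

            for (int i = 0; i < 5; i++) {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
                Thread.sleep(10_000); // poll every 10 seconds
            }
        }
    }
    ```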

    Read the article

  • Java: Cleaning up what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset . You can see there that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error." What I want to know is what a SocketException: Connection reset really means beyond "underlying TCP/IP error." What really causes it? I doubt it has anything to do with the connection being closed, since closing a connection isn't an exception-worthy event, and reading from a closed connection is, but that isn't an "underlying TCP/IP error." My hypothesis is this: a connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and a SocketTimeoutException is generated only when no data is available to be read (since it is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.
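
    One reproducible way to see this exception locally, as a minimal Java sketch: enabling SO_LINGER with a zero timeout makes close() send a TCP RST instead of a graceful FIN, and the peer's next read() then typically fails with "Connection reset"; setSoTimeout() shows the separate SocketTimeoutException path taken when no data (and no reset) arrives in time. The port number is arbitrary.

    ```java
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketException;
    import java.net.SocketTimeoutException;

    public class ResetDemo {
        public static void main(String[] args) throws Exception {
            final int port = 9099; // arbitrary local port for the demo

            ServerSocket server = new ServerSocket(port);
            Thread acceptor = new Thread(() -> {
                try (Socket peer = server.accept()) {
                    // SO_LINGER with a zero timeout makes the implicit close()
                    // at the end of this block send an RST rather than a FIN.
                    peer.setSoLinger(true, 0);
                } catch (IOException ignored) {
                }
            });
            acceptor.start();

            Socket client = new Socket("localhost", port);
            client.setSoTimeout(2000); // read() gives up after 2s if nothing arrives
            acceptor.join();           // wait until the server side has closed

            try {
                client.getInputStream().read(); // usually throws "Connection reset"
            } catch (SocketTimeoutException e) {
                System.out.println("Timed out with no data and no reset: " + e.getMessage());
            } catch (SocketException e) {
                System.out.println("Reset observed: " + e.getMessage());
            } finally {
                client.close();
                server.close();
            }
        }
    }
    ```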

    Read the article

  • How to convert from hex-encoded string to a "human readable" string?

    - by John Jensen
    I'm using the Net-SNMP bindings for Python and I'm attempting to grab an ARP cache from a Brocade switch. Here's what my code looks like:

    #!/usr/bin/env python
    import netsnmp

    def get_arp():
        oid = netsnmp.VarList(netsnmp.Varbind('ipNetToMediaPhysAddress'))
        res = netsnmp.snmpwalk(oid, Version=2, DestHost='10.0.1.243', Community='public')
        return res

    arp_table = get_arp()
    print arp_table

    The SNMP code itself is working fine. Output from snmpwalk looks like this:

    <snip>
    IP-MIB::ipNetToMediaPhysAddress.128.10.200.6.158 = STRING: 0:1b:ed:a3:ec:c1
    IP-MIB::ipNetToMediaPhysAddress.129.10.200.6.162 = STRING: 0:1b:ed:a4:ac:c1
    IP-MIB::ipNetToMediaPhysAddress.130.10.200.6.166 = STRING: 0:1b:ed:38:24:1
    IP-MIB::ipNetToMediaPhysAddress.131.10.200.6.170 = STRING: 74:8e:f8:62:84:1
    </snip>

    But my output from the Python script yields a tuple of hex-encoded strings that looks like this:

    ('\x00$8C\x98\xc1', '\x00\x1b\xed;_A', '\x00\x1b\xed\xb4\x8f\x81', '\x00$86\x15\x81', '\x00$8C\x98\x81', '\x00\x1b\xed\x9f\xadA', ...etc)

    I've spent some time googling and came across the struct module and the .decode("hex") string method, but the .decode("hex") method doesn't seem to work:

    Python 2.7.3 (default, Apr 10 2013, 06:20:15)
    [GCC 4.6.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> hexstring = '\x00$8C\x98\xc1'
    >>> newstring = hexstring.decode("hex")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/encodings/hex_codec.py", line 42, in hex_decode
        output = binascii.a2b_hex(input)
    TypeError: Non-hexadecimal digit found
    >>>

    And the documentation for struct is a bit over my head.

    Read the article

  • PHP recaptcha send mail issues

    - by Mike
    Hey guys, if anybody can help me out I'd love it... What I have is a form that, when sent, uses doublecheck.php:

    <?php
    require_once('recaptchalib.php');
    $privatekey = "";
    $resp = recaptcha_check_answer($privatekey, $_SERVER["REMOTE_ADDR"], $_POST["recaptcha_challenge_field"], $_POST["recaptcha_response_field"]);
    if (!$resp->is_valid) {
        die ("Sorry please go back and try it again." . "" . $resp->error . ")");
    }
    if ($resp->is_valid) {
        require_once('sendmail.php');
    }
    ?>

    And then my sendmail.php:

    <?php
    $ip = $_POST['ip'];
    $httpref = $_POST['httpref'];
    $httpagent = $_POST['httpagent'];
    $visitor = $_POST['visitor'];
    $notes = $_POST['notes'];
    $attn = $_POST['attn'];
    $todayis = date("l, F j, Y, g:i a") ;
    $attn = $attn ;
    $subject = $attn;
    $notes = stripcslashes($notes);
    $message = " $todayis [EST] \n Attention: $attn \n Message: $notes \n From: $visitor ($Your Prayer or Concern)\n Additional Info : IP = $ip \n Browser Info: $httpagent \n Referral : $httpref \n ";
    $from = "From:\r\n";
    mail("", Prayers and Concerns, $message);
    ?>

    Date: Attention: Message: ", $notes); echo $notesout; ?> Next Page

    What I'm having a hard time with is that when it's successful I need to send out $notes, but $notes is always blank. Should I just put my sendmail PHP inside of my success PHP? Or can someone explain to me why $notes is blank? I do have my reCAPTCHA key in, and I also do have an email address; I kept some things private. There is also a notes textarea in my HTML.

    Read the article

  • Stale connection with Pheanstalk

    - by token47
    I'm using beanstalkd to offload some work to other machines. The setup is a bit unusual: the server is on the internet (public IP), but the consumers are behind ADSL lines in some people's homes. So there is a Linux server acting as a client, going out through a dynamic IP and connecting to the server to get a job. It's all PHP and I'm using the Pheanstalk library. Everything runs smoothly for some time, but then the ADSL line changes its IP (every 24 hours the provider forces a disconnect-reconnect) and the client just hangs, never to come out of "reserve". I thought that putting a timeout on the reserve would help, but it didn't. As it turns out, the client issues a command and blocks; it never checks the timeout. It just issues a reserve-with-timeout (instead of a simple reserve), and it is the server's responsibility to return a TIMED_OUT when the timeout occurs. The problem is, the connection is broken (but TCP/IP doesn't know about that yet, until one of the sides tries to talk to the other), and if the client is blocked reading, it will never return. The library seems to have support for some kinds of local timeouts (for example when trying to connect to the server), but it does not seem to cover this scenario. How could I detect the stale connection and force a reconnect? Is there some kind of keepalive in the protocol (and in Pheanstalk itself)? Thanks!
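
    This is not Pheanstalk-specific, but the usual defence at the socket level is a local read timeout set slightly longer than the server-side reserve-with-timeout, plus TCP keepalive, so a blocking read fails locally even when the link has silently died and the client can rebuild the connection. A rough Java sketch of that pattern (the hostname is a placeholder; 11300 is beanstalkd's default port):

    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class StaleConnectionGuard {
        // Placeholder host; 11300 is beanstalkd's default port.
        private static final String HOST = "beanstalk.example.com";
        private static final int PORT = 11300;

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(HOST, PORT), 5_000); // connect timeout
                    s.setKeepAlive(true);     // ask the OS to probe idle connections
                    s.setSoTimeout(130_000);  // local read timeout > reserve-with-timeout value
                    InputStream in = s.getInputStream();
                    // ... send reserve-with-timeout and parse the reply here ...
                    int first = in.read();    // unblocks locally even if the link silently died
                    if (first == -1) {
                        System.out.println("Server closed the connection; reconnecting");
                    }
                } catch (SocketTimeoutException e) {
                    System.out.println("No reply within the local timeout; reconnecting");
                } catch (IOException e) {
                    System.out.println("Connection problem (" + e.getMessage() + "); reconnecting");
                }
                Thread.sleep(1_000); // back off briefly, then build a fresh connection
            }
        }
    }
    ```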

    Read the article

  • Remotely connecting two non-local computers with sockets

    - by Velizar Hristov
    This question seems like something very obvious to ask, and yet I spent more than an hour trying to find an answer. First I host and wait for someone to connect. Then, from another instance of the application, I try to connect with a socket - for the constructor, I use InetAddress, port. The port is always right, and everything works if I use "localhost" for the address. However, if I type my IP, I get an IOException. I even sent the application to someone else, gave him my IP, and it didn't work. The aim of the application is to connect two computers. It's in Java. Here is the relevant code.

    Server:

    ServerSocket serverSocket = new ServerSocket(port);
    Socket clientSocket = serverSocket.accept();

    Client:

    InetAddress a = InetAddress.getByName(ip);
    Socket s = new Socket(a, port);

    I don't get past that. Obviously, the values of int port and String ip are taken from text fields.

    Edit: the purpose of my application is to connect two non-local computers.
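
    A likely culprit is that "my IP" as reported by the operating system is a private LAN address behind a home router, which a peer on the internet cannot reach unless the router forwards the chosen port to the hosting machine (and no firewall blocks it). A small sketch along the lines of the question's code, with the listener bound to the wildcard address and the client using an explicit connect timeout so a filtered or unforwarded port fails quickly instead of hanging:

    ```java
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class Peers {
        // Server side: bind to the wildcard address so connections arriving on any
        // local interface are accepted (this is also what ServerSocket(port) does).
        static void serve(int port) throws Exception {
            try (ServerSocket server = new ServerSocket()) {
                server.bind(new InetSocketAddress("0.0.0.0", port));
                try (Socket peer = server.accept()) {
                    System.out.println("Connected: " + peer.getRemoteSocketAddress());
                }
            }
        }

        // Client side: connect with an explicit timeout so an unreachable or
        // unforwarded port fails within 5 seconds instead of blocking.
        static void connect(String host, int port) throws Exception {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 5_000);
                System.out.println("Connected to " + s.getRemoteSocketAddress());
            }
        }
    }
    ```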

    Read the article

  • YouTube - Encrypted cookie string

    - by Robertof
    Hello! I'm new to Stack Overflow. I'm building a YouTube downloader in PHP, but YouTube has some IP checks. Because the PHP file is on a remote server, the IP of the server != the IP of the user, and the video download fails. So, maybe I've found a solution. YouTube sends a cookie with an encrypted string, which is the user IP. I need to know the encrypted-string algorithm and how to encrypt a string with it. Here is the string: nQ0CrJmASJk . It could be base64, but when I try to decode it with base64_decode, it gives me strange characters. You can check the cookie by requesting the main page of YouTube and checking the "Set-Cookie" headers. You will find a cookie with the name "VISITOR_INFO1_LIVE"; that is the encrypted string. Does anyone know what the algorithm is? Thanks. PS: sorry for my bad English. Cheers, Roberto.

    Read the article

  • MongoDB complex MapReduce of video logs

    - by Justin Hourigan
    I have a dataset from video streaming logs. Each video is identified by a FileGUID. The log entries record the FileGUID, the fragment of the video watched and the bandwidth it was watched at. I would like to create a mapreduce outputting, for each video, a count for fragments both total and for each bandwidth. Ideally it would look like; {"FileGUID":"50acb3a5796634df0e073285", { "1":{"total":76, "0832":34, "1028":42}, "2":{"total":42, "0832":28, "1028":14}, ... } } Is this possible with one mapreduce or is it a multi-step process, or should I use a different method? Here is a sample of the data. { "_id": ObjectId("50acb3a5796634df0e073285"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:57.0Z"), "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(1028), "Segment": NumberInt(1), "Fragment": NumberInt(237), "Status": NumberInt(200), "Size": NumberInt(576790), "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0" } { "_id": ObjectId("50acb3a5796634df0e073284"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:52.0Z"), "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(1028), "Segment": NumberInt(1), "Fragment": NumberInt(236), "Status": NumberInt(200), "Size": NumberInt(577100), "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0" } { "_id": ObjectId("50acb3a5796634df0e073283"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:47.0Z"), "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(0832), "Segment": NumberInt(1), "Fragment": NumberInt(234), "Status": NumberInt(200), "Size": NumberInt(576664), "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0" } { "_id": ObjectId("50acb3a5796634df0e073282"), "IP": "46.7.1.88", "DateTime": ISODate("2012-10-24T22:59:42.0Z"), "FileGUID": "8cdde821fb934a6da7c125a012a26612", "Bandwidth": NumberInt(0832), "Segment": NumberInt(1), "Fragment": NumberInt(233), "Status": NumberInt(200), "Size": NumberInt(575692), "UserAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko\/20100101 Firefox\/16.0" }

    Read the article

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string:

    SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address. I want to connect to the server above using that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer in the LAN again. This does not work:

    SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    Note that I am a beginner, so I am not quite sure what I am doing, even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

    Read the article

  • How do I send data between two computers over the internet

    - by Johan
    I have been struggling with this for the entire day now; I hope somebody can help me with this. My problem is fairly simple: I wish to transfer data (mostly simple commands) from one PC to another over the internet. I have been able to achieve this using sockets in Java when both computers are connected to my home router. I then connected both computers to the internet using two different mobile phones and attempted to transmit the data again. I used the mobile phones as this provides a direct route to the internet; if I use my router I have to set up port forwarding, at least, that is how I understand it. I think the problem lies in the way I set up the client socket. I used:

    Socket kkSocket = new Socket(ipAddress, 3333);

    where ipAddress is the IP address of the computer running the server. I got the IP address by right-clicking on the connection, then Status, then Support. Is that the correct IP address to use, or where can I obtain the address of the server? Also, is it possible to get a fixed name for my computer that I can use instead of entering the IP address, as this changes every time I connect to the internet using my mobile phone? Alternatively, are there better methods of solving my problem, such as using HTTP, and if so, where can I find more information about this? Thanks!
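
    The address shown in the connection's Status/Support dialog is normally the private (or carrier-NAT) address, not the one a remote peer has to dial. A small sketch that contrasts the local view with what a public "what is my IP" endpoint reports (the endpoint named below is one common choice and is an assumption; any similar service works); for a fixed name in place of a changing address, a dynamic DNS hostname is the usual answer:

    ```java
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.InetAddress;
    import java.net.URL;

    public class WhichAddress {
        public static void main(String[] args) throws Exception {
            // The address the OS reports is usually a private/LAN or carrier-NAT address.
            System.out.println("Local view:  " + InetAddress.getLocalHost().getHostAddress());

            // A public "what is my IP" service reflects the address remote peers
            // actually see. (Endpoint is an assumption; substitute any equivalent.)
            URL url = new URL("https://checkip.amazonaws.com");
            try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String ip = in.readLine();
                System.out.println("Public view: " + (ip == null ? "unknown" : ip.trim()));
            }
        }
    }
    ```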

    Read the article

  • Microsoft Silverlight 3 cannot create service reference to localhost:port

    - by Monte
    Windows Server 2003 (IIS 6), Visual Studio 2008, .NET Framework 3.5 SP1. I am a .NET developer for a living and I have over 40 hours into this problem. Project type = "Silverlight Navigation Application", "ASP.NET Web Site" (when I tried it as an "ASP.NET Web Application Project" I could not copy it to the production web site - well, I could copy it, but I could not make it run). I created a service.cs on the .Web side of the application and created a reference to that service.cs on the Silverlight side. For a time all is good, as I can reference the service as localhost:port (e.g. localhost:1374) in Visual Studio and debug both the Silverlight side and service.cs. To access the application in production mode (from IE) I update the service reference and replace localhost:port with the IP address. The problem with the IP address is that I cannot debug service.cs, so I have to change it back to localhost:port to debug. Now to the problem. After a period of time localhost:port just plain breaks. I get an error message saying there is no service at the other end. Yes, I know the port can change - that is not the problem - the port on the service side just plain breaks! For example, from Visual Studio, from the Silverlight side of the project, right-click "Service Reference", "Add Service Reference". It finds 1 service in the application on a port. But when I click that service under "Services:" in the "Add Service Reference" modal dialog box, I get an error: There was an error downloading 'http://localhost:1377/SehaleCSS.Web/Service.svc'. The request failed with the error message: -- Could not load file or assembly 'App_Web_tipnndfq, If I go back to the IP address, the service is responding (with the right answer). The service just plain responds on localhost:port for a while and then fails. Even making NO change to service.cs, it goes a while and then fails on localhost:port. It is not IIS environmental, as I can go back to a prior saved version of the code and it works. Something is happening such that the .Web side of the application is failing. It still works via the IP, and it still exposes itself on localhost:port, but it fails to properly respond on localhost:port.

    Read the article

  • Looking for: nosql (redis/mongodb) based event logging for Django

    - by Parand
    I'm looking for a flexible event logging platform to store both pre-defined (username, ip address) and non-pre-defined (can be generated as needed by any piece of code) events for Django. I'm currently doing some of this with log files, but it ends up requiring various analysis scripts and ends up in a DB anyway, so I'm considering throwing it immediately into a nosql store such as MongoDB or Redis. The idea is to be easily able to query, for example, which ip address the user most commonly comes from, whether the user has ever performed some action, lookup the outcome for a specific event, etc. Is there something that already does this? If not, I'm thinking of this: The "event" is a dictionary attached to the request object. Middleware fills in various pieces (username, ip, sql timing), code fills in the rest as needed. After the request is served a post-request hook drops the event into mongodb/redis, normalizing various fields (eg. incrementing the username:ip address counter) and dropping the rest in as is. Words of wisdom / pointers to code that does some/all of this would be appreciated.

    Read the article

  • MonoTouch's Soft Debugger doesn't connect to App on iPhone - why?

    - by smokinharp
    Hi everyone, I'm quite new to MonoTouch, so please forgive me if this is a silly question... ;-) I need help with the soft debugger, because it's not connecting to the app on the device. While everything works as expected with the iPhone Simulator, the following happens when I start debugging against my device: the app is uploaded and installed to the device, and MonoDevelop comes up with a window saying "Waiting for debugger to connect on 127.0.0.1:10000... Please start the application on the device". When I start the app on the device, the device vibrates, indicating that the debugger is not connected... In the settings of my app on the iPhone I have set the IP address to my Mac's IP. My iPhone is connected to my network via WiFi. I can ping my Mac from my iPhone and vice versa. In several screenshots where the debugger was obviously working, I saw that the debugger came up with the Mac's IP address and not 127.0.0.1... Do I have to configure my IP address somewhere in MonoDevelop? BTW: I'm using the latest version of MonoDevelop - 2.4.1. I have tried everything: re-installing MonoDevelop, cleaning up the project several times, setting up a new project... nothing... Please, please help...

    Read the article

  • Clustering for Mere Mortals (Pt 3)

    - by Geoff N. Hiten
    The Controller Now we get to the meat of the matter.  You want a virtual cluster, the first thing you have to do is create your own portable domain.  Start with a plain vanilla install of Windows 2003 R2 Standard on a semi-default VM. (1 GB RAM, 2 cores, 2 NICs, 128GB dynamically expanding VHD file).  I chose this because it had the smallest disk and memory footprint of any current supported Microsoft Server product.  I created the VM with a single dynamically expanding VHD, one fixed 16 GB VHD, and two NICs.  One NIC is connected to the outside world and the other one is part of an internal-only network.  The first NIC is set up as a DHCP client.  We will get to the other one later. I actually tried this with Windows 2008 R2, but it failed miserably.  Not sure whether it was 2008 R2 or the fact I tried to use cloned VMs in the cluster.  Clustering is one place where NewSID would really come in handy.  Too bad Microsoft bought and buried it. Load and Patch the OS (hence the need for the outside connection).This is a good time to go get dinner.  Maybe a movie too.  There are close to a hundred patches that need to be downloaded and applied.  Avoiding that mess was why I put so much time into trying to get the 2008 R2 version working.  Maybe next time.  Don’t forget to add the extensions for VMLite (or whatever virtualization product you prefer). Set a fixed IP address on the internal-only NIC.  Do not give it a gateway.  Put the same IP address for the NIC and for the DNS Server.  This IP should be in a range that is never available on your public network.  You will need all the addresses in the range available.  See the previous post for the exact settings I used. I chose 10.97.230.1 as the server.  The rest of the 10.97.230 range is what I will use later.  For the curious, those numbers are based on elements of my home address.  Not truly random, but good enough for this project. Do not bridge the network connections.  I never allowed the cluster nodes direct access to any public network. Format the fixed VHD and leave it alone for now. Promote the VM to a Domain Controller.  If you have never done this, don’t worry.  The only meaningful decision is what to call the new domain.  I prefer a bogus name that does not correspond to a real Top-Level Domain (TLD).  .com, .biz., .net, .org  are all TLDs that we know and love.  I chose .test as the TLD since it is descriptive AND it does not exist in the real world.  The domain is called MicroAD.  This gives me MicroAD.Test as my domain. During the promotion process, you will be prompted to install DNS as part of the Domain creation process.  You want to accept this option.  The installer will automatically assign this DNS server as the authoritative owner of the MicroAD.test DNS domain (not to be confused with the MicroAD.test Active Directory domain.) For the rest of the DCPROMO process, just accept the defaults. Now let’s make our IP address management easy.  Add the DHCP Role to the server.  Add the server (10.97.230.1 in this case) as the default gateway to assign to DHCP clients.  Here is where you have to be VERY careful and bind it ONLY to the Internal NIC.  Trust me, your network admin will NOT like an extra DHCP server “helping” out on her network.  Go ahead and create a range of 10-20 IP Addresses in your scope.  You might find other uses for a pocket domain controller <cough> Mirroring </cough> than just for building a cluster.  And Clustering in SQL 2008 and Windows 2008 R2 fully supports DHCP addresses. 
Now we have three of the five key roles ready.  Two more to go. Next comes file sharing.  Since your cluster node VMs will not have access to any outside, you have to have some way to get files into these VMs.  I simply go to the root of C: and create a “Shared” folder.  I then share it out and grant full control to “Everyone” to both the share and to the underlying NTFS folder.   This will be immensely useful for Service Packs, demo databases, and any other software that isn’t packaged as an ISO that we can mount to the VM. Finally we need to create a block-level multi-connect storage device.  The kind folks at Starwinds Software (http://www.starwindsoftware.com/) graciously gave me a non-expiring demo license for expressly this purpose.  Their iSCSI SAN software lets you create an iSCSI target from nearly any storage medium.  Refreshingly, their product does exactly what they say it does.  Thanks. Remember that 16 GB VHD file?  That is where we are going to carve into our LUNs.  I created an iSCSI folder off the root, just so I can keep everything organized.  I then carved 5 ea. 2 GB iSCSI targets from that folder.  I chose a fixed VHD for performance.  I tried this earlier with a dynamically expanding VHD, but too many layers of abstraction and sparseness combined to make it unusable even for a demo.  Stick with a fixed VHD so there is a one-to-one mapping between abstract and physical storage.  If you read the previous post, you know what I named these iSCSI LUNs and why.  Yes, I do have some left over space.  Always leave yourself room for future growth or options. This gets us up to where we can actually build the nodes and install SQL.  As with most clusters, the real work happens long before the individual nodes get installed and configured.  At least it does if you want the cluster to be a true high-availability platform.

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vise-versa for Net1 on Controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.  I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.  Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces setup each with an IP address. If I ping them, no problems.  So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right?Let's change that. Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for it's datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2.I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 do not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.  So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then setup three shares, using ports 251, 240, and 241.Remember that IP 251 and 240 both are using the same physical port of igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports.Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart. 
VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or Infiniband ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for with no more waste.  Very, very cool.  One small "bug" I found when doing this. It's really not a bug, it was designed to do this when VNICs were not around. But now that we have NVIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen.HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.  That's it for now. Have fun with VNICs.

    Read the article

  • Macbook Pro Wireless Reconnecting

    - by A Student at a University
    I'm using a WPA2 EAP network. I'm sitting next to the access point. The connection keeps dropping and taking ~10 seconds to reconnect. My other devices are staying online. What's causing it? syslog: 01:21:10 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:21:10 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed reboot -> renew 01:21:10 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX) 01:21:10 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:21:10 NetworkManager[XX40]: <info> domain name 'server.domain.tld' 01:21:10 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:33:30 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:33:30 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:33:30 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started 01:35:13 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected 01:35:14 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded 01:35:14 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed 01:35:14 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> 4-way handshake 01:35:14 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:14 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:35:17 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys 01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> disconnected 01:35:17 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:26 wpa_supplicant[XX60]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys 01:35:26 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected 01:35:29 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 8 -> 3 (reason 11) 01:35:32 NetworkManager[XX40]: <info> (eth1): deactivating device (reason: 11). 01:35:32 NetworkManager[XX40]: <info> (eth1): canceled DHCP transaction, DHCP client pid XX27 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) starting connection 'Auto XXXXXXXXXX' 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 3 -> 4 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled... 01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> disconnected 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete. 
01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): access point 'Auto XXXXXXXXXX' has security, but secrets are required. 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 6 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete. 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) started... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 6 -> 4 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) scheduled... 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 1 of 5 (Device Prepare) complete. 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) starting... 01:35:32 NetworkManager[XX40]: <info> (eth1): device state change: 4 -> 5 (reason 0) 01:35:32 NetworkManager[XX40]: <info> Activation (eth1/wireless): connection 'Auto XXXXXXXXXX' has security, and secrets exist. No new secrets needed. 01:35:32 NetworkManager[XX40]: <info> Config: added 'ssid' value 'XXXXXXXXXX' 01:35:32 NetworkManager[XX40]: <info> Config: added 'scan_ssid' value '1' 01:35:32 NetworkManager[XX40]: <info> Config: added 'key_mgmt' value 'WPA-EAP' 01:35:32 NetworkManager[XX40]: <info> Config: added 'password' value '<omitted>' 01:35:32 NetworkManager[XX40]: <info> Config: added 'eap' value 'PEAP' 01:35:32 NetworkManager[XX40]: <info> Config: added 'fragment_size' value 'XXX0' 01:35:32 NetworkManager[XX40]: <info> Config: added 'phase2' value 'auth=MSCHAPV2' 01:35:32 NetworkManager[XX40]: <info> Config: added 'ca_cert' value '/etc/ssl/certs/Equifax_Secure_CA.pem' 01:35:32 NetworkManager[XX40]: <info> Config: added 'identity' value 'XXXXXXX' 01:35:32 NetworkManager[XX40]: <info> Activation (eth1) Stage 2 of 5 (Device Configure) complete. 
01:35:32 NetworkManager[XX40]: <info> Config: set interface ap_scan to 1 01:35:32 NetworkManager[XX40]: <info> (eth1): supplicant connection state: disconnected -> scanning 01:35:36 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: scanning -> associated 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-STARTED EAP authentication started 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected 01:35:36 wpa_supplicant[XX60]: EAP-MSCHAPV2: Authentication succeeded 01:35:36 wpa_supplicant[XX60]: EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake 01:35:36 wpa_supplicant[XX60]: WPA: Could not find AP from the scan results 01:35:36 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:36 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=] 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:36 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:35:36 NetworkManager[XX40]: <info> Activation (eth1/wireless) Stage 2 of 5 (Device Configure) successful. Connected to wireless network 'XXXXXXXXXX'. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) scheduled. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) started... 01:35:36 NetworkManager[XX40]: <info> (eth1): device state change: 5 -> 7 (reason 0) 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Beginning DHCPv4 transaction (timeout in 45 seconds) 01:35:36 NetworkManager[XX40]: <info> dhclient started with pid XX87 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 3 of 5 (IP Configure Start) complete. 01:35:36 dhclient: Internet Systems Consortium DHCP Client VXXX.XXX.XXX 01:35:36 dhclient: Copyright 2004-2009 Internet Systems Consortium. 01:35:36 dhclient: All rights reserved. 01:35:36 dhclient: For info, please visit https://www.isc.org/software/dhcp/ 01:35:36 dhclient: 01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed nbi -> preinit 01:35:36 dhclient: Listening on LPF/eth1/XX:XX:XX:XX:XX:XX 01:35:36 dhclient: Sending on LPF/eth1/XX:XX:XX:XX:XX:XX 01:35:36 dhclient: Sending on Socket/fallback 01:35:36 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:35:36 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX 01:35:36 dhclient: bound to XXX.XXX.XXX.XXX -- renewal in XXX seconds. 01:35:36 NetworkManager[XX40]: <info> (eth1): DHCPv4 state changed preinit -> reboot 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) scheduled... 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) started... 
01:35:36 NetworkManager[XX40]: <info> address XXX.XXX.XXX.XXX 01:35:36 NetworkManager[XX40]: <info> prefix 20 (XXX.XXX.XXX.XXX) 01:35:36 NetworkManager[XX40]: <info> gateway XXX.XXX.XXX.XXX 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> nameserver 'XXX.XXX.XXX.XXX' 01:35:36 NetworkManager[XX40]: <info> domain name 'server.domain.tld' 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) scheduled... 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 4 of 5 (IP4 Configure Get) complete. 01:35:36 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) started... 01:35:37 NetworkManager[XX40]: <info> (eth1): device state change: 7 -> 8 (reason 0) 01:35:37 NetworkManager[XX40]: <info> (eth1): roamed from BSSID XX:XX:XX:XX:XX:XX (XXXXXXXXXX) to XX:XX:XX:XX:XX:XX (XXXXXXXXX) 01:35:37 NetworkManager[XX40]: <info> Policy set 'Auto XXXXXXXXXX' (eth1) as default for IPv4 routing and DNS. 01:35:37 NetworkManager[XX40]: <info> Activation (eth1) successful, device activated. 01:35:37 NetworkManager[XX40]: <info> Activation (eth1) Stage 5 of 5 (IP Configure Commit) complete. 01:35:43 wpa_supplicant[XX60]: Trying to associate with XX:XX:XX:XX:XX:XX (SSID='XXXXXXXXXX' freq=2412 MHz) 01:35:43 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> associating 01:35:43 wpa_supplicant[XX60]: Association request to the driver failed 01:35:46 wpa_supplicant[XX60]: Associated with XX:XX:XX:XX:XX:XX 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associating -> associated 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: associated -> 4-way handshake 01:35:46 wpa_supplicant[XX60]: WPA: Key negotiation completed with XX:XX:XX:XX:XX:XX [PTK=CCMP GTK=TKIP] 01:35:46 wpa_supplicant[XX60]: CTRL-EVENT-CONNECTED - Connection to XX:XX:XX:XX:XX:XX completed (reauth) [id=0 id_str=] 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: 4-way handshake -> group handshake 01:35:46 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:40:47 wpa_supplicant[XX60]: WPA: Group rekeying completed with XX:XX:XX:XX:XX:XX [GTK=TKIP] 01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: completed -> group handshake 01:40:47 NetworkManager[XX40]: <info> (eth1): supplicant connection state: group handshake -> completed 01:50:19 dhclient: DHCPREQUEST of XXX.XXX.XXX.XXX on eth1 to XXX.XXX.XXX.XXX port 67 01:50:19 dhclient: DHCPACK of XXX.XXX.XXX.XXX from XXX.XXX.XXX.XXX

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
the root file system does not contain any content not related to the zone can only be mounted by one Solaris instance at a time Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest.Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this. 
    Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify the device names of that VDI in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
    Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Log in to the zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shut down the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shut down sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 3. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
    root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created earlier on sysB. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
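    (Condensed from the steps above, the whole migration boils down to a short command sequence. This sketch uses the same zone and pool names as the example; the only per-host difference is the storage device name in the zone's configuration on each system.)
        # On the current owner (sysA or sysB): stop the zone and release the shared zpool.
        zoneadm -z zone1 shutdown
        zoneadm -z zone1 detach    # exports zone1_rpool so another host can import it
        # On the new owner, after configuring zone1 with that host's device name:
        zoneadm -z zone1 attach    # imports zone1_rpool from the shared device
        zoneadm -z zone1 boot
        zlogin zone1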

    Read the article

  • IIS: 404 error on every file in a virtual directory.

    - by Scott Chamberlain
    I am trying to write my first WCF service for IIS 6.0. I followed the instructions on MSDN. I created the virtual directory, and I can browse the directory fine, but anything I click (even a sub-folder in that folder) gives me a 404 error. What am I missing that prevents me from accessing any files or folders? If you need any logs or whatnot, just tell me where to find them in the comments and I will post them. UPDATE - I found the log; here is what it says when I connect and try to click on a sub-folder. #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 19:08:07 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 19:08:07 W3SVC1 74.62.95.101 GET /prx2.php hash=AA70CBCE8DDD370B4A3E5F6500505C6FBA530220D856 80 - 221.192.199.35 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.0) 404 0 2 #Software: Microsoft Internet Information Services 6.0 #Version: 1.0 #Date: 2010-03-07 22:21:20 #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status 2010-03-07 22:21:20 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 2 2148074254 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 401 1 0 2010-03-07 22:21:26 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/ - 80 webinfinity\srchamberlain 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 200 0 0 2010-03-07 22:21:29 W3SVC1 127.0.0.1 GET /RemoteUserManagerService/bin/ - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.2;+WOW64;+Trident/4.0;+.NET+CLR+3.0.04506.30;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.04506.648;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) 404 0 2 --Update again: I found this: IIS6 Dynamic Content: A 404.2 entry in the W3C Extended Log file is recorded when a Web Extension is not enabled. Use the IIS Microsoft Management Console (MMC) snap-in to enable the appropriate Web extension. Default Web Extensions include: ASP, ASP.net, Server-Side Includes, WebDAV publishing, FrontPage Server Extensions, Common Gateway Interface (CGI). Custom extensions must be added and explicitly enabled. See the IIS 6.0 Help File for more information. I am guessing the "404 0 2" at the end of the log is a 404.2 error. I now know the why; I still don't know how to fix it.
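    (A sketch of one likely fix, on the assumption that the 404.2 comes from a disabled Web Service Extension on IIS 6.0 - this is not from the original question. The paths assume .NET 2.0 and 3.0 are installed in their default locations; adjust the version directories to match the server, and run ServiceModelReg.exe /? to confirm the exact switches on your build.)
        rem Register ASP.NET 2.0 with IIS and set its Web Service Extension to Allowed
        "%WINDIR%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -i -enable
        rem Register the WCF (.svc) script maps with IIS
        "%WINDIR%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -ia
        rem Afterwards, verify in IIS Manager under "Web Service Extensions" that
        rem "ASP.NET v2.0.50727" shows as Allowed.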

    Read the article

  • What is "Disable class based route addition" good for?

    - by id.roppert.dejroppert
    In the advanced TCP/IP settings of a VPN connection, I found a checkbox labeled "Disable class based route addition". The checkbox is only enabled as long as "Use default gateway on remote network" is switched off. What is "Disable class based route addition" good for? Detailed instructions to find the setting: Open the Properties of the VPN connection Go to the Networking tab Open the Properties of "Internet Protocol Version 4 (TCP/IPv4)" (and/or TCP/IPv6) Click the "Advanced..." button Change to the "IP Settings" tab Here you can find the checkboxes mentioned above
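    (To see what the checkbox actually controls, compare the routing table with and without it. The sketch below is illustrative only, and every address in it is made up: with the box unchecked and "Use default gateway on remote network" off, a VPN-assigned address such as 10.1.2.3 makes Windows add a classful route covering the whole 10.0.0.0/8 network; with the box checked, that route is not added, so you add only the prefixes you need.)
        rem Inspect the routing table after connecting to the VPN:
        route print
        rem If class-based route addition is disabled, add specific routes manually
        rem (destination prefix and gateway here are assumptions):
        route add 10.20.0.0 mask 255.255.0.0 10.1.2.1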

    Read the article

  • Mounting NAS share: Bad Address

    - by Korben
    I've run into a problem that I can't solve. Hope you can help me with it. I have a QNAP TS-459U storage unit, running its own Linux, with a 'massive1' folder shared, which I need to mount on my Debian server. They are connected by a regular patch cord. The Debian server has two network interfaces - eth0 and eth1. eth0 is for Internet, eth1 is for the QNAP. So, I'm running this: mount -t cifs //169.254.100.100/massive1/ /mnt/storage -o user=admin , where 169.254.100.100 is the IP of the QNAP's interface. The result I get (after entering the password): mount error(14): Bad address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I've tried mount.cifs, smbmount, with '/' at the end of the network share and without it, and many other variations of that command. It's always: mount error(14): Bad address The funny thing is that when I was in the data center, I connected my netbook to the QNAP by the same scheme (with Fedora 16 on it), and it connected without any problems - I could read/write files on the QNAP's NAS share! So I'm really stuck with Debian. I can't understand what the difference from Fedora is that causes this error. Yeah, I've used Google. Couldn't find any useful info. Ping to the QNAP's IP is working, I can log into the QNAP's Linux by ssh, and telnet to port 139 is working. This is the network interface configuration I use in Debian: IP: 169.254.100.1 Netmask: 255.255.0.0 The only difference between connecting from Fedora and from Debian is that in Fedora I added a gateway - 169.254.100.129, but ping to this IP is not working, so I think it's not necessary at all. P.S. ~# cat /etc/debian_version wheezy/sid ~# uname -a Linux host 2.6.32-5-openvz-amd64 #1 SMP Mon Mar 7 22:25:57 UTC 2011 x86_64 GNU/Linux ~# smbtree WORKGROUP \\HOST host server \\HOST\IPC$ IPC Service (host server) \\HOST\print$ Printer Drivers NAS \\MASSIVE1 NAS Server \\MASSIVE1\IPC$ IPC Service (NAS Server) \\MASSIVE1\massive1 \\MASSIVE1\Network Recycle Bin 1 [RAID5 Disk Volume: Drive 1 2 3 4] \\MASSIVE1\Public System default share \\MASSIVE1\Usb System default share \\MASSIVE1\Web System default share \\MASSIVE1\Recordings System default share \\MASSIVE1\Download System default share \\MASSIVE1\Multimedia System default share Please help me solve this strange issue. Thanks in advance.
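    (Not a confirmed fix, just a sketch of the first things worth checking from the Debian side. The options shown are standard mount.cifs options; the password is a placeholder, and the guess that the missing userspace helper or the security negotiation is at fault is an assumption.)
        # Make sure the userspace mount helper is installed; without /sbin/mount.cifs
        # the kernel has to parse the options itself, which is a common source of
        # odd mount errors on older Debian installs:
        apt-get install cifs-utils
        # Look at what the kernel logged right after the failed attempt:
        dmesg | tail
        # Retry with explicit credentials, the server IP spelled out via "ip=",
        # and (optionally) an older security mode if the default negotiation fails:
        mount -t cifs //169.254.100.100/massive1 /mnt/storage \
            -o username=admin,password=YOURPASS,ip=169.254.100.100,sec=ntlm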

    Read the article
