Search Results

Search found 5157 results on 207 pages for 'node'.


  • How to move the mouse

    - by GroundZero
    I'm making a little bot in C#. At the moment it works pretty well: it can load text from a file and type it for you. But for now, I need to manually click the text field to put the focus on it, re-maximize my form and then click the Type button. After the typing, I need to manually slide the score bar and press submit. I'd like to know how I can move my mouse with C# and, if possible, load the mouse positions from an XML file. I need to move to the text field, click in it to focus it, start the type script, move to the slider, hold the mouse down on it while dragging, release it at the correct position and click the submit button.

    This is what I have for now. To load in the variables, I'm using this script:

    private void Initialize()
    {
        XmlTextReader reader = new XmlTextReader(Application.StartupPath + @"..\..\..\CursorPositions.xml");
        while (reader.Read())
        {
            switch (reader.NodeType)
            {
                case XmlNodeType.Element: // The node is an element.
                    element = reader.Name; // remember which element we are in (reader.Value is empty for element nodes)
                    break;
                case XmlNodeType.Text: // Display the text in each element.
                    switch (element)
                    {
                        case "Textbox-X": textX = int.Parse(reader.Value); break;
                        case "Textbox-Y": textY = int.Parse(reader.Value); break;
                        case "SliderBegin-X": sliderX = int.Parse(reader.Value); break;
                        case "SliderBegin-Y": sliderY = int.Parse(reader.Value); break;
                        case "SubmitButton-X": submitX = int.Parse(reader.Value); break;
                        case "SubmitButton-Y": submitY = int.Parse(reader.Value); break;
                    }
                    break;
            }
        }
    }

    This is the XML file:

    <?xml version="1.0" encoding="utf-8" ?>
    <CursorPositions>
      <Textbox-X>430</Textbox-X>
      <Textbox-Y>270</Textbox-Y>
      <SliderBegin-X>430</SliderBegin-X>
      <SliderBegin-Y>470</SliderBegin-Y>
      <SubmitButton-X>860</SubmitButton-X>
      <SubmitButton-Y>365</SubmitButton-Y>
    </CursorPositions>

    To move the mouse I'm using this piece of code:

    public partial class FrmMain : Form
    {
        [System.Runtime.InteropServices.DllImport("user32.dll")]
        public static extern void mouse_event(int dwFlags, int dx, int dy, int cButtons, int dwExtraInfo);

        public const int MOUSEEVENTF_LEFTDOWN = 0x0002;
        public const int MOUSEEVENTF_LEFTUP = 0x0004;
        public const int MOUSEEVENTF_RIGHTDOWN = 0x0008;
        public const int MOUSEEVENTF_RIGHTUP = 0x0010;

        ...

        private void btnStart_Click(object sender, EventArgs e)
        {
            // start button (de)activates loop
            if (!running)
            {
                btnStart.Text = "Stop";
                btnStart.Cursor = Cursors.No;
                running = true;
            }
            else
            {
                btnStart.Text = "Start";
                btnStart.Cursor = Cursors.AppStarting;
                running = false;
            }

            while (running)
            {
                // move to textbox & type
                Cursor.Position = new Point(textX, textY);
                mouse_event(MOUSEEVENTF_LEFTDOWN, textX, textY, 0, 0);
                mouse_event(MOUSEEVENTF_LEFTUP, textX, textY, 0, 0);
                Type();

                // wait 90 seconds till slider available
                Thread.Sleep(90 * 1000);

                // move to slider & slide according to score
                Cursor.Position = new Point(sliderX, sliderY);
                mouse_event(MOUSEEVENTF_LEFTDOWN, sliderX, sliderY, 0, 0);
                Cursor.Position = new Point(sliderX + 345 / 10 * score, sliderY);
                mouse_event(MOUSEEVENTF_LEFTUP, sliderX + 345 / 10 * score, sliderY, 0, 0);

                // submit
                Cursor.Position = new Point(submitX, submitY);
                mouse_event(MOUSEEVENTF_LEFTDOWN, submitX, submitY, 0, 0);
                mouse_event(MOUSEEVENTF_LEFTUP, submitX, submitY, 0, 0);

                // wait 10 sec to be sure it's submitted
                Thread.Sleep(10 * 1000);

                // refresh page
                SendKeys.SendWait("{F5}");

                // get new text
                NewText();

                // wait 10 sec to refresh and load song
                Thread.Sleep(10 * 1000);
            }
        }
    }

    PS: I get the coordinates via my form. I've got two labels that show my X & Y coordinates. To capture them outside the form, I press and hold my left mouse button and 'drag' it outside the form to the correct place. This way I get the coordinates of my mouse outside the form.
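
    For reference, the move / click / drag steps described above can be wrapped in a small helper around the same user32 mouse_event import the post already uses. This is only an illustrative sketch: the MouseHelper name, the ten-step drag and the 20 ms delay are arbitrary choices, not part of the original code.

    using System.Drawing;
    using System.Runtime.InteropServices;
    using System.Threading;
    using System.Windows.Forms;

    static class MouseHelper
    {
        [DllImport("user32.dll")]
        static extern void mouse_event(int dwFlags, int dx, int dy, int cButtons, int dwExtraInfo);

        const int MOUSEEVENTF_LEFTDOWN = 0x0002;
        const int MOUSEEVENTF_LEFTUP = 0x0004;

        // Move the cursor to screen coordinates (x, y) and left-click there.
        public static void ClickAt(int x, int y)
        {
            Cursor.Position = new Point(x, y);
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        }

        // Press at the start point, move in small steps so the target sees a drag,
        // and release at the end point (e.g. the score slider).
        public static void DragTo(int fromX, int fromY, int toX, int toY)
        {
            Cursor.Position = new Point(fromX, fromY);
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
            for (int i = 1; i <= 10; i++)
            {
                Cursor.Position = new Point(fromX + (toX - fromX) * i / 10,
                                            fromY + (toY - fromY) * i / 10);
                Thread.Sleep(20);
            }
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
        }
    }

    With something like that in place, the loop in btnStart_Click reduces to calls such as MouseHelper.ClickAt(textX, textY) and MouseHelper.DragTo(sliderX, sliderY, sliderX + 345 / 10 * score, sliderY).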

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables: Persons and Categories. Each person gets one category, seems easy enough. | Persons | | Categories | |--------------| |--------------| | Id (PK) | | Id (PK) | | Firstname | | CategoryName | | Lastname | | CreatedTime | | CategoryId | | UpdatedTime | | CreatedTime | | Deleted | | UpdatedTime | | Deleted | The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes: Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity. IEntityManager: a generic interface (type parameter as Entity) that defines methods like Load, Insert, Update, etc. NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc. Now, the Person and Category classes are straightforward, they just define the attributes of the tables of course (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my webpage, I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category property, or an empty string if the Category is null: Namespace Database Public Class Person Inherits DatabaseFramework.Entity Public Overridable Property Firstname As String Public Overridable Property Lastname As String Public Overridable Property Category As Category Public Overridable ReadOnly Property CategoryName As String Get Return If(Me.Category Is Nothing, _ String.Empty, _ Me.Category.CategoryName) End Get End Property End Class End Namespace I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread: <id name="Id" column="Id" type="int" unsaved-value="0"> <generator class="identity" /> </id> <property name="CreatedTime" type="DateTime" not-null="true" /> <property name="UpdatedTime" type="DateTime" not-null="true" /> <property name="Deleted" type="Boolean" not-null="true" /> <property name="Firstname" type="String" /> <property name="Lastname" type="String" /> <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" /> (I can't get it to show the root node, this forum hides it, I don't know how to escape the html-like tags...) The final important detail is the Load method of the NHibernateEntityManager implementation. (This is in C# as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use that to fill an EntityCollection(Of TEntity) which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T). 
    public virtual EntityCollection<TEntity> Load()
    {
        using (ISession session = this.GetSession())
        {
            var entities = session
                .CreateCriteria(typeof(TEntity))
                .Add(Expression.Eq("Deleted", false))
                .List<TEntity>();
            return new EntityCollection<TEntity>(entities);
        }
    }

    (Again, I can't get it to format the code correctly, it hides the generic type parameters, probably because it reads the angled symbols as an HTML tag..? If you know how to let me do that, let me know!)

    Now, the idea of this Load method is that I get a fully functional collection of Persons, all their properties set to the correct values (including the Category property, and thus, the CategoryName property should return the correct name). However, it seems that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

    Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception: 'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

    public virtual void LoadGrid()
    {
        if (this.Grid == null) return;
        this.Grid.DataSource = this.Manager.Load();
        this.Grid.DataBind();
    }

    Well, of course the session is closed; I closed it via the using block. Isn't that the correct approach? Should I keep the session open, and for how long? Can I close it after the DataBind method has been run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that it is now only fetching the Category when it is required (e.g. when the GridView wants to read CategoryName, which reads the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
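
    One way to approach this, sketched against the same Load method (not an answer taken from the thread), is to initialise the association eagerly while the session is still open, so Category is already populated by the time the grid binds. SetFetchMode and FetchMode are standard NHibernate criteria APIs; the "Category" fetch path is specific to Person, so in a generic manager it would have to be passed in or configured per entity rather than hard-coded as shown here.

    public virtual EntityCollection<TEntity> Load()
    {
        using (ISession session = this.GetSession())
        {
            var entities = session
                .CreateCriteria(typeof(TEntity))
                .Add(Expression.Eq("Deleted", false))
                .SetFetchMode("Category", FetchMode.Join) // join-fetch the association before the session closes
                .List<TEntity>();
            return new EntityCollection<TEntity>(entities);
        }
    }

    The alternatives are to keep the session open for the whole request (the usual session-per-request pattern) or to mark the many-to-one as lazy="false" in the mapping, trading the proxy error for a join or an extra select on every load.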

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
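
    A sketch of the sudo side of this, for comparison (not a confirmed fix): sudoers is evaluated top to bottom and the last matching entry wins, so if the nagios user is also a member of the sudo group, the %sudo line further down effectively re-enables the password prompt, and sudo then fails under NRPE because there is no terminal to ask on. Testing the exact command as the nagios user with sudo -n shows whether NOPASSWD is really in effect.

    # Run the NRPE command exactly as the daemon would, as the nagios user;
    # -n makes sudo fail instead of prompting, which is what happens under NRPE.
    su -s /bin/bash -c "sudo -n /usr/lib/nagios/plugins/http-restart" nagios

    # Illustrative /etc/sudoers ordering (edit with visudo); keep the NOPASSWD rule
    # after any group rule that also matches the nagios user:
    %sudo   ALL=(ALL) ALL
    nagios  ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/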

    Read the article

  • Failure connecting to Dell MD3200i from XenServer 6.2 pool

    - by Tom Sparrow
    This question also asked at Citrix Forums http://forums.citrix.com/thread.jspa?threadID=332289 I have a MD3200i that is currently working fine with my Xen5.6 pool, but I cannot get a connection to the new 6.2 pool to work. I previously had a problem with a 6.0 upgrade (which is why the old pool is still on 5.6), but rolled back rather than fix it as it wasn't urgent at the time. This install is on new machines - I tried 6.1 first (which had the same problems) then 6.2 was released the second day after installation so I switched to that. I've not installed anything from the Dell resource DVD at this point - I can't find anything saying I should, and everything I have read suggests it shouldn't be necessary. I can ping all 8 ip addresses from both servers in the pool, iscsiadm -m discovery works fine, I can login to the nodes and iscsiadm reports the sessions active correctly. I've added the required sections to multipath.conf, but multipath -ll reports DM multipath kernel driver not loaded immediately after boot. The following is a log of a test session immediately after boot. root@xen3 ~]# iscsiadm -m node --loginall=all Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.101,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.101,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.104,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.102,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.103,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.104,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.102,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.103,3260] Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.101,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.101,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.104,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.102,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.103,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.104,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.102,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.103,3260]: successful [root@xen3 ~]# iscsiadm -m session tcp: [1] 192.168.130.101:3260,1 
iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [2] 192.168.131.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [3] 192.168.131.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [4] 192.168.131.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [5] 192.168.130.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [6] 192.168.130.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [7] 192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [8] 192.168.131.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 [root@xen3 ~]# service multipathd restart ok Stopping multipathd daemon: [ OK ] Starting multipathd daemon: [ OK ] [root@xen3 ~]# multipath Jul 04 09:58:47 | DM multipath kernel driver not loaded Jul 04 09:58:47 | DM multipath kernel driver not loaded [root@xen3 ~]# multipath -ll Jul 04 09:59:03 | DM multipath kernel driver not loaded Jul 04 09:59:03 | DM multipath kernel driver not loaded [ root@xen3 ~]# modprobe dm_multipath [root@xen3 ~]# multipath Jul 04 10:19:50 | 36b8ca3a0e7024800194a0bd11891cd14: ignoring map create: 1Dell_Internal_Dual_SD_0123456789AB undef Dell,Internal Dual SD size=1.9G features='0' hwhandler='0' wp=undef `-+- policy='round-robin 0' prio=1 status=undef `- 7:0:0:0 sdb 8:16 undef ready running [root@xen3 ~]# multipath -ll 1Dell_Internal_Dual_SD_0123456789AB dm-1 Dell,Internal Dual SD size=1.9G features='0' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=enabled `- 7:0:0:0 sdb 8:16 active ready running [root@xen3 ~]# iscsiadm -m session tcp: [1] 192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [2] 192.168.131.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [3] 192.168.131.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [4] 192.168.131.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [5] 192.168.130.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [6] 192.168.130.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [7] 192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [8] 192.168.131.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 [root@xen3 ~]# dmesg | tail -n 50 [ 1161.881010] sd 8:0:0:0: [sdf] Unhandled error code [ 1161.881013] sd 8:0:0:0: [sdf] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881017] sd 8:0:0:0: [sdf] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881024] end_request: I/O error, dev sdf, sector 0 [ 1161.881031] Buffer I/O error on device sdf, logical block 0 [ 1161.881045] sd 15:0:0:0: [sdi] Unhandled error code [ 1161.881048] sd 15:0:0:0: [sdi] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881052] sd 15:0:0:0: [sdi] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881058] end_request: I/O error, dev sdi, sector 0 [ 1161.881065] Buffer I/O error on device sdi, logical block 0 [ 1161.881122] sd 9:0:0:0: [sdg] Unhandled error code [ 1161.881124] sd 9:0:0:0: [sdg] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881126] sd 9:0:0:0: [sdg] CDB: 
Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881132] end_request: I/O error, dev sdg, sector 0 [ 1161.881140] Buffer I/O error on device sdg, logical block 0 [ 1168.220951] connection6:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220957] connection7:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220967] connection7:0: detected conn error (1011) [ 1168.220969] connection4:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220973] connection4:0: detected conn error (1011) [ 1168.220975] connection3:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220978] connection3:0: detected conn error (1011) [ 1168.220985] connection6:0: detected conn error (1011) [ 1168.480994] sd 14:0:0:0: [sde] Unhandled error code [ 1168.480998] sd 14:0:0:0: [sde] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481001] sd 14:0:0:0: [sde] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481009] end_request: I/O error, dev sde, sector 0 [ 1168.481015] Buffer I/O error on device sde, logical block 0 [ 1168.481076] sd 11:0:0:0: [sdc] Unhandled error code [ 1168.481078] sd 11:0:0:0: [sdc] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481080] sd 11:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481087] end_request: I/O error, dev sdc, sector 0 [ 1168.481092] Buffer I/O error on device sdc, logical block 0 [ 1168.481144] sd 10:0:0:0: [sdd] Unhandled error code [ 1168.481147] sd 10:0:0:0: [sdd] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481150] sd 10:0:0:0: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481156] end_request: I/O error, dev sdd, sector 0 [ 1168.481163] Buffer I/O error on device sdd, logical block 0 [ 1168.481168] sd 13:0:0:0: [sdj] Unhandled error code [ 1168.481170] sd 13:0:0:0: [sdj] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481172] sd 13:0:0:0: [sdj] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481178] end_request: I/O error, dev sdj, sector 0 [ 1168.481184] Buffer I/O error on device sdj, logical block 0 [ 1457.105996] device-mapper: multipath round-robin: version 1.0.0 loaded [ 1457.106155] device-mapper: multipath: Cannot access device path 8:0: -16 [ 1457.106164] device-mapper: table: 252:1: multipath: error getting device [ 1457.106172] device-mapper: ioctl: error adding target to table [ 1457.171292] device-mapper: multipath: Cannot access device path 8:0: -16 [ 1457.171299] device-mapper: table: 252:1: multipath: error getting device [ 1457.171304] device-mapper: ioctl: error adding target to table [root@xen3 ~]# fdisk -l Disk /dev/sda: 299.4 GB, 299439751168 bytes 255 heads, 63 sectors/track, 36404 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 5 40131 de Dell Utility /dev/sda2 * 6 528 4194304 83 Linux Partition 2 does not end on cylinder boundary. 
/dev/sda3 528 1050 4194304 83 Linux /dev/sda4 1050 36404 283986359+ 8e Linux LVM Disk /dev/sdb: 2040 MB, 2040528896 bytes 255 heads, 63 sectors/track, 248 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 248 1992028+ 83 Linux Disk /dev/dm-1: 2040 MB, 2040528896 bytes 255 heads, 63 sectors/track, 248 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/dm-1p1 1 248 1992028+ 83 Linux [root@xen3 ~]# xe sr-probe type=lvmoiscsi device-config:target=192.168.130.101 device-config:targetIQN=iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 Error code: SR_BACKEND_FAILURE_107 Error parameters: , The SCSIid parameter is missing or incorrect, <?xml version="1.0" ?> <iscsi-target/> Note: the xml ends there correctly on the last line - it doesn't ever return a list of LUNs (and there is one in the group on the SAN for those servers.
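
    For what it's worth, a sketch of getting the multipath modules loaded before multipathd starts. The module names are the standard device-mapper ones; the /etc/rc.modules hook is the CentOS 5-style mechanism the XenServer dom0 is based on, so check that it exists on your build before relying on it.

    # confirm whether the kernel modules are there after a clean boot
    lsmod | grep -E 'dm_multipath|dm_round_robin'

    # load them now
    modprobe dm_multipath
    modprobe dm_round_robin

    # load them on every boot (rc.sysinit runs /etc/rc.modules if it is executable)
    echo "modprobe dm_multipath"   >> /etc/rc.modules
    echo "modprobe dm_round_robin" >> /etc/rc.modules
    chmod +x /etc/rc.modules

    service multipathd restart
    multipath -ll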

    Read the article

  • Java update/install via group policy

    - by Maximus
    I trying to deploy the latest Java RE version via GP, Java 7 update 9. I want to update computers that are currently running an older version of Java, a mixture of 7.6 and 7.7, some computers are running versions as old as 6.31. Some are running a mixture of both. I would also like this GP to install Java if it's not installed. Previously I used push out Java updates to users machines as Java didn't remove the old version. So when it was done the user would restart their browser or pc to start using the latest version. Not the best way to manage it as it leaves the old version installed but it worked. I've created group policies before for printer deployment, log on drive mapping scripts, but never software deployment. I've extracted the Java MSI and created a transform file to suppress reboot etc using orca. As described on this site http://ivan.dretvic.com/2011/06/how-to-package-and-deploy-java-jre-1-6-0_26-via-group-policy/. I have also tried saving the edited MSI directly and that didn't work either. But it just won't deploy. I have tried to enable logging as suggested on this site http://openofficetechnology.com/node/32, GPO logging via UserEnvDebugLevel, Software deployment logging via AppmgmtDebugLevel and MSI logging, but there is no log C:\Windows\Debug\UserMode\userenv.log being created. The windows event viewer has the following errors: Error 24/10/2012 11:44:04 AM - "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was : %%1612" Information 24/10/2012 11:44:04 AM - "The removal of the assignment of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy succeeded." Error 24/10/2012 11:44:04 AM - "The install of application Java 7 Update 9 - FB Java Transform from policy JavaDeploy failed. The error was : %%1612" There is a log created for MSI logging and it's as below. It says the source is invalid but it exists on the share and the PC that I'm testing has permissions and I've included the recommendation here Group Policy installation failed error 1274 to enable "Always wait for the network at computer startup and logon" === Verbose logging started: 24/10/2012 11:43:59 Build type: SHIP UNICODE 5.00.7601.00 Calling process: C:\Windows\system32\svchost.exe === MSI (c) (9C:EC) [11:43:59:898]: Resetting cached policy values MSI (c) (9C:EC) [11:43:59:898]: Machine policy value 'Debug' is 3 MSI (c) (9C:EC) [11:43:59:898]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (c) (9C:EC) [11:43:59:898]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (9C:EC) [11:43:59:898]: Grabbed execution mutex. MSI (c) (9C:EC) [11:44:03:431]: Cloaking enabled. MSI (c) (9C:EC) [11:44:03:431]: Attempting to enable all disabled privileges before calling Install on Server MSI (c) (9C:EC) [11:44:03:439]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:03:574]: Running installation inside multi-package transaction {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:70) [11:44:03:574]: Grabbed execution mutex. 
MSI (s) (2C:7C) [11:44:03:607]: Resetting cached policy values MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'Debug' is 3 MSI (s) (2C:7C) [11:44:03:607]: ******* RunEngine: ******* Product: {26a24ae4-039d-4ca4-87b4-2f83217009ff} ******* Action: ******* CommandLine: ********** MSI (s) (2C:7C) [11:44:03:607]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (2C:7C) [11:44:03:623]: User policy value 'SearchOrder' is 'nmu' MSI (s) (2C:7C) [11:44:03:624]: User policy value 'DisableMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Looking for sourcelist for product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Adding {26a24ae4-039d-4ca4-87b4-2f83217009ff}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Now checking product {26a24ae4-039d-4ca4-87b4-2f83217009ff} MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Media is enabled for product. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Processing net source list. MSI (s) (2C:7C) [11:44:03:624]: SOURCEMGMT: Trying source \\server\share\deployment\Java\stable\x32\. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 2303 2: 5 3: \\server\share\ MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1325 2: deployment MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource: CreatePath/CreateFilePath failed with: -2147483648 1325 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: ConnectToSource (con't): CreatePath/CreateFilePath failed with: -2147483648 -2147483648 MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: net source '\\server\share\deployment\Java\stable\x32\' is invalid. MSI (s) (2C:7C) [11:44:03:650]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:03:650]: SOURCEMGMT: Processing media source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 2203 2: 3: -2147287037 MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Processing URL source list. MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: -2147483647 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: Note: 1: 1706 2: 3: jre1.7.0_09.msi MSI (s) (2C:7C) [11:44:04:668]: SOURCEMGMT: Failed to resolve source MSI (s) (2C:7C) [11:44:04:668]: MainEngineThread is returning 1612 MSI (s) (2C:70) [11:44:04:670]: User policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Machine policy value 'DisableRollback' is 0 MSI (s) (2C:70) [11:44:04:670]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (2C:70) [11:44:04:670]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2 MSI (s) (2C:70) [11:44:04:671]: Decrementing counter to disable shutdown. 
If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (s) (2C:70) [11:44:04:671]: Restoring environment variables MSI (c) (9C:EC) [11:44:04:675]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (9C:EC) [11:44:04:675]: MainEngineThread is returning 1612 === Verbose logging stopped: 24/10/2012 11:44:04 === I'm not sure what my next approach should be. Any help would be much appreciated. Thanks.
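
    A sketch of how the source-path failure can be narrowed down (assumes Sysinternals PsExec is available; the UNC path and package name are the ones from the log): GPO software installation runs in the machine context, so the share and NTFS permissions have to let the computer account (e.g. Domain Computers or Authenticated Users) read the MSI, not just your own account.

    :: check the package path the MSI log calls invalid, from a SYSTEM context
    psexec -s cmd /c dir \\server\share\deployment\Java\stable\x32\jre1.7.0_09.msi

    :: after fixing share/NTFS read access for the computer accounts, re-apply policy
    gpupdate /force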

    Read the article

  • How to stop an ICMP attack?

    - by cumhur onat
    We are under a heavy icmp flood attack. Tcpdump shows the result below. Altough we have blocked ICMP with iptables tcpdump still prints icmp packets. I've also attached iptables configuration and "top" result. Is there any thing I can do to completely stop icmp packets? [root@server downloads]# tcpdump icmp -v -n -nn tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes 03:02:47.810957 IP (tos 0x0, ttl 49, id 16007, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 124, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.811559 IP (tos 0x0, ttl 49, id 16010, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 52, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.811922 IP (tos 0x0, ttl 49, id 16012, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.812485 IP (tos 0x0, ttl 49, id 16015, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 126, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.812613 IP (tos 0x0, ttl 49, id 16016, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.812992 IP (tos 0x0, ttl 49, id 16018, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 122, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.813582 IP (tos 0x0, ttl 49, id 16020, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 52, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.814092 IP (tos 0x0, ttl 49, id 16023, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 120, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.814233 IP (tos 0x0, ttl 49, id 16024, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 120, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.815579 IP (tos 0x0, ttl 49, id 16025, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 50, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 
94.201.175.188: [|icmp] 03:02:47.815726 IP (tos 0x0, ttl 49, id 16026, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 IP (tos 0x0, ttl 50, id 31864, offset 0, flags [none], proto: ICMP (1), length: 76) 77.92.136.196 > 94.201.175.188: [|icmp] 03:02:47.815890 IP (tos 0x0, ttl 49, id 16027, offset 0, flags [none], proto: ICMP (1), length: 56) 80.227.64.183 > 77.92.136.196: ICMP redirect 94.201.175.188 to host 80.227.64.129, length 36 iptables configuration: [root@server etc]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ofis tcp -- anywhere anywhere tcp dpt:mysql ofis tcp -- anywhere anywhere tcp dpt:ftp DROP icmp -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination DROP icmp -- anywhere anywhere Chain ofis (2 references) target prot opt source destination ACCEPT all -- OUR_OFFICE_IP anywhere DROP all -- anywhere anywhere top: top - 03:12:19 up 400 days, 15:43, 3 users, load average: 1.49, 1.67, 2.61 Tasks: 751 total, 3 running, 748 sleeping, 0 stopped, 0 zombie Cpu(s): 8.2%us, 1.0%sy, 0.0%ni, 87.9%id, 2.1%wa, 0.1%hi, 0.7%si, 0.0%st Mem: 32949948k total, 26906844k used, 6043104k free, 4707676k buffers Swap: 10223608k total, 0k used, 10223608k free, 14255584k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 36 root 39 19 0 0 0 R 100.8 0.0 17:03.56 ksoftirqd/11 10552 root 15 0 11408 1460 676 R 5.7 0.0 0:00.04 top 7475 lighttpd 15 0 304m 22m 15m S 3.8 0.1 0:05.37 php-cgi 1294 root 10 -5 0 0 0 S 1.9 0.0 380:54.73 kjournald 3574 root 15 0 631m 11m 5464 S 1.9 0.0 0:00.65 node 7766 lighttpd 16 0 302m 19m 14m S 1.9 0.1 0:05.70 php-cgi 10237 postfix 15 0 52572 2216 1692 S 1.9 0.0 0:00.02 scache 1 root 15 0 10372 680 572 S 0.0 0.0 0:07.99 init 2 root RT -5 0 0 0 S 0.0 0.0 0:16.72 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.06 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 1:10.46 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:01.11 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 2:36.15 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.19 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 3:48.91 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.20 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 uname -a [root@server etc]# uname -a Linux thisis.oursite.com 2.6.18-238.19.1.el5 #1 SMP Fri Jul 15 07:31:24 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux arp -an [root@server downloads]# arp -an ? (77.92.136.194) at 00:25:90:04:F0:90 [ether] on eth0 ? (192.168.0.2) at 00:25:90:04:F0:91 [ether] on eth1 ? (77.92.136.193) at 00:23:9C:0B:CD:01 [ether] on eth0
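
    Two things worth separating here (a sketch, not a guaranteed mitigation): tcpdump attaches to the interface before netfilter, so ICMP that the INPUT chain drops still shows up in the capture, and the flood keeps consuming bandwidth until it is filtered upstream by the provider. On the host itself you can at least confirm the DROP rule is matching and tell the kernel to ignore the redirects.

    # the packet/byte counters on the "DROP icmp" rule should be climbing if it matches
    iptables -L INPUT -v -n --line-numbers

    # the capture shows ICMP redirects (type 5); have the kernel ignore them as well
    sysctl -w net.ipv4.conf.all.accept_redirects=0
    sysctl -w net.ipv4.conf.all.secure_redirects=0
    echo "net.ipv4.conf.all.accept_redirects = 0" >> /etc/sysctl.conf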

    Read the article

  • DNS Problems (NIGHTMARES!) with BIND and Virtualmin

    - by Nyxynyx
    I have a webserver (Ubuntu 12.04 with LAMP) using Virtualmin / Webmin. Because I just moved from a Cpanel system, I am having a nightmare configuring the DNS! Using intoDNS.com, the failed reports are: Mismatched NS records WARNING: One or more of your nameservers did not return any of your NS records. DNS servers responded ERROR: One or more of your nameservers did not respond: The ones that did not respond are: 123.123.123.123 213.251.188.141x Multiple Nameservers ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me. Missing nameservers reported by your nameserver You should already know that your NS records at your nameservers are missing, so here it is again: ns1.mydomain.com. sdns2.ovh.net. SOA record No valid SOA record came back! MX Records WWW A Record ERROR: I could not get any A records for www.mydomain.com! Step-by-Step of my Attempt In my domain registrar (Namecheap), I registered ns1.mydomain.com as a nameserver, pointing to the IP address of my web server which is running bind9. The domain is setup with DNS ns1.mydomain.com and sdns2.ovh.net. sdns2.ovh.net is a secondary DNS server (SLAVE and pointing mydomain.com to the IP address of my web server) Webserver domain: mydomain.com Webserver hostname: ns4000000.ip-123-123-123.net Webserver IP: 123.123.123.123 Under Virtualmin, I edited the default Virtual server template, BIND DNS records for new domains: ns1.mydomain.com Master DNS server hostname: ns1.mydomain.com Next I created a Virtual server using that server template. This is what I've done but its still not working! Any ideas? I've been stuck for days, thank you for all your help! service bind9 status * bind9 is running lsof -i :53 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME named 6966 bind 20u IPv6 338583 0t0 TCP *:domain (LISTEN) named 6966 bind 21u IPv4 338588 0t0 TCP localhost.localdomain:domain (LISTEN) named 6966 bind 22u IPv4 338590 0t0 TCP ns4000000.ip-123-123-123.net:domain (LISTEN) named 6966 bind 512u IPv6 338582 0t0 UDP *:domain named 6966 bind 513u IPv4 338587 0t0 UDP localhost.localdomain:domain named 6966 bind 514u IPv4 338589 0t0 UDP ns4000000.ip-123-123-123.net:domain /etc/resolv.con (Not sure how 213.186.33.99 got here) nameserver 127.0.0.1 nameserver 213.186.33.99 search ovh.net host 123.123.123.123 (my web server's IP) 13.60.245.198.in-addr.arpa domain name pointer ns4000000.ip-123-123-123.net. nslookup 213.186.33.99 Server: 127.0.0.1 Address: 127.0.0.1#53 Non-authoritative answer: 99.33.186.213.in-addr.arpa name = cdns.ovh.net. Authoritative answers can be found from: 33.186.213.in-addr.arpa nameserver = ns.ovh.net. 33.186.213.in-addr.arpa nameserver = dns.ovh.net. nslookup ns1.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached nslookup ns2.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached nslookup www.mydomain.com ;; Got SERVFAIL reply from 127.0.0.1, trying next server ;; connection timed out; no servers could be reached dig mydomain.com ; <<>> DiG 9.8.1-P1 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43540 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.com. 
IN A ;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 11 11:30:09 2012 ;; MSG SIZE rcvd: 30 dig ns1.mydomain.com ; <<>> DiG 9.8.1-P1 <<>> ns1.mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31254 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;ns1.mydomain.com. IN A ;; Query time: 0 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Thu Oct 11 11:30:16 2012 ;; MSG SIZE rcvd: 34 /etc/bind/named.conf include "/etc/bind/named.conf.options"; include "/etc/bind/named.conf.local"; include "/etc/bind/named.conf.default-zones"; /etc/bind/named.conf.default-zones zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; /etc/bind/named.conf.local zone "mydomain.com" { type master; file "/var/lib/bind/mydomain.com.hosts"; allow-transfer { 127.0.0.1; localnets; }; }; /etc/bind/named.conf.options options { directory "/var/cache/bind"; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; // allow-recursion { 127.0.0.1; }; // transfer-source; }; named-checkconf -z dns_master_load: /var/lib/bind/mydomain.com.hosts:21: unexpected end of line dns_master_load: /var/lib/bind/mydomain.com.hosts:20: unexpected end of input /var/lib/bind/mydomain.com.hosts: file does not end with newline zone mydomain.com/IN: loading from master file /var/lib/bind/mydomain.com.hosts failed: unexpected end of input zone mydomain.com/IN: not loaded due to errors. _default/mydomain.com/IN: unexpected end of input zone localhost/IN: loaded serial 2 zone 127.in-addr.arpa/IN: loaded serial 1 zone 0.in-addr.arpa/IN: loaded serial 1 zone 255.in-addr.arpa/IN: loaded serial 1 iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:20000 ACCEPT tcp -- anywhere anywhere tcp dpt:webmin ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:imaps ACCEPT tcp -- anywhere anywhere tcp dpt:imap2 ACCEPT tcp -- anywhere anywhere tcp dpt:pop3s ACCEPT tcp -- anywhere anywhere tcp dpt:pop3 ACCEPT tcp -- anywhere anywhere tcp dpt:ftp-data ACCEPT tcp -- anywhere anywhere tcp dpt:ftp ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:submission ACCEPT tcp -- anywhere anywhere tcp dpt:smtp ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
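
    The named-checkconf -z output above is the heart of it: /var/lib/bind/mydomain.com.hosts is truncated and does not end with a newline, so the mydomain.com zone never loads and every lookup returns SERVFAIL. A minimal zone file of the expected shape looks like this; the record values are placeholders apart from the IP taken from the question, so adjust the serial, TTLs and hostnames to your setup.

    $TTL 38400
    @       IN      SOA     ns1.mydomain.com. hostmaster.mydomain.com. (
                            2012101101      ; serial
                            10800           ; refresh
                            3600            ; retry
                            604800          ; expire
                            38400 )         ; negative-cache TTL
    @       IN      NS      ns1.mydomain.com.
    @       IN      NS      sdns2.ovh.net.
    @       IN      A       123.123.123.123
    ns1     IN      A       123.123.123.123
    www     IN      A       123.123.123.123

    After saving the file with a final newline, named-checkzone mydomain.com /var/lib/bind/mydomain.com.hosts should report the zone as loaded, and rndc reload (or service bind9 restart) makes named pick it up.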

    Read the article

  • Can't install any drivers at all on Windows 8. Error 0x000003F9

    - by ABarney
    I suddenly can't install any drivers at all on my Windows 8 Pro x64 install. It doesn't matter what kind of driver it is, nothing will install. Everything ends with error 0x000003F9: The system has attempted to load or restore a file into the registry, but the specified file is not in a registry file format. When Windows Update tries to install a driver, it just gives error code 800703F9 and says that "Windows Update ran into a problem." I've already done a scan of system files with sfc, tried another user account, done a chkdsk, and a few more things, but nothing works. The problem started when I tried to install drivers for my printer earlier today and suddenly started getting messages saying that "Windows Modules Installer has stopped working." I decided to restart and was being greeted with the recovery boot options. I shut the computer down, but when I booted it back up the same thing happened, so I did a repair your pc, and was able to boot into the OS properly. Then I rebooted into my external drive and did a chkdsk on the Windows 8 install that started acting funny. When I booted back into Windows 8, I wasn't able to install any drivers. They all keep coming up with the same error. And I can't seem to find anything at all on this issue. Any help would be much appreciated. Here's an install log from a failed driver install: >>> [Device Install (DiInstallDriver) - F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf] >>> Section start 2012/12/06 20:15:20.714 cmd: "F:\Windows\System32\InfDefaultInstall.exe" "F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf" inf: {SetupCopyOEMInf: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.716 sto: {Import Driver Package: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.719 sto: Driver Store = F:\Windows\System32\DriverStore [Online] (6.2.9200) sto: Driver Package = F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf sto: Architecture = amd64 sto: Flags = 0x00000000 inf: Provider = Google, Inc. inf: Class GUID = {3f966bd9-fa04-4ec5-991c-d326973b5128} inf: Driver Version = 08/27/2012,7.0.0.1 inf: Catalog File = androidwinusba64.cat inf: Version Flags = 0x00000011 ! sto: Unable to determine presence of driver package 'android_winusb.inf'. Error = 0x000003F9 flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WdfCoInstaller01009.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll'. flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WinUSBCoInstaller2.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll'. flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf'. flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\androidwinusba64.cat' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat'. pol: {Driver package policy check} 20:15:20.814 pol: {Driver package policy check - exit(0x00000000)} 20:15:20.814 sto: {Stage Driver Package: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.815 ! sto: Unable to determine presence of driver package 'android_winusb.inf'. 
Error = 0x000003F9 inf: {Query Configurability: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.820 inf: Driver package uses WDF. inf: Driver package 'android_winusb.inf' is configurable. inf: {Query Configurability: exit(0x00000000)} 20:15:20.823 flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WdfCoInstaller01009.dll'. flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WinUSBCoInstaller2.dll'. flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf'. flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat'. sto: {DRIVERSTORE IMPORT VALIDATE} 20:15:20.875 sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.881 sig: Key = android_winusb.inf sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat ! sig: Verifying file against specific (valid) catalog failed! (0x800b0109) ! sig: Error 0x800b0109: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider. sig: {_VERIFY_FILE_SIGNATURE exit(0x800b0109)} 20:15:20.893 sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.893 sig: Key = android_winusb.inf sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat sig: Success: File is signed in Authenticode(tm) catalog. sig: Error 0xe0000242: The publisher of an Authenticode(tm) signed catalog has not yet been established as trusted. sig: {_VERIFY_FILE_SIGNATURE exit(0xe0000242)} 20:15:20.907 ! sig: Driver package signer is unknown, but user trusts signer. sto: {DRIVERSTORE IMPORT VALIDATE: exit(0x00000000)} 20:15:22.701 sig: Signer Score = 0x0F000000 sig: Signer Name = Google Inc sto: {DRIVERSTORE IMPORT BEGIN} 20:15:22.702 sto: {DRIVERSTORE IMPORT BEGIN: exit(0x00000000)} 20:15:22.702 cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}} 20:15:22.703 cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3 cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64} 20:15:22.704 cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\amd64 cpy: {Copy Directory: exit(0x00000000)} 20:15:22.705 cpy: {Copy Directory: exit(0x00000000)} 20:15:22.706 ! sto: Unable to determine if driver package 'android_winusb.inf' is already registered. Error = 0x000003F9 idb: {Register Driver Package: F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf} 20:15:22.707 !!! 
idb: Failed to create driver package object 'android_winusb.inf_amd64_f7c4b212c9d862a3' in DRIVERS database node. Error = 0x000003F9 !!! idb: Failed to register driver package 'F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf'. Error = 0x000003F9 idb: {Register Driver Package: exit(0x000003f9)} 20:15:22.709 sto: {DRIVERSTORE IMPORT END} 20:15:22.710 sto: {DRIVERSTORE IMPORT END: exit(0x000003f9)} 20:15:22.710 sto: Rolled back driver package import. !!! sto: Failed to import driver package into Driver Store. Error = 0x000003F9 sto: {Stage Driver Package: exit(0x000003f9)} 20:15:22.736 sto: {Import Driver Package: exit(0x000003f9)} 20:15:22.766
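    Every failure in that log is the same 0x000003F9 (Win32 error 1017, ERROR_NOT_REGISTRY_FILE) hitting the DRIVERS database node, which matches the "not in a registry file format" message and points at the driver-database registry hive itself rather than at any individual driver package. As a rough way to confirm that before reinstalling, here is a C# sketch meant to be run from the external Windows environment mentioned in the post (so the broken install's hive is not in use) that checks whether the hive file can be mounted at all. The F:\ path comes from the drive letters in the log; the hive location and the temporary key name are assumptions to verify, not something the post states.

using System;
using System.Diagnostics;
using System.IO;

class DriverHiveCheck
{
    // Assumed location of the Windows 8 driver database hive on the broken install
    // (drive letter taken from the setupapi log above) -- verify before relying on it.
    const string HivePath = @"F:\Windows\System32\config\DRIVERS";

    static int RunReg(string args)
    {
        var psi = new ProcessStartInfo("reg.exe", args)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };
        using (var p = Process.Start(psi))
        {
            Console.Write(p.StandardOutput.ReadToEnd());
            Console.Write(p.StandardError.ReadToEnd());
            p.WaitForExit();
            return p.ExitCode;
        }
    }

    static void Main()
    {
        if (!File.Exists(HivePath))
        {
            Console.WriteLine("Hive file not found: " + HivePath);
            return;
        }
        Console.WriteLine("Hive file size: " + new FileInfo(HivePath).Length + " bytes");

        // Try to mount the offline hive under a temporary key (run from an elevated prompt
        // in the *working* Windows environment so the file is not in use). If reg.exe
        // rejects it as not being a registry file, that matches the 0x3F9 errors above.
        int rc = RunReg("load HKLM\\DriverHiveCheck \"" + HivePath + "\"");
        Console.WriteLine(rc == 0 ? "Hive loaded cleanly." : "Load failed, exit code " + rc);
        if (rc == 0) RunReg("unload HKLM\\DriverHiveCheck");
    }
}

    If reg.exe refuses that file as not being a registry hive, it would line up with the 0x3F9 errors, and restoring that one hive from a backup or restore point would be the thing to try next.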

    Read the article

  • What is the RSA SecurID packet format?

    - by bmatthews68
    I am testing a client application that authenticates using RSA SecurID hardware tokens. The authentication is failing and I am not finding any useful information in the log files. I am using Authentication Manager 8.0 and the Java SDK. I have a traffic capture which I would like to analyze with Wireshark to and from port 5500 on the authentication agent. But I can't find the packet format searching the internet or on the the RSA SecurCare knowledge base. Can anybody direct me to the packet format? Here is an extract from the rsa_api_debug.log file which dumps the UDP payload of the request and the response: [2013-11-06 15:11:08,602] main - b.a():? - Sending 508 bytes to 192.168.10.121; contents: 5c 5 0 3 3 5 0 0 2 0 0 0 0 0 1 ea 71 ee 50 6e 45 83 95 8 39 4 72 e 55 cf cc 62 6d d5 a4 10 79 89 13 d5 23 6a c1 ab 33 8 c3 a1 91 92 93 4f 1e 4 8d 2a 22 2c d0 c3 7 fc 96 5f ba bf 0 80 60 60 9d 1d 9c b9 f3 58 4b 43 18 5f e0 6d 5e f5 f4 5d df bf 41 b9 9 ae 46 a0 a9 66 2d c7 6 f6 d7 66 f1 4 f8 ad 8a 9f 4d 7e e5 9c 45 67 16 15 33 70 f0 1 d5 c0 38 39 f5 fd 5e 15 4f e3 fe ea 70 fa 30 c9 e0 18 ab 64 a9 fe 2c 89 78 a2 96 b6 76 3e 2e a2 ae 2e e0 69 80 8d 51 9 56 80 f4 1a 73 9a 70 f3 e7 c1 49 49 c3 41 3 c6 ce 3e a8 68 71 3f 2 b2 9b 27 8e 63 ce 59 38 64 d1 75 b7 b7 1f 62 eb 4d 1d de c7 21 e0 67 85 b e6 c3 80 0 60 54 47 e ef 3 f9 33 7b 78 e2 3e db e4 8e 76 73 45 3 38 34 1e dd 43 3e 72 a7 37 72 5 34 8e f4 ba 9d 71 6c e 45 49 fa 92 a f6 b bf 5 b 4f dc bd 19 0 7e d2 ef 94 d 3b 78 17 37 d9 ae 19 3a 7e 46 7d ea e4 3a 8c e1 e5 9 50 a2 eb df f2 57 97 bc f2 c3 a7 6f 19 7f 2c 1a 3f 94 25 19 4b b2 37 ed ce 97 f ae f ec c9 f5 be f0 8f 72 1c 34 84 1b 11 25 dd 44 8b 99 75 a4 77 3d e1 1d 26 41 58 55 5f d5 27 82 c d3 2a f8 4 aa 8d 5e e4 79 0 49 43 59 27 5e 15 87 a f4 c4 57 b6 e1 f8 79 3b d3 20 69 5e d0 80 6a 6b 9f 43 79 84 94 d0 77 b6 fc f 3 22 ca b9 35 c0 e8 7b e9 25 26 7f c9 fb e4 a7 fc bb b7 75 ac 7b bc f4 bb 4f a8 80 9b 73 da 3 94 da 87 e7 94 4c 80 b3 f1 2e 5b d8 2 65 25 bb 92 f4 92 e3 de 8 ee 2 30 df 84 a4 69 a6 a1 d0 9c e7 8e f 8 71 4b d0 1c 14 ac 7c c6 e3 2a 2e 2a c2 32 bc 21 c4 2f 4d df 9a f3 10 3e e5 c5 7f ad e4 fb ae 99 bf 58 0 20 0 0 0 0 0 0 0 0 0 0 [2013-11-06 15:11:08,602] main - b.b():? - Enterring getResponse [2013-11-06 15:11:08,618] main - b.a():? - Enterring getTimeoutValue(AceRequest AceAuthV4Request[AbstractAceRequest[ hdr=AcePacketHeader[Type=92 Ver=5 AppID=3 Enc=ENCRYPT Hi-Proto=5 Opt=0 CirID=0] created=1383750668571 trailer=AcePackeTrailer[nonce=39e7a607b517c4dd crc=722833884]] user=bmatthews node-sec-req=0 wpcodes=null resp-mac=0 m-resp-mac=0 client=192.168.10.3 passcode==ZTmY|? sec-sgmt=AceSecondarySegments[ cnt=3] response=none]) [2013-11-06 15:11:08,618] main - b.a():? - acm base timeout: 5 [2013-11-06 15:11:08,618] main - b.b():? - Timeout is 5000 [2013-11-06 15:11:08,618] main - b.b():? - Current retries: 0 [2013-11-06 15:11:10,618] main - b.b():? 
- Received 508 bytes from 192.168.10.121; contents: 6c 5 0 3 3 6 0 0 0 0 0 1 4d 18 55 ca 18 df 84 49 70 ee 24 4a a5 c3 1c 4e 36 d8 51 ad c7 ef 49 89 6e 2e 23 b4 7e 49 73 4 15 d f4 d5 c0 bf fc 72 5b be d1 62 be e0 de 23 56 bf 26 36 7f b f0 ba 42 61 9b 6f 4b 96 88 9c e9 86 df c6 82 e5 4c 36 ee dc 1e d8 a1 0 71 65 89 dc ca ee 87 ae d6 60 c 86 1c e8 ef 9f d9 b9 4c ed 7 55 77 f3 fc 92 61 f9 32 70 6f 32 67 4d fc 17 4e 7b eb c3 c7 8c 64 3f d0 d0 c7 86 ad 4e 21 41 a2 80 dd 35 ba 31 51 e2 a0 ef df 82 52 d0 a8 43 cb 7c 51 c 85 4 c5 b2 ec 8f db e1 21 90 f5 d7 1b d7 14 ca c0 40 c5 41 4e 92 ee 3 ec 57 7 10 45 f3 54 d7 e4 e6 6e 79 89 9a 21 70 7a 3f 20 ab af 68 34 21 b7 1b 25 e1 ab d 9f cd 25 58 5a 59 b1 b8 98 58 2f 79 aa 8a 69 b9 4c c1 7d 36 28 a3 23 f5 cc 2b ab 9e f a1 79 ab 90 fd 5f 76 9f d9 86 d1 fc 4c 7a 4 24 6d de 64 f1 53 22 b0 b7 91 9a 7c a2 67 2a 35 68 83 74 6a 21 ac eb f8 a2 29 53 21 2f 5a 42 d6 26 b8 f6 7f 79 96 5 3b c2 15 3a b d0 46 42 b7 74 4e 1f 6a ad f5 73 70 46 d3 f8 e a3 83 a3 15 29 6e 68 2 df 56 5c 88 8d 6c 2f ab 11 f1 5 73 58 ec 4 5f 80 e3 ca 56 ce 8 b9 73 7c 79 fc 3 ff f1 40 97 bb e3 fb 35 d1 8d ba 23 fc 2d 27 5b f7 be 15 de 72 30 b e d6 5c 98 e8 44 bd ed a4 3d 87 b8 9b 35 e9 64 80 9a 2a 3c a2 cf 3e 39 cb f6 a2 f4 46 c7 92 99 bc f7 4a de 7e 79 9d 9b d9 34 7f df 27 62 4f 5b ef 3a 4c 8d 2e 66 11 f7 8 c3 84 6e 57 ba 2a 76 59 58 78 41 18 66 76 fd 9d cb a2 14 49 e1 59 4a 6e f5 c3 94 ae 1a ba 51 fc 29 54 ba 6c 95 57 6b 20 87 cc b8 dc 5f 48 72 9c c0 2c dd 60 56 4e 4c 6c 1d 40 bd 4 a1 10 4e a4 b1 87 83 dd 1c f2 df 4c [2013-11-06 15:11:10,618] main - a.a():? - Response status is: 1 [2013-11-06 15:11:10,618] main - a.a():? - Authenticaton failed for bmatthews ! [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown invoked [2013-11-06 15:11:10,618] main - AuthSessionFactory.shutdown():? - RSA Authentication API shutdown successful
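    Since the packet format itself does not appear to be published, one practical step while waiting on RSA is to turn the byte dumps that rsa_api_debug.log already prints into raw binary so they can be examined next to the Wireshark capture. A small C# sketch for that is below; the only structure it assumes is what the log itself reports (Type=92/0x5c, Ver=5, AppID=3 at the front of the request), and because the header shows Enc=ENCRYPT the bulk of the payload is presumably encrypted with the node secret, so a dissector is unlikely to get far without it.

using System;
using System.IO;
using System.Linq;

class SdPayloadDump
{
    static void Main()
    {
        // Paste the byte dump from rsa_api_debug.log here.
        // Single-digit tokens are hex values, e.g. "5" means 0x05.
        string dump = "5c 5 0 3 3 5 0 0 2 0 0 0 0 0 1 ea";   // truncated for the example

        byte[] bytes = dump.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                           .Select(t => Convert.ToByte(t, 16))
                           .ToArray();

        // Write the raw payload so it can be opened in a hex editor or compared
        // byte-for-byte with the UDP payload captured on port 5500.
        File.WriteAllBytes("securid_request.bin", bytes);

        // The log reports Type=92 (0x5c), Ver=5, AppID=3 for this packet, which lines up
        // with the leading bytes of the dump; treat any further field boundaries as guesses.
        Console.WriteLine("First 8 bytes: " + BitConverter.ToString(bytes, 0, Math.Min(8, bytes.Length)));
        Console.WriteLine("Total length : " + bytes.Length + " bytes");
    }
}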

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members as eth0 (real if) and dummy0 (dummy.ko if). When I ping this machine, I receive duplicate replies as: # ping SERVERA PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data. 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!) 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!) Using tcpdump on SERVERA, I was able to see icmp echo replies being sent from eth0 and br0 itself as follows (oddly two echo request packets arrive "from" my Windows box myhost): 23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 It's worth noting, testing reveals that hosts on the same physical switch do not see DUP icmp echo responses (a host on the same VLAN on another switch does see a dup icmp echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lay in the stack on linux, not the switch, but am opened to any suggestions. The system is running centos6/el6 kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces? Thanks, Matt [edit] Quick note: It was recommended in #linux to: [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface? [08:59] <lkeijser> also set arp_announce to 2 for that interface [09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as: [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications? [09:31] <whaffle> What would you make think so? [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job [09:33] <mbrownnyc> I was referred here from #linux. 
any assistance is appreciated [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function). [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table. [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you. [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot. [09:36] <mbrownnyc> whaffle: I speak without the bridge [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it. [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge? [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless. [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC? [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct? [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC. [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes [09:54] <mbrownnyc> thanks whaffle [edit2] After removing the bridge, and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate icmp echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again. [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries. [edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So, the flow (must have) gone like this: ICMP echo request comes in through the mirrored uplink port. (the real) ICMP echo request is received by the NIC (the mirrored) ICMP echo request is received by the NIC ICMP echo reply is sent for both. I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.
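    Given how this turned out (a mirrored uplink port feeding a second copy of every echo request into the same NIC), a quick way to tell "two requests in, one reply each" apart from "one request in, two replies out" is to count how many copies of each ICMP id/sequence pair actually arrive. Below is a rough C# raw-socket sketch for running on the pinging Windows host; it needs an elevated prompt, raw-socket receive behaviour differs somewhat between Windows and Linux, and the interface address passed on the command line is just a placeholder.

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class IcmpDupWatch
{
    static void Main(string[] args)
    {
        // Local address of the interface doing the pinging, e.g. "192.168.100.50".
        var local = IPAddress.Parse(args[0]);

        using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp))
        {
            sock.Bind(new IPEndPoint(local, 0));
            var seen = new Dictionary<string, int>();
            var buf = new byte[4096];

            while (true)
            {
                int n = sock.Receive(buf);
                if (n < 28) continue;                  // IP header (>= 20) + ICMP echo header (8)
                int ihl = (buf[0] & 0x0F) * 4;         // actual IP header length
                int type = buf[ihl];                   // 0 = echo reply, 8 = echo request
                int id  = (buf[ihl + 4] << 8) | buf[ihl + 5];
                int seq = (buf[ihl + 6] << 8) | buf[ihl + 7];
                string key = type + "/" + id + "/" + seq;

                seen[key] = seen.ContainsKey(key) ? seen[key] + 1 : 1;
                if (seen[key] > 1)
                    Console.WriteLine("Duplicate: type={0} id={1} seq={2} copies={3}",
                                      type, id, seq, seen[key]);
            }
        }
    }
}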

    Read the article

  • Unable to connect to Linux (Virtual OS-vmware) through Putty on Windows

    - by RBA
    Hi, I want to access my Linux box (a virtual OS running under VMware) through PuTTY on Windows using the Run command: putty -ssh -P 22 192.168.171.130 ... but it is returning an error message and cannot connect. A few days back I was able to connect, but today I cannot. Why?
Windows IP Configuration
Host Name . . . . . . . . . . . . : rba7791fd466
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Ethernet adapter VMware Network Adapter VMnet1:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet1
Physical Address. . . . . . . . . : 00-50-56-C0-00-01
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.234.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
Ethernet adapter Wireless Network Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Dell Wireless 1395 WLAN Mini-Card
Physical Address. . . . . . . . . : 00-24-2B-60-A0-88
Dhcp Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
IP Address. . . . . . . . . . . . : 10.0.0.2
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.0.0.1
DHCP Server . . . . . . . . . . . : 10.0.0.1
DNS Servers . . . . . . . . . . . : 10.0.0.1
Lease Obtained. . . . . . . . . . : Friday, August 28, 2009 4:11:09 AM
Lease Expires . . . . . . . . . . : Saturday, August 29, 2009 4:11:09 AM
Ubuntu Configuration
eth0 inet addr:192.168.171.130
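    One thing worth noting in that output: the host's VMnet1 adapter sits on 192.168.234.1/24 while the address being dialed is 192.168.171.130, so the guest's IP may simply have changed (VMware host-only/NAT DHCP ranges do move around). Before digging into PuTTY itself, a plain TCP reachability check of port 22 separates "nothing answers at that address" from "the host answers but refuses SSH". A small C# sketch follows; the address is just the one from the putty command line.

using System;
using System.Net.Sockets;

class SshReachability
{
    static void Main()
    {
        string host = "192.168.171.130";   // the address used in the putty command line
        int port = 22;

        using (var client = new TcpClient())
        {
            var attempt = client.BeginConnect(host, port, null, null);
            bool done = attempt.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(5));

            if (!done)
            {
                Console.WriteLine("Timed out: nothing answering at {0} (wrong or changed guest IP?)", host);
                return;
            }
            try
            {
                client.EndConnect(attempt);
                Console.WriteLine("TCP {0}:{1} is open; PuTTY or sshd configuration is the next suspect.", host, port);
            }
            catch (SocketException ex)
            {
                Console.WriteLine("Connect failed: {0} (host reachable but port closed, or unreachable)", ex.SocketErrorCode);
            }
        }
    }
}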

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause. I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, it seems that traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file. Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem: fragment 1400 mssfix This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location. One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like: [~]$ tracepath -n 192.168.100.91 1: 192.168.100.90 0.039ms pmtu 1500 1: 192.168.100.91 40.823ms reached 1: 192.168.100.91 19.846ms reached Resume: pmtu 1500 hops 1 back 64 The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try? Edit: I've come back and looked at this a bit further, and have found only more confounding information: I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400 byte count, so the fragmentation seems to be functioning properly. 
To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command: ping <host> -s 1450 -M do This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So, maybe this isn't an MTU issue at all. I'm just confused as to what else it might be! Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls. My question here would be: Does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug maybe? I'm assuming that if that were so, then I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.
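    The manual ping -s 1450 -M do test can be turned into a quick binary search for the largest payload that crosses the tunnel with the don't-fragment bit set, which gives a concrete path-MTU number to compare against the fragment/mssfix settings. A rough C# version of that probe using the Ping class is sketched below; 192.168.100.91 is just the peer from the tracepath run, and on a lossy link a single failed probe can be ordinary packet loss rather than an MTU limit, so it is worth repeating the run.

using System;
using System.Net.NetworkInformation;

class PathMtuProbe
{
    static void Main()
    {
        string target = "192.168.100.91";          // the VPN peer from the tracepath output
        var ping = new Ping();
        var opts = new PingOptions(64, true);      // TTL 64, don't-fragment bit set

        int low = 500, high = 1472, best = 0;      // 1472 payload + 28 bytes of headers = 1500
        while (low <= high)
        {
            int size = (low + high) / 2;
            PingReply reply = ping.Send(target, 2000, new byte[size], opts);
            Console.WriteLine("payload {0,4} -> {1}", size, reply.Status);
            if (reply.Status == IPStatus.Success)
            {
                best = size;
                low = size + 1;                    // fits: try larger
            }
            else
            {
                high = size - 1;                   // dropped or too big: try smaller
            }
        }
        Console.WriteLine("Largest unfragmented ICMP payload: {0} (path MTU around {1})", best, best + 28);
    }
}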

    Read the article

  • Windows using the DNS suffix search list on all lookups, even valid FQDNs. How to stop this?

    - by RealityGone
    When doing DNS lookups (specifically using nslookup; for some reason most other things are not affected), Windows XP Pro SP3 is using the DNS suffix search list for every single one, even for fully qualified domain names. For example, I look up "www.microsoft.com" but Windows actually asks for "www.microsoft.com.eondream.com" (eondream.com is my primary domain). I can fix the issue by removing the Primary DNS suffix, but it seems to me that the DNS suffix search list should only apply to short, unqualified names (where dots=0 or something). I'm sure I have a misconfiguration somewhere in Windows but I don't know where; I've changed every option I can think of or find. Below is the output of ipconfig /all and nslookup (with debug & db2 enabled). This is using a static IP & (internal) DNS server.
C:\ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : frayedlogic
Primary Dns Suffix . . . . . . . : eondream.com
Node Type . . . . . . . . . . . . : Unknown
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : eondream.com
Ethernet adapter Wireless Network Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Dell Wireless 1390 WLAN Mini-Card
Physical Address. . . . . . . . . : 00-1B-FC-29-EB-6B
Dhcp Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 192.168.13.32
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.13.13
DNS Servers . . . . . . . . . . . : 192.168.19.19
C:\nslookup
Default Server: shardik.eondream.com
Address: 192.168.19.19
set debug
set db2
www.microsoft.com
Server: shardik.eondream.com
Address: 192.168.19.19
------------
Got answer:
HEADER:
opcode = QUERY, id = 2, rcode = NOERROR
header flags: response, want recursion, recursion avail.
questions = 1, answers = 1, authority records = 0, additional = 0
QUESTIONS:
www.microsoft.com.eondream.com, type = A, class = IN
ANSWERS:
- www.microsoft.com.eondream.com
internet address = 208.69.36.132
ttl = 0 (0 secs)
------------
Non-authoritative answer:
Name: www.microsoft.com.eondream.com
Address: 208.69.36.132
(Note: it resolves to that IP because I use the OpenDNS service and that is their suggestion page, or whatever you want to call it.) If I am reading the nslookup output correctly then it is not a problem with my DNS server, because Windows is actually asking for the incorrect domain.
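    One detail that helps narrow this down: a name written with a trailing dot is treated as already fully qualified, and nslookup does its own lookups and applies the suffix list itself, which the normal Windows resolver may not do for multi-label names. A short C# check through the ordinary resolver is sketched below (treat the trailing-dot behaviour as an assumption to confirm on the affected box); if the relative form comes back as the OpenDNS guide page (208.69.36.132) while the absolute form returns a real Microsoft address, the suffix is being appended below nslookup as well.

using System;
using System.Net;

class SuffixCheck
{
    static void Resolve(string name)
    {
        try
        {
            IPAddress[] addrs = Dns.GetHostAddresses(name);
            Console.WriteLine("{0,-28} -> {1}", name, string.Join(", ", Array.ConvertAll(addrs, a => a.ToString())));
        }
        catch (Exception ex)
        {
            Console.WriteLine("{0,-28} -> {1}", name, ex.Message);
        }
    }

    static void Main()
    {
        Resolve("www.microsoft.com");    // may get the suffix search list appended
        Resolve("www.microsoft.com.");   // trailing dot marks the name as absolute
        Resolve("shardik");              // short name: the one case where the suffix list should apply
    }
}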

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx Creating the JavaScript Code to Test To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together: MvcApplication1\Scripts\Math.js function addNumbers(a, b) { return 5; } Notice that the addNumbers() method always returns the value 5. Right-now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (Save the file in your actual MVC application and not your MVC test application). Creating the JavaScript Test Helper Class To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods: LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests. ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test. Here’s the code for the JavaScriptTestHelper class: JavaScriptTestHelper.cs   using System; using System.IO; using Microsoft.VisualStudio.TestTools.UnitTesting; using MSScriptControl; namespace MvcApplication1.Tests { public class JavaScriptTestHelper : IDisposable { private ScriptControl _sc; private TestContext _context; /// <summary> /// You need to use this helper with Unit Tests and not /// Basic Unit Tests because you need a Test Context /// </summary> /// <param name="testContext">Unit Test Test Context</param> public JavaScriptTestHelper(TestContext testContext) { if (testContext == null) { throw new ArgumentNullException("TestContext"); } _context = testContext; _sc = new ScriptControl(); _sc.Language = "JScript"; _sc.AllowUI = false; } /// <summary> /// Load the contents of a JavaScript file into the /// Script Engine. /// </summary> /// <param name="path">Path to JavaScript file</param> public void LoadFile(string path) { var fileContents = File.ReadAllText(path); _sc.AddCode(fileContents); } /// <summary> /// Pass the path of the test that you want to execute. 
/// </summary> /// <param name="testMethodName">JavaScript function name</param> public void ExecuteTest(string testMethodName) { dynamic result = null; try { result = _sc.Run(testMethodName, new object[] { }); } catch { var error = ((IScriptControl)_sc).Error; if (error != null) { var description = error.Description; var line = error.Line; var column = error.Column; var text = error.Text; var source = error.Source; if (_context != null) { var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column); _context.WriteLine(details); } } throw new AssertFailedException(error.Description); } } public void Dispose() { _sc = null; } } }     Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (These are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests). Creating the JavaScript Unit Test Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder: MvcApplication1.Tests\JavaScriptTests\MathTest.js /// <reference path="JavaScriptUnitTestFramework.js"/> function testAddNumbers() { // Act var result = addNumbers(1, 3); // Assert assert.areEqual(4, result, "addNumbers did not return right value!"); }   The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptnitTestFramework.js to the same folder as the MathTest.js file: MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js var assert = { areEqual: function (expected, actual, message) { if (expected !== actual) { throw new Error("Expected value " + expected + " is not equal to " + actual + ". " + message); } } }; There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests. Deploying the JavaScript Test Files This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests: You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTest folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog. Creating the Visual Studio Unit Test The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (Do not select the Basic Unit Test project item): The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test: [TestMethod] public void TestAddNumbers() { var jsHelper = new JavaScriptTestHelper(this.TestContext); // Load JavaScript files jsHelper.LoadFile("JavaScriptUnitTestFramework.js"); jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js"); jsHelper.LoadFile("MathTest.js"); // Execute JavaScript Test jsHelper.ExecuteTest("testAddNumbers"); } This code uses the JavaScriptTestHelper to load three files: JavaScripUnitTestFramework.js – Contains the assert functions. Math.js – Contains the addNumbers() function from your MVC application which is being tested. MathTest.js – Contains the JavaScript unit test function. Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function. Running the Visual Studio JavaScript Unit Test After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests: (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.) The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed: Summary The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
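    One small refinement if this grows beyond a test or two: the LoadFile/ExecuteTest plumbing from the TestAddNumbers() method can be pulled into a base class so that each JavaScript test class only lists the files it needs. A sketch along those lines follows; it reuses the JavaScriptTestHelper shown above, and the relative paths simply mirror the ones used in this post, so adjust them to your own layout.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MvcApplication1.Tests
{
    public abstract class JavaScriptTestBase
    {
        protected JavaScriptTestHelper Js { get; private set; }

        // MSTest populates this property before each test runs.
        public TestContext TestContext { get; set; }

        [TestInitialize]
        public void InitializeScriptEngine()
        {
            Js = new JavaScriptTestHelper(TestContext);
            // The assertion helpers are needed by every JavaScript test.
            Js.LoadFile("JavaScriptUnitTestFramework.js");
        }

        [TestCleanup]
        public void TearDownScriptEngine()
        {
            if (Js != null) Js.Dispose();
        }
    }

    [TestClass]
    public class MathTests : JavaScriptTestBase
    {
        [TestMethod]
        public void TestAddNumbers()
        {
            Js.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");  // code under test
            Js.LoadFile("MathTest.js");                                // the JavaScript unit test
            Js.ExecuteTest("testAddNumbers");
        }
    }
}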

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple: # zfs create -o mountpoint=/export/repo rpool/ai/repo # zfs create rpool/ai/repo/s11 # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt # rsync -aP /mnt/repo /export/repo/s11 # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@fcs # pkgrepo info -s /export/repo/sol11/repo PUBLISHER PACKAGES STATUS UPDATED solaris 4292 online 2012-03-12T20:47:15.378639Z That's all there's to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher: # pkg set-publisher -g file:///export/repo/sol11/repo solaris In case I'd want to access this repository through a (virtual) network, i'll now quickly activate the repository-service: # svccfg -s application/pkg/server \ setprop pkg/inst_root=/export/repo/sol11/repo # svccfg -s application/pkg/server setprop pkg/readonly=true # svcadm refresh application/pkg/server # svcadm enable application/pkg/server That's all you need - now point your browser to http://localhost/ to view your beautiful repository-server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: A full repository with all updates and at the same time incremental ones up to each of the updates: # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*' # umount /mnt # pkgrepo rebuild -s /export/repo/sol11/repo # zfs snapshot rpool/ai/repo/sol11@sru4 # zfs set snapdir=visible rpool/ai/repo/sol11 # svcadm restart svc:/application/pkg/server:default The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now lets continue with the AI server.  Just a little bit of reading in the dokumentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first: # zfs create -o mountpoint=/export/install rpool/ai/install # zfs create -o mountpoint=/zones rpool/zones # zonecfg -z ai-server zonecfg:ai-server> create create: Using system default template 'SYSdefault' zonecfg:ai-server> set zonepath=/zones/ai-server zonecfg:ai-server> add dataset zonecfg:ai-server:dataset> set name=rpool/ai/install zonecfg:ai-server:dataset> set alias=install zonecfg:ai-server:dataset> end zonecfg:ai-server> commit zonecfg:ai-server> exit # zoneadm -z ai-server install # zoneadm -z ai-server boot ; zlogin -C ai-server Give it a hostname and IP address at first boot, and there's the Zone.  For a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now: #zlogin ai-server root@ai-server:~# pkg install install/installadm root@ai-server:~# installadm create-service -n x86-fcs -a i386 \ -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \ -d /export/install/fcs -i 192.168.2.20 -c 3 With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option. Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query it's configuration interactively at first boot.  This makes it very clear that an AI-server is really only a boot-server.  The true source of packets to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients. The configuration of a client is controlled by manifests and profiles.  The manifest controls which packets are installed and how the filesystems are layed out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us it's default one.  Then modify that to our liking and give it back to the installserver to use: root@ai-server:~# mkdir -p /export/install/configs/manifests root@ai-server:~# cd /export/install/configs/manifests root@ai-server:~# installadm export -n x86-fcs -m orig_default \ -o orig_default.xml root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml root@ai-server:~# vi s11-fcs.small.local.xml root@ai-server:~# more s11-fcs.small.local.xml <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="S11 Small fcs local"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <destination> <image> <!-- Specify locales to install --> <facet set="false">facet.locale.*</facet> <facet set="true">facet.locale.de</facet> <facet set="true">facet.locale.de_DE</facet> <facet set="true">facet.locale.en</facet> <facet set="true">facet.locale.en_US</facet> </image> </destination> <source> <publisher name="solaris"> <origin name="http://192.168.2.12/"/> </publisher> </source> <!-- By default the latest build available, in the specified IPS repository, is installed. If another build is required, the build number has to be appended to the 'entire' package in the following form: <name>pkg:/[email protected]#</name> --> <software_data action="install"> <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name> <name>pkg:/group/system/solaris-small-server</name> </software_data> </software> </ai_instance> </auto_install> root@ai-server:~# installadm create-manifest -n x86-fcs -d \ -f ./s11-fcs.small.local.xml root@ai-server:~# installadm list -m -n x86-fcs Manifest Status Criteria -------- ------ -------- S11 Small fcs local Default None orig_default Inactive None The major points in this new manifest are: Install "solaris-small-server" Install a few locales less than the default.  I'm not that fluid in French or Japanese... Use my own package service as publisher, running on IP address 192.168.2.12 Install the initial release of Solaris 11:  pkg:/[email protected],5.11-0.175.0.0.0.2.0 Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on: root@ai-server:~# mkdir -p /export/install/configs/profiles root@ai-server:~# cd /export/install/configs/profiles root@ai-server:~# sysconfig create-profile -o default.xml root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml root@ai-server:~# cp default.xml user.xml root@ai-server:~# vi general.xml mars.xml user.xml root@ai-server:~# more general.xml mars.xml user.xml :::::::::::::: general.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="Europe/Berlin"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="vt100"/> </property_group> </instance> </service> <service version="1" type="service" name="network/physical"> <instance enabled="true" name="default"> <property_group type="application" name="netcfg"> <propval type="astring" name="active_ncp" value="DefaultFixed"/> </property_group> </instance> </service> <service version="1" type="service" name="system/name-service/switch"> <property_group type="application" name="config"> <propval type="astring" name="default" value="files"/> <propval type="astring" name="host" value="files dns"/> <propval type="astring" name="printer" value="user files"/> </property_group> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="system/name-service/cache"> <instance enabled="true" name="default"/> </service> <service version="1" type="service" name="network/dns/client"> <property_group type="application" name="config"> <property type="net_address" name="nameserver"> <net_address_list> <value_node value="192.168.2.1"/> </net_address_list> </property> </property_group> <instance enabled="true" name="default"/> </service> </service_bundle> :::::::::::::: mars.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="network/install"> <instance enabled="true" name="default"> <property_group type="application" name="install_ipv4_interface"> <propval type="astring" name="address_type" value="static"/> <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/> <propval type="astring" name="name" value="net0/v4"/> <propval type="net_address_v4" name="default_route" value="192.168.2.1"/> </property_group> <property_group type="application" name="install_ipv6_interface"> <propval type="astring" name="stateful" value="yes"/> <propval type="astring" name="stateless" value="yes"/> <propval type="astring" 
name="address_type" value="addrconf"/> <propval type="astring" name="name" value="net0/v6"/> </property_group> </instance> </service> <service version="1" type="service" name="system/identity"> <instance enabled="true" name="node"> <property_group type="application" name="config"> <propval type="astring" name="nodename" value="mars"/> </property_group> </instance> </service> </service_bundle> :::::::::::::: user.xml :::::::::::::: <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="role"/> </property_group> <property_group type="application" name="user_account"> <propval type="astring" name="login" value="stefan"/> <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/> <propval type="astring" name="type" value="normal"/> <propval type="astring" name="description" value="Stefan Hinker"/> <propval type="count" name="uid" value="12345"/> <propval type="count" name="gid" value="10"/> <propval type="astring" name="shell" value="/usr/bin/bash"/> <propval type="astring" name="roles" value="root"/> <propval type="astring" name="profiles" value="System Administrator"/> <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/> </property_group> </instance> </service> </service_bundle> root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \ -c ipv4=192.168.2.100 root@ai-server:~# installadm list -p Service Name Profile ------------ ------- x86-fcs general.xml mars.xml user.xml root@ai-server:~# installadm list -n x86-fcs -p Profile Criteria ------- -------- general.xml None mars.xml ipv4 = 192.168.2.100 user.xml None Here's the idea behind these files: "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same. "user.xml" only contains user definitions.  That is, a root password and a primary user.Both of these profiles will be valid for all clients (for now). "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step: root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs root@ai-server:~# vi /etc/inet/dhcpd4.conf root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf host 080027AA3DB1 { hardware ethernet 08:00:27:AA:3D:B1; fixed-address 192.168.2.100; filename "01080027AA3DB1"; } This completes the client preparations.  I manually added the IP-Address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP-replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.
root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0
title Oracle Solaris 11 11/11 Text Installer and command line
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive
title Oracle Solaris 11 11/11 Automated Install
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive
Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?
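Because every additional client differs only in its hostname, MAC address and IP address, the per-client artifacts lend themselves to being generated. Here is a small Python sketch of that idea (an illustration, not part of the original walkthrough; the file names, templates and client list are assumptions): it prints the dhcpd4.conf host block and writes a minimal per-client identity profile (the static-network part of mars.xml could be templated the same way). The installadm create-profile and create-client calls are still run as shown above.

# Sketch: generate the per-client AI artifacts from a simple client list.
# The templates mirror the hand-written examples above; adjust paths and
# values to your own environment.

DHCP_HOST = """host {macflat} {{
  hardware ethernet {mac};
  fixed-address {ip};
  filename "01{macflat}";
}}
"""

IDENTITY_PROFILE = """<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="{hostname}"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
"""

clients = [
    {"hostname": "mars", "mac": "08:00:27:AA:3D:B1", "ip": "192.168.2.100"},
    # add further clients here
]

for c in clients:
    macflat = c["mac"].replace(":", "").upper()
    # host block to paste into /etc/inet/dhcpd4.conf
    print(DHCP_HOST.format(macflat=macflat, mac=c["mac"], ip=c["ip"]))
    # per-client identity profile, e.g. mars.xml
    with open(c["hostname"] + ".xml", "w") as fh:
        fh.write(IDENTITY_PROFILE.format(hostname=c["hostname"]))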

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
Hi,
Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby subject to some prerequisites:
- Database version should be at least 11gR1
- A dummy instance started on the standby node (seeking to improve this so it won't be needed)
- The Broker configuration hasn't been removed
- In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
- Some environment variables set up by the environment db script (like ORACLE_HOME, PATH, ...)
- The directory tree should not have been modified on the standby host
We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!
#!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role." 
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.
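As a side note (not part of the original script), the final health check, running dgmgrl and looking for SUCCESS in the output of show configuration, is handy on its own. A minimal Python sketch, assuming dgmgrl is on the PATH and that the credentials and standby TNS alias come from environment variables (DB_USR, DB_PWD and STANDBY_TNS are illustrative names, not from the original post):

import os
import subprocess

def broker_is_healthy(connect_string):
    # Feed "show configuration verbose" to dgmgrl and look for SUCCESS,
    # mirroring the dgmgrl | grep SUCCESS step at the end of the ksh script.
    commands = "connect " + connect_string + "\nshow configuration verbose;\nexit\n"
    result = subprocess.run(
        ["dgmgrl", "/nolog"],      # assumes dgmgrl is on the PATH
        input=commands,
        capture_output=True,
        text=True,
        timeout=120,
    )
    return "SUCCESS" in result.stdout

if __name__ == "__main__":
    conn = "{}/{}@{}".format(
        os.environ["DB_USR"], os.environ["DB_PWD"], os.environ["STANDBY_TNS"]
    )
    print("DATA GUARD OK" if broker_is_healthy(conn) else "Broker status is not SUCCESS")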

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. This goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This Javacript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and login to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
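Stripped of the ASP.NET plumbing, the synchronizer token comparison at the heart of the CSRF defence is tiny. A framework-agnostic sketch in Python (illustrative only; the function names are made up), mirroring the missing-token rejection and constant-time comparison performed by the action filter above:

import hmac
import secrets
from typing import Optional

def issue_token():
    # Random anti-forgery token to place both in the session cookie and in the page/header.
    return secrets.token_urlsafe(32)

def request_is_forgery_safe(cookie_token: Optional[str], header_token: Optional[str]) -> bool:
    # Reject outright if either token is missing, as the action filter does.
    if not cookie_token or not header_token:
        return False
    # Constant-time comparison of the cookie token and the header token.
    return hmac.compare_digest(cookie_token, header_token)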

    Read the article

  • Using the jQuery UI Library in a MVC 3 Application to Build a Dialog Form

    - by ChrisD
    Using a simulated dialog window is a nice way to handle inline data editing. The jQuery UI has a UI widget for a dialog window that makes it easy to get up and running with it in your application. With the release of ASP.NET MVC 3, Microsoft included the jQuery UI scripts and files in the MVC 3 project templates for Visual Studio. With the release of the MVC 3 Tools Update, Microsoft implemented the inclusion of those with NuGet as packages. That means we can get up and running using the latest version of the jQuery UI with minimal effort. To the code! Another that might interested you about JQuery Mobile and ASP.NET MVC 3 with C#. If you are starting with a new MVC 3 application and have the Tools Update then you are a NuGet update and a <link> and <script> tag away from adding the jQuery UI to your project. If you are using an existing MVC project you can still get the jQuery UI library added to your project via NuGet and then add the link and script tags. Assuming that you have pulled down the latest version (at the time of this publish it was 1.8.13) you can add the following link and script tags to your <head> tag: < link href = "@Url.Content(" ~ / Content / themes / base / jquery . ui . all . css ")" rel = "Stylesheet" type = "text/css" /> < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > The jQuery UI library relies upon the CSS scripts and some image files to handle rendering of its widgets (you can choose a different theme or role your own if you like). Adding these to the stock _Layout.cshtml file results in the following markup: <!DOCTYPE html> < html > < head >     < meta charset = "utf-8" />     < title > @ViewBag.Title </ title >     < link href = "@Url.Content(" ~ / Content / Site . css ")" rel = "stylesheet" type = "text/css" />     <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />     <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>     <script src="@Url.Content("~/Scripts/modernizr-1.7.min . js ")" type = "text/javascript" ></ script >     < script src = "@Url.Content(" ~ / Scripts / jquery-ui-1 . 8 . 13 . min . js ")" type = "text/javascript" ></ script > </ head > < body >     @RenderBody() </ body > </ html > Our example will involve building a list of notes with an id, title and description. Each note can be edited and new notes can be added. The user will never have to leave the single page of notes to manage the note data. The add and edit forms will be delivered in a jQuery UI dialog widget and the note list content will get reloaded via an AJAX call after each change to the list. To begin, we need to craft a model and a data management class. We will do this so we can simulate data storage and get a feel for the workflow of the user experience. The first class named Note will have properties to represent our data model. namespace Website . Models {     public class Note     {         public int Id { get ; set ; }         public string Title { get ; set ; }         public string Body { get ; set ; }     } } The second class named NoteManager will be used to set up our simulated data storage and provide methods for querying and updating the data. We will take a look at the class content as a whole and then walk through each method after. using System . Collections . ObjectModel ; using System . Linq ; using System . Web ; namespace Website . 
Models {     public class NoteManager     {         public Collection < Note > Notes         {             get             {                 if ( HttpRuntime . Cache [ "Notes" ] == null )                     this . loadInitialData ();                 return ( Collection < Note >) HttpRuntime . Cache [ "Notes" ];             }         }         private void loadInitialData ()         {             var notes = new Collection < Note >();             notes . Add ( new Note                           {                               Id = 1 ,                               Title = "Set DVR for Sunday" ,                               Body = "Don't forget to record Game of Thrones!"                           });             notes . Add ( new Note                           {                               Id = 2 ,                               Title = "Read MVC article" ,                               Body = "Check out the new iwantmymvc.com post"                           });             notes . Add ( new Note                           {                               Id = 3 ,                               Title = "Pick up kid" ,                               Body = "Daughter out of school at 1:30pm on Thursday. Don't forget!"                           });             notes . Add ( new Note                           {                               Id = 4 ,                               Title = "Paint" ,                               Body = "Finish the 2nd coat in the bathroom"                           });             HttpRuntime . Cache [ "Notes" ] = notes ;         }         public Collection < Note > GetAll ()         {             return Notes ;         }         public Note GetById ( int id )         {             return Notes . Where ( i => i . Id == id ). FirstOrDefault ();         }         public int Save ( Note item )         {             if ( item . Id <= 0 )                 return saveAsNew ( item );             var existingNote = Notes . Where ( i => i . Id == item . Id ). FirstOrDefault ();             existingNote . Title = item . Title ;             existingNote . Body = item . Body ;             return existingNote . Id ;         }         private int saveAsNew ( Note item )         {             item . Id = Notes . Count + 1 ;             Notes . Add ( item );             return item . Id ;         }     } } The class has a property named Notes that is read only and handles instantiating a collection of Note objects in the runtime cache if it doesn't exist, and then returns the collection from the cache. This property is there to give us a simulated storage so that we didn't have to add a full blown database (beyond the scope of this post). The private method loadInitialData handles pre-filling the collection of Note objects with some initial data and stuffs them into the cache. Both of these chunks of code would be refactored out with a move to a real means of data storage. The GetAll and GetById methods access our simulated data storage to return all of our notes or a specific note by id. The Save method takes in a Note object, checks to see if it has an Id less than or equal to zero (we assume that an Id that is not greater than zero represents a note that is new) and if so, calls the private method saveAsNew . If the Note item sent in has an Id , the code finds that Note in the simulated storage, updates the Title and Description , and returns the Id value. The saveAsNew method sets the Id , adds it to the simulated storage, and returns the Id value. 
The increment of the Id is simulated here by getting the current count of the note collection and adding 1 to it. The setting of the Id is the only other chunk of code that would be refactored out when moving to a different data storage approach. With our model and data manager code in place we can turn our attention to the controller and views. We can do all of our work in a single controller. If we use a HomeController , we can add an action method named Index that will return our main view. An action method named List will get all of our Note objects from our manager and return a partial view. We will use some jQuery to make an AJAX call to that action method and update our main view with the partial view content returned. Since the jQuery AJAX call will cache the call to the content in Internet Explorer by default (a setting in jQuery), we will decorate the List, Create and Edit action methods with the OutputCache attribute and a duration of 0. This will send the no-cache flag back in the header of the content to the browser and jQuery will pick that up and not cache the AJAX call. The Create action method instantiates a new Note model object and returns a partial view, specifying the NoteForm.cshtml view file and passing in the model. The NoteForm view is used for the add and edit functionality. The Edit action method takes in the Id of the note to be edited, loads the Note model object based on that Id , and does the same return of the partial view as the Create method. The Save method takes in the posted Note object and sends it to the manager to save. It is decorated with the HttpPost attribute to ensure that it will only be available via a POST. It returns a Json object with a property named Success that can be used by the UX to verify everything went well (we won't use that in our example). Both the add and edit actions in the UX will post to the Save action method, allowing us to reduce the amount of unique jQuery we need to write in our view. The contents of the HomeController.cs file: using System . Web . Mvc ; using Website . Models ; namespace Website . Controllers {     public class HomeController : Controller     {         public ActionResult Index ()         {             return View ();         }         [ OutputCache ( Duration = 0 )]         public ActionResult List ()         {             var manager = new NoteManager ();             var model = manager . GetAll ();             return PartialView ( model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Create ()         {             var model = new Note ();             return PartialView ( "NoteForm" , model );         }         [ OutputCache ( Duration = 0 )]         public ActionResult Edit ( int id )         {             var manager = new NoteManager ();             var model = manager . GetById ( id );             return PartialView ( "NoteForm" , model );         }         [ HttpPost ]         public JsonResult Save ( Note note )         {             var manager = new NoteManager ();             var noteId = manager . Save ( note );             return Json ( new { Success = noteId > 0 });         }     } } The view for the note form, NoteForm.cshtml , looks like so: @model Website . Models . Note @using ( Html . BeginForm ( "Save" , "Home" , FormMethod . Post , new { id = "NoteForm" })) { @Html . Hidden ( "Id" ) < label class = "Title" >     < span > Title < /span><br / >     @Html . TextBox ( "Title" ) < /label> <label class="Body">     <span>Body</ span >< br />     @Html . 
TextArea ( "Body" ) < /label> } It is a strongly typed view for our Note model class. We give the <form> element an id attribute so that we can reference it via jQuery. The <label> and <span> tags give our UX some structure that we can style with some CSS. The List.cshtml view is used to render out a <ul> element with all of our notes. @model IEnumerable < Website . Models . Note > < ul class = "NotesList" >     @foreach ( var note in Model )     {     < li >         @note . Title < br />         @note . Body < br />         < span class = "EditLink ButtonLink" noteid = "@note.Id" > Edit < /span>     </ li >     } < /ul> This view is strongly typed as well. It includes a <span> tag that we will use as an edit button. We add a custom attribute named noteid to the <span> tag that we can use in our jQuery to identify the Id of the note object we want to edit. The view, Index.cshtml , contains a bit of html block structure and all of our jQuery logic code. @ {     ViewBag . Title = "Index" ; } < h2 > Notes < /h2> <div id="NoteListBlock"></ div > < span class = "AddLink ButtonLink" > Add New Note < /span> <div id="NoteDialog" title="" class="Hidden"></ div > < script type = "text/javascript" >     $ ( function () {         $ ( "#NoteDialog" ). dialog ({             autoOpen : false , width : 400 , height : 330 , modal : true ,             buttons : {                 "Save" : function () {                     $ . post ( "/Home/Save" ,                         $ ( "#NoteForm" ). serialize (),                         function () {                             $ ( "#NoteDialog" ). dialog ( "close" );                             LoadList ();                         });                 },                 Cancel : function () { $ ( this ). dialog ( "close" ); }             }         });         $ ( ".EditLink" ). live ( "click" , function () {             var id = $ ( this ). attr ( "noteid" );             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Edit Note" )                 . load ( "/Home/Edit/" + id , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         $ ( ".AddLink" ). click ( function () {             $ ( "#NoteDialog" ). html ( "" )                 . dialog ( "option" , "title" , "Add Note" )                 . load ( "/Home/Create" , function () { $ ( "#NoteDialog" ). dialog ( "open" ); });         });         LoadList ();     });     function LoadList () {         $ ( "#NoteListBlock" ). load ( "/Home/List" );     } < /script> The <div> tag with the id attribute of "NoteListBlock" is used as a container target for the load of the partial view content of our List action method. It starts out empty and will get loaded with content via jQuery once the DOM is loaded. The <div> tag with the id attribute of "NoteDialog" is the element for our dialog widget. The jQuery UI library will use the title attribute for the text in the dialog widget top header bar. We start out with it empty here and will dynamically change the text via jQuery based on the request to either add or edit a note. This <div> tag is given a CSS class named "Hidden" that will set the display:none style on the element. Since our call to the jQuery UI method to make the element a dialog widget will occur in the jQuery document ready code block, the end user will see the <div> element rendered in their browser as the page renders and then it will hide after that jQuery call. 
Adding the display:hidden to the <div> element via CSS will ensure that it is never rendered until the user triggers the request to open the dialog. The jQuery document load block contains the setup for the dialog node, click event bindings for the edit and add links, and a call to a JavaScript function called LoadList that handles the AJAX call to the List action method. The .dialog() method is called on the "NoteDialog" <div> element and the options are set for the dialog widget. The buttons option defines 2 buttons and their click actions. The first is the "Save" button (the text in quotations is used as the text for the button) that will do an AJAX post to our Save action method and send the serialized form data from the note form (targeted with the id attribute "NoteForm"). Upon completion it will close the dialog widget and call the LoadList to update the UX without a redirect. The "Cancel" button simply closes the dialog widget. The .live() method handles binding a function to the "click" event on all elements with the CSS class named EditLink . We use the .live() method because it will catch and bind our function to elements even as the DOM changes. Since we will be constantly changing the note list as we add and edit we want to ensure that the edit links get wired up with click events. The function for the click event on the edit links gets the noteid attribute and stores it in a local variable. Then it clears out the HTML in the dialog element (to ensure a fresh start), calls the .dialog() method and sets the "title" option (this sets the title attribute value), and then calls the .load() AJAX method to hit our Edit action method and inject the returned content into the "NoteDialog" <div> element. Once the .load() method is complete it opens the dialog widget. The click event binding for the add link is similar to the edit, only we don't need to get the id value and we load the Create action method. This binding is done via the .click() method because it will only be bound on the initial load of the page. The add button will always exist. Finally, we toss in some CSS in the Content/Site.css file to style our form and the add/edit links. . ButtonLink { color : Blue ; cursor : pointer ; } . ButtonLink : hover { text - decoration : underline ; } . Hidden { display : none ; } #NoteForm label { display:block; margin-bottom:6px; } #NoteForm label > span { font-weight:bold; } #NoteForm input[type=text] { width:350px; } #NoteForm textarea { width:350px; height:80px; } With all of our code in place we can do an F5 and see our list of notes: If we click on an edit link we will get the dialog widget with the correct note data loaded: And if we click on the add new note link we will get the dialog widget with the empty form: The end result of our solution tree for our sample:

    Read the article

  • Mac OS X 10.6 assign mapped IP to Windows 7 VM in Parallels

    - by Alex
    I'm trying to assign a mapped IP address to a Windows 7 VM. I have setup running in Parallels 5 in wireless bridged networking mode. The problem I am having is that it looks like the VM is actually broadcasting the MAC address of the host machine and thus causing a clash of IP addresses on the network. This is my current setup: Macbook Pro :~ ifconfig -a lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280 stf0: flags=0<> mtu 1280 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:26:b0:df:31:b4 media: autoselect status: inactive supported media: none autoselect 10baseT/UTP <half-duplex> 10baseT/UTP <full-duplex> 10baseT/UTP <full-duplex,flow-control> 10baseT/UTP <full-duplex,hw-loopback> 100baseTX <half-duplex> 100baseTX <full-duplex> 100baseTX <full-duplex,flow-control> 100baseTX <full-duplex,hw-loopback> 1000baseT <full-duplex> 1000baseT <full-duplex,flow-control> 1000baseT <full-duplex,hw-loopback> fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078 lladdr 00:26:b0:ff:fe:df:31:b4 media: autoselect <full-duplex> status: inactive supported media: autoselect <full-duplex> en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet6 fe80::226:bbff:fe0a:59a1%en1 prefixlen 64 scopeid 0x6 inet 192.168.1.97 netmask 0xffffff00 broadcast 192.168.1.255 ether 00:26:bb:0a:59:a1 media: autoselect status: active supported media: autoselect vnic0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet 192.168.1.81 netmask 0xffffff00 broadcast 192.168.1.255 ether 00:1c:42:00:00:08 media: autoselect status: active supported media: autoselect vnic1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500 inet 10.37.129.2 netmask 0xffffff00 broadcast 10.37.129.255 ether 00:1c:42:00:00:09 media: autoselect status: active supported media: autoselect Windows 7: :~ ipconfig -all Windows IP Configuration Host Name . . . . . . . . . . . . : Alex-PC Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Parallels Ethernet Adapter Physical Address. . . . . . . . . : 00-1C-42-B8-E7-B4 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Tunnel adapter Teredo Tunneling Pseudo-Interface: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Teredo Tunneling Adapter Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Tunnel adapter isatap.{ACAC7EBB-5E5F-4F53-AFD9-E6EAEEA0FEE2}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft ISATAP Adapter #3 Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Billion Bipac 7200 modem router: In DHCP server settings have the following two mapping entries. 
alex-macbook-win7   00:1c:42:00:00:08   192.168.1.98
alex-macbook        00:26:bb:0a:59:a1   192.168.1.97
The problem I have is that when the VM starts up it gets assigned the 192.168.1.97 address instead of the .98 address, and networking on the host then stops working because it reports an IP address clash. I have tried removing the mapping for "alex-macbook", which results in the guest machine being assigned a normal DHCP address and NOT the one that is in the mapping of the router.

    Read the article

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a windows 2003 server I can nslookup www.google.com which returns Server: localhost Address: 127.0.0.1 Non-authoritative answer: Name: www.l.google.com Addresses: 74.125.79.104, 74.125.79.147, 74.125.79.99 Aliases: www.google.com I can then ping 74.125.79.104: Pinging 74.125.79.104 with 32 bytes of data: Reply from 74.125.79.104: bytes=32 time=16ms TTL=54 Reply from 74.125.79.104: bytes=32 time=32ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Reply from 74.125.79.104: bytes=32 time=15ms TTL=54 Ping statistics for 74.125.79.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 15ms, Maximum = 32ms, Average = 19ms But I cannot ping www.google.com: Ping request could not find host www.google.com. Please check the name and try again. (this one is different from the other question in that this one has a TLD, it is not a local domain.) Update: I am running a dns server at localhost (127.0.0.1). Even when I change it to use for example opendns, it still can nslookup hostname and ping ip address, but not ping hostname. So what is wrong? Update 2: here is the ipconfig /all result: Windows IP Configuration Host Name . . . . . . . . . . . . : SERVER Primary Dns Suffix . . . . . . . : NETWORK.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : NETWORK.local Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.7.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.7.1 DNS Servers . . . . . . . . . . . : 127.0.0.1 Update 3: Thanks everyone for their help and suggestions. I appreciate that. Ipconfig /flushdns returns: Sucessfully flushed the DNS resolver cache Ipconfig /displaydns returns: 2.7.168.192.in-addr.arpa ---------------------------------------- Record Name . . . . . : 2.7.168.192.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : webserver.mydomainname.com 1.0.0.127.in-addr.arpa ---------------------------------------- Record Name . . . . . : 1.0.0.127.in-addr.arpa. Record Type . . . . . : 12 Time To Live . . . . : 0 Data Length . . . . . : 4 Section . . . . . . . : Answer PTR Record . . . . . : localhost Update 4: Wireshark shows the following: 3 11.540542 208.67.220.220 192.168.7.2 DNS Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147 6 42.056794 192.168.7.2 192.168.7.255 NBNS Name query NB WWW.GOOGLE.COM<00> which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address
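One more way to narrow this down (not from the original thread): nslookup queries the configured DNS server directly, while ping goes through the Windows resolver (hosts file, DNS Client service, then NetBIOS, depending on the node type). The short Python sketch below exercises the same resolver path that ping uses, so if it fails while nslookup still works, the fault is in the local resolver configuration rather than in the DNS server:

import socket

def resolve(hostname):
    # getaddrinfo goes through the operating system's resolver, as ping does.
    try:
        results = socket.getaddrinfo(hostname, None, socket.AF_INET)
        addresses = sorted({entry[4][0] for entry in results})
        print(hostname, "->", ", ".join(addresses))
    except socket.gaierror as exc:
        print(hostname, ": resolver error:", exc)

resolve("www.google.com")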

    Read the article

  • Linux (NAS) Permissions problem (Permission Denied)

    - by calumbrodie
    This is probably easier to show than to explain... -bash-3.2$ id uid=501(admin) gid=503(admin) groups=100(users),501(admins),503(admin) -bash-3.2$ groups admin users admins -bash-3.2$ ls -l total 8 drwxrwxrwx 78 admin www 4096 Dec 9 09:02 Inbox drwxrwxrwx 21 admin www 4096 Dec 8 21:45 Movies drwxrwx--- 3 admin www 52 Dec 9 07:57 TV -bash-3.2$ cd Movies -bash-3.2$ ls -l total 20 drwxrwx--- 7 admin www 4096 Dec 8 00:04 Action drwxrwx--- 6 admin www 4096 Dec 8 00:05 Animation drwxrwx--- 4 admin www 4096 Dec 8 00:17 Comedy drwxrwx--- 4 admin www 4096 Dec 8 00:14 Drama drwxrwx--- 4 admin www 4096 Dec 8 00:14 Family drwxrwx--- 6 admin www 58 Dec 6 19:10 Foreign Language drwxrwx--- 2 admin www 31 Dec 7 23:58 Horror drwxrwx--- 3 admin www 50 Dec 8 00:15 Science Fiction drwxrwx--- 2 admin www 6 Dec 8 00:16 Thriller -bash-3.2$ cd ../Inbox -bash: cd: ../Inbox: Permission denied Filesystem is XFS. Are there permissions on the directories that ls -l wouldn't show? I'm the owner of all directories and files inside them. I can sudo to modify the file permissions or view the contents of the folders but I need them to be accessible by 'admin'. Any ideas? I'll be checking the question regularly so let me know if I need to update this with more information. Thanks Edit : Added strace execve("/bin/ls", ["ls", "Inbox"], [/* 21 vars */]) = 0 brk(0) = 0x26000 uname({sys="Linux", node="axentraserver.the-brodie-stora.mystora.com", ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001c000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=17972, ...}) = 0 mmap2(NULL, 17972, PROT_READ, MAP_PRIVATE, 3, 0) = 0x4001d000 close(3) = 0 open("/lib/librt.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0P\25\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=39776, ...}) = 0 mmap2(NULL, 57816, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40025000 mprotect(0x4002b000, 28672, PROT_NONE) = 0 mmap2(0x40032000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5) = 0x40032000 close(3) = 0 open("/lib/libacl.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\0\24\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=134375, ...}) = 0 mmap2(NULL, 54368, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40034000 mprotect(0x4003a000, 28672, PROT_NONE) = 0 mmap2(0x40041000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5) = 0x40041000 close(3) = 0 open("/lib/libselinux.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\2147\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=297439, ...}) = 0 mmap2(NULL, 117504, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40042000 mprotect(0x40056000, 28672, PROT_NONE) = 0 mmap2(0x4005d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13) = 0x4005d000 close(3) = 0 open("/lib/libgcc_s.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\10\"\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=43164, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40022000 mmap2(NULL, 74572, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x4005f000 mprotect(0x4006a000, 28672, PROT_NONE) = 0 mmap2(0x40071000, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xa) = 0x40071000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0XI\1\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1517948, ...}) = 0 mmap2(NULL, 1245628, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40072000 mprotect(0x40195000, 32768, PROT_NONE) = 0 mmap2(0x4019d000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x123) = 0x4019d000 mmap2(0x401a0000, 8636, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x401a0000 close(3) = 0 open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\230A\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=121044, ...}) = 0 mmap2(NULL, 115184, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401a3000 mprotect(0x401b5000, 28672, PROT_NONE) = 0 mmap2(0x401bc000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x11) = 0x401bc000 mmap2(0x401be000, 4592, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x401be000 close(3) = 0 open("/lib/libattr.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\364\f\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=40571, ...}) = 0 mmap2(NULL, 45512, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401c0000 mprotect(0x401c3000, 32768, PROT_NONE) = 0 mmap2(0x401cb000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3) = 0x401cb000 close(3) = 0 open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\254\10\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=15344, ...}) = 0 mmap2(NULL, 41116, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401cc000 mprotect(0x401ce000, 28672, PROT_NONE) = 0 mmap2(0x401d5000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0x401d5000 close(3) = 0 open("/lib/libsepol.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\330/\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=228044, ...}) = 0 mmap2(NULL, 301748, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x401d7000 mprotect(0x4020f000, 28672, PROT_NONE) = 0 mmap2(0x40216000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x37) = 0x40216000 mmap2(0x40217000, 39604, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x40217000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40221000 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40222000 set_tls(0x40221d00, 0x40221d00, 0x40024000, 0x402223e8, 0x41) = 0 mprotect(0x401d5000, 4096, PROT_READ) = 0 mprotect(0x401bc000, 4096, PROT_READ) = 0 mprotect(0x4019d000, 8192, PROT_READ) = 0 mprotect(0x4005d000, 4096, PROT_READ) = 0 mprotect(0x40032000, 4096, PROT_READ) = 0 mprotect(0x40023000, 4096, PROT_READ) = 0 munmap(0x4001d000, 17972) = 0 set_tid_address(0x402218a8) = 9539 set_robust_list(0x402218b0, 0xc) = 0 rt_sigaction(SIGRTMIN, {0x401a6d90, [], SA_SIGINFO|0x4000000}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x401a6c64, [], SA_RESTART|SA_SIGINFO|0x4000000}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM_INFINITY}) = 0 brk(0) = 0x26000 brk(0x47000) = 0x47000 open("/proc/mounts", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, 
{st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "rootfs / rootfs rw 0 0\nubi0:root"..., 1024) = 1024 read(3, "fs.xino,noplink,create=mfs,sum,b"..., 1024) = 428 read(3, "", 1024) = 0 close(3) = 0 munmap(0x4001d000, 4096) = 0 access("/etc/selinux/", F_OK) = 0 open("/etc/selinux/config", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(1, TIOCGWINSZ, {ws_row=52, ws_col=153, ws_xpixel=918, ws_ypixel=728}) = 0 stat64("Inbox", {st_mode=S_IFDIR|0777, st_size=4096, ...}) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1696, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1696 read(3, "", 4096) = 0 close(3) = 0 munmap(0x4001d000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=17972, ...}) = 0 mmap2(NULL, 17972, PROT_READ, MAP_PRIVATE, 3, 0) = 0x4001d000 close(3) = 0 open("/lib/libnss_files.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0\304\27\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=49256, ...}) = 0 mmap2(NULL, 70316, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x40223000 mprotect(0x4022c000, 28672, PROT_NONE) = 0 mmap2(0x40233000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 0x40233000 close(3) = 0 mprotect(0x40233000, 4096, PROT_READ) = 0 munmap(0x4001d000, 17972) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=1661, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 1661 close(3) = 0 munmap(0x4001d000, 4096) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/group", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=700, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4001d000 read(3, "root:x:0:root\nbin:x:1:root,bin,d"..., 4096) = 700 close(3) = 0 munmap(0x4001d000, 4096) = 0 open("Inbox", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = -1 EACCES (Permission denied) write(2, "ls: ", 4ls: ) = 4 write(2, "Inbox", 5Inbox) = 5 write(2, ": Permission denied", 19: Permission denied) = 19 write(2, "\n", 1 ) = 1 close(1) = 0 
exit_group(2) = ?

2nd edit: Elaboration for Mike. The directories sit at the following locations:
/home/admin/MyLibrary/MyVideos/Inbox
/home/admin/MyLibrary/MyVideos/Movies
The system is a Netgear Stora NAS box that I have root access to. The /home/ folder is mounted as an SMB share on various computers around the house. The Inbox folder cannot be opened on any of those machines (they all connect as 'admin'). When I ssh into the box using the 'admin' credentials I am also unable to access the folder. The folder was created via a web admin page hosted on the NAS. The user/group for the Inbox folder was previously apache:www (expected, as the folder was created by the web application), but I chmodded/chowned the folder as the root user in an attempt to grant the admin user (and therefore the rest of the connected machines) access to the files. Sorry for not including this earlier; I wasn't sure if it was relevant and didn't want to confuse the situation. -Thanks

3rd edit: Sorry again - it looks like this NAS is running some custom version of Red Hat, not Debian as previously stated. I'm not sure if this makes a difference.
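One thing ls -l alone will not show is a POSIX ACL or an extended attribute overriding the mode bits, and the /proc/mounts data read in the strace above suggests /home is served through a layered (aufs-style) mount whose backing branch may carry different permissions. A minimal diagnostic sketch, assuming the acl/attr userspace tools are present on the NAS firmware (they may not be):

# Look for ACL entries that override the rwxrwxrwx reported by ls -l
getfacl /home/admin/MyLibrary/MyVideos/Inbox
# Look for extended attributes and the immutable flag
getfattr -d -m - /home/admin/MyLibrary/MyVideos/Inbox
lsattr -d /home/admin/MyLibrary/MyVideos/Inbox
# Check which mount/branch actually backs the path
grep /home /proc/mounts

If getfacl reports extra user: or group: entries, a plain chmod will not remove them; setfacl -b <dir> (run as root) strips the ACL so the ordinary mode bits apply again.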

    Read the article

  • How to remove the hint in the terminal?

    - by jiangchengwu
    As a normal user , when I run some command like ps\netstat, the terminal hint me: (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) I know could redirect STDERR to /dev/null can remove this hint. But I want to know is there any way to remove it , such as edit some configuration files ? [deploy@storage2 ~]$ ps -V (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) procps version 3.2.7 [deploy@storage2 ~]$ ps -V 2>/dev/null procps version 3.2.7 My OS info: [deploy@storage2 ~]$ uname -a Linux storage2 2.6.18-243.el5 #1 SMP Mon Feb 7 18:47:27 EST 2011 x86_64 x86_64 x86_64 GNU/Linux [deploy@storage2 ~]$ lsb_release LSB Version: :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch [deploy@storage2 ~]$ netstat -V (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) net-tools 1.60 netstat 1.42 (2001-04-15) Fred Baumgarten, Alan Cox, Bernd Eckenfels, Phil Blundell, Tuan Hoang and others +NEW_ADDRT +RTF_IRTT +RTF_REJECT +FW_MASQUERADE +I18N AF: (inet) +UNIX +INET +INET6 +IPX +AX25 +NETROM +X25 +ATALK +ECONET +ROSE HW: +ETHER +ARC +SLIP +PPP +TUNNEL +TR +AX25 +NETROM +X25 +FR +ROSE +ASH +SIT +FDDI +HIPPI +HDLC/LAPB There are more info from strace: [deploy@storage2 ~]$ strace ps -V execve("/bin/ps", ["ps", "-V"], [/* 27 vars */]) = 0 brk(0) = 0x929a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnsl.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \241\210\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=101404, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fdd000 mmap2(0x887000, 92104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x887000 mmap2(0x89a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12) = 0x89a000 mmap2(0x89c000, 6088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x89c000 close(3) = 0 open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0Pzt\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=16428, ...}) = 0 mmap2(0x747000, 12408, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x747000 mmap2(0x749000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0x749000 close(3) = 0 open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\20\204p\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=208352, ...}) = 0 mmap2(0x705000, 155760, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x705000 mmap2(0x72a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24) = 0x72a000 close(3) = 0 open("/lib/libcrypt.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340\246q\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=45288, ...}) = 0 mmap2(0x71a000, 201020, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fab000 mmap2(0xf7fb4000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 
0xfffffffff7fb4000 mmap2(0xf7fb6000, 155964, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fb6000 close(3) = 0 open("/lib/libutil.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \n\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=13420, ...}) = 0 mmap2(NULL, 12428, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fa7000 mmap2(0xf7fa9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xfffffffff7fa9000 close(3) = 0 open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0@(s\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=129716, ...}) = 0 mmap2(0x72e000, 90596, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x72e000 mmap2(0x741000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13) = 0x741000 mmap2(0x743000, 4580, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x743000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340?]\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1611564, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa6000 mmap2(0x5be000, 1328580, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x5be000 mmap2(0x6fd000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13f) = 0x6fd000 mmap2(0x700000, 9668, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x700000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa5000 set_thread_area(0xffd61bb4) = 0 mprotect(0x6fd000, 8192, PROT_READ) = 0 mprotect(0x741000, 4096, PROT_READ) = 0 mprotect(0xf7fa9000, 4096, PROT_READ) = 0 mprotect(0xf7fb4000, 4096, PROT_READ) = 0 mprotect(0x72a000, 4096, PROT_READ) = 0 mprotect(0x749000, 4096, PROT_READ) = 0 mprotect(0x89a000, 4096, PROT_READ) = 0 mprotect(0x5ba000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 set_tid_address(0xf7fa5708) = 20214 set_robust_list(0xf7fa5710, 0xc) = 0 futex(0xffd61f74, FUTEX_WAKE_PRIVATE, 1) = 0 rt_sigaction(SIGRTMIN, {0x4007323d0, [], 0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x10000004007322e0, [], 0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=-4284481536, rlim_max=67108864*1024}) = 0 uname({sys="Linux", node="storage2", ...}) = 0 readlink("/proc/self/exe", "/bin/ps"..., 260) = 7 brk(0) = 0x929a000 brk(0x92bb000) = 0x92bb000 open("/bin/ps", O_RDONLY|O_LARGEFILE) = 3 _llseek(3, -12, [711660], SEEK_END) = 0 read(3, "\274U!\253\2\0\0\0\224\237\t\0", 12) = 12 mmap2(NULL, 634880, PROT_READ, MAP_SHARED, 3, 0x13) = 0xfffffffff7f0a000 mmap2(NULL, 630784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7e70000 close(3) = 0 futex(0x74a06c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 geteuid32() = 501 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1696, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1696 read(3, "", 4096) = 0 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnss_files.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\30\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=46680, ...}) = 0 mmap2(NULL, 41616, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7e65000 mmap2(0xf7e6e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 0xfffffffff7e6e000 close(3) = 0 mprotect(0xf7e6e000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=2166, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 2166 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 mkdir("/tmp/pdk-deploy/", 0755) = -1 EEXIST (File exists) mkdir("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a", 0755) = -1 EEXIST (File exists) open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY|O_LARGEFILE) = 3 close(3) = 0 open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300!\2\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0664, st_size=1264090, ...}) = 0 mmap2(NULL, 1140104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7d4e000 mmap2(0xf7e5a000, 45056, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10b) = 0xfffffffff7e5a000 close(3) = 0 rt_sigaction(SIGFPE, {0x1000000000000001, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|SA_SIGINFO|0x3d61cb8, (nil)}, {SIG_DFL, ~[HUP INT ILL ABRT BUS SEGV USR2 PIPE ALRM TERM STOP TSTP TTIN TTOU XCPU WINCH IO PWR SYS RTMIN RT_1 RT_2 RT_4 RT_5 RT_8 RT_9 RT_11 RT_12 RT_13 RT_16 RT_17 RT_18 RT_22 RT_24 RT_25 RT_26 RT_27 RT_28 RT_29 RT_30 RT_31], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 getuid32() = 501 geteuid32() = 501 getgid32() = 502 getegid32() = 502 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=56454896, ...}) = 0 mmap2(NULL, 2097152, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7b4e000 mmap2(NULL, 241664, PROT_READ, MAP_PRIVATE, 3, 0x13ec) = 0xfffffffff7b13000 mmap2(NULL, 4096, PROT_READ, MAP_PRIVATE, 3, 0x1466) = 0xfffffffff7b12000 close(3) = 0 mmap2(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7af1000 time(NULL) = 1348210009 readlink("/proc/self/exe", "/bin/ps"..., 4095) = 7 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 _llseek(0, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(1, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(2, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(2, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) open("/dev/null", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61978) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fcntl64(3, 
F_SETFD, FD_CLOEXEC) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 brk(0x92dc000) = 0x92dc000 getppid() = 20212 stat64("/opt/ActivePerl-5.8/site/lib/sitecustomize.pl", 0xffd61560) = -1 ENOENT (No such file or directory) close(3) = 0 open("/usr/lib/.khostd/.hostconf", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61828) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=334, ...}) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 read(3, "bindport=9001\ntrustip=221.122.57"..., 4096) = 334 read(3, "", 4096) = 0 close(3) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20215 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 read(3, (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) "tcp 0 0 0.0.0.0:9001"..., 4096) = 109 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 waitpid(20215, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20215 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/usr/lib/.khostd") = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20218 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 
RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20218, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20218 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/home/deploy") = 0 stat64("/etc/cron.hourly/hichina", {st_mode=S_IFREG|0755, st_size=711660, ...}) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20230 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "procps version 3.2.7\n", 4096) = 21 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20230, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20230 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 write(1, "procps version 3.2.7\n", 21procps version 3.2.7 ) = 21 munmap(0xf7af1000, 135168) = 0 munmap(0xf7e70000, 630784) = 0 munmap(0xf7f0a000, 634880) = 0 munmap(0xf7d4e000, 1140104) = 0 exit_group(0) = ? [ Process PID=20214 runs in 32 bit mode. ] Thank you very much.
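If running these commands as root is not an option, one way to hide the notice without typing 2>/dev/null each time is a pair of shell wrapper functions that filter only that message out of stderr; this is a sketch for bash (for example in ~/.bashrc), assuming the notice is printed on a single line as in the output above:

# Drop only the "Not all processes could be identified" notice, keep other errors
ps() { command ps "$@" 2> >(grep -v 'Not all processes could be identified' >&2); }
netstat() { command netstat "$@" 2> >(grep -v 'Not all processes could be identified' >&2); }

As a hedged side note, a stock procps ps does not normally print this netstat-style warning at all, and the strace above shows /bin/ps reading /usr/lib/.khostd/.hostconf and spawning other programs, so it may be worth verifying the binary (for example with rpm -V procps) before silencing its output.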

    Read the article

  • Something very strange with my network

    - by Rodnower
    Hello, I have Windows 7 and I have very strange thing with my network. Some time I was connected through wireless router and my IP was 192.168.2.103, router's IP was 192.168.2.1 and some other IP was 192.168.2.100. The last I get from page "active DHCP clients" of web interface of the router and from "wireless clients" I may to see that 192.168.2.100 not (!) belong to my MAC address. Router build by EDimax. So after that I disabled wireless function of the router and restarted it. In this time I had not ping to 192.168.2.1. Also I had not any other connection, not wireless nor cable, but (!) I still had ping to 192.168.2.100 and I not understand what this voodoo is... C:\Users\Andrey>ping 192.168.2.100 Pinging 192.168.2.100 with 32 bytes of data: Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Ping statistics for 192.168.2.100: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms This is what I had: C:\Users\Andrey>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : Andrey-PC Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Wireless LAN adapter Wireless Network Connection 3: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual WiFi Miniport Adapter #2 Physical Address. . . . . . . . . : 06-1D-7D-40-61-EB DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Wireless LAN adapter Wireless Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Gigabyte GN-WS50G (mini) PCI-E WLAN Card Physical Address. . . . . . . . . : 00-1D-7D-40-61-EB DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller Physical Address. . . . . . . . . : 00-1B-24-B6-09-91 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . 
: Yes C:\Users\Andrey>arp -a -v Interface: 127.0.0.1 --- 0x1 Internet Address Physical Address Type 224.0.0.22 static 239.255.255.250 static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 192.168.2.1 00-0e-2e-d2-8c-af invalid 192.168.2.255 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 239.255.255.250 01-00-5e-7f-ff-fa static 255.255.255.255 ff-ff-ff-ff-ff-ff static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 192.168.2.1 00-0e-2e-ff-f1-f6 dynamic 192.168.2.101 00-27-19-bc-8b-9c dynamic 192.168.2.102 00-16-e6-6c-ae-d4 dynamic 192.168.2.255 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 239.255.255.250 01-00-5e-7f-ff-fa static 255.255.255.255 ff-ff-ff-ff-ff-ff static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 224.0.0.22 01-00-5e-00-00-16 static 255.255.255.255 ff-ff-ff-ff-ff-ff static C:\Users\Andrey>route print =========================================================================== Interface List 14...06 1d 7d 40 61 eb ......Microsoft Virtual WiFi Miniport Adapter #2 13...00 1d 7d 40 61 eb ......Gigabyte GN-WS50G (mini) PCI-E WLAN Card 11...00 1b 24 b6 09 91 ......Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller 1...........................Software Loopback Interface 1 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None Only after reboot I lost ping to there: C:\Users\Andrey>ping 192.168.2.100 Pinging 192.168.2.100 with 32 bytes of data: PING: transmit failed. General failure. PING: transmit failed. General failure. PING: transmit failed. General failure. PING: transmit failed. General failure. Ping statistics for 192.168.2.100: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), So what this mysterious cache is? Thank you for ahead.
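A TTL of 128 on those replies is what a local Windows stack typically returns, so it is worth checking whether 192.168.2.100 ended up bound to one of the machine's own adapters (the Microsoft Virtual WiFi Miniport in the ipconfig output is a likely candidate) rather than living only in a cache. A few diagnostic commands from an elevated prompt, all standard Windows 7 tools:

rem Is 192.168.2.100 assigned to one of this machine's own interfaces?
netsh interface ipv4 show addresses
rem Is the hosted-network / Virtual WiFi adapter active?
netsh wlan show hostednetwork
rem Inspect, then flush, the ARP/neighbor cache and ping 192.168.2.100 again
arp -a
netsh interface ipv4 delete neighbors
ping 192.168.2.100

Given that the replies only stopped after a reboot, a locally bound virtual adapter or a stale neighbor entry is a more plausible explanation than anything on the router.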

    Read the article

  • Using "route add" to tell my computer to use a direct Ethernet connection instead of Wi-Fi?

    - by TheSamFrom1984
2 PCs are involved. Both are connected to the internet via Wi-Fi on the same router. I can ping to/from each other and share folders flawlessly, but I'd like to set up a direct Ethernet link between them to speed up file transfers AND keep the Wi-Fi connections (no gateway). So I plugged in my RJ45 cable and set up the connection. It works, but the PCs only use this connection if one of them is disconnected from the Wi-Fi. PC1's local address is 192.168.0.7 on its Ethernet interface and 192.168.1.21 on the Wi-Fi one. PC2's local address is 192.168.0.6 on its Ethernet interface and 192.168.1.22 on the Wi-Fi one. My question is: I'd like to use the route add command to tell PC1 to use the Ethernet interface when it needs to connect to PC2, by specifying "IF 2" at the end of the route add command. How can I do this? I don't know what to put in the "gateway" parameter of the command, and everything I tried returns "the parameter is incorrect" (I don't know which one). ipconfig /all on PC1:

Windows IP Configuration
Host Name . . . . . . . . . . . . : Sam-PC
Primary Dns Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No

Wireless LAN adapter Wireless Network Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : NETGEAR WG111v3 54Mbps Wireless USB 2.0 Adapter
Physical Address. . . . . . . . . : 00-22-3F-DA-51-56
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::1d33:60b:476c:d396%12(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.1.21(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : vendredi 27 novembre 2009 15:38:48
Lease Expires . . . . . . . . . . : dimanche 29 novembre 2009 07:33:04
Default Gateway . . . . . . . . . : 192.168.1.1
DHCP Server . . . . . . . . . . . : 192.168.1.1
DHCPv6 IAID . . . . . . . . . . . : 301998655
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-7E-58-EA-00-1A-4D-59-B2-71
DNS Servers . . . . . . . . . . . : 192.168.1.1
NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
Physical Address. . . . . . . . . : 00-1A-4D-59-B2-71
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::f598:c3a0:df8d:706e%11(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.0.7(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 234887757
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-7E-58-EA-00-1A-4D-59-B2-71
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Enabled
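For a directly connected peer there is no real gateway, but Windows' route add still expects one; the usual trick is to point a host route at an address that is on-link on the interface you want to force. Traffic to 192.168.0.6 should already use the cable (it is covered by the on-link 192.168.0.0/24 route), so what normally needs forcing is PC2's Wi-Fi address, which is what name resolution tends to return for \\PC2. A hedged sketch, run from an elevated prompt on PC1; the interface index 11 is inferred from the %11 zone on the Ethernet adapter's link-local address above, so confirm it with route print first:

rem Send traffic for PC2's Wi-Fi address over the direct Ethernet link instead
rem 192.168.1.22 = PC2 as seen over Wi-Fi, 192.168.0.6 = PC2 on the cable
rem add -p after "route" to make the entry survive reboots
route add 192.168.1.22 mask 255.255.255.255 192.168.0.6 metric 1 if 11

Alternatively, skip routing entirely and address PC2 by its Ethernet IP (for example, map the share as \\192.168.0.6\share); "The parameter is incorrect" often points at a missing gateway value or an interface given as something other than its decimal index.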

    Read the article

  • Unstable DNS with bind

    - by yasser abd
    we have a Centos machine called jupiter, on which I have installed bind9, On every other machine the DNS is set to be the IP address of jupiter (192.168.2.101), as you can see in the output of the following command in windows >ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : mypcs Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme 57xx Gigabit Controller Physical Address. . . . . . . . . : 00-1A-A0-AC-E4-CC DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::c16d:3ae4:5907:30c4%8(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.2.98(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : Thursday, September 20, 2012 10:26:11 AM Lease Expires . . . . . . . . . . : Sunday, September 23, 2012 10:26:10 AM Default Gateway . . . . . . . . . : 192.168.2.1 DHCP Server . . . . . . . . . . . : 192.168.2.1 DHCPv6 IAID . . . . . . . . . . . : 201333408 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-16-3A-50-01-00-1A-A0-AC-E4-CC DNS Servers . . . . . . . . . . . : 192.168.2.101 192.168.2.1 192.168.2.1 NetBIOS over Tcpip. . . . . . . . : Enabled All machines can always nslookup one of the domain (mydomain.com) that is set in the jupiter's DNS server, you can see that in the output of nslookup on the same windows machine: >nslookup mydomain.com Server: UnKnown Address: 192.168.2.101 Name: mydomain.com Address: 192.168.2.100 The problem is, sometimes mydomain.com can not be pinged, here is the output of the ping on the same windows machine >ping mydomain.com Ping request could not find host mydomain.com. Please check the name and try again. This looks very random, and happens once in a while, so the machine can lookup the DNS records but can't ping it, nor can browse the website that is hosted on mydomain.com, which should resolve to 192.168.2.100 On a linux machine that has the same DNS settings, the output of dig command for mydomain is as follows: $ dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36090 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1 ;; QUESTION SECTION: ;mydomain.com. IN A ;; ANSWER SECTION: mydomain.com. 86400 IN A 192.168.2.100 ;; AUTHORITY SECTION: mydomain.com. 86400 IN NS jupiter. ;; ADDITIONAL SECTION: jupiter. 86400 IN A 192.168.2.101 ;; Query time: 1 msec ;; SERVER: 192.168.2.101#53(192.168.2.101) ;; WHEN: Thu Sep 20 16:32:14 2012 ;; MSG SIZE rcvd: 83 We've never had the same problem on MACs, they always resolve mydomain.com Here is how I have defined mydomain.com on Bind9's configs on Jupiter, notice that the name of the machine on 192.168.2.100 is venus, so I have this file: /var/named/named.venus: $TTL 1D @ IN SOA jupiter. admin.ourcompany.com. ( 2003052800 ; serial 86400 ; refresh 300 ; retry 604800 ; expire 3600 ; minimum ) @ IN NS jupiter. 
@ IN A 192.168.2.100
* IN A 192.168.2.100

/var/named/zones/named.venus.zone:
zone "mydomain.com" IN {
    type master;
    file "/var/named/named.venus";
    allow-update { none; };
};

One thing to note is that I haven't defined reverse DNS lookups; only the forward DNS lookups are defined in the Bind9 configs. I'm not sure if that's relevant or not. So my question is: why is this so unstable? What could be the cause?
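nslookup here is querying 192.168.2.101 directly (as its own output shows), but the Windows resolver also has 192.168.2.1 (the router) in its server list; once a query to jupiter times out or the resolver rotates servers, the router answers NXDOMAIN for mydomain.com and that negative answer can be cached, which fits the "resolves in nslookup but won't ping or browse" symptom. A common arrangement is to hand out only 192.168.2.101 to the clients and let bind forward everything it is not authoritative for; a hedged sketch of the options block in /etc/named.conf on jupiter (the addresses are examples, adjust to your network):

options {
    listen-on port 53 { 127.0.0.1; 192.168.2.101; };
    allow-query     { localhost; 192.168.2.0/24; };
    recursion yes;
    // Forward everything except mydomain.com upstream, so the clients
    // no longer need the router as a second DNS server.
    forwarders      { 192.168.2.1; };
    forward only;
};

On the clients (or in the router's DHCP scope), list only 192.168.2.101 as the DNS server, then run ipconfig /flushdns on the Windows machines to clear any cached negative answers.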

    Read the article
