Search Results

Search found 18363 results on 735 pages for 'external ip'.


  • GPIB connection to external device using MATLAB

    - by hkf
    Is there a way to establish a GPIB connection from MATLAB without the Instrument Control Toolbox? (I don't have it.) Also, is there a way for MATLAB to find out the external device's RS-232 parameter values (baud rate, stop bits, etc.)? For the RS-232 connection I have the following code:

        % This function is meant to send commands to Potentiostat Model 263A.
        % A run includes turning the cell on, reading current for time t1,
        % turning the cell off, and waiting for time t2.
        % t1 is the duration [secs] for which the Potentiostat must run (cell is on)
        % t2 is the duration [secs] to wait after the cell is turned off
        % n is the number of runs
        % port is the serial port name, such as COM1
        function [s] = Potentiostat_control(t1, t2, n)
        port = input('type port name such as COM1', 's')
        s = serial(port);
        set(s, 'BaudRate', 9600, 'DataBits', 8, 'Parity', 'even', ...
            'StopBits', 2, 'Terminator', 'CR/LF');
        fopen(s)
        %fprintf(s,'RS232?')
        disp(['Total runs requested = ' num2str(n)])
        disp('i denotes number of runs executed so far..');
        for i = 1:n
            i
            %data1 = query(s, '*IDN?')
            fprintf(s, '%s', 'CELL 1');   % sends the command 'CELL 1'
            %fprintf(s, '%s', 'READI');
            pause(t1);
            fprintf(s, '%s', 'CELL 0');
            %fprintf(s, '%s', 'CLEAR');
            pause(t2);
        end
        fclose(s)

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there! I came up with this question while playing with the Firefox web cache: how does the browser cache a response within limited disk space (in my configuration, 50MB is the upper bound)? I can think of two approaches. One is to cache each response object whole, one by one, but this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The second is to treat the whole space (50MB) as one contiguous file split into fixed-length slots; incoming response objects are likewise treated as blocks of data of the same length as the slots. We fill slots until the file runs out, then some replacement algorithm can be used to swap out old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files sit in that directory, all filled with non-displayable characters. I don't know, but really want to know, what mechanism a commercial browser such as Firefox employs to implement its web cache. Regards.
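
    Below is a minimal sketch of the second, fixed-slot design described above. Python is used purely for illustration, and the block size, the 50MB cap and the in-memory index are assumptions about one possible implementation, not how Firefox actually stores its cache:

        import io

        BLOCK_SIZE = 4096                      # assumed slot size
        CACHE_BYTES = 50 * 1024 * 1024         # assumed 50 MB cap
        NUM_BLOCKS = CACHE_BYTES // BLOCK_SIZE

        free_blocks = list(range(NUM_BLOCKS))  # simple free list
        index = {}                             # url -> (slot numbers, object size)

        def store(cache_file, url, data):
            # Each object occupies whole slots; waste (internal fragmentation)
            # only appears in its final, partially filled slot.
            needed = -(-len(data) // BLOCK_SIZE)            # ceiling division
            if needed > len(free_blocks):
                raise MemoryError("cache full; a replacement policy would evict here")
            slots = [free_blocks.pop() for _ in range(needed)]
            for i, slot in enumerate(slots):
                cache_file.seek(slot * BLOCK_SIZE)
                cache_file.write(data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE])
            index[url] = (slots, len(data))

        def load(cache_file, url):
            slots, size = index[url]
            chunks = []
            for slot in slots:
                cache_file.seek(slot * BLOCK_SIZE)
                chunks.append(cache_file.read(BLOCK_SIZE))
            return b"".join(chunks)[:size]

        cache = io.BytesIO()                   # stand-in for the preallocated cache file
        store(cache, "http://example.com/", b"<html>hello</html>" * 300)
        print(len(load(cache, "http://example.com/")))   # 5400

    The free list plus per-object slot list keeps eviction cheap; the price is the wasted tail of each object's last slot, bounded by one BLOCK_SIZE per object.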

    Read the article

  • error LNK2001: unresolved external symbol

    - by numerical25
    I am receiving this error:

        GXRenderManager.obj : error LNK2001: unresolved external symbol "private: static class GXRenderer * GXRenderManager::renderDevice" (?renderDevice@GXRenderManager@@0PAVGXRenderer@@A)

    The following is my code...

    GXDX.h

        class GXDX : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXGL.h

        class GXGL : public GXRenderer {
        public:
            void Render();
            void StartUp();
        };

    GXRenderer.h

        class GXRenderer {
        public:
            virtual void Render() = 0;
            virtual void StartUp() = 0;
        };

    GXRenderManager.h

        #ifndef GXRM
        #define GXRM
        #include <windows.h>
        #include "GXRenderer.h"
        #include "GXDX.h"
        #include "GXGL.h"

        enum GXDEVICE { DIRECTX, OPENGL };

        class GXRenderManager {
        public:
            static int Ignite(GXDEVICE);
        private:
            static GXRenderer *renderDevice;
        };
        #endif

    GXRenderManager.cpp

        #include "GXRenderManager.h"

        int GXRenderManager::Ignite(GXDEVICE DeviceType) {
            switch (DeviceType) {
            case DIRECTX:
                GXRenderManager::renderDevice = new GXDX;
                return 1;
                break;
            case OPENGL:
                GXRenderManager::renderDevice = new GXGL;
                return 1;
                break;
            default:
                return 0;
            }
        }

    main.cpp

        #include "GXRenderManager.h"

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
            return 0;
        }

    I am not trying to get it to do anything yet; I am just trying to compile with no errors. I am new to all this, so if anyone can give me a hand, that would be great. Thanks.

    Read the article

  • Bind WCF webservice to specific network interface / IP

    - by Markus
    On a machine with multiple network cards I need to bind a WCF web service to a specific network interface. The default seems to be to bind on all network interfaces. The machine has two network adapters with the IPs 192.168.0.10 and 192.168.0.11. I have an Apache instance that binds to 192.168.0.10:80 and I need to run the web service on 192.168.0.11:80. (Due to external circumstances I cannot choose another port.) I tried the following:

        string endpoint = "http://192.168.0.11:80/SOAP";
        ServiceHost = new ServiceHost(typeof(TService), new Uri(endpoint));
        ServiceHost.AddServiceEndpoint(typeof(TContract), Binding, "");
        // or: ServiceHost.AddServiceEndpoint(typeof(TContract), Binding, endpoint);

    But it doesn't work; netstat -ano -p tcp always shows the web service listening on 0.0.0.0:80, which is all interfaces (if I got that right). When I start Apache first, it correctly binds to the other interface, which in turn prevents the WCF service from binding to "all". Any ideas?

    Read the article

  • How to let one external stylesheet selectively overrule the other

    - by Ferdy
    I'm stunned by a simple thing that I want to accomplish but cannot get to work. I have a website and I want it to support themes, where a theme is a named set of CSS + images. No matter which theme is selected, I always include the main CSS file, which is the default theme. On top of that I load a second, theme-specific stylesheet, like so:

        <link rel="stylesheet" type="text/css" href="css/main.css" title="main" media="screen" />
        <link rel="stylesheet" type="text/css" href="themes/<?= $style ?>/css/<?= $style ?>.css" title="<?= $style ?>" media="screen" />

    My idea is that the theme-specific CSS should not be a full copy of the main CSS file. Instead, it should only contain rules that overrule those of the main.css file. This makes themes much smaller and easier to maintain. I thought I could simply load the two external stylesheets one after the other and that, for conflicting rules, the theme-specific CSS (the second file) would always win. However, it does not seem to work. If I make a dramatic styling change in the theme file, it has no effect. If I then comment out the main CSS file, the theme CSS does take effect. Was I too naive in expecting this to work like this? I know I can use inline styles to overrule anything, but I prefer a setup like this if possible.

    Read the article

  • error message The URI does not identify an external Java class

    - by iHeartGreek
    Hi! I am new to XSL, and thus new to using scripts within XSL. I have taken example code (also using C#) and adapted it for my own use, but it does not work. The error message is:

        The URI urn:cs-scripts does not identify an external Java class

    The relevant code I have is:

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl"
            xmlns:strTok="urn:cs-scripts">
        ...
        ...
        ...
        </xsl:template>

        <xsl:variable name="temp">
            <xsl:value-of select="tok:getList('AAA BBB CCC', ' ')"/>
        </xsl:variable>

        <msxsl:script language="C#" implements-prefix="tok">
        <![CDATA[
            public string[] getList(string str, char[] delim)
            {
                return str.Split(delim, StringSplitOptions.None);
            }

            public string getString(string[] list, int i)
            {
                return list[i];
            }
        ]]>
        </msxsl:script>
        </xsl:stylesheet>

    Read the article

  • document.write Not working when loading external Javascript source

    - by jadent
    I'm trying to load an external JavaScript file dynamically into an HTML element to preview an ad tag. The script loads and executes, but it contains document.write, which does not execute properly, yet no errors are reported.

        <html>
        <head>
        <script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            source = 'http://ib.adnxs.com/ttj?id=555281';

            // DOM Insert Approach
            // -----------------------------------
            var script = document.createElement('script');
            script.setAttribute('type', 'text/javascript');
            script.setAttribute('src', source);
            document.body.appendChild(script);
        });
        </script>
        </head>
        <body>
        </body>
        </html>

    I can get it to work if I move the source to the same domain for testing, or if the script is modified to use document.createElement and appendChild instead of document.write, like the code above. I don't have the ability to modify the script, since it is generated and hosted by a third party. Does anyone know why document.write will not work correctly? And is there a way to get around this? Thanks for the help!

    Read the article

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [runs on Linux]. It is very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After running the program for a few iterations, I noticed that I/O reads block for requests larger than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux appears to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once with write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes providing this abstraction already (something like BufferedInputStream in Java)?

    Read the article

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process:

        1. Ping these addresses to find available machines
        2. Connect to a known socket on the available machines
        3. Send a message to the successfully established sockets
        4. Compare the response to the expected response

    Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout it could take forever, but with a small timeout I may miss some machines that are available. Sometimes pings appear to fail even when I know that the address points to an active machine. Do I need to ping twice in case the request gets discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources()?! Any pointers on the best approach to scanning a range of IPs like this?
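
    The question is about .NET, where System.Net.NetworkInformation.Ping.SendAsync lets many echo requests be outstanding at once, but the shape of the approach is language-agnostic. Here is a rough sketch in Python of the "short timeout, one retry, many pings in parallel" idea; the subnet, worker count and timeout values are assumptions chosen for illustration:

        import platform
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        def ping_once(ip, timeout_s=1):
            # One ICMP echo request via the system ping command (flags differ by OS).
            if platform.system() == "Windows":
                cmd = ["ping", "-n", "1", "-w", str(timeout_s * 1000), ip]
            else:
                cmd = ["ping", "-c", "1", "-W", str(timeout_s), ip]
            return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                                  stderr=subprocess.DEVNULL).returncode == 0

        def is_alive(ip):
            # Short timeout plus a single retry balances speed against dropped replies.
            return ping_once(ip) or ping_once(ip)

        def scan(ips, workers=64):
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return [ip for ip, up in zip(ips, pool.map(is_alive, ips)) if up]

        if __name__ == "__main__":
            subnet = [f"192.168.1.{host}" for host in range(1, 255)]  # example range
            print(scan(subnet))

    Sweeping 254 addresses with 64 workers and a one-second timeout takes seconds rather than minutes, and the single retry papers over an occasional discarded echo reply.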

    Read the article

  • ColdFusion Session issue - multiple users behind one proxy IP -- cftoken and cfid seem to be shared

    - by smoothoperator
    Hi everyone, I have an application that uses ColdFusion's session management (instead of J2EE session management). We have one client whose company traffic has recently been switched to reach us via a proxy server on their network, so to our ColdFusion server it appears that all traffic, for all of that company's accounts, is coming from this one IP address. Of the session variables, part 1 is kept in a cflock and part 2 is kept in editable session variables. I may be misunderstanding, but we have done it this way because we modify some values as needed throughout the application's usage. We are now running into an issue of this client having their session variables mixed up (?). We have one case where we set a timestamp, and when it comes time to look it up, it's empty. From the looks of it this is happening because another user is on the same token. My initial thought is to look into modifying our existing session management to somehow generate a unique cftoken/cfid, or to start using jsessionid, if that solves the problem at all. I have done some basic research on this issue and couldn't find anything similar, so I thought I'd ask here. Thanks!

    Read the article

  • Implementing an Intelligent File Transfer Software in Java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software that can move files from a source to a destination. The overall goal of this project is to create intelligent file transfer. The software will have three components:

        1. Broker: the module that communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information to the Monitor, archives files, writes all process data to log files and escalates issues if necessary.
        2. Configuration Manager: a web-based application used to configure and deploy the configuration to all brokers.
        3. Monitor: a web-based application used to monitor each Broker in the environment.

    The project has to be built in Java, with the transfer protocol running over TCP/IP. The client does not want to use FTP. File transfer seems very easy, until there are several processes waiting to pick the file up automatically. Several problems arise:

        How can we guarantee the file is received at the destination?
        If a file isn't received the first time, how do we try again (even after a restart or power breakdown)?
        How does the receiver know the file it received is complete?
        How can we transfer multiple files synchronously?
        How can we protect the bandwidth, so file transfer isn't blocking other processes?
        How does one interoperate between multiple OS platforms?
        What about authentication?
        How can we monitor the workflow?
        Auditing / logging
        Archiving

    Can you please provide answers to some of these? Thanks
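
    Several of these questions (guaranteeing delivery, knowing the file is complete) come down to framing plus an end-to-end checksum that the receiver acknowledges. A minimal sketch of that idea follows; the client wants Java, so this Python version and its wire format are purely illustrative assumptions:

        import hashlib
        import socket
        import struct

        # Illustrative framing: 8-byte big-endian length, payload bytes,
        # 32-byte SHA-256 digest; the receiver echoes the digest as the ack.

        def send_file(sock, path):
            data = open(path, "rb").read()
            digest = hashlib.sha256(data).digest()
            sock.sendall(struct.pack(">Q", len(data)) + data + digest)
            # The transfer only counts as delivered once the digest is echoed back.
            return sock.recv(32) == digest

        def recv_file(sock, path):
            def read_exact(n):
                buf = b""
                while len(buf) < n:
                    chunk = sock.recv(n - len(buf))
                    if not chunk:
                        raise ConnectionError("peer closed mid-transfer")
                    buf += chunk
                return buf

            size = struct.unpack(">Q", read_exact(8))[0]
            data = read_exact(size)
            digest = read_exact(32)
            if hashlib.sha256(data).digest() != digest:
                raise ValueError("checksum mismatch; sender should retry")
            open(path, "wb").write(data)
            sock.sendall(digest)   # acknowledgement: file arrived complete and intact

    Retrying after a restart or power failure then only requires the sender to persist which files have not yet been acknowledged; bandwidth throttling, parallel transfers, authentication and auditing are separate layers on top of this core exchange.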

    Read the article

  • Loading dynamic external content into the Div of another on click

    - by Robin I Knight
    I'm trying to load an external PHP file into another page's div on click. Iframes will not work, as the size of the content changes with collapsible panels. That leaves AJAX. I was given a piece of code.

    HTML:

        <a href="#" id="getData">Get data</a>
        <div id="myContainer"></div>

    JS:

        $('#getData').click(function(){
            $.get('data.php', { section: 'mySection' }, function(data){
                $('#myContainer').html(data);
            });
        });

    PHP:

        <?php
        if($_GET['section'] === 'mySection')
            echo '<span style="font-weight:bold;">Hello World</span>';
        ?>

    I have tested it here http://www.divethegap.com/scuba-diving-programmes-dive-the-gap/programme-pages/dahab-divemaster/divemaster-trainingA.php and get the most unexpected results. It certainly loads the right number of items, as it says in the lower bar in Safari, but I see three small calendars and that is it. Can anyone see where I have made a mistake?

    Read the article

  • How can I load an external jQuery gallery/slideshow into a div

    - by DanTransformer
    I've got a jQuery navigation menu loading external content into my #main div, which works fine when the content is static, but the site I'm working on contains jQuery galleries/slideshows which I'd like to pull into the div. The problem I'm having is that when the gallery is loaded, the images all appear but the jQuery functionality does not work. Any help appreciated. Here is the JavaScript I'm using:

        $(document).ready(function() {
            // Check for hash value in URL
            var hash = window.location.hash.substr(1);
            var href = $('#accordion ul li a').each(function(){
                var href = $(this).attr('href');
                if(hash==href.substr(0,href.length-5)){
                    var toLoad = hash+'.html #main';
                    $('#main').load(toLoad)
                }
            });
            $('#accordion ul li a').click(function(){
                var toLoad = $(this).attr('href')+' #main';
                $('#main').hide('fast',loadContent);
                $('#load').remove();
                $('#wrapper').append('<span id="load">LOADING...</span>');
                $('#load').fadeIn('normal');
                window.location.hash = $(this).attr('href').substr(0,$(this).attr('href').length-5);
                function loadContent() {
                    $('#main').load(toLoad,'',showNewContent())
                }
                function showNewContent() {
                    $('#main').show('normal',hideLoader());
                }
                function hideLoader() {
                    $('#load').fadeOut('normal');
                }
                return false;
            });
        });

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class and which I want to use polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. e.g.:

        enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
        enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

        class Base {
            //...
            virtual AVal getAVal() const = 0;
            virtual BVal getBVal() const = 0;
            //...
        };

        class One : public Base {
            //...
            AVal getAVal() const { return A_VAL_ONE; }
            BVal getBVal() const { return B_VAL_ONE; }
            //...
        };

        class Two : public Base {
            //...
            AVal getAVal() const { return A_VAL_TWO; }
            BVal getBVal() const { return B_VAL_TWO; }
            //...
        };

    etc. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.:

        struct Vals {
            AVal a_val;
            BVal b_val;
        };

    storing a Vals* in each instance, and rewriting Base as follows?

        class Base {
            //...
        public:
            AVal getAVal() const { return _vals->a_val; }
            BVal getBVal() const { return _vals->b_val; }
            //...
        private:
            Vals* _vals;
        };

    Is the extra dereference essentially the same cost as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.

    Read the article

  • Apply [ThreadStatic] attribute to a method in external assembly

    - by Sen Jacob
    Can I use an external assembly's static method as if it were [ThreadStatic]? Here is my situation. The assembly class (whose source I do not have access to) has this structure:

        public class RegistrationManager
        {
            private RegistrationManager() {}
            public static void RegisterConfiguration(int ID) {}
            public static object DoWork() {}
            public static void UnregisterConfiguration(int ID) {}
        }

    Once registered, I cannot call DoWork() with a different ID without unregistering the previously registered one. Actually, I want to call the DoWork() method with different IDs simultaneously using multi-threading. If the RegisterConfiguration(int ID) method were [ThreadStatic], I could call it in different threads without problems, right? So, can I apply the [ThreadStatic] attribute to this method, or is there any other way I can call the two static methods at the same time without waiting for the other thread to unregister? If I check it like the following, it should work:

        for (int i = 0; i < 10; i++)
        {
            new Thread(new ThreadStart(() => Checker(i))).Start();
        }

        public void Checker(int i)
        {
            RegistrationManager.RegisterConfiguration(i); // now I cannot register a second time
            RegistrationManager.DoWork();
            Thread.Sleep(5000); // DoWork() may take a little while to complete before it is unregistered
            RegistrationManager.UnregisterConfiguration(i);
        }

    Read the article

  • RegEx expression or jQuery selector to NOT match "external" links in href

    - by TrueBlueAussie
    I have a jQuery plugin that overrides link behavior to allow Ajax loading of page content. Simple enough with a delegated event like $(document).on('click','a', function(){});, but I only want it to apply to links that are not like these ones (Ajax loading is not applicable to them, so links like these need to behave normally):

        target="_blank"     // New browser window
        href="#..."         // Bookmark link (page is already loaded).
        href="afs://..."    // AFS file access.
        href="cid://..."    // Content identifiers for MIME body part.
        href="file://..."   // Specifies the address of a file from the locally accessible drive.
        href="ftp://..."    // Uses Internet File Transfer Protocol (FTP) to retrieve a file.
        href="http://..."   // The most commonly used access method.
        href="https://..."  // Provides some level of security of transmission.
        href="mailto://..." // Opens an email program.
        href="mid://..."    // The message identifier for email.
        href="news://..."   // Usenet newsgroup.
        href="x-exec://..." // Executable program.
        href="http://AnythingNotHere.com"  // External links

    Sample code:

        $(document).on('click', 'a:not([target="_blank"])', function(){
            var $this = $(this);
            if ('some additional check of href'){
                // Do ajax load and stop default behaviour
                return false;
            }
            // allow link to work normally
        });

    Q: Is there a way to easily detect all "local links" that would only navigate within the current website, excluding all the variations mentioned above? Note: This is for an MVC 5 Razor website, so absolute site URLs are unlikely to occur.
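
    One common check is a regular expression that rejects any href carrying a scheme, a protocol-relative "//" or a leading "#". Below is a sketch of such a pattern, shown in Python only to demonstrate the expression; the same regex could be applied inside the jQuery click handler above, with target="_blank" still handled by the :not() selector:

        import re

        # Hypothetical pattern: accept only hrefs that do NOT start with a scheme
        # ("http:", "mailto:", "ftp:", ...), a protocol-relative "//", or "#".
        INTERNAL_HREF = re.compile(r'^(?![a-zA-Z][a-zA-Z0-9+.\-]*:|//|#)')

        tests = ["/Home/About", "page2.html", "#section", "http://example.com/",
                 "mailto:me@example.com", "//cdn.example.com/app.js", "ftp://host/file"]
        for href in tests:
            kind = "internal" if INTERNAL_HREF.match(href) else "external/special"
            print(f"{href:30} -> {kind}")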

    Read the article

  • md5_file() not working with IP addresses?

    - by Rob
    Here is my code relating to the question:

        $theurl = trim($_POST['url']);
        $md5file = md5_file($theurl);
        if ($md5file != '96a0cec80eb773687ca28840ecc67ca1') {
            echo 'Hash doesn\'t match. Incorrect file. Reupload it and try again';

    When I run this script, it doesn't even output an error. It just stops. It loads for a bit, and then it just stops. Further down the script I use it again, and it fails there too:

        while($row=mysql_fetch_array($execquery, MYSQL_ASSOC)){
            $hash = @md5_file($row['url']);
            $url = $row['url'];
            mysql_query("UPDATE urls SET hash='" . $hash . "' WHERE url='" . $url . "'")
                or die("MYSQL is indeed gay: ".mysql_error());
            if ($hash != '96a0cec80eb773687ca28840ecc67ca1'){
                $status = 'down';
            }else{
                $status = 'up';
            }
            mysql_query("UPDATE urls SET status='" . $status . "' WHERE url='" . $url . "'")
                or die("MYSQL is indeed gay: ".mysql_error());
        }

    It checks all the URLs just fine until it gets to one with an IP instead of a domain, such as http://188.72.215.195/config.php, at which point the script again loads for a bit and then stops. Any help would be much appreciated; if you need any more information just ask.
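
    md5_file() will fetch URLs when allow_url_fopen is enabled, so the bare-IP form is probably not the problem by itself; a plausible suspect (an assumption, not verified against the asker's server) is that fetching that particular URL hangs or fails, leaving md5_file() to stall or return false. For comparison, here is a sketch of the same check with an explicit network timeout and chunked hashing, written in Python for illustration:

        import hashlib
        import urllib.request

        EXPECTED = "96a0cec80eb773687ca28840ecc67ca1"

        def remote_md5(url, timeout=10):            # timeout value is an assumption
            md5 = hashlib.md5()
            # An explicit timeout makes an unreachable host (by name or bare IP)
            # fail quickly instead of hanging the whole script.
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                for chunk in iter(lambda: resp.read(8192), b""):
                    md5.update(chunk)
            return md5.hexdigest()

        if __name__ == "__main__":
            url = "http://188.72.215.195/config.php"   # the URL from the question
            try:
                status = "up" if remote_md5(url) == EXPECTED else "down"
            except OSError as exc:
                status = f"down (fetch failed: {exc})"
            print(status)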

    Read the article

  • Jquery Selecting Multiple Classes, Loading External Files

    - by WillingLearner
    I have 2 links with the class dynamicLoad:

        <ul class="navbar">
            <li><a href="Page3-News.html" class="dynamicLoad news">NEWS</a></li>
            <li><a href="Page2-Events.html" class="dynamicLoad">EVENTS</a></li>
        </ul>

    and then I have this already-working code, which loads external pages into a div named #MainWrapper:

        <script type="text/javascript">
        $( document ).ready( function() {
            $( 'a.dynamicLoad' ).click( function( e ) {
                e.preventDefault();  // prevent the browser from following the link
                e.stopPropagation(); // prevent the browser from following the link
                $( '#MainWrapper' ).load( $( this ).attr( 'href' ) );
            });
        });
        </script>

    How do I edit this code and my links so that I can target the first link, with the classes of both dynamicLoad and news, and then load another script and/or pages into the main wrapper, without breaking its already-working functionality?

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I – Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration, I might hazard to say that the majority of production deployments are now using SOA clusters.  So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post will explain the setting required to prepare the cluster for installation and configuration. Software Required The following software is required for an 11.1.1.3 SOA/BPM install. Software Version Notes Oracle Database Certified databases are listed here SOA & BPM Suites require a working database installation. Repository Creation Utility (RCU) 11.1.1.3 If upgrading an 11.1.1.2 repository then a separate script is available. Web Tier Utilities 11.1.1.3 Provides Web Server, 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first. Web Tier Utilities 11.1.1.3 Web Server, 11.1.1.3 Patch.  You can use the 11.1.1.2 version without problems. Oracle WebLogic Server 11gR1 10.3.3 This is the host platform for 11.1.1.3 SOA/BPM Suites. SOA Suite 11.1.1.2 SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first. SOA Suite 11.1.1.3 SOA Suite 11.1.1.3 patch, requires 11.1.12 to have been installed. My installation was performed on Oracle Enterprise Linux 5.4 64-bit. Database I will not cover setting up the database in this series other than to identify the database requirements.  If setting up a SOA cluster then ideally we would also be using a RAC database.  I assume that this is running on separate machines to the SOA cluster.  Section 2.1, “Database”, of the EDG covers the database configuration in detail. Settings The database should have processes set to at least 400 if running SOA/BPM and BAM. alter system set processes=400 scope=spfile Run RCU The Repository Creation Utility creates the necessary database tables for the SOA Suite.  The RCU can be run from any machine that can access the target database.  In 11g the RCU creates a number of pre-defined users and schema with a user defiend prefix.  This allows you to have multiple 11g installations in the same database. After running the RCU you need to grant some additional privileges to the soainfra user.  The soainfra user should have privileges on the transaction tables. grant select on sys.dba_pending_transactions to prefix_soainfra Grant force any transaction to prefix_soainfra Machines The cluster will be built on the following machines. EDG Name is the name used for this machine in the EDG. Notes are a description of the purpose of the machine. EDG Name Notes LB External load balancer to distribute load across and failover between web servers. WEBHOST1 Hosts a web server. WEBHOST2 Hosts a web server. SOAHOST1 Hosts SOA components. SOAHOST2 Hosts SOA components. BAMHOST1 Hosts BAM components. BAMHOST2 Hosts BAM components. Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers and in this case we add SOAHOST3, SOAHOST4 etc as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine. Software Components The cluster will use the following software components. EDG Name is the name used for this machine in the EDG. 
Type is the type of component, generally a WebLogic component. Notes are a description of the purpose of the component. EDG Name Type Notes AdminServer Admin Server Domain Admin Server WLS_WSM1 Managed Server Web Services Manager Policy Manager Server WLS_WSM2 Managed Server Web Services Manager Policy Manager Server WLS_SOA1 Managed Server SOA/BPM Managed Server WLS_SOA2 Managed Server SOA/BPM Managed Server WLS_BAM1 Managed Server BAM Managed Server running Active Data Cache WLS_BAM2 Managed Server BAM Manager Server without Active Data Cache   Node Manager Will run on all hosts with WLS servers OHS1 Web Server Oracle HTTP Server OHS2 Web Server Oracle HTTP Server LB Load Balancer Load Balancer, not part of SOA Suite The above assumes a 2 node cluster. Network Configuration The SOA cluster requires an extensive amount of network configuration.  I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.168.x.x) to the cluster for use by addresses that only need to be accessible to the Load Balancer or other cluster members.  Section 2.2, "Network", of the EDG covers the network configuration in detail. EDG Name is the hostname used in the EDG. IP Name is the IP address name used in the EDG. Type is the type of IP address: Fixed is fixed to a single machine. Floating is assigned to one of several machines to allow for server migration. Virtual is assigned to a load balancer and used to distribute load across several machines. Host is the host where this IP address is active.  Note for floating IP addresses a range of hosts is given. Bound By identifies which software component will use this IP address. Scope shows where this IP address needs to be resolved. Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section.  These addresses are only used for inter-cluster communication or for access by the load balancer. Internal scope addresses Notes are comments on why that type of IP is used. EDG Name IP Name Type Host Bound By Scope Notes ADMINVHN VIP1 Floating SOAHOST1-SOAHOSTn AdminServer Cluster Admin server, must be able to migrate between SOA server machines. SOAHOST1 IP1 Fixed SOAHOST1 NodeManager, WLS_WSM1 Cluster WSM Server 1 does not require server migration. SOAHOST2 IP2 Fixed SOAHOST1 NodeManager, WLS_WSM2 Cluster WSM Server 2 does not require server migration SOAHOST1VHN VIP2 Floating SOAHOST1-SOAHOSTn WLS_SOA1 Cluster SOA server 1, must be able to migrate between SOA server machines SOAHOST2VHN VIP3 Floating SOAHOST1-SOAHOSTn WLS_SOA2 Cluster SOA server 2, must be able to migrate between SOA server machines BAMHOST1 IP4 Fixed BAMHOST1 NodeManager Cluster   BAMHOST1VHN VIP4 Floating BAMHOST1-BAMHOSTn WLS_BAM1 Cluster BAM server 1, must be able to migrate between BAM server machines BAMHOST2 IP3 Fixed BAMHOST2 NodeManager, WLS_BAM2 Cluster BAM server 2 does not require server migration WEBHOST1 IP5 Fixed WEBHOST1 OHS1 Cluster   WEBHOST2 IP6 Fixed WEBHOST2 OHS2 Cluster   soa.mycompany.com VIP5 Virtual LB LB Public External access point to SOA cluster. admin.mycompany.com VIP6 Virtual LB LB Internal Internal access to WLS console and EM soainternal.mycompany.com VIP7 Virtual LB LB Internal Internal access point to SOA cluster Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster.  For example in the event of failure of SOAHOST1 then WLS_SOA1 will need to be migrated to another server.  
In this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine.  Once set up the node manager will manage registration and removal of the floating IP addresses with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers.  The hostnames don’t have to be the ones given in the EDG, but they must be distinct in the same way as the ETC names are distinct.  If the type is a fixed IP then if the addresses are the same you can use the same hostname, for example if you collapse the soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address, however SOAHOST1VHN can never be the same as BAMHOST1VHN because these are floating IP addresses. Notes on DNS IP addresses that are of scope “Cluster” just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer.  IP addresses that are of scope “Internal” need to be available on the internal DNS servers, whilst IP addresses of scope “Public” need to be available on external and internal DNS servers. Shared File System At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores.  It is also possible to place the software itself on a shared server.  I strongly recommend that all machines have the same file structure for their SOA installation otherwise you will experience pain!  Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail. The following shorthand is used for locations: ORACLE_BASE is the root of the file system used for software and configuration files. MW_HOME is the location used by the installed SOA/BPM Suite installation.  This is also used by the web server installation.  In my installation it is set to <ORACLE_BASE>/SOA11gPS2. ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components.  This directory is installed under the the MW_HOME but the name is decided by the user at installation, default values are Oracle_SOA1 and Oracle_Web1.  In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle _WEB. ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory.  This is always <MW_HOME>/oracle_common. ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache.  It is recommended to create it under <ORACLE_BASE>/admin.  In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1. WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3. Key file locations are shown below. Directory Notes <ORACLE_BASE>/admin/domain_name/aserver/domain_name Shared location for domain.  Used to allow admin server to manually fail over between machines.  When creating domain_name provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/aserver/applications Shared location for deployed applications.  Needs to be provided when creating the domain. 
In my install this is <ORACLE_BASE>/admin/aserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/domain_name Either unique location for each machine or can be shared between machines to simplify task of packing and unpacking domain.  This acts as the managed server configuration location.  Keeping it separate from Admin server helps to avoid problems with the managed servers messing up the Admin Server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/applications Either unique location for each machine or can be shared between machines.  Holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/soa_cluster_name Shared directory to hold the following   dd – deployment descriptors   jms – shared JMS file stores   fadapter – shared file adapter co-ordination files   tlogs – shared transaction log files In my install this is <ORACLE_BASE>/admin/soa_cluster. <ORACLE_BASE>/admin/instance_name Local folder for web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2. I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer. <ORACLE_BASE>/product/fmw This can be a shared or local folder for the SOA/BPM Suite software.  I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2 All the shared files need to be put onto a shared storage media.  I am using NFS, but recommendation for production would be a SAN, with mirrored disks for resilience. Collapsing Environments To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine.  This will require more memory but memory is a lot cheaper than additional machines.  For environments that require higher security then stay with a separate WEBHOST tier as per the EDG.  Similarly for high volume environments then keep a separate set of machines for BAM and/or Web tier as per the EDG. Notes on Dev Environments In a dev environment it is acceptable to use a a single node (non-RAC) database, but be aware that the config of the data sources is different (no need to use multi-data source in WLS).  Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer.  To test a cluster properly we will need at least 2 machines. For my test environment I used Oracle Web Cache as a load balancer.  I ran it on one of the SOA Suite machines and it load balanced across the Web Servers on both machines.  This was easy for me to set up and I could administer it from a web based console.

    Read the article

  • Layer 3 switch routing

    - by Yoshiwaan
    I need help moving over to using our layer 3 switch as the inter-VLAN routing device rather than our Cisco router. I've mostly got it working, but I've got stuck near the end and need some advice (I think I just need a bit of education on the subject, really). Currently I have a Dell PowerConnect 7048 connecting to a Cisco 1841 router. I've got a few key excerpts from the configs to provide the key information. On the PowerConnect I have the following:

        ip routing
        ip default-gateway 172.31.14.1
        ip route 0.0.0.0 0.0.0.0 172.31.14.1 253
        !
        interface vlan 1
        ip address 172.31.14.254 255.255.255.0
        exit
        interface vlan 2
        ip address 172.31.19.254 255.255.255.0
        exit
        interface vlan 4
        ip address 172.31.16.254 255.255.255.0
        !
        interface Gi1/0/1
        description 'Link to L7Router01'
        switchport mode trunk
        switchport trunk allowed vlan except 3,7-4093
        exit
        !

    and on the Cisco the following:

        interface FastEthernet0/0
         ip address 172.31.14.1 255.255.255.0
         ip nat inside
         ip virtual-reassembly
        !
        interface FastEthernet0/0.2
         description Accounts VLAN
         encapsulation dot1Q 2
         ip address 172.31.19.1 255.255.255.0
         ip nat inside
         ip virtual-reassembly
        !
        interface FastEthernet0/0.4
         description Voice VLAN
         encapsulation dot1Q 4
         ip address 172.31.16.1 255.255.255.0
         ip nat inside
         ip virtual-reassembly
        !

    So what I'm doing is moving clients over so that their default gateway is a 172.31.x.254 address rather than a 172.31.x.1 address. This works great for inter-VLAN routing; I have no issues with this. The switch can also reach the router without problems, and users on the 172.31.14.0/24 network can access all interfaces and sub-interfaces on the router, including 172.31.14.1. They can also access all of the networks the router connects off to, no worries there. The problem I have is that users on the 172.31.16.0/24 and 172.31.19.0/24 subnets cannot access either 172.31.14.1 or any of the subnets the router connects to. They can, however, connect to BOTH of the sub-interfaces on the router from either subnet. What am I missing here? Why can't the VLANs connect to the non-sub interface on the router? Are tagged packets being sent to this interface?

    Read the article

  • Creating reverse DNS entries which resolve [closed]

    - by Tiffany Walker
    Possible Duplicate: Reverse DNS - how to correctly configure for SMTP delivery. I ran a DNS check and ended up with the following error:

        FAIL: Found reverse DNS entries which don't resolve: IP-IP-IP-IP.HOST.DOMAIN.TLD

        All IPs' reverse DNS entries should resolve back to the IP address (MX record's name -> IP -> IP reverse -> IP). Many mail servers are configured to reject e-mails from IPs with inconsistent reverse DNS configuration.

    How do I properly configure this so the reverse entry resolves back to the IP?
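
    What the checker is asking for is forward-confirmed reverse DNS: the PTR record for the IP must name a host whose forward lookup returns that same IP. A small sketch of that round trip (the IP below is a documentation placeholder):

        import socket

        def fcrdns_ok(ip):
            # IP -> PTR name (reverse lookup)
            try:
                ptr_name = socket.gethostbyaddr(ip)[0]
            except OSError:
                return False, "no PTR record"
            # PTR name -> address(es) (forward lookup) must include the original IP
            try:
                forward_ips = {info[4][0] for info in socket.getaddrinfo(ptr_name, None)}
            except OSError:
                return False, f"PTR name {ptr_name} does not resolve"
            if ip in forward_ips:
                return True, f"{ip} -> {ptr_name} -> {ip}"
            return False, f"{ptr_name} resolves to {forward_ips}, not {ip}"

        print(fcrdns_ok("203.0.113.25"))   # placeholder IP

    In practice that means the PTR for the sending IP should name a host (typically the name used in the MX record) whose A record points straight back at the IP, which is exactly the MX -> IP -> reverse -> IP chain quoted in the error message.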

    Read the article

  • How can I display eth0's IP address at the login screen on Precise Server?

    - by Andrew Stebenne
    The server I administer, inconveniently enough, has a dynamic IP address assigned by DHCP. The convenient counterbalance, though, is that it happens to be set up about two feet from where I sit. I know how to edit /etc/issue to show different values before the login prompt is displayed, but I'd like to know whether it's possible for /etc/issue to show the current IP address of eth0 (re-evaluated at boot time) so that I can see it and then ssh in without having to log in and run ifconfig.
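
    One way to do it (an assumption about the setup rather than a built-in feature of Precise) is to regenerate /etc/issue from a small script run at boot, for example from /etc/rc.local once networking is up. A sketch, where the banner text and interface name are placeholders:

        #!/usr/bin/env python
        # Hypothetical /usr/local/sbin/update-issue, called from /etc/rc.local (as root)
        # after the network is up; rewrites /etc/issue with eth0's current IPv4 address.
        import re
        import subprocess

        BASE_ISSUE = "Ubuntu 12.04 LTS \\n \\l\n"   # whatever your /etc/issue normally says

        def eth0_ipv4():
            # Parse `ip -4 addr show eth0` for the inet line.
            out = subprocess.check_output(["ip", "-4", "addr", "show", "eth0"]).decode()
            match = re.search(r"inet (\d+\.\d+\.\d+\.\d+)", out)
            return match.group(1) if match else "no address"

        with open("/etc/issue", "w") as issue:
            issue.write(BASE_ISSUE)
            issue.write("eth0: %s\n\n" % eth0_ipv4())

    Since getty re-reads /etc/issue each time it draws the login prompt, the address shown stays correct for whatever lease was obtained at that boot.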

    Read the article

  • Can a NodeJS webserver handle multiple hostnames on the same IP?

    - by Matthew Patrick Cashatt
    I have just begun learning NodeJS and LOVE it so far. I have set up a Linux box to run it and, in learning to use the event-driven model, I am curious if I can use a common IP for multiple domain names. Could I point, for example, www.websiteA.com, www.websiteB.com, and www.websiteC.com all to the same IP (node webserver) and then route to the appropriate source files based on the request? Would this cause certain doom when it came to scaling to any reasonable size?

    Read the article

  • Multiple Zend application code organisation

    - by user966936
    For the past year I have been working on a series of applications all based on the Zend framework and centered on a complex business logic that all applications must have access to even if they don't use all of it (easier than having multiple library folders for each application, as they are all linked together with a common center). Without going into much detail about what the project is specifically about, I am looking for some input (as I am working on the project alone) on how I have "grouped" my code. I have tried to split it all up in such a way that it removes dependencies as much as possible. I'm trying to keep it as decoupled as I logically can, so in 12 months' time when my time is up anyone else coming in can have no problem extending on what I have produced. Example structure:

        applicationStorage\                                  (contains all applications and associated data)
        applicationStorage\Applications\                     (contains the applications themselves)
        applicationStorage\Applications\external\            (application grouping folder; contains all external customer access applications)
        applicationStorage\Applications\external\site\       (main external customer access application)
        applicationStorage\Applications\external\site\Modules\
        applicationStorage\Applications\external\site\Config\
        applicationStorage\Applications\external\site\Layouts\
        applicationStorage\Applications\external\site\ZendExtended\  (contains extended Zend classes specific to this application, e.g. ZendExtended_Controller_Action extends Zend_Controller_Action)
        applicationStorage\Applications\external\mobile\     (mobile external customer access application; different workflow, limited capabilities compared to the full site version)
        applicationStorage\Applications\internal\            (application grouping folder; contains all internal company applications)
        applicationStorage\Applications\internal\site\       (main internal application)
        applicationStorage\Applications\internal\mobile\     (mobile access has a different flow and limited abilities compared to the main site version)
        applicationStorage\Tests\                            (contains PHP unit tests)
        applicationStorage\Library\
        applicationStorage\Library\Service\                  (contains all business logic, services and the service locator; these are completely decoupled from the Zend framework and rely on the models' interfaces)
        applicationStorage\Library\Zend\                     (Zend framework)
        applicationStorage\Library\Models\                   (doesn't know services but is linked to the Zend framework for DB operations; contains model interfaces and model data mappers for all business objects; examples include Iorder/IorderMapper, Iworksheet/IWorksheetMapper, Icustomer/IcustomerMapper)

    (Note: the Modules, Config, Layouts and ZendExtended folders are duplicated in each application folder, but I have omitted them as they are not required for my purposes.)

    The library contains all "universal" code. The Zend framework is at the heart of all applications, but I wanted my business logic to be Zend-framework-independent. All model and mapper interfaces have no public references to Zend_Db but actually wrap around it privately. So my hope is that in the future I will be able to rewrite the mappers and DbTables (containing a Models_DbTable_Abstract that extends Zend_Db_Table_Abstract) in order to decouple my business logic from the Zend framework, if I want to move my business logic (services) to a non-Zend framework environment (maybe some other PHP framework).

    Using a service locator and registering the required services within the bootstrap of each application, I can use different versions of the same service depending on the request and which application is being accessed. Example: all external applications will have a Service_Auth_External implementing Service_Auth_Interface registered; the same goes for internal applications, with Service_Auth_Internal implementing Service_Auth_Interface, retrieved via Service_Locator::getService('Auth'). I'm concerned I may be missing some possible problems with this. One I'm half-thinking about is a config.ini file for all externals, then a separate application config.ini overriding or adding to the global external config.ini. If anyone has any suggestions I would greatly appreciate them. I have used context switching for AJAX functions within the individual applications, but there is a big chance both external and internal will get web services created for them. Again, these will be separated due to authorization and different available services:

        \applicationstorage\Applications\internal\webservice
        \applicationstorage\Applications\external\webservice

    Read the article

  • Is it possible to populate HTML form field data in an iPhone UIWebView using the external accessory framework?

    - by Jon Smallberries
    I have an iPhone app where I'd like to load a remotely served HTML form into a UIWebView and then populate that form as data becomes available from an external accessory using the External Accessory framework. Right now the data is entered by hand. The proposed flow is:

        1. Fetch an HTML page containing a form and put it into a UIWebView
        2. When data becomes available from the external accessory, populate the form field(s)
        3. Submit the form

    Is it possible to do this by "injecting" data from the external accessory into the UIWebView when all required data has been retrieved from the external accessory? I cannot seem to find any good examples of how to use the External Accessory framework to achieve this.

    Read the article
