Search Results

Search found 21004 results for 'load assembly'.


  • PF, load balanced gateways, and Squid

    - by Santa
    Hi. I have a FreeBSD router running PF and Squid with three network interfaces: two connected to upstream providers (em0 and em1 respectively) and one for the LAN we serve (re0). PF is set up for load balancing: it routes all traffic to ports 1-1023 through one interface (em0) and everything else through the other (em1). Squid also runs on the box and transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid then reissues the request to the outside world, it should follow the load-balancing rule through em0, no? The problem is, when we tested it (by browsing from a computer in the LAN to http://whatismyip.com), it reports the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected. How do I make Squid obey the load-balancing rule that we have set up? Here are the relevant settings in /etc/pf.conf:

        ext_if1="em1"   # DSL
        ext_if2="em0"   # T1
        int_if="re0"
        ext_gw1="x.x.x.1"
        ext_gw2="y.y.y.1"
        int_addr="10.0.0.1"
        int_net="10.0.0.0/16"
        dsl_ports = "1024:65535"
        t1_ports = "1:1023"
        ...
        squid=3128
        rdr on $int_if inet proto tcp from $int_net \
            to any port 80 -> 127.0.0.1 port $squid
        pass in quick on $int_if route-to lo0 inet proto tcp \
            from $int_net to 127.0.0.1 port $squid keep state
        ...
        # load balancing
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto tcp from $int_net to any port $dsl_ports keep state
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto udp from $int_net to any port $dsl_ports
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto tcp from $int_net to any port $t1_ports keep state
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto udp from $int_net to any port $t1_ports

    Thanks!


  • Load Spikes on an Apache/MySQL Server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated machine with plenty of processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack as well as MySQL and Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go as high as 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache; restarting it brings the server back, till it happens again. There seems to be no set time frame, or number of visitors on the site, that triggers it. Even a low number of concurrent visitors (20) can set it off. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes till it keels over. My question is: given that I suspect a rewrite issue or something similar, how can I figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram
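
    One way to test the rewrite-loop theory on Apache 2.2 is mod_rewrite's own trace log (the RewriteLog directive exists in 2.2 but was removed in 2.4). A minimal sketch, assuming a typical log path:

        # Hypothetical vhost fragment: trace every mod_rewrite decision.
        # Level 3 shows which rules matched; 9 is maximally verbose.
        RewriteLog "/var/log/apache2/rewrite.log"
        RewriteLogLevel 3

    A loop shows up as the same request cycling through the same URLs over and over within a single request.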


  • Calling a function after .load() (jQuery)

    - by Matt
    Having a little difficulty getting a function to call after a .load:

        $(function(){
            $('a.pageFetcher').click(function(){
                $('#main').load($(this).attr('rel'));
            });
        });

    The page loads, but the functions don't fire:

        $(function(){
            var $container = $('#container');
            $container.imagesLoaded(function(){
                $container.masonry({
                    itemSelector: '.box'
                });
            });
            $container.infinitescroll({
                navSelector  : '#page-nav',
                nextSelector : '#page-nav a',
                itemSelector : '.box',
                loading: {
                    finishedMsg: 'Nothing else to load.',
                    img: 'http://i.imgur.com/6RMhx.gif'
                }
            },
            function( newElements ) {
                $.superbox.settings = {
                    closeTxt: "Close this",
                    loadTxt: "Loading your selection",
                    nextTxt: "Next item",
                    prevTxt: "Previous item"
                };
                $.superbox();
                var $newElems = $( newElements ).css({ opacity: 0 });
                $newElems.imagesLoaded(function(){
                    $newElems.animate({ opacity: 1 });
                    $container.masonry( 'appended', $newElems, true );
                });
            });
        });

    I've attempted to combine them so that the second block runs after .load (after doing some searching on this site and looking at the given answers/examples), but nothing seems to work properly. Suggestions?
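
    For what it's worth, .load() accepts a completion callback that runs after the new HTML has been inserted, so one approach is to wrap the second block in a function and re-run it after each fetch. A minimal sketch (initBoxes is a hypothetical wrapper around the masonry/infinitescroll setup above):

        function initBoxes() {
            // ... the imagesLoaded / masonry / infinitescroll setup from above ...
        }

        $(function(){
            initBoxes(); // wire everything up on the initial page load
            $('a.pageFetcher').click(function(){
                $('#main').load($(this).attr('rel'), function(){
                    initBoxes(); // re-bind the plugins to the freshly loaded markup
                });
                return false; // keep the browser from following the link
            });
        });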


  • Is there a simple Load Balancer app for a development environment on Windows?

    - by djangofan
    Does a simple load balancer app for development on Windows exist? I am running a pair of JBoss 5.x instances in a cluster on a single machine. Normally this configuration is load balanced by a nice hardware load balancer, but I am wondering if there is a simple piece of software to enable load balancing in my Eclipse dev environment. For example, I want a load balancer running on port 11111 that round-robins between the 2 clustered JBoss instances on SSL ports 8443 and 8543 (or their HTTP ports if that's not possible). I know that Glassfish has a built-in load balancer, but I can't use Glassfish. One idea is to set up a separate instance of Tomcat with the "balancer" web app. I'm trying that now... not sure if it will work, and it's a complicated setup. I wish there were something really easy.
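
    One hedged option along those lines: Apache's mod_proxy_balancer runs on Windows and does simple round-robin by default. A sketch, assuming the two instances really do answer HTTPS on 8443 and 8543:

        # Hypothetical httpd.conf fragment; needs mod_proxy, mod_proxy_http,
        # mod_proxy_balancer and (for https back ends) mod_ssl loaded.
        Listen 11111
        SSLProxyEngine On
        <Proxy balancer://jboss>
            BalancerMember https://localhost:8443
            BalancerMember https://localhost:8543
        </Proxy>
        ProxyPass        / balancer://jboss/
        ProxyPassReverse / balancer://jboss/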


  • CodeIgniter: Can't load database from within a model

    - by thedp
    Hello, I've written a new model for my CodeIgniter framework. I'm trying to load the database from within the constructor function, but I'm getting the following error:

        Severity: Notice
        Message: Undefined property: userdb::$load
        Filename: models/userdb.php
        Line Number: 7

        Fatal error: Call to a member function database() on a non-object in
        /var/www/abc/system/application/models/userdb.php on line 7

    Here is my model:

        <?php
        class userdb extends Model {
            function __construct() {
                $this->load->database();
            }
        }
        ?>

    What am I doing wrong here? Thank you.
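
    The usual diagnosis for this error is that the Model base class, which wires up $this->load, never gets to run its constructor. A sketch of the common fix for CodeIgniter 1.x (assuming a 1.7-era install):

        <?php
        class userdb extends Model {
            function __construct() {
                parent::Model();          // let CI's Model set up $this->load first
                $this->load->database();
            }
        }
        ?>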


  • Load balancing a console application or service

    - by David
    So it's easy to load balance an ASP.NET web application. You set up a load balancer between two servers, and if a web server isn't responding on port 80, it won't receive requests. Are there any proven techniques for doing this for a C# console application or Windows service that takes actions of its own volition? Are there any frameworks for knowing whether peer processes are alive or dead, doing heartbeats, etc.? I've been experimenting a bit with NServiceBus, and it seems that for certain kinds of applications it helps to have most of the work done in response to an event, which makes the application more like a web application and therefore easier to scale and load balance across multiple processes. But that feels like a half-baked solution, since in most cases there needs to be some concept of a "master" process that's responsible for getting work started.


  • ActionScript 2.0: load images into an array

    - by incrediman
    I need to load an external image into an array. Let's say the image is http://sstatic.net/so/img/logo.png. I'm using AS2 - I do not have the option of using AS3. Any idea what to do? I'm able to load the image just fine into a movie clip in _root (below), but not into an array:

        var loader:MovieClipLoader = new MovieClipLoader();
        loader.loadClip("http://sstatic.net/so/img/logo.png", _root.mcOnTheStage);

    Is there some way to make an array of MovieClips that I can load the images into?
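
    For what it's worth, a sketch of one way to do this in AS2: keep the MovieClip holders (not raw image data) in the array, and point MovieClipLoader at each one. The urls array and the "img" instance names are made up:

        var urls:Array = ["http://sstatic.net/so/img/logo.png" /*, more URLs */];
        var clips:Array = [];
        var loader:MovieClipLoader = new MovieClipLoader();

        for (var i:Number = 0; i < urls.length; i++) {
            // one empty holder clip per image, with a reference kept in the array
            var mc:MovieClip = _root.createEmptyMovieClip("img" + i, _root.getNextHighestDepth());
            loader.loadClip(urls[i], mc);
            clips.push(mc);
        }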


  • Amazon EC2 Load Balancer: Defending against a DoS attack?

    - by netvope
    We usually blacklist IP addresses with iptables. But in Amazon EC2, if a connection goes through the Elastic Load Balancer, the remote address is replaced by the load balancer's address, rendering iptables useless. In the case of HTTP, apparently the only way to find out the real remote address is to look at the HTTP header HTTP_X_FORWARDED_FOR. To me, blocking IPs at the web application level is not an effective approach. What is the best practice for defending against a DoS attack in this scenario? In this article, someone suggested replacing the Elastic Load Balancer with HAProxy. However, there are certain disadvantages to doing this, and I'm trying to see if there are any better alternatives.


  • ASP.NET Request.ServerVariables["SERVER_PORT_SECURE"] and proxy SSL by load balancer

    - by frankadelic
    We have some legacy ASP.NET code that detects whether a request is secure and redirects to the https version of the page if required. This code uses Request.ServerVariables["SERVER_PORT_SECURE"] to detect whether SSL is needed. Our operations team has suggested doing proxy SSL at the load balancer (F5 BIG-IP) instead of on the web servers (assume for the purposes of this question that this is a requirement). The consequence is that all requests appear as HTTP to the web server. My question: how can we let the web servers know that the incoming connection was secure before it hit the load balancer? Can we continue to use Request.ServerVariables["SERVER_PORT_SECURE"]? Do you know of a load balancer config that will send headers so that no application code changes are needed?
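
    The common pattern is exactly that: the balancer inserts a header on connections it terminated over SSL, and the application checks it alongside the server variable. X-Forwarded-Proto is a widely used convention, but the header name is whatever the F5 is configured to add, so treat this as a sketch:

        // Secure if IIS saw SSL directly *or* the load balancer flagged
        // the original connection as https.
        bool isSecure =
            Request.ServerVariables["SERVER_PORT_SECURE"] == "1" ||
            string.Equals(Request.Headers["X-Forwarded-Proto"], "https",
                          StringComparison.OrdinalIgnoreCase);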


  • jQuery load() help

    - by mtwallet
    Hi. I am creating a portfolio page for my personal site. I have a slider with approximately 20 anchors that link to projects I have worked on; each contains a client logo that, when clicked, should load some HTML content and fade that content into a container div on the same page. I have been advised to use the jQuery method load(), which seems straightforward. My question is: do I have to repeat the following code for each of the 20 anchors, since the URL is different for each one, or is there a more efficient way?

        $('a#project1').click(function() {
            $('#work').load('ajax/project1.html');
        });

    Also, would I have to use the unload() method first to ensure the div I am loading into is empty? Many thanks in advance.
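
    One handler can cover all 20 anchors if they share a class and carry their target URL in href (or rel). A sketch, where the "client-logo" class is made up; note also that .load() replaces the contents of the target div on every call, so no emptying step is needed:

        $('a.client-logo').click(function() {
            $('#work').hide().load(this.href, function() {
                $(this).fadeIn('slow'); // fade in once the fragment has arrived
            });
            return false; // keep the browser from following the link
        });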


  • How does AssemblyName.ReferenceMatchesDefinition work?

    - by Fabian Schmied
    Given the following code:

        var n1 = new AssemblyName ("TestDll, Version=1.0.0.0, Culture=Neutral, PublicKeyToken=b77a5c561934e089");
        var n2 = new AssemblyName ("TestDll, Version=2.0.0.2001, Culture=en-US, PublicKeyToken=ab7a5c561934e089");

        Console.WriteLine (AssemblyName.ReferenceMatchesDefinition (n1, n2));
        Console.WriteLine (AssemblyName.ReferenceMatchesDefinition (n2, n1));

    Why do both of these checks print "True"? I would have thought that AssemblyName.ReferenceMatchesDefinition should consider differences in the version, culture, and public key token attributes of an assembly name, shouldn't it? If not, what does ReferenceMatchesDefinition do that a comparison of the simple names doesn't?


  • Create CRM Organizations on a Load-Balanced Network

    - by user82613
    I'm trying to understand how to create a CRM Organization on a load-balanced network. I have three web servers (Web01, Web02, Web03), three application servers (App01, App02, App03), and a SQL Server (SQL01). I already have the load balancer set up, and there is already one organization, set up by someone else, on all web servers. This organization is Internet-facing. Now I want to create one more organization on the same set of web servers. Can anyone please help me understand how to set up the new organization behind the load balancer in this scenario?


  • "Cannot load ViewState" after dynamic control changed

    - by Emil D
    In my ASP.NET page I have to dynamically choose and load a custom control, depending on the selected value in a dropdownlist. However, I've encountered the following problem: when the parameters of the dynamically loaded control are changed, and then the selection in the dropdownlist is changed (thus forcing me to load a different dynamic control the next time the page reloads), I end up with a "Cannot load ViewState" exception. I assume this happens because ViewState is trying to restore the parameters of the old control and doesn't find it. So, is there any way to stop ViewState from attempting to restore the state of the non-existing control?
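
    One heavily hedged option, since the question is literally whether ViewState can be told to skip the control: exclude the dynamic control from ViewState altogether, at the cost of repopulating it on every request:

        // Sketch: load whatever control the dropdown currently calls for,
        // with a stable ID, and keep it out of ViewState so a swapped
        // control can't trip over stale state. pathForSelection is hypothetical.
        Control ctl = LoadControl(pathForSelection);
        ctl.ID = "dynamicCtl";
        ctl.EnableViewState = false;
        placeholder.Controls.Add(ctl);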


  • Load search results into a div (jQuery and Rails)

    - by odpogn
    In my Rails app I have a search bar where users can search for other users. Currently, when a user submits the search form, they're redirected to a "results" page. I want to load those results in a div on the same page. I was able to do this with my website's navigation links, but I'm pretty new to jQuery and Rails and can't figure this one out. Here is the jQuery for my navigation links:

        $(function() {
            $('#links a').live('click', function() {
                $('#pages').load(this.href).fadeIn('slow');
                return false;
            });
        });

    And my attempt to do the same with my search function:

        $(function() {
            $('#search').submit(function() {
                $('#pages').load(this.href).fadeIn('slow');
            });
        });

    Any help would be much appreciated, along with some useful jQuery tutorials for a newbie!
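
    Two things stand out in the second handler: on a form element, this.href is undefined (forms have an action, not an href), and nothing cancels the normal submit, so the browser navigates to the results page before the Ajax request matters. A sketch of the usual shape, assuming the form's action points at the search results action:

        $(function() {
            $('#search').submit(function() {
                // send the form fields along and inject the returned fragment
                $.get(this.action, $(this).serialize(), function(html) {
                    $('#pages').hide().html(html).fadeIn('slow');
                });
                return false; // stop the normal full-page submit
            });
        });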


  • AWS Load Balancer with a static IP address

    - by user965904
    I have a setup running on the Amazon cloud with a couple of EC2 instances behind a load balancer. It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in third-party APIs which only accept requests made from IPs that have been added to their whitelist. So basically, unless we can give these third parties a static IP or range of IPs that requests from the site will always come from, we would be unable to make any calls to them. Does anyone know how to achieve this? I know that Elastic IPs are not compatible with load balancers. If I were to look up the IP of the load balancer's DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200), would that IP be static? Any help/advice is greatly appreciated.


  • iPhone best practice: how to load multiple high-quality images

    - by bennythemink
    Hi guys and girls, I have about 20 high-quality images (~3840x5800 px) that I need to load in a simple gallery-type app. The user clicks a button and the next image is loaded into the UIImageView. I currently use [UIImage imageWithContentsOfFile:], which takes about 6 seconds to load each image in the simulator :( If I use [UIImage imageNamed:] it takes even longer to load, but it caches the images, which makes it quicker if the user wishes to see the same images again. However, all that caching may cause memory problems later and crash my app. What is the best practice for loading these? I'm experimenting with reducing the image file size as much as possible, but I really need the images to stay high quality for the purpose of the app (zoomable, etc.). Thanks for any advice.


  • Scheduling a Visual Studio load test using PowerShell is giving me a BSOD

    - by user952342
    I have a Visual Studio load test which I want to run every hour so that I can start to collect some data. To do this, I thought it would be best to make a little PowerShell script and put a command like this inside:

        Invoke-Expression -command "& '$env:VS100COMNTOOLS..\IDE\mstest.exe' /testcontainer:"C:\Users\benb\Documents\Visual Studio 2010\Projects\BBPerformanceTest\bin\Debug\HomePageOnly.loadtest""

    That command works fine, but sometimes when it's run I get a blue screen of death. However, when I run my load test through the Visual Studio GUI, I never get a BSOD. Two questions: is it possible to avoid this BSOD? Is there another way I can schedule my load test? Thanks


  • Data Loading Issues? Try the new Demantra Data Load Guided Resolution

    - by user702295
    Hello! Do you have data loading issues? Perhaps you are trying the new partial schema export tool. New to Demantra: the Data Load Guided Resolution, document 1461899.1. This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance problems, ORA and ODPM errors, to collections-related issues that have no hard error number. The guide covers the diagnosis of data being imported into Demantra and of data being exported from Demantra. Contact me with any questions or suggestions. Thank you!


  • ILMerge - Unresolved assembly reference not allowed: System.Core

    - by Steve Michelotti
    ILMerge is a utility which allows you to merge multiple .NET assemblies into a single binary, for more convenient distribution. Recently we ran into problems when attempting to use ILMerge on a .NET 4 project. We received the error message:

        An exception occurred during merging:
        Unresolved assembly reference not allowed: System.Core.
            at System.Compiler.Ir2md.GetAssemblyRefIndex(AssemblyNode assembly)
            at System.Compiler.Ir2md.GetTypeRefIndex(TypeNode type)
            at System.Compiler.Ir2md.VisitReferencedType(TypeNode type)
            at System.Compiler.Ir2md.GetMemberRefIndex(Member m)
            at System.Compiler.Ir2md.PopulateCustomAttributeTable()
            at System.Compiler.Ir2md.SetupMetadataWriter(String debugSymbolsLocation)
            at System.Compiler.Ir2md.WritePE(Module module, String debugSymbolsLocation, BinaryWriter writer)
            at System.Compiler.Writer.WritePE(String location, Boolean writeDebugSymbols, Module module, Boolean delaySign, String keyFileName, String keyName)
            at System.Compiler.Writer.WritePE(CompilerParameters compilerParameters, Module module)
            at ILMerging.ILMerge.Merge()
            at ILMerging.ILMerge.Main(String[] args)

    It turns out that this issue is caused by ILMerge.exe not being able to find the .NET 4 framework by default. The answer was ultimately found here. You either have to use the /lib option to point to your .NET 4 framework directory (e.g., "C:\Windows\Microsoft.NET\Framework\v4.0.30319" or "C:\Windows\Microsoft.NET\Framework64\v4.0.30319") or just use an ILMerge.exe.config file that looks like this:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
          </startup>
        </configuration>

    This successfully resolved my issue.
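
    For the /lib route, the invocation would look something like this (the assembly names are placeholders):

        ilmerge.exe /lib:"C:\Windows\Microsoft.NET\Framework64\v4.0.30319" ^
            /out:Merged.dll Primary.dll Dependency.dll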


  • Incrementing Assembly Version in TFS Builds and its Effect on Other Build Definitions

    - by ssmantha
    A very common scenario in TFS builds is to increment the version number of the assemblies. There are quite a few approaches, of which I would like to share two links:

    Ewald Hofman's approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377

    Richard Banks' approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html

    Both approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems for build definitions meant for continuous integration or gated check-ins. You can suppress continuous integration builds while checking in the assembly info file by putting the comment "***NO_CI***" in the check-in comment, as specified by Ewald in his blog. However, if you have gated check-in in place, this can be difficult to suppress. I myself tried to suppress the build trigger during the check-in process, but things didn't turn out well. That's where Richard's solution comes in handy. Both solutions have their pros and cons, which I believe can only be experienced over a period of time. With Richard's solution, I believe we lose the history of the assembly version info file, and when you get the latest version of the solution the information is lost. Notice that suppressing continuous integration (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; I haven't found anything comparable to suppress gated check-ins. Suggestions or findings are most welcome.


  • Dynamic LINQ in an Assembly Near By

    - by Ricardo Peres
    You may recall my post on Dynamic LINQ. I said then that you had to download Microsoft's samples and compile the DynamicQuery project (or just grab my copy), but there's another way. It turns out Microsoft included the Dynamic LINQ classes in the System.Web.Extensions assembly, not the one from ASP.NET 2.0, but the one that was included with ASP.NET 3.5! The only problem is that all the types are private. Here's how to use it:

        Assembly asm = typeof(UpdatePanel).Assembly;
        Type dynamicExpressionType = asm.GetType("System.Web.Query.Dynamic.DynamicExpression");
        MethodInfo parseLambdaMethod = dynamicExpressionType
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => (m.Name == "ParseLambda") && (m.GetParameters().Length == 2))
            .Single()
            .MakeGenericMethod(typeof(DateTime), typeof(Boolean));
        Func<DateTime, Boolean> filterExpression = (parseLambdaMethod
            .Invoke(null, new Object[] { "Year == 2010", new Object[0] })
            as Expression<Func<DateTime, Boolean>>).Compile();
        List<DateTime> list = new List<DateTime>
        {
            new DateTime(2010, 1, 1),
            new DateTime(1999, 1, 12),
            new DateTime(1900, 10, 10),
            new DateTime(1900, 2, 20),
            new DateTime(2012, 5, 5),
            new DateTime(2012, 1, 20)
        };
        IEnumerable<DateTime> filteredDates = list.Where(filterExpression);


  • Oracle 64-bit assembly throws BadImageFormatException when running unit tests

    - by pjohnson
    We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message:

        Test method MyProject.Test.SomeTest threw exception:
        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

    This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

    I'm not entirely clear on why this was even a problem. Isn't the point of an MSIL assembly that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.

    If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.


  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load, it will be forced to put the singleton 900GB file into one location file and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario, the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12,800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128,000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.


  • Load text from a specific external DIV using AJAX?

    - by Josh
    I'm trying to load up the estimated world population from http://www.census.gov/ipc/www/popclockworld.html using AJAX, and so far I'm failing miserably. There's a DIV with the ID "worldnumber" on that page which contains the estimated population, so that's the only text I want to grab from the page. Here's what I've tried:

        $(document).ready(function(){
            $("#population").load('http://www.census.gov/ipc/www/popclockworld.html #worldnumber *');
        });
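
    The miserable failure is almost certainly the browser's same-origin policy: the XHR behind .load() can't fetch census.gov from a page on another domain. The usual workaround is a small same-origin proxy that fetches the page server-side; proxy.php below is hypothetical:

        $(document).ready(function(){
            var page = 'http://www.census.gov/ipc/www/popclockworld.html';
            $('#population').load('proxy.php?url=' + encodeURIComponent(page) + ' #worldnumber');
        });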

