Search Results

Search found 19483 results on 780 pages for 'load average'.


  • Data Loading Issues? Try the new Demantra Data Load Guided Resolution

    - by user702295
    Hello! Do you have data loading issues? Perhaps you are trying the new partial schema export tool. New to Demantra is the Data Load Guided Resolution, document 1461899.1. This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance, ORA and ODPM errors to collections-related issues that have no hard error number. The guide covers the diagnosis of data being imported into Demantra and data being exported from Demantra. Contact me with any questions or suggestions. Thank you!

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or you can use the same SQL access to load a table in Oracle: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints: ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses direct-path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
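
    Pulled together, the load pattern described above looks like the following sketch. The table names (my_oracle_table, my_hdfs_external_table) are the placeholders already used in the text, and the DOP of 8 is just the running example:

        -- Sketch of the load pattern described above; DOP of 8 is the running example.
        ALTER TABLE my_oracle_table NOLOGGING;
        ALTER TABLE my_oracle_table PARALLEL;

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- append: direct-path load above the high watermark (with NOLOGGING, no redo overhead);
        -- pq_distribute: couple the INSERT and SELECT operators for an efficient data flow.
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

        COMMIT;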

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control.

    Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system's maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and the Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15 TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
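
    As a rough, hedged sketch of where those ceiling numbers come from, the data-dictionary query below shows the raw ingredients of the DOP arithmetic. It assumes you can read V$PARAMETER; whatever caps the DBA has configured (resource manager, profiles) still take precedence over anything it reports:

        -- CPU count and parallel server limits that the DOP arithmetic above is based on.
        -- Assumes SELECT access to V$PARAMETER; DBA-imposed limits still apply on top.
        SELECT name, value
          FROM v$parameter
         WHERE name IN ('cpu_count', 'parallel_max_servers', 'parallel_degree_limit');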

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.

    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on; a sketch of this idea follows below.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that are approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.
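
    The greedy balancing described above (biggest remaining file goes to the currently lightest location file) is easy to see in a few lines. This is an illustrative sketch, not the tool's actual code:

        # Illustrative sketch of the greedy balancing described above, not OSCH's actual code.
        # hdfs_files maps file name -> size in bytes; n_location_files would be the small
        # multiple of your DOP chosen under Rule 3.
        def balance(hdfs_files, n_location_files):
            location_files = [{"files": [], "total": 0} for _ in range(n_location_files)]
            # Largest files first, each assigned to the currently lightest location file.
            for name, size in sorted(hdfs_files.items(), key=lambda kv: kv[1], reverse=True):
                target = min(location_files, key=lambda lf: lf["total"])
                target["files"].append(name)
                target["total"] += size
            return location_files

        # With 160 files of ~10 GB each and 8 location files, every location file ends up
        # holding about 20 files and ~200 GB, matching the example in the text.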

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Load text from specific external DIV using AJAX?

    - by Josh
    I'm trying to load up the estimated world population from http://www.census.gov/ipc/www/popclockworld.html using AJAX, and so far, failing miserably. There's a DIV with the ID "worldnumber" on that page which contains the estimated population, so that's the only text I want to grab from the page. Here's what I've tried: $(document).ready(function(){ $("#population").load('http://www.census.gov/ipc/www/popclockworld.html #worldnumber *'); });
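
    The most likely culprit is the browser's same-origin policy: jQuery's .load() can't pull a page from census.gov into a script served from another domain. The usual workaround is a small same-origin proxy. A sketch, assuming PHP with allow_url_fopen enabled and a hypothetical file name popclock_proxy.php:

        <?php
        // popclock_proxy.php (hypothetical): fetch the Census page server-side so the
        // browser only ever talks to our own domain. Assumes allow_url_fopen is on.
        echo file_get_contents('http://www.census.gov/ipc/www/popclockworld.html');

        // Client-side, the original call then becomes:
        //   $("#population").load('popclock_proxy.php #worldnumber');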

    Read the article

  • Scroll to anchor after jquery.load

    - by Vitaly
    There's a placeholder on the page that is loaded asynchronously using jQuery's load method. The page URL might have an anchor, and I want to scroll to that anchor after the content is loaded. What is the best way to do that? The problem is similar to this: http://forum.jquery.com/topic/goto-anchor-after-load But I don't like the solution there. Maybe someone has better ideas on this?
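
    One way to do it is to scroll inside .load()'s completion callback, once the anchor target actually exists in the DOM. A sketch, where '#placeholder' and '/content.html' are stand-ins for the real element and URL:

        // Sketch: scroll to the URL's anchor once the asynchronously loaded content exists.
        $(function () {
            $('#placeholder').load('/content.html', function () {
                var hash = window.location.hash;          // e.g. "#section-3"
                if (hash && $(hash).length) {
                    $('html, body').animate({ scrollTop: $(hash).offset().top }, 400);
                }
            });
        });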

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this, I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well). Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering spreading user data to one, actual content to another, and finally core site content (template masters etc.) to a third. My reasoning behind this is that sending queries to different databases will ease up the load on them, as one database becomes three load sources. Would this still be effective if they were all on the same server? Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates: a static var and a dynamic var. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page that is re-cached each time a comment is added/edited/deleted? Finally: does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! are strong precedents - but are there any experiences I should watch out for? Thanks, Ross
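
    On the comments question, a common middle ground is to cache only the rendered comments fragment and invalidate it on writes, rather than re-caching whole pages. A sketch using a plain file cache; render_comments_from_db() and the cache path are hypothetical stand-ins:

        <?php
        // Sketch: cache the rendered comments fragment per article and rebuild it only
        // when a comment is added, edited or deleted.
        function get_comments_html($articleId) {
            $cacheFile = "/var/cache/mysite/comments_{$articleId}.html";
            if (is_file($cacheFile)) {
                return file_get_contents($cacheFile);          // cache hit: no DB work
            }
            $html = render_comments_from_db($articleId);       // cache miss: one DB query
            file_put_contents($cacheFile, $html, LOCK_EX);
            return $html;
        }

        function invalidate_comments($articleId) {
            @unlink("/var/cache/mysite/comments_{$articleId}.html");  // next read rebuilds it
        }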

    Read the article

  • Load html document in javascript from text

    - by QAH
    Hello everyone! Is it possible to load an HTML document into a DOM JavaScript object so that you can read the elements in the document? For example, if I have a file on the server, Test.html, can the page Hello.html call JavaScript code to load Test.html into a DOM object? Please let me know. Thanks
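
    Yes, as long as Test.html sits on the same origin as Hello.html you can fetch it with XMLHttpRequest and parse the text into a detached document. A sketch; note that older browsers may not support DOMParser with 'text/html':

        // Sketch: fetch Test.html from the same server and parse it into a DOM document.
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'Test.html', true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
                var title = doc.getElementsByTagName('title')[0];   // read any element from doc
                console.log(title ? title.textContent : 'no <title> found');
            }
        };
        xhr.send();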

    Read the article

  • are deleted entries counted in the load factor of a hash table using open addressing

    - by Dr. Monkey
    When calculating the load factor of a hash table with an open-addressing array implementation, I am using numberOfKeysInArray/sizeOfArray. However, it occurred to me that since deleted entries must be marked as such (to distinguish them from empty spaces), it might make sense to include these in the number of keys. My thinking is that as far as estimating the average number of probes to find an entry, deleted entries should count towards the load factor, but as far as inserting a new key they should not. Which is the proper calculation: including deleted keys or not?
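
    One way to reconcile the two views is to keep both counts: live keys for the insert-time figure, live keys plus tombstones for the probe-length estimate (and for deciding when to rehash). A sketch of that idea, not tied to any particular textbook's definitions:

        # Sketch: an open-addressing table that tracks live keys and tombstones separately.
        class OpenAddressingTable:
            def __init__(self, size=16):
                self.slots = [None] * size      # None = never used; a sentinel marks tombstones
                self.live = 0                   # keys currently present
                self.deleted = 0                # tombstone markers
                # insert()/remove() (not shown) would maintain these two counters

            def insert_load_factor(self):
                # governs whether a new key will find an empty or reusable slot
                return self.live / len(self.slots)

            def probe_load_factor(self):
                # tombstones still lengthen probe sequences, so count them here
                return (self.live + self.deleted) / len(self.slots)

            def should_rehash(self, threshold=0.7):
                # rehashing also clears tombstones, so use the pessimistic figure
                return self.probe_load_factor() > threshold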

    Read the article

  • Determining the load on a particular core in a multicore processor

    - by S.Man
    In a multicore processor, there are ways to tell a particular application to run on a single core, or on 2 or 3 cores. In such scenarios, how will a scheduler be able to determine the load (number of threads) on a particular core in a multicore processor and accordingly distribute (balance) the load (allocate threads) across the various cores?

    Read the article

  • DataServiceCollection load from external source

    - by spdro
    I'm trying to load external data into my entity. DataServiceCollection<products> products = new DataServiceCollection<products>(); var adapter = new OleDbDataAdapter("SELECT * FROM [products$]", "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=d:\\products.xlsx;Extended Properties=\"Excel 12.0;HDR=YES;\""); DataSet ds = new DataSet(); adapter.Fill(ds, "excell"); products.Load(????); Can anyone help?
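
    DataServiceCollection<T>.Load takes an IEnumerable<T>, so the DataTable rows have to be projected into product entities first. A sketch; the column names and entity properties below are guesses, and it assumes a reference to System.Data.DataSetExtensions for AsEnumerable():

        // Sketch: project the DataTable filled from Excel into product entities, then Load them.
        // Column names ("Name", "Price") and entity properties are assumptions.
        var items = ds.Tables["excell"].AsEnumerable()
            .Select(row => new products
            {
                Name  = row.Field<string>("Name"),
                Price = row.Field<decimal>("Price")
            });

        products.Load(items);   // DataServiceCollection<T>.Load(IEnumerable<T>)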

    Read the article

  • jQuery pass url variable into load function

    - by Adi
    Hi, I'm trying to use load to reload a portion of the current page (long story why) but am having an issue with the variable syntax. Here is the snippet of code: var pathname = window.location.pathname; $('#menu').load("/cms.php #menu"); I would like to replace /cms.php with the variable, but am having issues with the correct syntax. Any help/advice would be much appreciated. A.
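
    If the intent is simply to reload the menu from whatever page you are currently on, concatenating the variable with the fragment selector should be all that's needed:

        // Build the URL from the variable instead of hard-coding /cms.php
        var pathname = window.location.pathname;
        $('#menu').load(pathname + ' #menu > *');   // '> *' avoids nesting a second #menu inside the old one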

    Read the article

  • Opinion on LastPass's security for the Average Joe [closed]

    - by Rook
    This is borderline objective/subjective, but I'm posting it here since I'm more interested in objective facts, without going into too much technical detail, than I am in user reviews of LastPass. I've always used offline ways of storing passwords and sensitive data, but lately I keep hearing good things about LastPass. Indeed, it is more practical having it always accessible from every computer you're using, without syncing and related problems, but the security aspect still troubles me. How (in a nutshell, for dummies) does LastPass keep your data secure, and can their employees see your data? What is your opinion of using it to store more sensitive data than usual (bank PIN codes, some financial/business-related material and so on - you know, the things that would really hurt if lost or phished)? What are your opinions of it, and do you trust it for such? Any bad experiences? If someone is, for example, sniffing your wifi network, would such data be easier than usual to sniff out?

    Read the article

  • Excel how to get an average for column for rows that meet multiple criteria

    - by Jess
    I would like to know the average number of days between open and close dates for items with a close date in a particular month. From the example below, in Jan 2013 items 2, 5 and 6 were closed (closed can be RESOLVED or CANCELLED status), and each was open for 26, 9 and 6 days respectively. So, of the jobs that have a close date in Jan 2013 (between 01/01/2013 and 13/02/13), the average open time (between open and close date) is 13.67 days to 2 dp. I have tried a few ways to get this to work, and I think the issue I am having is with the AVERAGE function. First time using a forum, so apologies if my question is unclear. I was unable to post an image, so the data is comma-separated below:
    Item_ID,Open_Date,Status,Close_Date
    1,1/06/2012,RESOLVED,16/07/2012
    2,20/12/2012,RESOLVED,16/01/2013
    3,2/01/2013,IN PROGRESS,
    4,3/01/2013,CANCELLED,7/05/2013
    5,3/01/2013,RESOLVED,12/01/2013
    6,4/01/2013,RESOLVED,10/01/2013
    7,1/02/2013,RESOLVED,15/02/2013
    8,2/02/2013,OPEN,
    9,7/02/2013,CANCELLED,26/02/2013
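
    One hedged way to do this, assuming the layout above lands with Item_ID in column A, Open_Date in B, Status in C and Close_Date in D (rows 2-10), is a helper column of open days plus AVERAGEIFS. Exact figures will depend on whether you count the open day itself:

        Helper column E (days open), entered in E2 and copied down:
        =IF(D2="", "", D2-B2)

        Average days open for items closed in January 2013:
        =AVERAGEIFS(E2:E10, D2:D10, ">="&DATE(2013,1,1), D2:D10, "<="&DATE(2013,1,31))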

    Read the article

  • The spork/platypus average: shameless self promotion

    - by Roger Hart
    This is the video of the presentation I gave at UA Europe and TCUK this year. The actual sub-title was "Content strategy at Red Gate Software", but this heading feels more honest. For anybody who missed it, or is just vaguely interested, here's a link to me talking about de-suckifying the web. You can find the slideshare deck here, too.* Watching it back is more than a little embarrassing, and makes me really, really want to do a follow-up, so I can do three things: explain the rest of the big web project, now we've done it; give some data on the outcome of the content review; and make a grovelling apology to our marketing guys, who I've been unfairly mean to in a childish effort to look cool. There are a whole bunch of other TCUK presentations online, too. You can find them all here: http://tiny.cc/tcuk10_videos I'd particularly recommend Chris Atherton's "Everything you always wanted to know about psychology and technical communication" - it's full of cool stuff. You should probably also watch David Black's opening keynote, which managed to make my hour of precocious grandstanding look measured, meek, and helpful. He actually makes some interesting points, but you'd basically have to ship Richard Dawkins off to Utah if you wanted to go further out of your way to aggravate your audience. It does give an engaging account of running a large tech comms project, and raises some questions about how we propose to understand a world where increasing amounts of our stuff gets done by increasingly many, increasingly complicated tissues of APIs. Well, sort of. That's what all the notes I made were about, anyway. *Slideshare ate my fonts. Just so we're clear on this: I'd never use badly-kerned Arial in a presentation. Don't worry.

    Read the article

  • Stock Analysis and Moving Average with PowerPivot

    - by Marco Russo (SQLBI)
    One week ago Alberto Ferrari wrote a post about how to do working-days calculations in PowerPivot. You might think this is necessary only for the accounting department or something like that... but in reality the same techniques are really useful for calculations you will want when doing stock analysis with PowerPivot and Excel! As you might know, in PowerPivot it is important to have a Dates table containing all the days, without exceptions. But when you manage stock...(read more)
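
    The excerpt is cut off, but the classic shape of a moving average over a complete Dates table in PowerPivot looks roughly like this DAX sketch; the table and column names are placeholders, and the full post should be treated as the authority:

        Moving Average 30 Days :=
        CALCULATE (
            AVERAGE ( Quotes[ClosePrice] ),
            DATESINPERIOD ( Dates[Date], LASTDATE ( Dates[Date] ), -30, DAY )
        )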

    Read the article

  • average screen ratio

    - by sam
    I'm building a portfolio website that uses a full-screen background image slideshow, cropped to fit using a js plugin. To keep cropping to a minimum, what's the best ratio to make the images? I.e. I know 13" MacBooks are around 13:7 (when taking into account about 100px for the browser bar), but does that scale up on 15", 24", 17" displays? I know there are charts showing the most common dimensions, but they just show a range of sizes categorized by groups rather than actual dimensions.

    Read the article

  • Number of malicious attacks defended/done on the average user daily [closed]

    - by DalexL
    As a web host, it is very easy to notice the large number of exploit/abuse attempts made against my servers. Out of curiosity, how often are these attempts made against the average user? I'm assuming almost all of them are prevented just by the simple security protocols in place in their browsers, local network, etc. How many attempts, on average, are committed against a single user daily, through any method (email, internet, downloads, etc.)? If known, what percentage of these are blocked by the average user's security? I tried googling, but I was having a hard time getting the right search terms together.

    Read the article

  • Strategy to store/average logs of pings

    - by José Tomás Tocino
    I'm developing a site to monitor web services. The most basic type of check is sending a ping and storing the response time in a CheckLog object. By default, PingCheck objects are triggered every minute, so in one hour you get 60 CheckLogs and in one day you get 1440 CheckLogs. That's a lot of them, and I don't need to store that level of detail, so I've set up a collapsing mechanism that periodically takes the uncollapsed CheckLogs older than 24h and collapses (averages) them in intervals of 30 minutes. So, if you have 360 CheckLogs that were saved from 0:00 to 6:00, after collapsing you retain just 12 of them. The problem is this: after averaging the response times, the graph changes drastically. What can I do to improve this? I guess one option could be narrowing the interval duration to 15 min. I've seen the graphs on the GitHub status page and they do not seem to suffer from this problem. I'd appreciate any kind of information you could give me about this area.
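
    One common mitigation, whatever the bucket size, is to keep the minimum and maximum (or a high percentile) alongside the average when collapsing, so spikes survive the downsampling and the graph keeps its shape. A sketch of that idea in plain Python, not tied to any particular ORM:

        # Sketch: collapse raw (timestamp, response_time) samples into fixed buckets while
        # keeping avg, min and max, so the collapsed graph still shows spikes.
        from collections import defaultdict

        def collapse(samples, bucket_seconds=30 * 60):
            buckets = defaultdict(list)
            for ts, rt in samples:                      # ts = unix timestamp, rt = response time
                buckets[ts - (ts % bucket_seconds)].append(rt)
            return [
                {
                    "start": start,
                    "avg": sum(rts) / len(rts),
                    "min": min(rts),
                    "max": max(rts),
                    "count": len(rts),
                }
                for start, rts in sorted(buckets.items())
            ]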

    Read the article

  • .NET load assembly error metabase config

    - by peter
    Hi, yesterday I found out that the mailroot directory had moved to a different location on the C drive. I don't know how this happened, as no configs were changed; I'm using IIS7 with the IIS6 SMTP service on Windows 2008 R2 Web edition. I wanted to restore it to the default c:\inetpub\mailroot location, so I opened system32/inetsrv/MetaBase.xml, and there was the problem: BadMailDirectory, DropDirectory etc. were all pointing to the wrong location... I changed them back to the default c:\inetpub\mailroot. Later I discovered an ASP.NET application was throwing this error: An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I figured it had to do with my changes to the mailroot locations, so I gave ASP.NET access to the mailroot directory and it started working.... My question is: how is this possible? Why does ASP.NET require access to the mailroot directory to load assemblies? Thanks

    Read the article

  • Flash AS3 load file xml

    - by Elias
    Hello, I'm just trying to load an XML file which can be anywhere on the HDD. This is what I have done to browse for it, but later, when I try to load the file, it will only look in the same path as the SWF file. Here is the code: package { import flash.display.Sprite; import flash.events.*; import flash.net.*; public class cargadorXML extends Sprite { public var cuadro:Sprite = new Sprite(); public var file:FileReference; public var req:URLRequest; public var xml:XML; public var xmlLoader:URLLoader = new URLLoader(); public function cargadorXML() { cuadro.graphics.beginFill(0xFF0000); cuadro.graphics.drawRoundRect(0,0,100,100,10); cuadro.graphics.endFill(); cuadro.addEventListener(MouseEvent.CLICK,browser); addChild(cuadro); } public function browser(e:Event) { file = new FileReference(); file.addEventListener(Event.SELECT,bien); file.browse(); } public function bien(e:Event) { xmlLoader.addEventListener(Event.COMPLETE, loadXML); req=new URLRequest(file.name); xmlLoader.load(req); } public function loadXML(e:Event) { xml=new XML(e.target.data); //xml.name=file.name; trace(xml); } } } When I open an XML file that isn't in the same directory as the SWF, it gives me a file-not-found error. Is there anything I can do? For example, for MP3 there is a special class for loading the file, see http://www.flexiblefactory.co.uk/flexible/?p=46 Thanks
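
    The file-not-found error is expected: FileReference only exposes the file's name, not its path, so URLRequest(file.name) resolves relative to the SWF. From Flash Player 10 onward the usual fix is to let FileReference read the bytes itself instead of going through URLLoader; a sketch of the changed handlers (assuming FP10+):

        // Sketch (Flash Player 10+): let FileReference load the bytes, no path needed.
        public function bien(e:Event):void {
            file.addEventListener(Event.COMPLETE, loadXML);
            file.load();                                   // reads the user-selected file directly
        }

        public function loadXML(e:Event):void {
            var raw:String = file.data.readUTFBytes(file.data.length);  // file.data is a ByteArray
            xml = new XML(raw);
            trace(xml);
        }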

    Read the article
