Search Results

Search found 1150 results on 46 pages for 'partitioning'.


  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I keep running into a problem: every time I run the installation program on the LiveCD, I get the same (or a very similar) error: "Failed to create Swap space: The creation of swap space in partition #3 of SCSI5 (0,0,0) (sda) failed." So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then I simply put in the LiveCD and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it works for a second or so, then this pops up), forcing me back to the screen that lets me choose whether to automatically or manually partition the HDD. Well, after failing to install the software that way, I did a little research and learned enough about partitioning Linux to use the 'Manual partitioning' option. I partitioned the HDD (it's a 1TB drive) as follows:
      /home - (the rest) - ext2
      / - 20GB - ext2
      /boot - 100MB - ext2
      /swap - 8GB
      /EFIboot - 40MB
    The only difference when I tried this method was that I got THIS message: "Failed to create Swap space: The creation of swap space in partition #2 of SCSI5 (0,0,0) (sda) failed." Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions as ext2, etc.). The only thing I haven't tried is using the terminal to manually partition the HDD... and I actually DID try to do this, but I wasn't able to get past the password prompt for 'su', so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh

    Read the article

  • SQL Saturday Atlanta: Intro To Performance Tuning

    - by Mike Femenella
    I'm looking forward to speaking in Atlanta on the 24th; it will be fun to get back down that way to visit with some friends and present two topics that I really enjoy. First, an introduction to performance tuning. Performance tuning is a very wide and deep topic, and we're staying close to the surface. I direct this class at newbie SQL users who have less than 2 years of experience. It's all the things I wish someone had told me in my first 2 years about what to look for when the database was slow... or allegedly slow, I should say. We'll cover using Profiler to find slow-performing queries and how to save the data off to a table, as well as a tour of other features; the difference between clustered, non-clustered and covering indexes; how to look at and understand an execution plan (at a high level); and finally the difference between a temp table and a table variable and what the implications are of using either one in your code. That pretty much takes up a full hour. Second presentation: Loading Data in Real Time. It's really a presentation about partitioning, but with a twist we used at work recently to solve a need to load some data quickly and put it into production with minimal downtime. We'll cover partition functions, schemes, $PARTITION, MERGE and sys.partitions, and show some examples of building a set of partitioned tables and using the SWITCH statement to move data from one table to another. Finally we'll cover the differences in partitioning between 2005 and 2008. Hope to see you there! And if you read my blog, please introduce yourself!
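    For anyone new to the technique, here is a minimal T-SQL sketch of the switch-based load pattern the second session describes. The partition function, scheme and table names below are invented for illustration and are not taken from the presentation:

      CREATE PARTITION FUNCTION pf_SaleDate (date)
          AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

      CREATE PARTITION SCHEME ps_SaleDate
          AS PARTITION pf_SaleDate ALL TO ([PRIMARY]);

      CREATE TABLE dbo.FactSales
          (SaleDate date NOT NULL, Amount money NOT NULL)
          ON ps_SaleDate (SaleDate);

      -- Staging table: identical structure, same filegroup, plus a trusted check
      -- constraint proving every row belongs to the target partition.
      CREATE TABLE dbo.FactSales_Stage
          (SaleDate date NOT NULL, Amount money NOT NULL,
           CONSTRAINT ck_stage_feb CHECK (SaleDate >= '2010-02-01' AND SaleDate < '2010-03-01'))
          ON [PRIMARY];

      -- Bulk load dbo.FactSales_Stage here, then find the partition it maps to...
      SELECT $PARTITION.pf_SaleDate('2010-02-15') AS target_partition;  -- returns 3 for this function

      -- ...and publish the load as a metadata-only operation (the target partition must be empty).
      ALTER TABLE dbo.FactSales_Stage SWITCH TO dbo.FactSales PARTITION 3;

    Because SWITCH only updates metadata, the loaded rows appear in the partitioned table almost instantly, which is what makes the "load data in real time with minimal downtime" approach work.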

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have done my partitioning as follows:
      /dev/sda1 - 32 MB - bios_grub
      /dev/sda2 - 50 GB - RAID device
      /dev/sda3 - 8 GB - RAID device
      /dev/sda4 - remaining space - RAID device
      /dev/sdb1 - 32 MB - bios_grub
      /dev/sdb2 - 50 GB - RAID device
      /dev/sdb3 - 8 GB - RAID device
      /dev/sdb4 - remaining space - RAID device
    After this I have set up the RAID devices:
      /dev/md0 (/dev/sda2 + /dev/sdb2) for /, ext4
      /dev/md1 (/dev/sda3 + /dev/sdb3) for swap
      /dev/md2 (/dev/sda4 + /dev/sdb4) for /home, ext4
    The installation finishes and shows that it is installing GRUB to /dev/sda and /dev/sdb, but once the system reboots it falls into grub rescue mode. On doing ls I cannot see the md devices, only hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is grub2 not detecting the RAID devices? UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID, the partitions and everything and did it from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.

    Read the article

  • Installing 10.04 server on HP xw9400 Workstation with RAID 5

    - by Dave Long
    I have a workstation that was given to me that is a friggen powerhouse, so I figured I would set it up as my development and demo server. This is my first experience installing Ubuntu onto a RAID array and so far it has not been a fun one. I have been following the Advanced Installation guide for installing Ubuntu 10.04 server, and it says that there will be an option on the Partition Disks screen to manually create the partitions, but the only options I have are:
      Configure iSCSI volumes
      Undo changes to partitions
      Finish partitioning and write changes to disk
    Just before I got to that screen I got a message that said: "One or more drives containing Serial ATA RAID configurations have been found. Do you wish to activate these RAID devices?" It doesn't matter whether I answer yes or no to that, I still get the same Partition Disks screen. When I try to select Finish partitioning and write changes to disk I just get the No root file system error. Has anyone else experienced this, and how do I get past it? Can I not run Ubuntu on this machine?

    Read the article

  • How to setup VM in KVM? Qcow or LVM etc.

    - by JohnAdams
    Finally, after quite a bit of this vs. that, I have chosen to virtualize a couple of my servers with KVM. I did a test setup as well, but I have a few questions about setting up VMs in KVM. Would appreciate pointers.
      1. What is the best storage to use - qcow2 or LVM? I like the fact that I can copy the VM file easily with qcow2, but what about LVM: how do I take a backup, or make a copy on a development server to play with? I know I can clone an LVM volume, but how do I bring it to my development server?
      2. How do I set up the guest partitioning? For example, when setting up Ubuntu inside Ubuntu, do I choose LVM for that VM or regular fdisk partitioning? Can I then increase the partition size later, if I need a bigger disk?

    Read the article

  • Sprite batching in OpenGL

    - by Roy T.
    I've got a Java-based game with an OpenGL rendering front-end that is drawing a large number of sprites every frame (during testing it peaked at 700). Right now this game is completely unoptimized. There is no spatial partitioning (so a sprite is drawn even if it isn't on screen) and every sprite is drawn separately like this:
      graphics.glPushMatrix();
      {
          graphics.glTranslated(x, y, 0.0);
          graphics.glRotated(degrees, 0, 0, 1);
          graphics.glBegin(GL2.GL_QUADS);
          graphics.glTexCoord2f(1.0f, 0.0f);
          graphics.glVertex2d(half_size, half_size); // upper right
          // same for upper left, lower left, lower right
          graphics.glEnd();
      }
      graphics.glPopMatrix();
    Currently the game is running at +-25 FPS and is CPU bound. I would like to improve performance by adding spatial partitioning (which I know how to do) and sprite batching. Not drawing sprites that aren't on screen will help a lot, but since players can zoom out it won't help enough, hence the need for batching. However, sprite batching in OpenGL is a bit of a mystery to me. I usually work with XNA, where a few classes to do this are built in, but in OpenGL I don't know what to do. As for further optimization, the game I'm working on has a few interesting characteristics: a lot of sprites share the same texture and all the sprites are square. Maybe these characteristics will help determine an efficient batching technique?
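    For illustration, one common way to batch in this situation, sketched under the assumption of the same JOGL GL2 API used in the snippet above (the SpriteBatch class and its method names are invented): pre-transform each quad on the CPU, append it to a client-side vertex array, and flush every sprite that shares a texture with a single glDrawArrays call instead of one glBegin/glEnd pair per sprite.

      import java.nio.FloatBuffer;
      import com.jogamp.common.nio.Buffers;
      import com.jogamp.opengl.GL2;   // older JOGL versions use javax.media.opengl.GL2

      class SpriteBatch {
          private static final int MAX_SPRITES = 1024;          // caller flushes before this fills up
          private final FloatBuffer verts = Buffers.newDirectFloatBuffer(MAX_SPRITES * 4 * 2);
          private final FloatBuffer texes = Buffers.newDirectFloatBuffer(MAX_SPRITES * 4 * 2);
          private int count = 0;

          // Append one rotated, translated quad; no GL calls happen here.
          void add(double x, double y, double degrees, double halfSize) {
              double r = Math.toRadians(degrees), c = Math.cos(r), s = Math.sin(r);
              double[][] corners = { {-halfSize, -halfSize}, { halfSize, -halfSize},
                                     { halfSize,  halfSize}, {-halfSize,  halfSize} };
              float[][] uv = { {0f, 1f}, {1f, 1f}, {1f, 0f}, {0f, 0f} };
              for (int i = 0; i < 4; i++) {
                  verts.put((float) (x + corners[i][0] * c - corners[i][1] * s));
                  verts.put((float) (y + corners[i][0] * s + corners[i][1] * c));
                  texes.put(uv[i][0]).put(uv[i][1]);
              }
              count++;
          }

          // One draw call for every sprite added since the last flush.
          void flush(GL2 gl, int textureId) {
              if (count == 0) return;
              verts.flip(); texes.flip();
              gl.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
              gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
              gl.glEnableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
              gl.glVertexPointer(2, GL2.GL_FLOAT, 0, verts);
              gl.glTexCoordPointer(2, GL2.GL_FLOAT, 0, texes);
              gl.glDrawArrays(GL2.GL_QUADS, 0, count * 4);
              gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
              gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
              verts.clear(); texes.clear();
              count = 0;
          }
      }

    Because many sprites share the same texture, sorting (or bucketing) them by texture before flushing keeps the number of draw calls close to the number of distinct textures rather than the number of sprites.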

    Read the article

  • Determining whether two fast moving objects should be submitted for a collision check

    - by dreta
    I have a basic 2D physics engine running. It's pretty much a particle engine: it just uses basic shapes like AABBs and circles, so no rotation is possible. I have CCD implemented that can give an accurate TOI for two fast-moving objects, and everything is working smoothly. My issue now is that I can't figure out how to determine whether two fast-moving objects should even be checked against each other in the first place. I'm using a quadtree for spatial partitioning, and for each fast-moving object I check it against objects in each cell that it passes. This works fine for determining collisions with static geometry, but it means that any other fast-moving object that could collide with it, but isn't in any of the cells that are checked, is never considered. The only solutions I can think of are to either make the cells large enough and cross my fingers that this is enough, or to implement some sort of brute-force algorithm. Is there a proper way of dealing with this? Maybe somebody has solved this issue in an efficient manner, or maybe there's a better way of partitioning space that accounts for this?
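    For illustration, a common broad-phase fix is to insert each fast mover into the quadtree under its swept AABB, i.e. the box covering where it is now and where it will be at the end of the step, so that any two movers whose paths could cross become a candidate pair for the existing TOI test. A minimal sketch (the class is hypothetical; the quadtree insert/query calls stand in for whatever the engine already has):

      class AABB {
          double minX, minY, maxX, maxY;

          AABB(double minX, double minY, double maxX, double maxY) {
              this.minX = minX; this.minY = minY; this.maxX = maxX; this.maxY = maxY;
          }

          // Smallest box containing both this box and 'other'.
          AABB union(AABB other) {
              return new AABB(Math.min(minX, other.minX), Math.min(minY, other.minY),
                              Math.max(maxX, other.maxX), Math.max(maxY, other.maxY));
          }

          // Box covering the object's extent now and after moving by (vx * dt, vy * dt).
          AABB swept(double vx, double vy, double dt) {
              return union(new AABB(minX + vx * dt, minY + vy * dt,
                                    maxX + vx * dt, maxY + vy * dt));
          }

          boolean overlaps(AABB other) {
              return minX <= other.maxX && maxX >= other.minX
                  && minY <= other.maxY && maxY >= other.minY;
          }
      }

    With swept boxes in the tree, a fast mover occupies every cell its path touches during the step, so the quadtree query also returns other fast movers on intersecting paths even though neither object currently sits in the other's cells; the exact TOI routine then decides whether they really hit.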

    Read the article

  • What is the best way to manage large 3d worlds (i.e minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at its large worlds, but at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because:
      A) It's written in Java, and as most of the actual spatial partitioning and other memory management activities happen there, it would be slower than a native C++ version.
      B) They are not partitioning their world very well.
    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large worlds. As it is more of a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (e.g. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now I am assuming that to make this sort of world performant you would need to:
      a) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level.
      b) Use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render them.
      c) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.
    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
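    For point c), one widely used alternative to putting per-block objects into a tree is chunked flat-array storage, sketched below (class, constant and method names are invented). Each 16x16x16 chunk is a plain byte array, so a block is just a byte and the chunk becomes the unit that gets culled, meshed and paged in or out:

      import java.util.HashMap;
      import java.util.Map;

      class BlockWorld {
          static final int CHUNK = 16;                  // 16 x 16 x 16 blocks per chunk
          static final byte EMPTY = 0, DIRT = 1;

          private final Map<Long, byte[]> chunks = new HashMap<>();

          // Pack the three chunk coordinates into one long map key.
          private static long key(int cx, int cy, int cz) {
              return ((long) (cx & 0x1FFFFF) << 42) | ((long) (cy & 0x1FFFFF) << 21) | (cz & 0x1FFFFF);
          }

          byte get(int x, int y, int z) {
              byte[] c = chunks.get(key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK), Math.floorDiv(z, CHUNK)));
              return c == null ? EMPTY : c[index(x, y, z)];   // unloaded chunks read as air
          }

          void set(int x, int y, int z, byte type) {
              long k = key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK), Math.floorDiv(z, CHUNK));
              byte[] c = chunks.computeIfAbsent(k, unused -> new byte[CHUNK * CHUNK * CHUNK]);
              c[index(x, y, z)] = type;                       // marking the chunk dirty would trigger a re-mesh
          }

          private static int index(int x, int y, int z) {
              int lx = Math.floorMod(x, CHUNK), ly = Math.floorMod(y, CHUNK), lz = Math.floorMod(z, CHUNK);
              return lx + lz * CHUNK + ly * CHUNK * CHUNK;
          }
      }

    Visibility (point b) then tends to be handled per chunk rather than per cube: build one mesh per chunk containing only the block faces that border an empty block, and frustum-cull whole chunks.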

    Read the article

  • Importing Multiple Schemas to a Model in Oracle SQL Developer Data Modeler

    - by thatjeffsmith
    Your physical data model might stretch across multiple Oracle schemas. Or maybe you just want a single diagram containing tables, views, etc. spanning more than a single user in the database. The process for importing a data dictionary is the same, regardless of whether you want to suck in objects from one schema or many schemas. Let’s take a quick look at how to get started with a data dictionary import. I’m using Oracle SQL Developer in this example. The process is nearly identical in Oracle SQL Developer Data Modeler – the only difference being you’ll use the ‘File’ menu to get started versus the ‘File – Data Modeler’ menu in SQL Developer. Remember, the functionality is exactly the same whether you use SQL Developer or SQL Developer Data Modeler when it comes to the data modeling features – you’ll just have a cleaner user interface in SQL Developer Data Modeler.
    Importing a Data Dictionary to a Model
    You’ll want to open or create your model first. You can import objects to an existing or new model. The easiest way to get started is to simply open the ‘Browser’ under the View menu. (The Browser allows you to navigate your open designs/models.) You’ll see an ‘Untitled_1’ model by default. I’ve renamed mine to ‘hr_sh_scott_demo.’ Now go back to the File menu, expand the ‘Data Modeler’ section, and select ‘Import – Data Dictionary.’ This is a fancy way of saying, ‘suck objects out of the database into my model.’
    Connect!
    If you haven’t already defined a connection to the database you want to reverse engineer, you’ll need to do that now. I’m going to assume you already have that connection – so select it, and hit the ‘Next’ button.
    Select the Schema(s) to be Imported
    Select one or more schemas you want to import. The schemas selected on this page of the wizard will dictate the lists of tables, views, synonyms, and everything else you can choose from in the next wizard step to import. For brevity, I have selected ALL tables, views, and synonyms from 3 different schemas: HR, SCOTT and SH. Once I hit the ‘Finish’ button in the wizard, SQL Developer will interrogate the database and add the objects to our model.
    The Big Model and the 3 Little Models
    I can now see ALL of the objects I just imported in the ‘hr_sh_scott_demo’ relational model in my design tree, and in my relational diagram. Quick Tip: Oracle SQL Developer calls what most folks think of as a ‘Physical Model’ the ‘Relational Model.’ Same difference, mostly. In SQL Developer, a Physical model allows you to define partitioning schemes, advanced storage parameters, and add your PL/SQL code. You can have multiple physical models per relational model. For example, I might have a 4-node RAC in production that uses partitioning, but in test/dev only have a single instance with no partitioning. I can have models for both of those physical implementations.
    Looking at the list of tables in my relational model, wouldn’t it be nice if I could segregate the objects based on their schema? Good news: you can, and it’s done by default. Several of you might already know where I’m going with this – SUBVIEWS. You can easily create a ‘SubView’ by selecting one or more objects in your model or diagram and adding them to a new SubView. SubViews are just mini-models. They contain a subset of objects from the main model. This is very handy when you want to break your model into smaller, more digestible parts. The model information is identical across the model and subviews, so you don’t have to worry about making a change in one place and not having it propagate across your design. SubViews can be used as filters when you create reports and exports as well. So instead of generating a PDF for everything, just show me what’s in my ‘ABC’ subview.
    But, I don’t want to do any work!
    Remember, I’m really lazy. More good news – it’s already done by default! The schemas are automatically used to create default SubViews.
    Auto-Navigate to the Object in the Diagram
    In the subview tree node, right-click on the object you want to navigate to. You can ask to be taken to the main model view or to the SubView location. If you haven’t already opened the SubView in the diagram, it will be automatically opened for you. The SubView diagram only contains the objects from that SubView. Your SubView might still be pretty big, many dozens of objects, so don’t forget about the ‘Navigator’ either!
    In summary, use the ‘Import’ feature to add existing database objects to your model. If you import from multiple schemas, take advantage of the default schema-based SubViews to help you manage your models! Sometimes less is more!

    Read the article

  • Large ResultSet on postgresql query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine. The table has around 30 sub-tables using PostgreSQL's partitioning capability. The query returns a large result set, somewhere around 1.8 million rows. In my code I use Spring JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is not being called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the whole result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed. I did this as the PostgreSQL manual recommended. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE. My environment:
      PostgreSQL 8.3
      Windows Server 2008 Enterprise 64-bit
      JRE 1.6 64-bit
      Spring 2.5.6
      PostgreSQL JDBC Driver 8.3-603
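    For context, the PostgreSQL JDBC driver only streams rows (and honours fetchSize) when autocommit is off, the statement is TYPE_FORWARD_ONLY, and a non-zero fetch size is set; otherwise it buffers the entire ResultSet in memory before the first row is returned, which matches the behaviour described above. A plain-JDBC sketch of the streaming setup follows (the connection string and table name are placeholders, and the PostgreSQL driver must be on the classpath); with Spring, the equivalent is running the query inside a transaction so autocommit is off and calling setFetchSize on the JdbcTemplate:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class StreamingQuery {
          public static void main(String[] args) throws Exception {
              try (Connection con = DriverManager.getConnection(
                      "jdbc:postgresql://dbhost:5432/mydb", "user", "secret")) {
                  con.setAutoCommit(false);                     // required for a server-side cursor
                  try (Statement st = con.createStatement(
                          ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                      st.setFetchSize(1000);                    // rows fetched per round trip
                      try (ResultSet rs = st.executeQuery("SELECT * FROM big_partitioned_table")) {
                          long rows = 0;
                          while (rs.next()) {
                              rows++;                           // process one row at a time here
                          }
                          System.out.println("processed " + rows + " rows");
                      }
                  }
                  con.commit();
              }
          }
      }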

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.
    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:
      max memory / # of buffers = buffer size
    If it were that simple I wouldn’t be writing this post.
    I’ll take “boundary” for 64K, Alex
    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. Note: This test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)
      input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
      637              5                      130867               654335
      638              5                      130867               654335
      639              5                      130867               654335
      640              5                      196403               982015
      641              5                      196403               982015
      642              5                      196403               982015
    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:
      196403 - 130867 = 65536 bytes
      65536 / 1024 = 64 KB
    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
      start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
      1                      191                  NULL                   NULL                 NULL
      192                    383                  3                      130867               392601
      384                    575                  3                      196403               589209
      576                    767                  3                      261939               785817
      768                    959                  3                      327475               982425
      960                    1151                 3                      393011               1179033
      1152                   1343                 3                      458547               1375641
      1344                   1535                 3                      524083               1572249
      1536                   1727                 3                      589619               1768857
      1728                   1919                 3                      655155               1965465
      1920                   2111                 3                      720691               2162073
      2112                   2303                 3                      786227               2358681
      2304                   2495                 3                      851763               2555289
      2496                   2687                 3                      917299               2751897
      2688                   2879                 3                      982835               2948505
      2880                   3071                 3                      1048371              3145113
      3072                   3263                 3                      1113907              3341721
      3264                   3455                 3                      1179443              3538329
      3456                   3647                 3                      1244979              3734937
      3648                   3839                 3                      1310515              3931545
      3840                   4031                 3                      1376051              4128153
      4032                   4096                 3                      1441587              4324761
    As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them.
    Max approximates True as memory approaches 64K
    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than max_memory value you might think you’re getting. Using per_cpu partitioning on large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
      DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
      DECLARE @buf_size int, @part_mode varchar(8)
      SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
      SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
      WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
      BEGIN
          BEGIN TRY
              IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                  DROP EVENT SESSION buffer_size_test ON SERVER
              DECLARE @session nvarchar(max)
              SET @session = 'create event session buffer_size_test on server
                              add event sql_statement_completed
                              add target ring_buffer
                              with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
              EXEC sp_executesql @session
              SET @session = 'alter event session buffer_size_test on server
                              state = start'
              EXEC sp_executesql @session
              INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                  SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                  FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
          END TRY
          BEGIN CATCH
              INSERT @buf_size_output (input_memory_kb)
                  SELECT @buf_size
          END CATCH
          SET @buf_size = @buf_size + 1
      END
      DROP EVENT SESSION buffer_size_test ON SERVER
      SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
             total_regular_buffers, regular_buffer_size, total_buffer_size
      FROM @buf_size_output
      GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size
    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • SQL Saturday Richmond, VA

    - by Mike
    Very excited to announce that I’ll be holding 2 sessions at SQL Saturday in VA on April 10th. If there are any frequent readers of SQLTeam.com attending, please make sure to say hi! The topics I’m covering are partitioning and loading data in real time, and an introduction to performance tuning. Hope to see you there! SQL Saturday Richmond Schedule

    Read the article

  • Dual Boot ubuntu 12.04 and Windows 7 with on two separate SSDs with UEFI

    - by Björn
    With the following setup I get a blinking cursor after installation: Windows 7 64-bit is installed on the first SSD (not UEFI, using MBR). The installation of Ubuntu 12.04 64-bit on a GPT-partitioned disk seems to work without problems but does not boot; it stops with a blinking cursor. Partitioning scheme:
      sdb1 - EFI boot partition - fat32
      sdb2 - root - btrfs
      sdb3 - home - btrfs
      sdb4 - swap
    Is it possible to mix UEFI booting with MBR and GPT when using two separate SSDs? I tried installing grub2 into an MBR as well, but it would not install there...

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging; I respect him and learn a lot from him. With everybody writing something new on this subject, I decided to start a SQL Server 2012 analytic functions series. SQL Server 2012 introduces the new analytic function CUME_DIST(). This function provides the cumulative distribution value. It will be difficult to explain this in words alone, so I will use a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments. Let us run the following query:
      USE AdventureWorks
      GO
      SELECT SalesOrderID, OrderQty,
          CUME_DIST() OVER (ORDER BY SalesOrderID) AS CDist
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
      ORDER BY CDist DESC
      GO
    The above query gives us the following result. Now let us understand the formula behind CUME_DIST and why the values for SalesOrderID = 43670 are 1: CUME_DIST returns the number of rows with a value less than or equal to the current row's value, divided by the total number of rows in the window. Since 43670 is the highest SalesOrderID in the set, every row falls at or below it, which gives 1. Let us take another example and be clear about why the values for SalesOrderID = 43667 are 0.5. Now let us enhance the same example, use PARTITION BY in the OVER clause, and see the results. Run the following query in SQL Server 2012:
      USE AdventureWorks
      GO
      SELECT SalesOrderID, OrderQty, ProductID,
          CUME_DIST() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS CDist
      FROM Sales.SalesOrderDetail s
      WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
      ORDER BY s.SalesOrderID DESC, CDist DESC
      GO
    Now let us see the result of this query. We have changed the ORDER BY clause and are now partitioning by SalesOrderID. You can see that the CUME_DIST() function gives us different results; additionally, we now see the value 1 multiple times, because with partitioning we get a CUME_DIST() value within each group of SalesOrderID. CUME_DIST() was a long-awaited analytic function and I am glad to see it in SQL Server 2012. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on interval partitioning; you can find that posting here. The best thing about ODI is that you can encapsulate all those ‘how to’ blog posts and scripts into templates that can be reused – the templates are of course Knowledge Modules. The interface design that mimics Mark's posting is shown below. The IKM I have constructed performs a simple series of steps: a CTAS to create the stage table to use in the exchange, then locking the partition (to ensure it exists; it will be created if it doesn’t), then exchanging the partition into the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the following steps and is meant to illustrate what can be done. So when you use the IKM in an interface you configure the options for hints (for parallelism levels etc.), initial extent size, next extent size and the partition variable. The KM has an option where the name of the partition can be passed in, so if you know the name of the partition then set the variable to the name; if you have interval partitioning you probably don’t know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data: FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy'')). Using a variable lets me invoke the scenario many times, loading different partitions of the same target table. Below you can see where this is defined within ODI; I had to double the single quotes in the strings since this is placed inside the execute immediate tasks in the KM. Note also that this example interface uses the LKM Oracle to Oracle (datapump), so this illustration uses a lot of the high-performing Oracle database capabilities: it uses Data Pump to unload, then a CreateTableAsSelect (CTAS) is executed on the external table built on top of the Data Pump export. This table is then exchanged into the target. The IKM and illustrations above are using ODI 11.1.1.6, which was needed to get around some bugs in earlier releases with how the variable is handled... as far as I remember.
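    In plain SQL, the three IKM steps amount to something like the sketch below (the table and column names are invented; the FOR clause matches the interval-partitioning example above):

      -- 1. CTAS: build the stage table from the Data Pump external table
      CREATE TABLE stg_sales_exchange NOLOGGING PARALLEL
        AS SELECT * FROM ext_sales_dp;

      -- 2. Lock the target partition so the interval partition is created if it does not exist yet
      LOCK TABLE fact_sales
        PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy')) IN SHARE MODE;

      -- 3. Swap the loaded segment into the target table as a near-instant dictionary operation
      ALTER TABLE fact_sales
        EXCHANGE PARTITION FOR (TO_DATE('01-FEB-2010','dd-MON-yyyy'))
        WITH TABLE stg_sales_exchange
        INCLUDING INDEXES WITHOUT VALIDATION;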

    Read the article

  • Why Clonezilla needs to reinstall grub?

    - by Pablo
    By default Clonezilla Live will reinstall GRUB after a restore. In my particular case I am restoring a disk image to the same physical disk without any partitioning or size change, so I expect to end up in exactly the same state I was in right after the backup. However, reinstalling GRUB renders my system unbootable. If I remove that option [-g] then everything is fine. My question is: in what cases, and why, does Clonezilla ever need to reinstall GRUB when restoring a disk image?

    Read the article

  • Installation issue after 4 attempts

    - by SixTen
    I have successfully installed Ubuntu 12.10 on 2 laptops, one running Vista and one running Windows 8. I have made 4 attempts (long downloads of WUBI.EXE) to different hard drives and still NO GO. The machine is an older one with Windows 2000 Professional installed and running. The system has 3 hard drives: C: (20.5 GB with 7.26 GB free), D: (74.5 GB with 33.1 GB free), and E: (35.3 GB with 24.8 GB free), which all have gigabytes of space available; there is also an A: 3.5" floppy drive and a CD burner. The CPU is older but seems sufficient: an AMD Athlon XP 1700+, and the Task Manager in Windows 2000 shows the processor works fine. The flat-screen display works fine. Here is the error message I receive each time the 'installation configuration' is verified: "No root file system is defined. Please correct this from the partitioning menu." The Ubuntu installer is allowing me a couple of options at the very top right menu. I was able to establish a wireless connection, but the main homepage won't load with the Firefox app or any other apps. I cannot access or even find any 'Partitioning Menu' from the displayed page. I cannot access files or Windows Explorer to view drives since I'm not using the Windows OS. If I try to go back and re-install Ubuntu 12.10 again, it always asks me to UNINSTALL the one found on the HD, and then I run WUBI.EXE again, which takes a long time for the download. Do I need to go back into Windows 2000 and use Windows Explorer to look at the file structure and add a partition? On previous attempts I have tried loading WUBI.EXE on all 3 hard drives: C:, D:, and E:. Sure is frustrating! Thanks for any suggestions. NEW UBUNTU user, and what I've seen so far I like. (J.R.)

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)
    We will be providing snacks and beverages. Register! Registration is required for building security.
    Presentation Line Up:
    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle
    Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.
    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5
    F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world’s largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.
    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle
    Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication and other aspects related to implementing a robust partitioned architecture.
    Additional Info:
    - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.
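    For a rough feel of the routing-table idea from the third talk, here is a small illustrative sketch against the standard Coherence NamedCache API (the cache name, shard names and class are invented, not taken from the talk): the grid holds a map from the routing attribute, such as account id, to the database shard that owns it, and the application consults that cache before picking a DataSource.

      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.NamedCache;

      public class ShardRouter {
          private final NamedCache routing = CacheFactory.getCache("account-to-shard");

          public String shardFor(long accountId) {
              String shard = (String) routing.get(accountId);
              return shard != null ? shard : "shard-default";   // fall back if the account is unmapped
          }

          public void register(long accountId, String shard) {
              routing.put(accountId, shard);                    // kept coherent across the cluster
          }
      }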

    Read the article

  • Can I use VirtualBox as a sandbox for 12.04?

    - by Stephen Myall
    I am quite a cautious person, though not afraid to experiment. I currently have 11.10 with Unity and 12.04 Beta 2 running on different partitions (no problems). I have learned here that Unity can run in VirtualBox. When I upgrade to 12.04 LTS later this month, I am curious whether I could also run 12.04 in VirtualBox for experimentation and test some things before I implement them on my system. Maybe there is a better way to achieve this, without partitioning my hard drive.

    Read the article

  • Ubuntu Install 11.10 doesn't recognize Windows 7 installation with new HDD

    - by arlendo
    Replaced my crashed HDD with a Seagate 2TB SATA drive (bought from a company who pulled it from a working computer, OS unknown) and did a fresh install of Windows 7. Windows shows a 100MB boot partition (bootable NTFS) and a 200GB Windows partition (NTFS); the rest is unallocated. Win7 Disk Management says the partitioning type is Master Boot Record. Win7 boots and runs fine. The Ubuntu 11.10 install proceeds to the Allocate Drive Space screen and should say "This computer currently has Windows 7 on it. What would you like to do?" Instead, it says something like "Install doesn't detect any existing OS on this computer." When I click on Something else, the partition table shows only the unallocated space of 1.8TB. Ubuntu Disk Utility says Partitioning: Master Boot Record, but GParted Live says Partition Table: gpt. It was my original intention to have the Windows boot partition and application partition, then install Ubuntu 11.10 using boot, root, swap and home partitions, and maybe another partition just for data (mostly photos). Currently, I would be happy if I could just get Ubuntu installed along with Win7. I am aware of the MBR limits of 3 primary partitions and 1 extended partition. I suspect that my new HDD is partitioned for GPT and that is why Ubuntu can't see the Win7 installation. Am I on the right track? I was going to use Windows Disk Management to convert GPT to MBR, but I only have the one drive on my AMD64 mini-computer and it says I have to empty the drive of all partitions before I can access the Convert command. And I can't find any bootable software that would allow me to do that conversion. Here is the result of sudo fdisk -l:
      ubuntu@ubuntu:~$ sudo fdisk -l

      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      224 heads, 19 sectors/track, 918004 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xd4a68c18

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
      /dev/sda2          206848   419637247   209715200    7  HPFS/NTFS/exFAT

      ubuntu@ubuntu:~$
    Keep in mind that I'm a definite newbie to screwing around with the inner workings of Ubuntu. I previously had Ubuntu 10.04 running with Vista and I don't remember even having to partition anything that wasn't automatic in the install. Thanks for taking a look here. My Win7 is running fine but I miss my Ubuntu.

    Read the article

  • Impossible to install Ubuntu 10.10 dual boot with Windows 7 on new Acer desktop computer

    - by Don Myers
    My brother has a brand new Acer desktop with Windows 7. I have done many installs (40+) of Ubuntu starting with 8.10, and have never run into this. I've spent three hours trying to do a dual-boot install of 10.10. When you get to the place where you would normally choose to install as a dual boot or overwrite the existing information on the hard drive, that block is just blank. Nothing. No choices, not even to do a manual partition setup. If you try to go on you get the message "No root file system is defined. Please correct this from the partitioning menu.", but there is nothing in the partitioning menu. I tried a good 10.04 disc also; the same thing happens with it. I ran a GParted live CD, and it shows the hard drive as sda with 3 partitions on the original: sda1 is a small partition called PQService; sda2 is another small partition called System Reserved, and GParted says it is the boot partition; sda3 is the main partition with the operating system (Windows 7) and all of the empty space. There is a little unallocated space at the very beginning and very end of the hard drive. If I go to Places in the live CD, it shows a 640 GB hard disk called Acer, but it also shows a 640 GB hard disk called System Reserved. They are the same disk; there is just one hard drive. If you click Properties on the System Reserved 640 GB disk, it shows all information as unknown. I had to change the boot order in the BIOS in order to run the live CD. The hard drive, instead of being listed as such, is listed as Raid:Raid Ready. Something about the way this computer is set up is preventing Ubuntu from being able to identify the hard drive partitions at all to do an install, even if you were not doing a dual boot and just wanted to overwrite Windows. Is this a bug that needs to be reported? This is a major problem for me and my brother, but also for Ubuntu if new users want to try Ubuntu and find they cannot install it.

    Read the article

  • New database security product: Audit Vault and Database Firewall, at a significantly lower price

    - by user645740
    Oracle has merged the Audit Vault and Database Firewall products, making a higher level of database security available to an even wider range of users. The new product, Oracle Audit Vault and Database Firewall (AVDF), is now available at a more favorable price. For viewing reports it includes a restricted-use Business Intelligence Publisher license. The data collection and management server components are specially protected, and the following are included for restricted use on the Audit Vault Server and Database Firewall servers: Oracle Database Enterprise Edition, Database Vault, Partitioning, Advanced Compression and Advanced Security.

    Read the article

  • Can I install Natty alongside Maverick and retain my encrypted /home partition?

    - by Jon
    This is my partitioning scheme:
      10GB partition, empty -- will be installing Natty here
      10GB partition containing Maverick
      2GB swap partition
      300GB encrypted /home partition
    I've had few problems in the past with having two Ubuntu installs on two separate partitions, giving /home its own partition, but I'm a little concerned since I'm now using an encrypted /home partition. The installer won't try to wipe my /home if I click "encrypt home directory," will it?

    Read the article

  • why does ubuntu 12.04 remove gparted during the installation?

    - by user09887
    I was installing Ubuntu 12.04, and I expanded the progress bar when it was about 3/4 done - it was at the 'remove unnecessary parts from installation' stage. I was thinking that removing gparted was not a good idea, as it is useful for partitioning USB devices, not just the internal hard disks. Does anyone else think that Ubuntu shouldn't remove gparted? Thanks. P.S. Sorry if this is posted in the incorrect forum/place. This is my first post to the Ubuntu forums.

    Read the article
