Search Results

Search found 28443 results on 1138 pages for 'partition table'.


  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I've saved using Clonezilla from different places, and to make matters worse, Linux Mint is formatted as an LVM. I need both of these images specifically, as the Windows one is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't stop Clonezilla from messing up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the clone leaves me with an unusable partition and I'm not sure how to fix the partition table. Here's what I'm doing: using GParted to make partitions sda1 40 GB NTFS (Windows), sda2 extended 70 GB, sda5 LVM2 PV 69.99 GB (Linux), sda3 500 MB (GRUB); Clonezilla the Windows image into the sda1 partition (keeping the partition table); Clonezilla the Linux image into the sda5 partition (not recreating the partition table). After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working, or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.

    Read the article

  • W2K INACCESSIBLE_BOOT_DEVICE, with System Commander

    - by Gary Kephart
    I have a system that originally had Win NT. I added System Commander (SC7) and then added W2K. The relevant partitions are: 0 - Primary - MultiFAT (has Win NT, mapped to C:) 1 - Extended - with many logical partitions: 1.1 NTFS, which has W2K and is mapped to D: 1.2 other logical partitions which are irrelevant to this. D: was getting full. It needed room for virus definitions and Windows upgrades. In the past, I had simply used SC7 to resize D: without problems. So I did it again this time. However, upon finishing, I got the message "Unable to create partition". It also marked the partition as unformatted. I checked that the files on the disk were still there using SC7's Partition Explorer, and they were. I continued, and the system managed to boot up fine anyway. Then I rebooted the system again. This time, I got a message saying "INACCESSIBLE_BOOT_DEVICE". I went back into SC7 and to Partition Commander, and it was still saying that the partition was unformatted, but Partition Explorer still showed the files on the system. I finally decided to resize the partition again, figuring that this would force a rewrite of the partition information. That seemed to work, until I had to reboot again. Now I can't see the files using Partition Explorer, and the Resize button is disabled. What now?

    Read the article

  • vmware vmdk disk problem

    - by dmtr
    Hello, I have a VMware ESXi 4 server and 2 storage servers (mounted as NFS). The storage servers (Fedora 14) form a DRBD cluster (dual primary) with an OCFS2 filesystem; each server also has a local partition with an ext4 filesystem, and both are mounted as NFS datastores on the ESXi server. When I tried to copy a virtual machine's files (powered off, naturally) from the ext4 partition to the OCFS2 partition, the vmdk total file size is different, but the md5sum is the same. On the ext4 partition: # ls -la total 28492228 -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk # md5sum disk-flat.vmdk 0eaebe3138beb32f54ea5de6dfe5a987 On the ocfs2 partition: # ls -la total 13974660 -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk # md5sum disk-flat.vmdk 0eaebe3138beb32f54ea5de6dfe5a987 When I power on the virtual machine from the OCFS2 partition it doesn't work. The virtual machine runs Windows, and it freezes after the Windows logo. From the ext4 partition the virtual machine works. A test with Linux (created and installed on the ext4 partition, then copied) shows the same problem. When I create a virtual machine directly on the OCFS2 partition, there are no problems. I tried copying via the vSphere client, and I have the same problem. Any suggestions?

    Read the article

  • Cannot boot Windows XP from cloned hard disk - what can I do?

    - by Martin
    My configuration: a PC (a few years old) with an MSI K8N-Neo-4F motherboard and 1 GB RAM. Disk 1 (Maxtor, SATA II, 250 GB): 2 partitions; on partition 1 (48 GB): Windows XP Professional (NTFS); on partition 2 (190 GB): data (NTFS). I wanted a larger and faster disk (the PC is incredibly slow and the disk rattles constantly when I try to open an application or during Windows startup), so I took Disk 2 (Seagate, SATA II, 500 GB), installed it in the PC, created a 400 GB partition at the end of the disk and cloned the data to it, which worked well. I then installed a swap partition and a partition for Ubuntu Linux 12.10 on the first part of the disk, so I was able to boot Linux and the old Windows XP with the Linux system selection at startup. Now I wanted to move Windows XP to the new disk, so I deleted the Linux partitions, cloned Windows XP to the new disk (with free tools - EASEUS), left both disks in the PC and tried to select the new hard drive as the boot disk during startup. This did not work; the PC refused to boot from the second disk. I tried many things, with no success: making the boot partition on the 2nd drive "active" in the Windows system preferences; modifying the boot.ini file to boot from the second disk, which ended with an error message stating that it was not possible to boot from this disk because of a hardware failure or something similar; removing the original disk and plugging the new one into the same SATA port as the original one, where booting also failed with an error message; repairing the MBR by booting into recovery mode from the Windows XP installation CD-ROM, selecting the second disk and running FIXMBR, which said that everything was fine with the MBR - after that the PC at least tried to boot from the newer disk, but startup hung during the blue screen with the Windows logo; deleting the cloned partition and cloning again, this time with Macrium Reflect Free - again no success during booting. I tried a lot of things with no success, so I wonder what I am doing wrong. What can I do to successfully clone my Win XP partition and replace the original disk with a larger, bootable one?

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the JavaScript world. If you know or have heard of the new feature in C# 5.0 called "async and await", or have learned F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind", we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by "Wind" automatically and transparently.

    What's the Problem: Dense "Callback" Phobia

    Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around an async callback pattern, which means that when an operation is done, it invokes a function passed as the last parameter. For example, the database connection opening code would be like this.

    1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: });

    And then if we need to query the database the code would be like this, nested in the previous function.

    1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: });

    Assume we need to copy some data from this database to another: then we need to open another connection and execute the command within the function under the query function.

    1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: });

    This is just an example; in a real project the logic would be more complicated. Our application might become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is how to make the code straightforward and easy to read, something like below.
    1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }

    What's the Problem: Sync-styled Async Programming

    Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation, mixed in with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js utilizes non-blocking IO, we cannot simply invoke those operations one by one; they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to run the copy operations and the print operation in sequence I introduced a module named "async" and changed the code as below.

    1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: });

    This ensured that the "Finished" message would be printed only after all table entities had been inserted, but it cannot promise that the records will be inserted in sequence. So here is another challenge: can we make the code look sync-styled, like this?

    1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }

    How "Wind" Helps

    "Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available on NPM, so we can install it with "npm install wind". Now let's create a very simple Node.js application as the example. This application will take some website URLs from the command arguments, try to retrieve the length of each body and print it to the console, then print "Finished" at the end. I'm going to use the "request" module to keep the HTTP call simple, so I also need to install it with "npm install request". The code would be like this.
    1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main();

    Let's execute this application. (I split the arguments across multiple lines for easier reading.)

    1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx"

    Below is the output. As you can see, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because the request and console logging commands are executed asynchronously and concurrently. Now let's introduce "Wind" to execute them in order: it will request the websites one by one, and print the message at the end.

    First of all we need to import the "Wind" package and make sure there's only one global variable named "Wind"; ensure it's "Wind", not "wind".

    1: var Wind = require("wind");

    Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

    1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: };

    The code above creates a "Task" from the original request-calling code. In "Wind" a "Task" is an operation that will finish at some point in the future. A "Task" can be started by invoking its start() method, but no one knows when it actually will finish. Wind.Async.Task.create helps us create a task; its only parameter is a function where we put the actual operation and then notify the task object that it finished successfully or failed, using the complete() method. In the code above I invoked the request method: if it retrieved the response successfully I set the status of this task to "success" with the URL and body length; if it failed I set this task to "failure" and passed the error out.

    Next, we will change the main() function. In "Wind", if we want a function to be controlled by Wind we need to mark it as "async". This is done with the code below.
    1: var main = eval(Wind.compile("async", function() { 2: }));

    When the application is running, Wind will detect "eval(Wind.compile("async", function" and generate anonymous code from the body of the original function. The application will then run the anonymous code instead of the original one. In our example the main function will be like this.

    1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: }));

    As you can see, when I request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and that the flow should pause until it finishes (or fails). So in this case my application will pause until the first response is received, print its body length, then try the next one. At the end, it prints the finish message.

    Finally, execute the main function. The full code would be like this.

    1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();

    Run our new application. At the beginning we will see the code compiled and generated by Wind; then the pages are requested one by one, and at the end the finish message is printed. Below is the code Wind generated for us, with the original code shown alongside the compiled output.
    1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })

    How Wind Works

    Some may raise a big concern on finding that I utilized "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, and "eval" performs poorly. But I would say: Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function", so it knows the function should be compiled. It then utilizes parse-js to analyze the inner JavaScript and generates the anonymous code in memory, rewriting the original code so that when the application runs it uses the anonymous version instead of the original. Since the code generation is done once, when the application starts, the generated code is reused no matter how long our application runs or how many times the async function is invoked; there's no need to generate it again, so there is no significant performance hit when using Wind.
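    To make the mechanics above concrete, here is a minimal, self-contained sketch (an illustration built from the API shown in this article, not code taken from it) that wraps the callback-based setTimeout as a Wind task and awaits it:

    var Wind = require("wind");

    // Wrap setTimeout as a Wind task; there is no error case here,
    // so the task always completes with "success" and no result value.
    var sleepAsync = function (ms) {
        return Wind.Async.Task.create(function (t) {
            setTimeout(function () {
                t.complete("success", null);
            }, ms);
        });
    };

    // Mark the function as "async" so Wind compiles it at startup.
    var demo = eval(Wind.compile("async", function () {
        console.log("before");
        $await(sleepAsync(1000)); // this flow pauses for about one second
        console.log("after");
    }));

    demo().start();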
    Wind in My Previous Demo

    Let's adopt Wind into one of my previous demonstrations to see how it helps us make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied records from my WASD to table storage, the logic was: 1, open the database connection; 2, execute a query to select all records from the table; 3, recreate the table in Windows Azure table storage; 4, create entities from each of the records retrieved previously and insert them into table storage; 5, finally, show a message as the HTTP response. But as the image below shows, with so many callbacks and async operations it's very hard to follow the logic from the code.

    Now let's use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files in the project and mark them as "Copy always". Add the Wind package to the source code; pay attention to the variable name, you must use "Wind" instead of "wind".

    1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind");

    Now we need to create some async functions using Wind. All the async operations we need - open the database, retrieve records, recreate the table (delete and create) and insert an entity into the table - should be wrapped so they can be controlled by Wind. Below are these new functions; all of them are created using Wind.Async.Task.create.

    1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: };

    Then, in order to use these functions, we create a new function which contains all the steps of the data copying.

    1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: }));

    Let's execute the steps one by one with the "$await" keyword introduced by Wind, so they are invoked in sequence. First, open the database connection.

    1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: }));

    Then retrieve all records over the database connection.

    1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: }));

    After recreating the table, we need to create the entities and insert them into table storage.
    1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: }));

    Finally, send the response back to the browser.

    1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: }));

    If we compare this with the previous code we will find it has become more readable and much easier to understand; it's easy to know what this function does even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

    1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: });

    And below are the logs printed in the local compute emulator console. As we can see, the functions executed one by one, and finally the response went back to my browser.

    Scaffold Functions in Wind

    Wind provides not only the async flow control and compile functions, but many scaffold methods as well; we can build our async code more easily by using them. I'm going to introduce some basic scaffold functions here. In the code above I created some functions wrapped around the original async functions, such as open database, create table, etc. All of them are very similar: create a task using Wind.Async.Task.create and return the error or result object through the task's complete() function. In fact, Wind provides some functions for us to create task objects from the original async functions. If the original async function only has a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object directly. For example, the code below returns a task object wrapping the file-existence check function.
    1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

    In Node.js a very popular async function pattern is that the first parameter of the callback function represents the error object, and the remaining parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created with the code below.

    1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */

    When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, cannot be processed correctly, so I personally suggest writing the wrapped method manually.

    Another scaffold feature in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving records and recreating the table should be invoked one by one, but the copying of data from the database to table storage can be executed in parallel. Wind has a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task, which completes only when all of the tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all the copy operations execute at the same time.

    1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });

    Besides task creation and coordination, Wind supports cancellation, so we can send a cancellation signal to tasks. It also includes an exception solution, which means any exceptions will be reported to the calling function.
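    As a side note, the same Task.whenAll pattern can be applied to the URL-fetching example from earlier in this post. The sketch below is an illustration rather than code from the article, and it assumes whenAll yields the individual task results as an array:

    var Wind = require("wind");
    var request = require("request");

    var args = process.argv.splice(2);

    // Same task wrapper as in the article.
    var requestBodyLengthAsync = function (url) {
        return Wind.Async.Task.create(function (t) {
            request(url, function (error, response, body) {
                if (error || response.statusCode != 200) {
                    t.complete("failure", error);
                }
                else {
                    t.complete("success", { uri: response.request.uri.href, length: body.length });
                }
            });
        });
    };

    var mainParallel = eval(Wind.compile("async", function () {
        // Start all requests at once instead of awaiting them one by one.
        var tasks = [];
        for (var i = 0; i < args.length; i++) {
            tasks.push(requestBodyLengthAsync(args[i]));
        }
        // whenAll completes when every task has completed (or any task fails).
        var results = $await(Wind.Async.Task.whenAll(tasks));
        for (var j = 0; j < results.length; j++) {
            console.log("%s: %d.", results[j].uri, results[j].length);
        }
        console.log("Finished");
    }));

    mainParallel().start();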
    Summary

    In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, different from other async libraries and frameworks, and adopting ideas from F# and C#, Wind utilizes runtime code generation to make it easier to write async, callback-based functions in a sync-styled way. By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still under development and improvement. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues

    Source code can be downloaded here.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.) One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided; in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model element naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

    Automatic model element naming features

    When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes effect with model-first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings: open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

    Strippers!

    The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

    Underscores Be Gone

    Another thing you might want to get rid of is underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to do so.

    PasCal everywhere... or not, your call

    LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases and, based on the underscore removal setting, keep or remove the underscores, upper-casing the first character of each word-break fragment and lower-casing the rest.
    Say we keep the defaults, which are to remove underscores, always PasCal case and strip the TBL_ fragment. With our example TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased and the rest lower-cased, so this results in OrderLines. Almost there!

    Pluralization and Singularization

    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after.

    Show me the patterns!

    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define what the auto-created navigator names will look like. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

    When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate foreign key fields in your classes: on NHibernate and EF these can be hidden in the generated code if you want.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as the PK field, you might want a different name than Id for the foreign key field. In our Customer - Order example, you might want CustomerId instead as the foreign key name in Order. The pattern for foreign key fields gives you that freedom.

    Abbreviations... make sense of OrdNr and friends

    I already described word breaks in the PasCal casing paragraph and how they're used for the PasCal casing in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: he can't stand it, and typing is something he avoids like the plague. This has resulted in tables and fields with names which are very short but also very unreadable. Example: our TBL_ORDER_LINES table has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you'd also like is not to have to rename that field manually.
    There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation/full-word pairs, and during reverse engineering of model elements from tables/views LLBLGen Pro will take care of the rest. For the ORD_NR field you need two pairs: ORD as abbreviation with Order as full word, and NR as abbreviation with Number as full word. LLBLGen Pro will now convert every word fragment found with the word breaks which matches an abbreviation into the given full word. They're case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.

    Automatic relational model element naming features

    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations, it's key to have the flexibility to control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

    Underscores, welcome back!

    Not every database is case insensitive, and not every organization requires PasCal-cased table/field names; some demand all-lowercase or all-uppercase names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower-case characters and you need all caps. No worries, see below.

    Casing directives so everyone can sleep well at night

    For case sensitive databases and case insensitive databases there is one setting for each which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example we need all upper-case characters, so we select AllUpperCase for the setting for case insensitive databases. This will produce the name ORDER_LINE.

    Sequence naming after a pattern

    Some databases support sequences, and using model-first development it's key that sequences, when needed, are created automatically and, if possible, with a name which shows where they're used. Say you have an entity Order and you want the PK values to be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and, as you want all numeric PK fields to be sequenced, you have enabled this with the setting Auto assign sequences to integer pks.
    When you use LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed.

    Grouping and schemas

    When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set the setting Set schema name after group name to true (the default). This gives you total control over what from your model is placed where in the database.

    But I want plural table names... and TBL_ prefixes!

    For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version.

    Conclusion

    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insight into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!
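    As an aside, the name-construction pipeline the article describes (strip prefixes, break words, expand abbreviations, PasCal-case, singularize) is easy to picture in code. This is an illustrative sketch in JavaScript, not LLBLGen Pro's actual implementation, with sample strip patterns and abbreviation pairs taken from the article's examples:

    // Sample configuration, mirroring the article's examples.
    var stripPrefixes = ["TBL_"];
    var abbreviations = { "ORD": "Order", "NR": "Number" };

    function constructElementName(rawName) {
        var name = rawName;
        // 1. Strip configured prefixes.
        stripPrefixes.forEach(function (p) {
            if (name.indexOf(p) === 0) { name = name.substring(p.length); }
        });
        // 2. Word breaks at underscores (spaces and case differences omitted here).
        var fragments = name.split("_");
        // 3. Expand abbreviations, then PasCal-case each fragment.
        var cased = fragments.map(function (f) {
            var word = abbreviations[f.toUpperCase()] || f;
            return word.charAt(0).toUpperCase() + word.substring(1).toLowerCase();
        });
        var result = cased.join("");
        // 4. Naive singularization: drop a trailing "s" (real rules are richer).
        if (/s$/i.test(result) && result.length > 1) {
            result = result.substring(0, result.length - 1);
        }
        return result;
    }

    // constructElementName("TBL_ORDER_LINES") -> "OrderLine"
    // constructElementName("ORD_NR")          -> "OrderNumber"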

    Read the article

  • Practical mysql schema advice for eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & Laravel Framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others a width, height and depth, and others a volume. Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table. Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values. Option 3: Normalise so that each attribute type has its own table. Option 4: Create an attributes table, as well as an attribute_values table with the value stored as varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table. Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table. My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I also want the frontend to be fast, with less concern over the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options and cannot find a suitable answer in terms of the best method of approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the following kind of functionality - http://www.glassesdirect.co.uk/products/ - in my eCommerce store. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male/female or plastic/metal/titanium, etc. Alternatively, should I just dump the MySQL relational database idea and learn MongoDB?
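    As an illustration of the shape Option 4 describes (the names below are invented for the sketch), here is how attribute/value data and sidebar-style filtering could look in JavaScript. In MySQL the same shape maps to products, attributes and attribute_values tables with one join per selected filter; in MongoDB it would simply be the document itself:

    // Each product carries only the attributes it actually has.
    var products = [
        { id: 1, name: "Aviator",  attributes: { gender: "male",   material: "metal"   } },
        { id: 2, name: "Wayfarer", attributes: { gender: "female", material: "plastic" } }
    ];

    // Keep the products whose attributes match every selected filter,
    // e.g. { material: "metal" } or { gender: "male", material: "metal" }.
    function filterByAttributes(items, selected) {
        return items.filter(function (item) {
            return Object.keys(selected).every(function (key) {
                return item.attributes[key] === selected[key];
            });
        });
    }

    console.log(filterByAttributes(products, { material: "metal" })); // -> [Aviator]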

    Read the article

  • Copy Windows 7 environment running on external drive to a new HDD partition on a Mac?

    - by sam
    My PC has just given up on me. It's getting a bit old so I took the HDD out and bought a Mac. I've put the PC's HDD into an enclosure, plugged it into the Mac and dragged across most of my files. I've got a couple of programs that are PC-only that I have to run from time to time, so I'd like to make a partition on the Mac and have Windows 7 installed there. I know how to set up a clean version of Windows in Boot Camp. Is there an easy way to copy over the environment on the HDD to the Mac's partition? I don't really need all the stuff on the PC, just the operating system. Is there a better way to do this other than having to deactivate Windows, then reactivate it on the Mac? I have a free upgrade to Windows 7 from Vista, thus I have an upgrade disc, not the full disc.

    Read the article

  • The Legend of the Filtered Index

    - by Johnm
    Once upon a time there was a big and bulky twenty-nine million row table. He tempestuously hoarded data like a maddened shopper amid a clearance sale. Despite his leviathan nature and eager appetite he loved to share his treasures. Multitudes from all around would embark upon an epiphanous journey to sample the contents of his mythical purse of knowledge. After a long day of performing countless table scans the table was overcome with fatigue. After a short period of unavailability, he decided that he needed to consider a new way to share his prized possessions in a more efficient manner. Thus, a non-clustered index was born. She dutifully directed the pilgrims that sought the table's data - no longer would those despicable table scans darken the doorsteps of this quaint village. And yet, the table's voracious appetite did not wane. Any bit or byte that wandered near him was consumed with vigor. His columns and rows continued to expand beyond the expectations of even the most liberal estimation. As his rows grew grander they became more difficult to organize and maintain. The once bright and cheerful disposition of the non-clustered index began to dim. The wait time for those who sought the table's treasures began to increase. Some of those who came to nibble upon the banquet of knowledge even timed out and never realized their aspired enlightenment. After a period of heart-wrenching introspection, the table decided to drop the index and attempt another solution. At the darkest hour of the table's desperation came a grand flash of light. As his eyes regained their vision there stood several creatures who looked very similar to his former, beloved, non-clustered index. They all spoke in unison as they introduced themselves: "Fear not, for we come to organize your data and direct those who seek to partake in it. We are the filtered index." Immediately, the filtered indexes began to scurry about. One took control of the past quarter's data. Another took control of the previous quarter's data. All of the remaining filtered indexes followed suit. As the nearly gluttonous habits of the table scaled forward, more filtered indexes appeared. Regardless of the table's size, all of the eagerly awaiting data seekers were delivered data as quickly as a Jimmy John's sandwich. The table was moved to tears. All in the land of data rejoiced and all lived happily ever after, at least until the next data challenge crept from the fearsome cave of the unknown. The End.

    Read the article

  • How can I resize a partition managed by LVM?

    - by Mike C
    I have a fresh CentOS install on my machine and I would like to make space on the drive available in order to install Arch Linux. Unfortunately, LVM is new to me and doesn't appear to work well with GParted (on my Ubuntu 9.0 LiveCD, anyway). It always seems to treat the LVM as some unknown filesystem. I tried to use the 'lvm' utility on the LiveCD to resize the partition down, but I ended up somehow corrupting my filesystem (hence the fresh CentOS install). I haven't been able to find any documentation on LVM that makes much sense to me as a *nix n00b. Is there anywhere I can find some helpful documentation on LVM, as well as a clear step-by-step on how to successfully resize a partition? Thanks, Mike

    Read the article

  • TrueCrypt with ext2/3 partition write access under Mac OS X Snow Leopard ?

    - by ssc
    I'm using a TrueCrypt volume with an ext3 partition under Snow Leopard with MacFUSE. I can mount ordinary (unencrypted) ext3 partitions read/write from the shell by adding command line arguments as shown in "Mounting ext3 in Snow Leopard…". However, TrueCrypt mounts the partition read-only and I don't see any way to 'sneak in' the required additional arguments. How do I mount it read/write? I was hoping for a solution similar to the one for mounting NTFS, but diskutil info /Volumes/my_volume/ does not return a UUID; it does tell me Read-Only Media: No Read-Only Volume: Yes though...

    Read the article

  • Getting "GRUB loading ... no such partition"?

    - by shameedp
    I am dual-booting two OSes, Windows 7 and Linux. The C drive has 20 GB, of which 5 GB was allocated for Windows 7 (original) and 15 GB for Linux. Since the space for Windows was very low, I used the EaseUS partition manager, deleted my Linux OS and merged the unused space into my C drive, so it is now 20 GB. The thing is, after the reboot I am getting: GRUB loading. Welcome to GRUB! error: no such partition. entering rescue mode. . . Kindly help me, guys; the problem I am facing is that I don't have a DVD drive to resolve it using recovery mode. Waiting for your reply, guys. In the ls command I have (hd0) (hd0,msdos8) (hd0,msdos7) (hd0,msdos6) (hd0,msdos5) (hd0,msdos2) (hd0,msdos1)

    Read the article

  • Excel tables creation upon MySQL data import (new feature in MySQL for Excel 1.2.x)

    - by Javier Treviño
    In this blog post we are going to talk about one of the features included since MySQL for Excel 1.2.0. You can install the latest GA or maintenance version using the MySQL Installer, or optionally download any GA or non-GA version directly from the MySQL Developer Zone. Remember how easy it is to dump data from a MySQL table, view or stored procedure to an Excel worksheet? (If you don't, you can check out this other post: How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel.) In version 1.2.0 we introduced some advanced options for the Import MySQL Data operation regarding Excel tables. The Advanced Options dialog shown above is accessible from any Import Data dialog. When the Create an Excel table for the imported MySQL table data option is checked (which it is by default), MySQL for Excel will create an Excel table (also known in Excel jargon as a ListObject) from the Excel range containing the imported MySQL data. This "little feature" enables immediate use of the Excel table in data analysis: including it for summarization in a PivotTable, adding a summarization row at the end of the table's data, or sorting and filtering the table's data by clicking the drop-down button next to each column's header, among other actions. The Excel tables that are created automatically from imported MySQL data will have a name like [UserPrefix].<SchemaName>.<DbObjectName> for tables and views, and <Prefix>.<SchemaName>.<ProcedureName>.<ResultSetName> for stored procedures. Notice that the first piece of the name is an optional [UserPrefix]; the prefix is only used if the Prefix Excel tables with the following text option is checked. The suggested prefix is "MySQL", but it can be changed to whatever text is suitable for you. Excel tables must have a table style so they are easily identified. There are a lot of predefined Excel table styles; by default the MySqlDefault style is applied, which is the style you have seen applied to imported data for Edit Sessions, and which adds simple and elegant formatting to the table. If you wish to change it to any of the predefined Excel table styles, you can do so through the drop-down list on the Use style [[styles drop-down]] for the new Excel table option. Excel tables are the basic building blocks for data analysis or self-service Business Intelligence using other, more advanced Excel tools like Power Pivot, Power View or Power Map; this feature readies imported MySQL data for use in those more advanced ways. We hope you give this and the other new features in the 1.2.x version family a try! Remember that your feedback is very important to us, so drop us a message and follow us: MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/ MySQL for Excel forum: http://forums.mysql.com/list.php?172 Facebook: http://www.facebook.com/mysql YouTube channel: https://www.youtube.com/user/MySQLChannel Cheers!
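    For illustration, the naming rule above is simple enough to sketch in code. This is a hypothetical helper written for this listing, not part of MySQL for Excel:

    // Builds [UserPrefix].<SchemaName>.<DbObjectName> for tables and views,
    // and <Prefix>.<SchemaName>.<ProcedureName>.<ResultSetName> for stored procedures.
    function excelTableName(prefix, schemaName, dbObjectName, resultSetName) {
        var parts = [schemaName, dbObjectName];
        if (resultSetName) { parts.push(resultSetName); } // stored procedures only
        if (prefix)        { parts.unshift(prefix); }     // only when prefixing is enabled
        return parts.join(".");
    }

    // excelTableName("MySQL", "sakila", "actor") -> "MySQL.sakila.actor"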

    Read the article

  • Latest additions to Certify

    - by SadFab
    New releases added: FMW, OBIEE, OIAM, OFR, ODI, GOLDENGATE

    FMW 11.1.1.6.0
    o   Oracle WebLogic Server 10.3.5.0.0
    o   Oracle WebLogic Server 10.3.6.0.0
    o   Oracle HTTP Server
    o   Oracle Web Cache
    o   Oracle Application Development Framework
    o   Oracle Application Development Runtime
    o   Oracle SOA Suite
    o   Oracle Application Integration Architecture Foundation Pack
    o   Oracle B2B
    o   Oracle BPEL Process Manager
    o   Oracle Business Activity Monitoring
    o   Oracle Business Process Management
    o   Oracle Complex Event Processing
    o   Oracle Enterprise Repository
    o   Oracle Mediator
    o   Oracle Service Bus
    o   Oracle Internet Directory
    o   Oracle Virtual Directory
    o   Oracle Identity Federation
    o   Oracle Directory Services Manager
    o   Oracle Authentication Services for OS
    o   Oracle Portal
    o   Oracle WebCenter Portal
    o   Oracle Reports
    o   Reports Builder
    o   Oracle Forms
    o   Forms Builder
    o   Discoverer Administrator
    o   Discoverer Desktop
    o   ECM certifications (renamed Oracle WebCenter Content)
    o   WebCenter Sites (formerly Fatwire)

    OBIEE 11.1.1.6.0
    o   Oracle Business Intelligence Enterprise Edition
    o   Oracle Business Intelligence Publisher
    o   Oracle Real-Time Decisions
    o   Oracle Segmentation Server

    Oracle Identity & Access Management 11.1.1.5.0
    o   Oracle Access Manager
    o   Oracle Adaptive Access Manager
    o   Oracle Authorization Policy Manager
    o   Oracle Entitlements Server
    o   Oracle Identity Manager
    o   Oracle Identity Navigator
    o   Oracle Security Token Service

    Oracle Identity & Access Management 11.1.2.0.0
    o   Oracle Access Manager
    o   Oracle Adaptive Access Manager
    o   Oracle Authorization Policy Manager
    o   Oracle Enterprise Single Sign On Suite
    o   Oracle Entitlements Server
    o   Oracle Identity Connect
    o   Oracle Identity Federation
    o   Oracle Identity Manager
    o   Oracle Identity Navigator
    o   Oracle Privileged Account Manager
    o   Oracle Security Token Service
    o   Oracle Unified Directory

    OFR 11.1.2.0.0
    o   Oracle Forms
    o   Oracle Reports

    ODI 11.1.1.6.0
    o   Oracle Data Integrator Agent
    o   Oracle Data Integrator Console
    o   Oracle Data Integrator Studio

    OGG 11.1.1.1.2
    o   Oracle GoldenGate
    o   Oracle GoldenGate Adapters for Java and Flat File
    o   Oracle GoldenGate for Base24 3.0.6

    OGG 11.2.1.0.1
mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;} o   Oracle GoldenGate

    Read the article

  • How do I recover a missing partition (Windows 8 with recovery USB)?

    - by Akbar Ali
    I bought a Samsung laptop with Windows 8 preinstalled. After a year I removed Windows 8 and installed Windows 7; before removing Windows 8, I made a Windows 8 recovery USB. Now I want to get my original Windows 8 back, but when I use the USB it says the recovery partition is missing or has been deleted. Can I install Windows 8 from the internet? If I use my recovery USB, will Windows activate or not? Or is there any other way to do this?

    Read the article

  • What are the differences between MBR vs GPT vs any other partition scheme?

    - by Safran Ali
    Can anyone tell me the main differences between partition schemes, e.g. MBR vs. GPT? Why would one choose one over the other? I am not an expert, but newer releases of Mac OS X include a feature called Time Machine, which I find highly useful, and GPT is a requirement for Mac OS X Lion ... so on that basis I would say GPT is more useful than MBR. What other partition schemes are there, and which one should be used in which situation?

    Read the article

  • How to use robocopy to move Users folder from partition c to d on Windows 7?

    - by Bastian
    I just tried to copy my Users folder from partition C to partition D using the method mentioned in this post, running from the command prompt of my Windows 7 installation disk (repair mode). Unfortunately I encountered two problems: (1) when using the command robocopy c:\Users d:\Users /mir /xj /copyall, robocopy says that it can't find the file C:\Users\, although it exists; (2) when using the command robocopy x:\Users d:\Users /mir /xj /copyall, robocopy says that it cannot find the path d:\Users\Administrator\Application Data, error code <0x00000003>. Does anybody know what the reasons for these errors might be?

    Read the article

  • What is the best way to design a table with an arbitrary id?

    - by P.Brian.Mackey
    I need to create a table with a unique ID as the PK; the ID is a surrogate key. Originally I had a natural key, but requirement changes have undermined that idea. Then I considered an auto-incrementing identity, but this presents problems: (a) I can't specify my own ID, and (b) the IDs are difficult to reset. Together these make it difficult to copy over this table with new data or move the table across domains, e.g. Dev to QA. I need to refer to these IDs from the front end (JavaScript), so they must not change. The only way I am aware of to meet all these challenges is a GUID ID: I can overwrite the IDs when I need to, or generate a new one without concern for order (an int-based ID would require knowing the last inserted ID). Is a GUID the best way to accomplish my goals? Considering that a GUID is a string and joining on a string is expensive, is there a better way?
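    One pattern worth considering here (an editorial sketch, not from the original question): SQL Server stores a GUID as a 16-byte uniqueidentifier rather than a string, so joins on it are cheaper than varchar joins, though still wider than an int. Random GUIDs do fragment a clustered index, which NEWSEQUENTIALID() mitigates while still allowing explicit IDs to be supplied when copying data across environments. A minimal T-SQL sketch, with hypothetical table and column names:

    create table Widgets (
        id uniqueidentifier not null
            constraint DF_Widgets_id default newsequentialid()
            constraint PK_Widgets primary key clustered,
        name varchar(50) not null
    );

    -- Normal inserts take ordered GUIDs from the default...
    insert into Widgets (name) values ('first');

    -- ...but an explicit GUID can still be supplied, so IDs
    -- survive a Dev-to-QA copy unchanged:
    insert into Widgets (id, name)
    values ('0f8fad5b-d9cb-469f-a165-70867728950e', 'copied row');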

    Read the article

  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
    -- clean up any messes left over from before:
    if OBJECT_ID('AllTeams') is not null
        drop view AllTeams
    go
    if OBJECT_ID('Teams') is not null
        drop table Teams
    go

    -- sample table:
    create table Teams
    (
        id int primary key,
        City varchar(20),
        TeamName varchar(20)
    )
    go

    -- sample data:
    insert into Teams (id, City, TeamName)
    select 1, 'Boston', 'Red Sox' union all
    select 2, 'New York', 'Yankees'
    go

    create view AllTeams
    as
        select * from Teams
    go

    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Now, add a new column to the Teams table:
    alter table Teams add League varchar(10)
    go

    -- put some data in there:
    update Teams set League = 'AL'

    -- run it again
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Notice that League is not displayed!

    -- Here's an even worse scenario, when the table gets altered
    -- in ways beyond adding columns:
    drop table Teams
    go

    -- recreate table putting the League column before the City:
    -- (i.e., simulate re-ordering and/or inserting a column)
    create table Teams
    (
        id int primary key,
        League varchar(10),
        City varchar(20),
        TeamName varchar(20)
    )
    go

    -- put in some data:
    insert into Teams (id, League, City, TeamName)
    select 1, 'AL', 'Boston', 'Red Sox' union all
    select 2, 'AL', 'New York', 'Yankees'

    -- Now, select again from our view:
    select * from AllTeams

    --Results:
    --
    --id          City       TeamName
    ------------- ---------- --------------------
    --1           AL         Boston
    --2           AL         New York

    -- The column labeled "City" in the view is actually the League,
    -- and the column labeled "TeamName" is actually the City!
    go

    -- clean up:
    drop view AllTeams
    drop table Teams
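    The usual remedies follow directly from the demo above. A brief sketch, assuming the same Teams table: either name the view's columns explicitly, or, if a SELECT * view must remain, refresh its cached metadata after every schema change with the system procedure sp_refreshview.

    -- Safer: list columns explicitly, so a reordered or added
    -- column cannot silently shift the view's output:
    create view AllTeamsExplicit
    as
        select id, City, TeamName, League from Teams
    go

    -- If the SELECT * view must stay, refresh its metadata
    -- after each change to the underlying table:
    exec sp_refreshview 'AllTeams'
    go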

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to install Ubuntu 10.04 alongside Windows 7, which is already installed, on the same hard drive. As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the guide on the Ubuntu Community wiki, which doesn't name specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether the two installers are the same or nearly the same, and whether I can follow iinet for my dual-boot installation. Or are there better websites with information about dual-boot installation of Ubuntu 10.04 and Windows 7?

    As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partitioning tool in Ubuntu's installer to shrink the Windows partition. But I have seen on many websites that the partitioning tool in Ubuntu's installer cannot be guaranteed to shrink Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions from within Windows itself using its own tools. For example, the Ubuntu Community wiki says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation.

    As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e. GRUB files). However, many websites strongly suggest using a boot partition for GRUB files, especially to keep them separate from, and protected against, the installed OS files. I was wondering which way I should choose, and why.

    As to installing the bootloader GRUB: in iinet, installing GRUB only requires specifying the hard drive device for the bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands need to be run in order to put GRUB configuration files in the MBR and the OS partition for the chain-load process (where to find the files for the next stage). On the Ubuntu Community wiki there are some related sentences which I don't quite understand how to apply in practice: "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered during the times of the Vista Beta already. EasyBCD is user friendly and many Vista users highly recommend EasyBCD."

    Given what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu? And if I fail to change the MBR code, are the suggested alternative boot managers bcdedit.exe and EasyBCD in Windows? Of the three sources above, which one should I follow? Thanks and regards

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER: I found that its behavior was not the one I expected, and I investigated further. In the end, I wrote up my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument.

    The key point is that when you write

        LASTDATE( table[column] )

    in reality you obtain something like

        LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) )

    which converts an existing row context into a filter context. Thus, if you have something like

        FILTER( table, table[column] = LASTDATE( table[column] ) )

    the FILTER will return all the rows of the table, whereas you probably want to use

        FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) )

    so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE hiding the existing filter context. If after reading the article you want more insight, read Jeffrey Wang's post here.

    These days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early-bird discount until October 3rd! Then, in November, I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write other articles on interesting aspects of Tabular and PowerPivot. Stay tuned!

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: rebuild indexes in SQL Server. This can be done one index at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

    Why? The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. It's like loading your Christmas shopping into the trunk of your car: when the trunk is full, you continue loading onto the back seat. In the same way, some storage buffer is created for your index, but once that runs out the data is stored in other storage space, and the data in your index is no longer stored in contiguous physical pages. To access the index, the database manager has to string together the disparate fragments to produce the full index as one contiguous set of pages. Defragmentation fixes that.

    What does the fragmentation affect? Depending on how large the table is and how fragmented the data is, it can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

    Which index to rebuild? As a rule, when you rebuild a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can have only one clustered index.

    How to rebuild all the indexes for one table: the DBCC DBREINDEX command works one table at a time; it will not automatically rebuild all of the indexes in a database.

    How to rebuild all indexes for all tables in a given database:

        USE [myDB]    -- enter your database name here
        DECLARE @tableName varchar(255)
        DECLARE TableCursor CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'base table'
        OPEN TableCursor
        FETCH NEXT FROM TableCursor INTO @tableName
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX(@tableName, ' ', 90)    -- a fill factor of 90%
            FETCH NEXT FROM TableCursor INTO @tableName
        END
        CLOSE TableCursor
        DEALLOCATE TableCursor

    What does this script do? It reindexes all indexes in all tables of the given database. Each index is rebuilt with a fill factor of 90%. While DBCC DBREINDEX runs and rebuilds the indexes, the table becomes temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it takes a shared lock on the tables, although it still allows read-only uncommitted data reads, i.e. SELECT.

    What is the fill factor? It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor set when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

    How do I determine the level of fragmentation? Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index. To make it easier, requiring only the table and/or index name, you can run this script:

        DECLARE
            @ID int,
            @IndexID int,
            @IndexName varchar(128)

        -- Specify the table and index names
        SELECT @IndexName = 'index_name'    -- name of the index
        SET @ID = OBJECT_ID('table_name')   -- name of the table

        SELECT @IndexID = IndID
        FROM sysindexes
        WHERE id = @ID AND name = @IndexName

        -- Show the level of fragmentation
        DBCC SHOWCONTIG (@ID, @IndexID)

    Here is an example:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 915
        - Extents Scanned..............................: 119
        - Extent Switches..............................: 281
        - Avg. Pages per Extent........................: 7.7
        - Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
        - Logical Scan Fragmentation ..................: 16.28%
        - Extent Scan Fragmentation ...................: 99.16%
        - Avg. Bytes Free per Page.....................: 2457.0
        - Avg. Page Density (full).....................: 69.64%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's important here? The Scan Density: ideally it should be 100%. As time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

    Here are the results for the same table and clustered index after running the script:

        DBCC SHOWCONTIG scanning 'Tickets' table...
        Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
        TABLE level scan performed.
        - Pages Scanned................................: 692
        - Extents Scanned..............................: 87
        - Extent Switches..............................: 86
        - Avg. Pages per Extent........................: 8.0
        - Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
        - Logical Scan Fragmentation ..................: 0.00%
        - Extent Scan Fragmentation ...................: 22.99%
        - Avg. Bytes Free per Page.....................: 639.8
        - Avg. Page Density (full).....................: 92.10%
        DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    What's different? The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
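    Worth noting: DBCC DBREINDEX and DBCC SHOWCONTIG are deprecated from SQL Server 2005 onward. A brief sketch of the modern equivalents, reusing the hypothetical Tickets table from the example above:

        -- Measure fragmentation with the DMV that replaces DBCC SHOWCONTIG:
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(), OBJECT_ID('Tickets'), NULL, NULL, 'LIMITED');

        -- Rebuild all indexes on the table with a 90% fill factor,
        -- the ALTER INDEX replacement for DBCC DBREINDEX:
        ALTER INDEX ALL ON Tickets REBUILD WITH (FILLFACTOR = 90);

        -- For light fragmentation (roughly 5-30%), an online
        -- reorganize is usually enough and keeps the table available:
        ALTER INDEX ALL ON Tickets REORGANIZE;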

    Read the article

  • Ubuntu on an XPS 14 Ultrabook with mSATA cache and 500GB HD - how to partition for dual boot?

    - by JDS
    I am getting an XPS 14 ( http://www.dell.com/us/p/xps-14-l421x/pd ) and I want to dual-boot Windows and Ubuntu. This machine has a 500GB standard HD and a 32GB mSATA drive that can be used as cache. Does anyone know how it is partitioned? Is the OS installed on the mSATA drive with data on the big HD? Is there a BIOS controller, or maybe even a Windows driver, that makes the mSATA drive and the 500GB HD appear contiguous? I get the impression that something uses the mSATA invisibly as cache, but I can't find any technical documentation on how that works. My primary concern is dual-booting Ubuntu: I want to know whether I need to partition the mSATA separately, or the big HD, or just partition the "magic" contiguous disk space that appears available to the OS.

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >