Search Results

Search found 3324 results on 133 pages for 'gb j'.


  • How To Create A Download Quota.

    - by snikolov
    I need to create a handy file downloader which will count the number of bytes downloaded and stop once it has exceeded a preset limit. I need to mirror some files, but I only have 7 GB of bandwidth per month and I don't want to exceed the limit. Limits can be expressed in bytes or in number of files; each user has their own limit, as well as a limit for Download Quota itself. So if you set a limit of 2 gigabytes for Download Quota, downloads stop at 2 gigabytes, even if you have 3 users with a limit of 1 gigabyte each. if ($range) { //pass client Range header to rapidshare // _insert($range); $cookie .= "\r\nRange: $range"; $multipart = true; header("X-UR-RANGE-Range: $range"); } //octet-stream + attachment => client always stores file header('Content-type: application/octet-stream'); header('Content-Disposition: attachment; filename="' . $fn . '"'); //always included so clients know this script supports resuming header("Accept-Ranges: bytes"); //awful hack to pass rapidshare the premium cookie $user_agent = ini_get("user_agent"); ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie"); $httphandle = fopen($url, "r"); $headers = stream_get_meta_data($httphandle); //let's check the return header of rapidshare for range / length indicators //we'll just pass these to the client foreach ($headers["wrapper_data"] as $header) { $header = trim($header); if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") { // _insert($range); header($header); header("X-RS-RANGE-" . $header); $multipart = true; //content-range indicates partial download } elseif (substr(strtolower($header), 0, strlen("Content-Length")) == "content-length") { // _insert($range); header($header); header("X-RS-CL-" . $header); } } //now show the client he has a partial download if ($multipart) header('HTTP/1.1 206 Partial Content'); flush(); $download_rate = 100; while (!feof($httphandle)) { // send the current file part to the browser $var_stat = fread($httphandle, round($download_rate * 1024)); $var12 = strlen($var_stat); ////////////////////////////////// echo $var_stat; ///////////////////////////////// // flush the content to the browser flush(); // sleep one second sleep(1); }
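    The quota part of this is just byte accounting wrapped around the transfer loop: keep a persistent counter of bytes already used this month, add each chunk before sending it, and stop when the next chunk would push the counter past the limit. The question's script is PHP, but here is a minimal sketch of that idea in C++; the 7 GB monthly limit, the counter file name, and the input/output file names are assumptions for illustration only.

    ```cpp
    #include <cstdint>
    #include <fstream>
    #include <iostream>

    // Hypothetical monthly quota: 7 GB, tracked in a small counter file.
    constexpr std::uint64_t kMonthlyQuotaBytes = 7ULL * 1024 * 1024 * 1024;
    const char* kCounterFile = "quota_used.txt"; // assumed location

    std::uint64_t load_used() {
        std::ifstream in(kCounterFile);
        std::uint64_t used = 0;
        in >> used;               // stays 0 if the file does not exist yet
        return used;
    }

    void save_used(std::uint64_t used) {
        std::ofstream out(kCounterFile, std::ios::trunc);
        out << used;
    }

    // Copies src to dst until EOF or until the monthly quota would be exceeded.
    // Returns false if the transfer was cut short by the quota.
    bool copy_with_quota(std::istream& src, std::ostream& dst) {
        std::uint64_t used = load_used();
        char buf[64 * 1024];
        while (src.read(buf, sizeof(buf)) || src.gcount() > 0) {
            std::uint64_t n = static_cast<std::uint64_t>(src.gcount());
            if (used + n > kMonthlyQuotaBytes) {
                save_used(used);
                return false;     // stop before breaking the limit
            }
            dst.write(buf, static_cast<std::streamsize>(n));
            used += n;
        }
        save_used(used);
        return true;
    }

    int main() {
        std::ifstream src("mirror_source.bin", std::ios::binary);   // assumed input
        std::ofstream dst("mirror_copy.bin", std::ios::binary);     // assumed output
        std::cout << (copy_with_quota(src, dst) ? "done" : "quota reached") << "\n";
    }
    ```

    Per-user limits would work the same way, with one counter per user instead of a single global one.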

    Read the article

  • Designing a database file format

    - by RoliSoft
    I would like to design my own database engine for educational purposes, for the time being. Designing a binary file format is neither hard nor the question; I've done it in the past. But while designing a database file format, I have come across a very important question: how to handle the deletion of an item? So far, I've thought of the following options: Each item will have a "deleted" bit which is set to 1 upon deletion. Pro: relatively fast. Con: potentially sensitive data will remain in the file. 0x00 out the whole item upon deletion. Pro: potentially sensitive data will be removed from the file. Con: relatively slow. Recreating the whole database. Pro: no empty blocks, which makes the follow-up question void. Con: it's a really good idea to overwrite the whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP! Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item? Append the item to the end of the file. Pro: fastest possible. Con: file will get huge because of all the empty blocks that remain because deleted items aren't actually deleted. Search for an empty block exactly the size of the one you're inserting. Pro: may get rid of some blocks. Con: you may end up scanning the whole file at each insert only to find out it's very unlikely to come across a perfectly fitting empty block. Find the first empty block which is equal or larger than the item you're inserting. Pro: you probably won't end up scanning the whole file, as you will find an empty block somewhere mid-way; this will keep the file size relatively low. Con: there will still be lots of leftover 0x00 bytes at the end of items which were inserted into bigger empty blocks than they are. Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively, the first insertion method and scheduled full database recreation. (Probably not a good idea when working with really large databases. Also, each small update in that method will clone the whole item to the end of the file, thus accelerating file growth at a potentially insane rate.) Unless there is a way of deleting/inserting blocks from/to the middle of the file in a file-system approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
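    For the combination the question leans towards (tombstone or zero-out deletes plus first-fit inserts), the usual supporting structure is a free list of reclaimed blocks kept alongside the data. Below is a minimal sketch of that idea; the struct layout and function names are invented for illustration and do not describe any particular engine. As for production systems, most use page-based storage with a free-page list plus a compaction/vacuum step (SQLite keeps a freelist of unused pages, PostgreSQL reclaims dead rows with VACUUM) rather than byte-granular holes.

    ```cpp
    #include <cstdint>
    #include <fstream>
    #include <vector>

    // Hypothetical free-list entry: offset and size of a hole left by a deleted item.
    struct FreeBlock {
        std::uint64_t offset;
        std::uint64_t size;
    };

    // First-fit allocation over the free list; falls back to appending at end of file.
    // Returns the file offset at which the new item should be written.
    std::uint64_t allocate(std::vector<FreeBlock>& free_list,
                           std::uint64_t item_size,
                           std::uint64_t& end_of_file) {
        for (auto it = free_list.begin(); it != free_list.end(); ++it) {
            if (it->size >= item_size) {
                std::uint64_t offset = it->offset;
                if (it->size == item_size) {
                    free_list.erase(it);          // exact fit: hole fully consumed
                } else {
                    it->offset += item_size;      // shrink the hole from the front
                    it->size   -= item_size;
                }
                return offset;
            }
        }
        std::uint64_t offset = end_of_file;       // no suitable hole: append
        end_of_file += item_size;
        return offset;
    }

    // Tombstone delete: remember the hole and (optionally) zero it on disk.
    void release(std::vector<FreeBlock>& free_list, std::fstream& file,
                 std::uint64_t offset, std::uint64_t size, bool wipe) {
        if (wipe) {
            std::vector<char> zeros(size, 0);
            file.seekp(static_cast<std::streamoff>(offset));
            file.write(zeros.data(), static_cast<std::streamsize>(size));
        }
        free_list.push_back({offset, size});
    }
    ```

    The free list itself can be persisted in a reserved region of the file so it survives restarts; that detail is omitted here.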

    Read the article

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void-pointers to these generic modules, together with an identification of the type, like this: Car myCar; ShowNiceDialog ((void *)&myCar, DATATYPE_CAR); The 'ShowNiceDialog' method now uses meta-information (functions that map DATATYPE_CAR to interfaces to get the actual data out of Car) to get information of the car, based on the given data type. That way, the generic logic only has to be written once, and not every time again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this class RootClass { public: string getName() const = 0; }; class Car : public RootClass { ... }; void ShowNiceDialog (RootClass *root); The problem is that in some cases, we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with 2 characteristics: A quantity (double, 8 bytes) A boolean (1 byte) Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of the padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can almost encode anything in a void-pointer and decide how to use it if we want information from it (use it as a real pointer, as an index, ...), but at the cost of type-safety. Templated solutions don't help since the generic logic forms quite a big part of the application, and we don't want to templatize all this. Additionally, the data model can be extended at run time, which also means that templates won't help. Are there better (and type-safer) ways to handle this than a void-pointer? Any references to frameworks, whitepapers, research material regarding this?
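    One common middle ground between raw void pointers and a mandatory base class is a small typed handle: a type id plus an index into a per-type pool, with the per-type accessors looked up in a registry that can grow at run time. The handle stays 8 bytes, the bulk data can live in packed parallel arrays (9 bytes per instance for the quantity-plus-boolean example above), and the generic code gets at least a run-time type check instead of a blind cast. The sketch below is illustrative only; Handle, TypeInfo, registerType and the parallel-array storage are invented names, not a specific framework.

    ```cpp
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // 8-byte handle: which registered type this is, and where it lives in that type's pool.
    struct Handle {
        std::uint32_t type;   // index into the run-time type registry
        std::uint32_t index;  // index into that type's storage pool
    };

    // Accessors for one type, registered at run time; new types can be added later.
    struct TypeInfo {
        std::string name;
        std::function<std::string(std::uint32_t)> describe;  // reads from the type's pool
    };

    std::vector<TypeInfo> g_registry;

    std::uint32_t registerType(TypeInfo info) {
        g_registry.push_back(std::move(info));
        return static_cast<std::uint32_t>(g_registry.size() - 1);
    }

    // Generic code sees only handles; at() gives a run-time check instead of a blind cast.
    void ShowNiceDialog(Handle h) {
        const TypeInfo& t = g_registry.at(h.type);
        std::cout << t.name << ": " << t.describe(h.index) << "\n";
    }

    // Compact storage for a hypothetical 9-byte "measurement" type: parallel arrays,
    // no padding, no v-table, so hundreds of millions of instances stay affordable.
    std::vector<double> g_quantities;
    std::vector<std::uint8_t> g_flags;

    int main() {
        g_quantities.push_back(42.0);
        g_flags.push_back(1);

        std::uint32_t measurementType = registerType({
            "Measurement",
            [](std::uint32_t i) {
                return std::to_string(g_quantities[i]) +
                       (g_flags[i] ? " (flag set)" : " (flag clear)");
            }
        });

        ShowNiceDialog(Handle{measurementType, 0});
    }
    ```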

    Read the article

  • CakePHP file upload problem.

    - by vatismarty
    I am using CakePHP as my framework. I have a problem uploading files through a form. I am using an Uploader plugin from THIS website. My php.ini file has these settings: upload_max_filesize = 10M post_max_size = 8M This is from uploader.php (the plugin file): var $maxFileSize = '5M'; //default max file size In my controller.php file, I use this code to set the max file size to 1 MB at runtime: function beforeFilter() { parent::beforeFilter(); $this->Uploader->maxFileSize = '1M'; } In uploader.php, the plugin performs this ... if (empty($this->maxFileSize)) { $this->maxFileSize = ini_get('upload_max_filesize'); //landmark 1 } $byte = preg_replace('/[^0-9]/i', '', $this->maxFileSize); $last = $this->bytes($this->maxFileSize, 'byte'); if ($last == 'T' || $last == 'TB') { $multiplier = 1; $execTime = 20; } else if ($last == 'G' || $last == 'GB') { $multiplier = 3; $execTime = 10; } else if ($last == 'M' || $last == 'MB') { $multiplier = 5; $execTime = 5; } else { $multiplier = 10; $execTime = 3; } ini_set('memore_limit', (($byte * $multiplier) * $multiplier) . $last); ini_set('post_max_size', ($byte * $multiplier) . $last); //error suspected here ini_set('upload_tmp_dir', $this->tempDir); ini_set('upload_max_filesize', $this->maxFileSize); //landmark 2 EXPECTED RESULT: When I try uploading a file that is 2 MB in size, the upload shouldn't take place because maxFileSize is 1 MB at run time, so the upload should fail. THE PROBLEM IS: it gets uploaded anyway. Landmark 1 does not get executed (see the comments), and landmark 2 does not seem to work: upload_max_filesize does not get the value from maxFileSize. Please help me. Thank you.

    Read the article

  • Xcode + Perforce: it frequently shows me the spinning wheel for no reason! What can I do?

    - by GamingHorror
    While working on an Xcode project I keep getting the spinning wheel while switching files, scrolling, searching, typing, debugging, removing breakpoints, switching back from another app or saving. It also happens before compiling, but usually it just happens from time to time for no apparent reason. This is the second time this has started happening in an Xcode project and it's driving me nuts. It completely breaks my flow of work to have to wait for the spinning wheel to go away (2-5 seconds). What on earth could I possibly do to ... figure out what's causing the problem? resolve the problem? More details: When any project is small, everything is super-smooth with Xcode and Perforce. Two of my projects eventually had this spinning wheel problem after about 4 weeks of work. It has only happened with those two projects so far. They consist of around 1000-1200 files in source control, most of them assets. The problem occurs even if I manually check out the whole project in Perforce. The problem is gone when I copy the project directory and work in the copy, which is no longer under source control, or if I create a branch in Perforce and work in the branch (under source control). One of these projects I shared with a colleague, and he had exactly the same issues on his Mac. We eventually switched to Subversion and the spinning-wheel issue immediately went away. Now that I've received an updated copy of the project and simply put it under Perforce as a new project, the problem has also gone away (so far it has not resurfaced). It leads me to think that it may be caused by a larger number of file revisions. The server itself (version 2009.1) is on a different (Windows) machine on my LAN, so there's definitely no Internet lag involved. The complete repository is just 1 GB in size, spread over a dozen projects or so. I'm sorry if the question seems more like a support inquiry for Perforce. However, I'm using the free version of Perforce, so I'm not entitled to support from them. I hope no one minds me asking here. I'm really bummed out by this. I don't want to have to create a new branch for the project each time the spinning wheel problem surfaces.

    Read the article

  • StAX: Content not allowed in prolog

    - by RalfB
    I have the following (test) XML file below and Java code that uses StAX. I want to apply this code to a file that is about 30 GB in size but with fairly small elements, so I thought StAX would be a good choice. I am getting the following error: Exception in thread "main" javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598) at at.tuwien.mucke.util.xml.staxtest.StaXTest.main(StaXTest.java:18) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) <?xml version='1.0' encoding='utf-8'?> <catalog> <book id="bk101"> <author>Gambardella, Matthew</author> <title>XML Developer's Guide</title> <price>44.95</price> <description>An in-depth look at creating applications with XML.</description> </book> <book id="bk102"> <author>Ralls, Kim</author> <title>Midnight Rain</title> <price>5.95</price> <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description> </book> </catalog> Here is the code: package xml.staxtest; import java.io.*; import javax.xml.stream.*; public class StaXTest { public static void main(String[] args) throws Exception { XMLInputFactory xif = XMLInputFactory.newInstance(); XMLStreamReader streamReader = xif.createXMLStreamReader(new FileReader("D:/Data/testFile.xml")); while(streamReader.hasNext()){ int eventType = streamReader.next(); if(eventType == XMLStreamReader.START_ELEMENT){ System.out.println(streamReader.getLocalName()); } //... more to come here later ... } } }

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
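    The hybrid described above (buffer in memory up to a threshold, then spill to a file) is easy to hide behind a single write call so the receiving code never has to care which backing store is in use. The question is about .NET's MemoryStream/FileStream, but the pattern itself is language-neutral; here is a small C++ sketch of it, where the 500 MB threshold, the class name, and the temp-file path are assumptions for illustration.

    ```cpp
    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    // Buffers incoming packets in memory until a threshold, then spills to disk.
    class SpillBuffer {
    public:
        SpillBuffer(std::uint64_t threshold, std::string path)
            : threshold_(threshold), path_(std::move(path)) {}

        void write(const char* data, std::size_t n) {
            if (!spilled_ && memory_.size() + n > threshold_) {
                spill();                                   // one-time switch to disk
            }
            if (spilled_) {
                file_.write(data, static_cast<std::streamsize>(n));
            } else {
                memory_.insert(memory_.end(), data, data + n);
            }
        }

        // After receiving: process the in-memory part first, then stream the file part.
        const std::vector<char>& memoryPart() const { return memory_; }
        const std::string& filePart() const { return path_; }

    private:
        void spill() {
            file_.open(path_, std::ios::binary | std::ios::trunc);
            spilled_ = true;
            // The already-buffered bytes stay in memory_ and get processed first,
            // as suggested in the question; alternatively flush memory_ into file_.
        }

        std::uint64_t threshold_;
        std::string path_;
        std::vector<char> memory_;
        std::ofstream file_;
        bool spilled_ = false;
    };

    int main() {
        SpillBuffer buf(500ULL * 1024 * 1024, "received.tmp");  // assumed threshold/path
        char packet[4096] = {};
        buf.write(packet, sizeof(packet));                      // called once per packet
    }
    ```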

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here; both are doing exactly the same task. They just set a boolean array / vector to the value true. The program using a vector takes 27 seconds to run, whereas the program involving an array with 5 times the size takes less than 1 s. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient? Program using vectors #include <iostream> #include <vector> #include <ctime> using namespace std; int main(){ const int size = 2000; time_t start, end; time(&start); vector<bool> v(size); for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - 27 seconds Program using Array #include <iostream> #include <ctime> using namespace std; int main(){ const int size = 10000; // 5 times more size time_t start, end; time(&start); bool v[size]; for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - < 1 second Platform - Visual Studio 2008 OS - Windows Vista 32 bit SP 1 Processor Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz Memory (RAM) 1.00 GB Thanks Amare
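    Part of the measured gap comes from the test itself rather than from vectors: the two programs use different sizes (2000 vs. 10000, i.e. 4 million vs. 100 million assignments), time() has only one-second resolution, and an unoptimized (Debug) build is especially hard on vector<bool>, which is a packed-bit specialization whose operator[] goes through a proxy object instead of returning a plain bool&. Below is a sketch of a more even-handed comparison (the same size for every contender, steady_clock timing, and vector<char> included as the unpacked alternative); the numbers are only meaningful with optimizations enabled.

    ```cpp
    #include <chrono>
    #include <iostream>
    #include <vector>

    // Times size*size element writes into the given container-like object.
    template <typename Container>
    long long timeFill(Container& v, int size) {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < size; ++i)
            for (int j = 0; j < size; ++j)
                v[i] = true;
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    }

    int main() {
        const int size = 2000;                      // identical size for every contender
        std::vector<bool> vb(size);                 // packed bits, proxy operator[]
        std::vector<char> vc(size);                 // one byte per flag, plain reference
        std::vector<int>  vi(size);                 // for comparison

        std::cout << "vector<bool>: " << timeFill(vb, size) << " ms\n";
        std::cout << "vector<char>: " << timeFill(vc, size) << " ms\n";
        std::cout << "vector<int>:  " << timeFill(vi, size) << " ms\n";
    }
    ```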

    Read the article

  • Why can't I reserve 1,000,000,000 in my vector?

    - by vipersnake005
    When I type in the following code, I get the output 1073741823. #include <iostream> #include <vector> using namespace std; int main() { vector <int> v; cout<<v.max_size(); return 0; } However, when I try to resize the vector to 1,000,000,000 with v.resize(1000000000); the program stops executing. How can I enable the program to allocate the required memory, when it seems that it should be able to? I am using MinGW on Windows 7. I have 2 GB RAM. Should it not be possible? In case it is not possible, can't I declare it as an array of integers and get away with it? But even that doesn't work. Another thing is that, suppose I were to use a file (which can easily handle that much data). How can I have it read and write at the same time? Using fstream file("file.txt", ios::out | ios::in ); doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run: #include <fstream> #include <iostream> using namespace std; int main() { fstream file("file.txt",ios::in|ios::out); char x; while( file>>x) { file<<'0'; } return 0; } Shouldn't the file's contents now be 101010? Read one character and then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, shouldn't there be at least one 0 in the file? 1111110? But the contents remain unaltered. Please explain. Thank you.
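    On the file part of the question: a std::fstream opened with ios::in | ios::out does not create a missing file (add ios::trunc, or create the file beforehand), and when alternating reads and writes on the same stream you generally must reposition with seekg/seekp when switching direction, otherwise the writes are not guaranteed to land where you expect. Here is a small sketch that produces the 101010 pattern asked about, assuming file.txt already exists and contains exactly 111111.

    ```cpp
    #include <fstream>
    #include <iostream>

    int main() {
        // Assumes file.txt already exists (ios::in | ios::out does not create it).
        std::fstream file("file.txt", std::ios::in | std::ios::out | std::ios::binary);
        if (!file) {
            std::cerr << "file.txt not found\n";
            return 1;
        }

        char c;
        while (file.get(c)) {                 // read one character
            std::streampos next = file.tellg();
            file.seekp(next);                 // reposition before switching to writing
            file.put('0');                    // overwrite the character after the one read
            file.seekg(file.tellp());         // reposition again before the next read
        }
        std::cout << "done\n";
    }
    ```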

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
    The Ubuntu Live CD isn’t just useful for trying out Ubuntu before you install it, you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you! Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should backup the files on your flash drive. Put Ubuntu on your flash drive UNetbootin doesn’t require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, then select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive that you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives. Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then, it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, an when it’s done, click on Exit. You’re not planning on installing Ubuntu right now, so there’s no need to reboot. If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You’re now ready to boot your computer into Ubuntu 9.10! How to boot into Ubuntu When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off of the flash drive. The steps to do this will vary depending on your BIOS – which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard’s manual (or your laptop’s manual for a laptop). For general instructions, which will suffice for 99% of you, read on. Find the important keyboard keys When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section. Easy: Using the Boot Menu If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn’t have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don’t worry if it doesn’t work – you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal. Hard: Using Setup If your BIOS doesn’t offer a Boot Menu, then you will have to change the boot order in Setup. 
    Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to only change the boot order options. Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, then switch to it and change the order such that one of the USB options occurs first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a boot tab, boot order is commonly found in Advanced CMOS Options. Note that this changes the boot order permanently until you change it back. If you plan on only plugging in a bootable flash drive when you want to boot from it, then you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu. Booting into Ubuntu If you set the right boot option, then you should be greeted with the UNetbootin screen. Press enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that’s it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session – the next time you start up the live CD it will be back to its original state. Download UNetbootin from sourceforge.net

    Read the article

  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
    Deleting files or quickly formatting a drive isn’t enough for sensitive personal information. We’ll show you how to get rid of it for good using a Ubuntu Live CD. When you delete a file in Windows, Ubuntu, or any other operating system, it doesn’t actually destroy the data stored on your hard drive, it just marks that data as “deleted.” If you overwrite it later, then that data is generally unrecoverable, but if the operating system don’t happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely delete files or entire hard drives, your data will be gone for good. Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and all methods that are readily available to the general public. Shred individual files Most of the data stored on your hard drive is harmless, and doesn’t reveal anything about you. If there are just a few files that you know you don’t want someone else to see, then the easiest way to get rid of them is a built-in Linux utility called shred. Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory. As an example, we’ve got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we’ll call shred by entering the following in the terminal window: shred <file> which is, in our example: shred BankInfo.txt Notice that our BankInfo.txt file still exists, even though we’ve shredded it. A quick look at the contents of BankInfo.txt make it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra-careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in: shred –remove –iterations=<num> <file> By default, shred overwrites the file 25 times. We’ll double this, giving us the following command: shred –remove –iterations=50 BankInfo.txt BankInfo.txt has now been securely wiped on the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive! Wipe entire hard drives If you’re disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems, the shred process may be reversible. We’ll use the program wipe to securely delete all of the data on a hard drive. Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We’ll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to “Community-maintained Open Source software (universe)”. Click Close. 
    You’ll need to reload Synaptic’s package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to “Rebuilding search index”. Wait until it reads “Quick search,” and then type “wipe” into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label “wipe” and select “Mark for Installation”. Click on the Apply button to start the installation process. Click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window. Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in: sudo fdisk -l A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list – Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB in size, and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately. To start the wiping process, type the following into the terminal: sudo wipe <device label> In our case, this is: sudo wipe /dev/sda1 Again, exercise caution – this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you’re wiping. Conclusion If you have sensitive information on your hard drive – and chances are you probably do – then it’s a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from a Ubuntu Live CD is a good alternative!

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management. Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server Now that we understand SQL Server Memory Management and Hyper-V Dynamic Memory basics, let’s take a look at general configuration guidelines in order to utilize benefits of Hyper-V Dynamic Memory in your SQL Server VMs. Requirements Host Operating System Requirements Hyper-V Dynamic Memory feature is introduced with Windows Server 2008 R2 SP1. Therefore in order to use Dynamic Memory for your virtual machines, you need to have Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 in your Hyper-V host. Guest Operating System Requirements In addition to this Dynamic Memory is only supported in Standard, Web, Enterprise and Datacenter editions of windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements on each operating system see “Dynamic Memory Configuration Guidelines” here. SQL Server Requirements All versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs make sure that you are running one of the SQL Server editions listed below: ·         SQL Server 2005 Enterprise ·         SQL Server 2008 Enterprise / Datacenter Editions ·         SQL Server 2008 R2 Enterprise / Datacenter Editions Configuration guidelines for other versions of SQL Server are covered below in the FAQ section. Guidelines for configuring Dynamic Memory Parameters Here is how to configure Dynamic Memory for your SQL VMs in a nutshell: Hyper-V Dynamic Memory Parameter Recommendation Startup RAM 1 GB + SQL Min Server Memory Maximum RAM > SQL Max Server Memory Memory Buffer % 5 Memory Weight Based on performance needs   Startup RAM In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than configured SQL Min Server Memory for your VMs. Otherwise SQL Server service will need to do paging in order to start since it will not be able to see enough memory during startup. Also note that Startup Memory will always be reserved for your VMs. This will guarantee a certain level of performance for your SQL Servers, however setting this too high will limit the consolidation benefits you’ll get out of your virtualization environment. Maximum RAM This one is obvious. If you’ve configured SQL Max Server Memory for your SQL Server, make sure that Dynamic Memory Maximum RAM configuration is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory. Memory Buffer % Memory buffer configuration is used to provision file cache to virtual machines in order to improve performance. Due to the fact that SQL Server is managing its own buffer pool, Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low resource notifications from Windows Memory Manager and it will prevent reclaiming memory from SQL Server VMs. Memory Weight Memory weight configuration defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements. 
VMs with higher memory weight will have more memory under high memory pressure conditions on your host. Questions and Answers Q1 – Which SQL Server memory model is best for Dynamic Memory? The best SQL Server model for Dynamic Memory is “Locked Page Memory Model”. This memory model ensures that SQL Server memory is never paged out and it’s also adaptive to dynamically changing memory in the system. This will be extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs ensuring no SQL Server memory is paged out. You can find instructions on configuring “Locked Page Memory Model” for your SQL Servers here. Q2 – What about other SQL Server Editions, how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They will determine how much memory they should allocate during startup and don’t change this value afterwards. Therefore make sure that you configure a higher startup memory for your VM because that will be all the memory that SQL Server utilize Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads consider using Static Memory for these editions. Q3 – What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve a predictable behavior make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM. And make sure that: ·         Dynamic Memory Startup Memory is greater than the sum of SQL Min Server Memory values for the instances in the VM ·         Dynamic Memory Maximum Memory is greater than the sum of SQL Max Server Memory values for the instances in the VM Q4 – I’m using Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default. Most of the time, Large Page Memory Model doesn’t bring any benefits to a SQL Server if it’s running in virtualized environments. Q5 – How do I monitor SQL performance when I’m trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server: Process - Working Set: This counter is available in the VM via process performance counters. It represents the actual amount of physical memory being used by SQL Server process in the VM. SQL Server – Buffer Cache Hit Ratio: This counter is available in the VM via SQL Server counters. This represents the paging being done by SQL Server. A rate of 90% or higher is desirable. Conclusion These blog posts are a quick start to a story that will be developing more in the near future. We’re still continuing our testing and investigations to provide more detailed configuration guidelines with example performance numbers with a white paper in the upcoming months. Now it’s time to give SQL Server and Hyper-V Dynamic Memory a try. Use this guidelines to kick-start your environment. See what you think about it and let us know of your experiences. - Serdar Sutay Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; Above example code creates files on three different drives (D:  S: and R:) for the data files and in memory storage so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation support at this point for any indexes on memory optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA file group to an existing database, use the below command to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk based table or a memory –optimized table but yes there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause as shown in the Create Table query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) Memory-optimized table should always be defined with a DURABILITY value which can be either SCHEMA_AND_DATA or  SCHEMA_ONLY the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk which means the data durability is compromised whereas DURABILITY= SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table must always have an index for all tables created with DURABILITY= SCHEMA_AND_DATA and this can be achieved by declaring a PRIMARY KEY Constraint at the time of creating a table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered as a memory optimized table and not just a normal table and also used the DURABILITY Clause= SCHEMA_AND_DATA which means it will persist data along with metadata and also you can notice this table has a PRIMARY KEY mentioned upfront which is also a mandatory clause for memory-optimized tables. We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well. Now as we covered the basics of Memory Optimized tables and understood the key things to remember while using memory optimized tables, let’s explore more using examples to understand the Performance gains using memory-optimized tables. 
I will be using the database which i created earlier in this article i.e. InMemoryDB in the below Demo Exercise. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 Inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 Gb RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business table and memory- optimized tables with Durability SCHEMA_ONLY is even faster as it does not bother persisting its data to disk which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory_Optimized tables on you, I hope you like this article and it helped you understand  the fundamentals of IN-Memory OLTP . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • Web Platform Installer bundles for Visual Studio 2010 SP1 - and how you can build your own WebPI bundles

    - by Jon Galloway
    Visual Studio SP1 is  now available via the Web Platform Installer, which means you've got three options: Download the 1.5 GB ISO image Run the 750KB Web Installer (which figures out what you need to download) Install via Web PI Note: I covered some tips for installing VS2010 SP1 last week - including some that apply to all of these, such as removing options you don't use prior to installing the service pack to decrease the installation time and download size. Two Visual Studio 2010 SP1 Web PI packages There are actually two WebPI packages for VS2010 SP1. There's the standard Visual Studio 2010 SP1 package [Web PI link], which includes (quoting ScottGu's post): VS2010 2010 SP1 ASP.NET MVC 3 (runtime + tools support) IIS 7.5 Express SQL Server Compact Edition 4.0 (runtime + tools support) Web Deployment 2.0 The notes on that package sum it up pretty well: Looking for the latest everything? Look no further. This will get you Visual Studio 2010 Service Pack 1 and the RTM releases of ASP.NET MVC 3, IIS 7.5 Express, SQL Server Compact 4.0 with tooling, and Web Deploy 2.0. It's the value meal of Microsoft products. Tell your friends! Note: This bundle includes the Visual Studio 2010 SP1 web installer, which will dynamically determine the appropriate service pack components to download and install. This is typically in the range of 200-500 MB and will take 30-60 minutes to install, depending on your machine configuration. There is also a Visual Studio 2010 SP1 Core package [Web PI link], which only includes only the SP without any of the other goodies (MVC3, IIS Express, etc.). If you're doing any web development, I'd highly recommend the main pack since it the other installs are small, simple installs, but if you're working in another space, you might want the core package. Installing via the Web Platform Installer I generally like to go with the Web PI when possible since it simplifies most software installations due to things like: Smart dependency management - installing apps or tools which have software dependencies will automatically figure out which dependencies you don't have and add them to the list (which you can review before install) Simultaneous download and install - if your install includes more than one package, it will automatically pull the dependencies first and begin installing them while downloading the others Lists the latest downloads - no need to search around, as they're all listed based on a live feed Includes open source applications - a lot of popular open source applications are included as well as Microsoft software and tools No worries about reinstallation - WebPI installations detect what you've got installed, so for instance if you've got MVC 3 installed you don't need to worry about the VS2010 SP1 package install messing anything up In addition to the links I included above, you can install the WebPI from http://www.microsoft.com/web/downloads/platform.aspx, and if you have Web PI installed you can just tap the Windows key and type "Web Platform" to bring it up in the Start search list. You'll see Visual Studio SP1 listed in the spotlight list as shown below. That's the standard package, which includes MVC 3 / IIS 7.5 Express / SQL Compact / Web Deploy. If you just want the core install, you can use the search box in the upper right corner, typing in "Visual Studio SP1" as shown. Core Install: Use Web PI or the Visual Studio Web Installer? I think the big advantage of using Web PI to install VS 2010 SP1 is that it includes the other new bits. 
If you're going to install the SP1 core, I don't think there's as much advantage to using Web PI, as the Web PI Core install just downloads the Visual Studio Web Installer anyways. I think Web PI makes it a little easier to find the download, but not a lot. The Visual Studio Web Installer checks dependencies, so there's no big advantage there. If you do happen to hit any problems installing Visual Studio SP1 via Web PI, I'd recommend running the Visual Studio Web Installer, then running the Web PI VS 2010 SP1 package to get all the other goodies. I talked to one person who hit some random snag, recommended that, and it worked out. Custom Web Platform Installer bundles You can create links that will launch the Web Platform Installer with a custom list of tools. You can see an example of this by clicking through on the install button at http://asp.net/downloads (cancelling the installation dialog). You'll see this in the address bar: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;ASPNET;NETFramework4;SQLExpress;VWD Notice that the appid querystring parameter includes a semicolon delimited list, and you can make your own custom Web PI links with your own desired app list. I can think of a lot of cases where that would be handy: linking to a recommended software configuration from a software project or product, setting up a recommended / documented / supported install list for a software development team or IT shop, etc. For instance, here's a link that installs just VS2010 SP1 Core and the SQL CE tools: http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=VS2010SP1Core;SQLCETools Note: If you've already got all or some of the products installed, the display will reflect that. On my dev box which has the full SP1 package, here's what the above link gives me: Here's another example - on a fresh box I created a link to install MVC 3 and the Web Farm Framework (http://www.microsoft.com/web/gallery/install.aspx?appsxml=&appid=MVC3;WebFarmFramework) and got the following items added to the cart: But where do I get the App ID's? Aha, that's the trick. You can link to a list of cool packages, but you need to know the App ID's to link to them. To figure that out, I turned on tracing in Web Platform Installer  (also handy if you're ever having trouble with a WebPI install) and from the trace logs saw that the list of packages is pulled from an XML file: DownloadManager Information: 0 : Loading product xml from: https://go.microsoft.com/?linkid=9763242 DownloadManager Verbose: 0 : Connecting to https://go.microsoft.com/?linkid=9763242 with (partial) headers: Referer: wpi://2.1.0.0/Microsoft Windows NT 6.1.7601 Service Pack 1 If-Modified-Since: Wed, 09 Mar 2011 14:15:27 GMT User-Agent:Platform-Installer/3.0.3.0(Microsoft Windows NT 6.1.7601 Service Pack 1) DownloadManager Information: 0 : https://go.microsoft.com/?linkid=9763242 responded with 302 DownloadManager Information: 0 : Response headers: HTTP/1.1 302 Found Cache-Control: private Content-Length: 175 Content-Type: text/html; charset=utf-8 Expires: Wed, 09 Mar 2011 22:52:28 GMT Location: https://www.microsoft.com/web/webpi/3.0/webproductlist.xml Server: Microsoft-IIS/7.5 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Wed, 09 Mar 2011 22:53:27 GMT Browsing to https://www.microsoft.com/web/webpi/3.0/webproductlist.xml shows the full list. You can search through that in your browser / text editor if you'd like, open it in Excel as an XML table, etc. 
Here's a list of the App ID's as of today: SMO SMO32 PHP52ForIISExpress PHP53ForIISExpress StaticContent DefaultDocument DirectoryBrowse HTTPErrors HTTPRedirection ASPNET NETExtensibility ASP CGI ISAPIExtensions ISAPIFilters ServerSideIncludes HTTPLogging LoggingTools RequestMonitor Tracing CustomLogging ODBCLogging BasicAuthentication WindowsAuthentication DigestAuthentication ClientCertificateMappingAuthentication IISClientCertificateMappingAuthentication URLAuthorization RequestFiltering IPSecurity StaticContentCompression DynamicContentCompression IISManagementConsole IISManagementScriptsAndTools ManagementService MetabaseAndIIS6Compatibility WASProcessModel WASNetFxEnvironment WASConfigurationAPI IIS6WPICompatibility IIS6ScriptingTools IIS6ManagementConsole LegacyFTPServer FTPServer WebDAV LegacyFTPManagementConsole FTPExtensibility AdminPack AdvancedLogging WebFarmFrameworkNonLoc ExternalCacheNonLoc WebFarmFramework WebFarmFrameworkv2 WebFarmFrameworkv2_beta ExternalCache ECacheUpdate ARRv1 ARRv2Beta1 ARRv2Beta2 ARRv2RC ARRv2NonLoc ARRv2 ARRv2Update MVC MVCBeta MVCRC1 MVCRC2 DBManager DbManagerUpdate DynamicIPRestrictions DynamicIPRestrictionsUpdate DynamicIPRestrictionsLegacy DynamicIPRestrictionsBeta2 FTPOOB IISPowershellSnapin RemoteManager SEOToolkit VS2008RTM MySQL SQLDriverPHP52IIS SQLDriverPHP53IIS SQLDriverPHP52IISExpress SQLDriverPHP53IISExpress SQLExpress SQLManagementStudio SQLExpressAdv SQLExpressTools UrlRewrite UrlRewrite2 UrlRewrite2NonLoc UrlRewrite2RC UrlRewrite2Beta UrlRewrite10 UrlScan MVC3Installer MVC3 MVC3LocInstaller MVC3Loc MVC2 VWD VWD2010SP1Pack NETFramework4 WebMatrix WebMatrix_v1Refresh IISExpress IISExpress_v1 IIS7 AspWebPagesVS AspWebPagesVS_1_0 Plan9 Plan9Loc WebMatrix_WHP SQLCE SQLCETools SQLCEVSTools SQLCEVSTools_4_0 SQLCEVSToolsInstaller_4_0 SQLCEVSToolsInstallerNew_4_0 SQLCEVSToolsInstallerRepair_EN_4_0 SQLCEVSToolsInstallerRepair_JA_4_0 SQLCEVSToolsInstallerRepair_FR_4_0 SQLCEVSToolsInstallerRepair_DE_4_0 SQLCEVSToolsInstallerRepair_ES_4_0 SQLCEVSToolsInstallerRepair_IT_4_0 SQLCEVSToolsInstallerRepair_RU_4_0 SQLCEVSToolsInstallerRepair_KO_4_0 SQLCEVSToolsInstallerRepair_ZH_CN_4_0 SQLCEVSToolsInstallerRepair_ZH_TW_4_0 VWD2008 WebDAVOOB WDeploy WDeploy_v2 WDeployNoSMO WDeploy11 WinCache52 WinCache53 NETFramework35 WindowsImagingComponent VC9Redist NETFramework20SP2 WindowsInstaller31 PowerShell PowerShellMsu PowerShell2 WindowsInstaller45 FastCGIUpdate FastCGIBackport FastCGIIIS6 IIS51 IIS60 SQLNativeClient SQLNativeClient2008 SQLNativeClient2005 SQLCLRTypes SQLCLRTypes32 SMO_10_1 MySQLConnector PHP52 PHP53 PHPManager VSVWD2010Feature VWD2010WebFeature_0 VWD2010WebFeature_1 VWD2010WebFeature_2 VS2010SP1Prerequisite RIAServicesToolkitMay2010 Silverlight4Toolkit Silverlight4Tools VSLS SSMAMySQL WebsitePanel VS2010SP1Core VS2010SP1Installer VS2010SP1Pack MissingVWDOrVSVWD2010Feature VB2010Beta2Express VCS2010Beta2Express VC2010Beta2Express RIAServicesToolkitApr2010 VS2010Beta1 VS2010RC VS2010Beta2 VS2010Beta2Express VS2k8RTM VSCPP2k8RTM VSVB2k8RTM VSCS2k8RTM VSVWDFeature LegacyWinCache SQLExpress2005 SSMS2005
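    If you want to script the creation of these custom links, it is just the install.aspx URL from above with a semicolon-delimited appid list. Here is a minimal sketch in Python (purely illustrative; the helper name and the second example bundle are my own, only the querystring format and the App IDs come from the post):

        # Build a custom Web Platform Installer link from a list of App IDs.
        # Only the install.aspx?appsxml=&appid=... querystring format is taken from
        # the post above; the function name and example bundles are illustrative.
        WEBPI_INSTALL_URL = "http://www.microsoft.com/web/gallery/install.aspx"

        def build_webpi_link(app_ids):
            """Join App IDs with semicolons and append them to the install querystring."""
            if not app_ids:
                raise ValueError("at least one App ID is required")
            return WEBPI_INSTALL_URL + "?appsxml=&appid=" + ";".join(app_ids)

        if __name__ == "__main__":
            # Same example as in the post: VS2010 SP1 Core plus the SQL CE tools.
            print(build_webpi_link(["VS2010SP1Core", "SQLCETools"]))
            # A team-wide "recommended setup" link could bundle more IDs from the list above.
            print(build_webpi_link(["MVC3", "IISExpress", "SQLCETools", "WDeploy"]))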

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
    Your application continues to “see” one database.   ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all.  You still use your existing development tools, frameworks and libraries.  Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi-AZ.  And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance, now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive.  And when you max-out on the available hardware, you’re stuck.  Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value added services like read replicas, Multi-AZ, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
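    To make the “DIY sharding” complexity concrete, here is a rough sketch (Python, purely hypothetical; it is not ScaleBase's mechanism or API) of the sort of application-level routing you would otherwise have to write and maintain yourself when spreading data across several RDS instances:

        # Hypothetical application-level ("DIY") sharding across several MySQL RDS
        # instances. Endpoint names and the hashing scheme are made up for the example.
        import hashlib

        class ShardRouter:
            def __init__(self, endpoints):
                # e.g. one hostname per RDS instance holding a slice of the data
                self.endpoints = endpoints

            def endpoint_for(self, shard_key):
                """Deterministically map a shard key (say, a tenant id) to one instance."""
                digest = hashlib.md5(shard_key.encode("utf-8")).hexdigest()
                return self.endpoints[int(digest, 16) % len(self.endpoints)]

        router = ShardRouter(["shard0.example.rds.amazonaws.com",
                              "shard1.example.rds.amazonaws.com",
                              "shard2.example.rds.amazonaws.com"])
        print(router.endpoint_for("tenant-42"))  # every query for tenant-42 lands on the same shard

    Note what this toy router does not solve: cross-shard reads, transactions spanning instances, two-phase commit, and rebalancing when a shard is added, which is exactly the work the article says ScaleBase makes transparent.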

    Read the article

  • LibPCL issues on Ubuntu 13.10

    - by user254885
    i wanted to install the Point Cloud Library but it does not work i use an ODROID board(ARM processor) Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libpcl-all : Depends: libpcl-1.7-all but it is not going to be installed E: Unable to correct problems, you have held broken packages. by compiling v1.7 , i get these errors : /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__fcntl_nocancel': /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:37: undefined reference to `__libc_do_syscall' /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-fcntl.o): In function `__libc_fcntl': /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:53: undefined reference to `__libc_do_syscall' /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/i386/fcntl.c:57: undefined reference to `__libc_do_syscall' /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(ptw-open64.o): In function `__libc_open64': /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:41: undefined reference to `__libc_do_syscall' /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/open64.c:45: undefined reference to `__libc_do_syscall' /usr/lib/gcc/arm-linux-gnueabihf/4.8/../../../arm-linux-gnueabihf/libpthread.a(cancellation.o):/build/buildd/eglibc-2.17/nptl/cancellation.c:96: more undefined references to `__libc_do_syscall' follow collect2: error: ld returned 1 exit status make[2]: *** [bin/pcl_convert_pcd_ascii_binary] Error 1 make[1]: *** [io/tools/CMakeFiles/pcl_convert_pcd_ascii_binary.dir/all] Error 2 make: *** [all] Error 2 i could not find anything in google to solve these errors i believe some packages were not ported for ARM processors any help would be appreciated $ dpkg --list | grep headers ii linux-headers-3.0.63-odroidx2 20130215 ii linux-headers-3.0.71-odroidx2 20130415 ii linux-headers-3.0.74-odroidx2 20130417 ii linux-headers-3.0.75-odroidx2 20130426 ii linux-headers-3.1.10-6 3.1.10-6.10 ii linux-headers-3.1.10-6-ac100 3.1.10-6.10 ii linux-headers-ac100 3.1.10.6.2 installing packages did'nt do well sudo apt-get install linux-generic [sudo] password for odroid: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: debugedit libasound2-dev libestools2.1-dev librpmbuild3 librpmsign1 thunderbird-locale-en thunderbird-locale-en-gb thunderbird-locale-en-us thunderbird-locale-ko Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic Suggested packages: fdutils linux-doc-3.11.0 linux-source-3.11.0 linux-tools The following NEW packages will be installed: linux-generic linux-headers-3.11.0-17 linux-headers-3.11.0-17-generic linux-headers-generic linux-image-3.11.0-17-generic linux-image-generic 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. Need to get 58.2 MB of archives. After this operation, 203 MB of additional disk space will be used. Do you want to continue [Y/n]? 
y Get:1 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-3.11.0-17-generic armhf 3.11.0-17.31 [44.5 MB] Get:2 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-image-generic armhf 3.11.0.17.18 [2,356 B] Get:3 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17 all 3.11.0-17.31 [12.6 MB] Get:4 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-3.11.0-17-generic armhf 3.11.0-17.31 [1,128 kB] Get:5 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-headers-generic armhf 3.11.0.17.18 [2,350 B] Get:6 http://ports.ubuntu.com/ubuntu-ports/ saucy-updates/main linux-generic armhf 3.11.0.17.18 [1,726 B] Fetched 58.2 MB in 13s (4,379 kB/s) Selecting previously unselected package linux-image-3.11.0-17-generic. (Reading database ... 258618 files and directories currently installed.) Unpacking linux-image-3.11.0-17-generic (from .../linux-image-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ... Examining /etc/kernel/preinst.d/ Done. Selecting previously unselected package linux-image-generic. Unpacking linux-image-generic (from .../linux-image-generic_3.11.0.17.18_armhf.deb) ... Selecting previously unselected package linux-headers-3.11.0-17. Unpacking linux-headers-3.11.0-17 (from .../linux-headers-3.11.0-17_3.11.0-17.31_all.deb) ... Selecting previously unselected package linux-headers-3.11.0-17-generic. Unpacking linux-headers-3.11.0-17-generic (from .../linux-headers-3.11.0-17-generic_3.11.0-17.31_armhf.deb) ... Selecting previously unselected package linux-headers-generic. Unpacking linux-headers-generic (from .../linux-headers-generic_3.11.0.17.18_armhf.deb) ... Selecting previously unselected package linux-generic. Unpacking linux-generic (from .../linux-generic_3.11.0.17.18_armhf.deb) ... Setting up linux-image-3.11.0-17-generic (3.11.0-17.31) ... Running depmod. update-initramfs: deferring update (hook will be called later) cp: cannot stat ‘/boot/initrd.img-3.11.0-17-generic’: No such file or directory Failed to copy /boot/initrd.img-3.11.0-17-generic to /boot/initrd.img at /var/lib/dpkg/info/linux-image-3.11.0-17-generic.postinst line 730. dpkg: error processing linux-image-3.11.0-17-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.11.0-17-generic; however: Package linux-image-3.11.0-17-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured Setting up linux-headers-3.11.0-17 (3.11.0-17.31) ... No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Setting up linux-headers-3.11.0-17-generic (3.11.0-17.31) ... Examining /etc/kernel/header_postinst.d. Setting up linux-headers-generic (3.11.0.17.18) ... dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.11.0.17.18); however: Package linux-image-generic is not configured yet. 
    dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.11.0-17-generic linux-image-generic linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1) I had to uninstall these because they were messing up the installation of other packages (build-essential was already installed).

    Read the article

  • External usb 3.0 hard drive is not recognised when plugged into usb 3 port (ubuntu natty 64 bit).

    - by kimangroo
    I have an Iomega Prestige Portable External Hard Drive 1TB USB 3.0. It works fine on windows 7 as a usb 3.0 drive. It isn't detected on ubuntu natty 64bit, 2.6.38-8-generic. fdisk -l cannot see it at all: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1bed746b Device Boot Start End Blocks Id System /dev/sda1 1 1689 13560832 27 Unknown /dev/sda2 * 1689 1702 102400 7 HPFS/NTFS /dev/sda3 1702 19978 146805760 7 HPFS/NTFS /dev/sda4 19978 60802 327914497 5 Extended /dev/sda5 25555 60802 283120640 7 HPFS/NTFS /dev/sda6 19978 23909 31571968 83 Linux /dev/sda7 23909 25555 13218816 82 Linux swap / Solaris Partition table entries are not in disk order lsusb can see it: Bus 003 Device 003: ID 059b:0070 Iomega Corp. Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 002 Device 004: ID 05fe:0011 Chic Technology Corp. Browser Mouse Bus 002 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 005: ID 0489:e00f Foxconn / Hon Hai Bus 001 Device 004: ID 0c45:64b5 Microdia Bus 001 Device 003: ID 08ff:168f AuthenTec, Inc. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub And dmesg | grep -i xhci (I may have unplugged the drive and plugged it back in again after booting): [ 1.659060] pci 0000:04:00.0: xHCI HW did not halt within 2000 usec status = 0x0 [ 11.484971] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 11.484997] xhci_hcd 0000:04:00.0: setting latency timer to 64 [ 11.485002] xhci_hcd 0000:04:00.0: xHCI Host Controller [ 11.485064] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 [ 11.636149] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 [ 11.636241] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X [ 11.636246] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X [ 11.636251] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X [ 11.636256] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X [ 11.636261] xhci_hcd 0000:04:00.0: irq 47 for MSI/MSI-X [ 11.639654] xHCI xhci_add_endpoint called for root hub [ 11.639655] xHCI xhci_check_bandwidth called for root hub [ 11.956366] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 2 [ 12.001073] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.007059] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.012932] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.018922] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.049139] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.056754] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.131607] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 12.179717] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 12.686876] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 12.687058] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 12.687152] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 43.330737] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 2 [ 43.422579] xhci_hcd 0000:04:00.0: WARN: short transfer on 
control ep [ 43.422658] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af00 [ 43.422665] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af40 [ 43.422671] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669af80 [ 43.422677] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88014669afc0 [ 43.531159] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 125.160248] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.766466] usb 3-1: new SuperSpeed USB device using xhci_hcd and address 3 [ 903.807789] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.813530] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.819400] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.825104] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.855067] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862314] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 903.862597] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst [ 903.913211] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 904.424416] xhci_hcd 0000:04:00.0: WARN: babble error on endpoint [ 904.424599] xhci_hcd 0000:04:00.0: WARN Set TR Deq Ptr cmd invalid because of stream ID configuration [ 904.424700] xhci_hcd 0000:04:00.0: ERROR Transfer event for disabled endpoint or incorrect stream ring [ 935.139021] usb 3-1: reset SuperSpeed USB device using xhci_hcd and address 3 [ 935.226075] xhci_hcd 0000:04:00.0: WARN: short transfer on control ep [ 935.226140] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b00 [ 935.226148] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b40 [ 935.226153] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186b80 [ 935.226159] xhci_hcd 0000:04:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff880148186bc0 [ 935.343339] xhci_hcd 0000:04:00.0: WARN no SS endpoint bMaxBurst I thought it might be that the firmware wasn't compatible with linux or something, but when booting a live image of partedmagic, (2.6.38.4-pmagic), the drive was detected fine, I could mount it and got usb 3.0 speeds (at least they double the speeds I got from plugging same drive in usb 2 ports). 
dmesg in partedmagic did say something about no SuperSpeed endpoint which was an error I saw in a previous dmesg of ubuntu: Jun 27 15:49:02 (none) user.info kernel: [ 2.978743] xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 Jun 27 15:49:02 (none) user.debug kernel: [ 2.978771] xhci_hcd 0000:04:00.0: setting latency timer to 64 Jun 27 15:49:02 (none) user.info kernel: [ 2.978781] xhci_hcd 0000:04:00.0: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 2.978856] xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 3 Jun 27 15:49:02 (none) user.info kernel: [ 3.089458] xhci_hcd 0000:04:00.0: irq 18, io mem 0xc5400000 Jun 27 15:49:02 (none) user.debug kernel: [ 3.089541] xhci_hcd 0000:04:00.0: irq 42 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089544] xhci_hcd 0000:04:00.0: irq 43 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089546] xhci_hcd 0000:04:00.0: irq 44 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089548] xhci_hcd 0000:04:00.0: irq 45 for MSI/MSI-X Jun 27 15:49:02 (none) user.debug kernel: [ 3.089550] xhci_hcd 0000:04:00.0: irq 46 for MSI/MSI-X Jun 27 15:49:02 (none) user.warn kernel: [ 3.092857] usb usb3: No SuperSpeed endpoint companion for config 1 interface 0 altsetting 0 ep 129: using minimum values Jun 27 15:49:02 (none) user.info kernel: [ 3.092864] usb usb3: New USB device found, idVendor=1d6b, idProduct=0003 Jun 27 15:49:02 (none) user.info kernel: [ 3.092866] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Jun 27 15:49:02 (none) user.info kernel: [ 3.092867] usb usb3: Product: xHCI Host Controller Jun 27 15:49:02 (none) user.info kernel: [ 3.092869] usb usb3: Manufacturer: Linux 2.6.38.4-pmagic xhci_hcd Jun 27 15:49:02 (none) user.info kernel: [ 3.092870] usb usb3: SerialNumber: 0000:04:00.0 Jun 27 15:49:02 (none) user.debug kernel: [ 3.092961] xHCI xhci_add_endpoint called for root hub Jun 27 15:49:02 (none) user.debug kernel: [ 3.092963] xHCI xhci_check_bandwidth called for root hub Well I have no idea what's going wrong, and I haven't had much luck from google and the forums so far. A number of unanswered threads with people with similar error messages and problems only. Hopefully someone here can help or point me in the right direction?!

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer following simple questions What is Elasticity? What is Scalability? How ACID properties vary from NOSQL Concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB  an innovative and welcome change in database paradigm? Elasticity This word’s original form is used in many different ways and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (database, application servers), it has come to mean a few things - allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or bunch of processes combined as modules). When it is about increasing resources the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which comes to mind. 1) Can we add another node to our software system? 2) If yes, does adding new node cause downtime for the system? Let us assume we have added new node, let us see what the new needs of the system are when a new node is added. Balancing incoming requests to multiple nodes Synchronization of a shared state across multiple nodes Identification of “downstate” and resolution action to bring it to “upstate” Well, adding a new node has its advantages as well. Here are few of the positive points Throughput can increase nearly horizontally across the node throughout the system Response times of application will increase as in-between layer interactions will be improved Now, Let us put the above concepts in the perspective of a Database. When we mention the term “running out of resources” or “application is bound to resources” the resources can be CPU, Memory or Bandwidth. The regular approach to “gain scalability” in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough as there needs to be an efficient way of managing such large workload on a “single machine” across memory and CPU bound (right kind of scheduling)  workload. We next move on to either read/write separation of the workload or functionality-based sharing so that we still have control of the individual. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read and for reporting applications to “aggregate the data” in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload. Scalability In the context of database/applications, scalability means three main things Ability to handle normal loads without pressure E.g. 
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to expected peak load which is greater than normal load with acceptable response times Ability to provide acceptable response times across the system E.g. Response time in S milliseconds (or agreed upon unit of measure) – 90% of the time The Issue – Need of Scale In normal cases one can plan for the load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from Hardware and Software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve and most of the operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when schema changes or workload pattern changes. As the requirement grows one still needs to deal with scale need in manual ways by providing an abstraction in the middle tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide flexible schema. The issue with the stores is that they are compromising on mostly consistency (no ACID guarantees) and one has to use NON-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantee and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across On-premise vs Cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is shared deployment) of resources and ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees and require developers to learn new paradigms A system which does not force fit a new way interacting with database by learning Non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
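    (As a practical aside to the Read & Write Separation option described earlier: below is a minimal sketch, in Python with invented endpoint names, of the read/write split an application typically has to implement itself when the database tier does not do it for you, along with the stale-read compromise it implies.)

        # Illustrative read/write splitting: writes go to the master, reads are
        # round-robined across replicas. Endpoints are placeholders; replication
        # lag handling, retries and connection pooling are deliberately left out.
        import itertools

        class ReadWriteSplitter:
            def __init__(self, master, replicas):
                self.master = master
                self._replicas = itertools.cycle(replicas)

            def route(self, sql):
                """Return the endpoint a statement should be sent to."""
                is_read = sql.lstrip().lower().startswith(("select", "show"))
                return next(self._replicas) if is_read else self.master

        splitter = ReadWriteSplitter("master.db.example",
                                     ["replica1.db.example", "replica2.db.example"])
        print(splitter.route("SELECT * FROM orders WHERE id = 1"))               # -> a replica (possibly slightly stale)
        print(splitter.route("UPDATE orders SET status = 'paid' WHERE id = 1"))  # -> master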

    Read the article

  • Network Access: I can't access 192.168.1.101 from 192.168.1.102.

    - by takpar
    Hi, I'm running Ubuntu 10.04 on my PC with IP 192.168.1.101. every thing work fine, e.g. my web server is running and I can see http://localhost/ or http://192.168.1.101 properly. But the problem is that I cannot see my PC from my laptop at 192.168.1.102 e.g. at my laptop http://192.168.1.101 gives Connection timed out in browser. or trying to telnet on any port leads to: telnet: Unable to connect to remote host: Connection timed out laptop is running a fresh install of Ubuntu as well and there is no setup for firewall stuff in both computers. PS: Both computers can ping each other well. The router is a cicso linksys wireless ADSL modem. Currently, I can connect to FTP server on the Windows running on 192.168.1.102 from 192.168.1.101 without problem. Theses are commands ran on my PC, 192.168.1.101: ifconfig: adp@adp-desktop:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:26:18:e1:8e:cf inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::226:18ff:fee1:8ecf/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1831935 errors:0 dropped:0 overruns:0 frame:0 TX packets:1493786 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1996855925 (1.9 GB) TX bytes:215288238 (215.2 MB) Interrupt:27 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:951742 errors:0 dropped:0 overruns:0 frame:0 TX packets:951742 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:494351095 (494.3 MB) TX bytes:494351095 (494.3 MB) vmnet1 Link encap:Ethernet HWaddr 00:50:46:c0:00:01 inet addr:192.168.91.1 Bcast:192.168.91.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vmnet8 Link encap:Ethernet HWaddr 00:50:46:c0:00:08 inet addr:192.168.156.1 Bcast:192.168.156.255 Mask:255.255.255.0 inet6 addr: fe70::250:56ff:fec0:8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:51 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) port 80 is set to 0.0.0.0 well: adp@adp-desktop:~$ netstat -ln | grep 'LISTEN ' tcp 0 0 127.0.0.1:52815 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4559 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:4369 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:7634 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN tcp 0 0 127.0.1.1:7777 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:33601 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN tcp6 0 0 :::139 :::* LISTEN tcp6 0 0 ::1:631 :::* LISTEN tcp6 0 0 :::445 :::* LISTEN /etc/hosts.deny is empty: adp@adp-desktop:~$ cat /etc/hosts.deny # /etc/hosts.deny: list of hosts that are _not_ allowed to access the system. # See the manual pages hosts_access(5) and hosts_options(5). # # Example: ALL: some.host.name, .some.domain # ALL EXCEPT in.fingerd: other.host.name, .other.domain # # If you're going to protect the portmapper use the name "portmap" for the # daemon name. 
Remember that you can only use the keyword "ALL" and IP # addresses (NOT host or domain names) for the portmapper, as well as for # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8) # for further information. # # The PARANOID wildcard matches any host whose name does not match its # address. # # You may wish to enable this to ensure any programs that don't # validate looked up hostnames still leave understandable logs. In past # versions of Debian this has been the default. # ALL: PARANOID netstat -l: adp@adp-desktop:~$ netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 localhost:52815 *:* LISTEN tcp 0 0 *:hylafax *:* LISTEN tcp 0 0 *:www *:* LISTEN tcp 0 0 *:4369 *:* LISTEN tcp 0 0 localhost:7634 *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:xmpp-server *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp 0 0 *:5280 *:* LISTEN tcp 0 0 adp-desktop:7777 *:* LISTEN tcp 0 0 *:33601 *:* LISTEN tcp 0 0 *:xmpp-client *:* LISTEN tcp 0 0 localhost:mysql *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN udp 0 0 *:bootpc *:* udp 0 0 *:mdns *:* udp 0 0 *:47467 *:* udp 0 0 192.168.1.10:netbios-ns *:* udp 0 0 192.168.91.1:netbios-ns *:* udp 0 0 192.168.156.:netbios-ns *:* udp 0 0 *:netbios-ns *:* udp 0 0 192.168.1.1:netbios-dgm *:* udp 0 0 192.168.91.:netbios-dgm *:* udp 0 0 192.168.156:netbios-dgm *:* udp 0 0 *:netbios-dgm *:* raw 0 0 *:icmp *:* 7 netstat -rn: adp@adp-desktop:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.91.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1 192.168.156.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 commands on the laptop, 192.168.1.102: ifconfig: root@fakeuser-laptop:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:1c:33:a2:31:15 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:21 eth1 Link encap:Ethernet HWaddr 00:2d:d9:3e:1f:6c inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe70::21d:d9ff:fe3e:1f6c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5681 errors:0 dropped:0 overruns:0 frame:10313 TX packets:6717 errors:6 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4055251 (4.0 MB) TX bytes:779308 (779.3 KB) Interrupt:18 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:206 errors:0 dropped:0 overruns:0 frame:0 TX packets:206 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:15172 (15.1 KB) TX bytes:15172 (15.1 KB) netstat -rn: root@fakeuser-laptop:~# netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth1
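    One quick way to narrow this down from the laptop side (not from the original thread, just a generic sketch using only the Python standard library) is to probe raw TCP reachability to the ports that netstat shows listening on the desktop; a timeout matches the symptom above, whereas "connection refused" would mean packets arrive but nothing answers on that port:

        # Probe TCP reachability from 192.168.1.102 towards 192.168.1.101.
        # Ports taken from the desktop's netstat output above (3306 is bound to
        # 127.0.0.1 only, so it is expected to be unreachable remotely anyway).
        import socket

        HOST = "192.168.1.101"
        PORTS = [80, 21, 25]

        for port in PORTS:
            try:
                with socket.create_connection((HOST, port), timeout=5):
                    print(HOST, port, "reachable")
            except socket.timeout:
                print(HOST, port, "timed out (packets likely dropped or filtered)")
            except OSError as exc:
                print(HOST, port, "failed:", exc)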

    Read the article

  • Contracting as a Software Developer in the UK

    - by Frez
    Having had some 15 years’ experience of working as a software contractor, I am often asked by developers who work as permanent employees (permies) about the pros and cons of working as a software consultant through my own limited company and whether the move would be a good one for them. Whilst it is possible to contract using other financial vehicles such as umbrella companies, this article will only consider limited companies as that is what I have experience of using. Contracting or consultancy requires a different mind-set from being a permanent member of staff, and not all developers are capable of this shift in attitude. Whilst you can look forward to an increase in the money you take home, there are real risks and expenses you would not normally be exposed to as a permie. So let us have a look at the pros and cons: Pros: More money There is no doubt that whilst you are working on contracts you will earn significantly more than you would as a permanent employee. Furthermore, working through a limited company is more tax efficient. Less politics You really have no need to involve yourself in office politics. When the end of the day comes you can go home and not think or worry about the power struggles within the company you are contracted to. Your career progression is not tied to the company. Expenses from gross income All your expenses of trading as a business will come out of your company’s gross income, i.e. before tax. This covers travelling expenses provided you have not been at the same client/location for more than two years, internet subscriptions, professional subscriptions, software, hardware, accountancy services and so on. Cons: Work is more transient Contracts typically range from a couple of weeks to a year, although will most likely start at 3 months. However, most contracts are extended either because the project you have been brought in to help with takes longer to deliver than expected, the client decides they can use you on other aspects of the project, or the client decides they would like to use you on other projects. The temporary nature of the work means that you will have down-time between contracts while you secure new opportunities during which time your company will have no income. You may need to attend several interviews before securing a new contract. Accountancy expenses Your company is a separate entity and there are accountancy requirements which, unless you like paperwork, means your company will need to appoint an accountant to prepare your company’s accounts. It may also be worth purchasing some accountancy software, so talk to your accountant about this as they may prefer you to use a particular software package so they can integrate it with their systems. VAT You will need to register your company for VAT. 
This is tax neutral for you as the VAT you charge your clients you will pass onto the government less any VAT you are reclaiming from expenses, but it is additional paperwork to undertake each quarter. It is worth checking out the Fixed Rate VAT Scheme that is available, particularly after the initial expenses of setting up your company are over. No training Clients take you on based on your skills, not to train you when they will lose that investment at the end of the contract, so understand that it is unlikely you will receive any training funded by a client. However, learning new skills during a contract is possible and you may choose to accept a contract on a lower rate if this is guaranteed as it will help secure future contracts. No financial extras You will have no free pension, life, accident, sickness or medical insurance unless you choose to purchase them yourself. A financial advisor can give you all the necessary advice in this area, and it is worth taking seriously. A year after I started as a consultant I contracted a serious illness, this kept me off work for over two months, my client was very understanding and it could have been much worse, so it is worth considering what your options might be in the case of illness, death and retirement. Agencies Whilst it is possible to work directly for end clients there are pros and cons of working through an agency.  The main advantage is cash flow, you invoice the agency and they typically pay you within a week, whereas working directly for a client could have you waiting up to three months to be paid. The downside of working for agencies, especially in the current difficult times, is that they may go out of business and you then have difficulty getting the money you are owed. Tax investigation It is possible that the Inland Revenue may decide to investigate your company for compliance with tax law. Insurance is available to cover you for this. My personal recommendation would be to join the PCG as this insurance is included as a benefit of membership, Professional Indemnity Some agencies require that you are covered by professional indemnity insurance; this is a cost you would not incur as a permie. Travel Unless you live in an area that has an abundance of opportunities, such as central London, it is likely that you will be travelling further, longer and with more expense than if you were permanently employed at a local company. This not only affects you monetarily, but also your quality of life and the ability to keep fit and healthy. Obtaining finance If you want to secure a mortgage on a property it can be more difficult or expensive, especially if you do not have three years of audited accounts to show a mortgage lender.   Caveat This post is my personal opinion and should not be used as a definitive guide or recommendation to contracting and whether it is suitable for you as an individual, i.e. I accept no responsibility if you decide to take up contracting based on this post and you fare badly for whatever reason.
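    For what it's worth, the VAT arithmetic being described is simple enough to sanity-check yourself. A toy sketch follows (Python; the rates are placeholder figures rather than current HMRC rates, so your accountant will have the real numbers, including how the Fixed Rate Scheme mentioned above applies to you):

        # Toy VAT arithmetic for a contractor's quarter. All percentages below are
        # illustrative placeholders only; use the rates that actually apply to you.
        def standard_scheme(net_sales, vat_rate, reclaimable_input_vat):
            """VAT due = VAT charged to clients minus VAT reclaimed on business expenses."""
            return net_sales * vat_rate - reclaimable_input_vat

        def flat_rate_scheme(net_sales, vat_rate, flat_rate):
            """Under the flat-rate scheme you hand over a fixed percentage of
            VAT-inclusive turnover and generally do not reclaim day-to-day input VAT."""
            return net_sales * (1 + vat_rate) * flat_rate

        quarter_sales = 20000.0  # net amount invoiced in the quarter (example figure)
        print(standard_scheme(quarter_sales, vat_rate=0.20, reclaimable_input_vat=300.0))
        print(flat_rate_scheme(quarter_sales, vat_rate=0.20, flat_rate=0.145))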

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` (    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,    `k` int(10) unsigned NOT NULL DEFAULT '0',    `c` char(120) NOT NULL DEFAULT '',    `pad` char(60) NOT NULL DEFAULT '',    PRIMARY KEY (`id`),    KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
    Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
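    Since the takeaway is to match the worker count to the number of application schemas, here is a small illustrative helper for doing exactly that on a slave (Python, assuming the mysql-connector-python driver is available; host and credentials are placeholders, and the SET GLOBAL statement is the same one used in the test procedure above):

        # Illustrative: size slave_parallel_workers to the number of user schemas on
        # a MySQL 5.6 slave. Connection details are placeholders; assumes the
        # mysql-connector-python package is installed.
        import mysql.connector

        conn = mysql.connector.connect(host="slave-host", user="admin", password="secret")
        cur = conn.cursor()

        # Count application schemas, ignoring the system ones.
        cur.execute(
            "SELECT COUNT(*) FROM information_schema.SCHEMATA "
            "WHERE SCHEMA_NAME NOT IN ('mysql', 'information_schema', 'performance_schema')"
        )
        schema_count = cur.fetchone()[0]

        # Apply the setting with the SQL thread stopped, mirroring the benchmark's
        # approach of configuring the worker count before starting the slave SQL thread.
        cur.execute("STOP SLAVE SQL_THREAD")
        cur.execute("SET GLOBAL slave_parallel_workers = %d" % schema_count)
        cur.execute("START SLAVE SQL_THREAD")

        conn.close()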
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
# connection and query stuff
$ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
$Query = "EXEC sp_who2"
$Connection = New-Object System.Data.SQLClient.SQLConnection
$Table = New-Object "System.Data.DataTable"
$Connection.ConnectionString = $ConnectionStr
try {
    $Connection.Open()
    $Command = $Connection.CreateCommand()
    $Command.CommandText = $Query
    $result = $Command.ExecuteReader()
    $Table.Load($result)
}
catch {
    # Show error
    $error[0] | Format-List -Force
}
$Title = "Data access processes (" + $Table.Rows.Count + ")"
$Table | Out-GridView -Title $Title
$Connection.Close()

So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, which saves having to define the connection object and query specifications.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
$Processes = $SMOServer.EnumProcesses()
$Title = "SMO processes (" + $Processes.Rows.Count + ")"
$Processes | Out-GridView -Title $Title

Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ.

sp_who2 column | SMO EnumProcesses column | Description
- | Urn | What looks like an XML or JSON representation of the server name and the process ID
SPID | Spid | The process ID
Status | Status | The status of the process
Login | Login | The login name of the user executing the command
HostName | Host | The name of the computer where the process originated
BlkBy | BlockingSpid | The SPID of a process that is blocking this one
DBName | Database | The database that this process is connected to
Command | Command | The type of command that is executing
CPUTime | Cpu | The CPU activity related to this process
DiskIO | - | The Disk IO activity related to this process
LastBatch | - | The time the last batch was executed from this process
ProgramName | Program | The application that is facilitating the process connection to the SQL Server
SPID1 | - | In my experience this is always the same value as SPID
REQUESTID | - | In my experience this is always 0
- | Name | In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
- | MemUsage | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, Kb, Mb…)
- | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
- | ExecutionContextID | In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better?
On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option, and that it really isn’t a large enough margin to matter.
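For reference, a minimal sketch of how the two timings can be captured side by side; SERVERNAME is a placeholder and it assumes the SMO assemblies are already loaded (for example by the SQLPS module):

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

# time method 1: EXEC sp_who2 over an ADO.NET connection built from scratch
$t1 = Measure-Command {
    $Connection = New-Object System.Data.SqlClient.SqlConnection
    $Connection.ConnectionString = "Server=$Server;Database=Master;Integrated Security=True"
    $Connection.Open()
    $Command = $Connection.CreateCommand()
    $Command.CommandText = "EXEC sp_who2"
    $Table = New-Object System.Data.DataTable
    $Table.Load($Command.ExecuteReader())
    $Connection.Close()
}

# time method 2: the SMO object manages its own connection, so just enumerate
$t2 = Measure-Command { $Processes = $SMOServer.EnumProcesses() }

# compare the elapsed time of each approach
"sp_who2       : {0:N0} ms" -f $t1.TotalMilliseconds
"EnumProcesses : {0:N0} ms" -f $t2.TotalMilliseconds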

    Read the article

  • laptop crashed: why?

    - by sds
my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why.

# last
sds pts/4 :0 Tue Sep 4 10:01 still logged in
sds pts/3 :0 Tue Sep 4 10:00 still logged in
reboot system boot 3.2.0-29-generic Tue Sep 4 09:43 - 11:23 (01:40)
sds pts/8 :0 Mon Sep 3 14:23 - crash (19:19)

this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log:

auth.log:
Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root
Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)

no messages file

syslog:
Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.

kern.log:
Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal
Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal
Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal
Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.
Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset
Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu

I had a computation running until 9:24, but the system crashed 18 minutes later!

kern.log has many pages of these:
Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G
Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G
Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G

does this mean that my RAM is bad?!
it also says Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400 Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984 ... Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343 Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Momentus 7200.4 Device Model: ST9500420AS Serial Number: 5VJE81YK LU WWN Device Id: 5 000c50 0440defe3 Firmware Version: 0003LVM1 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Sep 10 16:40:04 2012 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 0) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 109) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 117 099 034 Pre-fail Always - 162843537 3 Spin_Up_Time 0x0003 100 100 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 571 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 17210154023 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 174362787320258 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 571 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 1 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 061 043 045 Old_age Always In_the_past 39 (0 11 44 26) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 84 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 20 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2434 194 Temperature_Celsius 0x0022 039 057 000 Old_age Always - 39 (0 15 0 0) 195 Hardware_ECC_Recovered 0x001a 041 041 000 Old_age Always - 162843537 196 Reallocated_Event_Count 0x000f 095 095 030 Pre-fail Always - 4540 (61955, 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 4545 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Googling for the messages proved inconclusive, I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?
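For reference, this is roughly how the SMART data above was gathered; the device name is an assumption (the disk is /dev/sda on this machine):

sudo smartctl -a /dev/sda          # identity, SMART attributes, error and self-test logs
sudo smartctl -t long /dev/sda     # start the extended offline self-test (~109 minutes on this drive)
sudo smartctl -l selftest /dev/sda # check the self-test result once it has finished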

    Read the article

  • Oracle Certification and virtualization Solutions.

    - by scoter
As stated in official MOS ( My Oracle Support ) document 249212.1, support for Oracle products on non-Oracle VM platforms follows exactly the same stance as support for VMware and, so, the only x86 virtualization software solution certified for any Oracle product is "Oracle VM". Based on the fact that:
- Oracle VM is totally free ( you have the option to buy Oracle Support )
- Certified is pretty different from supported ( Oracle VM is certified, others could be supported )
- With Oracle VM you may not be required to reproduce your issue(s) on a physical server
- Oracle VM is the only x86 software solution that allows hard-partitioning ***
*** see details at these Oracle public links:
http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf
http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf
people started asking to migrate from third-party virtualization software (e.g. RH KVM, VMware) to Oracle VM.

Migrating a RH KVM guest to Oracle VM.
Oracle VM has a built-in P2V utility ( Official Documentation ) but in some cases we can't use it, due to:
- network inaccessibility between hypervisors ( KVM and OVM )
- network slowness between hypervisors ( KVM and OVM )
- size of the guest virtual-disks
Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM.

1. Verify source guest characteristics.
Using the KVM web console you can verify the characteristics of the guest you need to migrate, such as:
- CPU Cores details
- Defined Memory ( RAM )
- Name of your guest
- Guest operating system
- Disks details ( number and size )
- Network details ( number of NICs and network configuration )

2. Export your guest in OVF / OVA format.
The export from Red Hat KVM ( Kernel-based Virtual Machine ) will create a structured export of your guest:

[root@ovmserver1 mnt]# ll
total 12
drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee

b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH-KVM.

[root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/
[root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltr
total 12
drwxr-x--- 4 36 36 4096 Oct 19  2012 master
drwxrwx--- 2 36 36 4096 Oct 29  2012 dom_md
drwxrwx--- 4 36 36 4096 Oct 31  2012 images

images contains your exported virtual-disks.

[root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/
[root@ovmserver1 images]# ls -ltra
total 16
drwxrwx--- 5 36 36 4096 Oct 19  2012 ..
drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5
drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1
drwxrwx--- 4 36 36 4096 Oct 31  2012 .
[root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/
[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -l
total 5169092
-rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
-rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta
[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
4c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager), UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv

4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk ( physical volume ).

[root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/
[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -l
total 5568076
-rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a
-rw-rw---- 1 36 36          341 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a.meta
[root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a
9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48

9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable.

3. Prepare the new guest on Oracle VM.
Using OVM Manager we can prepare the guest to which we will move the exported virtual-disks; under the "Servers and VMs" tab, click on the create virtual machine icon and create your guest with the parameters collected before (point 1):
- add NICs on different networks
- add virtual-disks; in this case we add two disks of 1.0 GB each; each virtual disk will be extended when the source KVM virtual-disk is copied over it ( see next steps )
- verify the virtual-disks created ( under the Repositories tab )

4. Verify OVM virtual-disks names.

[root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms *
0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms'
[root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e
[root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfg
vif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d']
OVM_simple_name = 'hyptest_rdbms'
vnclisten = '127.0.0.1'
disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w', 'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w']
vncunused = '1'
uuid = '0004fb00-0006-0000-a906-b423f44da98e'
on_reboot = 'restart'
cpu_weight = 27500
memory = 32768
cpu_cap = 0
maxvcpus = 8
OVM_high_availability = True
maxmem = 32768
vnc = '1'
OVM_description = ''
on_poweroff = 'destroy'
on_crash = 'restart'
name = '0004fb0000060000a906b423f44da98e'
guest_os_type = 'linux'
builder = 'hvm'
vcpus = 8
keymap = 'en-us'
OVM_os_type = 'Oracle Linux 5'
OVM_cpu_compat_group = ''
OVM_domain_type = 'xen_hvm'

disk2 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img
disk1 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img

Summarizing:
disk1 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
disk1 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
disk2 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
disk2 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

5. Copy KVM exported virtual-disks to OVM virtual-disks.
Keeping your Oracle VM guest stopped, you can copy the KVM exported virtual-disks over the OVM virtual-disks; what I did was simply to locally mount the filesystem containing the exported virtual-disks ( via a USB device ) on my OVS; the copy automatically resizes the OVM virtual-disks ( previously created with a size of 1.0 GB ).
nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &

6. When the copy has completed, refresh the repository to acknowledge the new disk sizes ( a quick way to sanity-check the copied images is sketched at the end of this post ).

7. After the "refresh repository" is completed, start the guest machine from Oracle VM Manager.
After the first start of your guest:
- verify that you can see all disks and partitions
- verify that your guest is network reachable ( the MAC addresses of your NICs have changed )
You can also evaluate converting your guest to PVM ( paravirtualized virtual machine ), following the official Oracle documentation.

Ciao
Simon COTER
ps: next time I'd like to post an article describing how to manually migrate Virtual Iron guests to Oracle VM. Comments and corrections are welcome.
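As a footnote to steps 5 and 6 above, a rough sketch of the sanity check mentioned there, using the same destination paths; 'file' should report an x86 boot sector for the first image and an LVM2 physical volume for the second:

# confirm the copies completed and the images grew to the source sizes
ls -lh /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
ls -lh /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

# check that the copied content on each virtual-disk is what we expect
file /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
file /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img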

    Read the article

< Previous Page | 126 127 128 129 130 131 132 133  | Next Page >