Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • How can I avoid the random restart of the xserver?

    - by Bernd
    I'm using a desktop PC with 64-bit Ubuntu 12.04 (kernel 3.2.0-24-generic). Hardware specs:
    - Intel Core i7 CPU 860 @ 2.80GHz x 8
    - Nvidia GeForce GTS 250
    - 750 GB hard disk ATA WDC WD7501AALS-00E3A0 (for my /home partition)
    - 128 GB solid-state disk ATA PLEXTOR PX-128M2S (for all other partitions)
    Since I reinstalled the PC with Ubuntu 12.04, the X server restarts randomly. Most of the time it happens when I watch a video in the browser (maybe a Flash issue?), but sometimes the restart/crash appears while I'm working in a text editor. How can I locate the problem? What information is needed for a useful answer?
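    A hedged starting point for diagnosing X server crashes on an Ubuntu 12.04 system like this one (a sketch; the paths are the stock log locations):

        # Check the current and previous X server logs for (EE) errors or a backtrace
        grep -E "\(EE\)|Backtrace" /var/log/Xorg.0.log /var/log/Xorg.0.log.old
        # Look for segfaults or GPU driver messages around the time of the crash
        dmesg | grep -iE "nvidia|segfault"
        # See whether apport collected any crash reports
        ls -l /var/crash/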

    Read the article

  • Resize Partition with gparted

    - by arian
    I wanted to create more space for Ubuntu on my hard disk in favor of my Windows partition. I booted the live CD and resized the NTFS partition to 100 GB. Then I wanted to resize my Ubuntu (ext4) partition to fill up the created unallocated space. A screenshot of my current disk. (With the live CD there's no 'key' icon after sda6.) My first thought was just: right-click on sda6 → Move/Resize → done. Unfortunately I cannot resize or move the partition. However, I can resize the NTFS partition. I guess it is because the extended sda4 partition is locked. I couldn't see an unlock possibility though… So how do I resize the ext4 partition anyway, probably by unlocking the extended partition, but how? Thanks in advance.
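    A hedged sketch of the usual cause: the 'key' (locked) icon on an extended partition typically means a swap partition inside it is active, so GParted refuses to move or resize the container. Assuming that layout:

        # From the live CD: deactivate any swap inside the extended partition -
        # an active swap is usually what locks sda4
        sudo swapoff -a
        # Confirm nothing on the disk is mounted
        mount | grep /dev/sda
        # Then in GParted: GParted -> Refresh Devices; sda4 and sda6
        # should now be movable/resizable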

    Read the article

  • ssrs: the report execution has expired or cannot be found

    - by Alex Bransky
    Today I got an exception in a report using SQL Server Reporting Services 2008 R2, but only when attempting to go to the last page of a large report: "The report execution sgjahs45wg5vkmi05lq4zaee has expired or cannot be found." Digging into the logs I found this: library!ReportServer_0-47!149c!12/06/2012-12:37:58:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. I knew it wasn't a network problem or timeout because I could repeat the problem at will. I checked the disk space and that seemed fine as well. The real issue was a lack of memory on the database server that had the ReportServer database. Restarting the SQL Server engine freed up plenty of RAM and the problem immediately went away.
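    A hedged way to confirm memory pressure before resorting to a restart (a sketch; MYDBSERVER is a placeholder, and sys.dm_os_sys_memory is available on SQL Server 2008 and later):

        # Query the OS memory DMV on the server hosting the ReportServer database
        sqlcmd -S MYDBSERVER -E -Q "SELECT total_physical_memory_kb, available_physical_memory_kb, system_memory_state_desc FROM sys.dm_os_sys_memory;"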

    Read the article

  • My Ubuntu drive is running out of space, how to fix, something is wrong

    - by Jamie Flores
    I'm moving from Windows and am having trouble figuring this out: I'm getting a message that pops up saying disk space is low. It says I have 800 MB free. I click on the Disk Usage Analyzer and it shows 24.6 GB total capacity and 22.5 GB used. When I look in GParted it shows a partition at 72.6 GB where I have Ubuntu installed. It also shows 70.65 GB used and 1.94 GB free in that partition. How do I figure out what else is in that partition? It's the only ext4 format. What am I missing?
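    A hedged way to see what is actually consuming the partition (a sketch; the Disk Usage Analyzer run as a normal user can miss root-owned data such as old logs, crash dumps, or other users' trash):

        # Show the largest top-level directories on the root filesystem,
        # staying on this filesystem only (-x)
        sudo du -xh --max-depth=1 / | sort -h
        # Common culprits worth checking individually
        sudo du -sh /var/log /var/crash /root /tmp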

    Read the article

  • Fix Windows MBR using Ubuntu Live CD

    - by kova
    I'm trying to fix the MBR using an Ubuntu live CD. I already have ms-sys installed, but from the threads that I saw, I'm not completely sure on which /dev I should execute the command: sudo ms-sys --mbr7 /dev/??? (Is --mbr7 the correct option when using Windows 7?)
    ubuntu@ubuntu:~$ sudo fdisk -l
    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1f205b1f
    Device Boot     Start       End    Blocks  Id System
    /dev/sda1  *       38        38         0   0 Empty
    /dev/sda2  *     2048    206847    102400   7 HPFS/NTFS/exFAT
    /dev/sda3      206848 155854847  77824000   7 HPFS/NTFS/exFAT
    /dev/sda4   155854848 625137663 234641408   7 HPFS/NTFS/exFAT
    Why is /dev/sda1 empty? I'm trying to fix the MBR because I'm getting a black screen when trying to load the operating system.
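    A hedged sketch of the usual invocation: ms-sys writes the MBR to the disk device, not to a partition, so the target here would be /dev/sda, and --mbr7 writes a Windows 7 master boot record. Double-check the device name first; writing the wrong MBR is destructive.

        # Write a Windows 7 MBR to the whole disk (not a partition)
        sudo ms-sys --mbr7 /dev/sda
        # If the boot partition's NTFS boot record is also damaged (hypothetical extra step):
        # sudo ms-sys --ntfs /dev/sda2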

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!
    In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.
    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.
    The implication of the original question - "how do you enforce a non-negative balance rule?" - then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.
    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.
    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.
    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.
    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business.
    There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.
    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this.
    First, similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).
    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns to customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.
    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.
    So where are we now with the nagging questions?
    - "No joins": Huh? What are those for?
    - "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either.
    - "No hope for real applications": Well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
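    A minimal sketch of the ledger flow described above, written for the classic mongo shell (save as transfer.js and run with: mongo bank transfer.js). The database, collection, and field names are hypothetical, and a 2.6-era shell is assumed so that aggregate() returns a cursor:

        // 1. Record the transfer as a single ledger document (an atomic write)
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100, validated: false });

        // 2. Validate: sum every entry touching A's account (credits minus debits)
        var r = db.ledger.aggregate([
          { $match: { $or: [ { from: "A" }, { to: "A" } ] } },
          { $project: { signed: { $cond: [ { $eq: [ "$from", "A" ] },
                                           { $multiply: [ "$amount", -1 ] },
                                           "$amount" ] } } },
          { $group: { _id: null, balance: { $sum: "$signed" } } }
        ]).toArray();

        // 3. If A went negative, append a compensating entry - never edit the ledger
        if (r.length && r[0].balance < 0) {
          db.ledger.insert({ ts: new Date(), from: "B", to: "A", amount: 100, validated: false });
        }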

    Read the article

  • Unable to mount 1TB USB external HDD - Error

    - by superbDeveloper
    Hi, I'm getting an error when I plug in my 1TB USB external HDD. The weird thing about this is that it was working fine before, and I've been using it for about a couple of months now. Yesterday I compressed one of the folders, which had about 120 GB of data, but the compression failed after an hour and I decided to unmount the drive and shut everything down. Today when I tried to plug in the drive I got the following error:
    Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so
    Output of sudo fdisk -l below:
    muzikayise@muzikayise-supercom:~$ sudo fdisk -l /dev/sdc
    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x39dcba64
    Device Boot Start End Blocks Id System
    Please will someone kindly assist with this? Thanks in advance.
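    A hedged first-response sketch (read-only diagnostics first; note the fdisk output above lists no partition rows at all, which points at a damaged partition table rather than a bad cable):

        # Kernel messages from the moment the drive was plugged in
        dmesg | tail -n 30
        # Scan for lost partitions without writing anything yet
        # (testdisk is interactive and available in the Ubuntu repos)
        sudo apt-get install testdisk
        sudo testdisk /dev/sdc
        # If a filesystem is recovered, check it read-only first, e.g.:
        # sudo fsck -n /dev/sdc1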

    Read the article

  • TestDrive equivalent for Ubuntu Server

    - by Marius Gedminas
    Every now and then I'd like to play with a fresh minimal install of Ubuntu (to test sysadminish scripts, application install instructions, package dependency lists etc.). I'd like to have a tool as simple to use as testdrive: pick a version (say, 'maverick'), run a command, get a shell in a new virtual machine. I'd like that shell to be in the current terminal, rather than a new GUI window that testdrive uses. Setting up the new VM to accept SSH logins with my ssh public key is fine. I'd like the VM to have network access out of the box; NAT to a virtual network interface is fine. Why a VM? Chroots don't really cut it: installing, say, Apache in a chroot would fail because it would try to listen on port 80, which is already taken. Containers might work, though, if there are any that are supported by standard Ubuntu kernels.
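    Not an existing turnkey tool, but a hedged sketch of how one could approximate this with vmbuilder and KVM (package python-vm-builder; the flags shown are from vmbuilder's documented options, SSH is the way into the shell, and the paths are placeholders):

        # Build a minimal maverick VM with sshd and your public key baked in
        sudo vmbuilder kvm ubuntu --suite maverick --flavour virtual \
            --addpkg openssh-server --ssh-key ~/.ssh/id_rsa.pub \
            --dest /tmp/maverick-vm
        # Boot it headless; user-mode NAT networking, SSH forwarded to localhost:2222
        kvm -m 512 -drive file=/tmp/maverick-vm/*.qcow2 \
            -net nic -net user,hostfwd=tcp::2222-:22 -nographic
        # From the current terminal:
        ssh -p 2222 ubuntu@localhost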

    Read the article

  • Installing software on an offline Ubuntu server

    - by Muhammad Gelbana
    Assuming that I have a server with Ubuntu Server newly installed on it: I was thinking of installing the very same version in VirtualBox (or any other virtualization software), connecting it to the internet, and using apt-get to only download the packages for upgrading the system and for the new software (tomcat7, openjdk6-default-headless, etc.). Then I'd copy the downloaded packages from the archive folder to the offline server's archive folder via a USB stick. So the virtual system won't actually be upgraded, nor have any new software installed. But would the very same apt-get commands, run on the offline system without the download-only switch -d, execute without issues? EDIT: This needs to be as simple as possible because I'll have to write a guide for our client to do this on his own, so it won't be acceptable to require deep Linux knowledge.
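    A hedged sketch of the workflow described (assuming both machines run the same Ubuntu release and architecture, so the dependency sets match; /media/usb is a placeholder mount point):

        # On the online VM: download only, don't install
        sudo apt-get update
        sudo apt-get -d install tomcat7
        sudo apt-get -d dist-upgrade
        # Copy the fetched .debs to the USB stick
        cp /var/cache/apt/archives/*.deb /media/usb/
        # On the offline server: drop them into apt's cache, then install as usual
        sudo cp /media/usb/*.deb /var/cache/apt/archives/
        sudo apt-get install tomcat7
        # (If apt's package lists on the offline box are stale, the cruder fallback is:
        #  sudo dpkg -i /media/usb/*.deb)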

    Read the article

  • Run Bluestacks or Android to play Clash of Clans on Ubuntu 13.04+

    - by Joe Hanus
    I am trying to get rid of the need to dual-boot Ubuntu and Windows. One thing I can do with Windows that I cannot do with Linux is run Bluestacks to play Android games; my favourite one now is Clash of Clans. I have tried different VMs to run Android emulators, and VirtualBox, but nothing works for Clash of Clans. I can download the game to the VM from the Google Play Store, but it fails to open. If Ubuntu could fix this by providing a way to successfully install Bluestacks on Ubuntu, or Android in VirtualBox without loading errors for all apps/games, it would help the Linux community become less dependent on Windows. Thanks in advance! Go Ubuntu!

    Read the article

  • Expand size of Edubuntu partition on dual boot PC

    - by trptplyr
    I wasn't allowed to update to the next release of Edubuntu recently. It gave me an error stating that I did not have enough space to run the update. How can I expand the size of the Edubuntu partition to allow me to update? I am new to Linux, so I hope that I am giving you enough and correct information on my system. I am using an older Dell Inspiron 9400 laptop. My root.disk file is 16.3 GB and the system.disk file is 256 MB. I would appreciate someone pointing me to documentation or giving me instructions on how to do this. Thank you.

    Read the article

  • Updated Oracle Platinum Services Certified Configurations

    - by Javier Puerta
    Effective May 22, 2014, Oracle Platinum Services is now available with an updated combination of certified components based on Oracle engineered systems: Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster systems. The Certified Platinum Configuration matrix has been revised, and now includes the following key updates:
    - Revisions to Oracle Database patch levels to include 12.1.0.1
    - Addition of the X4-2 Oracle Exalogic system
    - Removal of the virtualization column, as the versions are not optional and are based on inclusion in integrated software
    - Revisions to Oracle Exalogic Elastic Cloud Software to clarify patch level requirements for virtual and non-virtual environments
    For more information, visit the Oracle Platinum Services web page, where you will find customer collateral, FAQs, certified configurations, technical support policies, customer references, links to related services, and more.

    Read the article

  • Faulty memtest result

    - by dhojgaard
    I've been a bit suspicious about my RAM lately. It doesn't seem to work like I expect it to. For instance, I run a lot of virtual test environments in VMware Workstation, and lately Ubuntu starts to lag when running just 4 virtual machines, each with 512 MB dedicated. BTW, I have 6 GB of memory on my laptop. This didn't use to be a problem on my last laptop, which even had far less CPU power. So I was led to try a memtest after reading some websites about RAM testing. I ran memtest overnight, but when I woke up this morning it was just looping and I could not exit or do anything. It was just looping the number of errors beside test 7, which you can see in the lower-right part of the screenshot. Can anyone interpret this screenshot for me? Do I have a faulty set of RAM?
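    Any nonzero error count in memtest86+ usually does indicate bad RAM (or a bad slot/seating issue). A hedged in-OS cross-check with memtester, which is in the Ubuntu repos (the size and loop count below are illustrative):

        sudo apt-get install memtester
        # Lock and test 1 GB of RAM for 3 passes; any reported failure confirms bad memory
        sudo memtester 1024M 3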

    Read the article

  • How can I force the (re)discovery of PulseAudio network sound devices?

    - by Christian
    I'm using the PulseAudio feature of network sound devices (not Multicast/RTP) to play sound from my netbook on the audio equipment connected to the HTPC when at home. This creates a virtual sound device that I can then use instead of the physical built-in one. Most of the time this works just fine. Sometimes however, the virtual sound device just doesn't appear. Disconnecting from and reconnecting to the network helps sometimes but not always and it's annoying and potentially bad for existing TCP connections. So my question basically is: Is there some way to tell PulseAudio "Hey, just look again if you really can't find a network sound device."? Edit: Unloading and reloading the module-zeroconf-discover with pacmd does not help either and it doesn't appear to be an avahi problem per se since avahi-browse -t --all | grep PulseAudio shows lots of right-looking stuff, even when the devices aren't listed in pavucontrol or pacmd list-sinks. Edit 2: I'm using Ubuntu 12.04 on both boxes for all the difference it might make.
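    For reference, a hedged sketch of the rediscovery nudges mentioned above, plus a manual fallback that bypasses discovery entirely (the server address is a placeholder):

        # Reload the Zeroconf discovery module
        pacmd unload-module module-zeroconf-discover
        pacmd load-module module-zeroconf-discover
        # Fallback: create the tunnel sink by hand, pointing at the HTPC
        pacmd load-module module-tunnel-sink server=192.168.1.10
        pacmd list-sinks | grep -i tunnel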

    Read the article

  • Windows 8 client virtualization

    - by John Paul Cook
    Hyper-V is coming to Windows 8, but you must have a processor that supports SLAT. Virtual machines created with Virtual PC aren't easily transferred to Windows 2008 Hyper-V and vice versa. With Windows 8, it will be easy to move VHDs from Windows 8 on your laptop or desktop to Windows 8 server and back again. To find out if your processor supports SLAT, run coreinfo -v from a command window running as administrator. Download coreinfo from here. My MacBook Pro supports SLAT, as this output shows: ...

    Read the article

  • Dual Boot Ubuntu and Windows 7: BOOTMGR is missing when I tried to boot in Windows

    - by Simon Polak
    So, I don't know exactly how I managed to delete the MBR record on the Windows partition. But let me explain what I did next: I ran the Ubuntu Boot Repair tool, and now Windows is not even listed in my GRUB loader. So I went and booted with the Windows CD and chose Repair. Then I ran Ubuntu Boot Repair again via live CD. Here is the log: http://paste.ubuntu.com/1426181/. Still no luck. It looks like os-prober can't detect Windows on my /dev/sda2 partition. Any clues? Here is what my partitions look like:
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x525400d1
    Device Boot     Start       End    Blocks  Id System
    /dev/sda1  *     2048    206847    102400   7 HPFS/NTFS/exFAT
    /dev/sda2      206848 509620669 254706911   7 HPFS/NTFS/exFAT
    /dev/sda3   509622270 976773119 233575425   5 Extended
    /dev/sda5   509622272 957757439 224067584  83 Linux
    /dev/sda6   957759488 976773119   9506816  82 Linux swap / Solaris
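    A hedged sketch of the usual re-detection steps from the installed Ubuntu (os-prober needs the Windows filesystem in a cleanly mountable state, which is what the Windows CD's repair step helps with):

        # Let os-prober look for other operating systems, then rebuild the GRUB menu
        sudo os-prober
        sudo update-grub
        # If os-prober stays silent, check that the NTFS volume is clean enough to mount:
        sudo mkdir -p /mnt/win && sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt/win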

    Read the article

  • How To Enable 3D Acceleration and Use Windows Aero in VirtualBox

    - by Chris Hoffman
    VirtualBox's experimental 3D acceleration allows you to use Windows 7's Aero interface in a virtual machine. You can also run older 3D games in a virtual machine - newer ones probably won't run very well. If you installed Windows 7 in VirtualBox, you may have been disappointed to see the Windows 7 Basic interface instead of Aero - but you can enable Aero with a few quick tweaks.
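    A hedged sketch of the VirtualBox side of those tweaks (the VM name "Win7" is a placeholder; the guest also needs the Guest Additions installed with the experimental Direct3D option selected):

        # With the VM powered off: give it ample video memory and turn on 3D acceleration
        VBoxManage modifyvm "Win7" --vram 128 --accelerate3d on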

    Read the article

  • What to do with my "unmounted drive"?

    - by Taylor Guistwite
    I just recently followed the tutorial at http://www.ubuntu.com/download/ubuntu/download for installing Ubuntu Server onto my 1TB Seagate external drive. I was planning on using this to install it on my MacBook, and the instructions state to perform this line of code: "Run diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2)". Now my HD prompts "The disk you inserted was not readable by this computer". Would I just run diskutil mountDisk /dev/diskN in order to be able to access all my files again? Here is a screenshot of the instructions I followed: http://i17.photobucket.com/albums/b97/hello_screamo/Screenshot2011-11-11at113914AM.png
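    A hedged sketch of the check on the Mac side (caveat: if the Ubuntu image was already written to the disk, the original partition table was overwritten and remounting alone won't bring the old files back):

        # List all disks and their partitions to find the external drive's number
        diskutil list
        # Try remounting every mountable volume on that disk (replace 2 with your number)
        diskutil mountDisk /dev/disk2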

    Read the article

  • How to prevent the system from generating log files

    - by shantanu
    My question is a little bit surprising, but I need it. I am using a laptop with a slow processor, and now I've found that the HDD has some bad sectors and its response has become slow. But the disk health is OK (according to SMART tools). I cannot change my HDD right now, so I've decided to reduce disk operations. How do I prevent the system from generating log files, or any other files used to keep history? I know log files are very important, but I don't care right now. Please help.
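    A hedged sketch for Upstart-era Ubuntu (stopping rsyslog silences most writes under /var/log; the override file keeps it from starting at boot and is reversible by deleting the file):

        # Stop the system logger now
        sudo service rsyslog stop
        # Keep it from starting on boot (Upstart override; remove the file to undo)
        echo manual | sudo tee /etc/init/rsyslog.override
        # A gentler alternative: keep logs but hold them in RAM instead of on the bad disk
        # sudo mount -t tmpfs -o size=50m tmpfs /var/log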

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload.
    # vmstat 1
     kthr      memory            page            disk          faults      cpu
     r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id
     1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21  9
    12 0 0 130160160 116381936 0 25 0 0 0 0  0  0  0  0  0 200413 117162 197250 70 20  9
    11 0 0 130160176 116381920 0 16 0 0 0 0  0  0  1  0  0 203150 119365 200249 72 21  7
     8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826  96144 165194 56 17 27
     0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1  10245   9376   9164  2  1 97
     0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2  15742  12401  14784  4  1 95
     0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0  19972  17703  19612  6  2 92
    14 0 0 130160176 116377696 0 16 0 0 0 0  0  0  0  0  0 202794 116793 199807 71 21  8
     9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11
    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:
    - Unexpected JMeter behavior
    - Network contention
    - Java garbage collection
    - Application server thread pool problems
    - Connection pool problems
    - Database transaction processing
    - Database I/O contention
    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 2053.6    0.0  8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
        0.0 2162.2    0.0  8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
        0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
        0.0   74.0    0.0  7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
        0.0  568.7    0.0  6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
        0.0 1358.0    0.0  5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
        0.0 1314.3    0.0  5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0
    Here is a little more information about my database configuration:
    - The database and application server were running on two different SPARC servers.
    - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
    - Data storage and log files were on different physical disk drives.
    - Reliable low-latency I/O is provided by battery-backed NVRAM.
    - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
    - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.
    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:
    # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0
    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:
    # zpool status ZFS-db-41
      pool: ZFS-db-41
     state: ONLINE
      scan: none requested
    config:
            NAME                   STATE
            ZFS-db-41              ONLINE
              c3t60080E5...F4F6d0  ONLINE
            logs
              c3t60080E5...F8A7d0  ONLINE
    Now, the I/O spikes look like this:
                        extended device statistics
        r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0 1053.5    0.0  4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
        0.0 1131.8    0.0  4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
        0.0 1167.6    0.0  4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
        0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
        0.0 1247.2    0.0  4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
        0.0   41.0    0.0    70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
        0.0 1241.3    0.0  4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
        0.0 1193.2    0.0  4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.
    References:
    - The ZFS Intent Log
    - Solaris ZFS, Synchronous Writes and the ZIL Explained
    - ZFS Evil Tuning Guide: Cache Flushes
    - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance
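    For the "saved for another day" item, a hedged sketch of what that tuning would look like on Solaris (the threshold value is illustrative only; zfs_immediate_write_sz governs the size above which synchronous writes bypass the in-log copy):

        # Append the tunable to /etc/system (value in bytes; 32 KB shown as an example)
        echo "set zfs:zfs_immediate_write_sz = 0x8000" | sudo tee -a /etc/system
        # A reboot is required for /etc/system changes to take effect
        # (or, for testing, set it live with mdb):
        # echo "zfs_immediate_write_sz/W 0x8000" | sudo mdb -kw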

    Read the article

  • WebLogic Application Server: free for developers!

    - by Bruno.Borges
    Great news! Oracle WebLogic Server is now free for developers! What does this mean for you? That you as a developer are permitted to: "[...] deploy the programs only on your single developer desktop computer (of any type, including physical, virtual or remote virtual), to be used and accessed by only (1) named developer." But the most interesting part of the license change is this one: "You may continue to develop, test, prototype and demonstrate your application with the programs under this license after you have deployed the application for any internal data processing, commercial or production purposes" (Read the full license agreement here) If you want to take advantage of this licensing change and start developing Java EE applications with the #1 Application Server in the world, read now the previous post, How To Install WebLogic Zip on Linux!

    Read the article

  • Ubuntu 13.04 on UEFI system with Windows Boot Manager as the main loader

    - by Mehrdad
    On my old laptop (legacy BIOS, MBR disk), this was perfectly possible to get working: I turn on the computer and see the Windows Boot Manager, and I use EasyBCD (or BootPart, or something else) to add an option to the BCD menu which allows me to boot into GRUB, and from there into Ubuntu. I can't figure out how to do this on my new laptop (UEFI, GPT disk), whether in UEFI or legacy mode. Currently I've installed (and even booted!) Ubuntu on my laptop, but only with the help of an external GRUB (on a USB flash drive). How can I add GRUB as an option in the Windows Boot Manager on a UEFI laptop? (No, I don't want to change my primary boot loader. So no, I don't want to overwrite the Windows boot loader with GRUB.)

    Read the article

  • Cursor (touchpad) moves and clicks erratically

    - by James Wood
    Sometimes (usually after two-finger scrolling) the touchpad on my Asus X54C becomes unresponsive and the cursor begins to click and move small distances. Clicking seems to happen more often than moving. Unlike with other similar problems, I've never seen the cursor move to (0, 0). Suspending (closing the lid) and unsuspending doesn't help, and neither does moving to a tty and back, or rebooting. I've also tried disabling the touchpad via Fn+F9. That tends to take a long time, but doesn't have any effect. I'm on 13.10 at the moment, but I remember it happening on 13.04 as well. Here's the pointer section of xinput:
    ⎡ Virtual core pointer                  id=2   [master pointer  (3)]
    ⎜   ↳ Virtual core XTEST pointer        id=4   [slave  pointer  (2)]
    ⎜   ↳ ETPS/2 Elantech Touchpad          id=12  [slave  pointer  (2)]
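    A hedged sketch for ruling the driver in or out when the misbehavior starts (the device name is taken from the xinput output above):

        # Toggle the touchpad off and on at the driver level
        xinput disable "ETPS/2 Elantech Touchpad"
        xinput enable "ETPS/2 Elantech Touchpad"
        # Watch raw events; if phantom clicks show up here too, it's the
        # hardware/driver rather than the desktop environment
        xinput test "ETPS/2 Elantech Touchpad"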

    Read the article

  • Real-Time Multi-User Gaming Platform

    - by Victor Engel
    I asked this question at Stack Overflow but was told it's more appropriate here, so I'm posting it again here. I'm considering developing a real-time multi-user game, and I want to gather some information about possibilities before I do some real development. I've thought about how best to ask the question, and for simplicity, the best way that occurred to me was to make an analogy to the field (or playground) game darebase.
    In the field game of darebase, there are two or more bases. To start, there is one team on each base. The game is a fancy game of tag. When two people meet out in the field, the person who left his base most recently timewise captures the other person. They then return to that person's base. Play continues until everyone is part of the same team.
    So, analogizing this to an online computer game, let's suppose there are an indefinite number of bases. When a person starts up the game, he has a team that is located at, for example, his current GPS coordinates. It could be a virtual world, but for the sake of argument, let's suppose the virtual world corresponds to the player's actual GPS coordinates. The game software then consults the database to see where the closest other base online is, and the two teams play their game of virtual tag. Note that the user of the other base could have a different base than the one run by the current user as the closest base to him, in which case he would be in two simultaneous battles, one with each base. When they go offline, the state of their players is saved on a server somewhere. Game logic calls for the players to have automaton logic of some sort, so they can fend for themselves in a limited way using basic rules until their user goes online again. The user doesn't control the players' movements directly, but issues general directives that influence the players' movement logic.
    I think this analogy is good enough to frame my question. What sort of platforms are available to develop this sort of game? I've been looking at SmartFoxServer, but I'm not convinced yet that it is the best option or even that it will work at all. One possibility, of course, would be to roll out my own web server, but I'd rather not do that if there is an existing service out there already that I could tap into. I will be developing for iOS devices at first. So any suggestions would be greatly appreciated. I think I need to establish the architecture first before proceeding with this project. Note that darebase is not the game I intend to implement, but, upon reflection, that might not be a bad idea either.

    Read the article
