Search Results

Search found 15239 results on 610 pages for 'array population'.

  • How to send multiple MVP matrices to a vertex shader in OpenGL ES 2.0

    - by Carbon Crystal
    I'm working my way through optimizing the rendering of sprites in a 2D game using OpenGL ES, and I've hit the limit of my knowledge when it comes to GLSL and vertex shaders. I have two large float buffers containing my vertex coordinates and texture coordinates (eventually this will be one buffer) for multiple sprites, in order to perform a single glDrawArrays call. This works, but I've hit a snag when it comes to passing the transformation matrix into the vertex shader. My shader code is:

        uniform mat4 u_MVPMatrix;

        attribute vec4 a_Position;
        attribute vec2 a_TexCoordinate;

        varying vec2 v_TexCoordinate;

        void main() {
            v_TexCoordinate = a_TexCoordinate;
            gl_Position = u_MVPMatrix * a_Position;
        }

    In Java (Android) I am using a FloatBuffer to store the vertex/texture data, and this is provided to the shader like so:

        mGlEs20.glVertexAttribPointer(mVertexHandle, Globals.GL_POSITION_VERTEX_COUNT, GLES20.GL_FLOAT, false, 0, mVertexCoordinates);
        mGlEs20.glVertexAttribPointer(mTextureCoordinateHandle, Globals.GL_TEXTURE_VERTEX_COUNT, GLES20.GL_FLOAT, false, 0, mTextureCoordinates);

    (Globals.GL_POSITION_VERTEX_COUNT etc. are just integers with the value of 2 right now.) And I'm passing the MVP (Model/View/Projection) matrix buffer like this:

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mModelCoordinates);

    (mModelCoordinates is a FloatBuffer containing 16-float sequences representing the MVP matrix for each sprite.) This renders my scene, but all the sprites share the same transformation, so it's obviously only picking the first 16 elements from the buffer, which makes sense since I am passing in "1" as the second parameter. The documentation for this method says: "This should be 1 if the targeted uniform variable is not an array of matrices, and 1 or more if it is an array of matrices." So I tried modifying the shader with a fixed-size array large enough to accommodate most of my scenarios:

        uniform mat4 u_MVPMatrix[1000];

    But this led to an error in the shader:

        cannot convert from 'uniform array of 4X4 matrix of float' to 'Position 4-component vector of float'

    This seems wrong anyway, as it's not clear to me how the shader would know when to transition to the next matrix. Does anyone have an idea how I can get my shader to pick up a different MVP matrix (i.e. the NEXT 16 floats) from my MVP buffer for every 4 vertices it encounters? (I am using GL_TRIANGLE_STRIP, so each sprite has 4 vertices.) Thanks!
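
    The compile error above comes from multiplying by the whole array rather than one element of it; a common pattern (a sketch, not from the original post) is to index the array with a per-vertex attribute carrying each sprite's matrix number. MAX_SPRITES and a_MVPIndex are illustrative names:

        // Hypothetical GLSL ES sketch: one matrix per sprite, chosen per vertex.
        #define MAX_SPRITES 24
        uniform mat4 u_MVPMatrix[MAX_SPRITES];

        attribute vec4 a_Position;
        attribute vec2 a_TexCoordinate;
        attribute float a_MVPIndex;   // same value for all 4 vertices of a sprite

        varying vec2 v_TexCoordinate;

        void main() {
            v_TexCoordinate = a_TexCoordinate;
            gl_Position = u_MVPMatrix[int(a_MVPIndex)] * a_Position;
        }

    On the Java side, glUniformMatrix4fv would then be called with the sprite count instead of 1. Note that OpenGL ES 2.0 only guarantees 128 vertex uniform vectors (a mat4 costs 4), so an array of 1000 matrices will not fit on many devices; batches have to be capped at whatever GL_MAX_VERTEX_UNIFORM_VECTORS allows.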

  • How would I detect if two 2D arrays of any shape collided?

    - by user2104648
    Say there are two or more movable objects of any shape in a 2D plane. Each object has its own 2D boolean array to act as a bounding box, which can range from 10 to 100 pixels on a side. The program reads each pixel from an image that represents the object and sets the corresponding array entry appropriately: true (the pixel's alpha is greater than 1) or false (the pixel's alpha is less than 1). Each time one of these objects moves, what would be the most accurate way to test whether it hit another object in Java, using as few APIs/libraries as possible?
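
    A minimal sketch of the usual two-phase test (all names are illustrative; drop it into a utility class): cheaply reject non-overlapping bounding boxes first, then compare only the overlapping region of the two masks.

        // Hypothetical pixel-perfect collision test for two boolean masks.
        // (ax, ay) and (bx, by) are the top-left corners of each object's box;
        // masks are indexed as [row][column].
        static boolean collides(boolean[][] a, int ax, int ay,
                                boolean[][] b, int bx, int by) {
            int aw = a[0].length, ah = a.length;
            int bw = b[0].length, bh = b.length;
            // Phase 1: axis-aligned bounding-box rejection.
            int left   = Math.max(ax, bx);
            int top    = Math.max(ay, by);
            int right  = Math.min(ax + aw, bx + bw);
            int bottom = Math.min(ay + ah, by + bh);
            if (left >= right || top >= bottom) return false;
            // Phase 2: scan only the overlap; a hit needs both masks true.
            for (int y = top; y < bottom; y++)
                for (int x = left; x < right; x++)
                    if (a[y - ay][x - ax] && b[y - by][x - bx]) return true;
            return false;
        }

    This keeps the cost proportional to the overlap area and uses nothing outside java.lang.Math.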

  • Using a 64bit Linux kernel, can't see more than 4GB of RAM in /proc/meminfo

    - by Chris Huang-Leaver
    I'm running my new computer, which has 8GB of RAM installed. It is visible from the BIOS page, but does not show in /proc/meminfo.

        uname -a
        Linux localhost 3.0.6-gentoo #2 SMP PREEMPT Sat Nov 19 10:45:22 GMT-- x86_64 AMD Phenom(tm) II X4 955 Processor AuthenticAMD GNU/Linux

    The result of /proc/meminfo is as follows (thanks Andrey):

        MemTotal:        4021348 kB
        MemFree:         1440280 kB
        Buffers:           23696 kB
        Cached:          1710828 kB
        SwapCached:         4956 kB
        Active:          1389904 kB
        Inactive:         841364 kB
        Active(anon):    1337812 kB
        Inactive(anon):   714060 kB
        Active(file):      52092 kB
        Inactive(file):   127304 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        SwapTotal:       8388604 kB
        SwapFree:        8047900 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:        492732 kB
        Mapped:            47528 kB
        Shmem:           1555120 kB
        Slab:             267724 kB
        SReclaimable:     177464 kB
        SUnreclaim:        90260 kB
        KernelStack:        1176 kB
        PageTables:        12148 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:    10399276 kB
        Committed_AS:    3293896 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      317008 kB
        VmallocChunk:   34359398908 kB
        AnonHugePages:    120832 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:       23552 kB
        DirectMap2M:     3088384 kB
        DirectMap1G:     1048576 kB

    I have tried using mem=8G as a kernel boot parameter, and I read a post about setting HIGHMEM64G to yes before realising that only applies to 32-bit kernels. Trying dmidecode -t memory:

        SMBIOS 2.7 present.

        Handle 0x0026, DMI type 16, 23 bytes
        Physical Memory Array
                Location: System Board Or Motherboard
                Use: System Memory
                Error Correction Type: Multi-bit ECC
                Maximum Capacity: 32 GB
                Error Information Handle: Not Provided
                Number Of Devices: 4

        Handle 0x0028, DMI type 17, 34 bytes
        Memory Device
                Array Handle: 0x0026
                Error Information Handle: Not Provided
                Total Width: 64 bits
                Data Width: 64 bits
                Size: 4096 MB
                Form Factor: DIMM
                Set: None
                Locator: DIMM0
                Bank Locator: BANK0
                Type: <OUT OF SPEC>
                Type Detail: Synchronous
                Speed: 1333 MHz
                Manufacturer: Manufacturer0
                Serial Number: SerNum0
                Asset Tag: AssetTagNum0
                Part Number: Array1_PartNumber0
                Rank: Unknown

        Handle 0x002A, DMI type 17, 34 bytes
        Memory Device
                Array Handle: 0x0026
                Error Information Handle: Not Provided
                Total Width: Unknown
                Data Width: 64 bits
                Size: No Module Installed
                Form Factor: DIMM
                Set: None
                Locator: DIMM1
                Bank Locator: BANK1
                Type: Unknown
                Type Detail: Synchronous
                Speed: Unknown
                Manufacturer: Manufacturer1
                Serial Number: SerNum1
                Asset Tag: AssetTagNum1
                Part Number: Array1_PartNumber1
                Rank: Unknown

        Handle 0x002C, DMI type 17, 34 bytes
        Memory Device
                Array Handle: 0x0026
                Error Information Handle: Not Provided
                Total Width: 64 bits
                Data Width: 64 bits
                Size: 4096 MB
                Form Factor: DIMM
                Set: None
                Locator: DIMM2
                Bank Locator: BANK2
                Type: <OUT OF SPEC>
                Type Detail: Synchronous
                Speed: 1333 MHz
                Manufacturer: Manufacturer2
                Serial Number: SerNum2
                Asset Tag: AssetTagNum2
                Part Number: Array1_PartNumber2
                Rank: Unknown

        Handle 0x002E, DMI type 17, 34 bytes
        Memory Device
                Array Handle: 0x0026
                Error Information Handle: Not Provided
                Total Width: Unknown
                Data Width: 64 bits
                Size: No Module Installed
                Form Factor: DIMM
                Set: None
                Locator: DIMM3
                Bank Locator: BANK3
                Type: Unknown
                Type Detail: Synchronous
                Speed: Unknown
                Manufacturer: Manufacturer3
                Serial Number: SerNum3
                Asset Tag: AssetTagNum3
                Part Number: Array1_PartNumber3
                Rank: Unknown
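
    One quick check (not in the original post) is to compare the memory map the firmware hands the kernel at boot against what the DIMMs report; if the e820 map only describes ~4GB, the problem is a BIOS setting (often memory remapping/hole remapping) rather than the kernel:

        # Sketch: inspect the firmware-provided memory map from the boot log
        dmesg | grep -i e820
        # Total usable RAM as the kernel sees it
        free -m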

  • Moving Windows XP from ICH10R RAID 5 to single disk using Linux [migrated]

    - by tudor
    A friend's machine running Windows XP refused to boot recently. It runs 3 SATA disks in RAID 5 (previously upgraded from RAID 1, not by me). I have determined there to be a disk failure. The disks have been replaced many times in the past few years. I wish to back up the RAID 5 partition before I try anything to fix it. The RAID chipset used is ICH10R/DO. So, I plugged in an extra IDE drive and an Ubuntu USB key and looked at the RAID. The partitioning is a mess, but I did find at least one degraded but working RAID array with two partitions, one 79GB and the other 86GB. Then I:

    1) Partitioned my IDE disk using fdisk to have an 80GB partition, bootable, and marked as NTFS.
    2) dd'd the contents of the array to the partition.
    3) Disconnected everything else.
    4) Inserted a Windows XP CD and ran fixboot, fixmbr, and bootcfg. They all ran OK and claimed that they worked (e.g. bootcfg detects the Windows partition, fixboot returns saying that it was written correctly).

    However, I'm still getting an error like "DISK FAILURE, BOOT DISK NOT FOUND". I have tried running the GRUB rescue disk, which also runs OK, but won't boot into Windows: it just stops with a flashing cursor after chainloader +1, boot. One clue may be that the partitions appear to be out of whack. One disk has a 79GB RAID partition on a 500GB drive with an offset; the second disk has a 320GB RAID partition across the whole drive. Additionally, the BIOS lists the RAID size as 149GB. I don't see how this works. How are they even assembling the array when the partitions are so different? I have also tried running the Windows XP automated repair tool, but that didn't work either. I'm presuming this is something simple. Perhaps Windows is attempting to boot into RAID and, upon not finding it, simply crashing? Perhaps the 79GB partition's offset means that it's looking into the disk by that much? Please help!! To clarify: I want to make the single IDE disk bootable with a copy of the array, so that I can prove/disprove that it's just that Windows has become corrupted, and use Windows tools to correct it before attempting the same thing on the RAID array. That way I have a working backup and can show the process I used to fix it.
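
    For reference, step 2 would look something like the sketch below; the device names are assumptions (an ICH10R fake-RAID volume typically appears under /dev/mapper when assembled by dmraid or mdadm, and the IDE disk as its own /dev node), so verify both with lsblk or fdisk -l before copying:

        # Sketch only: clone the degraded array's Windows partition to the IDE disk.
        # /dev/mapper/isw_xxxx_Volume0p1 and /dev/sdd1 are illustrative names.
        dd if=/dev/mapper/isw_xxxx_Volume0p1 of=/dev/sdd1 bs=4M conv=noerror,sync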

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect (correct me if I'm wrong) that if I reboot the server it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc, because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting? This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed

    Thanks.

    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays
        When md is compiled into the kernel (not as module), partitions of
        type 0xfd are scanned and automatically assembled into RAID arrays.
        This autodetection may be suppressed with the kernel parameter
        "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
        superblock can be autodetected and run at boot time.
        The kernel parameter "raid=partitionable" (or "raid=part") means
        that all auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep the noise down.
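
    To the literal question (renaming without a reboot), one sketch of an approach: delete the SCSI device and trigger a rescan so the kernel re-enumerates it. The host number is an assumption (find it via lsscsi or /sys), and this is disruptive if anything has the device open, so treat it as a last resort. With md it usually isn't needed at all, since md identifies members by superblock UUID, not by device name:

        # Sketch: force the kernel to re-enumerate the hot-swapped drive.
        echo 1 > /sys/block/sdc/device/delete
        echo "- - -" > /sys/class/scsi_host/host0/scan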

  • C# slowdown while creating a bitmap - calculating distances from a large List of places for each pixel

    - by user576849
    I'm creating a graphic of the glow of lights above a geographic location based upon Walker's Law: Skyglow = 0.01 * Population * DistanceFromCenter^-2.5. I have a CSV file of places with 66,000 records using 5 fields (id, name, population, latitude, longitude), parsed on the FormLoad event and stored in:

        List<string[]> placeDataList

    Then I set up nested loops to fill in a bitmap using SetPixel. For each pixel on the bitmap, which represents a coordinate on a map (latitude and longitude), the program loops through placeDataList, calculating the distance from that coordinate (pixel) to each place record. The distance (along with population) is used in a calculation to find how much cumulative skyglow is contributed to the coordinate from each place record. So, for every pixel, 66,000 distance calculations must be made. The problem is, this is predictably EXTREMELY slow: on the order of one line of pixels per 30 seconds or so on a 320-pixel-wide image. This is unrelated to SetPixel, which I know is also slow, because the speed is similarly slow when adding the distance calculation results to an array. I don't actually need to test all 66,000 records for every pixel, only the records within 150 miles (i.e. no skyglow is contributed to a coordinate from a small town 3000 miles away). But to find which records are within 150 miles of my coordinate, I would still need to loop through all the records for each pixel. I can't use a smaller number of records, because all 66,000 places contribute to skyglow for SOME coordinate in my map as it loops. This seems like a Catch-22, so I know there must be a better method out there. Like I mentioned, the slowdown is related to how many calculations I'm making per pixel, not anything to do with the bitmap. Any suggestions?

        private void fillPixels(int width)
        {
            Color pixelColor;
            int pixel_w = width;
            int pixel_h = (int)Math.Floor(width * 0.424088664);
            Bitmap bmp = new Bitmap(pixel_w, pixel_h);
            for (int i = 0; i < pixel_h; i++)
                for (int j = 0; j < pixel_w; j++)
                {
                    pixelColor = getPixelColor(i, j);
                    bmp.SetPixel(j, i, pixelColor);
                }
            bmp.Save("Nightfall", System.Drawing.Imaging.ImageFormat.Jpeg);
            pictureBox1.Image = bmp;
            MessageBox.Show("Done");
        }

        private Color getPixelColor(int height, int width)
        {
            int c;
            double glow, d, cityLat, cityLon, cityPop;
            double testLat, testLon;
            int size_h = (int)Math.Floor(size_w * 0.424088664);
            testLat = (height * (24.443136 / size_h)) + 24.548874;
            testLon = (width * (57.636853 / size_w)) - 124.640767;
            glow = 0;
            for (int i = 0; i < placeDataList.Count; i++)
            {
                cityPop = Convert.ToDouble(placeDataList[i][2]);
                cityLat = Convert.ToDouble(placeDataList[i][3]);
                cityLon = Convert.ToDouble(placeDataList[i][4]);
                d = distance(testLat, testLon, cityLat, cityLon, "M");
                if (d < 150)
                    glow = glow + (0.01 * cityPop * Math.Pow(d, -2.5));
            }
            if (glow >= 1)
                glow = 1;
            c = (int)Math.Ceiling(glow * 255);
            return Color.FromArgb(c, c, c);
        }
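
    One standard fix (a sketch, not from the original post) is to bin the 66,000 places into a coarse latitude/longitude grid once, up front; each pixel then scans only the 3x3 block of cells around it instead of the whole list. The cell size and member names below are assumptions:

        // Hypothetical spatial-grid index; build once on FormLoad, query per pixel.
        // 3.5 degrees of longitude is ~158 miles even at 49 N (the top of this
        // map), so a 150-mile radius never reaches past the 3x3 neighborhood.
        private const double CellDeg = 3.5;
        private Dictionary<(int, int), List<string[]>> grid =
            new Dictionary<(int, int), List<string[]>>();

        private (int, int) CellOf(double lat, double lon) =>
            ((int)Math.Floor(lat / CellDeg), (int)Math.Floor(lon / CellDeg));

        private void BuildGrid()
        {
            foreach (var place in placeDataList)
            {
                var key = CellOf(Convert.ToDouble(place[3]), Convert.ToDouble(place[4]));
                if (!grid.TryGetValue(key, out var cell))
                    grid[key] = cell = new List<string[]>();
                cell.Add(place);
            }
        }

        // Per pixel: only the 3x3 block of cells around the coordinate.
        private IEnumerable<string[]> NearbyPlaces(double lat, double lon)
        {
            var (cy, cx) = CellOf(lat, lon);
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if (grid.TryGetValue((cy + dy, cx + dx), out var near))
                        foreach (var p in near) yield return p;
        }

    getPixelColor would then loop over NearbyPlaces(testLat, testLon) instead of placeDataList, cutting the per-pixel work from 66,000 records to a few hundred.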

  • Array with nested values: displaying in a ul list (PHP/HTML)

    - by btwong
    I have a record set returned from a database that looks like this:

        id | level | lft | rgt | title
        ---+-------+-----+-----+-----------------
         1 |       |   1 |   8 | title 1
         2 |   -   |   2 |   5 | sub title 1-1
         3 |   --  |   3 |   4 | sub sub title 1
         4 |   -   |   6 |   7 | sub title 1-2
         5 |       |   9 |  12 | title 2
         6 |   -   |  10 |  11 | sub title 2

    As you can see, it's a hierarchical list with left and right values (a nested set). I am trying to display this record set in a list with the correct indentation, so that it appears like this:

        Title 1
            Sub title 1-1
                Sub sub title 1
            Sub title 1-2
        Title 2
            Sub title 2

    Any pointers on how to do this with the one record set? Or should I use multiple queries to display this?
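
    A single pass over the rows ordered by lft can emit nested ul tags directly. A sketch (assuming the rows are already sorted by lft and each row carries 'lft', 'rgt', and 'title'), using a stack of rgt values to decide when to close lists:

        <?php
        // Sketch: render a nested-set result as nested <ul> lists in one pass.
        function renderTree(array $rows): string
        {
            $html  = "<ul>\n";
            $stack = [];                        // rgt of each open ancestor
            foreach ($rows as $row) {
                // Leave every node this row is no longer inside of.
                while ($stack && end($stack) < $row['lft']) {
                    array_pop($stack);
                    $html .= str_repeat('  ', count($stack) + 1) . "</ul></li>\n";
                }
                $indent = str_repeat('  ', count($stack) + 1);
                if ($row['rgt'] - $row['lft'] > 1) {
                    // Node has children: keep its <li> open, start a sub-list.
                    $html .= $indent . '<li>' . htmlspecialchars($row['title']) . "\n";
                    $html .= $indent . "<ul>\n";
                    $stack[] = $row['rgt'];
                } else {
                    $html .= $indent . '<li>' . htmlspecialchars($row['title']) . "</li>\n";
                }
            }
            while ($stack) {                    // close whatever is still open
                array_pop($stack);
                $html .= str_repeat('  ', count($stack) + 1) . "</ul></li>\n";
            }
            return $html . "</ul>\n";
        }

    So one query (ORDER BY lft) is enough; no recursion or repeated queries are needed.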

  • HP P212 hangs on Initializing

    - by user927586
    I'm in trouble...

    Server: HP Microserver N36L
    Storage: HP P212/256 with BBWC
    Drives: 4x WD 2 TB

    While expanding the RAID 5 array from 3 to 4 drives, the server rebooted itself. Expansion was at 76%. Now at boot the P212 seems to hang on initializing. After some time the screen resets and the server boots, but in the OS (Windows Server 2008 R2) the P212 has errors (error code 10) and it's not seen by the HP ACU. What's going on? Is it trying to complete the expansion? Is it redoing the expansion from scratch? Is it simply stuck? What should I do? I NEED the data on the array!

    PS: In the server BIOS, in the HD boot priority, the server still reports the array's logical drive... maybe that's a good sign...

    PPS: I'm not at the server's physical location, so I cannot tell right now whether the drives in the array are being accessed (which would be the case if the controller is redoing/completing the expansion process). Bye, Dario

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with software RAID 5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. The only thing is, I cannot get the RAID array rebuilt (I'm not even sure it needs rebuilding; it might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!). Some details:

    - RAID 5, software (used mdadm)
    - 4x 250GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...)
    - latest CentOS distro
    - built using mdadm

    I've got a decent amount of experience with standard Linux stuff, but the hardware-level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind about my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way for me to run mdadm, have it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put them into my new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
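
    That scan is essentially what mdadm's examine and assemble modes do; a sketch (device and mount names will differ):

        # Sketch: find md superblocks on attached disks and assemble the array.
        mdadm --examine --scan      # prints ARRAY lines for any members found
        mdadm --assemble --scan     # assembles arrays from those superblocks
        cat /proc/mdstat            # confirm the array came up
        mount /dev/md0 /mnt         # then mount it (filesystem permitting)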

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using:

        sudo smartctl /dev/sda -d megaraid,0 -a

    and I see that SMART is available and enabled on all the drives. I have tried to run self tests using:

        sudo smartctl /dev/sda -d megaraid,0 -t short
        sudo smartctl /dev/sda -d megaraid,0 -t long

    I have also tried it on all of the drives 0-5. No matter what I try, when I run:

        sudo smartctl /dev/sda -d megaraid,0 -l selftest

    I always get the same result, which seems to report that I have never run a self test:

        /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat'
        === START OF READ SMART DATA SECTION ===
        SMART Self-test log structure revision number 1
        No self-tests have been logged.  [To run self-tests, use: smartctl -t]

    From what I read, I should have no problem running the short and long self tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i RAID array who could lend some insight into what is causing the problem? (smartmontools release 5.40, dated 2009-12-09 at 21:00:32 UTC)

  • Mirror a RAID0 volume

    - by Ghostrider
    I have two SSDs running in RAID 0. The capacity and speed are just great. I use Windows Home Server to do incremental daily backups. This is fine and well, and I've successfully restored from these backups. However, when one of the disks physically died, I was stuck without a working system until the replacement arrived so that I could restore the array from backup. WHS restoration takes about 5 hours, which basically means I lose an entire day in the process. Is it possible to set up a kind of recovery volume for the RAID array? Use a single mechanical HDD that would be updated with an exact clone of the RAID array on a daily basis. This way, if the array goes offline for some reason, I can just boot from the mechanical HDD, lose some performance, but still be able to work. The machine in question runs Windows 7. Creating RAID 0+1 is not an option because of the high price of the SSDs, and because it still doesn't protect against failure of the RAID controller. Is there any way this can be set up?
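
    One sketch of such a setup using only what ships with Windows 7 (the target drive letter E: is an assumption): schedule a nightly system image to the mechanical disk. It isn't a directly bootable clone, but booting the Windows 7 DVD or a repair disc can restore the image without waiting for replacement hardware first:

        rem Sketch: nightly system image of the RAID0 volume to a mechanical disk.
        wbadmin start backup -backupTarget:E: -allCritical -quiet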

  • PostgreSQL: lots of large arrays and writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. The function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays). Now, I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
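
    For reference on that first question (a sketch of the distinction, with illustrative values): shared_buffers is memory PostgreSQL allocates and manages itself as its own page cache, while effective_cache_size allocates nothing at all; it is only a hint to the query planner about how much of the OS filesystem cache is likely to hold PostgreSQL data. In postgresql.conf the relevant knobs look like:

        # Sketch: illustrative values for a 64 GB machine, not a recommendation.
        shared_buffers = 16GB          # memory PostgreSQL allocates itself
        effective_cache_size = 48GB    # planner hint about OS cache; no allocation
        work_mem = 64MB                # per-sort/hash memory; affects unnest-heavy queries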

  • How to install a private user script in Chrome 21+?

    - by Mathias Bynens
    In Chrome 20 and older versions, you could simply open any .user.js file in Chrome and it would prompt you to install the user script. However, in Chrome 21 and up, it downloads the file instead, and displays a warning at the top saying “Extensions, apps, and user scripts can only be added from the Chrome Web Store”. The “Learn More” link points to http://support.google.com/chrome_webstore/bin/answer.py?hl=en&answer=2664769, but that page doesn’t say anything about user scripts, only about extensions in .crx format, apps, and themes. This part sounded interesting:

        Enterprise Administrators: You can specify URLs that are allowed to
        install extensions, apps, and themes directly through the
        ExtensionInstallSources policy.

    So, I ran the following commands, then restarted Chrome and Chrome Canary:

        defaults write com.google.Chrome ExtensionInstallSources -array "https://gist.github.com/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://gist.github.com/*"

    Sadly, these settings only seem to affect extensions, apps, and themes (as it says in the text), not user scripts. (I’ve filed a bug asking to make this setting affect user scripts as well.) Any ideas on how to install a private user script (that I don’t want to add to the Chrome Web Store) in Chrome 21+?

    Update: The problem was that gist.github.com’s raw URLs redirect to a different domain. So, use these commands instead:

        # Allow installing user scripts via GitHub or Userscripts.org
        defaults write com.google.Chrome ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"
        defaults write com.google.Chrome.canary ExtensionInstallSources -array "https://*.github.com/*" "http://userscripts.org/*"

    This works!

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data.

    What we are working with:

    - VMware vSphere 4.1 Cluster
    - PS4000XV EqualLogic Storage Array (1.6TB volume dedicated for backup-to-disk)
    - Physical backup server with a single LTO4 drive
    - BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    - Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.

    Where we are at currently: Of the several ways I have set up the jobs in BackupExec 2010 R3, the backup jobs all queue up at the same time; as soon as a job finishes backing up to disk, it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID 5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID 6 down the road when the array gets large. Linux md is a perfect fit for this use case, since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so RAID-Z1 looked like a good fit, but evidently I would only be able to add additional RAID-Z1 sets at a time, rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID 5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID 5 write hole? Additionally, any other caveats or advice for such a setup is welcome. I'll probably be using Debian, but I'll definitely be using Linux, since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
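
    For what it's worth, a sketch of that layering (device names illustrative): with a single md device as its only vdev, ZFS can always detect corruption via block checksums, but it can only repair what it holds redundantly, so setting copies=2 buys self-healing for data at the cost of half the space:

        # Sketch: ZFS (detection) on top of md RAID5 (redundancy).
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]
        zpool create tank /dev/md0    # single-vdev pool: checksums detect corruption
        zfs set copies=2 tank         # optional: duplicate blocks so ZFS can self-heal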

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003 and is going to be migrated onto new hardware running SBS 2008. Currently I'm seeing, on average, 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at a 70/30 read-to-write ratio). I'm considering a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters as in my calculations above, I'm getting 583 average random IOPS/sec. Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disks, and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, well-performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (Given similar technology, not going from DAS to SAN or anything.)
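
    For readers following the arithmetic, the usual back-of-envelope model (a sketch; the per-disk figure is a rule of thumb, not a measurement) is:

        effective IOPS = (disks x per-disk IOPS) / (read% + write% x write penalty)

        RAID 6 has a write penalty of ~6, RAID 10 of ~2. For the 6-disk
        7200 RPM RAID 6 at ~100 IOPS/disk and a 70/30 mix:
            (6 x 100) / (0.7 + 0.3 x 6) = 600 / 2.5 = 240
        which lines up with the ~245 IOPS observed above.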

  • Google: "302 Moved" in Firefox

    - by Virtlink
    For some Google search queries executed from the Firefox search bar, or manually by typing in the URL, I get a ''302 Moved'' page. I did a quick virus scan, and have checked the hosts file and the plugins and add-ons that are installed in Firefox. Nothing is out of the ordinary. What could be the problem? These URLs (and any URL with google.com, empty and firefox-a in them) show me a 302 Moved page:

        https://www.google.com/search?q=c%23+empty+array&client=firefox-a
        https://www.google.com/search?q=empty&client=firefox-a

    Whereas these URLs work fine:

        https://www.google.com/search?q=c%23+empty+array            (no firefox-a)
        https://www.google.com/search?q=empty                       (no firefox-a)
        https://www.google.com/search?q=c%23+array&client=firefox-a (no empty)
        https://www.google.nl/search?q=c%23+empty+array&client=firefox-a (no .com)

    By default Google redirects my queries to their .nl website, and uses HTTPS. I am currently executing a full system virus scan using Security Essentials. My Firefox plugins were up to date. No unfamiliar Firefox plugins or add-ons were found. Restarting Firefox did not solve the issue. The issue does not occur in Internet Explorer. The hosts file did not contain any unfamiliar entries.
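
    One way to see where the 302 actually points, independently of the browser (a sketch using curl, if available):

        # Sketch: show the status line and Location header for a failing query.
        curl -sI "https://www.google.com/search?q=empty&client=firefox-a" | grep -i -E '^(HTTP|Location)'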

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc. All versions the same as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else gotten MediaWiki 1.15 working on OS X 10.6 with MacPorts? Anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
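
    Given that the backtrace shows Language::loadLocalisation(NULL), one thing worth checking (an assumption, not a confirmed fix) is that the wiki's language code survived the migration; in LocalSettings.php it is set like this:

        <?php
        // Sketch: the exception complains about a NULL language code, so make
        // sure this setting is present and valid in LocalSettings.php.
        $wgLanguageCode = "en";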

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4GB fibre channel extension cards, and a quite old IBM DS4500 disk array with two controllers. At the moment we have fibre channel as the only boot option, and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but in our team we have people who will understand the technical stuff. Don't let my post confuse you :)

    The current situation is that we have only one fibre channel connection to a single disk array controller. A real-life setup would of course be quite different: at minimum we should have two fibre channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tool. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available and achieve fault tolerance when any of them goes down. Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support, and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but that is not an option for our project. Actual questions:

    - What is the recommended multipath software for Ubuntu Server 12.04?
    - Are there pre-made configurations or templates available? Does it require disk array / controller specific settings, or does a more generic config work? (See the sketch below.)
    - Do you have experience with a similar setup, and would you like to share the knowledge?

    I'll provide you with any additional information you might require. Thanks in advance.
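
    As a starting point, DM-MPIO (the multipath-tools package) is the standard choice on Ubuntu, and a sketch of the setup follows. The DS4500-specific stanza values are assumptions to verify against IBM's interoperability documentation, not a tested configuration:

        # Sketch: install the DM-MPIO userspace tools; multipath-tools-boot is
        # needed when the root filesystem itself sits on a multipath device.
        sudo apt-get install multipath-tools multipath-tools-boot

        # /etc/multipath.conf - a minimal starting point
        defaults {
            user_friendly_names yes
        }
        devices {
            device {
                vendor               "IBM"
                product              "1742*"      # DS4x00 family id; verify for DS4500
                hardware_handler     "1 rdac"
                prio                 rdac
                path_grouping_policy group_by_prio
                failback             immediate
            }
        }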

  • How do I configure custom URL handlers on OS X?

    - by cwd
    I've been reading a lot online about custom URL handlers / custom protocol handlers, such as:

        Launching External Applications using Custom Protocols under OSX
        OS X URL handler to open links to local files

    I get that you can tell the system that a particular program is able to handle a certain scheme / protocol with the Info.plist file:

        <key>CFBundleURLTypes</key>
        <array>
          <dict>
            <key>CFBundleURLName</key>
            <string>Local File</string>
            <key>CFBundleURLSchemes</key>
            <array>
              <string>local</string>
            </array>
          </dict>
        </array>
        <key>NSUIElement</key>
        <true/>

    But if there are multiple applications that are capable of opening the same URL handler, such as mailto:, how do you specify which one you want the system to use? There were some references to utilities like the More Internet preference pane, which no longer seems to be available from the author's site. I did find it online by Googling, but it seems a bit shaky, like it was written for an older OS X, perhaps Tiger. I haven't been able to find information on how to set the URL handler for protocols and custom protocols. I'm assuming there is a plist file somewhere that I can edit, or maybe there is a newer, better utility that works well with Mountain Lion?
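
    There is indeed such a plist: Launch Services keeps per-scheme bindings in ~/Library/Preferences/com.apple.LaunchServices.plist, in an LSHandlers array. A sketch of what an entry looks like (hand-editing it is an assumption that works on Mountain Lion-era systems; Launch Services may need a restart or re-login to pick up the change):

        <key>LSHandlers</key>
        <array>
          <dict>
            <key>LSHandlerURLScheme</key>
            <string>mailto</string>
            <key>LSHandlerRoleAll</key>
            <string>com.apple.mail</string>
          </dict>
        </array>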

  • Arrays in Puppet

    - by paweloque
    I'm wondering how to solve the following Puppet problem: I want to create several files based on an array of strings. The complication is that I want to create multiple directories with the files:

        dir1/
          fileA
          fileB
        dir2/
          fileA
          fileB
          fileC

    The problem is that file resource titles must be unique. So if I keep the file names in an array, I need to iterate over the array in a custom way to be able to postfix the file names with the directory name:

        $file_names = ['fileA', 'fileB']
        $file_names_2 = [$file_names, 'fileC']

        file { 'dir1':
          ensure => directory,
        }
        file { 'dir2':
          ensure => directory,
        }
        file { $file_names:
          path   => 'dir1',
          ensure => present,
        }
        file { $file_names_2:
          path   => 'dir2',
          ensure => present,
        }

    This won't work because the file resource titles clash. So I need to append e.g. the dir name to the file title; however, this will cause the array of files to be concatenated and not treated as multiple files... arghh:

        file { "${file_names}-dir1":
          path   => 'dir1',
          ensure => present,
        }
        file { "${file_names_2}-dir2":
          path   => 'dir2',
          ensure => present,
        }

    How do I solve this problem without repeating the file resource itself? Thanks
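
    The usual idiom in the classic Puppet DSL used above (a sketch, not a confirmed solution from the original thread) is a defined type whose title carries the full path, so titles stay unique without concatenating the array. regsubst maps over arrays, which makes the per-directory prefixing a one-liner:

        # Sketch: one defined-type instance per file, titled by its full path.
        define managed_file() {
          file { $name:          # $name is e.g. "dir1/fileA"
            ensure => present,
          }
        }

        $file_names   = ['fileA', 'fileB']
        $file_names_2 = ['fileA', 'fileB', 'fileC']

        # regsubst applied to an array prefixes every element.
        managed_file { regsubst($file_names,   '^', 'dir1/'): }
        managed_file { regsubst($file_names_2, '^', 'dir2/'): }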

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID 5 to a 6-disk software RAID 6 array. At the time of the migration I did not have all 6 drives; more specifically, the fourth and fifth (slots 3 and 4) drives were already in use in the originating array, so I created the RAID 6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID 6 configuration, with one glitch: the new drives are placed in new slots, which results in this /proc/mdstat snippet:

        ...
        md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
              25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
        ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number. I don't suppose there is some standard way to manipulate the RAID superblock?
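
    For anyone comparing the two views, the superblock's idea of the slot can be read per member like this (a sketch; the bracketed numbers in /proc/mdstat are the kernel's internal device numbers, which for 1.x superblocks are distinct from the role/slot shown here):

        # Sketch: show the role (slot) stored in each member's superblock.
        mdadm -E /dev/sde1 | grep -E 'Device Role|Array State'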

  • Reusing a RAID 5 Drive?

    - by User125
    We have two servers (an ML530 G2 and a DL380 G2) with identical HP 10K RPM SCSI drives in RAID 5. One is decommissioned, and the other will be decommissioned shortly. However, one of the drives on the production server had a drive failure. My hope was to take one of the drives from the decommissioned server and pop it into the production server. Both are running RAID 5. I broke the array on the decommissioned server; to my knowledge, that should have wiped out all the volume and partition information. However, I do not know whether it is then safe to take a drive from the decommissioned server and replace the failed drive with it. Will the existing array see it as a replacement drive, wipe it, and rebuild? Or will it fail because it was used in an array before? Is there any remnant data that resides on the drives after deleting a RAID 5 array? These servers are 10-15 years old, so we're just trying to keep them alive until we decommission them. I'm not looking to pay a premium to find a vendor that still sells replacement drives for this system.

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two-disk RAID 0 software raid (not hardware raid) that is reported to have failed during boot, and my OS won't start. Using a Live CD, I get the following output:

        sudo mdadm -E /dev/sdc1 /dev/sdd1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                   Name : HP-xw9300:0
          Creation Time : Sun Sep  1 15:22:26 2013
             Raid Level : -unknown-
           Raid Devices : 0

         Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB)
            Data Offset : 16 sectors
           Super Offset : 8 sectors
                  State : active
            Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af

            Update Time : Sun Jun  8 12:35:11 2014
               Checksum : a37407ff - correct
                 Events : 1

           Device Role : spare
           Array State :  ('A' == active, '.' == missing)

        /dev/sdd1:
                  Magic : a92b4efc
                Version : 1.2
            Feature Map : 0x0
             Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                   Name : HP-xw9300:0
          Creation Time : Sun Sep  1 15:22:26 2013
             Raid Level : -unknown-
           Raid Devices : 0

         Avail Dev Size : 976771056 (465.76 GiB 500.11 GB)
            Data Offset : 16 sectors
           Super Offset : 8 sectors
                  State : active
            Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348

            Update Time : Sun Jun  8 13:06:19 2014
               Checksum : 8883c492 - correct
                 Events : 1

           Device Role : spare
           Array State :  ('A' == active, '.' == missing)

    GParted lists both disks, detects the flags as 'raid', and lists the data usage. Can anyone please help me re-assemble the array, just so that I can copy off some of the data that I have not backed up recently? Thanks
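
    Since both superblocks have lost the RAID level and device count, normal assembly will not work as-is. The usual last-resort approach (a sketch only: work on dd images or overlays first, because a wrong device order or chunk size here destroys data; 512K is merely mdadm's modern default) is to re-create the metadata in place with the original parameters and mount read-only:

        # Sketch: re-create RAID0 metadata with the original order and chunk size,
        # then mount read-only. RAID0 creation has no resync, so only the
        # superblocks are rewritten, not the data blocks.
        mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 \
              --metadata=1.2 /dev/sdc1 /dev/sdd1
        mount -o ro /dev/md0 /mnt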
