Search Results

Search found 3602 results on 145 pages for 'jagged arrays'.

Page 104 of 145

  • Storage product testing

    - by wildchild
    Hello, I know this is out of place (being an active member here, I am coming to the seniors for help), but I need some information regarding storage testing: testing of RAID arrays, SCSI, SAS, SATA, and also tests carried out on Fabric Manager (Cisco MDS series switches). I am aware that this is an administrative forum, and I would really appreciate it if you could direct me to the correct forum or links where I can learn these things. @moderators: sorry for posting in the wrong place; I will delete this as soon as I get the help. Thanks!

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era NetWare 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:
    - RAID 1: 2 x 17GB SCSI disks (Seagate ST318417W)
    - RAID 5: 3 x 4GB SCSI disks (2 Seagate ST34573W and 1 ST34572W)
    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given how close together these failures were, I have serious doubts that I'll be able to avoid catastrophic failure of this server through the November target as is without restoring the RAID redundancy -- it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried just swapping them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into NetWare, at which point I can use CI/O Array Management Software Version 2.0 to look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking for a new 4GB SCSI drive, or even which exact SCSI variant I'm looking for, as it has gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, so it requires lower-level knowledge of NetWare and DOS than I ever developed, or have since forgotten if I did (I'm not exactly a DOS neophyte, either). Part of my problem is that this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes with a similar controller or building a Hyper-V VM from an old NetWare server, a line on a floppy with better software for the RAID controller, a recommendation for a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
    Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things.

    Update 2: Just a progress report: I currently have a working NetWare 3.12 install in VMware Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty NetWare volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and NetWare volumes on my existing server, figuring out from that information what modules need to be added to NetWare, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then.

    Final Update (Jan 5, 2011): I was able to get spares working in both RAID arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMware Server 2.0. The spare can run and use our ERP software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • ZFS SAS/SATA controller recommendations

    - by ewwhite
    I've been working with OpenSolaris and ZFS for 6 months, primarily on a Sun Fire x4540 and standard Dell and HP hardware. One downside to standard Perc and HP Smart Array controllers is that they do not have a true "passthrough" JBOD mode to present individual disks to ZFS. One can configure multiple RAID 0 arrays and get them working in ZFS, but it impacts hotswap capabilities (thus requiring a reboot upon disk failure/replacement). I'm curious as to what SAS/SATA controllers are recommended for home-brewed ZFS JBODs. In addition, how does battery-backed write cache (BBWC) play into the solution?

    Read the article

  • SQL Server Transaction Log RAID

    - by Eric Maibach
    We have three SQL Server servers, and each server has about five or six databases on it. We are in the process of moving these servers to a new SAN, and I am working on the best RAID configuration. Currently all of the log files for all of the databases share one RAID array; there is nothing else on this RAID array except for the log files, but all of the databases use this same array for their log files. I have read that it is best to have log files on separate disks. But in our case I am not sure whether it would be best to have one big array of about 8 drives that all the log files are on, or whether it would be better to create four two-disk arrays and give some of the larger databases their own dedicated disks for their log files.

    Read the article

  • iSCSI RAID1 on two servers, fail scenario

    - by Franz Kafka
    Hello, a simple question: imagine I have two servers, and each server has two disks in RAID 1. Now I merge the two arrays with iSCSI into one RAID 1 disk. Two questions: Can I do the merging of the 4 disks in one go? I can't imagine how. First I will have to install the OS, and by then the RAID controller is already set up for RAID 1. If a whole server fails, would the other server continue working without any problems? Does iSCSI notice that the other server is missing and treat this as if the two disks were broken? When the server comes back online, is the data resynced, as if I had installed new disks into an array? Can I picture it that way? Thanks a lot.

    Read the article

  • How to perform fresh linux install while preserving software raid and user accounts

    - by slayton
    I have a system with two software raid arrays. The OS is Ubuntu 9.04 and is no longer receiving updates. I'd like to update the system to 12.04 rather than trying to do the automatic update from 9.04-> 9.10-> ... -> 12.04. My main drive has 2 partitions that are mounted at / and /home. Is it possible to do a fresh install of linux to the partition where / is mounted while preserving user accounts and preferences (such as passwords, home dir locations, etc...)? Additionally what do I need to do to keep my software raid array intact following the OS re-install?

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears to work to an extent, in that it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:
    - powertop: best for write-access logging, but more focused on CPU activity
    - iotop: shows real-time disk access by process, but not the file name
    - lsof: shows the open files per process, but not real-time file access
    - iostat: shows the real-time I/O performance of disks/arrays but does not indicate file or process

    Read the article

  • Terrible Performance with SATA Drives on Dell PowerEdge, steps to troubleshoot?

    - by Tom
    I had asked this question earlier and the question went missing, so here it is again. We bought a Dell PowerEdge 2950 to use as an in-house QA server. Disk performance is beyond terrible: 1000-4000 ms response time on the drive with our SQL Server database .mdf, and the SQL Server disk queue is upwards of 300 at times. I'm a software guy; can anyone help me with steps to determine the issue? I don't know what RAID controller it has; how can I determine that? I'm speculating it could be a BIOS issue. Perhaps the server used to have another kind of drive in it, and when I added SATA the ??? buffer size is wrong??? Perhaps I chose the wrong options (I chose defaults) when setting up the RAID 1 arrays? I thought RAID 1 was a performance array?

    Read the article

  • Any experience with SATA SAS Interposer Cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has had some experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS-connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition. The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless? Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than 7200 rpm SATA drives. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?

    Read the article

  • FreeNAS/ZFS/Raid-Z slightly different disks

    - by muskratt
    I'm considering using FreeNAS and "recycling" some of my older 1TB disks. Two are the exact same Western Digital model, while another is a Seagate and the fourth is a Samsung. Typically, since all disks are not equal, I'll create my arrays on a Windows-based server 1GB undersized, to prevent a replacement disk from not being large enough. Dell is notorious for sending replacement SATA disks of a different brand---knock on wood, no problems yet. Since not all drives are created equal and they can vary by a few MB, is there a way to make FreeNAS/ZFS/RAID-Z work the same way I do it for my Windows-based servers above? Thanks

    Read the article

  • 5 x 3GB drives and 4 x 1500GB drive best raid setup?

    - by Zen_Silence
    Hello, I am building a file server. My plan is to have the operating system on one RAID partition and the data storage on another partition. I currently have 5 x 3GB IDE drives that I would like to put the operating system on. These drives are old, but that doesn't matter to me at the moment; I have a ton of them, so for this RAID partition I would probably want to be able to pull out a dead drive and rebuild the array. My file partition is going to consist of 4 x 1.5TB SATA drives; I would like the maximum storage with some redundancy. Any suggestions as to which RAID level I should use would be greatly appreciated, and it would also help if you could suggest a PCI or PCI-e RAID controller to handle these arrays. Thanks in advance, Zen_Silence

    Read the article

  • Remove values from array on foreach PHP

    - by user104531
    I have an array like this:

        Array
        (
            [0] => Array
                (
                    [id] => 68
                    [type] => onetype
                    [type_id] => 131
                    [name] => name1
                )
            [1] => Array
                (
                    [id] => 32
                    [type] => anothertype
                    [type_id] => 101
                    [name] => name2
                )
        )

    I need to remove some of the inner arrays from it, depending on whether or not the user has permission to see that type. I am thinking of doing it with a foreach, with the needed ifs inside it to remove each entry or leave it as is. My question is: what's the most efficient way to do this? The array will have no more than 100 records, but several users will request it and do the filtering over and over.
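
    For illustration, here is a minimal sketch of the filter-in-place pattern being described, written in Java since that is the language used by the other code on this page; the "type" field name and the canSee permission check are hypothetical stand-ins, not part of the original question:

        import java.util.Iterator;
        import java.util.List;
        import java.util.Map;
        import java.util.function.Predicate;

        public final class TypeFilter {
            // Removes entries whose "type" the current user is not allowed to see.
            // canSee is a hypothetical permission check supplied by the caller.
            public static void filterByPermission(List<Map<String, Object>> rows,
                                                  Predicate<String> canSee) {
                Iterator<Map<String, Object>> it = rows.iterator();
                while (it.hasNext()) {
                    String type = (String) it.next().get("type");
                    if (!canSee.test(type)) {
                        it.remove(); // safe removal while iterating
                    }
                }
            }
        }

    For a list of this size (under 100 entries), a single linear pass like this is already about as cheap as the filtering can get; if many users repeat the same filtering, caching the filtered result per permission set is where the real savings would come from.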

    Read the article

  • RAID 10, how layout works ?

    - by Bastien974
    I'm trying to figure out how exactly RAID 10 works in Linux with mdadm. I want to create a RAID 10 out of 4 partitions, let's say a, b, c and d. a and b are on array 1, c and d on array 2. So what I want is to have the pairs a and b, c and d in RAID 0, and then, on top of that, a RAID 1. The option in the mdadm command to configure the layout is -p, --layout, with the values near, far or offset (see here). I want to keep my data safe if, for example, array 1 fails; that would mean that every chunk of data is always copied on both arrays. How do I have to set up my RAID 10, near or far?

    Read the article

  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files and DB transaction logs on separate discs/arrays. The general recommendation is to use RAID1 for OS, RAID10 for data (or RAID5 if load is very read-biased) and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not SAN.

    Read the article

  • How to remove a drive from a 2-drive RAID 5 array?

    - by DrSAR
    There is some information available on shape changes in RAID arrays, but I'm a little nervous and would like confirmation. Problem: I have 2 500GB drives in software RAID 5 (mdadm). I would like to free one of the two drives, since RAID redundancy is for wimps... Can I just do mdadm --grow --array-size=1 followed by mdadm --grow --raid-disks 1? This seems too simple. How would I specify which drive gets freed? Part of the reason for this maneuver is that I don't have additional space to run a backup.

    Read the article

  • HP DL380 Losing Drive Array

    - by jidl
    I have an HP ProLiant DL380 G7 dropping one of its arrays every hour, on the hour, for 2-5 minutes. The OS is SBS 2011 Standard, and the server runs Exchange, a DC, file shares and Trend WFBS 8. I can watch the D: drive disappear for the duration of the problem - then it just comes back up and all is well again. There is no loss of network connectivity, although the mapped drives also disappear. We thought it might have to do with SharePoint / VSS writers failing, but it looks as though this is a symptom rather than the cause. It survives a reboot. Any ideas as to what could be running on a regular schedule like this?

    Read the article

  • SAN for Medium Business - Where to start? [closed]

    - by Henson
    I've always run Linux on my home computers and done PC repair for years, but this is my first experience with needing to buy a SAN. I thought I was knowledgeable, but I feel a bit lost. I need to be able to support 25 VMs, which are currently managed through vSphere. The company I'm at is growing quickly, though, so I'd like to plan for the future. Ideally, I want a solution that I can just tack arrays onto and manage as one large iSCSI drive. Suggestions? Good resources? If I can find something that appears to software as one large drive, am I better off going with a solution like FreeNAS or Starwind, or an all-in-one proprietary solution like NetApp? Cost is (of course, and always, I'm sure) an issue.

    Read the article

  • 5x5 matrix multiplication in C

    - by Rick
    I am stuck on this problem in my homework. I've made it this far and am sure the problem is in my three for loops. The question directly says to use 3 for loops, so I know this is probably just a logic error.

        #include <stdio.h>

        void matMult(int A[][5], int B[][5], int C[][5]);
        int printMat_5x5(int A[5][5]);

        int main()
        {
            int A[5][5] = {{1,2,3,4,6},
                           {6,1,5,3,8},
                           {2,6,4,9,9},
                           {1,3,8,3,4},
                           {5,7,8,2,5}};
            int B[5][5] = {{3,5,0,8,7},
                           {2,2,4,8,3},
                           {0,2,5,1,2},
                           {1,4,0,5,1},
                           {3,4,8,2,3}};
            int C[5][5] = {0};

            matMult(A,B,C);
            printMat_5x5(A);
            printf("\n");
            printMat_5x5(B);
            printf("\n");
            printMat_5x5(C);
            return 0;
        }

        void matMult(int A[][5], int B[][5], int C[][5])
        {
            int i;
            int j;
            int k;
            for(i = 0; i <= 2; i++)
            {
                for(j = 0; j <= 4; j++)
                {
                    for(k = 0; k <= 3; k++)
                    {
                        C[i][j] += A[i][k] * B[k][j];
                    }
                }
            }
        }

        int printMat_5x5(int A[5][5])
        {
            int i;
            int j;
            for (i = 0; i < 5; i++)
            {
                for(j = 0; j < 5; j++)
                {
                    printf("%2d", A[i][j]);
                }
                printf("\n");
            }
        }

    EDIT: Here is the question, sorry for not posting it the first time.

    (2) Write a C function to multiply two five by five matrices. The prototype should read void matMult(int a[][5], int b[][5], int c[][5]); The resulting matrix product (a times b) is returned in the two dimensional array c (the third parameter of the function). Program your solution using three nested for loops (each generating the counter values 0, 1, 2, 3, 4). That is, DO NOT code specific formulas for the 5 by 5 case in the problem, but make your code general so it can be easily changed to compute the product of larger square matrices. Write a main program to test your function using the arrays

        a:
        1 2 3 4 6
        6 1 5 3 8
        2 6 4 9 9
        1 3 8 3 4
        5 7 8 2 5

        b:
        3 5 0 8 7
        2 2 4 8 3
        0 2 5 1 2
        1 4 0 5 1
        3 4 8 2 3

    Print your matrices in a neat format using a C function created for printing five by five matrices. Print all three matrices. Generate your test arrays in your main program using the C array initialization feature.

    Read the article

  • Visual C++ 2010 Winform Errors C2238, C2059, C1075.

    - by tracelez
    It errors when I try compile. I cut the code out of the program and the program works and complies correctly. Not sure why it doesn't like this. This part of the code does single digit math with numeric strings that are converted into a char arrays. ** Error 2 error C2238: unexpected token(s) preceding ';' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++ ** Error 1 error C2059: syntax error : 'namespace' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++ ** Error 3 error C1075: end of file found before the left brace '{' at 'c:\users\alpha\documents\visual studio 2010\projects\win32 form c++\win32 form c++\Form1.h(40)' was matched C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 23 1 Win32 Form c++ ////// ////// // chX[] and chY[] are char arrays from functional part of program std::reverse( chX, &chX[ strlen( chX ) ] ); std::reverse( chY, &chY[ strlen( chY ) ] ); // makes sure x is larger or equal to y...makes looping logic easier if (strlen(chX) < strlen(chY)) { char *chZ = chX; chX = chY; chY = chZ; } //Variables for this part of the program char chX2; char chY2; std::string strSum; int sum = 0; bool carryth1 = false; int x=0; int y=0; for (int i = 0; i <= (strlen(chX)-1); i++) { if (i <= strlen(chY)-1) { chX2= chX[i]; chY2= chY[i]; x = atoi(chX2); y = atoi(chY2); //x = atoi(chX[i]); sum = x+y+(int)carryth1; carryth1 = false; if (sum > 9) { if(i == 0) { sum -=10; strSum = itoa(sum); carryth1 = true; } else { sum -=10; strSum += itoa(sum); carryth1 = true; } } else { if(i == 0) { strSum = itoa(sum); } else { strSum += itoa(sum); } } else { y = 0; chX2= chX[i]; x = atoi(chX2); sum = x+y+(int)carryth1; if((i == strlen(chX)-1)&& (carryth1 == true) && (x == 9)) { strSum += "10"; } else { strSum = itoa(sum); } } std::reverse( strSum, &strSum[ strlen( strSum ) ] ); //Creates new string for txtDisplay this simplifies conversions String^ strDisplay = "X is " + gcnew String((strX1.c_str())) + " Y is " +gcnew String((strY1.c_str())) + " \r\n " ; strDisplay += "The sum of the X + Y = "; txtDisplay->Text = gcnew String((strSum.c_str())) ;

    Read the article

  • Save blob to DB using hibernate

    - by Link123
    Hey! I tried save file to MySQL using blob with hibernate3. But I always have java.lang.UnsupportedOperationException: Blob may not be manipulated from creating session org.hibernate.lob.BlobImpl.excep(BlobImpl.java:127) Here some code. package com.uni.domain; public class File extends Identifier { private byte[] data; private String contentType; public byte[] getData() { return data; } public File() {} public void setData(byte[] photo) { this.data = photo; } public boolean isNew() { return true; } public String getContentType() { return contentType; } public void setContentType(String contentType) { this.contentType = contentType; } } package com.uni.domain; import org.hibernate.Hibernate; import org.hibernate.HibernateException; import org.hibernate.usertype.UserType; import java.io.InputStream; import java.io.OutputStream; import java.io.Serializable; import java.sql.*; import java.util.Arrays; public class PhotoType implements UserType { public int[] sqlTypes() { return new int[]{Types.BLOB}; } public Class returnedClass() { return byte[].class; } public boolean equals(Object o, Object o1) throws HibernateException { return Arrays.equals((byte[]) o, (byte[]) o1); } public int hashCode(Object o) throws HibernateException { return o.hashCode(); } public Object nullSafeGet(ResultSet resultSet, String[] strings, Object o) throws HibernateException, SQLException { Blob blob = resultSet.getBlob(strings[0]); return blob.getBytes(1, (int) blob.length()); } public void nullSafeSet(PreparedStatement st, Object value, int index) throws HibernateException, SQLException { st.setBlob(index, Hibernate.createBlob((byte[]) value)); } public Object deepCopy(Object value) { if (value == null) return null; byte[] bytes = (byte[]) value; byte[] result = new byte[bytes.length]; System.arraycopy(bytes, 0, result, 0, bytes.length); return result; } public boolean isMutable() { return true; } public Serializable disassemble(Object o) throws HibernateException { return null; . } public Object assemble(Serializable serializable, Object o) throws HibernateException { return null; . } public Object replace(Object o, Object o1, Object o2) throws HibernateException { return null; . } <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="com.uni.domain"> <class name="com.uni.domain.File"> <id name="id"> <generator class="native"/> </id> <property name="data" type="com.uni.domain.FleType"/> <property name="contentType"/> </class> </hibernate-mapping> Help me please. Where I’m wrong?

    Read the article

  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable: @Immutable public final class TokenList { private final List<Token> tokens; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); } public List<Token> getTokens() { return tokens; } } I do several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (ie the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable then hashCode could be cached, but equals will still be expensive). This led me to wondering whether I could use a UUID to provide equals() and hashCode() simply and cheaply by making the following updates to TokenList: @Immutable public final class TokenList { private final List<Token> tokens; private final UUID uuid; public TokenList(List<Token> tokens) { this.tokens = Collections.unmodifiableList(new ArrayList(tokens)); this.uuid = UUID.randomUUID(); } public List<Token> getTokens() { return tokens; } public UUID getUuid() { return uuid; } } And something like this to act as a cache key: @Immutable public final class TopicListCacheKey { private final UUID[] uuids; public TopicListCacheKey(TopicList... topicLists) { uuids = new UUID[topicLists.length]; for (int i = 0; i < uuids.length; i++) { uuids[i] = topicLists[i].getUuid(); } } @Override public int hashCode() { return Arrays.hashCode(uuids); } @Override public boolean equals(Object other) { if (other == this) return true; if (other instanceof TopicListCacheKey) return Arrays.equals(uuids, ((TopicListCacheKey) other).uuids); return false; } } I figure that there are 2^128 different UUIDs and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance costs of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way?? Thanks.
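
    A minimal sketch of how the UUID-keyed cache described above might be wired up, assuming a plain HashMap and a hypothetical expensiveMerge() operation; the original post mixes the names TokenList and TopicList, so the sketch assumes the key constructor accepts TokenList throughout:

        import java.util.HashMap;
        import java.util.Map;

        public final class MergeCache {
            private final Map<TopicListCacheKey, TokenList> cache = new HashMap<>();

            public TokenList merge(TokenList... inputs) {
                // Key construction and lookup touch only the UUIDs, never the Token elements.
                TopicListCacheKey key = new TopicListCacheKey(inputs);
                TokenList cached = cache.get(key);
                if (cached != null) {
                    return cached; // cache hit: no element-wise equals() over the token lists
                }
                TokenList result = expensiveMerge(inputs);
                cache.put(key, result);
                return result;
            }

            // Placeholder for the real, expensive combination of TokenLists.
            private TokenList expensiveMerge(TokenList... inputs) {
                throw new UnsupportedOperationException("not part of this sketch");
            }
        }

    If several threads share the cache, a ConcurrentHashMap (for example via computeIfAbsent) would be the natural substitute for the plain HashMap in this sketch.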

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but more general, so please bear with me: How should I organize the data I'm analyzing, and which tools should I use to manage it? I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also due to the fact that the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the precedent level: [AS091209M02] [AS091209M01] [AS090901M06] ... [100113] [100211] [100128] [100121] [R16] [R17] [R03] [R15] [R05] [R04] [R07] ... [1263399103] ... [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ... [N01] [N04] ... [Sequential] [Randomized] [Ch1] [Ch2] Edit: To explain a bit better my data set: [individual] ex: [AS091209M02] [imaging session (date string)] ex: [100113] [Region imaged] ex: [R16] [timestamp of file] ex [1263399103] [properties of file] ex: [Responses] [regions of interest in image ] ex [N01] [format of data] ex [Sequential] [channel of acquisition: this key indexes an array of values] ex [Ch1] The type of operations I perform is for instance to compute properties of the arrays (listed under Ch1, Ch2), pick up arrays to make a new collection, for instance analyze responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's ram : half a gig). My problem comes from the cumbersome manner in which I need to program the operations of the dictionary. I often have stretches of code that go like this: for mk in dic.keys(): for rgk in dic[mk].keys(): for nk in dic[mk][rgk].keys(): for ik in dic[mk][rgk][nk].keys(): for ek in dic[mk][rgk][nk][ik].keys(): #do something which is ugly, cumbersome, non reusable, and brittle (need to recode it for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end the only recursive function I use regularly is the following: def dicExplorer(dic, depth = -1, stp = 0): '''prints the hierarchy of a dictionary. if depth not specified, will explore all the dictionary ''' if depth - stp == 0: return try : list_keys = dic.keys() except AttributeError: return stp += 1 for key in list_keys: else: print '+%s> [\'%s\']' %(stp * '---', key) dicExplorer(dic[key], depth, stp) I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught in regards to programming, I lack practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and efforts into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and be able to code my tools in a more brief, flexible and re-usable manner?

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of C++ code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x:

        #include <algorithm>
        #include <ctime>
        #include <iostream>

        int main()
        {
            // generate data
            const unsigned arraySize = 32768;
            int data[arraySize];

            for (unsigned c = 0; c < arraySize; ++c)
                data[c] = std::rand() % 256;

            // !!! with this, the next loop runs faster
            std::sort(data, data + arraySize);

            // test
            clock_t start = clock();
            long long sum = 0;

            for (unsigned i = 0; i < 100000; ++i)
            {
                // primary loop
                for (unsigned c = 0; c < arraySize; ++c)
                {
                    if (data[c] >= 128)
                        sum += data[c];
                }
            }

            double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

            std::cout << elapsedTime << std::endl;
            std::cout << "sum = " << sum << std::endl;
        }

    Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly, so I tried it in Java:

        import java.util.Arrays;
        import java.util.Random;

        public class Main
        {
            public static void main(String[] args)
            {
                // generate data
                int arraySize = 32768;
                int data[] = new int[arraySize];

                Random rnd = new Random(0);
                for (int c = 0; c < arraySize; ++c)
                    data[c] = rnd.nextInt() % 256;

                // !!! with this, the next loop runs faster
                Arrays.sort(data);

                // test
                long start = System.nanoTime();
                long sum = 0;

                for (int i = 0; i < 100000; ++i)
                {
                    // primary loop
                    for (int c = 0; c < arraySize; ++c)
                    {
                        if (data[c] >= 128)
                            sum += data[c];
                    }
                }

                System.out.println((System.nanoTime() - start) / 1000000000.0);
                System.out.println("sum = " + sum);
            }
        }

    The result was similar, but less extreme. My first thought was that sorting brings the data into cache, but then I thought how silly that is, because the array was just generated. What is going on? Why is a sorted array faster to process than an unsorted array? The code is summing up some independent terms, so the order should not matter.

    Read the article

  • Parallel processing via multithreading in Java

    - by Robz
    There are certain algorithms whose running time can decrease significantly when one divides up a task and gets each part done in parallel. One of these algorithms is merge sort, where a list is divided into infinitesimally smaller parts and then recombined in a sorted order. I decided to do an experiment to test whether or not I could I increase the speed of this sort by using multiple threads. I am running the following functions in Java on a Quad-Core Dell with Windows Vista. One function (the control case) is simply recursive: // x is an array of N elements in random order public int[] mergeSort(int[] x) { if (x.length == 1) return x; // Dividing the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Sending them off to continue being divided mergeSort(a); mergeSort(b); // Recombining the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } return x; } The other is in the 'run' function of a class that extends thread, and recursively creates two new threads each time it is called: public class Merger extends Thread { int[] x; boolean finished; public Merger(int[] x) { this.x = x; } public void run() { if (x.length == 1) { finished = true; return; } // Divide the array in half int[] a = new int[x.length/2]; int[] b = new int[x.length/2+((x.length%2 == 1)?1:0)]; for(int i = 0; i < x.length/2; i++) a[i] = x[i]; for(int i = 0; i < x.length/2+((x.length%2 == 1)?1:0); i++) b[i] = x[i+x.length/2]; // Begin two threads to continue to divide the array Merger ma = new Merger(a); ma.run(); Merger mb = new Merger(b); mb.run(); // Wait for the two other threads to finish while(!ma.finished || !mb.finished) ; // Recombine the two arrays int ia = 0, ib = 0, i = 0; while(ia != a.length || ib != b.length) { if (ia == a.length) { x[i] = b[ib]; ib++; } else if (ib == b.length) { x[i] = a[ia]; ia++; } else if (a[ia] < b[ib]) { x[i] = a[ia]; ia++; } else { x[i] = b[ib]; ib++; } i++; } finished = true; } } It turns out that function that does not use multithreading actually runs faster. Why? Does the operating system and the java virtual machine not "communicate" effectively enough to place the different threads on different cores? Or am I missing something obvious?
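
    For comparison only (this is not from the original post), a minimal sketch of the same divide-and-conquer structure expressed with the JDK's fork/join framework, which schedules subtasks onto a shared worker pool instead of spawning a thread per split and busy-waiting; the THRESHOLD value is an arbitrary assumption:

        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveAction;

        public class ForkJoinMergeSort extends RecursiveAction {
            private static final int THRESHOLD = 1 << 13; // below this size, sort sequentially (assumed cutoff)

            private final int[] x;
            private final int[] tmp;
            private final int lo, hi; // sorts the half-open range x[lo..hi)

            public ForkJoinMergeSort(int[] x) { this(x, new int[x.length], 0, x.length); }

            private ForkJoinMergeSort(int[] x, int[] tmp, int lo, int hi) {
                this.x = x; this.tmp = tmp; this.lo = lo; this.hi = hi;
            }

            @Override
            protected void compute() {
                if (hi - lo <= THRESHOLD) {
                    java.util.Arrays.sort(x, lo, hi); // small pieces: plain sequential sort
                    return;
                }
                int mid = (lo + hi) >>> 1;
                // Run both halves as subtasks and wait for them; no manual flags or busy-waiting.
                invokeAll(new ForkJoinMergeSort(x, tmp, lo, mid),
                          new ForkJoinMergeSort(x, tmp, mid, hi));
                merge(lo, mid, hi);
            }

            private void merge(int lo, int mid, int hi) {
                System.arraycopy(x, lo, tmp, lo, hi - lo);
                int i = lo, j = mid;
                for (int k = lo; k < hi; k++) {
                    if (i < mid && (j >= hi || tmp[i] <= tmp[j])) x[k] = tmp[i++];
                    else x[k] = tmp[j++];
                }
            }

            public static void main(String[] args) {
                int[] data = new java.util.Random(0).ints(1_000_000).toArray();
                new ForkJoinPool().invoke(new ForkJoinMergeSort(data));
                System.out.println("sorted: " + isSorted(data));
            }

            private static boolean isSorted(int[] a) {
                for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
                return true;
            }
        }

    The sequential cutoff matters here: splitting all the way down to single elements makes task-management overhead dominate, which is one reason naive thread-per-split implementations can end up slower than a plain recursive sort.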

    Read the article
