Search Results

Search found 2823 results on 113 pages for 'perforce branch spec'.


  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three major categories: culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase); invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase); and ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase). There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic, these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx and How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say: “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.” I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is “yes”, but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed. Example: Prototype Names Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries. 
    Each year the company works on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype gets a code name that begins with the letter following the previously created name in the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at “A”. The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this: Antílope (Spanish), Babouin (French), Čahoun (Czech), Diamond (English), Flosse (German). If you were to sort this list using ordinal rules you’d end up with: Antílope, Babouin, Diamond, Flosse, Čahoun. This sort is no good because the entry for “C” appears at the bottom of the list, after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison. 
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
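    The API discussion above is specific to .NET's String.Compare and StringComparison. As a rough cross-language sketch of the same ordinal-versus-invariant distinction, the following Java fragment uses Collator with Locale.ROOT as a stand-in for the invariant culture and natural String ordering as an approximation of an ordinal comparison; the class and variable names are illustrative, not from the article.

    ```java
    import java.text.Collator;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Locale;

    public class PrototypeNameSort {
        public static void main(String[] args) {
            // Prototype names from the example above; the Czech entry carries a diacritic.
            List<String> names = Arrays.asList("Diamond", "Čahoun", "Flosse", "Antílope", "Babouin");

            // Ordinal-style sort: natural String ordering compares UTF-16 code units,
            // so "Čahoun" lands after "Flosse", like the ordinal example in the post.
            List<String> ordinal = new ArrayList<>(names);
            ordinal.sort(null); // null = natural ordering (String.compareTo)
            System.out.println("Code-unit order:   " + ordinal);

            // Culture-independent linguistic sort: the root locale plays the role of the
            // invariant culture, so "Čahoun" sorts between "Babouin" and "Diamond" on
            // every machine, regardless of the default locale.
            List<String> invariant = new ArrayList<>(names);
            invariant.sort(Collator.getInstance(Locale.ROOT));
            System.out.println("Root-locale order: " + invariant);
        }
    }
    ```

    Run as-is, the first list ends with Čahoun after Flosse, while the root-locale sort places it between Babouin and Diamond on any machine, which is exactly the property the prototype-name example needs.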

    Read the article

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and the Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms. The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. The second - JSR 355, Executive Committee Merge - is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November. The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358, A major revision of the Java Community Process, is now in process and can be followed on java.net. The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads! JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012 JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012 JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012 JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) – EG Formation, migrated September 2012 JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012 JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012 JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012 JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012 JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012 JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012 JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012 JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat) – migrated December 2011

    Read the article

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c: The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.   The wallet that holds the Oracle Database user IDs and passwords stored in the ‘credential store’ stored in the new 12c dircrd/cwallet.sso file.   A wallet can be created using a ‘create wallet’  command.  Adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.   GGSCI (EDLVC3R27P0) 42> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 43> add masterkey Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.   Existing GUI Wallet utilities that come with other products such as the Oracle Database “Oracle Wallet Manager” do not work on this version of the wallet. The default Oracle Wallet can be changed.   GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/* -rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso GGSCI (EDLVC3R27P0) 45> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   The second wallet file is used for the credential used to connect to a database, without exposing the user id or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.   GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/* -rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso   The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored in a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.   GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt   The new 12c UserIdAlias parameter is used to locate the credential in the wallet so the source user id and password does not need to be stored as a parameter as long as it is in the wallet.   GGSCI (EDLVC3R27P0) 52> view param extwest extract extwest exttrail ./dirdat/ew useridalias gguamer table west.*; The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or pump extract. GGSCI (EDLVC3R27P0) 54> view param pwest extract pwest encrypttrail AES256 rmthost easthost, mgrport 15001 rmttrail ./dirdat/pe passthru table west.*;   Once the extracts are running, records can be encrypted using the wallet.   
GGSCI (EDLVC3R27P0) 60> info extract *west EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       00:00:17 (updated 00:00:01 ago) Process ID           24982 Log Read Checkpoint  Oracle Integrated Redo Logs                      2014-05-30 05:25:53                      SCN 0.0 (0) EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       24:02:32 (updated 00:00:05 ago) Process ID           24983 Log Read Checkpoint  File ./dirdat/ew000004                      2014-05-29 05:23:34.748949  RBA 1483   The ‘info masterkey’ command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.   GGSCI (EDLVC3R27P0) 41> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 42> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   Once the replicat is running, records can be decrypted using the wallet.   GGSCI (EDLVC3R27P0) 44> info reast REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING INTEGRATED Checkpoint Lag       00:00:00 (updated 00:00:02 ago) Process ID           25057 Log Read Checkpoint  File ./dirdat/pe000004                      2014-05-30 05:28:16.000000  RBA 1546   There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.   GGSCI (EDLVC3R27P0) 45> view params reast replicat reast assumetargetdefs discardfile ./dirrpt/reast.dsc, purge useridalias ggueuro map west.*, target east.*;   Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.   AMER_SQL>insert into west.branch values (50, 80071); 1 row created.   AMER_SQL>commit; Commit complete.   The following encrypted record can be found using logdump. Logdump 40 >n 2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546 Name: WEST.BRANCH After  Image:                                             Partition 4   G  s    0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1   554f e65a 5185 0257                               | UO.ZQ..W  Bad compressed block, found length of  7075 (x1ba3), RBA 1546   GGS tokens: TokenID x52 'R' ORAROWID         Info x00  Length   20  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..  TokenID x4c 'L' LOGCSN           Info x00  Length    7  3231 3632 3934 33                                 | 2162943  TokenID x36 '6' TRANID           Info x00  Length   10  3130 2e31 372e 3135 3031                          | 10.17.1501  The replicat automatically decrypted this record from the trail and then inserted the row to the target table using the wallet. This select verifies the row was inserted into the target database and the data is not encrypted. 
EURO_SQL>select * from branch where branch_number=50; BRANCH_NUMBER                  BRANCH_ZIP -------------                                   ----------    50                                              80071   Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features including how to use GoldenGate with the Oracle wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.   Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curriculums including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for. MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files. The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions: From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this that would be great. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it? Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code? Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components which depend on each other. Let's be more specific: Software component A is a firmware running on an embedded device and component B is its respective driver for a normal PC (Linux/Windows machine). They are communicating with each other using a custom protocol. Since our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks if the firmware version is compatible with the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like we were reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: We plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention will lead to the following example situation: The current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, which will cause the new unstable version number to become 1.5.0. Once the unstable branch is considered stable, let's say at 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below: Can you suggest a best practice for handling the issues described above? Do you think our "custom" convention is good? What changes would you apply to the described convention? Thanks a lot for your feedback! PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.
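    As a sketch only: the even/odd convention above is easy to encode, but the compatibility rule below (same major and minor means the same protocol API) is an assumption, since the question leaves the exact check open; all identifiers are made up for illustration, written in Java.

    ```java
    // Minimal sketch of the versioning convention described in the question.
    // The even/odd stable/unstable rule comes from the post; the compatibility rule
    // and all names are assumptions, since the exact check is what is being asked about.
    public final class ComponentVersion {
        final int major;
        final int minor;
        final int patch;

        ComponentVersion(String version) {
            String[] parts = version.split("\\.");
            this.major = Integer.parseInt(parts[0]);
            this.minor = Integer.parseInt(parts[1]);
            this.patch = Integer.parseInt(parts[2]);
        }

        /** Even minor numbers mark stable releases, odd minor numbers experimental ones. */
        boolean isStable() {
            return minor % 2 == 0;
        }

        /** Assumed rule: the protocol API only changes when major or minor changes. */
        boolean isApiCompatibleWith(ComponentVersion other) {
            return major == other.major && minor == other.minor;
        }

        public static void main(String[] args) {
            ComponentVersion driver = new ComponentVersion("1.3.7");   // unstable driver
            ComponentVersion firmware = new ComponentVersion("1.5.0"); // unstable firmware after an API change
            System.out.println(driver.isStable());                     // false (odd minor)
            System.out.println(driver.isApiCompatibleWith(firmware));  // false: API changed in 1.5.x
        }
    }
    ```

    Note that under the proposed scheme a stable 1.4.x is cut from an unstable 1.5.x, so a real check would also need a rule mapping stable minors to the unstable line they came from, which is essentially the open question in the post.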

    Read the article

  • Bazaar issues while installing Ubuntu TV

    - by Aleksi Kinnunen
    I tried to install Ubuntu TV in Ubuntu 12.04 with this guide: https://wiki.ubuntu.com/UbuntuTV/Contributing. Everything had been OK until I wrote to the terminal bzr branch lp:~s-team/ubuntutv/trunk ubuntu-tv. It says: Permission denied (publickey). ConnectionReset reading response for 'BzrDir.open_2.1', retrying Permission denied (publickey). bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on github with private repositories. We want to use github's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid work flow (as it's basically what every open source project on github does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In which case, we would want to institute a policy like that, for those developers only. So, which is better? All developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even just a fork for less-privileged devs) will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository) or does it have different behavior? EDIT I did some testing with issues, forks, and pull requests and found the following: if you create an issue on your organization's repository, then fork the repository from your organization to your own github account, make some changes, and merge them to your fork's master branch, then when you try to run hub -i <issue #> you get the error User is not authorized to modify the issue. So, apparently that work flow won't work.

    Read the article

  • Google Wave Conversation Model

    Pamela Fox explains the Google Wave conversation model - waves, wavelets, conversations, and blips. The Prezi shown is here: prezi.com. The Google Wave conversation model spec is here: www.waveprotocol.org. From: GoogleDevelopers. Time: 08:09.

    Read the article

  • A Closer Look at the T5-4 TPC-H Result

    - by Stefan Hinker
    By now many of you have probably seen the new SPARC T5-4 TPC-H result that was submitted to the TPC on June 7. As usual, the key points of this benchmark have already been summarized by our benchmark team on "BestPerf". There is quite a bit more, however, that is worth a closer look. Scalability: The TPC advises against comparing TPC-H results from different size classes. But even within the 3000GB class it is interesting: the SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH, and the SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH. That means only 1,863 QphH, or 0.45%, are missing for 100% scalability, if one assumes that twice the number of cores should deliver twice the result. Being a bit more demanding, one could of course also expect a factor of 2.4 once the higher clock rate is taken into account. That would put the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%. Which raises the question: what did not scale here? Most likely the disk storage! That, too, is worth a closer look. Disk storage: The report on BestPerf and the TPC Full Disclosure Report contain some interesting details about the disk storage and the configuration. The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering about 1.5 GB/s of throughput, roughly 18 GB/s in total. The arrays were apparently connected directly to the server's 24 8Gbit FC ports, with 2 cables per array. With the 2x 8Gbit ports per array a theoretical maximum of 2 GB/s could be reached; in practice 1.5 GB/s were delivered, which is pretty much the realistic maximum. For the SPARC T5-4 run, twice as many disks were used, with each 2540-M2 array extended by an additional disk tray. With this configuration a maximum throughput of 33 GB/s was reached (according to BestPerf), not quite double that of the SPARC T4-4 run. To actually deliver double the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its 4 8Gbit ports. The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all 4 8Gbit ports of each array were connected to the switches, which then bundled the data streams into the server's 24 16Gbit HBA ports. The theoretical maximum for each storage array would then be 4 GB/s. Once protocol and "reality" overhead are factored in, however, the 2.75 GB/s actually delivered is not bad at all. With these numbers in mind, doubling the SPARC T4-4 result is a good achievement, and at the same time a good explanation of why the system did not scale all the way to 2.4x. As a side note: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration. Competition: Ever since the T4 systems came to market, our competitors have been trying hard to leave the impression everywhere that the performance of the SPARC CPU core is still deficient. They also seem convinced that (over)sized caches and high clock rates are the only keys to real server performance. 
    But when I look at the public TPC-H results, this is what I see: TPC-H @3000GB, Non-Clustered Systems:
    SPARC T5-4 (3.6 GHz SPARC T5, 4/64, 2048 GB): 409,721.8 QphH
    SPARC T4-4 (3.0 GHz SPARC T4, 4/32, 1024 GB): 205,792.0 QphH
    IBM Power 780 (4.1 GHz POWER7, 8/32, 1024 GB): 192,001.1 QphH
    HP ProLiant DL980 G7 (2.27 GHz Intel Xeon X7560, 8/64, 512 GB): 162,601.7 QphH
    In short: with 32 cores (at 3 GHz and 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM does with its 32-core Power7 (at 4.1 GHz and 32MB L3 cache) and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache). I wonder where exactly SPARC is deficient here? Of course one could argue that neither result is exactly new. Well, in the absence of newer results, one can speculate a little: IBM's current Performance Report lists the IBM Power 780 above with an rPerf value of 425.5. A matching successor system with Power7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz. It is listed with an rPerf value of 690.1, i.e. 1.62x more. If one assumes that disk storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to raise that to 400) and takes IBM's own performance estimate as a basis, one may expect a theoretical result of 311,398 QphH@3000GB. That, however, would still be a long way from the SPARC T5-4 result, and even less favorable in the "per core" metric that IBM values so highly. In the x86 world things do not look any better. Unfortunately Intel does not publish such handy rPerf tables, so for an estimate I have to fall back on SPECint_rate2006. (I am not a big fan of such cross-estimates; SPECcpu in particular is not well suited to estimating database performance, since almost no I/O is involved.) The HP system mentioned above is listed at SPEC with 1580 CINT2006_rate. The best result as of 2013-06-14 for the new Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate. That is at least 1.38x better. (Considering only the clock rate would give 1.32x.) Carrying the arithmetic further is pointless, but for impatient readers here is a small tabular summary of the TPC-H @3000GB performance speculation:
    SPARC T4-4 (32 cores SPARC T4): 205,792 QphH, followed by SPARC T5-4 (64 cores SPARC T5): 409,721 QphH (2x improvement over the previous generation)
    IBM Power 780 (32 cores Power7): 192,001 QphH, followed by IBM Power 780+ (64 cores Power7+): 311,398* QphH (1.62x)
    HP ProLiant DL980 G7 (64 cores Intel Xeon X7560): 162,601 QphH, followed by HP ProLiant DL980 G7 (80 cores Intel Xeon E7-4870): 224,348* QphH (1.38x)
    * Not actual results - speculative values based on rPerf (Power7+) or SPECint_rate2006 (HP)
    Of course, IBM and HP are cordially invited to refute these values. But as of today I am still waiting for current benchmark publications in this segment. So what can we take away? There are several indications that disk storage was the limiting factor that kept the SPARC T5-4 from scaling beyond 2x. The myth that SPARC cores deliver no performance is exactly that: a myth. How about a TPC-H result for Power7+, by the way? Cache is not the magic performance switch some people apparently take it for. Scaling a system, a CPU architecture, and an operating system beyond a certain point is hard. 
    In the x86 world it seems to be a little harder still. What is missing? Well, I happily leave the price/performance topic to the sales people ;-) And last but not least: no, I have not moved to marketing. But sometimes I just cannot hold back... Disclosure Statements The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.
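    To make the scaling arithmetic above easy to check, here is a small Java sketch that reproduces the back-of-the-envelope numbers; the inputs are the published QphH results and the rPerf/SPECint ratios quoted in the post, and nothing in it goes beyond that arithmetic.

    ```java
    // Reproduces the rough scaling arithmetic from the post; inputs are the published
    // QphH@3000GB results and the rPerf / SPECint_rate2006 ratios quoted above.
    public class TpchScaling {
        public static void main(String[] args) {
            double t4 = 205_792.0;   // SPARC T4-4, 32 cores @ 3.0 GHz
            double t5 = 409_721.8;   // SPARC T5-4, 64 cores @ 3.6 GHz

            double coreTarget  = 2.0 * t4;               // 411,584 QphH for pure core-count scaling
            double clockTarget = 2.0 * (3.6 / 3.0) * t4; // ~493,901 QphH if clock rate is included

            System.out.printf("shortfall vs 2x:  %.0f QphH (%.2f%%)%n",
                    coreTarget - t5, 100 * (coreTarget - t5) / coreTarget); // ~1,862 QphH, ~0.45%
            System.out.printf("fraction of 2.4x: %.0f%%%n", 100 * t5 / clockTarget); // ~83%

            // Speculative estimates for refreshed competitor systems, as in the table above
            // (small differences come from rounding the quoted ratios).
            System.out.printf("Power 780+ estimate:   %.0f QphH%n", 192_001.1 * (690.1 / 425.5));
            System.out.printf("DL980 E7-4870 estimate: %.0f QphH%n", 162_601.7 * (2180.0 / 1580.0));
        }
    }
    ```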

    Read the article

  • T4 Performance Counters explained

    - by user13346607
    Now that T4 is out for a few months some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight into all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain these; for the time being those events should simply be ignored. (Side note: some counters changed from tape-out 1.1 (*only* used in the T4 beta program) to tape-out 1.2 (used in the systems shipping today). The table only lists the tape-out 1.2 counters.) pic name (cpustat) Prose Comment Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2] Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready: Cycles a strand was ready but not selected, that can signal pipeline oversubscription Sel-1: Cycles only one instruction or µop was selected Sel-2: Cycles two instructions or µops were selected Sel-pipe-drain-cycles: cf. PRM footnote 8 to table 10.2 Pick-any, Pick-[0|1|2|3] Cycles one, two, three, no or at least one instruction or µop is picked Instr_FGU_crypto Number of FGU or crypto instructions executed on that vcpu Instr_ld dto. for load Instr_st dto. for store SPR_ring_ops dto. for SPR ring ops Instr_other dto. for all other instructions not listed above, PRM footnote 7 to table 10.2 lists the instructions Instr_all total number of instructions executed on that vcpu Sw_count_intr Nr of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 (whatever that is)) Atomics nr of atomic ops, which are LDSTUB/a, CASA/XA, and SWAP/A SW_prefetch Nr of PREFETCH or PREFETCHA instructions Block_ld_st Block loads or store on that vcpu IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ misses counts only misses that are caused by instructions that really commit (note the "_nospec") BTC_miss Branch target cache miss ITLB_miss ITLB misses (synchronously counted) ITLB_miss_asynch dto. but asynchronously [I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap] H/W tablewalk events that fill ITLB or DTLB with translation for the corresponding page size. 
    The “_trap” event occurs if the HWTW was not able to fill the corresponding TLB IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch] I$ micro tag misses, with some options for drill down Fetch-0, Fetch-0-all fetch-0 counts nr of cycles nothing was fetched for this particular strand, fetch-0-all counts cycles nothing was fetched for all strands on a core Instr_buffer_full Cycles the instruction buffer for a strand was full, thereby preventing any fetch BTC_targ_incorrect Counts all occurrences of wrongly predicted branch targets from the BTC [PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|RB_LB_SB|DTLB_miss]_tag_wait ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour, therefore they are not strand specific: PQ_...: cycles Rename stage waits for a Pick Queue tag (might signal memory bound workload for single thread mode, cf. Mail from Richard Smith) ROB_...: cycles Select stage waits for a ROB (ReOrderBuffer) tag LB_...: cycles Select stage waits for a Load Buffer tag SB_...: cycles Select stage waits for Store Buffer tag combinations of the above are allowed, although some of these events can overlap, the counter will only be incremented once per cycle if any of these occur DTLB_...: cycles load or store instructions wait at Pick stage for a DTLB miss tag [ID]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all] Counters for HWTW accesses caused by either DTLB or ITLB misses. Can be further detailed by where they hit IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss I$ prefetches that were dropped because they either miss in L2$ or L3$ This variant counts misses regardless if the causing instruction commits or not DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec D$ misses either in general or detailed by where they hit, cf. the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters DTLB_miss_asynch counts all DTLB misses asynchronously, there is no way to count them synchronously DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full] L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand [Full|Partial]_RAW_hit_st_[buf|q] Count events where a load wants to get data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue, the data in the buffer takes precedence of course, since it is younger [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all Counter for invalidated cache evictions per core St_q_tag_wait Number of cycles pipeline waits for a store queue tag, of course counted per core Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote] Data prefetches that can be further detailed by either why they were dropped or where they did hit St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote Store events distinguished by where they hit or where they cause a L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die DC_miss, DC_miss_[L2_L3|local|remote]_hit D$ misses either in general or detailed by where they hit, cf. 
    the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters L2_[clean|dirty]_evict Per core clean or dirty L2$ evictions L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full Per core L2$ buffer events, all count the number of cycles that this state was present L2_pipe_stall Per core cycles pipeline stalled because of L2$ Branches Count branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_taken Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk] Counter for various branch misprediction events. Cycles_user counts cycles, attribute setting hpriv, nouser, sys controls address space to count in Commit-[0|1|2], Commit-0-all, Commit-1-or-2 Number of times either no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core, cf. footnote 11 to table 10.2 in PRM for a more detailed explanation on how this counter interacts with the privilege levels

    Read the article

  • What's the difference between MariaDB and MySQL?

    - by Chris J. Lee
    What's the difference between MariaDB and MySQL? I'm not very familiar with either. I'm primarily a front-end developer. Are they syntactically similar? Where do these two query languages differ? Wikipedia only mentions the difference in licensing: MariaDB is a community-developed branch of the MySQL database, the impetus being the community maintenance of its free status under GPL, as opposed to any uncertainty of MySQL license status under its current ownership by Oracle.

    Read the article

  • Pete Muir Interview on CDI 1.1

    - by reza_rahman
    The 109th episode of the Java Spotlight podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Pete talks with Roger Brinkley about the backdrop to CDI, his work at JBoss, the features in CDI 1.1 and what to expect in the future. What's going on behind the scenes and the possible contents for CDI 1.1+ are particularly insightful. You can listen to the full interview here.

    Read the article

  • Marek Potociar on JAX-RS 2

    - by reza_rahman
    Java EE 7 is turning the last lap! Late last month JAX-RS 2 (JSR 339) and Bean Validation 1.1 (JSR 349) were adopted by public review ballot, making them the first two JSRs to be ratified. InfoQ interviewed Marek Potociar, JSR 339 co-spec lead (Marek and Santiago Pericas-Geertsen are the dynamic duo leading JAX-RS). Marek talks about JAX-RS 2 content, significance and future. Read the full interview here.

    Read the article

  • notify-send ignores timeout?

    - by Hooked
    Maybe I'm doing something wrong, but shouldn't the commands (run separately) notify-send -t 1 "test" notify-send -t 1000 "test" notify-send -t 10000 "test" have different timeouts? The first should be nearly instantaneous, the second should last 1 second, and the third 10 seconds. In all cases it seems to take about six seconds. I'm using the development branch of Precise; if this isn't an issue in the released version I'll close this as being too localized, but I'm unable to test it now.

    Read the article

  • Clementine current song as empathy status

    - by pahnin
    Is there any plugin available to set Clementine's currently playing song as my Empathy status? [update] I've tried it and was not successful; here is my problem: http://stackoverflow.com/questions/7671763/error-setting-status-to-empathy-with-dbus/7760194#7760194 But I've figured out that it is not possible to set the status of Empathy through dbus: http://telepathy.freedesktop.org/spec/Connection_Interface_Simple_Presence.html#Method:SetPresence regards [update]

    Read the article

  • WebLogic Server internal server error [migrated]

    - by Abhinav Pandey
    When I deploy a project in Apache Tomcat 6.0 it works fine. When I deploy the same project in WebLogic Server 10.3 it shows an error: Error 500--Internal Server Error javax.servlet.ServletException: [HTTP:101249][weblogic.servlet.internal.WebAppServletContext@ae43b8 - appName: '_appsdir_ab_dir', name: 'ab', context-path: '/ab', spec-version: 'null']: Servlet class FirstServlet for servlet FirstServlet could not be loaded because the requested class was not found in the classpath . java.lang.UnsupportedClassVersionError: FirstServlet : Unsupported major.minor version 51.0.
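    For context on the message itself: "major.minor version 51.0" refers to the class-file format version, and 51 corresponds to Java SE 7, so the servlet class was most likely compiled with a newer JDK than the JVM running WebLogic Server 10.3. A small sketch (the file name is hypothetical) that reads a compiled class's header to show which version it targets:

    ```java
    import java.io.DataInputStream;
    import java.io.FileInputStream;

    // Reads the class-file header to see which JDK a class was compiled for.
    // Major version 51 corresponds to Java 7, 50 to Java 6, 49 to Java 5.
    public class ClassVersionCheck {
        public static void main(String[] args) throws Exception {
            try (DataInputStream in = new DataInputStream(new FileInputStream("FirstServlet.class"))) {
                int magic = in.readInt();            // should be 0xCAFEBABE
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort();
                System.out.printf("magic=%08X, class file version %d.%d%n", magic, major, minor);
            }
        }
    }
    ```

    Recompiling with javac -source/-target levels that match the server's JVM, or running WebLogic on a matching JDK, is the usual way to resolve this kind of mismatch.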

    Read the article

  • Third JCP.Next JSR Submitted

    - by heathervc
    JSR 358, A major revision of the Java Community Process was submitted for JSR Review on Thursday.  This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR), expects to spend a considerable amount of time working on it - at least a year, and probably more.  Read more from the Spec Lead, Patrick Curran, in his latest blog post for more details.

    Read the article

  • JSR 355 Final Release, and moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 Yes votes, 1 abstain (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC.  JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9.  The transition to a merged EC will happen after the 2012 EC Elections, as defined in the Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note:  the materials and minutes from the July EC meeting and the June EC Meeting are now available--following the July EC Meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion--the audio from the public portion of the EC meeting are now posted online.  For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • Is there a term for quasi-open source proprietary software?

    - by mwhite
    Say a company wants to keep development of new features of a piece of software internal, but wants to make the source code for previous versions public, up to and including existing public features, so that other people can benefit from using and modifying the software themselves, and even possibly contribute changes that can be applied to the development branch. Is there a term for this sort of arrangement, and what is the best way of accomplishing it using existing version control tools and platforms?

    Read the article

  • samsung printer SCX 5737FW on Ubuntu 11.10

    - by Pepe
    can anyone please help? I am trying to install the samsung printer SCX 5737FW on Ubuntu 11.10, but when I plug in the usb cable it tries to look for the driver, but can't find it. Indeed the webpage http://www.samsung.com/uk/consumer/print-solutions/print-solutions/mono-multi-function-products/SCX-5737FW/SEE-spec tells me that the printer is supported up to Ubuntu 10. This is true because it installs perfectly on my other machine with ubuntu 9. Can anyone help please? Thanks

    Read the article

  • Do you know any independant keyword(phrase) statistics trend website?

    - by Sam
    Hi all, does anyone know an equally impressive service that shows the number of times a specific keyword (phrase) has been searched, as well as a branch of other similar words? The one discussed in this video (Wordtracker.com) seems very good, but has unfortunately gone commercial, which is not what I'm looking for. I would really prefer a free tool... http://www.youtube.com/watch?v=H2M1tXtAc18&feature=related Any suggestions for similar free online tools are very welcome. Thanks

    Read the article

  • How SEO Could Help Business

    Search engine optimisation, often abbreviated to SEO, is the process of increasing the volume of targeted traffic to your website using search engines and specifically their natural listings. Online marketers often use search engine optimisation as one branch of an online marketing campaign.

    Read the article

  • Progress of SEO Services

    The number of people providing SEO services in India is increasing day by day. This is because the software industry in India is succeeding by leaps and bounds. The progress in the software industry can be seen in the fact that the world's largest software companies, such as Microsoft, have also opened branches in India, which shows what India is achieving these days.

    Read the article
