
  • Creating an SMF service for the Mercurial web server

    - by Chris W Beal
    I'm working on a project at the moment which has a number of contributors. We're managing the project gate (which is stand-alone) with Mercurial. We want an easy way of seeing the changelog, so we can show management what is going on. Luckily Mercurial provides a basic web server which allows you to see the changes and drill into change sets. This can be run as a daemon, but as it was running on our build server, every time that machine was rebooted someone needed to remember to start the process again. This is of course a classic use case for SMF. Now I'm not experienced at writing SMF services, so it took me half an hour or so to figure it out the first time, but going forward I should know what I'm doing a bit better. I did reference this doc extensively.

    Taking a step back, the command to start the Mercurial web server is

    $ hg serve -p <port number> -d

    So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components.

    The manifest:
    - Usually lives somewhere under /var/svc/manifest
    - Can be imported from any location

    The method:
    - Usually lives in /lib/svc/method
    - I simply put the script straight in that directory. Not very repeatable, but it worked.
    - Can take an argument of start, stop, or refresh

    Let's start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be per instance; that is to say, I could have multiple hg serve processes handling different Mercurial projects on different ports simultaneously. Here is the manifest I wrote. I stole extensively from the examples in the documentation.

    $ cat hg-serve.xml
    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type='manifest' name='hg-serve'>
      <service name='application/network/hg-serve' type='service' version='1'>
        <dependency name='network' grouping='require_all' restart_on='none' type='service'>
          <service_fmri value='svc:/milestone/network:default' />
        </dependency>
        <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' />
        <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2' />
        <instance name='project-gate' enabled='true'>
          <method_context>
            <method_credential user='root' group='root' />
          </method_context>
          <property_group name='hg-serve' type='application'>
            <propval name='path' type='astring' value='/src/project-gate'/>
            <propval name='port' type='astring' value='9998' />
          </property_group>
        </instance>
        <stability value='Evolving' />
        <template>
          <common_name>
            <loctext xml:lang='C'>hg-serve</loctext>
          </common_name>
          <documentation>
            <manpage title='hg' section='1' />
          </documentation>
        </template>
      </service>
    </service_bundle>

    So the only things I had to decide on in this are the service name ("application/network/hg-serve"), the start and stop methods (more of which later), and the properties. The properties are the information I need to pass to the start method script: in my case the port I want to start the web server on ("9998") and the path to the source gate ("/src/project-gate"). These can be read by the start method. So now let's look at the method script:

    $ cat /lib/svc/method/hg-serve
    #!/sbin/sh
    #
    # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
    #
    # Standard prolog
    #
    . /lib/svc/share/smf_include.sh

    if [ -z "$SMF_FMRI" ]; then
            echo "SMF framework variables are not initialized."
            exit $SMF_EXIT_ERR
    fi

    #
    # Build the command line flags
    #
    # Get the port and directory from the SMF properties
    port=`svcprop -c -p hg-serve/port $SMF_FMRI`
    dir=`svcprop -c -p hg-serve/path $SMF_FMRI`

    echo "$1"
    case "$1" in
    'start')
            cd $dir
            /usr/bin/hg serve -d -p $port
            ;;
    *)
            echo "Usage: $0 {start|refresh|stop}"
            exit 1
            ;;
    esac
    exit $SMF_EXIT_OK

    This is all pretty self-explanatory: we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use exec=':kill' for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest:

    # svccfg verify /path/to/hg-serve.xml

    If that doesn't give an error, try importing it:

    # svccfg import /path/to/hg-serve.xml

    If, like me, you originally put the hg-serve.xml file somewhere under /var/svc/manifest, you'll get an error and be told to restart the manifest-import service instead:

    svccfg: Restarting svc:/system/manifest-import
    The manifest being imported is from a standard location and should be imported with the command :
    svcadm restart svc:/system/manifest-import

    # svcadm restart svc:/system/manifest-import

    and you're nearly done. You can look at the service using svcs -l:

    # svcs -l hg-serve
    fmri         svc:/application/network/hg-serve:project-gate
    name         hg-serve
    enabled      false
    state        disabled
    next_state   none
    state_time   Thu May 31 16:11:47 2012
    logfile      /var/svc/log/application-network-hg-serve:project-gate.log
    restarter    svc:/system/svc/restarter:default
    contract_id  15749
    manifest     /var/svc/manifest/network/hg/hg-serve.xml
    dependency   require_all/none svc:/milestone/network:default (online)

    And look at the interesting properties:

    # svcprop hg-serve
    hg-serve/path astring /src/project-gate
    hg-serve/port astring 9998
    ...stuff deleted....

    Then simply enable the service and, if everything's gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity.

    # svcadm enable hg-serve
    # svcs -l hg-serve
    fmri         svc:/application/network/hg-serve:project-gate
    name         hg-serve
    enabled      true
    state        online
    next_state   none
    state_time   Thu May 31 16:18:11 2012
    logfile      /var/svc/log/application-network-hg-serve:project-gate.log
    restarter    svc:/system/svc/restarter:default
    contract_id  15858
    manifest     /var/svc/manifest/network/hg/hg-serve.xml
    dependency   require_all/none svc:/milestone/network:default (online)

    None of this is rocket science, but it is a bit fiddly, hence I thought I'd blog it. It might just be that you see this in Google and it clicks with you more than one of the many other blogs or how-tos about it. Plus I can always refer back to it myself in three weeks, when I want to add another project to the server and I've forgotten how to do it.
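    Since the manifest is instance-based, adding that second project later should just be a matter of creating a new instance and setting its properties; no new manifest or method script is needed. A hedged, untested sketch - "project2", the path /src/project2 and port 9999 are placeholders:

    # svccfg -s svc:/application/network/hg-serve
    svc:/application/network/hg-serve> add project2
    svc:/application/network/hg-serve> select project2
    svc:/application/network/hg-serve:project2> addpg hg-serve application
    svc:/application/network/hg-serve:project2> setprop hg-serve/path = astring: /src/project2
    svc:/application/network/hg-serve:project2> setprop hg-serve/port = astring: 9999
    svc:/application/network/hg-serve:project2> exit
    # svcadm refresh svc:/application/network/hg-serve:project2
    # svcadm enable svc:/application/network/hg-serve:project2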


  • A closer look at the T5-4 TPC-H result

    - by Stefan Hinker
    By now many of you will probably have seen the new TPC-H result for the SPARC T5-4, submitted to the TPC on June 7. As usual, the main points of this benchmark have already been summarized by our benchmark team on "BestPerf". But there is quite a bit more that merits a closer look.

    Scalability

    The TPC advises against comparing TPC-H results from different size classes. But even within the 3000GB class a comparison is interesting:

    - SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH.
    - SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH.

    That means we fall a mere 1,863 QphH, or 0.45%, short of 100% scalability, if one assumes that double the number of cores should deliver double the result. Being a bit more demanding, one could of course expect a factor of 2.4 once the higher clock rate is taken into account. That would put the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%. Which raises the question: what didn't scale here? Most likely the disk storage! That, too, deserves a closer look.

    Disk storage

    The report on BestPerf and the TPC Full Disclosure Report contain some interesting details about the disk storage and its configuration. The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering about 1.5 GB/s of throughput, so roughly 18 GB/s in total. The arrays were evidently attached directly to the server's 24 8Gbit FC ports, with 2 cables per array. With 2x 8Gbit ports per array, one could reach a theoretical maximum of 2 GB/s per array; the 1.5 GB/s actually delivered is pretty much the realistic maximum.

    For the SPARC T5-4 run, twice as many disks were used, each 2540-M2 array being extended with one additional disk tray. With this configuration a maximum throughput of 33 GB/s was achieved (according to BestPerf) - not quite double that of the SPARC T4-4 run. To actually deliver double the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its 4 8Gbit ports. The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all 4 8Gbit ports of each array were connected to the switches, which then bundled the data streams into the server's 24 16Gbit HBA ports. The theoretical maximum of each storage array would now be 4 GB/s. Factoring in protocol and "real-world" overhead, the 2.75 GB/s actually delivered is not bad at all. With these numbers in mind, doubling the SPARC T4-4 result is a good achievement - and at the same time a good explanation of why the system did not scale all the way to 2.4x.

    Incidentally: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration.

    The competition

    Ever since the T4 systems came to market, our competitors have been working hard to leave the impression everywhere that the performance of the SPARC CPU core remains inadequate. They also seem convinced that (overly) large caches and high clock rates are the only keys to real server performance.
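    Before turning to the public numbers, here is the scaling arithmetic from the section above spelled out (all figures appear in the text; nothing new is assumed):

    \[
    2 \times 205{,}792 = 411{,}584, \qquad 411{,}584 - 409{,}721 = 1{,}863 \approx 0.45\% \text{ short of } 2\times
    \]
    \[
    2 \times \tfrac{3.6}{3.0} = 2.4, \qquad 2.4 \times 205{,}792 \approx 493{,}901, \qquad \tfrac{409{,}721}{493{,}901} \approx 83\%
    \]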
    But when I look at the public TPC-H results, this is what I see:

    TPC-H @3000GB, Non-Clustered Systems

    - SPARC T5-4 (3.6 GHz SPARC T5, 4/64, 2048 GB): 409,721.8 QphH
    - SPARC T4-4 (3.0 GHz SPARC T4, 4/32, 1024 GB): 205,792.0 QphH
    - IBM Power 780 (4.1 GHz POWER7, 8/32, 1024 GB): 192,001.1 QphH
    - HP ProLiant DL980 G7 (2.27 GHz Intel Xeon X7560, 8/64, 512 GB): 162,601.7 QphH

    In short: with 32 cores (at 3 GHz and with 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM does with its 32-core POWER7 (at 4.1 GHz and 32MB L3 cache), and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache). I wonder where exactly SPARC is inadequate here?

    Now one could of course argue that neither result is exactly new. Well, in the absence of newer results we can speculate a little. IBM's current Performance Report lists the above IBM Power 780 with an rPerf value of 425.5. A suitable successor system with POWER7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz; it is rated at an rPerf value of 690.1, i.e. 1.62x more. So if one assumes that disk storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to raise that to 400) and takes IBM's own performance estimate as the basis, one may expect a theoretical result of 311,398 QphH@3000GB. That, however, would still be a long way from the SPARC T5-4 result, and even less favorable in the "per core" metric that IBM is so fond of.

    In the x86 world things look no better. Unfortunately Intel provides no similarly handy rPerf tables, so for an estimate I have to fall back on SPECint_rate2006. (I am not a great fan of such cross-benchmark estimates; SPECcpu in particular is not well suited to estimating database performance, since almost no IO is involved.) The HP system above is listed at SPEC with 1580 CINT2006_rate. The best result as of 2013-06-14 for the newer Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate, which is 1.38x better. (Considering clock rate alone, one would arrive at 1.32x.) Taking this calculation any further is idle speculation, but for impatient readers here is a small tabular summary:

    TPC-H @3000GB performance speculation (System / QphH* / improvement over the previous generation)

    - SPARC T4-4 (32 cores SPARC T4): 205,792
    - SPARC T5-4 (64 cores SPARC T5): 409,721 (2x)
    - IBM Power 780 (32 cores POWER7): 192,001
    - IBM Power 780+ (64 cores POWER7+): 311,398* (1.62x)
    - HP ProLiant DL980 G7 (64 cores Intel Xeon X7560): 162,601
    - HP ProLiant DL980 G7 (80 cores Intel Xeon E7-4870): 224,348* (1.38x)

    * Not actual results - speculative values based on rPerf (POWER7+) or SPECint_rate2006 (HP)

    Of course IBM and HP are cordially invited to disprove these values. But as of today I am still waiting for current benchmark publications in this data segment.

    So what can we take away from all this?

    - There are several indications that disk storage was the limiting factor that kept the SPARC T5-4 from scaling beyond 2x.
    - The myth that SPARC cores deliver no performance is exactly that - a myth. How about a TPC-H result for POWER7+, by the way?
    - Cache is not the magic performance switch some people apparently take it for.
    - Scaling a system, a CPU architecture and an operating system beyond a certain point is hard.
    In the x86 world it appears to be a little harder still.

    What's missing? Well, the topic of price/performance I happily leave to the sales folks ;-) And last but not least: no, I have not had myself transferred to marketing. But sometimes I simply can't hold back...

    Disclosure Statements

    The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org; results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.


  • T4 Performance Counters explained

    - by user13346607
    Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline can be monitored. "cpustat -h" lists a lot of events that can be monitored, and only very few of them are self-explanatory. I will try to give some insight on all of them. Some of these "PIC events" require an in-depth knowledge of the T4 pipeline; over time I will try to explain those too, but for the time being they should simply be ignored. (Side note: some counters changed from tape-out 1.1, *only* used in the T4 beta program, to tape-out 1.2, used in the systems shipping today. The list below covers only the tape-out 1.2 counters.)

    - Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1|2]: Sel-0-wait counts cycles a strand waits to be selected. Some of the reasons can be counted in detail: Sel-0-ready counts cycles a strand was ready but not selected, which can signal pipeline oversubscription; Sel-1 counts cycles in which only one instruction or µop was selected; Sel-2 counts cycles in which two instructions or µops were selected. For Sel-pipe-drain-cycles, cf. PRM footnote 8 to table 10.2.
    - Pick-any, Pick-[0|1|2|3]: Pick-0 through Pick-3 count cycles in which no, one, two, or three instructions or µops are picked; Pick-any counts cycles in which at least one is picked.
    - Instr_FGU_crypto: number of FGU or crypto instructions executed on that vcpu.
    - Instr_ld: ditto for loads.
    - Instr_st: ditto for stores.
    - SPR_ring_ops: ditto for SPR ring ops.
    - Instr_other: ditto for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.
    - Instr_all: total number of instructions executed on that vcpu.
    - Sw_count_intr: number of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 - whatever that is).
    - Atomics: number of atomic ops, which are LDSTUB/A, CASA/XA, and SWAP/A.
    - SW_prefetch: number of PREFETCH or PREFETCHA instructions.
    - Block_ld_st: block loads or stores on that vcpu.
    - IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec: various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ miss counts only misses caused by instructions that really commit (note the "_nospec").
    - BTC_miss: branch target cache misses.
    - ITLB_miss: ITLB misses (counted synchronously).
    - ITLB_miss_asynch: ditto, but counted asynchronously.
    - [I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap]: H/W tablewalk events that fill the ITLB or DTLB with a translation for the corresponding page size. The "_trap" event occurs if the HWTW was not able to fill the corresponding TLB.
    - IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch]: I$ micro-tag misses, with some options for drill-down.
    - Fetch-0, Fetch-0-all: Fetch-0 counts cycles nothing was fetched for this particular strand; Fetch-0-all counts cycles nothing was fetched for all strands on a core.
    - Instr_buffer_full: cycles the instruction buffer for a strand was full, thereby preventing any fetch.
    - BTC_targ_incorrect: counts all occurrences of wrongly predicted branch targets from the BTC.
    - [PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|ROB_LB_SB|DTLB_miss]_tag_wait: (ST_q_tag_wait is listed under sl=20.) These counters monitor pipeline behaviour and are therefore not strand-specific. PQ_... counts cycles the Rename stage waits for a Pick Queue tag (which might signal a memory-bound workload in single-thread mode, cf. mail from Richard Smith); ROB_... counts cycles the Select stage waits for a ROB (ReOrder Buffer) tag; LB_... counts cycles the Select stage waits for a Load Buffer tag; SB_... counts cycles the Select stage waits for a Store Buffer tag. Combinations of the above are allowed, although some of these events can overlap; the counter will only be incremented once per cycle if any of them occur. DTLB_miss_... counts cycles load or store instructions wait at the Pick stage for a DTLB miss tag.
    - [I|D]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all]: counters for HWTW accesses caused by either DTLB or ITLB misses; can be further detailed by where they hit.
    - IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss: I$ prefetches that were dropped because they miss in either L2$ or L3$. This variant counts misses regardless of whether the causing instruction commits or not.
    - DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec: D$ misses, either in general or detailed by where they hit; cf. the explanation of the two IC_miss flavours for "_nospec" and the reasoning for two DC_miss counters.
    - DTLB_miss_asynch: counts all DTLB misses asynchronously; there is no way to count them synchronously.
    - DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full]: the first counts L1-D$ H/W prefetches that were dropped because of a D$ hit, per core; the others count dropped software prefetches, per strand.
    - [Full|Partial]_RAW_hit_st_[buf|q]: events where a load wants data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in both the store buffer and the store queue, the data in the buffer takes precedence, of course, since it is younger.
    - [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all: counters for invalidated cache evictions, per core.
    - St_q_tag_wait: number of cycles the pipeline waits for a store queue tag, also counted per core.
    - Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote]: data prefetches, detailed by either why they were dropped or where they hit.
    - St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote: store events distinguished by where they hit or where they cause an L2 cache-to-cache transfer, i.e. a transfer either from another L2$ on the same die or from a different die.
    - DC_miss, DC_miss_[L2_L3|local|remote]_hit: D$ misses, either in general or detailed by where they hit; cf. the explanation of the two IC_miss flavours for the reasoning behind two DC_miss counters.
    - L2_[clean|dirty]_evict: per-core clean or dirty L2$ evictions.
    - L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full: per-core L2$ buffer events; all count the number of cycles that the state was present.
    - L2_pipe_stall: per-core cycles the pipeline stalled because of L2$.
    - Branches: counts branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    - Br_taken: counts taken branches (same exclusions).
    - Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk]: counters for various branch misprediction events.
    - Cycles_user: counts cycles; the attribute settings hpriv, nouser, and sys control which address space to count in.
    - Commit-[0|1|2], Commit-0-all, Commit-1-or-2: number of times no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core; cf. footnote 11 to table 10.2 in the PRM for a more detailed explanation of how these counters interact with the privilege levels.
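    To make this concrete, here is roughly how these event names get used day to day. This is a hedged sketch: the exact event list and whether your cpustat accepts bare event names or the pic0=/pic1= binding syntax should be verified against "cpustat -h" on your own system, the event pairing is my choice, and 1234 is a placeholder PID.

    Sample two events system-wide, once a second, five times:

    # cpustat -c Instr_all,Instr_ld 1 5

    The same idea per process, following an existing PID with cputrack:

    # cputrack -c Instr_all -p 1234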


  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle

    Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing", and until recently it was an interesting concept that, due to practical constraints, wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace?

    A rendering of the actual Mechanical Turk (from Wikipedia)

    Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1700s, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation.

    The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense, then, that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers.

    Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications, as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question of a self-selection bias (characteristics which cause Turkers to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd.

    Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham, Flickr.)

    While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are the results? How do we know whether workers are just rushing through the multiple-choice responses, haphazardly answering?

    In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those who considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor.

    These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing.

    References:

    Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI '10. ACM, New York, NY, 2399-2402. Link: http://doi.acm.org/10.1145/1753326.1753688

    Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873


  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss from locally connected memory or whether we have to pull the line from its home node over the coherent interconnect - we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can then be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard: when a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node.

    But when thinking about accesses to heavily written communication variables we normally consider which caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses, cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so they are likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought.

    Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox and, having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables - the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case, and that NUMA placement of the communication variables was actually quite important.

    For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless - there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.

    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) - either on the same cache line or on different cache lines - while varying the node where the variables reside along with the placement of the threads.

    The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies.

    Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their responses not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer then doesn't need the home to respond over a QPI link - the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path.

    Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source snooping a requester broadcasts snoop requests to all nodes, those nodes send their responses to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters - if present - and send out requests to retrieve the line if necessary. The 3rd-party owner of the line, if any, can respond either to the home or to the original requester (or even to both) according to the protocol policies. There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key point is that home snooping enables the use of a snoop filter to reduce interconnect traffic. And while home snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping.

    While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring (actually there are multiple contra-rotating rings), and the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring - communication latencies seem more regular there.
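    As an aside, when running experiments like this on Solaris you can check where the OS actually homed your threads and pages using the lgroup (locality group) observability tools. A hedged sketch - option details vary by release, so verify against the man pages, and 1234 is a placeholder PID: lgrpinfo prints the locality-group hierarchy of the machine, plgrp shows the home lgroup of each thread in a process, and pmap -L shows which lgroup backs each mapping.

    # lgrpinfo
    # plgrp 1234
    # pmap -L 1234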


  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver.

    However, consider that without the sender imposing flow control, and with a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCPs' receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

    Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc.

    When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.

    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since duplicates often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead. Note that on Oracle Solaris 11 the congestion control method used can be customized; see here for more details.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh):

    #!/usr/sbin/dtrace -s

    #pragma D option quiet

    tcp:::connect-request
    / start[args[1]->cs_cid] == 0 /
    {
            start[args[1]->cs_cid] = 1;
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
    {
            @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    tcp:::send
    / start[args[1]->cs_cid] == 1 &&
      args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
    {
            @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
    }

    As we can see, the script only works on connections initiated after it is started - it uses the start[] associative array, indexed by connection ID, to record that a connection is new (start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

    # dtrace -s tcp_slow_start.d
    ^C
    ALGORITHM             RADDR            RPORT  #SEG
    Slow start            10.153.125.222      20     6
    Slow start            138.3.237.7         80    14
    Slow start            10.153.125.222      21    18
    Congestion avoidance  10.153.125.222      20  1164

    We see that in the case of the YouTube video, slow start was used exclusively; most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP data transfer, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. The FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start.

    For the default congestion control algorithm on Solaris 11 - "newreno" - slow start will increase the cwnd by 1 MSS for every acknowledgement received, while congestion avoidance increases it by 1 MSS for each RTT. Different pluggable congestion control algorithms operate slightly differently: for example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than by the MSS.

    And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see that the majority of cwnd values for that port are in the 4096-8191 range.

    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
    dtrace: description 'tcp:::send ' matched 10 probes
    ^C
           80
               value  ------------- Distribution ------------- count
                  -1 |                                         0
                   0 |@@@@@@                                   5
                   1 |                                         0
                   2 |                                         0
                   4 |                                         0
                   8 |                                         0
                  16 |                                         0
                  32 |                                         0
                  64 |                                         0
                 128 |                                         0
                 256 |                                         0
                 512 |                                         0
                1024 |                                         0
                2048 |@@@@@@@@@                                8
                4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                8192 |                                         0
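    Since ssthresh is available from the same probe arguments, a matching one-liner (my variation on the above, not from the original script) lets you eyeball the slow start threshold distribution against the cwnd distribution:

    # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd_ssthresh); }'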


  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion. It will come in handy in two cases:

    - You are asked to tune what looks like a simple query - maybe a two-table join with simple predicates. But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15-page query, not a two-liner.
    - You are asked to take a look at a query against tables with VPD (Virtual Private Database) policies. In other words, you have no idea what you are trying to 'tune'.

    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:

    ops$tkyte%ORA12CR1> variable x clob
    ops$tkyte%ORA12CR1> begin
      2          dbms_utility.expand_sql_text
      3          ( input_sql_text => 'select * from all_users',
      4            output_sql_text => :x );
      5  end;
      6  /

    PL/SQL procedure successfully completed.

    ops$tkyte%ORA12CR1> print x

    X
    --------------------------------------------------------------------------------
    SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have. You can see the expanded text, and that will probably lead you to the conclusion that maybe that 27-table join to 25 tables you don't even care about might better be written as a two-table join.

    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best. Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/. But now, with this function, it becomes rather trivial to see the expanded SQL - after the VPD has been applied.

    We can see this by setting up a small table with a VPD policy:

    ops$tkyte%ORA12CR1> create table my_table
      2  (  data        varchar2(30),
      3     OWNER       varchar2(30) default USER
      4  )
      5  /

    Table created.

    ops$tkyte%ORA12CR1> create or replace
      2  function my_security_function( p_schema in varchar2,
      3                                 p_object in varchar2 )
      4  return varchar2
      5  as
      6  begin
      7     return 'owner = USER';
      8  end;
      9  /

    Function created.

    ops$tkyte%ORA12CR1> begin
      2     dbms_rls.add_policy
      3     ( object_schema   => user,
      4       object_name     => 'MY_TABLE',
      5       policy_name     => 'MY_POLICY',
      6       function_schema => user,
      7       policy_function => 'My_Security_Function',
      8       statement_types => 'select, insert, update, delete',
      9       update_check    => TRUE );
     10  end;
     11  /

    PL/SQL procedure successfully completed.

    And then expanding a query against it:

    ops$tkyte%ORA12CR1> begin
      2          dbms_utility.expand_sql_text
      3          ( input_sql_text => 'select * from my_table',
      4            output_sql_text => :x );
      5  end;
      6  /

    PL/SQL procedure successfully completed.

    ops$tkyte%ORA12CR1> print x

    X
    --------------------------------------------------------------------------------
    SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"

    Not an earth-shattering new feature, but extremely useful in certain cases. I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty-page plan associated with it!


  • OS8 - AK8 - The bad news...

    - by Steve Tunstall
    Ok, I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really - just things you need to be aware of.

    First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8.

    AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck. All 7x20 series systems, regardless of age, are supported with AK8.

    Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that; it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4U, 24-drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives.

    The SAS HBA in the 7x20 series was called a "Thebe" card, with a part number of 7105394. The 7420, for example, came standard with two of these Thebe cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only; it did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server.

    These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320 as-is, no problem. The much older Riverwalk 1 or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series is not supported.

    Here's where it gets tricky. Since last January, we have been selling the new style of disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it: different manufacturer and different firmware. They are not new - like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them.

    So what's the problem? The problem is going to be for people who have to mix drive trays. Remember, your older 7x20 series has Thebe SAS2 HBAs, which have 2 SAS ports per card. The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs, which have 4 ports per card. This is very cool, as we can now do more SAS channels with fewer cards: instead of needing 4 SAS cards to grow to 24 trays as we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like InfiniBand and 10G.

    So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards. Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2. You can then plug your older DS2 trays right back in, and also add newer DE2 trays going forward. However, it's important that the trays are on different SAS channels: you can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout.

    By the way, the new ZS3-2 and ZS3-4 systems also include a new IO card called the "Erie" card. These are for INTERNAL SAS to the OS drives and the Readzillas, so those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use fewer SAS HBAs to grow the system, right?

    That's it. Not too much bad news, and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress.

    Next up... more good news and cool AK8 tricks, such as virtual NICs.


  • Making Those PanelBoxes Behave

    - by Duncan Mills
    I have a little problem to solve earlier this week - misbehaving <af:panelBox> components... What do I mean by that? Well here's the scenario, I have a page fragment containing a set of panelBoxes arranged vertically. As it happens, they are stamped out in a loop but that does not really matter. What I want to be able to do is to provide the user with a simple UI to close and open all of the panelBoxes in concert. This could also apply to showDetailHeader and similar items with a disclosed attrubute, but in this case it's good old panelBoxes.  Ok, so the basic solution to this should be self evident. I can set up a suitable scoped managed bean that the panelBoxes all refer to for their disclosed attribute state. Then the open all / close commandButtons in the UI can simply set the state of that bean for all the panelBoxes to pick up via EL on their disclosed attribute. Sound OK? Well that works basically without a hitch, but turns out that there is a slight problem and this is where the framework is attempting to be a little too helpful. The issue is that is the user manually discloses or hides a panelBox then that will override the value that the EL is setting. So for example. I start the page with all panelBoxes collapsed, all set by the EL state I'm storing on the session I manually disclose panelBox no 1. I press the Expand All button - all works as you would hope and all the panelBoxes are now disclosed, including of course panelBox 1 which I just expanded manually. Finally I press the Collapse All button and everything collapses except that first panelBox that I manually disclosed.  The problem is that the component remembers this manual disclosure and that overrides the value provided by the expression. If I change the viewId (navigate away and back) then the panelBox will start to behave again, until of course I touch it again! Now, the more astute amoungst you would think (as I did) Ah, sound like the MDS personalizaton stuff is getting in the way and the solution should simply be to set the dontPersist attribute to disclosed | ALL. Alas this does not fix the issue.  After a little noodling on the best way to approach this I came up with a solution that works well, although if you think of an alternative way do let me know. The principle is simple. In the disclosureListener for the panelBox I take a note of the clientID of the panelBox component that has been touched by the user along with the state. This all gets stored in a Map of Booleans in ViewScope which is keyed by clientID and stores the current disclosed state in the Boolean value.  The listener looks like this (it's held in a request scope backing bean for the page): public void handlePBDisclosureEvent(DisclosureEvent disclosureEvent) { String clientId = disclosureEvent.getComponent().getClientId(FacesContext.getCurrentInstance()); boolean state = disclosureEvent.isExpanded(); pbState.addTouchedPanelBox(clientId, state); } The pbState variable referenced here is a reference to the bean which will hold the state of the panelBoxes that lives in viewScope (recall that everything is re-set when the viewid is changed so keeping this in viewScope is just fine and cleans things up automatically). 
    The addTouchedPanelBox() method looks like this:

    public void addTouchedPanelBox(String clientId, boolean state) {
        // Create the cache if needed; this is just a Map<String,Boolean>
        if (_touchedPanelBoxState == null) {
            _touchedPanelBoxState = new HashMap<String, Boolean>();
        }
        // Simply put / replace
        _touchedPanelBoxState.put(clientId, state);
    }

    So that's the first part; we now have a record of every panelBox that the user has touched. So what do we do when the Collapse All or Expand All buttons are pressed? Here we do some JavaScript magic. Basically, for each clientId that we have stored away, we issue a client-side disclosure event from JavaScript - just as if the user had gone back and changed it manually. So here's the Collapse All button action:

    public String CloseAllAction() {
        submitDiscloseOverride(pbState.getTouchedClientIds(true), false);
        _uiManager.closeAllBoxes();
        return null;
    }

    The _uiManager.closeAllBoxes() method is just manipulating the master state that all of the panelBoxes are bound to using EL. The interesting bit, though, is the line:

    submitDiscloseOverride(pbState.getTouchedClientIds(true), false);

    To break that down, the first part is a call to that viewScoped state holder to ask for a list of clientIds that need to be "tweaked":

    public String getTouchedClientIds(boolean targetState) {
        StringBuilder sb = new StringBuilder();
        if (_touchedPanelBoxState != null && _touchedPanelBoxState.size() > 0) {
            for (Map.Entry<String, Boolean> entry : _touchedPanelBoxState.entrySet()) {
                if (entry.getValue() == targetState) {
                    if (sb.length() > 0) {
                        sb.append(',');
                    }
                    sb.append(entry.getKey());
                }
            }
        }
        return sb.toString();
    }

    You'll notice that this method only processes those panelBoxes that will be in the wrong state, and returns those as a comma-separated list. This is then processed by the submitDiscloseOverride() method:

    private void submitDiscloseOverride(String clientIdList, boolean targetDisclosureState) {
        if (clientIdList != null && clientIdList.length() > 0) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            StringBuilder script = new StringBuilder();
            script.append("overrideDiscloseHandler('");
            script.append(clientIdList);
            script.append("',");
            script.append(targetDisclosureState);
            script.append(");");
            Service.getRenderKitService(fctx, ExtendedRenderKitService.class).addScript(fctx, script.toString());
        }
    }

    This method constructs a JavaScript command to call a routine called overrideDiscloseHandler() in a script attached to the page (using the standard <af:resource> tag). That function parses out the list of clientIds and sends the correct event to each one:

    function overrideDiscloseHandler(clientIdList, newState) {
        AdfLogger.LOGGER.logMessage(AdfLogger.INFO, "Disclosure Handler newState " + newState + " Called with: " + clientIdList);
        // Parse out the list of clientIds
        var clientIdArray = clientIdList.split(',');
        for (var i = 0; i < clientIdArray.length; i++) {
            var panelBox = AdfPage.PAGE.findComponentByAbsoluteId(clientIdArray[i]);
            if (panelBox.getComponentType() == "oracle.adf.RichPanelBox") {
                panelBox.broadcast(new AdfDisclosureEvent(panelBox, newState));
            }
        }
    }

    So there you go. You can see how, with a few tweaks, the same code could be used for other components with disclosure that might suffer from the same problem, although I'd point out that the behavior I'm working around here is usually desirable. You can download the running example (11.1.2.2) from here. 

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vice versa for Net1 on controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.

    I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.

    Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces set up, each with an IP address. If I ping them, no problems. So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right? Let's change that.

    Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" as its datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2. I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 does not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.

    So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then set up three shares, using ports 251, 240, and 241. Remember that IPs 251 and 240 are both using the same physical port, igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports. Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart. 
    VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or InfiniBand ports. You can now have multiple IP addresses, and even completely different subnets, sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for, with no more waste. Very, very cool.

    One small "bug" I found when doing this. It's really not a bug; it was designed to do this when VNICs were not around. But now that we have VNIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code release. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen. HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem. That's it for now. Have fun with VNICs.
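    As an aside for readers who know Solaris 11: the appliance's VNICs are the same underlying OS mechanism, and the BUI steps above map onto a couple of commands. A minimal sketch on a plain Solaris 11 host follows; the device name and address are examples, and this is not how you would administer the ZFSSA itself:

        # Layer a VNIC over the igb2 device, then plumb and address it
        dladm create-vnic -l igb2 vnic1
        ipadm create-ip vnic1
        ipadm create-addr -T static -a 192.168.56.240/24 vnic1/v4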

    Read the article

  • How to Plug a Small Hole in NetBeans JSF (Join Table) Code Generation

    - by MarkH
    I was asked recently to provide an assist with designing and building a small-but-vital application that had at its heart some basic CRUD (Create, Read, Update, & Delete) functionality, built upon an Oracle database, to be accessible from various locations. Working from the stated requirements, I fleshed out the basic application and database designs and, once validated, set out to complete the first iteration for review. Using SQL Developer, I created the requisite tables, indices, and sequences for our first run. One of the tables was a many-to-many join table with three fields: one a primary key for that table, the other two being primary keys for the other tables, represented as foreign keys in the join table (a SQL sketch of the trio of tables appears at the end of this post).

    Once the database was in decent shape, I fired up NetBeans to let it have first shot at the code. NetBeans does a great job of generating a mountain of essential code, saving developers what must be millions of hours of effort each year by building a basic foundation with a few clicks and keystrokes. Lest you think it (or any tool) can do everything for you, however, occasionally something tosses a paper clip into the delicate machinery and makes you open things up to fix them. Join tables apparently qualify. :-)

    In the case above, the entity class generated for the join table (New Entity Classes from Database) included an embedded object consisting solely of the two foreign key fields as attributes, in addition to an object referencing each one of the "component" tables. The Create page generated (New JSF Pages from Entity Classes) worked well to a point, but when trying to save, we were greeted with an error: Transaction aborted. Hmm. A quick debugger session later and I'd identified the issue: when trying to persist the new join-table object, the embedded "foreign-keys-only" object still had null values for its two (required value) attributes...even though the embedded table objects had populated key attributes. Here's the simple fix.

    In the join-table controller class, find the public String create() method. It will look something like this:

    public String create() {
        try {
            getFacade().create(current);
            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
            return prepareCreate();
        } catch (Exception e) {
            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
            return null;
        }
    }

    To restore balance to the force, modify the create() method as follows (the two added lines are flagged by the comment):

    public String create() {
        try {
            // Add the next two lines to resolve:
            current.getJoinEntityPK().setTbl1id(current.getTbl1().getId().toBigInteger());
            current.getJoinEntityPK().setTbl2id(current.getTbl2().getId().toBigInteger());
            getFacade().create(current);
            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));
            return prepareCreate();
        } catch (Exception e) {
            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
            return null;
        }
    }

    I'll be refactoring this code shortly, but for now, it works. Iteration one is complete and being reviewed, and we've met the milestone. Here's to happy endings (and customers)! All the best, Mark
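    As promised, here is a sketch of the trio of tables described above; the original diagram does not reproduce in this excerpt, so the table and column names here are hypothetical stand-ins for the real schema:

        -- Two base tables (names are illustrative only)
        CREATE TABLE tbl1 (
          id   NUMBER PRIMARY KEY,
          name VARCHAR2(50)
        );

        CREATE TABLE tbl2 (
          id   NUMBER PRIMARY KEY,
          name VARCHAR2(50)
        );

        -- The many-to-many join table: its own PK plus a FK to each side
        CREATE TABLE join_entity (
          id      NUMBER PRIMARY KEY,
          tbl1_id NUMBER NOT NULL REFERENCES tbl1 (id),
          tbl2_id NUMBER NOT NULL REFERENCES tbl2 (id)
        );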

    Read the article

  • Nashorn in the Twitterverse (continued)

    - by Homma
    This post looks at what you can do when you combine Nashorn's access to Java libraries with JavaFX: pulling tweet statistics from Twitter and charting them, entirely from JavaScript. It is a translation of jlaskey's entry on the Nashorn Blog: https://blogs.oracle.com/nashorn/entry/nashorn_in_the_twitterverse_continued

    The goal is to count tweets that mention Nashorn and graph them over time as a JavaFX bar chart. JavaFX is a plain Java API, and since Nashorn can instantiate and call Java classes directly, the whole application can be written as a script; no JavaFX-specific JavaScript bindings are required. First, the Twitter side. Using the Twitter4J library, the script below searches for tweets matching "nashorn OR nashornjs" posted since 2012-11-21 and tallies them per day:

    var twitter4j = Packages.twitter4j;
    var TwitterFactory = twitter4j.TwitterFactory;
    var Query = twitter4j.Query;

    function getTrendingData() {
        var twitter = new TwitterFactory().instance;
        var query = new Query("nashorn OR nashornjs");
        query.since("2012-11-21");
        query.count = 100;
        var data = {};
        do {
            var result = twitter.search(query);
            var tweets = result.tweets;
            for each (var tweet in tweets) {
                var date = tweet.createdAt;
                var key = (1900 + date.year) + "/" + (1 + date.month) + "/" + date.date;
                data[key] = (data[key] || 0) + 1;
            }
        } while (query = result.nextQuery());
        return data;
    }

    getTrendingData() returns a plain JavaScript object whose keys are dates and whose values are tweet counts. The next piece feeds that data into a JavaFX BarChart:

    var javafx = Packages.javafx;
    var Stage = javafx.stage.Stage;
    var Scene = javafx.scene.Scene;
    var Group = javafx.scene.Group;
    var Chart = javafx.scene.chart.Chart;
    var FXCollections = javafx.collections.FXCollections;
    var ObservableList = javafx.collections.ObservableList;
    var CategoryAxis = javafx.scene.chart.CategoryAxis;
    var NumberAxis = javafx.scene.chart.NumberAxis;
    var BarChart = javafx.scene.chart.BarChart;
    var XYChart = javafx.scene.chart.XYChart;
    var Series = javafx.scene.chart.XYChart.Series;
    var Data = javafx.scene.chart.XYChart.Data;

    function graph(stage, data) {
        var root = new Group();
        stage.scene = new Scene(root);
        var dates = Object.keys(data);
        var xAxis = new CategoryAxis();
        xAxis.categories = FXCollections.observableArrayList(dates);
        var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);
        var series = FXCollections.observableArrayList();
        for (var date in data) {
            series.add(new Data(date, data[date]));
        }
        var tweets = new Series("Tweets", series);
        var barChartData = FXCollections.observableArrayList(tweets);
        var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);
        root.children.add(chart);
    }

    Two details here deserve a closer look. First, stage.scene = new Scene(root) works where a Java programmer would have to write stage.setScene(new Scene(root)): Nashorn (via its Dynalink linker) exposes Java Beans getter/setter pairs as simple properties. Second, in FXCollections.observableArrayList(dates), a JavaScript array (dates) is passed straight to a Java API; Nashorn converts the script object into a suitable Java collection automatically, so JavaScript objects and Java objects mix freely.

    One piece is still missing before this can run as a real JavaFX application: JavaFX expects an application class that extends javafx.application.Application, and that part has to be written in Java. 
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import javafx.application.Application;
    import javafx.stage.Stage;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class TrendingMain extends Application {
        private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
        private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");
        private Trending trending;

        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage stage) throws Exception {
            trending = (Trending) load("Trending.js");
            trending.start(stage);
        }

        @Override
        public void stop() throws Exception {
            trending.stop();
        }

        private Object load(String script) throws IOException, ScriptException {
            try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) {
                return engine.eval(new InputStreamReader(is, "utf-8"));
            }
        }
    }

    This launcher obtains the Nashorn engine through the standard JSR-223 javax.script API:

    private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
    private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");

    The load() helper then evaluates the JavaScript source with that engine. Java is statically typed, though, so the Java side needs a type through which it can call into the script. A small interface covering the JavaFX lifecycle methods does the job:

    public interface Trending {
        public void start(Stage stage) throws Exception;
        public void stop() throws Exception;
    }

    The script implements that interface; Nashorn lets a script instantiate a Java interface with an anonymous-subclass-like syntax:

    function newTrending() {
        return new Packages.Trending() {
            start: function(stage) {
                var data = getTrendingData();
                graph(stage, data);
                stage.show();
            },
            stop: function() {
            }
        }
    }
    newTrending();

    Because the call to newTrending() is the last expression evaluated in Trending.js, the Trending instance it returns becomes the result of eval. That is why the launcher can simply write trending = (Trending) load("Trending.js") and later call trending.start(stage).

    For more about Nashorn, see http://www.myexpospace.com/JavaOne2012/SessionFiles/CON5251_PDF_5251_0001.pdf. For more about Dynalink, see https://github.com/szegedi/dynalink

    Read the article

  • How to escape or remove double quotes in rsyslog template

    - by Evgeny
    I want rsyslog to write log messages in JSON format, which requires using double quotes (") around strings. The problem is that values sometimes include double quotes themselves, and those need to be escaped - but I can't figure out how to do that. Currently my rsyslog.conf contains this format that I use (a bit simplified):

    $template JsonFormat,"{\"msg\":\"%msg%\",\"app-name\":\"%app-name%\"}\n",sql

    But when a msg arrives that contains double quotes, the JSON is broken. For example:

    user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'

    turns into:

    {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}

    but what I need it to become is:

    {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user=\"oracle\" exe=\"/bin/su\" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}
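    One hedged sketch of a fix, assuming a reasonably recent rsyslog (v7 or later; check your version's property-replacer documentation): the property replacer has a json option that escapes quotes and control characters per the JSON rules. Note the trailing sql option from the original template is dropped here, since SQL escaping and JSON escaping would fight each other:

        $template JsonFormat,"{\"msg\":\"%msg:::json%\",\"app-name\":\"%app-name:::json%\"}\n"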

    Read the article

  • How can I create an “su” only user (no SSH or SFTP) and limit who can “su” into that account in RHEL5? [closed]

    - by Beaming Mel-Bin
    Possible Duplicate: How can I allow one user to su to another without allowing root access? We have a user account that our DBAs use (oracle). I do not want to set a password on this account, and I want to allow only users in the dba group to run su - oracle. How can I accomplish this? I was thinking of just giving them sudo access to the su - oracle command. However, I wouldn't be surprised if there was a more polished/elegant/secure way.
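    For what it's worth, the sudo route can be made fairly clean. A minimal sketch, assuming the group is literally named dba and su lives in /bin; lock the password too, so nothing can log in to the account directly:

        # Lock the oracle account's password; sudo-driven su still works
        passwd -l oracle

        # Added via visudo: members of the dba group may run exactly this command
        %dba ALL = (root) /bin/su - oracle

    Members of dba then run sudo su - oracle and authenticate with their own password.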

    Read the article

  • Difficulty in running Tomcat v7.0 with Eclipse Juno

    - by user1673718
    I get the following error when I run my JSP file in Eclipse Juno with Tomcat v7: 'starting Tomcat v7.0 server at localhost' has encountered a problem. Port 8080 required by Tomcat v7.0 server at localhost is already in use. The server may already be running in another process, or a system process may be using the port. To start this server you will need to stop the other process or change the port number(s). I have Oracle 10g installed on my system. When I type "http://localhost:8080" it opens the Oracle 10g license agreement, so I think Oracle 10g is already running on that port. To change Tomcat's port, I searched Google, which said to change the port in the "C:\Program Files\Apache Software Foundation\Apache Tomcat 7.0.14\conf\httpd.conf" file. But at "C:\Program Files\Apache Software Foundation\Apache Tomcat 7.0.14\conf" there was no httpd.conf file. I only have "catalina.policy, catalina.properties, context, logging.properties, server, tomcat-users, web" files in that conf folder. I use Windows XP.
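    For the record, httpd.conf belongs to the Apache HTTP Server, not Tomcat; Tomcat's ports live in conf/server.xml, the file listed above as just "server" (Windows is hiding the known .xml extension). A sketch of the usual edit, assuming a default server.xml: find the HTTP connector and point its port attribute at a free port, then restart the server:

        <!-- conf/server.xml: 8080 is taken by Oracle 10g, so pick a free port -->
        <Connector port="8081" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />

    In Eclipse you can also double-click the Tomcat server in the Servers view and change the HTTP port number there, which edits the same setting.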

    Read the article

  • Where is the central ZFS website now?

    - by Stefan Lasiewski
    Oracle dumped OpenSolaris in Fall 2010, and it is unclear if Oracle will continue to publicly release updates to ZFS, except maybe after they release their next major version of Solaris. FreeBSD now has ZFS v28 available for testing. But where did v28 come from? I notice that the main ZFS website does not show version 28 available. Has this website been abandoned? If so, where is the central website for the ZFS project, so that I can browse the repo, read the mailing lists, read the release notes, etc. (I realize that OpenSolaris has been dumped by Oracle, and that they are limiting their ZFS releases to the community).

    Read the article

  • How to make VirtualBox headless answer on the RDP port?

    - by stiv
    I'd like to run Windows XP over RDP:

    $ VBoxManage modifyvm winxp32 --vrdeport 3389
    $ VBoxHeadless -s winxp32 -v on
    Oracle VM VirtualBox Headless Interface 4.1.18_Debian
    (C) 2008-2012 Oracle Corporation
    All rights reserved.
    (waiting)

    In another window:

    $ telnet localhost 3389
    Trying 127.0.0.1...
    telnet: Unable to connect to remote host: Connection refused

    Yes, I've read about the extension pack:

    $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack
    0%...
    Progress state: NS_ERROR_FAILURE
    VBoxManage: error: Failed to install "Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack": Extension pack 'Oracle VM VirtualBox Extension Pack' is already installed. In case of a reinstallation, please uninstall it first

    I've looked through all the manuals and help requests with no success. What's wrong? Any ideas?
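    Since the error says the extension pack is already installed, the piece most likely missing is the VRDE server being enabled on the VM itself. A minimal sketch (VM name taken from the question; flags as documented for VirtualBox 4.1):

        # Enable the VRDE server on the VM and pin it to port 3389
        VBoxManage modifyvm winxp32 --vrde on --vrdeport 3389

        # Start headless, then check what the VM reports
        VBoxHeadless -s winxp32
        VBoxManage showvminfo winxp32 | grep -i vrde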

    Read the article

  • Is it safe to expect Sun Java 6 to be supported for the life of RHEL 6?

    - by Ophidian
    I'm in the planning stages of a Java application that we're targeting for Red Hat Enterprise Linux 6. Unfortunately, we're stuck at RHEL 6.1, which does not ship the java-1.7.0-oracle package set (they were added in 6.3), and I don't really have any control over when we will be upgraded to the more recent version. I don't have any specific technical requirements to use Java 7, but Java 6 is going to hit public EOL in February 2013. Am I safe to assume that since Red Hat (and subsequently Oracle with its Oracle Unbreakable Linux) has shipped a copy of Java 6 in the java-1.6.0-sun package, it will be supported for the entire 10-year support life of RHEL 6?

    Read the article

  • How to extend a logical volume in VMware

    - by Mercer
    I have CentOS 6.3 in my virtual machine, with two disks: Disk#1 = 18G, Disk#2 = 20G.

    [root@vm ~]# df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/vg_system-lv_root     1008M  250M  708M  27% /
    tmpfs                              1.9G     0  1.9G   0% /dev/shm
    /dev/sda1                          194M   31M  154M  17% /boot
    /dev/mapper/vg_system-lv_home      504M   17M  462M   4% /home
    /dev/mapper/vg_system-lv_opt       2.0G   68M  1.9G   4% /opt
    /dev/mapper/vg_produits-lv_grid    6.9G  2.5G  4.1G  38% /opt/grid
    /dev/mapper/vg_produits-lv_oracle  6.9G  144M  6.4G   3% /opt/oracle
    /dev/mapper/vg_system-lv_tmp       2.8G   71M  2.6G   3% /tmp
    /dev/mapper/vg_system-lv_usr       2.5G  1.6G  799M  67% /usr
    /dev/mapper/vg_system-lv_var       2.0G  278M  1.6G  15% /var

    So I want to extend /tmp to 10 GB and /opt/oracle to 13 GB. Thanks.
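    A sketch of the usual approach on CentOS 6 with LVM and ext4. This assumes the second disk appears as /dev/sdb and that each volume group has (or is given) enough free extents; verify the device name with fdisk -l and free space with vgs before running anything:

        # If a volume group lacks free extents, add a physical volume first, e.g.:
        #   pvcreate /dev/sdb
        #   vgextend vg_system /dev/sdb

        # Grow /tmp to 10 GB and resize the ext4 filesystem online
        lvextend -L 10G /dev/vg_system/lv_tmp
        resize2fs /dev/vg_system/lv_tmp

        # Grow /opt/oracle to 13 GB (note it lives in the other VG, vg_produits)
        lvextend -L 13G /dev/vg_produits/lv_oracle
        resize2fs /dev/vg_produits/lv_oracle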

    Read the article

  • What IT certification is most valuable without job experience? [closed]

    - by Eric Wilson
    I'm trying to change vocations towards IT. I'm learning Java, SQL, and other things, but I have no job experience or formal education (other than a math Ph.D.). I know that certifications only go so far, but I was curious which certifications might be the most valuable for a first IT job. To clarify my question: Oracle certification + zero Oracle experience = 0% chance of an Oracle DBA job. Perhaps, though: [foobar certification] + zero IT job experience = nonzero chance of an entry-level IT job? Please give specific suggestions of certifications that you would consider relevant for an entry-level IT job.

    Read the article

  • Solaris 11 installed, no updates?

    - by Paul De Niro
    I was messing around with Solaris and decided to give Solaris 11 a try, so I downloaded it from the Oracle website. After installing the OS, I went into the package manager and did an update. It told me that there were no available updates! I find this hard to believe, considering that it's running vulnerable versions of Firefox and of Java, Oracle's own in-house software product! Many of the other software products that came with the default install are also out of date and vulnerable. Is this normal for an Oracle install, or did I do something wrong with the upgrade process? I typed "pkg update" at the prompt, and I noticed that it did call out to pkg.oracle.com looking for updates. I find it bizarre that there are no updates available for an OS that was released a couple of months ago with vulnerable software...
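    As it happens, this is expected for a stock install: the free release repository at pkg.oracle.com/solaris/release only changes when Oracle publishes a new release, while interim security fixes (Support Repository Updates) come from the support repository, which requires a support contract plus the SSL key and certificate Oracle issues with it. A rough sketch of repointing the publisher; the key and certificate paths are placeholders for wherever you store the files Oracle gives you:

        # See which origin the solaris publisher currently uses
        pkg publisher

        # Swap the release origin for the support repository
        pkg set-publisher -k /path/to/Oracle_Solaris_11_Support.key.pem \
            -c /path/to/Oracle_Solaris_11_Support.certificate.pem \
            -G http://pkg.oracle.com/solaris/release/ \
            -g https://pkg.oracle.com/solaris/support/ solaris

        pkg update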

    Read the article

  • How to start a cmd window and issue a tail request in a bat file?

    - by Kari
    I can open a cmd window and start a tail by entering something like this:

    tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    This is probably a stupid question, but how do I do this in a batch file? It should be easy, but it's not working - I have tried a couple of variations with no success. Can anyone tell me what I am doing wrong here?

    ECHO OFF
    CD C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\
    cmd tail -f sites.log

    I've also tried:

    ECHO OFF
    start cmd tail -f C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs\sites.log

    (I am using Win7 Ultimate on a 64-bit machine, if that has any bearing.)
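    For the curious, the likely reason the attempts above fail: cmd without /c or /k typically just opens an interactive shell and never runs the trailing command. A sketch that should behave, assuming tail.exe is on the PATH (e.g. from GnuWin32 or Cygwin):

        @ECHO OFF
        CD /D "C:\Oracle\WebCenter\Sites\11gR1\Sites\11.1.1.6.1\logs"
        START "sites.log" cmd /k tail -f sites.log

    START's first quoted argument is the window title, and /k keeps the new window open while the tail runs.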

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes its own decision about the database when it starts out, but as it moves forward it matures and revisits that decision based on experience and the best interests of the organization. Quite a few organizations start on a database such as Sybase, MySQL, Oracle, or Access, and as time passes they learn that they want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the tools released last week to migrate various products to SQL Server.

    Microsoft SQL Server Migration Assistant v6.0 for Sybase
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for MySQL
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Oracle
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Access
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration

    Read the article

  • Can I use `UnlockCommercialFeatures` for developing Java applications without a commercial license?

    - by nondescript1
    As of Java 7 Update 40, Oracle is now including Java Mission Control in the JDK. Being always interested in a new profiling tool, I decided to check it out. However, trying to start Flight Recorder against a process, I get an error telling me to unlock commercial features first. Now I'm getting cold feet about adding the JVM option -XX:+UnlockCommercialFeatures. I would only use this for profiling in development and not in production. From the article linked above, JMC is available under the Oracle Binary Code License for Java. The license allows you to use JMC for free during development and testing, though a different (paid for) licence is required for production use. I'm still leery about this. From Java SE Products, Flight Recorder certainly is a commercial feature; however, I find it very confusing that it's now included in the standard JDK release. Anyone else have a read on this? Clearly nothing here is legally binding, and your legal department should be consulted. Reference: Oracle Binary Code License Agreement for the Java SE Platform Products and JavaFX
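    For anyone else experimenting on a development machine, the combination of flags looks roughly like this (the class name is a placeholder; flag spellings as documented for the Oracle JDK 7u40 release):

        java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
             -XX:StartFlightRecording=duration=60s,filename=myrecording.jfr \
             MyApp

    UnlockCommercialFeatures must come before FlightRecorder on the command line, since the latter is rejected while commercial features are still locked.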

    Read the article

  • Latest Security Updates for Java are Available for Download

    - by Akemi Iwaya
    Oracle has released new updates that patch 40 security holes in their Java Runtime Environment software. Anyone who needs or actively uses the Java Runtime Environment for work or gaming should update their Java installation as soon as possible. One thing to keep in mind is that there are limitations placed on updates for older versions of Java, as shown in the following excerpt. If you are using an older version, then it is recommended that you update to the Java SE 7 release if possible (depending on your usage circumstances). From The H Security blog post: Only the current version of Java, Java SE 7, will be updated for free; downloads of the new version, Java SE 7 Update 25, are available and existing installs should auto-update. Mac OS X users will get an updated Java SE 6 for their systems as an automatic update; Java SE 7 on Mac OS X is updated by Oracle. Users of other older versions of Java will only get updates if they have a maintenance contract with Oracle. Affected Product Releases and Versions: JDK and JRE 7 Update 21 and earlier; JDK and JRE 6 Update 45 and earlier; JDK and JRE 5.0 Update 45 and earlier; JavaFX 2.2.21 and earlier. Note: If you do not need Java on your system, we recommend uninstalling it entirely or disabling the browser plugin. You can download and read through the details about the latest Java updates by visiting the links shown below.

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >