Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.


  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
    Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java, took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hasan demonstrated how Oracle Fusion Middleware is a complete platform for harnessing the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle. Fig 1: Oracle Fusion Middleware is the foundation of business innovation This session included four demonstrations to illustrate these strategies: 1. Build and deploy native mobile applications using Oracle ADF Mobile 2. Empower business users to model processes, design user interfaces and have a rich mobile experience for process interaction using Oracle BPM Suite PS6. 3. Create a collaborative user experience and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2 4. Deploy and manage business applications on Oracle Exalogic Nike, LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation. Managing Performance in the World of Social and Mobile How do you provide predictable scalability and performance for an application that monitors the active lifestyles of 8 million users on a daily basis? Nike's answer is Oracle Coherence, a component of Oracle Fusion Middleware, together with Oracle Exadata. Fig 2: Oracle Coherence enabled data grid improves performance of Nike+ Digital Sports Platform Nicole Otto, Sr. Director of Consumer Digital Technology, discussed the vision of the Nike+ platform, a platform which represents a shift for NIKE from a "product" to a "product +" experience. There are currently nearly 8 million users in the Nike+ system using digitally-enabled Nike+ devices. Once data from a Nike+ device is transmitted to the Nike+ application, users access the Nike+ website or the Nike mobile application to see metrics around their daily active lifestyle, and can even engage in socially compelling experiences to compare, compete or collaborate with their friends. Nike expects the number of users to grow significantly this year, which will drive an explosion of data and potential new experiences. To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby providing consumers with low-latency access to the most recent and relevant data. Nicole discussed how the Nike+ Digital Sports Platform is unique in the way it utilizes the Coherence grid. Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns (see the short sketch at the end of this recap). This new tier has enabled Nike to create a horizontally scalable, distributed, event-driven processing architecture. Current data grid volume is approximately 150,000 requests per minute, with about 40 million objects on the grid at any given time. Improving Customer Experience Across Multiple Channels Customer experience is top of mind for every CIO. Customer experience needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of the Los Angeles Department of Water & Power (LADWP), was faced with.
Despite being the largest utility company in the country, LADWP had been relying on a 38-year-old customer information system to serve its customers, and that system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve the customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform. Figure 3: Multi-channel & multi-lingual LADWP.com built using the Oracle WebCenter & Oracle Identity Management platforms Matt shed light on his efforts to drive customer self-service across three dimensions – a new website, a new IVR platform and a new bill payment service. LADWP has built a new portal to increase customer self-service while reducing transactions via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible from desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. The new portal converted their paper-based processes into web-based workflows for customers. This includes automation of self-service implemented through My Account - features like Bill Pay, Payment History, Bill History and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's self-service portal has greatly improved customer satisfaction: in a JD Power Associates website satisfaction survey, rankings have climbed by more than 25 points, a remarkable increase in user experience. Bolstering Performance and Simplifying Manageability of Business Applications Ingvar Petursson, Senior Vice President of IT at Nintendo America, joined Hasan on stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the WiiU on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. High performance  2. Ease of management Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle E-Business Suite and several business applications Nintendo made the decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar were the "stress" testing results during evaluation: Oracle Exalogic could handle their 3-year load estimates for many functions without any hardware expansion, which was better than Nintendo expected. Faster Processing of Big Data Middleware plays an increasingly important role in Big Data. Last year at OpenWorld we announced the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform and load data between the Big Data Appliance and Exadata.  This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with the Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing & Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data. 
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    
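    Going back to the Nike discussion earlier in this recap: the cache-aside pattern mentioned there can be illustrated with a minimal Coherence sketch. This is not Nike's actual code; the cache name and the database call are assumptions made purely for illustration.
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        // Minimal cache-aside sketch: try the data grid first, fall back to the
        // system of record on a miss, then populate the grid for the next reader.
        public class ActivityLookup {
            private final NamedCache cache = CacheFactory.getCache("activities"); // cache name is illustrative

            public Object getActivity(String userId) {
                Object activity = cache.get(userId);      // 1. look in the distributed cache
                if (activity == null) {
                    activity = loadFromDatabase(userId);  // 2. miss: read from the database
                    cache.put(userId, activity);          // 3. store it so later calls hit the grid
                }
                return activity;
            }

            private Object loadFromDatabase(String userId) {
                // placeholder for the real data-access call
                return null;
            }
        }
    With cache-through (read-through/write-through), the equivalent load would instead be delegated to a CacheStore registered with the cache, so callers only ever talk to the grid.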

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to but not the same as Package management system corrupted. Cannot install or remove packages. U12.04LTS (an earlier problem) with package management system. Followed all of L. D. James suggestions in his answer to no avail. This time as well as the gzip error I am also getting compilation errors. The difference may be due to a lack of compilation in my earlier problem so it may be the same error. The packages concerned are enumerated in the output from update-manager below. Also included below that is the output from apt-get -f install apt-get autoremove gives same output. Tried update without SSL updates - 9 to install and got "Unhandled Error in aptdaemon". Output number 3 below. One at a time - output 4 - is for firefox, first in the list of packages. Falls over at libssl1.0.0 despite deselection of it from update ... Tried apt-get install --reinstall dpkg which succeeded, apt-get install --reinstall tar and apt-get install --reinstall gzip both of which failed at libssl1.0.0 as ever. (as suggested by Subv3rsion elsewhere in this forum) Now cannot apt-get update with complete success even after changing server and apt-get clean - output number 5 below ... 1). Output from update-manager The following packages will be upgraded:<> firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53 libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0 libswscale2 openssl 11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.<br> Need to get 0 B/46.5 MB of archives. After this operation, 1,416 kB of additional disk space will be used.<br> Do you want to continue [Y/n]? y debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at (eval 1) line 3. BEGIN failed--compilation aborted at (eval 1) line 3. ) -- aborting (Reading database ... 160575 files and directories currently installed.) Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ... Unpacking replacement libssl1.0.0 ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack):<br> subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports has already been reached Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. 
Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7. Compilation failed in require at /usr/share/debconf/frontend line 6. BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6. dpkg: error whale cleanang up: subprgcess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 2). Output from install -f harveyp@harveyp:~$ sudo apt-get -f install [sudo] password for harveyp: Reading package lists... Done Building dependency tree Reading state information... Done 0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade. 1 not fully installed or removed.<br> After this operation, 0 B of additional disk space will be used. E: Internal Error, No file name for libssl1.0.0 3). Unhandled error from aptdaemon Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate trans.unauthenticated = self.__simulate(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate unauthenticated = self._get_unauthenticated() File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated for pkg in self._iterate_packages(): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages for enum, pkg in enumerate(self._cache): File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__ yield self[pkgname] File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__ pkg = self._weakref[key] = Package(self, self._cache[key]) KeyError: 'librqrcode-rubq-doc 4). output from update of firefox installArchives() failed: Error in function: < Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ... Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. 
BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. 5. output from apt-get update ...snip ... Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en Fetched 368 kB in 6s (59.5 kB/s) W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch E: Some index files failed to download. They have been ignored, or old ones used instead.
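    For readers who want to retrace the steps, the commands described in the question boil down to roughly the following sequence (the annotations reflect the outputs quoted above; nothing here is a new suggestion):
        sudo apt-get -f install                      # ends with "No file name for libssl1.0.0"
        sudo apt-get autoremove                      # same output
        sudo apt-get install --reinstall dpkg        # succeeded
        sudo apt-get install --reinstall tar gzip    # both fail at libssl1.0.0
        sudo apt-get clean                           # followed by changing the download server
        sudo apt-get update                          # Hash Sum mismatch on one Sources index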

    Read the article

  • A closer look at the T5-4 TPC-H result

    - by Stefan Hinker
    By now many of you have probably seen the new TPC-H result for the SPARC T5-4, submitted to the TPC on June 7.  The key points of this benchmark have, as usual, already been summarized by our benchmark folks on "BestPerf".  But there is quite a bit more that is worth a closer look. Scalability The TPC advises against comparing TPC-H results from different size classes.  But even within the 3000GB class it is interesting: the SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH, while the SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH.  That means only 1,863 QphH, or 0.45%, are missing from 100% scalability, if you assume that twice the number of cores should deliver twice the result.  Being a bit more demanding, you could of course also expect a factor of 2.4 if you take the higher clock rate into account.  That would put the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%.  Which raises the question: what did not scale here?  Most likely the storage!  That, too, deserves a closer look: Storage The report on BestPerf and the TPC Full Disclosure Report contain some interesting details about the storage and its configuration.  The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering roughly 1.5 GB/s of throughput, so about 18 GB/s in total.  The arrays were evidently attached directly to the server's 24 8Gbit FC ports, with two cables per array.  With the 2x 8Gbit ports per array you could reach a theoretical maximum of 2 GB/s.  In practice 1.5 GB/s was delivered, which is pretty much the realistic maximum.  For the SPARC T5-4 run, twice as many disks were used; for that, each 2540-M2 array was extended with an additional disk tray.  With this configuration a maximum throughput of 33 GB/s was reached (according to BestPerf) - not quite double that of the SPARC T4-4 run.  To actually deliver twice the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its four 8Gbit ports.  The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all four 8Gbit ports of each array were connected to the switches, which then bundled the data streams into the server's 24 16Gbit HBA ports.  The theoretical maximum per storage array is now 4 GB/s.  Allowing for protocol and "real-world" overhead, the 2.75 GB/s actually delivered is not bad at all.  With these numbers in mind, doubling the SPARC T4-4 result is a good achievement, and at the same time a good explanation of why it did not scale to 2.4x. As an aside: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration. Competition Ever since the T4 systems came to market, our competitors have been trying hard to leave the impression everywhere that the performance of the SPARC CPU core is still inadequate.  They also seem convinced that (over)sized caches and high clock rates are the only keys to real server performance.  
    But when I look at the public TPC-H results, this is what I see:
    TPC-H @3000GB, Non-Clustered Systems (System / CPU / Chips/Cores / Memory / QphH)
    SPARC T5-4 / 3.6 GHz SPARC T5 / 4/64 / 2048 GB / 409,721.8
    SPARC T4-4 / 3.0 GHz SPARC T4 / 4/32 / 1024 GB / 205,792.0
    IBM Power 780 / 4.1 GHz POWER7 / 8/32 / 1024 GB / 192,001.1
    HP ProLiant DL980 G7 / 2.27 GHz Intel Xeon X7560 / 8/64 / 512 GB / 162,601.7
    Briefly summarized: with 32 cores (at 3 GHz and 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM does with its 32-core POWER7 (at 4.1 GHz and 32MB L3 cache), and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache).  I wonder where exactly SPARC is deficient here? Now, one could of course argue that neither result is exactly new.  Well, in the absence of newer results we can speculate a little: IBM's current Performance Report lists the IBM Power 780 mentioned above with an rPerf value of 425.5.  A matching successor system with POWER7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz.  It is rated at an rPerf value of 690.1, i.e. 1.62x more.  So if you assume that storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to raise that to 400) and take IBM's own performance estimate as the basis, you may expect a theoretical result of 311,398 QphH@3000GB.  That would still be a long way from the SPARC T5-4 result, and even less favorable in the "per core" metric that IBM is so fond of. In the x86 world things do not look any better.  Unfortunately there are no such handy rPerf tables from Intel, so for an estimate I have to fall back on SPECint_rate2006 here.  (I am not a big fan of such cross-estimates; SPECcpu in particular is not well suited to estimating database performance, since almost no I/O is involved.)  The HP system above is listed at SPEC with 1580 CINT2006_rate.  The best result up to and including 2013-06-14 for the new Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate, which is a respectable 1.38x better.  (Looking at the clock rate alone would give you 1.32x.)  Doing any more math here is pointless, but for impatient readers here is a small tabular summary:
    TPC-H @3000GB performance speculation (System / QphH* / Improvement over the previous generation)
    SPARC T4-4, 32 cores SPARC T4 / 205,792
    SPARC T5-4, 64 cores SPARC T5 / 409,721 / 2x
    IBM Power 780, 32 cores POWER7 / 192,001
    IBM Power 780+, 64 cores POWER7+ / 311,398* / 1.62x
    HP ProLiant DL980 G7, 64 cores Intel Xeon X7560 / 162,601
    HP ProLiant DL980 G7, 80 cores Intel Xeon E7-4870 / 224,348* / 1.38x
    * Not actual results - speculative values based on rPerf (POWER7+) or SPECint_rate2006 (HP)
    Naturally, IBM and HP are cordially invited to disprove these values.  But as of today I am still waiting for current benchmark publications in this segment. So what can we take away from all this? There are several indications that storage was the limiting factor that kept the SPARC T5-4 from scaling beyond 2x.  The myth that SPARC cores deliver no performance is exactly that - a myth.  (And how about a TPC-H result for POWER7+, by the way?)  Cache is not the magic performance switch some people apparently take it for.  Scaling a system, a CPU architecture and an operating system beyond a certain point is hard.  
In the x86 world this appears to be a little harder still. What is missing?  Well, the price/performance discussion I will gladly leave to the sales people ;-)  And last but not least: no, I have not moved over to marketing.  But sometimes I simply cannot hold back... Disclosure Statements The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.

    Read the article

  • T4 Performance Counters explained

    - by user13346607
    Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain these, for the time being those events should simply be ignored. (Side note: some counters changed from tape-out 1.1 (*only* used in the T4 beta program) to tape-out 1.2 (used in the systems shipping today). The table only lists the tape-out 1.2 counters.) pic name (cpustat) Prose Comment Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2] Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready: Cycles a strand was ready but not selected, that can signal pipeline oversubscription Sel-1: Cycles only one instruction or µop was selected Sel-2: Cycles two instructions or µops were selected Sel-pipe-drain-cycles: cf. PRM footnote 8 to table 10.2 Pick-any, Pick-[0|1|2|3] Cycles one, two, three, no or at least one instruction or µop is picked Instr_FGU_crypto Number of FGU or crypto instructions executed on that vcpu Instr_ld dto. for loads Instr_st dto. for stores SPR_ring_ops dto. for SPR ring ops Instr_other dto. for all other instructions not listed above, PRM footnote 7 to table 10.2 lists the instructions Instr_all total number of instructions executed on that vcpu Sw_count_intr Nr of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 (whatever that is))  Atomics nr of atomic ops, which are LDSTUB/a, CASA/XA, and SWAP/A SW_prefetch Nr of PREFETCH or PREFETCHA instructions Block_ld_st Block loads or stores on that vcpu IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]\ _hit_nospec Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ this miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ misses counts only misses that are caused by instructions that really commit (note the "_nospec") BTC_miss Branch target cache miss ITLB_miss ITLB misses (synchronously counted) ITLB_miss_asynch dto. but asynchronously [I|D]TLB_fill_\ [8KB|64KB|4MB|256MB|2GB|trap] H/W tablewalk events that fill ITLB or DTLB with translation for the corresponding page size. 
The “_trap” event occurs if the HWTW was not able to fill the corresponding TLB IC_mtag_miss, IC_mtag_miss_\ [ptag_hit|ptag_miss|\ ptag_hit_way_mismatch] I$ micro tag misses, with some options for drill down Fetch-0, Fetch-0-all fetch-0 counts nr of cycles nothing was fetched for this particular strand, fetch-0-all counts cycles nothing was fetched for all strands on a core Instr_buffer_full Cycles the instruction buffer for a strand was full, thereby preventing any fetch BTC_targ_incorrect Counts all occurences of wrongly predicted branch targets from the BTC [PQ|ROB|LB|ROB_LB|SB|\ ROB_SB|LB_SB|RB_LB_SB|\ DTLB_miss]\ _tag_wait ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour therefore they are not strand specific: PQ_...: cycles Rename stage waits for a Pick Queue tag (might signal memory bound workload for single thread mode, cf. Mail from Richard Smith) ROB_...: cycles Select stage waits for a ROB (ReOrderBuffer) tag LB_...: cycles Select stage waits for a Load Buffer tag SB_...: cycles Select stage waits for Store Buffer tag combinations of the above are allowed, although some of these events can overlap, the counter will only be incremented once per cycle if any of these occur DTLB_...: cycles load or store instructions wait at Pick stage for a DTLB miss tag [ID]TLB_HWTW_\ [L2_hit|L3_hit|L3_miss|all] Counters for HWTW accesses caused by either DTLB or ITLB misses. Canbe further detailed by where they hit IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss I$ prefetches that were dropped because they either miss in L2$ or L3$ This variant counts misses regardless if the causing instruction commits or not DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]\ _hit_nospec D$ misses either in general or detailed by where they hit cf. the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters DTLB_miss_asynch counts all DTLB misses asynchronously, there is no way to count them synchronously DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full] L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand [Full|Partial]_RAW_hit_st_[buf|q] Count events where a load wants to get data that has not yet been stored, i. e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue the data in buffer takes precedence of course since it is younger [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all Counter for invalidated cache evictions per core St_q_tag_wait Number of cycles pipeline waits for a store queue tag, of course counted per core Data_pref_[drop_L2|drop_L3|\ hit_L2|hit_L3|\ hit_local|hit_remote] Data prefetches that can be further detailed by either why they were dropped or where they did hit St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote Store events distinguished by where they hit or where they cause a L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die DC_miss, DC_miss_\ [L2_L3|local|remote]_hit D$ misses either in general or detailed by where they hit cf. 
the explanation for the IC_miss in two flavours for an explanation of _nospec and the reasoning for two DC_miss counters L2_[clean|dirty]_evict Per core clean or dirty L2$ evictions L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full Per core L2$ buffer events, all count number of cycles that this state was present L2_pipe_stall Per core cycles pipeline stalled because of L2$ Branches Count branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_taken Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches) Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_\ [far_tbl|indir_tbl|ret_stk] Counter for various branch misprediction events.  Cycles_user counts cycles, attribute setting hpriv, nouser, sys controls addess space to count in Commit-[0|1|2], Commit-0-all, Commit-1-or-2 Number of times either no, one, or two µops commit for a strand. Commit-0-all counts number of times no µop commits for the whole core, cf. footnote 11 to table 10.2 in PRM for a more detailed explanation on how this counters interacts with the privilege levels
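    As a starting point, a few of the counters above can be sampled directly with cpustat. The pairing below is just one illustrative choice of pic assignments, not a recommendation from the text, and the exact pic-to-event bindings available depend on the platform:
        # ten one-second samples of committed instructions and non-speculative D$ misses
        cpustat -c pic0=Instr_all,pic1=DC_miss_nospec 1 10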

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various aspects of using Linux as one's main operating system and the possibilities it opens up. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend, and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact with one of Pawan's messages over on Facebook... And somehow that kicked off a kind of online game around the basic configuration of Apache HTTPd 2.2.x and PHP 5.x, and how to improve the overall performance of a newly installed blog based on WordPress. Default configuration files Nitin's website finally came alive, and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done, and Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after completely reading this article. First, I checked the HTTP response headers of Nitin's page - using either Chrome Developer Tools or the Firefox Web Developer extension - and based on that I advised him to lower the noise levels a little bit. It's not really necessary that detailed information about the web server software and scripting language be published in every response made. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack, so removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values: Server X-Powered-By How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones: apache2.conf httpd.conf .htaccess php.ini The above list contains some additional files I'm talking about in the next paragraphs. Anyway, those are the ones involved. Tweaking Apache Open your favourite text editor and start to modify apache2.conf. Eventually, you might like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives: # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less There you keep an eye on those two Apache directives: ServerSignature Off ServerTokens Prod If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init scripts again:
    # sudo apache2ctl configtest
    Syntax OK
    # sudo apache2ctl restart
    Refresh your website and check the HTTP response header. 
    Tweaking PHP5 (a little bit) Next, check your php.ini file with the following statement: # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less And check the value of expose_php = Off Again, if it's not as highlighted, change it... Some more Apache love Okay, back to Apache. It might also be interesting to improve the situation regarding browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow, you might see those check points with bad grades on a standard, default configuration. Well, this can be fixed easily. Configure entity tags (ETags) ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data from your static resources is very simple in Apache. As we are going to deal with the HTTP response header information, you have to ensure that Apache is capable of manipulating it. First, check your enabled modules: # sudo ls -al /etc/apache2/mods-enabled/ | grep headers And in case the 'headers' module is not listed, you have to enable it from the available ones: # sudo a2enmod headers Second, check your httpd.conf file (in case it exists): # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less In newer (or rather, fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor, like so: # sudo nano /etc/apache2/conf.d/headers.conf Then, in order to tweak your HTTP responses, either check for those lines or add them:
    Header unset ETag
    FileETag None
    In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:
    # sudo apache2ctl configtest
    Syntax OK
    # sudo apache2ctl restart
    Add Expires headers To improve the loading performance of your website, you should take some care over the proper configuration of how to leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking, it is advised that you specify a near-future expiry, read: one week or a little bit more, for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed in a more generic location of your Apache installation, but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:
    # Turn on Expires and set default to 0
    ExpiresActive On
    ExpiresDefault A0
    # Set up caching on media files for 1 year (forever?)
    <FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
    ExpiresDefault A29030400
    Header append Cache-Control "public"
    </FilesMatch>
    # Set up caching on media files for 1 week
    <FilesMatch "\.(js|css)$">
    ExpiresDefault A604800
    Header append Cache-Control "public"
    </FilesMatch>
    # Set up caching on media files for 31 days
    <FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
    ExpiresDefault A2678400
    Header append Cache-Control "public"
    </FilesMatch>
    As we are editing the .htaccess files, it is not necessary to restart Apache. 
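    A quick way to verify from a terminal that the Server, X-Powered-By, ETag and Expires headers now look the way you want (the host name below is a placeholder, not Nitin's real site):
        # curl -sI http://www.example.org/ | grep -iE '^(server|x-powered-by|etag|expires|cache-control):'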
    In case your web site doesn't load anymore, or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually an enabled module:
    # ls -al /etc/apache2/mods-enabled/ | grep expires
    # sudo a2enmod expires
    Of course, the instructions above are not feature complete, but I hope that they might provide a better default configuration for your LAMP stack. Resume of the day Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun helping and assisting other LUGM members while they were some kilometers away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too: LUGM mini-meetup... Epic! Superb Saturday Linux Meetup And last but not least, the man himself: The end of a new beginning Cheers, and happy community'ing! Updates Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led to an update of this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.

    Read the article

  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass along the variable folder path to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :) Here's my plupload jquery: jQuery.noConflict(); jQuery(document).ready(function() { jQuery("#flash_uploader").pluploadQueue({ // General settings runtimes: 'flash', url: '/assets/upload/upload.php', max_file_size: '10mb', chunk_size: '1mb', unique_names: false, // Resize images on clientside if we can resize: {width: 500, height: 350, quality: 100}, // Flash settings flash_swf_url: '/assets/upload/flash/plupload.flash.swf' }); }); And here's the upload.php file: <?php /** * upload.php * * Copyright 2009, Moxiecode Systems AB * Released under GPL License. * * License: http://www.plupload.com/license * Contributing: http://www.plupload.com/contributing */ // HTTP headers for no cache etc header('Content-type: text/plain; charset=UTF-8'); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); // Settings $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads"; //temp directory <- need these to be variable $finalDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2"; //final directory <- need these to be variable $cleanupTargetDir = true; // Remove old files $maxFileAge = 60 * 60; // Temp file age in seconds // 5 minutes execution time @set_time_limit(5 * 60); // usleep(5000); // Get parameters $chunk = isset($_REQUEST["chunk"]) ? $_REQUEST["chunk"] : 0; $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0; $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : ''; // Clean the fileName for security reasons $fileName = preg_replace('/[^\w\._]+/', '', $fileName); // Create target dir if (!file_exists($targetDir)) @mkdir($targetDir); // Remove old temp files if (is_dir($targetDir) && ($dir = opendir($targetDir))) { while (($file = readdir($dir)) !== false) { $filePath = $targetDir . DIRECTORY_SEPARATOR . $file; // Remove temp files if they are older than the max age if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge)) @unlink($filePath); } closedir($dir); } else die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}'); // Look for the content type header if (isset($_SERVER["HTTP_CONTENT_TYPE"])) $contentType = $_SERVER["HTTP_CONTENT_TYPE"]; if (isset($_SERVER["CONTENT_TYPE"])) $contentType = $_SERVER["CONTENT_TYPE"]; if (strpos($contentType, "multipart") !== false) { if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? 
"wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen($_FILES['file']['tmp_name'], "rb"); if ($in) { while ($buff = fread($in, 4096)) fwrite($out, $buff); } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); unlink($_FILES['file']['tmp_name']); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } else die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}'); } else { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen("php://input", "rb"); if ($in) { while ($buff = fread($in, 4096)){ fwrite($out, $buff); } } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } //Moves the file from $targetDir to $finalDir after receiving the final chunk if($chunk == ($chunks-1)){ rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName); } // Return JSON-RPC response die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}'); ?>

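    One way this is commonly solved (this is not from the original post) is to send the chosen folder along with every upload via plupload's multipart_params option and resolve it against a server-side whitelist in upload.php. A rough sketch, where the folder keys and paths are made up for illustration:
        <?php
        // upload.php sketch: the client sends a folder key via plupload's
        // multipart_params, e.g. multipart_params: { target_folder: 'invoices' }.
        // Only whitelisted keys are accepted, so the client can never choose an
        // arbitrary filesystem path.
        $allowed = array(
            'invoices' => '/tmp/uploads/invoices',
            'photos'   => '/tmp/uploads/photos',
        );
        $key = isset($_REQUEST['target_folder']) ? $_REQUEST['target_folder'] : '';
        if (!isset($allowed[$key])) {
            die('{"jsonrpc" : "2.0", "error" : {"code": 104, "message": "Unknown target folder."}, "id" : "id"}');
        }
        $targetDir = $_SERVER['DOCUMENT_ROOT'] . $allowed[$key]; // instead of the hard-coded value
        ?>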
    Read the article

  • Problem With View Helpers

    - by Richard Knop
    I wrote few custom view helpers but I have a little trouble using them. If I add the helper path in controller action like this: public function fooAction() { $this->view->addHelperPath('My/View/Helper', 'My_View_Helper'); } Then I can use the views from that path without a problem. But when I add the path in the bootstrap file like this: protected function _initView() { $this->view = new Zend_View(); $this->view->doctype('XHTML1_STRICT'); $this->view->headScript()->appendFile($this->view->baseUrl() . '/js/jquery-ui/jquery.js'); $this->view->headMeta()->appendHttpEquiv('Content-Type', 'text/html; charset=UTF-8'); $this->view->headMeta()->appendHttpEquiv('Content-Style-Type', 'text/css'); $this->view->headMeta()->appendHttpEquiv('Content-Language', 'sk'); $this->view->headLink()->appendStylesheet($this->view->baseUrl() . '/css/reset.css'); $this->view->addHelperPath('My/View/Helper', 'My_View_Helper'); } Then the view helpers don't work. Why is that? It's too troublesome to add the path in every controller action. Here is an example of how my custom view helpers look: class My_View_Helper_FooBar { public function fooBar() { return 'hello world'; } } I use them like this in views: <?php echo $this->fooBar(); ?> Should I post my whole bootstrap file? UPDATE: Added complete bootstrap file just in case: class Bootstrap extends Zend_Application_Bootstrap_Bootstrap { protected function _initFrontController() { $this->frontController = Zend_Controller_Front::getInstance(); $this->frontController->addModuleDirectory(APPLICATION_PATH . '/modules'); Zend_Controller_Action_HelperBroker::addPath( 'My/Controller/Action/Helper', 'My_Controller_Action_Helper' ); $this->frontController->registerPlugin(new My_Controller_Plugin_Auth()); $this->frontController->setBaseUrl('/'); } protected function _initView() { $this->view = new Zend_View(); $this->view->doctype('XHTML1_STRICT'); $this->view->headScript()->appendFile($this->view->baseUrl() . '/js/jquery-ui/jquery.js'); $this->view->headMeta()->appendHttpEquiv('Content-Type', 'text/html; charset=UTF-8'); $this->view->headMeta()->appendHttpEquiv('Content-Style-Type', 'text/css'); $this->view->headMeta()->appendHttpEquiv('Content-Language', 'sk'); $this->view->headLink()->appendStylesheet($this->view->baseUrl() . '/css/reset.css'); $this->view->addHelperPath('My/View/Helper', 'My_View_Helper'); } protected function _initDb() { $this->configuration = new Zend_Config_Ini(APPLICATION_PATH . '/configs/application.ini', APPLICATION_ENVIRONMENT); $this->dbAdapter = Zend_Db::factory($this->configuration->database); Zend_Db_Table_Abstract::setDefaultAdapter($this->dbAdapter); $stmt = new Zend_Db_Statement_Pdo($this->dbAdapter, "SET NAMES 'utf8'"); $stmt->execute(); } protected function _initAuth() { $this->auth = Zend_Auth::getInstance(); } protected function _initCache() { $frontend= array('lifetime' => 7200, 'automatic_serialization' => true); $backend= array('cache_dir' => 'cache'); $this->cache = Zend_Cache::factory('core', 'File', $frontend, $backend); } public function _initTranslate() { $this->translate = new Zend_Translate('Array', BASE_PATH . 
'/languages/Slovak.php', 'sk_SK'); $this->translate->setLocale('sk_SK'); } protected function _initRegistry() { $this->registry = Zend_Registry::getInstance(); $this->registry->configuration = $this->configuration; $this->registry->dbAdapter = $this->dbAdapter; $this->registry->auth = $this->auth; $this->registry->cache = $this->cache; $this->registry->Zend_Translate = $this->translate; } protected function _initUnset() { unset($this->frontController, $this->view, $this->configuration, $this->dbAdapter, $this->auth, $this->cache, $this->translate, $this->registry); } protected function _initGetRidOfMagicQuotes() { if (get_magic_quotes_gpc()) { function stripslashes_deep($value) { $value = is_array($value) ? array_map('stripslashes_deep', $value) : stripslashes($value); return $value; } $_POST = array_map('stripslashes_deep', $_POST); $_GET = array_map('stripslashes_deep', $_GET); $_COOKIE = array_map('stripslashes_deep', $_COOKIE); $_REQUEST = array_map('stripslashes_deep', $_REQUEST); } } public function run() { $frontController = Zend_Controller_Front::getInstance(); $frontController->dispatch(); } }
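    A common explanation for this symptom (offered here as a suggestion, not a confirmed diagnosis) is that the Zend_View created in _initView() is never handed to the ViewRenderer action helper, so controllers render with a different view instance that never saw addHelperPath(). A sketch of one way to wire it up:
        protected function _initView()
        {
            $view = new Zend_View();
            $view->doctype('XHTML1_STRICT');
            $view->addHelperPath('My/View/Helper', 'My_View_Helper');
            // ... headScript()/headMeta()/headLink() calls as before ...

            // Tell the ViewRenderer to use this instance; otherwise it creates its
            // own Zend_View that knows nothing about the custom helper path.
            $viewRenderer = Zend_Controller_Action_HelperBroker::getStaticHelper('ViewRenderer');
            $viewRenderer->setView($view);

            return $view; // also exposes it as the 'view' bootstrap resource
        }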

    Read the article

  • Threads to make video out of images

    - by masood
    updates: I think/ suspect the imageIO is not thread safe. shared by all threads. the read() call might use resources that are also shared. Thus it will give the performance of a single thread no matter how many threads used. ? if its correct . what is the solution (in practical code) Single request and response model at one time do not utilizes full network/internet bandwidth, thus resulting in low performance. (benchmark is of half speed utilization or even lower) This is to make a video out of an IP cam that gives a new image on each request. http://149.5.43.10:8001/snapshot.jpg It makes a delay of 3 - 8 seconds no matter what I do. Changed thread no. and thread time intervals, debugged the code by System.out.println statements to see if threads work. All seems normal. Any help? Please show some practical code. You may modify mine. This code works (javascript) with much smoother frame rate and max bandwidth usage. but the later code (java) dont. same 3 to 8 seconds gap. <!DOCTYPE html> <html> <head> <script type="text/javascript"> (function(){ var img="/*url*/"; var interval=50; var pointer=0; function showImg(image,idx) { if(idx<=pointer) return; document.body.replaceChild(image,document.getElementsByTagName("img")[0]); pointer=idx; preload(); } function preload() { var cache=null,idx=0;; for(var i=0;i<5;i++) { idx=Date.now()+interval*(i+1); cache=new Image(); cache.onload=(function(ele,idx){return function(){showImg(ele,idx);};})(cache,idx); cache.src=img+"?"+idx; } } window.onload=function(){ document.getElementsByTagName("img")[0].onload=preload; document.getElementsByTagName("img")[0].src="/*initial url*/"; }; })(); </script> </head> <body> <img /> </body> </html> and of java (with problem) : package camba; import java.applet.Applet; import java.awt.Button; import java.awt.Graphics; import java.awt.Image; import java.awt.Label; import java.awt.Panel; import java.awt.TextField; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.net.URL; import java.security.Timestamp; import java.util.Date; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import javax.imageio.ImageIO; public class Camba extends Applet implements ActionListener{ Image img; TextField textField; Label label; Button start,stop; boolean terminate = false; long viewTime; public void init(){ label = new Label("please enter camera URL "); add(label); textField = new TextField(30); add(textField); start = new Button("Start"); add(start); start.addActionListener(this); stop = new Button("Stop"); add(stop); stop.addActionListener(this); } public void actionPerformed(ActionEvent e){ Button source = (Button)e.getSource(); if(source.getLabel() == "Start"){ for (int i = 0; i < 7; i++) { myThread(50*i); } System.out.println("start..."); } if(source.getLabel() == "Stop"){ terminate = true; System.out.println("stop..."); } } public void paint(Graphics g) { update(g); } public void update(Graphics g){ try{ viewTime = System.currentTimeMillis(); g.drawImage(img, 100, 100, this); } catch(Exception e) { e.printStackTrace(); } } public void myThread(final int sleepTime){ new Thread(new Runnable() { public void run() { while(!terminate){ try { TimeUnit.MILLISECONDS.sleep(sleepTime); } catch (InterruptedException ex) { ex.printStackTrace(); } long requestTime= 0; Image tempImage = null; try { URL pic = null; requestTime= System.currentTimeMillis(); pic = new URL(getDocumentBase(), textField.getText()); tempImage = ImageIO.read(pic); } catch(Exception e) { 
e.printStackTrace(); } if(requestTime >= /*last view time*/viewTime){ img = tempImage; Camba.this.repaint(); } } }}).start(); System.out.println("thread started..."); } }
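    Purely as an illustration (class and method names are invented, and this is not a verified fix for the delay), one way to restructure the fetching is to let a bounded thread pool do complete request/decode cycles and hand finished frames to the painter through a queue, instead of sharing a single mutable img field across threads:
        import java.awt.Image;
        import java.net.URL;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;
        import javax.imageio.ImageIO;

        // Illustrative sketch: workers fetch snapshots concurrently; the paint loop
        // drains the queue and never touches a half-written shared field.
        public class FrameFetcher {
            private final ExecutorService pool = Executors.newFixedThreadPool(4);
            private final BlockingQueue<Image> frames = new LinkedBlockingQueue<Image>(8);

            public void requestFrame(final String baseUrl) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            // cache-busting query string so the camera returns a fresh snapshot
                            URL url = new URL(baseUrl + "?" + System.nanoTime());
                            Image img = ImageIO.read(url); // each call opens its own stream
                            frames.offer(img, 1, TimeUnit.SECONDS);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }

            public Image nextFrame() throws InterruptedException {
                return frames.take(); // painter blocks until the next frame is ready
            }
        }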

    Read the article

  • HTML Presence Controls for Communications Server 14 CodePlex Project

    Showing Presence on the Web If you're running Office Communicator Server 2007 R2, you know that your only out-of-the-box option for showing presence on the web is to use the NameControl ActiveX control that ships as part of Office.  Being an ActiveX control, this obviously means that you're limited to Internet Explorer.  Also, nobody likes ActiveX controls. What if you want to show the presence of users in a pure ASP.NET or HTML application and can't assume that the user has Communicator installed? You need an ASP.NET or HTML presence control.  HTML Presence Controls for Microsoft Communications Server 14 We recently worked with the UC team at Microsoft on a keynote demo for TechEd 2010 in New Orleans.  The demo was for a fictitious airline, Fabrikam Airlines, that wanted to show the presence of customer service and reservations agents on its website.  Customers could also start an instant message conversation with the agents using a Silverlight web chat window that used WCF to communicate with the backend UCMA application. We built HTML Presence Controls that use AJAX to poll a REST-based WCF service running in IIS and hosting a UCMA 3.0 presence subscription application.   Microsoft has graciously allowed us to publish these on CodePlex so that the development community can benefit from them:  http://htmlpresencecontrols.codeplex.com/ We will be maintaining the CodePlex project as new builds of UCMA 3.0 become available.  Check out the project home page on CodePlex for some more in-depth details on how the controls are implemented. ASP.NET Server Control Implementation We're providing an ASP.NET server control implementation that you can use stand-alone or in a GridView or Repeater (or other layout control).  The control has properties that allow you to control its appearance, e.g. you can choose whether or not to show the contact's name or availability text. You can also use the server control in a layout control such as a GridView by putting it in a TemplateColumn and binding to the SIP URI in the data source. Disclaimer Once we started working on these, we realized why Microsoft hasn't shipped such controls as part of the product.  There are some tradeoffs you have to be aware of when using these controls; here's the high level. Privacy The backend UCMA 3.0 application that subscribes to the presence of contacts runs as a trusted application and can thus retrieve the presence of any user in the organization.  There's currently no good way in UCMA to apply privacy rules to ensure that the consumer of the presence controls has permission to see the presence of the contacts that the controls are bound to.  Just to be absolutely crystal clear: these controls provide a way to query the presence of any user in the organization, regardless of the privacy relationship between the person consuming the controls and the contacts whose presence is being displayed. We're exploring options for a design pattern that would allow you to inject some privacy controls.  Keep in mind, though, that you would most likely be responsible for implementing this logic, as there is currently no functionality in UCMA that allows you to do that. Polling the WCF REST Service The controls poll the backend WCF service to retrieve the presence of contacts - you can control the refresh interval so that they poll less often. We implemented a caching layer so that the WCF service is always communicating with a presence cache; it never communicates directly with Communications Server.  
For example, if your web page is showing the presence of sip:[email protected] and 500 people have the page open, the presence cache only contains one instance of the subscription; Communications Server is not being polled 500 times for the presence of that contact. Once the presence of a contact changes, it is updated in the cache.  There are some server-based push mechanisms that would work nicely here, such as the one that Outlook Web Access 2010 uses.  Unfortunately we didn't have time to explore these options. Community Contribution Take a look at the project Issue Tracker; there are a couple of things we can use some help with.  Shoot me a note if you're interested in contributing to the project.
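    Purely as an illustration of the polling pattern described above (the endpoint URL, query parameter and response shape are assumptions, not the actual CodePlex API):
        // Poll a presence endpoint every few seconds and update a badge element.
        function pollPresence(sipUri, intervalMs) {
            setInterval(function () {
                jQuery.getJSON('/PresenceService.svc/presence', { uri: sipUri }, function (data) {
                    // 'availability' and the element id are hypothetical names
                    jQuery('#presence-badge').text(data.availability);
                });
            }, intervalMs);
        }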

    Read the article

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works it has some drawbacks. First for single-threaded servers, the server is tied up for the entire duration of the conversation and cannot service other clients, an obvious scalability issue. I believe the more significant drawback is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding. The Client Server Affinity feature in Tuxedo 11gR1 allows by way of configuration an application to define affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters for services defined in the *SERVICES section, customers can determine when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside the affinity scope. The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same MACHINE, GROUP, or SERVER. The creation and deletion of affinity is defined by the SESSIONROLE parameter and a service can be defined as either BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE does not impact the affinity session. Finally customers can define how strictly they want the affinity scope adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity scope was established with. If the server doesn't offer that service, even though other servers do offer the service, the call will fail with TPNOENT. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try and route the request to a server in the affinity scope, but if that's not possible, then Tuxedo can try to route the request to servers out of scope. All of this begs the question, why? Why have this feature? There many uses for this capability, but the most common is when there is state that is maintained in a server, group of servers, or in a machine and subsequent requests from a client must be routed to where that state is maintained. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively it might be that the server has a connection to an external system and subsequent requests need to go back to the server that has that connection. A more sophisticated case is where a group of servers maintains some sort of cache in shared memory and subsequent requests need to be routed to where the cache is maintained. Although this last case might be able to be handled by data dependent routing, using client server affinity allows the cache to be partitioned dynamically instead of statically.
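    An illustrative UBBCONFIG fragment tying these parameters together; the service and group names are invented, and the exact syntax should be checked against the Tuxedo 11gR1 reference documentation:
        *SERVICES
        "OPEN_SESSION"  SRVGRP=APPGRP1 SESSIONROLE=BEGIN AFFINITYSCOPE=GROUP AFFINITYSTRICT=PRECEDENT
        "GET_BALANCE"   SRVGRP=APPGRP1 SESSIONROLE=NONE  AFFINITYSCOPE=GROUP AFFINITYSTRICT=PRECEDENT
        "CLOSE_SESSION" SRVGRP=APPGRP1 SESSIONROLE=END
    With MANDATORY instead of PRECEDENT, a tpcall() made during the session to a service not offered inside the affinity scope would fail with TPNOENT, exactly as described above.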

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak, a SQL Server acceleration software: Q: "I would like to use SafePeak to make my CRM application faster. It is an application we bought from a vendor; after a while it became slow and we can't reprogram it. SafePeak's automated caching sounds like an easy and good solution for us. But in my environment there are many servers and other application services that address its main database, and some even change data, so I am worried that during the connection process we may miss some of them. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?" Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) prefers that all traffic to the database go through it. I decided to check out the features of SafePeak's latest version (2.1) and look for an answer there. A: Indeed, SafePeak has a feature called "Logon Trigger" that is designed for exactly this purpose. It is located in the user interface under: Settings -> SQL instances management -> [your instance] -> [Logon Trigger] tab. From there you activate or deactivate it and control a white-list of server IPs and login names that SafePeak will ignore. After activation of the "Logon Trigger", the SafePeak server is notified by SQL Server itself about each newly opened connection. SafePeak monitors those connections and decides whether anything needs to be done about them. In a typical installation SafePeak wants all application and user connections to go via SafePeak, because this way it knows about data and schema updates immediately (in real time). With the logon trigger activated, a special CLR trigger is deployed on the SQL Server and notifies SafePeak about any connection that has not arrived via SafePeak. In such cases SafePeak can clear and lock the cache, or ignore the connection. This feature ensures SafePeak is aware of all connections, so the SafePeak cache stays exactly correct at all times. So even if a user, such as a DBA, connects to the SQL Server not via SafePeak, SafePeak will know about it and take action. The notification does not impact the work of that connection; the user or application can still continue to do whatever they planned to do. Note: I found that activation of the logon trigger in SafePeak requires that the SafePeak SQL login has the following permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) and that the SQL Server instance is CLR enabled. Seeing SafePeak in action, I can say SafePeak is a fantastic resource for those who need performance for critical SQL Server applications. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!). If better application and database performance means better business to you, I suggest you download and try SafePeak. The SafePeak solution is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
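    SafePeak's own trigger is a proprietary CLR component, but the SQL Server mechanism it builds on is an ordinary server-level logon trigger. A generic, hedged T-SQL illustration of that mechanism (not SafePeak's actual code) that simply records every new connection:

      -- Illustrative only: a plain T-SQL logon trigger that audits each new connection.
      -- Caution: if a logon trigger raises an error, logins are blocked, so test carefully,
      -- and make sure every login can INSERT into the audit table (or add an EXECUTE AS clause).
      CREATE TABLE master.dbo.ConnectionAudit
      (
          LoginName   sysname        NOT NULL,
          ClientHost  nvarchar(128)  NULL,
          ConnectedAt datetime2      NOT NULL DEFAULT SYSUTCDATETIME()
      );
      GO

      CREATE TRIGGER LogonAudit
      ON ALL SERVER
      FOR LOGON
      AS
      BEGIN
          INSERT INTO master.dbo.ConnectionAudit (LoginName, ClientHost)
          VALUES (ORIGINAL_LOGIN(), HOST_NAME());
      END;
      GO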

    Read the article

  • MP3 player mpman TS300 not recognized

    - by Nick Lemaire
    I just received a mpman ts300 mp3 player for Christmas. But when I try to connect it to ubuntu (10.10) through usb it doesn't seem to be recognized. I searched for a linuxdriver but came up with nothing. I even tried installing mpman manager from the software center, but still the same problem... Has anybody got any ideas about how to fix this? Thank you edit: some additional information When I plug it in, it doesn't show up under /dev. And it is not listed in the result of 'lsusb' This is the output of 'dmesg' [19571.732541] usb 1-4: new high speed USB device using ehci_hcd and address 2 [19571.889154] scsi7 : usb-storage 1-4:1.0 [19572.883155] scsi 7:0:0:0: Direct-Access TS300 USB DISK 1.00 PQ: 0 ANSI: 0 [19572.885856] sd 7:0:0:0: Attached scsi generic sg2 type 0 [19572.887266] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB) [19573.012547] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.292550] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.572565] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19573.725019] sd 7:0:0:0: [sdb] Test WP failed, assume Write Enabled [19573.725024] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19573.728521] sd 7:0:0:0: [sdb] Attached SCSI removable disk [19575.333015] sd 7:0:0:0: [sdb] 7868416 512-byte logical blocks: (4.02 GB/3.75 GiB) [19575.452547] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19580.603253] usb 1-4: device descriptor read/all, error -110 [19580.722560] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19595.842579] usb 1-4: device descriptor read/64, error -110 [19611.072656] usb 1-4: device descriptor read/64, error -110 [19611.302540] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19616.332568] usb 1-4: device descriptor read/8, error -110 [19621.462692] usb 1-4: device descriptor read/8, error -110 [19621.692551] usb 1-4: reset high speed USB device using ehci_hcd and address 2 [19626.722567] usb 1-4: device descriptor read/8, error -110 [19631.852802] usb 1-4: device descriptor read/8, error -110 [19631.962587] usb 1-4: USB disconnect, address 2 [19631.962587] sd 7:0:0:0: Device offlined - not ready after error recovery [19631.962840] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19631.965104] sd 7:0:0:0: [sdb] READ CAPACITY failed [19631.965109] sd 7:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [19631.965112] sd 7:0:0:0: [sdb] Sense not available. 
[19631.965125] sd 7:0:0:0: [sdb] Assuming drive cache: write through [19631.965130] sdb: detected capacity change from 4028628992 to 0 [19632.130042] usb 1-4: new high speed USB device using ehci_hcd and address 3 [19647.242550] usb 1-4: device descriptor read/64, error -110 [19662.472560] usb 1-4: device descriptor read/64, error -110 [19662.702566] usb 1-4: new high speed USB device using ehci_hcd and address 4 [19677.822587] usb 1-4: device descriptor read/64, error -110 [19693.052575] usb 1-4: device descriptor read/64, error -110 [19693.282582] usb 1-4: new high speed USB device using ehci_hcd and address 5 [19698.312600] usb 1-4: device descriptor read/8, error -110 [19703.442594] usb 1-4: device descriptor read/8, error -110 [19703.672548] usb 1-4: new high speed USB device using ehci_hcd and address 6 [19708.702581] usb 1-4: device descriptor read/8, error -110 [19713.840077] usb 1-4: device descriptor read/8, error -110 [19713.942555] hub 1-0:1.0: unable to enumerate USB device on port 4 [19714.262545] usb 4-2: new full speed USB device using uhci_hcd and address 2 [19729.382549] usb 4-2: device descriptor read/64, error -110 [19744.612534] usb 4-2: device descriptor read/64, error -110 [19744.842543] usb 4-2: new full speed USB device using uhci_hcd and address 3 [19759.962714] usb 4-2: device descriptor read/64, error -110 [19775.200047] usb 4-2: device descriptor read/64, error -110 [19775.422630] usb 4-2: new full speed USB device using uhci_hcd and address 4 [19780.444498] usb 4-2: device descriptor read/8, error -110 [19785.574491] usb 4-2: device descriptor read/8, error -110 [19785.802547] usb 4-2: new full speed USB device using uhci_hcd and address 5 [19790.824490] usb 4-2: device descriptor read/8, error -110 [19795.961473] usb 4-2: device descriptor read/8, error -110 [19796.062556] hub 4-0:1.0: unable to enumerate USB device on port 2

    Read the article

  • Url rewrite subfolder to root and forbid accessing subfolder

    - by Alessandro Pezzato
    I have drupal installed in a subfolder drupal, but I want to access pages as it is in root folder: http://www.example.com instead of http://www.example.com/drupal I'm able to have this working, but it's also working with url containing subfolder, so I have http://www.example.com and a clone site in http://www.example.com/drupal What is the rule to forbid access to subfolder? I want all url starting with http://www.example.com/drupal being forbidden. This is .htaccess in / directory: Options -Indexes Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteRule ^(.*+)$ drupal/$1 [L,QSA] </IfModule> And this is drupal .htaccess in /drupal/ directory: Options -Indexes Options +FollowSymLinks ErrorDocument 404 index.php DirectoryIndex index.php index.html index.htm # Override PHP settings that cannot be changed at runtime. See # sites/default/default.settings.php and drupal_initialize_variables() in # includes/bootstrap.inc for settings that can be changed at runtime. # PHP 5, Apache 1 and 2. <IfModule mod_php5.c> php_flag magic_quotes_gpc off php_flag magic_quotes_sybase off php_flag register_globals off php_flag session.auto_start off php_value mbstring.http_input pass php_value mbstring.http_output pass php_flag mbstring.encoding_translation off </IfModule> # Requires mod_expires to be enabled. <IfModule mod_expires.c> # Enable expirations. ExpiresActive On # Cache all files for 2 weeks after access (A). ExpiresDefault A1209600 <FilesMatch \.php$> # Do not allow PHP scripts to be cached unless they explicitly send cache # headers themselves. Otherwise all scripts would have to overwrite the # headers set by mod_expires if they want another caching behavior. This may # fail if an error occurs early in the bootstrap process, and it may cause # problems if a non-Drupal PHP file is installed in a subdirectory. ExpiresActive Off </FilesMatch> </IfModule> # Various rewrite rules. <IfModule mod_rewrite.c> RewriteEngine on # Block access to "hidden" directories whose names begin with a period. This # includes directories used by version control systems such as Subversion or # Git to store control files. Files whose names begin with a period, as well # as the control files used by CVS, are protected by the FilesMatch directive # above. RewriteRule "(^|/)\." - [F] # To redirect all users to access the site WITH the 'www.' prefix, # (http://example.com/... will be redirected to http://www.example.com/...) # uncomment the following: # RewriteCond %{HTTP_HOST} !^www\. [NC] # RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301] # # To redirect all users to access the site WITHOUT the 'www.' prefix, # (http://www.example.com/... will be redirected to http://example.com/...) # uncomment the following: RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC] RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301] RewriteBase /drupal # Pass all requests not referring directly to files in the filesystem to # index.php. Clean URLs are handled in drupal_environment_initialize(). RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico #RewriteRule ^ index.php [L] RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Rules to correctly serve gzip compressed CSS and JS files. # Requires both mod_rewrite and mod_headers to be enabled. <IfModule mod_headers.c> # Serve gzip compressed CSS files if they exist and the client accepts gzip. 
RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.css $1\.css\.gz [QSA] # Serve gzip compressed JS files if they exist and the client accepts gzip. RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{REQUEST_FILENAME}\.gz -s RewriteRule ^(.*)\.js $1\.js\.gz [QSA] # Serve correct content types, and prevent mod_deflate double gzip. RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1] RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1] <FilesMatch "(\.js\.gz|\.css\.gz)$"> # Serve correct encoding type. Header append Content-Encoding gzip # Force proxies to cache gzipped & non-gzipped css/js files separately. Header append Vary Accept-Encoding </FilesMatch> </IfModule> </IfModule>
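    One hedged possibility for the rule in question (untested here, so treat it as a starting point): in the root .htaccess, inside the mod_rewrite block and before the rewrite into drupal/, reject only requests whose original request line contains the /drupal prefix. Internal rewrites do not change THE_REQUEST, so the clean root URLs keep working while direct /drupal URLs return 403 Forbidden:

      # Forbid direct client requests to /drupal..., but allow the internal rewrite.
      RewriteCond %{THE_REQUEST} "^[A-Z]+ /drupal[/? ]" [NC]
      RewriteRule ^ - [F]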

    Read the article

  • No sound lenovo t60 alsa ad1981 iec958

    - by Nate
    Any help on getting the sound to come through my lenovo t60 build in speakers, headphones, or mic would greatly be appreciated. The three buttons to increase, decrease sound seem to work. Bios has sound card enabled and the buttons beep when pressed. When going to Utube or playing music, no sound is heard. Thanks Nate Feb 23 - Didn't see anything specific in the sys logs with Rhythmbox when connecting my ipod. Rhythmbox is playing, but still no sound. Here is the syslog details for today. Output is set to analog output. Feb 23 17:42:32 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'. Feb 23 17:42:33 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'. Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.daily' terminated Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.weekly' started Feb 23 17:42:49 itgis01398 anacron[12067]: Updated timestamp for job `cron.weekly' to 2011-02-23 Feb 23 17:42:53 itgis01398 anacron[968]: Job `cron.weekly' terminated Feb 23 17:42:53 itgis01398 anacron[968]: Normal exit (2 jobs run) Feb 23 18:01:19 itgis01398 kernel: [ 2731.324067] usb 1-5: new high speed USB device using ehci_hcd and address 3 Feb 23 18:01:19 itgis01398 kernel: [ 2731.482879] Initializing USB Mass Storage driver... Feb 23 18:01:19 itgis01398 kernel: [ 2731.483061] usb-storage 1-5:1.0: Quirks match for vid 05ac pid 1205: 10 Feb 23 18:01:19 itgis01398 kernel: [ 2731.483116] scsi6 : usb-storage 1-5:1.0 Feb 23 18:01:19 itgis01398 kernel: [ 2731.483306] usbcore: registered new interface driver usb-storage Feb 23 18:01:19 itgis01398 kernel: [ 2731.483310] USB Mass Storage support registered. Feb 23 18:01:20 itgis01398 kernel: [ 2732.481116] scsi 6:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0 Feb 23 18:01:20 itgis01398 kernel: [ 2732.482466] sd 6:0:0:0: Attached scsi generic sg2 type 0 Feb 23 18:01:20 itgis01398 kernel: [ 2732.485095] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.485110] sd 6:0:0:0: [sdb] 7999487 512-byte logical blocks: (4.09 GB/3.81 GiB) Feb 23 18:01:20 itgis01398 kernel: [ 2732.487933] sd 6:0:0:0: [sdb] Write Protect is off Feb 23 18:01:20 itgis01398 kernel: [ 2732.487941] sd 6:0:0:0: [sdb] Mode Sense: 64 00 00 08 Feb 23 18:01:20 itgis01398 kernel: [ 2732.487947] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.489927] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.491150] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.491163] sdb: sdb1 sdb2 Feb 23 18:01:20 itgis01398 kernel: [ 2732.510428] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 Feb 23 18:01:20 itgis01398 kernel: [ 2732.511288] sd 6:0:0:0: [sdb] Assuming drive cache: write through Feb 23 18:01:20 itgis01398 kernel: [ 2732.511297] sd 6:0:0:0: [sdb] Attached SCSI removable disk Feb 23 18:01:21 itgis01398 kernel: [ 2733.746675] FAT: invalid media value (0x2f) Feb 23 18:01:21 itgis01398 kernel: [ 2733.746682] VFS: Can't find a valid FAT filesystem on dev sdb1. 
Feb 23 18:01:22 itgis01398 upstart-udev-bridge[330]: Env must be KEY=VALUE pairs Feb 23 18:02:07 itgis01398 kernel: [ 2780.115826] sd 6:0:0:0: [sdb] Unhandled sense code Feb 23 18:02:07 itgis01398 kernel: [ 2780.115835] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Feb 23 18:02:07 itgis01398 kernel: [ 2780.115844] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current] Feb 23 18:02:07 itgis01398 kernel: [ 2780.115855] Info fld=0x0 Feb 23 18:02:07 itgis01398 kernel: [ 2780.115859] sd 6:0:0:0: [sdb] Add. Sense: Unrecovered read error Feb 23 18:02:07 itgis01398 kernel: [ 2780.115870] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fd e9 00 00 f0 00 Feb 23 18:02:07 itgis01398 kernel: [ 2780.115892] end_request: I/O error, dev sdb, sector 589289 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351464] sd 6:0:0:0: [sdb] Unhandled sense code Feb 23 18:02:49 itgis01398 kernel: [ 2821.351473] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Feb 23 18:02:49 itgis01398 kernel: [ 2821.351482] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current] Feb 23 18:02:49 itgis01398 kernel: [ 2821.351493] Info fld=0x0 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351497] sd 6:0:0:0: [sdb] Add. Sense: No additional sense information Feb 23 18:02:49 itgis01398 kernel: [ 2821.351507] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fe d9 00 00 10 00 Feb 23 18:02:49 itgis01398 kernel: [ 2821.351530] end_request: I/O error, dev sdb, sector 589529 Feb 23 18:17:01 itgis01398 CRON[12709]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) volume is all of the way up.

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspxI volunteered myself to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see using Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven’t looked into using as a full server or development stack. I guess I’m too stuck on IIS and Visual Studio :-). Here are my notes, that aren’t very well formatted, but I wanted to share it anyways. What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Why should you be interested? another popular tool that can help you get the job done you can use the command prompt! can be run at build or release time to automate tasks What are some uses? https://www.npmjs.org/ - NuGet for Node packages http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc) http://yeoman.io/ "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want alternative - http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/ https://www.npmjs.org/package/generator-cg-angular - phantom js, less, // git is needed for bower http://git-scm.com/ run installer in Windows before you can use bower // select Run Git from the Windows Command Prompt in the installer // requires a reboot http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error npm install -g git npm install -g yo npm install -g generator-cg-angular mkdir myapp cd myapp yo cg-angular npm install -g bower npm install -g grunt-cli yo bower grunt serve grunt test grunt build // there are many generators (generator-angular) is another one // I like the Nuget HotTowel-Angular from John Papa myself // needed IIS Node for Express -> prompt from WebMatrix Karma bat to startup Karma - see below image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line LESS compiling js and css combine and minification at build with Gulp for requireJS apps quick lightweight HTTP server - "Express" Build pipeline with Grunt or Gulp http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ Gulp is the newer and improved over Grunt. Supposed to be easier to use, but Grunt is more established. https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp https://github.com/assetgraph/assetgraph-builder Does a lot of the minimizing, combining, image optimization etc using Node. Looks interesting.... http://nodejs.org http://nodeschool.io/ http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/ https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/ https://codio.com/ http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx run unit tests - Karma in msBuild karma-start.bat @echo off cd %~dp0\.. 
REM 604800 is to make sure we only update once every 7 days call npm install --cache-min 604800 -g grunt-cli call npm install --cache-min 604800 call npm install --cache-min 604800 -g karma-cli karma start UnitTests\karma.conf.js REM karma start UnitTests\karma.conf.js --single-run REM see karma-start.bat and karam.config.js REM jsHint comes from Nuget
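    A hedged sketch of the kind of Gruntfile that the grunt serve / grunt test / grunt build commands above assume (task and path names are assumptions, not something from the talk):

      // Gruntfile.js -- minimal build pipeline: lint with JSHint, run unit tests with Karma.
      module.exports = function (grunt) {
        grunt.initConfig({
          jshint: { all: ['app/**/*.js'] },
          karma: { unit: { configFile: 'UnitTests/karma.conf.js', singleRun: true } }
        });
        grunt.loadNpmTasks('grunt-contrib-jshint');
        grunt.loadNpmTasks('grunt-karma');
        grunt.registerTask('test', ['karma']);
        grunt.registerTask('build', ['jshint']);
      };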

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains) Introduction SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments. Why split cores can matter Split core allocations can silenty reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance then this can have an important impact: This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down. What to do about it Split core situations are easily avoided. In most cases the logical domain constraint manager will avoid it without any administrative action, but it can be entirely prevented by doing one of the several actions: Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary. Use the whole core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole core allocation is used an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan. Don't obsess: - if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. 
If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests. Summary Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (this is no new information – just some bits and pieces compiled at one single place – plus a bit of reality check) When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role) the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange. This in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingsChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure if your application can handle these configuration changes at runtime, or if you rather want a clean restart. Prior to the SDK 1.3 VS Templates – the following code was generated to reboot if any configuration settings have changed: private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e) {     // If a configuration setting is changing     if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))     {         // Set e.Cancel to true to restart this role instance         e.Cancel = true;     } } This is a little drastic as a default since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more). The Changed event gets fired after the configuration changes have been applied. Again the changes will get passed in just like in the Changing event. But from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart. You can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: When you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved) – you typically don’t need to recycle. In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice! Gotcha A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint derived class. But with the move to the full IIS model in 1.3 – the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here.. 
You might not be able to call into your application code to, e.g., clear a cache. Keep that in mind! In this case you need to handle these events from, e.g., global.asax.
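    The pattern described above (read configuration through a small abstraction, cache it, and flush the cache when the Changed event fires) might look roughly like the following sketch; MyConfigurationCache is a hypothetical name, while the RoleEnvironment types are the real ServiceRuntime APIs:

      using System.Collections.Concurrent;
      using System.Linq;
      using Microsoft.WindowsAzure.ServiceRuntime;

      // Hypothetical in-memory settings cache (not the StarterSTS code) that re-reads on demand.
      public static class MyConfigurationCache
      {
          private static readonly ConcurrentDictionary<string, string> Cache =
              new ConcurrentDictionary<string, string>();

          public static string Get(string name)
          {
              return Cache.GetOrAdd(name, RoleEnvironment.GetConfigurationSettingValue);
          }

          public static void Clear()
          {
              Cache.Clear();
          }
      }

      public static class ConfigurationChangeHandling
      {
          // Wire this up once at startup (e.g. from global.asax in the full IIS model).
          public static void Attach()
          {
              RoleEnvironment.Changed += (sender, e) =>
              {
                  // Only setting changes require a flush; the next Get() re-reads fresh values.
                  if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
                  {
                      MyConfigurationCache.Clear();
                  }
              };
          }
      }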

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013 in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and discussed a number of customers deploying Oracle's Big Data / Fast Data solutions and in particular Oracle's NoSQL Database.  Since that time, there have been a large number of request seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.   Fast Data is a software solution stack that deals with one aspect of Big Data, high velocity.   The software in the Fast Data solution stack involves 3 key pieces and their integration:  Oracle Event Processing, Oracle Coherence, Oracle NoSQL Database.   All three of these technologies address a high throughput, low latency data management requirement.   Oracle Event Processing enables continuous query to filter the Big Data fire hose, enable intelligent chained events to real-time service invocation and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time to allow SQL statements to look for triggers on events like breach of weighted moving average on a real-time data stream.    Oracle Coherence is a distributed, grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory latency speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.   The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads.  For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra low latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure thru clustered, always on, parallel processing in a shared nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location based personalization service. The question then becomes how do these things work together to deliver an end to end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.   You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme speed cache overflow and retrieval.   Here is a great reference to how these two technologies are integrated and work together.  Coherence & Oracle NoSQL Database.   On the stream processing side, it is similar as with the Coherence case.  
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.
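    To give a concrete feel for the simple key-value ingest described above, here is a hedged Java sketch against the Oracle NoSQL Database client API (the store name, helper host, and key paths are assumptions for the sketch):

      import oracle.kv.KVStore;
      import oracle.kv.KVStoreConfig;
      import oracle.kv.KVStoreFactory;
      import oracle.kv.Key;
      import oracle.kv.Value;
      import oracle.kv.ValueVersion;

      public class SensorIngest {
          public static void main(String[] args) {
              // Connect to the store (name and helper host are placeholders).
              KVStore store = KVStoreFactory.getStore(
                      new KVStoreConfig("kvstore", "nosqlhost:5000"));

              // Ingest a simple key-value pair: /sensor/-/42 -> reading bytes.
              Key key = Key.createKey("sensor", "42");
              store.put(key, Value.createValue("21.7C".getBytes()));

              // Low-latency read back of the same key.
              ValueVersion vv = store.get(key);
              System.out.println(new String(vv.getValue().getValue()));

              store.close();
          }
      }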

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite some people asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn’t find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS which will work both on-premise and on Azure. Provider Fortunately StarterSTS is already based on the idea of “providers”. Authentication, roles and claims generation is based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile provider. In addition StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn’t anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is “locked” to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure version will be just a matter of using different providers. URL Handling Another thing that’s substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET’s Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There’s a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files. New Features I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area. Especially important for the Azure version. Stay tuned.
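    A rough C# sketch of what such a pluggable cache provider could look like (the interface shape is an assumption, not the actual StarterSTS contract); the default implementation simply wraps the ASP.NET web cache, and a memcached or AppFabric Caching version would implement the same interface:

      using System;
      using System.Web;
      using System.Web.Caching;

      // Assumed provider contract for the front-end cache.
      public interface ICacheProvider
      {
          T Get<T>(string key) where T : class;
          void Add(string key, object value, TimeSpan lifetime);
      }

      // Default implementation on top of the ASP.NET web cache.
      public class WebCacheProvider : ICacheProvider
      {
          public T Get<T>(string key) where T : class
          {
              return HttpRuntime.Cache[key] as T;
          }

          public void Add(string key, object value, TimeSpan lifetime)
          {
              HttpRuntime.Cache.Insert(key, value, null,
                  DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
          }
      }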

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and what how it will be accessed. What, I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update: `(<csite> <csite> <csite>) where <csite> would be (1362501715 . <site>) At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list questions: `("codereview" <cquestion> <cquestion> <cquestion>) Each <cquestion> is, you guessed it, a cons of questions with their last update time: `(1362501715 <question>) (1362501720 . <question>) <question> is a cons of a question structure and a list of answers (again, consed with their last update time): `(<question-structure> <canswer> <canswer> <canswer> and ` `(1362501715 . <answer-structure>) This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love at all). The explicit conses are likely unnecessary, but it helps my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into (<epoch-time> <api-param> <cquestion> <cquestion> ...) Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each inherits its last-update time from its children and so-on down the tree. To what extent this cull should take place: I'm not sure.) Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?
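    A hedged Emacs Lisp sketch of how a lookup against that structure could work (the variable and function names are assumptions, included only to make the nesting concrete):

      (require 'cl-lib)

      ;; stack-api/cache: list of (LAST-UPDATE . SITE), where SITE is (API-PARAM CQUESTION...)
      (defvar stack-api/cache nil
        "Cached Stack Exchange data, keyed by last update time.")

      (defun stack-api/find-site (api-param)
        "Return the cached SITE list whose API parameter equals API-PARAM, or nil."
        (cl-some (lambda (csite)
                   (let ((site (cdr csite)))
                     (when (string= (car site) api-param)
                       site)))
                 stack-api/cache))

      ;; Example: (stack-api/find-site "codereview")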

    Read the article

  • installer can't find partition, but fdisk can find them

    - by pxd
    I'm installing ubuntu 12.04, my system had install 2 system -- winxp and ubuntu 10.10. Now, I want to update ubuntu to 12.04. I use usb disk to install 12.04. But, the installer can't not find my partition in my harddisk. But, the fdisk can find them. Can you help me? How to do? ubuntu@ubuntu:~$ sudo lshw -short H/W path Device Class Description system HP 2230s (NN868PA#AB2) /0 bus 3037 /0/9 memory 64KiB BIOS /0/0 processor Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz /0/0/1 memory 2MiB L2 cache /0/0/3 memory 32KiB L1 cache /0/0/0.1 processor Logical CPU /0/0/0.2 processor Logical CPU /0/2 memory 32KiB L1 cache /0/4 memory 2GiB System Memory /0/4/0 memory SODIMM [empty] /0/4/1 memory 2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns) /0/100 bridge Mobile 4 Series Chipset Memory Controller Hub /0/100/2 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/2.1 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/1a bus 82801I (ICH9 Family) USB UHCI Controller #4 /0/100/1a.1 bus 82801I (ICH9 Family) USB UHCI Controller #5 /0/100/1a.2 bus 82801I (ICH9 Family) USB UHCI Controller #6 /0/100/1a.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #2 /0/100/1b multimedia 82801I (ICH9 Family) HD Audio Controller /0/100/1c bridge 82801I (ICH9 Family) PCI Express Port 1 /0/100/1c.1 bridge 82801I (ICH9 Family) PCI Express Port 2 /0/100/1c.1/0 wlan1 network PRO/Wireless 5100 AGN [Shiloh] Network Connection /0/100/1c.2 bridge 82801I (ICH9 Family) PCI Express Port 3 /0/100/1c.4 bridge 82801I (ICH9 Family) PCI Express Port 5 /0/100/1c.5 bridge 82801I (ICH9 Family) PCI Express Port 6 /0/100/1c.5/0 eth1 network 88E8072 PCI-E Gigabit Ethernet Controller /0/100/1d bus 82801I (ICH9 Family) USB UHCI Controller #1 /0/100/1d.1 bus 82801I (ICH9 Family) USB UHCI Controller #2 /0/100/1d.2 bus 82801I (ICH9 Family) USB UHCI Controller #3 /0/100/1d.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #1 /0/100/1e bridge 82801 Mobile PCI Bridge /0/100/1f bridge ICH9M LPC Interface Controller /0/100/1f.2 scsi0 storage 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] /0/100/1f.2/0 /dev/sda disk 500GB WDC WD5000BEVT-0 /0/100/1f.2/0/1 /dev/sda1 volume 48GiB Windows NTFS volume /0/100/1f.2/0/2 /dev/sda2 volume 416GiB Extended partition /0/100/1f.2/0/2/5 /dev/sda5 volume 97GiB HPFS/NTFS partition /0/100/1f.2/0/2/6 /dev/sda6 volume 198GiB HPFS/NTFS partition /0/100/1f.2/0/2/7 /dev/sda7 volume 27GiB Linux filesystem partition /0/100/1f.2/0/2/8 /dev/sda8 volume 93GiB Linux filesystem partition /0/100/1f.2/1 /dev/cdrom disk CDDVDW TS-L633M /0/1 scsi6 storage /0/1/0.0.0 /dev/sdb disk 15GB STORAGE DEVICE /0/1/0.0.0/0 /dev/sdb disk 15GB /0/1/0.0.0/0/1 /dev/sdb1 volume 14GiB Windows FAT volume /1 power HZ04037 ubuntu@ubuntu:~$ ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x31263125 Device Boot Start End Blocks Id System /dev/sda1 * 63 102277727 51138832+ 7 HPFS/NTFS/exFAT /dev/sda2 102277728 976784129 437253201 f W95 Ext'd (LBA) /dev/sda5 102277791 307078127 102400168+ 7 HPFS/NTFS/exFAT /dev/sda6 307078191 724141151 208531480+ 7 HPFS/NTFS/exFAT /dev/sda7 724142080 781459455 28658688 83 Linux /dev/sda8 781461504 976771071 97654784 83 Linux Disk /dev/sdb: 15.9 GB, 15931539456 bytes 64 heads, 32 sectors/track, 15193 cylinders, total 31116288 
sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009eb92 Device Boot Start End Blocks Id Systemfile:///home/ubuntu/Pictures/Screenshot%20from%202012-07-07%2010:25:40.png /dev/sdb1 * 32 31115263 15557616 c W95 FAT32 (LBA) ubuntu 12.04 installer can't find the partition in my hard disk, only find device - /dev/sda.(sorry, I'm new user, so can't send image.)

    Read the article

  • perl Client-SSL-Warning: Peer certificate not verified

    - by Jeremey
    I am having trouble with a perl screenscraper to an HTTPS site. In debugging, I ran the following: print $res->headers_as_string; and in the output, I have the following line: Client-SSL-Warning: Peer certificate not verified Is there a way I can auto-accept this certificate, or is that not the problem? #!/usr/bin/perl use LWP::UserAgent; use Crypt::SSLeay::CTX; use Crypt::SSLeay::Conn; use Crypt::SSLeay::X509; use LWP::Simple qw(get); my $ua = LWP::UserAgent->new; my $req = HTTP::Request->new(GET => 'https://vzw-cat.sun4.lightsurf.net/vzwcampaignadmin/'); my $res = $ua->request($req); print $res->headers_as_string; output: Cache-Control: no-cache Connection: close Date: Tue, 01 Jun 2010 19:28:08 GMT Pragma: No-cache Server: Apache Content-Type: text/html Expires: Wed, 31 Dec 1969 16:00:00 PST Client-Date: Tue, 01 Jun 2010 19:28:09 GMT Client-Peer: 64.152.68.114:443 Client-Response-Num: 1 Client-SSL-Cert-Issuer: /O=VeriSign Trust Network/OU=VeriSign, Inc./OU=VeriSign International Server CA - Class 3/OU=www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign Client-SSL-Cert-Subject: /C=US/ST=Massachusetts/L=Boston/O=verizon wireless/OU=TERMS OF USE AT WWW.VERISIGN.COM/RPA (C)00/CN=PSMSADMIN.VZW.COM Client-SSL-Cipher: DHE-RSA-AES256-SHA Client-SSL-Warning: Peer certificate not verified Client-Transfer-Encoding: chunked Link: <css/vtext_style.css>; rel="stylesheet"; type="text/css" Set-Cookie: JSESSIONID=DE6C99EA2F3DD1D4DF31456B94F16C90.vz3; Path=/vzwcampaignadmin; Secure Title: Verizon Wireless - Campaign Administrator
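    The warning itself is informational: with Crypt::SSLeay the peer certificate is accepted without verification by default, which is why the request still succeeds. A hedged sketch of the two usual choices (the CA bundle path is an assumption):

      #!/usr/bin/perl
      use strict;
      use warnings;
      use LWP::UserAgent;

      # Option 1 (Crypt::SSLeay backend): point the client at a CA bundle so the peer
      # certificate actually gets verified instead of merely warned about.
      $ENV{HTTPS_CA_FILE} = '/etc/ssl/certs/ca-certificates.crt';   # path is an assumption

      # Option 2 (IO::Socket::SSL backend, LWP 6+): make the choice explicit per agent.
      my $ua = LWP::UserAgent->new(
          ssl_opts => {
              verify_hostname => 1,
              SSL_ca_file     => '/etc/ssl/certs/ca-certificates.crt',
          },
      );

      my $res = $ua->get('https://vzw-cat.sun4.lightsurf.net/vzwcampaignadmin/');
      print $res->headers_as_string;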

    Read the article

  • Very slow performance deserializing using datacontractserializer in a Silverlight Application.

    - by caryden
    Here is the situation: a Silverlight 3 application hits an asp.net hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream which is then zipped and then encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method. All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most. On some development machines but not all (all machines Windows 7), the deserialize process, that is, the call to ReadObject(stream), takes several minutes and seems to lock up the entire machine BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug configuration code outside the debugger has no problem. One thing that looks suspicious is that when you turn on stop on Exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on a deserialization at the conclusion of a web service call to get the data from the WCF Service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old and my desktop is a 4 core, 6GB RAM beast. Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts? Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger

    Read the article

  • grails prod run-app connects to mysql but grails prod run-war doesn't

    - by damian
    Hi, I have an unexpected problem. I created a war with grails war and then deployed it in Tomcat. To my surprise the CRUD works fine, but I don't know which persistence configuration it is using. So I did this test: compare grails prod run-app with grails prod run-war. The first works fine and connects to the mysql database. The other doesn't. This is my DataSource.groovy: dataSource { pooled = true driverClassName = "com.mysql.jdbc.Driver" username = "grails" password = "mysqlgrails" } hibernate { cache.use_second_level_cache=false cache.use_query_cache=false cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider' } // environment specific settings environments { development { dataSource { dbCreate = "update" // one of 'create', 'create-drop','update' url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } test { dataSource { dbCreate = "update" // one of 'create', 'create-drop','update' url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } production { dataSource { dbCreate = "update" url = "jdbc:mysql://some-key.amazonaws.com/MyDB" } } } Also I extracted the war to see if I could find a data source configuration file, without success. More info: Grails version: 1.2.1 JVM version: 1.6.0_17 Also I think this question is similar, but it doesn't have an answer.

    Read the article

< Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >