Search Results

Search found 31311 results on 1253 pages for 'oracle product info'.


  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled their wallet out for several chains; I see the NFC readers in Walgreen's when I'm sent there for milk. But Isis has been relatively quiet until now. As of last week they have finally launched in their two test cities: Austin and Salt Lake City. Below are the supported carriers and phones as of now, but more phones will be added later. AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™. T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G. Verizon supports: Droid Incredible 4G LTE. Of course iPhone owners have no wallet since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis. The "secure element" is the cryptographic logic that secures mobile payments. Carriers like the secure element in the SIM, while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone. Anybody know?) Then you can download the Isis app from Google Play and load your cards. Most credit cards are supported, and there's a process to verify that the credit cards are valid. Then you can select from the list of participating retailers to "follow." Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use. You can select a default payment type and also switch between them easily. When the phone is tapped on the reader, there are two exchanges of information: the payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons. Of course the value of mobile wallets comes from the ease of handling all three data types (i.e. payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article

  • Will we ever lose the human touch?

    - by divya.malik
    I was at a conference two weeks ago, which was targeted at sales and marketing professionals. The discussions around the changing sales landscape were very interesting. More and more selling is moving to the internet: salespeople are delivering more of their presentations online or over the phone. Budget constraints and new technologies have dramatically decreased the need for face-to-face interactions. At the same time, customers are researching products on their own, taking the advice of peers, making up their minds, and only then contacting the vendor. That takes care of more than half of the usual selling process. But humans are social animals, and because of that I believe that despite these changing trends and technologies, the human touch will always be necessary. One of the presenters at the conference shared this video, which stayed in my mind.

    Read the article

  • Hands-On Workshop "APEX Mobile": Summer 2013

    - by Carsten Czarski
    Applications for mobile devices are on everyone's lips right now: the requirement to "support smartphones or tablets" comes up almost everywhere. As most of you know, mobile devices are supported out of the box with the current APEX version 4.2, and mobile applications are built quickly and easily in typical APEX fashion. You can see just how easy it is not only with the new workshop guide "APEX-Anwendungen für mobile Endgeräte" (APEX applications for mobile devices), but also live at one of the APEX Mobile hands-on workshops. There you will learn how to build an APEX application based on tables; how to include APEX components such as forms, reports, charts, and calendars; how to access a smartphone's GPS, store the coordinates in the database, and work with them; how to access the camera, store the pictures, and perform simple image operations; and much more. Of course, there is also time set aside for exchange and discussion. Details on the agenda, dates, and workshop prerequisites can be found on the website. Participation is free of charge, so sign up right away.

    Read the article

  • APEX Tabs as a Pulldown Menu: Like in the Application Builder

    - by carstenczarski
    Everyone knows the tabs in the APEX Application Builder, with their elegant way of opening a submenu as a pulldown menu, and many people wonder how to use something like this in their own APEX applications. As soon as you want to support more than one hierarchy level, APEX tabs are no longer an option, because they only support two levels. There are a few tips on this topic on the internet, but many of them rely on the JavaScript functions that the Application Builder uses internally. These functions are undocumented, so you cannot rely on that approach still working in future APEX versions. It is better to build a solution that has no dependencies on undocumented functions. This tip presents a solution based on APEX lists. Lists have the advantage that they can be nested arbitrarily, a click can point to any target, and list templates let you style the presentation however you like. Read more in our current tip.

    Read the article

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs, by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. We will try to address the same requirements within the OSB framework here. Project Build - Design Time The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer. OSB Design Time - Part 1 We will first create a placeholder project in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project. Application Explorer - iWay Design Time Tool Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that schema validation has been turned off. As a result, this allows the adapter, at runtime, to use a single channel to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type. OSB Design Time - Part 2 Create two simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function: fn:local-name-from-QName(fn:node-name($body/*[1])). This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as the top element. With the routing function in place, build the routing table with two branches that route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project. Testing - Run-Time After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.

    Read the article

  • What was the first consumer-oriented hardware/software solution?

    - by Maksee
    We all know the story of the personal computer as a consumer-oriented product, but it occurred to me that real end-user solutions must have appeared before that time: products that were probably expensive, but could be offered as a paid service, for example a computer terminal for looking up transport timetables, or a game machine. On the other hand, video terminals as we know them appeared not that long ago. So if something like this existed, it was most likely a hardware/software combination offering little interactivity, but probably printing some information based on user actions.

    Read the article

  • Wrapping up an Exciting Mobile World Congress

    - by Jacob Lehrbaum
    It's been a busy week here in Barcelona, with noticeably more energy at the show than in 2010. This year, we decided to move the Java booth to the App Planet and really engage with the increasing number of developers attending the event. Our booth featured 10 demos and a series of nearly 25 workshops covering a variety of topics, ranging from information about Java Verified, to the use of web technologies with Java ME, to sessions hosted by operators such as Orange and Telefonica (see image to the left). One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium, whose goal is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data (weight, blood pressure, etc.) from the home to your doctor. If you'd like to learn more about this innovative project, you can watch a video that was filmed at the event featuring Charles Palmer of Onzo. If you'd like to learn more about Java in the Smart Grid, check out our on-demand webinar.

    Read the article

  • Fuzzing for Security

    - by Sylvain Duloutre
    Yesterday, I attended an internal workshop about ethical hacking. Hacking skills like fuzzing can be used to quantitatively assess and measure security threats in software. Fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by injecting massive amounts of random data, called fuzz, into the system in an attempt to make it crash. If the input triggers an exception, crash or server error (in the case of web apps), a vulnerability has been discovered. A fuzzer is a program that generates and injects random (and generally faulty) input into an application; its main purpose is to automate that process. There are typically two methods for producing the fuzz data sent to a target: generation and mutation. Generational fuzzers build the data they send from a data model provided by the fuzzer's creator. Sometimes this is as simple and dumb as sending random bytes or swapping bytes, and sometimes it is much smarter, knowing good values and combining them in interesting ways. Mutation, on the other hand, starts out with a known-good "template" which is then modified; however, nothing that is not present in the "template" or "seed" will be produced. Generally fuzzers are good at finding buffer overflow, DoS, SQL injection and format string bugs. They do a poor job of finding vulnerabilities related to information disclosure, encryption flaws and any other vulnerability that does not cause the program to crash. Fuzzing is simple and offers a high benefit-to-cost ratio, but it does not replace other proven testing techniques. What is your computer doing over the weekend?
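
    To make the mutation approach above concrete, here is a minimal sketch in Java; the parseRecord target and its seed input are invented for the example, not taken from the workshop. It copies a known-good seed, flips a few random bits, and reports any input that makes the target fail in an unexpected way:

        import java.util.Arrays;
        import java.util.Random;

        public class MiniMutationFuzzer {

            // Hypothetical target: pretends to parse a record and is picky about its input.
            static void parseRecord(byte[] data) {
                if (data.length < 4 || data[0] != 'R') {
                    throw new IllegalArgumentException("bad header");
                }
            }

            public static void main(String[] args) {
                byte[] seed = "R:alice:42".getBytes();   // known-good "template"
                Random rnd = new Random(0);

                for (int i = 0; i < 10_000; i++) {
                    byte[] mutated = Arrays.copyOf(seed, seed.length);
                    // Flip a few random bits in the seed.
                    for (int j = 0; j < 1 + rnd.nextInt(3); j++) {
                        mutated[rnd.nextInt(mutated.length)] ^= (byte) (1 << rnd.nextInt(8));
                    }
                    try {
                        parseRecord(mutated);
                    } catch (IllegalArgumentException expected) {
                        // Cleanly rejecting malformed input is the desired behaviour.
                    } catch (RuntimeException crash) {
                        System.out.println("Potential bug with input " + Arrays.toString(mutated) + ": " + crash);
                    }
                }
            }
        }

    A real fuzzer would also track code coverage, persist interesting inputs and explore a far larger input space, but the distinction between an expected rejection and an unexpected crash is the core idea.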

    Read the article

  • Not Drowning, Being Saved By a Dog

    - by Aaron Lazenby
    Really, there's no dog in this story. Just a week without travel to get some actual work done. I had plans to blog ambitiously from Collaborate 10 (Wi-Fi was limited; the iPad is still untested), but it's a much busier week than your agenda suggests. Scheduling sessions is one thing: you can count on those chunks of time being lost to the universe. It's the bumping into people in the hall and dropping in on an impromptu lunch that really knocks things out of whack. Good thing too: I met with some great folks from

    Read the article

  • Marek's JAX-RS 2.0 content from Devoxx 2011

    - by alexismp
    Marek Potociar, one of the two co-spec leads for the upcoming JAX-RS 2.0, had a very well-attended session at Devoxx and wrote a blog post about it, detailing his conference experience (his first time at Devoxx) and running through the new features of the specification. A link to the slides is also included in his post. The work by the expert group seems very solid at this point, as you can read for yourself in detail in the recently published early draft document. You can follow the remaining work between now and the middle of next year on the specification project pages on java.net.
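
    For a flavor of what the specification adds, one of the headline additions in JAX-RS 2.0 is a standard, fluent client API. Here is a minimal sketch; the class and method names follow the eventual JAX-RS 2.0 API and may differ slightly from the early draft, and the endpoint URL is invented for the example:

        import javax.ws.rs.client.Client;
        import javax.ws.rs.client.ClientBuilder;
        import javax.ws.rs.core.MediaType;

        public class JaxRs2ClientSketch {
            public static void main(String[] args) {
                // Build a client, target a resource, and read the response entity as a String.
                Client client = ClientBuilder.newClient();
                String result = client.target("http://example.com/api/greetings")
                                      .request(MediaType.APPLICATION_JSON)
                                      .get(String.class);
                System.out.println(result);
                client.close();
            }
        }

    Before 2.0 there was no standard client side at all, so every implementation shipped its own proprietary client classes; the new API removes that lock-in.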

    Read the article

  • Collapsing Bookmarks

    - by Tim Dexter
    I said I would tackle documenting some of the new features in the 10.1.3.4.1 roll-up patch I mentioned last week. With the patch you can now set the default state of bookmarks (if you create them) in your PDF outputs. Whether your users prefer to see them all collapsed to the base level, or maybe collapsed to the second level to ease navigation: whatever they need. It's another opportunity for you to look like a star! You of course need to start with a table of contents; then add the convert|copy to bookmarks command. You can then add the new collapse command to set the appropriate level in the bookmarks. <?copy-to-bookmark:?> <?collapse-bookmark:show;2?> <<< Table of Contents >>> <?end convert-to-bookmark?> The command allows you to expand or collapse the bookmarks as you need. Of course you will know how many levels you will have in the final output document. The command takes the form: <?collapse-bookmark:show|hide;level int?> Some examples: <?collapse-bookmark:hide;1?> <?collapse-bookmark:hide;2?> <?collapse-bookmark:hide;3?> Sample template and data here. Don't forget you need that 10.1.3.4.1 roll-up!

    Read the article

  • jtreg update, December 2012

    - by jjg
    There is a new version of jtreg available. The primary new feature is support for tests that have been written for use with TestNG, the popular open source testing framework. TestNG is supported by a variety of tools and plugins, which means that it is now possible to develop tests for OpenJDK using those tools, while still retaining the ability to have the tests be part of the OpenJDK test suite, and run with a single test harness, jtreg. jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg. TestNG support jtreg supports both single TestNG tests, which can be freely intermixed with other types of jtreg tests, and groups of TestNG tests. A single TestNG test class can be compiled and run by providing a test description using the new action tag: @run testng classname The test will be executed by using org.testng.TestNG. No main method is required. A group of TestNG tests organized in a standard package hierarchy can also be compiled and run by jtreg. Any such group must be identified by specifying the root directory of the package hierarchy. You can either do this in the top level TEST.ROOT file, or in a TEST.properties file in any subdirectory enclosing the group of tests. In either case, add a line to the file of the form: TestNG.dirs = dir ... Directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. In particular, note that you can simply use "TestNG.dirs = ." in a TEST.properties file in the root directory of the test group's package hierarchy. No additional test descriptions are necessary, but test descriptions containing information tags, such as @bug, @summary, etc are permitted. All the Java source files in the group will be compiled if necessary, before any of the tests in the group are run. The selected tests within the group will be run, one at a time, using org.testng.TestNG. Library classes The specification for the @library tag has been extended so that any paths beginning with '/' will be evaluated relative to the root directory of the test suite. In addition, some bugs have been fixed that prevented sharing the compiled versions of library classes between tests in different directories. Note: This has uncovered some issues in tests that use a combination of @build and @library tags, such that some tests may fail unexpectedly with ClassNotFoundException. The workaround for now is to ensure that library classes are listed before the test classes in any @build tags. To specify one or more library directories for a group of TestNG tests, add a line of the following form to the TEST.properties file in the root directory of the group's package hierarchy: lib.dirs = dir ... As before, directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. The libraries will be available to all classes in the group; you cannot specify different libraries for different tests within the group. Coming soon ... From this point on, jtreg development will be using the new jtreg repository in the OpenJDK code-tools project. There is a new email alias jtreg-dev at openjdk.java.net for discussions about jtreg development. The existing alias jtreg-use at openjdk.java.net will continue to be available for questions about using jtreg. For more information ... 
An updated version of the jtreg Tag Language Specification is being prepared, and will be made available when it is ready. In the meantime, you can find more information about the support for TestNG by executing the following command: $ jtreg -onlinehelp TestNG For more information on TestNG itself, visit testng.org.
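
    As a quick illustration of the single-test support described above, here is what a minimal TestNG test might look like under jtreg; the class name and the assertion are invented for the example, and the jtreg test description in the leading comment uses the new @run testng action:

        /*
         * @test
         * @summary Minimal example of a TestNG test run under jtreg
         * @run testng HelloTestNG
         */

        import org.testng.Assert;
        import org.testng.annotations.Test;

        public class HelloTestNG {

            @Test
            public void stringConcatWorks() {
                // Executed by org.testng.TestNG; no main method is required.
                Assert.assertEquals("foo" + "bar", "foobar");
            }
        }

    A group of TestNG tests, as described above, needs no test description at all once its enclosing directory is listed via TestNG.dirs.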

    Read the article

  • JSF 2.2, Interceptors 1.2, and JPA 2.1 Replay: Java EE 7 Launch Webinar Technical Breakouts on YouTube

    - by arungupta
    As stated previously (here, here, and here), the on-demand replay of the Java EE 7 Launch Webinar is already available. You can watch the entire Strategy and Technical Keynote there, and all other Technical Breakout sessions as well. We are releasing the next set of Technical Breakout sessions on the GlassFishVideos YouTube channel as well. In this series, we are releasing JSF 2.2, Interceptors 1.2, and JPA 2.1. Here's the JSF 2.2 session: Here's the Interceptors 1.2 session: Here's the JPA 2.1 session: Enjoy watching them over the next few days before we release the next set of videos! And don't forget to download the Java EE 7 SDK and try the numerous bundled samples.

    Read the article

  • Twitter Storm VS. Google's MapReduce

    - by Edward J. Yoon
    IMO, the era of Information Retrieval is dead with the advent of SNS, and the question has changed from "How many backlinks does your site have?" to "How many people have clicked the URL you've shared on SNS?". So many people who are new to Big Data analytics often ask me "How can I analyze stream data with time-series pattern mining methods using Map/Reduce?", "How can I mine valuable insights using Map/Reduce?", "blah~ blah~ using Map/Reduce?". The answer is: not with Map/Reduce.

    Read the article

  • YouTube: Promotional AgroSense Movie

    - by Geertjan
    Here's a cool YouTube promotional movie on AgroSense created by Ordina in the Netherlands. AgroSense is an open source Java system for the precision agriculture industry, which won the IT Environment Award in the Netherlands last week: If your understanding of Dutch limits your appreciation of the movie above, here's a rough translation, together with the names of the speakers in the movie: Precision agriculture, an innovative form of agriculture in which local variations in soil, crop, and atmosphere are taken into account, is the high-tech sustainable agriculture of tomorrow. The use of fertilizer, water, and energy can in this way be significantly reduced. "If, ten or twenty years from now, we are to continue having our agricultural industry in good shape, and in a continuing state of health, we'll need to register and work with data because if we want to enable crops to provide higher value, we'll need to create higher levels of transparency throughout the agriculture chain." Lenus Hamster, farmer in Nieuwolda Groningen "Industry is becoming increasingly data intensive. By combining pragmatic usefulness with innovative sustainability, AgroSense offers the Netherlands the possibility to continue being a leading player in the agrofood sector." Art Lighthart, Architect at Ordina AgroSense offers an open source solution in which all services for precision agriculture are brought together. In 2012, co-operation is being sought with organizations to make AgroSense available to around 10,000 Dutch farmers in the arable crop sector. By the way, the last sentence above implies the NetBeans Platform will be used by around 10,000 Dutch farmers.

    Read the article

  • RPi and Java Embedded GPIO: Sensor Connections for Java Enabled Interface

    - by hinkmond
    Now we're ready to connect the hardware needed to make a static electricity sensor for the Raspberry Pi and use Java code to access it through a GPIO port. First, very carefully bend the NTE312 (or MPF-102) transistor "gate" pin (see the diagram on the back of the package or refer to the pin diagram on the Web). You can see it in the inset photo on the bottom left corner. I bent the leftmost pin of the NTE312 transistor as I held the flat part toward me. That is going to be your antenna. So, connect one of the jumper wires to the bent pin. I used the dark green jumper wire (looks almost black; coiled at the bottom) in the photo. Then push the other 2 pins of the transistor into your breadboard. Connect one of the pins to Pin # 1 (3.3V) on the GPIO header of your RPi. See the diagram if you need to glance back at it. In the photo, that's the orange jumper wire. And connect the final unconnected transistor pin to Pin # 22 (GPIO25) on the RPi header. That's the blue jumper wire in my photo. For reference, connect the LED anode (long pin on a common anode LED/short pin on a common cathode LED, check your LED pin diagram) to the same breadboard hole that is connecting to Pin # 22 (same row of holes where the blue wire is connected), and connect the other pin of the LED to GROUND (row of holes that connect to the black wire in the photo). Test by blowing up a balloon, rubbing it on your hair (or your co-worker's hair, if you are hair-challenged) to statically charge it, and bringing it near your antenna (green wire in the photo). The LED should light up when it's near and go off when you pull it away. If you need more static charge, find a co-worker with really long hair, or rub the balloon on a piece of silk (which is just as good but not as fun). Next blog post is where we do some Java coding to access this sensor on your RPi. Finally, back to software! Ha! Hinkmond
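
    The Java code comes in the next post, but as a rough sketch of the kind of polling it will involve, here is one way to read the pin from Java on the RPi through the Linux sysfs GPIO interface. The paths, the export step and the polling loop are assumptions for illustration, not taken from the post:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;

        public class StaticSensor {

            // Assumed sysfs path for the pin wired to GPIO25 above.
            private static final String GPIO = "/sys/class/gpio/gpio25";

            public static void main(String[] args) throws IOException, InterruptedException {
                // Export the pin and configure it as an input (normally done once, as root).
                try (FileWriter export = new FileWriter("/sys/class/gpio/export")) {
                    export.write("25");
                } catch (IOException alreadyExported) {
                    // Ignore if the pin has already been exported.
                }
                try (FileWriter direction = new FileWriter(GPIO + "/direction")) {
                    direction.write("in");
                }

                // Poll the pin and report changes, much like watching the LED in the wiring test.
                String last = "";
                while (true) {
                    try (BufferedReader value = new BufferedReader(new FileReader(GPIO + "/value"))) {
                        String now = value.readLine();
                        if (now != null && !now.equals(last)) {
                            System.out.println("GPIO25 = " + now + ("1".equals(now) ? "  (charge detected)" : ""));
                            last = now;
                        }
                    }
                    Thread.sleep(100);
                }
            }
        }

    Run it as root (or fix the sysfs permissions), bring the charged balloon near the antenna, and the printed value should flip along with the LED.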

    Read the article

  • Introducing the Documentation Workflows

    - by Owen Allen
    The how-to documents provide end-to-end examples of specific features, such as creating a new zone or discovering a new system. We are enhancing the individual how-tos with documents called Workflows. These workflows are each built around a procedural flowchart that shows a larger and more complex task. The workflow indicates which how-tos or other workflows you should follow to complete a more complex process, and gives you a flow for planning the execution of that process. Over the coming days I'll highlight each of these workflows and talk about the tasks that each one guides you through.

    Read the article

  • Single Sign-On for APEX Applications with Kerberos

    - by Carsten Czarski
    End users of APEX applications almost always work from a Windows PC, and very often they are logged into a Windows domain. So it seems natural to reuse that login for the APEX application and avoid having to sign in again. Unfortunately, APEX does not support such a scheme out of the box. With a few open-source components added, however, the requirement is easy to meet. Niels de Bruijn of MT AG has put together a document that describes the approach: "Single Sign-On für APEX Anwendungen mit Kerberos" - take a look.

    Read the article

  • Recently uploaded slides for the Upgrade Talks last week

    - by Mike Dietrich
    Welcome 2011 :-) Here you'll find the newest talks Carol, Roy and Brian delivered last week in several cities (you'll also find them in the DOWNLOAD SLIDES section on the right side of this blog): Upgrade Methods and Upgrade Planning: click here to download and use the keyword roy2011. 500+ Slides Upgrade Workshop Presentation: click here to download and use the keyword upgrade112. Hope you had a nice weekend and wonderful weather, too, as we had yesterday south of Munich. Click the pic for a higher resolution: Starnberg Lake - view towards the Alps

    Read the article

  • Can't get wireless to work (Fujitsu Siemens ESPRIMO Mobile u9200), Ubuntu 12.04

    - by Martin Oscarsson
    I can't get wireless to work on my computer. I have recently installed 12.04. Computer name: (Fujitsu siemens ESPRIMO Mobile u9200) Hardware button starts bluetooth - so can't start that way. Have searched the Internet for help but can't find any on my specific problem! State: connected (global) - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: ath5k State: disconnected Default: no *-network beskrivning: Trådlöst gränssnitt produkt: AR242x / AR542x Wireless Network Adapter (PCI-Express) tillverkare: Atheros Communications Inc. *-network beskrivning: Ethernet interface produkt: 88E8055 PCI-E Gigabit Ethernet Controller tillverkare: Marvell Technology Group Ltd. HERE IS ALL THE NETWORK INFO: ellika@ellikas:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1e:33:00:96:9a inet addr:192.168.1.26 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:33ff:fe00:969a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:13778 errors:0 dropped:0 overruns:0 frame:0 TX packets:9510 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:14022669 (14.0 MB) TX bytes:1001621 (1.0 MB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1542 errors:0 dropped:0 overruns:0 frame:0 TX packets:1542 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:125040 (125.0 KB) TX bytes:125040 (125.0 KB) ellika@ellikas:~$ sudo ifconfig [sudo] password for ellika: eth0 Link encap:Ethernet HWaddr 00:1e:33:00:96:9a inet addr:192.168.1.26 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21e:33ff:fe00:969a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:13801 errors:0 dropped:0 overruns:0 frame:0 TX packets:9528 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:14024965 (14.0 MB) TX bytes:1002836 (1.0 MB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1542 errors:0 dropped:0 overruns:0 frame:0 TX packets:1542 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:125040 (125.0 KB) TX bytes:125040 (125.0 KB) ellika@ellikas:~$ sudo ifconfig wlan0 up SIOCSIFFLAGS: Operationen inte möjlig p.g.a. RF-kill ellika@ellikas:~$ phy0 Wireless LAN phy0: command not found ellika@ellikas:~$ rfkill Usage: rfkill [options] command Options: --version show version (0.4-1ubuntu2 (Ubuntu)) Commands: help event list [IDENTIFIER] block IDENTIFIER unblock IDENTIFIER where IDENTIFIER is the index no. of an rfkill switch or one of: <idx> all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm ellika@ellikas:~$ sudo rf-kill unblock all sudo: rf-kill: kommandot hittades inte ellika@ellikas:~$ sudo rfkill unblock all ellika@ellikas:~$ sedan sudo ifconfig wlan0 sedan: command not found ellika@ellikas:~$ sudo ifconfig wlan0 wlan0 Link encap:Ethernet HWaddr 00:22:5f:3f:63:76 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ellika@ellikas:~$ ^C ellika@ellikas:~$ ^C ellika@ellikas:~$ sudo rfkill unblock all ellika@ellikas:~$ sudo ifconfig wlan0 up SIOCSIFFLAGS: Operationen inte möjlig p.g.a. 
RF-kill ellika@ellikas:~$ sudo rfkill unblock all ellika@ellikas:~$ sudo rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: yes ellika@ellikas:~$ sudo rfkill unblock all ellika@ellikas:~$ echo -e "sudo lshw --class network:\n\n$(sudo lshw -c network)\n\nlspci -nnn | grep Ethernet:\n\n$(lspci -nnn | grep Ethernet)\n\nlsusb:\n\n$(lsusb)\n\niwlist wlan0 scanning:\n\n$(iwlist wlan0 scanning)\n\nrfkill list:\n\n$(rfkill list)\n\nping -c 5 google.com:\n\n$(ping -c 5 google.com)\n\nhost google.com 8.8.8.8:\n\n$(host google.com 8.8.8.8)\n\nlsb_release -a:\n\n$(lsb_release -a)\n\nuname -a:\n\n$(uname -a)" ^[[C^[[C^[[C^[[C^[[C^[[B wlan0 Failed to read scan data : Network is down No LSB modules are available. sudo lshw --class network: *-network beskrivning: Ethernet interface produkt: 88E8055 PCI-E Gigabit Ethernet Controller tillverkare: Marvell Technology Group Ltd. physical id: 0 bus info: pci@0000:04:00.0 logiskt namn: eth0 version: 14 serienummer: 00:1e:33:00:96:9a storlek: 100Mbit/s kapacitet: 1Gbit/s bredd: 64 bits klocka: 33MHz förmågor: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation konfiguration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A ip=192.168.1.26 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resurser: irq:44 memory:f8000000-f8003fff ioport:3000(storlek=256) memory:f2000000-f201ffff *-network INAKTIVERAD beskrivning: Trådlöst gränssnitt produkt: AR242x / AR542x Wireless Network Adapter (PCI-Express) tillverkare: Atheros Communications Inc. physical id: 0 bus info: pci@0000:06:00.0 logiskt namn: wlan0 version: 04 serienummer: 00:22:5f:3f:63:76 bredd: 64 bits klocka: 33MHz förmågor: pm msi pciexpress msix bus_master cap_list ethernet physical wireless konfiguration: broadcast=yes driver=ath5k driverversion=3.2.0-30-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bg resurser: irq:18 memory:fa000000-fa00ffff lspci -nnn | grep Ethernet: 04:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8055 PCI-E Gigabit Ethernet Controller [11ab:4363] (rev 14) 06:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) [168c:001c] (rev 04) lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 05e3:0715 Genesys Logic, Inc. USB 2.0 microSD Reader Bus 001 Device 003: ID 05c8:0103 Cheng Uei Precision Industry Co., Ltd (Foxlink) FO13FF-65 PC-CAM iwlist wlan0 scanning: rfkill list: 0: phy0: Wireless LAN Soft blocked: no Hard blocked: yes ping -c 5 google.com: PING google.com (173.194.32.34) 56(84) bytes of data. 
64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=1 ttl=55 time=10.6 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=2 ttl=55 time=10.5 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=3 ttl=55 time=10.4 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=4 ttl=55 time=10.4 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=5 ttl=55 time=10.4 ms --- google.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 10.451/10.517/10.631/0.062 ms host google.com 8.8.8.8: Using domain server: Name: 8.8.8.8 Address: 8.8.8.8#53 Aliases: google.com has address 173.194.32.36 google.com has address 173.194.32.38 google.com has address 173.194.32.41 google.com has address 173.194.32.37 google.com has address 173.194.32.35 google.com has address 173.194.32.39 google.com has address 173.194.32.33 google.com has address 173.194.32.34 google.com has address 173.194.32.46 google.com has address 173.194.32.32 google.com has address 173.194.32.40 google.com has IPv6 address 2a00:1450:400f:801::100e google.com mail is handled by 40 alt3.aspmx.l.google.com. google.com mail is handled by 20 alt1.aspmx.l.google.com. google.com mail is handled by 30 alt2.aspmx.l.google.com. google.com mail is handled by 50 alt4.aspmx.l.google.com. google.com mail is handled by 10 aspmx.l.google.com. lsb_release -a: Distributor ID: Ubuntu Description: Ubuntu 12.04.1 LTS Release: 12.04 Codename: precise uname -a: Linux ellikas 3.2.0-30-generic-pae #48-Ubuntu SMP Fri Aug 24 17:14:09 UTC 2012 i686 i686 i386 GNU/Linux ellika@ellikas:~$ ellika@ellikas:~$ clear ellika@ellikas:~$ echo -e "sudo lshw --class network:\n\n$(sudo lshw -c network)\n\nlspci -nnn | grep Ethernet:\n\n$(lspci -nnn | grep Ethernet)\n\nlsusb:\n\n$(lsusb)\n\niwlist wlan0 scanning:\n\n$(iwlist wlan0 scanning)\n\nrfkill list:\n\n$(rfkill list)\n\nping -c 5 google.com:\n\n$(ping -c 5 google.com)\n\nhost google.com 8.8.8.8:\n\n$(host google.com 8.8.8.8)\n\nlsb_release -a:\n\n$(lsb_release -a)\n\nuname -a:\n\n$(uname -a)" wlan0 Failed to read scan data : Network is down No LSB modules are available. sudo lshw --class network: *-network beskrivning: Ethernet interface produkt: 88E8055 PCI-E Gigabit Ethernet Controller tillverkare: Marvell Technology Group Ltd. physical id: 0 bus info: pci@0000:04:00.0 logiskt namn: eth0 version: 14 serienummer: 00:1e:33:00:96:9a storlek: 100Mbit/s kapacitet: 1Gbit/s bredd: 64 bits klocka: 33MHz förmågor: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation konfiguration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A ip=192.168.1.26 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resurser: irq:44 memory:f8000000-f8003fff ioport:3000(storlek=256) memory:f2000000-f201ffff *-network INAKTIVERAD beskrivning: Trådlöst gränssnitt produkt: AR242x / AR542x Wireless Network Adapter (PCI-Express) tillverkare: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:06:00.0 logiskt namn: wlan0 version: 04 serienummer: 00:22:5f:3f:63:76 bredd: 64 bits klocka: 33MHz förmågor: pm msi pciexpress msix bus_master cap_list ethernet physical wireless konfiguration: broadcast=yes driver=ath5k driverversion=3.2.0-30-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bg resurser: irq:18 memory:fa000000-fa00ffff lspci -nnn | grep Ethernet: 04:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8055 PCI-E Gigabit Ethernet Controller [11ab:4363] (rev 14) 06:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR242x / AR542x Wireless Network Adapter (PCI-Express) [168c:001c] (rev 04) lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 002: ID 05e3:0715 Genesys Logic, Inc. USB 2.0 microSD Reader Bus 001 Device 003: ID 05c8:0103 Cheng Uei Precision Industry Co., Ltd (Foxlink) FO13FF-65 PC-CAM iwlist wlan0 scanning: rfkill list: 0: phy0: Wireless LAN Soft blocked: no Hard blocked: yes ping -c 5 google.com: PING google.com (173.194.32.34) 56(84) bytes of data. 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=1 ttl=55 time=10.6 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=2 ttl=55 time=10.5 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=3 ttl=55 time=10.4 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=4 ttl=55 time=10.4 ms 64 bytes from arn06s02-in-f2.1e100.net (173.194.32.34): icmp_req=5 ttl=55 time=10.5 ms --- google.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 10.476/10.522/10.602/0.045 ms host google.com 8.8.8.8: Using domain server: Name: 8.8.8.8 Address: 8.8.8.8#53 Aliases: google.com has address 173.194.32.36 google.com has address 173.194.32.38 google.com has address 173.194.32.41 google.com has address 173.194.32.37 google.com has address 173.194.32.35 google.com has address 173.194.32.39 google.com has address 173.194.32.33 google.com has address 173.194.32.34 google.com has address 173.194.32.46 google.com has address 173.194.32.32 google.com has address 173.194.32.40 google.com has IPv6 address 2a00:1450:400f:801::100e google.com mail is handled by 40 alt3.aspmx.l.google.com. google.com mail is handled by 20 alt1.aspmx.l.google.com. google.com mail is handled by 30 alt2.aspmx.l.google.com. google.com mail is handled by 50 alt4.aspmx.l.google.com. google.com mail is handled by 10 aspmx.l.google.com. lsb_release -a: Distributor ID: Ubuntu Description: Ubuntu 12.04.1 LTS Release: 12.04 Codename: precise uname -a: Linux ellikas 3.2.0-30-generic-pae #48-Ubuntu SMP Fri Aug 24 17:14:09 UTC 2012 i686 i686 i386 GNU/Linux

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    Where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded and then delegates to its parent, and so on till the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case it is likely that the classloader would need a different means to protect itself rather than a lock per class.
    Ultimately when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile the second attempt will block trying to acquire the lock. Once the class is loaded the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading and they both continue successfully. Consider what happens if no lock is acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!) There are a number of obvious things we can try to solve this problem and they basically take three forms: (1) remove the need for locking - this might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent, simply returning an existing Class rather than triggering an exception; (2) increase the coarseness of locking to reduce the number of lock objects and/or maps, for example by using a single shared lockMap instead of a per-loader lockMap; (3) reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically). There are pros and cons to each of these approaches. Unfortunately a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
    Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive (this is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive). Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In which case let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics. Proposal: Fully Concurrent ClassLoaders The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass and the VM uses the "parallel define class" mechanism. For a fully concurrent loader getClassLoadingLock() can return null (or perhaps not - it doesn't matter as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications. Preliminary webrevs: http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/ http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/ Please direct all comments to the mailing list [email protected].
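
    For a rough feel of what loading without per-name locks looks like at the application level, here is a sketch of a custom parallel-capable loader written in that spirit. This is an illustration only, not code from the webrevs; since there is no public defineClassIfNotPresent today, the race between two defining threads is resolved by catching the LinkageError and falling back to findLoadedClass:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class NoLockClassLoader extends ClassLoader {

            static {
                registerAsParallelCapable();
            }

            public NoLockClassLoader(ClassLoader parent) {
                super(parent); // expects a non-null parent, e.g. the system class loader
            }

            @Override
            protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
                // No synchronized (getClassLoadingLock(name)) block here.
                Class<?> c = findLoadedClass(name);
                if (c == null) {
                    try {
                        c = getParent().loadClass(name);
                    } catch (ClassNotFoundException notInParent) {
                        c = findClass(name);
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }

            @Override
            protected Class<?> findClass(String name) throws ClassNotFoundException {
                byte[] bytes = readClassBytes(name);
                try {
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (LinkageError raced) {
                    // Another thread defined this class first; use its result.
                    Class<?> winner = findLoadedClass(name);
                    if (winner != null) {
                        return winner;
                    }
                    throw raced;
                }
            }

            private byte[] readClassBytes(String name) throws ClassNotFoundException {
                String resource = name.replace('.', '/') + ".class";
                try (InputStream in = getResourceAsStream(resource)) {
                    if (in == null) {
                        throw new ClassNotFoundException(name);
                    }
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n);
                    }
                    return out.toByteArray();
                } catch (IOException e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
        }

    Whether such a loader is strictly legal today is exactly the specification question raised above, since getClassLoadingLock still hands out lock objects that this code never uses.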

    Read the article

  • The Sound of Two Toilets Flushing: Constructive Criticism for Virgin Atlantic Complaints Department

    - by Geertjan
    I recently had the experience of flying from London to Johannesburg and back with Virgin Atlantic. The good news was that it was the cheapest flight available and that the take off and landing were absolutely perfect. Hence I really have no reason to complain. Instead, I'd like to offer some constructive criticism which hopefully Richard Branson will find sometime while googling his name. Or maybe someone from the Virgin Atlantic Complaints Department will find it, whatever, just want to put this information out there. Arrangement of restroom facilities. Maybe next time you design an airplane, consider not putting your toilets at a right angle right next to your rows of seats. Being able to reach, without even needing to stretch your arm, from your seat to close, yet again, a toilet door that someone, someone obviously sitting very far from the toilets, carelessly forgot to close is not an indicator of quality interior design. Have you noticed how all other airplanes have their toilets in a cubicle separated from the rows of seats? On those airplanes, people sitting in the seats near the toilets are not constantly being woken up throughout the night whenever someone enters/exits the toilet, whenever the light in the toilet is suddenly switched on, and whenever one of the toilets flushes. Bonus points for Virgin Atlantic passengers in the seats adjoining the toilets is when multiple toilets are flushed simultaneously and multiple passengers enter/exit them at the same time, a bit like an unasked for low budget musical of suddenly illuminated grumpy people in crumpled clothes. What joy that brings at 3 AM is hard to describe. Seats with extra leg room. You know how other airplanes have the seats with the extra leg room? You know what those seats tend to have? Extra leg room. It's really interesting how Virgin Atlantic's seats with extra leg room actually have no extra leg room at all. It should have been a give away, the fact that these special seats are found in the same rows as the standard seats, rather than on the cusp of real glory which is where most airlines put their extra leg room seats, with the only actual difference being that they have a slightly different color. Had you called them "seats with a different color" (i.e., almost not quite green, rather than something vaguely hinting at blue), at least I'd have known what I was getting. Picture the joy at 3 AM, rudely awakened from nightmarish slumber, partly grateful to have been released from a grayish dream of faceless zombies resembling one or two of those in a recent toilet line, by multiple adjoining toilets flushing simultaneously, while you're sitting in a seat with extra leg room that has exactly as much leg room as the seats in neighboring rows. You then have a choice of things to be sincerely annoyed about. Food from the '80's. In the '80's, airplane food came in soggy containers and even breakfast, the most important meal of the day, was a sad heap of vaguely gray colors. The culinary highlight tended to be a squashed tomato, which must have been mashed to a pulp with a brick prior to being regurgitated by a small furry animal, and there was also always a piece of immensely horrid pumpkin, as well as a slice of spongy something you'd never seen before. Sausages and mash at 6 AM on an airplane was always a heavy lump of horribleness. Thankfully, all airlines throughout the world changed from this puke inducing strategy around 1987 sometime. Not Virgin Atlantic, of course. 
The fatty sausages and mash are still there, bringing you flashbacks to Duran Duran, which is what you were listening to (on your walkman) the last time you saw it in an airplane. Even the golden oldie "squashed tomato attached by slime to three wet peas" is on the menu. How wonderful to have all this in a cramped seat with a long row of early morning bleariness lined up for the toilets, right at your side, bumping into your elbow, groggily, one by one, one after another, more and more, fumble-open-door-silence-flush-fumble-open-door, and on and on, while you tentatively push your fork through a soggy pile of colorless mush, fighting the urge to throw up on the stinky socks of whatever nightmarish zombie is bumping into your elbow at the time. But, then again, the plane landed without a hitch, in fact, extremely smoothly, so I'm certainly not blaming the pilots.

    Read the article

  • Current SPARC Architectures

    - by Darryl Gove
    Different generations of SPARC processors implement different architectures. The architecture that the compiler targets is controlled implicitly by the -xtarget flag and explicitly by the -arch flag. If an application targets a recent architecture, then the compiler gets to play with all the instructions that the new architecture provides. The downside is that the application won't work on older processors that don't have the new instructions. So for developers there is a trade-off between performance and portability. The way we have solved this in the compiler is to assume a "generic" architecture, and we've made this the default behaviour of the compiler. The only flag that doesn't make this assumption is -fast, which tells the compiler to assume that the build machine is also the deployment machine - so the compiler can use all the instructions that the build machine provides. The -xtarget=generic flag tells the compiler explicitly to use this generic model. We work hard on making generic code work well across all processors, so in most cases this is a very good choice. It is also of interest to know which processors support the various architectures. The following Venn diagram attempts to show this. A textual description is as follows: The T1 and T2 processors, in addition to most other SPARC processors shipped in the last 10+ years, supported V9b, or sparcvis2. The SPARC64 processors from Fujitsu, used in the M-series machines, added support for the floating point multiply-accumulate instruction in the sparcfmaf architecture. Support for this instruction also appeared in the T3 - this is called sparcvis3. Later SPARC64 processors added the integer multiply-accumulate instruction; this architecture is sparcima. Finally, the T4 includes support for both the integer and floating point multiply-accumulate instructions in the sparc4 architecture. So the conclusion should be: floating point multiply-accumulate is supported in both the T-series and M-series machines, so it should be a relatively safe bet to start using it. The T4 is a very good machine to deploy to because it supports all the current instruction sets.

    Read the article
