Search Results


  • Antenna Aligner Part 4: Role'ing in the deep

    - by Chris George
    Since last time I've been trying to sort out the general workflow of the app. It's fundamentally not hard: there is a list of transmitters, you select a transmitter and it shows the compass view. Having done quite a bit of ajax/asp.net/html in the past, I immediately started off by creating two divs within my 'page', one for the list, one for the compass, and then used the onClick event in the list to switch the display attribute on the divs. This seemed to work, but it did lead to some dodgy transitional redrawing artefacts which I was not happy with. So after some Googling I realised I was doing it all wrong!

    jQuery Mobile has the concept of giving an object in HTML a data-role. By giving a div the attribute data-role="page" it is then treated as a separate page on the mobile device. Within the code, this is referenced like an HTML anchor in the form #mypage. Using this system, page transitions such as fade or slide are automatically applied, which adds to the whole authenticity of the app! Here is a simple example:

        <a href="#compasspage">compass</a>

        <div data-role="page" id="compasspage" data-add-back-btn="true">

    But I don't want just a static link, I want to dynamically create my list, and get each list element to switch to the compass page with the right information. So here is the jQuery that I used to dynamically inject new <li> rows into the <ul> block:

        $('ul').append($('<li/>', {    // here appending `<li>`
            'data-role': "list-divider"
        }).append($('<a/>', {          // here appending `<a>` into `<li>`
            'href': '#compasspage',
            'data-transition': 'none',
            'onclick': 'selectTx(' + i + ')',
            'html': buttonHtml
        })));
        $('ul').listview('refresh');

    This is called within a for loop so the first 5 appropriate transmitters are used. There are several things of interest to note here. Firstly, I could not find a more elegant way to tell the target page which transmitter I've clicked on, so I have used the onclick event as well as the href attribute. The onclick event fires 'selectTx', which simply sets a global member variable to the specific index number I've clicked on. Yes, it's not nice, but it works. Secondly, the data-transition attribute is set to 'none'. I wanted the transition between the pages to be a whooshy slidey effect. However, this worked going to the compass page, but returning to the list page gave some undesirable visual artefacts (flickering, redrawing etc.), so I decided to remove the transitions altogether, which was a shame. Thirdly, rather than embedding loads of html into the append command, I moved this out into a variable 'buttonHtml'. Doing this really tidied up my code. Until next time!
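    For anyone who wants to see those pieces joined up, here is a rough, self-contained sketch. The transmitter array, the '#transmitterlist' id and the contents of buttonHtml are invented stand-ins for the app's real data, not code from the project:

        // Illustrative sketch only - the data and ids are made up.
        var transmitters = [];        // e.g. { name: "Crystal Palace", bearing: 187 }, ...
        var selectedTx = -1;          // global set by the onclick handler

        function selectTx(i) {        // remembers which row was tapped
            selectedTx = i;
        }

        function buildList() {
            var list = $('#transmitterlist');              // assumed <ul> id
            var count = Math.min(5, transmitters.length);  // first 5 transmitters
            for (var i = 0; i < count; i++) {
                var buttonHtml = transmitters[i].name;     // keep markup out of append()
                list.append($('<li/>', {
                    'data-role': 'list-divider'
                }).append($('<a/>', {
                    'href': '#compasspage',
                    'data-transition': 'none',
                    'onclick': 'selectTx(' + i + ')',
                    'html': buttonHtml
                })));
            }
            list.listview('refresh');
        }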

    Read the article

  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.

    Fusion Concepts

    Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications with an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.

    New Learning Resources

    Our YouTube channel continued to expand with more 'how to' videos on using page composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. Also the Oracle Learning Library is now well established as a central resource for knowledge, now with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Application Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).

    Out And About

    October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly there was an Applications Transformation stream of sessions and content that included Fusion Applications with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write up. Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog. It's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage" offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killan Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.
The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu on the top right of their homepage to see more. Last month of particular note was an article on customizing the timeout pop-up message that shows to inactive users, providing design-time insight and easy-to-follow steps. Finally if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications then you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three “buckets”:

    Concepts

    In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do)? How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don’t always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get it, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation – I’ll explain what I mean in a moment.

    Reference

    The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout on a panel, things like that. Think Dictionary. The only reference I trust for this information is Books Online – presentations are fine, but we’re talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference materials in their slides. Even if you give me the presentation material later, it’s not really a searchable, readable medium.

    How To

    A how-to for me is an example, or even better, a tutorial about an example. Whatever it is, it shows me a practical use for the concepts and of course involves the syntax. The important thing here is that you need to be able to separate out the example the person is showing you from the stuff you need to know. I can’t tell you how many times folks have told me, “well, sure, if yours is red then that works. But mine is blue.” And I have to explain, “then use ‘blue’ for the search word here.” You get the idea. No one will do your work for you – the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a How To works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

    The order here isn’t actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only primary thing I try to enforce is to read something from each of them. It’s dangerous to base your work on any single example, reference or concept.

    Read the article

  • Must go through Windows Boot Loader to get to Grub

    - by Zach
    I just installed a fresh copy of Precise alongside Windows 7. I have two separate 750GB hard drives; /dev/sda holds the Windows partitions and /dev/sdb holds the Ubuntu partitions. Other than that, these are fresh installs of both Windows 7 and Ubuntu 12.04. Whenever I boot, Grub doesn't load; instead it goes to a black screen with a single blinking (horizontal bar) cursor in the top right corner. However, if I boot, hit Escape right as the BIOS/POST screen finishes up, see the Windows Boot Loader, and hit Escape again to go back to the BIOS screen, then after the BIOS screen Grub shows up and everything functions normally; I can boot into Ubuntu or Win7. I don't want to have to do the Escape, Escape, Wait, Boot trick every time. I have no idea what would be wrong or what information I could give you guys to help diagnose. I have run a sudo update-grub and it found everything normally. I tried adding the nomodeset flag to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, which searching around made me think might work. Thoughts on what I could do to fix this?

    EDIT: I've tried changing the boot order so that both drives in the BIOS (both are labeled as "Internal HDD") have had a try booting first. I think the problem may be that every time I boot, the BIOS boot order is different... and I have to reset it. It seems to not be stable... but I'm not sure how to go about fixing that either. The machine has both traditional BIOS and UEFI. It came standard in "Legacy" mode, so it is currently set to boot through Legacy mode. I've reinstalled Ubuntu now, and now if I hit Escape at the end of the BIOS/POST startup screen, it takes me to the GRUB menu. Otherwise it automatically loads Windows. It seems like GRUB is now the acting bootloader; it just doesn't automatically start unless I ask it to open a bootloader. On my other machines, it has always automatically started at the end of BIOS/POST.

    EDIT2: Using gparted, I just looked at my partitions, and it would seem that my linux-swap partition is currently flagged as the boot partition for my Ubuntu install. I currently only have 2 partitions: one "ext4" with a mount point of "/" and no flag, and the "linux-swap" with no mount point and the flag "boot". If I change the boot flag to be on "/", it does not reliably solve the problem. After 10 boots:

    - 2 booted successfully to GRUB
    - 5 booted directly to Windows 7
    - 3 booted to the black screen with the cursor and hung there

    Further research makes me think this is an issue of the BIOS not reliably booting hard drives in the same order or not finding both hard drives. If I ask it to create a "boot menu", sometimes it has 2 entries for "Internal HDD", sometimes 1. Also the list it creates changes order every time I bring it up, so it is not following a consistent boot sequence. Will report back if this is not an issue with GRUB.
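    For anyone trying the same nomodeset experiment, this is roughly what the change looks like. The variable name comes from the question above; the other option values are a generic sketch, and the /dev/sdb target simply matches the drive layout described here:

        # /etc/default/grub (excerpt) - add nomodeset to the default kernel options
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

        # Regenerate the configuration and, if desired, reinstall GRUB to the Ubuntu drive
        sudo update-grub
        sudo grub-install /dev/sdb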

    Read the article

  • Schizophrenic Ubuntu 12.10-12.04: Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both in the network utility and on the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png). I tried many different suggestions on many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the ath9k driver... The terminal shows no hw or sw lock for the Atheros card; nevertheless, it is still disabled. Nothing worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details:

    myuser@adri1:~$ sudo lshw -C network
        *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff
        *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff
        *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

    myuser@adri1:~$ sudo rfkill list all
        0: hci0: Bluetooth Soft blocked: no Hard blocked: no
        1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes
        2: phy0: Wireless LAN Soft blocked: no Hard blocked: no

    myuser@adri1:~$ dmesg | grep wlan0
        [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready

    myuser@adri1:~$ dmesg | egrep 'ath|firm'
        [ 14.617562] ath: EEPROM regdomain: 0x30
        [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map
        [ 14.617572] ath: Country alpha2 being used: AM
        [ 14.617575] ath: Regpair used: 0x30
        [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control'
        [ 14.639410] Registered led device: ath9k-phy0

    myuser@adri1:~$ dmesg | grep wlan1
        [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready

    myuser@adri1:~$ lspci -nn | grep 'Atheros'
        03:02.0 Network controller [0280]: Atheros Communications Inc. AR922X Wireless Network Adapter [168c:0029] (rev 01)

    myuser@adri1:~$ sudo ifconfig
        eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB)
        lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB)

    myuser@adri1:~$ sudo iwconfig
        wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on
        lo no wireless extensions.
        eth0 no wireless extensions.
        wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off

    myuser@adri1:~$ lsmod | grep "ath9k"
        ath9k 116549 0
        mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k
        ath9k_common 13783 1 ath9k
        ath9k_hw 376155 2 ath9k,ath9k_common
        ath 19187 3 ath9k,ath9k_common,ath9k_hw
        cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath

    myuser@adri1:~$ iwlist scan
        wlan0 Failed to read scan data : Network is down
        lo Interface doesn't support scanning.
        eth0 Interface doesn't support scanning.
        wlan1 Failed to read scan data : Network is down

    myuser@adri1:~$ lsb_release -d
        Description: Ubuntu 12.10

    myuser@adri1:~$ uname -mr
        3.5.0-17-generic i686

    ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png)

    Any help much appreciated... Thanks, Philippe

    Read the article

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain whether the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve.

    Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data. You want to know what percentage of users have used the 'Export as XML' button? No problem. How popular is v5.3 compared to v5.1? There are graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software.

    My bug is exposed as minor

    While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different than most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course. I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's only pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provided evidence that my bug was a minor one.

    On to more serious things

    Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. If I put myself in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important. Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

    Read the article

  • Why is there no service-oriented language?

    - by Wolfgang
    Edit: To avoid further confusion: I am not talking about web services and such. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers and how the imperative programming paradigm is extended.

    Original: In the imperative programming field, we saw two paradigms in the past 20 years (or more): object-oriented (OO) and service-oriented (SO), also known as component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services). However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatibles, C# and all other .NET-compatibles, Python etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ... These SO frameworks clearly suffer from the missing native language support of their concepts.

    They start using OO classes to represent services and records. This leads to designs where there is a clear distinction between classes that have methods only (services) and those that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strictly, but in general programmers are advised to make classes play only one of the two roles.

    They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even embedded DSLs like in Guice. This is especially needed because, among other things, the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities, but for SO there is, because services don't instantiate or configure other services.

    They establish an inner-platform effect on top of OO (early EJB, CORBA) where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service, and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler which would do it for the programmer. This is just like some people did it in C for OO when there was no C++. You just pass the record which holds the data of the object as a first parameter to the procedure which is the method. In an OO language this parameter is implicit and the compiler produces all the code that we need for virtual functions etc. For SO, this is clearly missing.

    Especially the newer frameworks extensively use AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness but avoids the boilerplate code described in the previous point. Some frameworks use code generation to produce the boilerplate code. Configuration files in XML or annotations in OO code are the source of information for this.

    Not all of the phenomena that I mentioned above can be attributed to SO, but I hope it clearly shows that there is a need for an SO language. Since this paradigm is so popular: why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
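    To make the "services and records simulated with classes" point concrete, here is a small Java sketch of the split described above; the class and method names are invented for illustration, and the wiring comment reflects the external-configuration point rather than any specific framework:

        // A "record": data only, no behaviour.
        public class Customer {
            public String name;
            public String email;
        }

        // The collaborating service's contract, kept separate from any data.
        public interface MailService {
            void sendWelcome(String email);
        }

        // A "service": behaviour only, no domain state of its own.
        public class CustomerService {
            // Nothing in the language says how this field gets filled in; a container
            // (Spring, EJB, Guice, ...) injects it from external configuration or
            // annotations - exactly the boilerplate the post complains about.
            private MailService mailService;

            public void register(Customer customer) {
                mailService.sendWelcome(customer.email);
            }
        }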

    Read the article

  • Working and Studying in Oracle, how I balance my time....

    - by anca.rosu
    Hi, my name is Laura. I am working as an Intern within Executive Administration at Oracle Denmark, whilst studying Information Management at Copenhagen Business School. I recently handed in a paper on Information Systems which gave me exposure to Oracle. After completing this paper I came across a job posting on my University’s intranet site and I applied directly online. When I submitted my application for the job offer, I wondered about what language I should use for the application form, as the job posting was in Danish, but the contact person and number looked Irish. I therefore chose English. Later that same day, Fiona, one of Oracle’s Graduates Recruitment Consultants based in Ireland, contacted me. This shows how global Oracle truly is. I went for my face-to-face interview at Oracle Denmark with Charlotte, one of the team managers. I spent 5 minutes waiting in the lobby, just looking around, thinking to myself, I really want to work here. The atmosphere seemed so pleasant with a relaxed approach between colleagues, employees and guests. The interview took about an hour, but we touched on a lot of different subjects. The profile I got of Oracle was that this is a place where you are encouraged to think for yourself, and you are given the freedom to use your ideas. Later that evening, Fiona called and offered me the job. I was very happy. At Oracle Denmark we have 4 different zones: a Quiet Zone, a Project Zone, a Dialogue Zone and a Call Zone. Every day when you arrive you consider what will be the most productive for the day’s task, and you take your toolbox and go find a desk in the zone you have decided on. It is therefore very unusual to be next to the same person two days in a row. At Oracle, people are located all over the world, and everybody has team members, colleagues or leaders in other countries, or even other time zones. Initially, I was worried about how I would adapt to this approach, but I soon realized I had nothing to worry about and now I appreciate working this way. My colleagues have been very supportive and they have openly welcomed me into my new role. I typically work two days a week and have three days at University. During exam periods, I have the flexibility to work fewer hours and focus on the exams, in return for putting in more hours at work when needed. The first time I had to ask for time off before handing in a paper, my boss looked at me and said, “Of course! Your education is the most important!” I hope that by sharing my experiences with you, I can inspire or encourage you to consider Oracle as a potential employer, where you can grow both professionally and personally. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    One-Stop Shop for Oracle Webcasts
    Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    OAM/OVD JVM Tuning
    Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads
    This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    Architected Systems: "If you don't develop an architecture, you will get one anyway..."
    "Can you build a system without taking care of architecture?," asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    Backup and Recovery of an Exalogic vServer via rsync
    "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    This Week on the OTN Architect Community Home Page
    Make time to check out this week's features on the OTN Solution Architect Homepage, including:
    - SOA Practitioner Guide: Identifying and Discovering Services
    - Technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    - Podcast: Are You Future Proof?

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    OIM 11g : Multi-thread approach for writing custom scheduled job | Saravanan V S
    Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    How to Create Virtual Directory in Weblogic Server | Zeeshan Baig
    Oracle ACE Zeeshan Baig shows you how in six easy steps.

    SOA Galore: New Books for Technical Eyes Only
    Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
Thought for the Day "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Simple OpenGL program major slow down at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is I face a big slowdown in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage when in full-screen and it shows GPU load of nearly 100% and Memory Controller load of ~85%. When at 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even when forward rendering, the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M btw). Below are my shaders.

    First Pass

    #VS

        #version 330 core
        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;
        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;
        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core
        uniform sampler2D inSampler;
        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;
        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second Pass

    #VS

        #version 330 core
        uniform float scale;
        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core
        struct Light{
            vec3 direction;
        };
        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;
        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);
            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));
            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);
                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);
                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);
            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He did show how to use F# scripting to browse through open databases, OData web services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in a usual strongly typed object model. No guessing what a property of an object is called. Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. Let's use e.g. a free database; it would then be

        let data = DbProvider(http://.....);

    with data being the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed, which makes this happen. The same can be achieved, of course, with code generators that take as input the schema of the input data (OData web service, database, SharePoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work but has the downside that if the schema of your data source is huge, you will quickly run against a wall with traditional code generators, since the generated “deserialization” assembly could easily become several hundred MB.

    *** The following part contains my guesses at how this exactly works, based on asking Don two questions ***

    Q: Can I use Type Providers within C#?
    D: No.

    Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers?
    D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use.

    The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual url in a config file. Ok, so this explains why it does work at all. But in the demo we did see full Intellisense support down to the deepest object level. It looks like, if you navigate deeper into the object hierarchy, the Type Provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static only if your definition of a static type is that its properties show up in Intellisense. Since this type information is determined while you are typing and is not used to generate a true static type, you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any datasource within seconds to quickly get an overview of any structured data storage. My favorite programming language C# will not get such features in the near future, but there is hope.

    If you restrict yourself to OData sources you can use LINQPad to query any OData enabled data source with LINQ with ease. There you can query, for example, Stackoverflow; the output is also nicely rendered, which makes it a very good tool to explore OData sources today.
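    As a rough illustration only (Type Providers were still pre-release at the time of this talk, so the exact provider name and service URL below are assumptions rather than anything shown at Jazoon), the OData case sketched above might later look something like this in F#:

        // Illustrative sketch: provider name and service URL are assumptions.
        open Microsoft.FSharp.Data.TypeProviders

        type Northwind = ODataService<"http://services.odata.org/Northwind/Northwind.svc/">

        let ctx = Northwind.GetDataContext()

        // Intellisense offers the entity sets and their typed properties directly.
        ctx.Customers
        |> Seq.filter (fun c -> c.Country = "Germany")
        |> Seq.iter (fun c -> printfn "%s" c.CompanyName)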

    Read the article

  • Oracle MAA Part 1: When One Size Does Not Fit All

    - by JoeMeeks
    The good news is that Oracle Maximum Availability Architecture (MAA) best practices combined with Oracle Database 12c (see video) introduce first-in-the-industry database capabilities that truly make unplanned outages and planned maintenance transparent to users. The trouble with such good news is that Oracle’s enthusiasm in evangelizing its latest innovations may leave some to wonder if we’ve lost sight of the fact that not all database applications are created equal. After all, many databases don’t have the business requirements for high availability and data protection that require all of Oracle’s ‘stuff’. For many real world applications, a controlled amount of downtime and/or data loss is OK if it saves money and effort. Well, not to worry. Oracle knows that enterprises need solutions that address the full continuum of requirements for data protection and availability. Oracle MAA accomplishes this by defining four HA service level tiers: BRONZE, SILVER, GOLD and PLATINUM. The figure below shows the progression in service levels provided by each tier. Each tier uses a different MAA reference architecture to deploy the optimal set of Oracle HA capabilities that reliably achieve a given service level (SLA) at the lowest cost. Each tier includes all of the capabilities of the previous tier and builds upon the architecture to handle an expanded fault domain.

    Bronze is appropriate for databases where simple restart or restore from backup is ‘HA enough’. Bronze is based upon a single instance Oracle Database with MAA best practices that use the many capabilities for data protection and HA included with every Oracle Enterprise Edition license. Oracle-optimized backups using Oracle Recovery Manager (RMAN) provide data protection and are used to restore availability should an outage prevent the database from being able to restart.

    Silver provides an additional level of HA for databases that require minimal or zero downtime in the event of database instance or server failure as well as many types of planned maintenance. Silver adds clustering technology - either Oracle RAC or RAC One Node. RMAN provides database-optimized backups to protect data and restore availability should an outage prevent the cluster from being able to restart.

    Gold raises the game substantially for business critical applications that can’t accept vulnerability to single points-of-failure. Gold adds database-aware replication technologies, Active Data Guard and Oracle GoldenGate, which synchronize one or more replicas of the production database to provide real time data protection and availability. Database-aware replication greatly increases HA and data protection beyond what is possible with storage replication technologies. It also reduces cost while improving return on investment by actively utilizing all replicas at all times.

    Platinum introduces all of the sexy new Oracle Database 12c capabilities that Oracle staff will gush over with great enthusiasm. These capabilities include Application Continuity for reliable replay of in-flight transactions that masks outages from users; Active Data Guard Far Sync for zero data loss protection at any distance; new Oracle GoldenGate enhancements for zero downtime upgrades and migrations; and Global Data Services for automated service management and workload balancing in replicated database environments. Each of these technologies requires additional effort to implement.
But they deliver substantial value for your most critical applications where downtime and data loss are not an option. The MAA reference architectures are inherently designed to address conflicting realities. On one hand, not every application has the same objectives for availability and data protection – the Not One Size Fits All title of this blog post. On the other hand, standard infrastructure is an operational requirement and a business necessity in order to reduce complexity and cost. MAA reference architectures address both realities by providing a standard infrastructure optimized for Oracle Database that enables you to dial-in the level of HA appropriate for different service level requirements. This makes it simple to move a database from one HA tier to the next should business requirements change, or from one hardware platform to another – whether it’s your favorite non-Oracle vendor or an Oracle Engineered System. Please stay tuned for additional blog posts in this series that dive into the details of each MAA reference architecture. Meanwhile, more information on Oracle HA solutions and the Maximum Availability Architecture can be found at: Oracle Maximum Availability Architecture - Webcast Maximize Availability with Oracle Database 12c - Technical White Paper

    Read the article

  • Backup SQL Database Federation

    - by Herve Roggero
    One of the amazing features of Windows Azure SQL Database is the ability to create federations in order to scale your cloud databases. However, until now there were very few options available to back up federated databases. In this post I will show you how Enzo Cloud Backup can help you backup and restore your federated database easily. You can restore federated databases in SQL Database, or even on SQL Server (as regular databases). Generally speaking, you will need to perform the following steps to backup and restore the federations of a SQL Database:

    - Backup the federation root
    - Backup the federation members
    - Restore the federation root
    - Restore the federation members

    These actions can be automated using the built-in scheduler of Enzo Cloud Backup, the command-line utilities, or the .NET Cloud Backup API provided, giving you complete control over how you want to perform your backup and restore operations.

    Backing up federations

    Let’s look at the tool to backup federations. You can explore your existing federations by using the Enzo Cloud Backup application as shown below. As you can see, the federation root and the various federations available are shown in separate tabs for convenience. You would first need to backup the federation root (unless you intend to restore the federation member on a local SQL Server database and you don’t need what’s in the federation root). The steps are similar to those for backing up a federation member, so let’s proceed to backing up a federation member. You can click on a specific federation member to view the database details by clicking the tab that contains your federation member. You can see the size currently consumed and a summary of its content at the bottom of the screen. If you right-click on a specific range, you can choose to backup the federation member. This brings up a window with the details of the federation member already filled out for you, including the value of the member that is used to select the federation member. Notice that the list of Federations includes “Federation Root”, which is what you need to select to backup the federation root (you can also do that directly from the root database tab). Once you provide at least one backup destination, you can begin the backup operation. From this window, you can also schedule this operation as a job and perform this operation entirely in the cloud. You can also “filter” the connection, so that only the specific member value is backed up (this will backup all the global tables, and only the records for which the distribution value is the one specified). You can repeat this operation for every federation member in your federation.

    Restoring Federations

    Once backed up, you can restore your federations easily. Select the backup device using the tool, then select Restore. The following window will appear. From here you can create a new root database. You can also view the backup properties, showing you exactly which federations will be created. Under the Federations tab, you can select how the federations will be created. I chose to recreate the federations and let the tool perform all the SPLIT operations necessary to recreate the same number of federation members. Other options include creating the first federation member only, or not creating the federation members at all. Once the root database has been restored and the federation members have been created, you can restore the federation members you previously backed up.
The screen below shows you how to restore a backup of a federation member into a specific federation member (the details of the federation member are provided to make it easier to identify). Conclusion This post gave you an overview on how to backup and restore federation roots and federation members. The backup operations can be setup once, then scheduled daily.

    Read the article

  • NFJS Central Iowa Software Symposium Des Moines Trip Report

    - by reza_rahman
    As some of you may be aware, I recently joined the well-respected US-based No Fluff Just Stuff (NFJS) Tour. If you work in the US and still don't know what the No Fluff Just Stuff (NFJS) Tour is, you are doing yourself a very serious disfavor. NFJS is by far the cheapest and most effective way to stay up to date through some world-class speakers and talks. Following the US cultural tradition of old-fashioned roadshows, NFJS is basically a set program of speakers and topics offered at major US cities year round. The NFJS Central Iowa Software Symposium was held August 8 - 10 in Des Moines. The attendance at the event and my sessions was moderate by comparison to some of the other shows. It is one of the few events of its kind that take place in this part of the country, so it is extremely important.

    I had five talks total over two days, more or less back-to-back. The first one was my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    The second talk (on the second day) was our flagship Java EE 7/8 talk. Currently the talk is basically about Java EE 7, but I'm slowly evolving the talk to transform it into a Java EE 8 talk as we move forward. The following is the slide deck for the talk: JavaEE.Next(): Java EE 7, 8, and Beyond from Reza Rahman.

    The next talk I delivered was my Cargo Tracker/Java EE + DDD talk. This talk basically overviews DDD and describes how DDD maps to Java EE using code examples/demos from the Cargo Tracker Java EE Blue Prints project: Applied Domain-Driven Design Blue Prints for Java EE from Reza Rahman.

    The fourth was my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE". The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts -- a birds-eye view of the NoSQL landscape, how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc, and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running.

    I finished off the event with a talk titled Building Java HTML5/WebSocket Applications with JSR 356. The talk introduces HTML5 WebSocket, overviews JSR 356, tours the API and ends with a small WebSocket demo on GlassFish 4. The slide deck for the talk is posted below: Building Java HTML5/WebSocket Applications with JSR 356 from Reza Rahman. The demo code is posted on GitHub: https://github.com/m-reza-rahman/hello-websocket.

    My next NFJS show is the Greater Atlanta Software Symposium on September 12 - 14. Here's my tour schedule so far, I'll keep you up-to-date as the tour goes forward: September 12 - 14, Atlanta. September 19 - 21, Boston.
October 17 - 19, Seattle. I hope you'll take this opportunity to get some updates on Java EE as well as the other useful content on the tour?

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide more streamlined insight into individual records or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy

    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries

    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views

    To turn this or any query into a view, just put CREATE VIEW AS before it. If you want to change it, use the statement ALTER VIEW AS.

    Creating Computed Columns

    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification or check out this article here.

    Advanced Scoring and Ranking

    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling or to find patterns describing your best customers.

    - Number of orders
    - First Order Date
    - Last Order Date
    - Percentage of months an order was placed since the first order
        SELECT CustomerID,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate)
                     FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers
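    As a small illustration of the "just put CREATE VIEW in front of it" step above, the full statement also needs a view name; the name below is invented for the example:

        -- Illustrative only: the view name is made up.
        CREATE VIEW CustomerOrderStats AS
        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers;

        -- Later changes keep the same shape:
        -- ALTER VIEW CustomerOrderStats AS SELECT ...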

    Read the article

  • Tuple - .NET 4.0 new feature

    - by nmarun
    Something I hit while playing with .net 4.0 – Tuple. MSDN says ‘Provides static methods for creating tuple objects.’ and the example below is: 1: var primes = Tuple.Create(2, 3, 5, 7, 11, 13, 17, 19); Honestly, I’m still not sure with what intention MS provided us with this feature, but the moment I saw this, I said to myself – I could use it instead of anonymous types. In order to put this to test, I created an XML file: 1: <Activities> 2: <Activity id="1" name="Learn Tuples" eventDate="4/1/2010" /> 3: <Activity id="2" name="Finish Project" eventDate="4/29/2010" /> 4: <Activity id="3" name="Attend Birthday" eventDate="4/17/2010" /> 5: <Activity id="4" name="Pay bills" eventDate="4/12/2010" /> 6: </Activities> In my console application, I read this file and let’s say I want to pull all the attributes of the node with id value of 1. Now, I have two ways – either define a class/struct that has these three properties and use in the LINQ query or create an anonymous type on the fly. But if we go the .NET 4.0 way, we can do this using Tuples as well. Let’s see the code I’ve written below: 1: var myActivity = (from activity in loaded.Descendants("Activity") 2:       where (int)activity.Attribute("id") == 1 3:       select Tuple.Create( 4: int.Parse(activity.Attribute("id").Value), 5: activity.Attribute("name").Value, 6: DateTime.Parse(activity.Attribute("eventDate").Value))).FirstOrDefault(); Line 3 is where I’m using a Tuple.Create to define my return type. There are three ‘items’ (that’s what the elements are called) in ‘myActivity’ type.. aptly declared as Item1, Item2, Item3. So there you go, you have another way of creating anonymous types. Just out of curiosity, wanted to see what the type actually looked like. So I did a: 1: Console.WriteLine(myActivity.GetType().FullName); and the return was (formatted for better readability): "System.Tuple`3[                            [System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],                            [System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],                            [System.DateTime, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]                           ]" The `3 specifies the number of items in the tuple. The other interesting thing about the tuple is that it knows the data type of the elements it’s holding. This is shown in the above snippet and also when you hover over myActivity.Item1, it shows the type as an int, Item2 as string and Item3 as DateTime. So you can safely do: 1: int id = myActivity.Item1; 2: string name = myActivity.Item2; 3: DateTime eventDate = myActivity.Item3; Wow.. all I can say is: HAIL 4.0.. HAIL 4.0.. HAIL 4.0
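    One thing a Tuple can do that an anonymous type cannot is appear in a method signature, so the query above can be wrapped up and reused. A small sketch under that assumption (the method name is invented; 'loaded' is the XDocument from the post, and the usual System.Linq and System.Xml.Linq usings are implied):

        private static Tuple<int, string, DateTime> GetActivity(XDocument loaded, int id)
        {
            return (from activity in loaded.Descendants("Activity")
                    where (int)activity.Attribute("id") == id
                    select Tuple.Create(
                        int.Parse(activity.Attribute("id").Value),
                        activity.Attribute("name").Value,
                        DateTime.Parse(activity.Attribute("eventDate").Value))).FirstOrDefault();
        }

        // Usage:
        // var myActivity = GetActivity(loaded, 1);
        // Console.WriteLine("{0} on {1}", myActivity.Item2, myActivity.Item3);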

    Read the article

  • Can't remove JPanel from JFrame while adding new class into it

    - by A.K.
    Basically, I have my Frame class, which instantiates all the properties for the JFrame, and draws a JLabel with an image (my title screen). Then I made a separate JPanel with a start button on it, and made a mouse listener that will allow me to remove these objects while adding in a new Board() class (Which paints the main game). *Note: The JLabel is SEPARATE from the JPanel, but it still gets moved to the side by it. Problem: Whenever I click the button though, it only shows a little square of what I presume is my board class trying to run. Code below for the Frame Class: package OurPackage; //Made By A.K. 5/24/12 //Contains Frame. import java.awt.BorderLayout; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GridBagLayout; import java.awt.GridLayout; import java.awt.Image; import java.awt.Rectangle; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.KeyEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import javax.swing.*; import javax.swing.plaf.basic.BasicOptionPaneUI.ButtonActionListener; public class Frame implements MouseListener { public static boolean StartGame = false; ImageIcon img = new ImageIcon(getClass().getResource("/Images/ActionJackTitle.png")); ImageIcon StartImg = new ImageIcon(getClass().getResource("/Images/JackStart.png")); public Image Title; JLabel TitleL = new JLabel(img); public JPanel panel = new JPanel(); JButton StartB = new JButton(StartImg); JFrame frm = new JFrame("Action-Packed Jack"); public Frame() { TitleL.setPreferredSize(new Dimension(1200, 420)); frm.add(TitleL); frm.setLayout(new GridBagLayout()); frm.add(panel); panel.setSize(new Dimension(220, 45)); panel.setLayout(new GridBagLayout ()); panel.add(StartB); StartB.addMouseListener(this); StartB.setPreferredSize(new Dimension(220, 45)); frm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frm.setSize(1200, 420); frm.setVisible(true); frm.setResizable(false); frm.setLocationRelativeTo(null); } public static void main(String[] args) { new Frame(); } public void mouseClicked(MouseEvent e) { StartB.setContentAreaFilled(false); panel.remove(StartB); frm.remove(panel); frm.remove(TitleL); //frm.setLayout(null); frm.add(new Board()); //Add Game "Tiles" Or Content. x = 1200 frm.validate(); System.out.println("Hit!"); } @Override public void mouseEntered(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseExited(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mousePressed(MouseEvent arg0) { // TODO Auto-generated method stub } @Override public void mouseReleased(MouseEvent arg0) { // TODO Auto-generated method stub } }
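A likely culprit (not confirmed in the question) is that the frame keeps its GridBagLayout, which sizes the new Board to its preferred size and produces the "little square". A minimal sketch of one possible fix for the mouseClicked handler, reusing the question's frm and Board names and its existing frm.validate() call (BorderLayout is already imported there):

public void mouseClicked(MouseEvent e) {
    frm.getContentPane().removeAll();      // drops TitleL and the button panel in one go
    frm.setLayout(new BorderLayout());     // BorderLayout lets the board fill the frame
    frm.add(new Board(), BorderLayout.CENTER);
    frm.validate();                        // re-run layout on the changed hierarchy
    frm.repaint();                         // repaint the area the old components occupied
    System.out.println("Hit!");
}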

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem: Compute “Roles” – computers running an OS and optionally IIS (you can have more than one "Instance" of a given Role); Storage – Blobs, Tables and Queues; and Other Services – things like the Service Bus, Azure Connection Services, SQL Azure and Caching. It’s important to understand that some of these services are Stateless and others maintain State. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you’re in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren’t really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless. The Compute Role Instances in Windows Azure are Stateless. To upgrade hardware, because of a fault or many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It’s important to note that storage in Azure does maintain State. Your data will not simply disappear - it is maintained - in fact, it’s maintained three times in a single datacenter and all those copies are replicated to another for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained. So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a Stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there’s only one. That means you should deploy at least two Instances of each Role, not only for scale to handle load, but so that the first “cashier” has someone to replace them when they disappear. It’s not just a good idea - it’s a requirement for the Service Level Agreement (SLA) covering uptime in Azure. We point this out right in the Management Portal when you deploy the application. When you deploy a Role Instance you can also set the “Upgrade Domain”. Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post) - the process works like this for two Roles. This example covers the upgrade scenario, so you have four Roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two Instances, and this example shows that you're covered for High Availability and upgrade paths. The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
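For reference, the instance count is set per Role in the service configuration file. A trimmed sketch of a ServiceConfiguration.cscfg (the service and role names here are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- at least two instances, to stay within the compute SLA -->
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
  <Role name="WorkerRole1">
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>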

    Read the article

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played itself out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013. The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exponentially large factor in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It’s changed everything, and it’s not finished. These fun facts reveal the position of strength with which Twitter enters 2013: It now generates a billion tweets every 2.5 days It has 500 million+ users The average Twitter user has tweeted 307 times 32% of everyone using the Internet uses Twitter It’s expected to bring in $540 million in ad revenue by 2014 11 new accounts are created every second High-level Executive Summary: people love it, people use it, and they’re going to keep loving and using it. Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself…or plans to do itself. It’s not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced. Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform’s Instagram don’t work on Twitter anymore, and Twitter’s rolling out their own photo filter product. Where the two have lived in a “plenty of room for everybody” symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry. Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren’t the attraction, they’re what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear. There are challenges ahead and Costolo has also laid those out; entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data. What does this mean for corporations? They can expect a more media-rich evolution and growing emphases on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches. Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And, the more retailer tweets they see, the more likely they are to purchase on the retail site. 
As more of those tweets point to engaging media content from the brand, the results should get even better. Twitter appears ready for 2013. Enterprise brands have some work to do. @mikestiles Photo: Stuart Miles, freedigitalphotos.net

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted around the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success shows that 72% of users hadn’t used PFM and worse, 58% of them were not kicked about using it. Of the rest who had used it, only half did on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expense, it is simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and Cash Flow is important but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget over runs and spend analysis are post facto and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight based decisions. So what can be done to resuscitate PFM? Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to Spendable Balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions. Personal Financial Guidance - Banks need to think ground level and see if their PFM offering is really helping customers achieve self actualisation. Banks need to recognise that most customers out there are non-proficient about making the best value of their money. Customers return when they know that they are being guided rather than being just informed on their finance. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending thereby encouraging sound savings habits. Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM on to devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device in pushing real time notifications to make pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals real time and push them pre-event on to the device. So next time, I should know that I have over run my eating out budget before walking into that burger joint and not after. Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. 
Integrating social media into PFM engagement will let customers share and solicit their financial management experiences with their peer group. Peer comparisons help benchmark one’s savings and spending habits with those of the peer group and increase stickiness. While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into the same traps. Best practices lie in profiling and segmenting customers, being where they are and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank for inspiration.

    Read the article

  • Social HCM: Is Your Team Listening?

    - by Mike Stiles
    Does integrating Social HCM into your enterprise make sense? Consider Sam and Christina. Sam is a new hire at a big company. On the job 3 weeks, a question has come up on how to properly file an expense report to get reimbursed. It was covered in the onboarding session, but shockingly enough, Sam didn’t memorize or write down every word of the session. The answer is probably in a handout, in a stack of handouts 2 inches thick. It also might be on the employee web site…somewhere. Christina is a new hire at a different big company. She has the same question. She logs into her company’s social network, goes to the “new hires” group, asks her question and gets an answer in seconds. Christina says, “Cool!” Sam says, “Grrrr.” It’s safe to say the qualified talent your company wants is accustomed to using social platforms to communicate and get quick answers. As such, Christina is comfortable at her new company, whereas Sam is wondering what he’s gotten himself into. Companies that cling to talent communication and management systems that don’t speak to talent’s needs or expectations put themselves at risk. Right from the recruiting stage, prospects can determine if a company has embraced the communications tools of the 21st century. If they don’t see it, alarm bells go off. With great talent more in demand than ever, enterprises should reconsider making “this is the way we do it, you adapt to us” their mantra. Other blogs have clearly outlined that apart from meeting top recruits’ expectations, Social HCM benefits the organization itself in terms of efficiency, talent performance & measurement. Recruiting: Jobvite shows 64% of companies hired using social. 89% of job seekers are using social in their search. Social can give employers access to relevant communities of prospects and advance the brand. Nucleus Research found general hiring software can provide over 1,000% ROI by reducing churn and improving screening. Social talent acquisition should perform at least as well. Learning & Development: Employees, learning from the company or from peers, can be kept on top of the latest needed skillsets and engage in self-paced training so as to advance within the company. Performance Management: Just as gamers are egged on by levels and achievements, talent can reach for workplace kudos, be they shout-outs from peers & managers or formally established milestones. Plus employee reviews become consistent and fair as managers have access to the cumulative feedback social offers. Workflow and Collaboration: With workforces dispersing in terms of physical location, social provides a platform that helps eliminate the drawbacks such dispersal would have brought just 10 years ago. Finding and connecting with just the right colleague to get the most relevant info at any given time has never been more possible…or expected. While yes, marketing has taken the social lead inside the enterprise, HCM (with the word “human” right there in its name) is the obvious locale for the next big integration of social in business. The technology is there. At Oracle, Fusion HCM apps are deeply embedded with Social HCM…just one example of systems taking social across the enterprise. Christina’s company is communicating with her in ways she’s used to. Sam’s company may as well be trying to talk to him using signal flags. @mikestiles Photo via stock.xchng

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. Looking at the solution architecture, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even higher. For example, a large insurance company did a proof of concept comparing ODI with a traditional ETL tool (one of the market leaders) on Exadata. The same process that took 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers them to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligent data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers: non-invasive data capture from heterogeneous sources with no performance impact on the source; no mid-tier, since set-based transformations use database power; mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW; and an integrated solution with Enterprise Data Quality that enables loading trusted data into the data warehouse. In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in DataStore without bankrupting me. Initial requirements are: After user finishes survey client sends list of pairs [ID (int) + PercentHit (double)]. This list shows how close answers of this user match predefined answers of reference answerers (which identified by IDs). I call them "target IDs". Creator of the survey wants to see aggregated % for given IDs for last hour, particular timeframe or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created entity public class HitsStatsDO implements Serializable { @Id transient private Long id; transient private Long version = (long) 0; transient private Long startDate; @Parent transient private Key parent; // fake parent which contains target id @Transient int targetId; private double avgPercent; private long hitCount; } But writing HitsStatsDO for each target from each user would give a lot of data. For instance I had a survey with 3000 targets which was answered by ~4 million people within one week with 300K people taking survey in first day. Even if we assume they were answering it evenly for 24 hours it would give us ~1040 writes/second. Obviously it hits concurrent writes limit of Datastore. I decided I'll collect data for one hour and save that, that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless so I had to use dynamic backend instance. There I have something like this: // Contains stats for one hour private class Shard { ReadWriteLock lock = new ReentrantReadWriteLock(); Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID public void saveToDatastore(); public void updateStats(Long startDate, Map<Integer, Double> hits); } and map with shard for current hour and previous hour (which doesn't stay here for long) private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate So once per hour I dump Shard for previous hour to Datastore. Plus I have class LifetimeStats which keeps Map<Integer, HitsStatsDO> in memcached where map-key is target ID. Also in my backend shutdown hook method I dump stats for unfinished hour to Datastore. There is only one major issue here - I have only ONE backend instance :) It raises following questions on which I'd like to hear your opinion: Can I do this without using backend instance ? What if one instance is not enough ? How can I split data between multiple dynamic backend instances? It hard because I don't know how many I have because Google creates new one as load increases. I know I can launch exact number of resident backend instances. But how many ? 2, 5, 10 ? What if I have no load at all for a week. Constantly running 10 backend instances is too expensive. What do I do with data from clients while backend instance is dead/restarting?
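One pattern that avoids a resident backend altogether (a sketch, not part of the question's code) is to buffer the per-target counters in memcache with atomic increments and let a cron-triggered handler fold them into HitsStatsDO entities once per hour. Memcache entries can be evicted, so this trades a small risk of lost hits for much lower cost; the key names below are illustrative:

// Requires com.google.appengine.api.memcache.*; called from the request that records hits.
MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
long hourBucket = System.currentTimeMillis() / (60L * 60L * 1000L);
for (Map.Entry<Integer, Double> hit : hits.entrySet()) {
    String base = "stats:" + hourBucket + ":" + hit.getKey();
    cache.increment(base + ":count", 1L, 0L);                             // number of hits
    cache.increment(base + ":pct", Math.round(hit.getValue() * 100), 0L); // summed percent x 100
}
// A cron-invoked servlet later reads the keys for the finished hour and writes one
// HitsStatsDO per target, dividing the summed pct by count to recover avgPercent.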

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do, when the install is done boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool zone is stored on.  Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing, the output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone nor does that path need to be (or is) accessible inside the zone). Whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local  
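For completeness, the locally keyed dataset shown above can be created from inside the zone with something like the following (a sketch; the passphrase,prompt keysource is what keeps the key the zone administrator's own):

# zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/home/bob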

    Read the article

  • WiFi stops working after a while in Lenovo ThinkPad W520 (Ubuntu 12.04)

    - by el10780
    After several minutes(I do not know how many) there is no internet connection on my laptop via Wi-Fi.Ubuntu doesn't show any kind of message that my WiFi was disconnected neither there is a signal drop,but suddenly Firefox stops connecting to web pages.I checked my modem/router and it seems that it is working fine.I tried also to reboot the WiFi device and nothing happens.The only thing that it makes it work again is a reboot of the system and if I do not want to do a reboot then I am enforced to connect to the Internet using Ethernet cable.Does anybody know what is happening? ## Some Hardware info that might be helpful ## el10780@ThinkPad-W520:~$ sudo lshw -class network *-network description: Ethernet interface product: 82579LM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 04 serial: f0:de:f1:f1:be:10 size: 100Mbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.5.1-k duplex=full firmware=0.13-3 ip=192.168.0.10 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:50 memory:f3a00000-f3a1ffff memory:f3a2b000-f3a2bfff ioport:6080(size=32) *-network description: Wireless interface product: Centrino Advanced-N + WiMAX 6250 vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 5e serial: 64:80:99:63:14:74 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-26-generic firmware=41.28.5.1 build 33926 ip=192.168.0.6 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn resources: irq:52 memory:f3900000-f3901fff *-network description: Ethernet interface physical id: 1 bus info: usb@2:1.3 logical name: wmx0 serial: 00:1d:e1:53:b2:e8 capabilities: ethernet physical configuration: driver=i2400m firmware=i6050-fw-usb-1.5.sbcf link=no el10780@ThinkPad-W520:~$ lspci 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:16.3 Serial controller: Intel Corporation 6 Series/C200 Series Chipset Family KT Controller (rev 04) 00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4) 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b4) 00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI 
Express Root Port 7 (rev b4) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation QM67 Express Chipset Family LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1) 03:00.0 Network controller: Intel Corporation Centrino Advanced-N + WiMAX 6250 (rev 5e) 0d:00.0 System peripheral: Ricoh Co Ltd Device e823 (rev 08) 0d:00.3 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 PCIe IEEE 1394 Controller (rev 04) 0e:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) el10780@ThinkPad-W520:~$ rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no 3: i2400m-usb:2-1.3:1.0: WiMAX Soft blocked: yes Hard blocked: no The weirdest thing is this screenshot which I took after running the **Additional Drivers** program.I mean I have a NVidia Quadro 1000M and my Intel Centrino WiFi Card and this shows that there are not proprietay drivers for my system. http://imageshack.us/photo/my-images/268/screenshotfrom201207062.png/
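Until the root cause is pinned down, a common workaround for iwlwifi dropouts on 12.04 (a suggestion only, not verified on this particular ThinkPad) is to reload the driver and rule out 802.11n and power-save issues:

# reload the wireless driver without rebooting the whole system
sudo modprobe -r iwlwifi && sudo modprobe iwlwifi

# persistently disable 802.11n for iwlwifi (the file name here is just an example)
echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi-dropfix.conf

# check whether wireless power management is contributing to the drops
sudo iwconfig wlan0 power off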

    Read the article

< Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >