Search Results

Search found 18121 results on 725 pages for 'support esemenyek'.


  • Mandatory look back at 2010

    - by Bertrand Le Roy
    Yeah, it's one of those posts, sorry. First, the mildly depressing: the most popular post on this blog this year with 47,000 hits was a post from last year about a fix to a bug in ASP.NET. A content-less post except for that link to the KB article that people should have found by going directly to the support site in the first place. Then, the really depressing: the second most popular post this year with 34,000 hits was a post from 2005 about how to display message boxes on a web page. I mean come on. This was kind of fun five years ago and it did solve one of the most common n00b mistakes VB programmers trying to move to the web were making. But come on, we've traveled about 4.7 billion miles around the Earth since then. Do people still do that kind of stuff? I should probably put a big red banner on top of this post. Oh [supernatural entity of your choice]. Hand me that gun, please. Third most popular post with 24,000 hits is from 2004. It's about how to set a session variable before redirecting. That problem has been fixed a long time ago. Oh well. Fourth most popular post. 21,000 hits. 2007. How to work around a stupid bug in ASP.NET Ajax 1.0. Fixed in ASP.NET 3.5? ASP.NET Ajax 1.0? Need I say more? The fifth one (20,000 hits) is an old post as well but I'm kind of fond of it: it's about that photo album handler I've been organically growing for a few years. It reminds me that I need to refresh it and make a new release. Good SEO title too. Back to insanity with the sixth one (16,000) that's about working around a bug in IE6. IE6. Please just refuse to pander to that browser any more. It's about time. Let's move on, please. Actually, the first post from 2010 is 15th in the list. We have a trio of these actually with server-side image resizing and FluentPath. So what happened? Well, I like the ad money, but not to the point that I'm going to write my stuff to inflate it. Actually I think if I tried I would fail miserably (I mean, I would fail worse). What really happened this year was new stuff: Orchard, FluentPath and the stuff with the Netduino. That stuff needs time to get off the ground but my hope is that it's going to be useful in the long run and that five years from now I'll be lamenting on how well those posts are still doing. So, no regret. 2010 was a good year. Oh, and I was on This Developer's Life this year! Yay! Anyways, thank you all for reading me. Please continue doing that. And happy 2011!

    Read the article

  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT from the time it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain and succeeding with it. Best practices have emerged and people and technology have matured. We have got better at delivering on a stable platform on mission critical applications and services. Yet, there is this one secret that sets some SOA customers apart from the others. These companies grow and revolutionize their business and not just transform their IT infrastructure. The differences seem subtle for an untrained eye examining these organizations externally. And from within the company, it’s a bit like an ant sitting on an elephant, hard to differentiate between the IT trunk and business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull in business people more and more to weigh into their IT decisions. They wrench understanding process over services. They don’t settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business. Okay, I’ll give it to you. These organizations layer Business Process Management (BPM) on top of their SOA. Think about lifeblood business processes in your own organizations. If you are Fedex, this would be shipping and handling. If you are Stanford Hospital, this would be patient case-management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business process. Can you decouple your business processes from your SOA so that the two can transform and change independent of each other? Can you forecast success metrics for your business process, make the changes across the board and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of few experts in your organization or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize it? Business Process Management is a superset of SOA. It is the process of getting your business to articulate business value and metrics and have it implemented in IT without any loss in translation. It is the act of extracting the business process from the minds of experts and IT applications in your organization and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Bluetooth not found on BCM43228

    - by TK Kocheran
    I've got a Broadcom BCM43228 mPCIe card which came with my motherboard (an ASUS ROG Maximus V Extreme; I can't seem to find a link to what the card is). It's working great for WiFi right now, but I can't detect the Bluetooth hardware onboard. In Windows, I have full Bluetooth 4.0 support.
    $ lspci
    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
    00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
    00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
    00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
    00:1c.4 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 5 (rev c4)
    00:1c.6 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 7 (rev c4)
    00:1c.7 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 8 (rev c4)
    00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
    00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: NVIDIA Corporation Device 1189 (rev a1)
    01:00.1 Audio device: NVIDIA Corporation Device 0e0a (rev a1)
    0d:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    0e:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:04.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:06.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:08.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    0f:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
    10:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
    12:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
    15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
    17:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
    The key line seems to be:
    15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
    If I try to detect the Bluetooth card, I don't see anything:
    $ hcitool dev
    Devices:
    rfkill list all: Output
    lspci: Output
    lsusb: Output
    I finally found the card with usb-devices:
    T: Bus=01 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
    D: Ver= 2.00 Cls=ff(vend.) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
    P: Vendor=0b05 ProdID=17b5 Rev=01.12
    S: Manufacturer=Broadcom Corp
    S: Product=BCM20702A0
    S: SerialNumber=############
    C: #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=0mA
    I: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
    I: If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
    I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
    I: If#= 3 Alt= 0 #EPs= 0 Cls=fe(app. ) Sub=01 Prot=01 Driver=(none)
    I've heard that this card needs to have firmware injected into it in order to function. If that's the case, how do I do it?

    Read the article

  • Integrating JavaFX Scene Builder in the IDEs

    - by Jerome Cambon
    I experienced recently using Scene Builder from Netbeans, Eclipse and IntelliJ IDEA. As you may know, Scene Builder is a standalone tool, that can be used independently of any IDE. But it can be very convenient to use it with your favorite IDE, for instance start it by double-clicking on an FXML file, or run samples delivered with Scene Builder.  I'm sharing here with you few tweaks that I had to do for a better integration. Scene Builder 1.1 Developer Preview should be installed before doing the tweaks. The steps below have been done on Windows 7. It should be very similar on both Mac OS and Linux. Please tell me if you find any issue on one of these 2 platforms. Netbeans 7.3 Netbeans 7.3 can be downloaded from here. Creating a New FXML project Part of the JavaFx projects, Netbeans allows to create a 'JavaFX FXML Application', that creates a JavaFx project based on FXML description. The FXML file will be editable with Scene Builder. Starting Scene Builder from Netbeans If SceneBuilder 1.1 is installed, Netbeans will discover it automatically.In case of issue, one can open the Options panel, Java section, JavaFx tab. Scene Builder home should appear here. You can then either Open the FXML file with Scene Builder, or edit it with the Netbeans FXML editor : When 'Open' is selected, Scene Builder appears on top of the Netbeans window : When 'Edit' is selected, the FXML is opened in the Netbeans FXML editor, which support syntax highlighting and completion : Using Scene Builder Samples Scene Builder provides Netbeans projects, that can be opened/run directly : Eclipse 4.2.1 + e(fx)clipse 0.1.1 JavaFX integration in Eclipse has been done with the e(fx)clipse plugin. A distribution bundle containing Eclipse and e(fx)clipse is provided here. Creating New FXML project All the JavaFX-related projects can be found in 'Other' section : First create a new JavaFX project: Enter the project name (Test here). JavaFX delivery will be found in the JRE. Then, create a 'New FXML Document': Enter the FXML file name (Sample here). You may also want to choose the FXML document root element (AnchorPane by default). Dynamic root is for advanced users which want to manage custom types. Starting Scene Builder from Eclipse Once created, you can then either Open the FXML file with Scene Builder, or Open it in the Eclipse FXML editor : Using Scene Builder Samples from Eclipse To use Scene Builder samples, first create a new JavaFX Project (from 'Other' section): Then, on the next panel, 'Link additionnal source': … and select the source directory of a Scene Builder example : HelloWorld here (the parent directory of the java package should be selected).Then, choose a 'Folder name' for your sample: You can now run the Scene Builder example by right-clicking the Main.java source file: IntelliJ IDEA 11.1.3 IntelliJ IDEA Community Edition can be downloaded from here. IntelliJ IDEA has no specific JavaFX integration. Creating New IntelliJ project from existing source Since IntelliJ has no JavaFX project knowledge, we are using the Scene Builder samples as a starting point. We are going to create a new Java project from the HelloWorld sample: Then, click twice on 'Next' (nothing to change), then 'Finish'. The 'HelloWorld' project is created. Starting Scene Builder from IntelliJ We need to tell the IDE that FXML files are opened with an external application. Then, the OS file association will be used. To do this, open the File->Settings panel. Then, select 'File Types' and 'Files opened in associated applications'. 
And add a new wildcard : '*.fxml' : Now, from the HelloWorld project, you can double-click on HelloWorld.fxml : Scene Builder window appears on top of the IntelliJ window : Using Scene Builder Samples from IntelliJ We need to tell IntelliJ that the fxml files must be copied in the build directory.To do that, from the HelloWorld directory, open the 'idea' section, and edit the 'compiler.xml' file. We need to add an '*.fxml' entry: Then, you can run the sample from HelloWorld project, by right-clicking the Main class:
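    For reference, the Main class you right-click in these samples is just a small JavaFX launcher that loads the FXML document Scene Builder produced. Here is a minimal, hypothetical sketch of such a launcher (the Sample.fxml file name is an assumption, and this is not the actual HelloWorld sample source):
        import javafx.application.Application;
        import javafx.fxml.FXMLLoader;
        import javafx.scene.Parent;
        import javafx.scene.Scene;
        import javafx.stage.Stage;

        public class Main extends Application {
            @Override
            public void start(Stage stage) throws Exception {
                // Load the FXML document that was edited with Scene Builder
                Parent root = FXMLLoader.load(getClass().getResource("Sample.fxml"));
                stage.setScene(new Scene(root));
                stage.setTitle("Scene Builder sample");
                stage.show();
            }

            public static void main(String[] args) {
                launch(args);
            }
        }
    Running a class of this shape is all that the "right-click Main.java and run" steps above boil down to in each of the three IDEs.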

    Read the article

  • Thank you for the good collaboration in FY12

    - by A&C Redaktion
    Dear Oracle partners, once again a fiscal year has come to an end. First of all I would like to say to you, our partners: you have been making a fundamental contribution to Oracle's success for years – in fiscal year 2012 as well. Thank you very much for that! Looking back at the past year, there were outstanding product announcements, events and partner programs under the motto "Oracle on Oracle". One of the most important technical innovations was certainly the Oracle Database Appliance, aimed specifically at smaller companies. The data explosion, which we all add to every day, is also a challenge for every company. To manage this "big data" efficiently, we have extended the Exa family. Alongside the Exadata Database Machine, Exalogic and Exalytics are now also available as needed – with reliable hardware, mature software and support from a single source. There is something there for every customer. The Oracle Optimized Solutions from our VADs were another important topic, especially for the midmarket, and one we have also covered here on the blog. For ISVs, the Exastack Ready and Exastack Optimized programs were developed. Specifically for partners, Partner Sales Books were created for the focus topics, such as cloud computing. In the OPN, more than 30 German-language marketing kits are available to support you in your day-to-day sales work. And with the revised and expanded Solutions Catalog in the OPN, end customers can easily find your solution offerings. These are just a few examples of how we try to support you in your business. I hope we keep doing this as well as we have so far. You will find out what the new fiscal year brings at the EMEA Partner Kickoff on June 29 and soon here on the blog. But this much I will already reveal: my lucky number is 13 – and yours? Warm regards, Silvia Kaske, Senior Director Alliances & Channel Tech Europe North, Oracle Deutschland B.V. & Co. KG

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things that people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the controller or the action methods, there is no way you can inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API. There are a number of benefits of this fluent filter configuration API over regular attribute-based filter decoration: you can pass your dependencies in the constructor rather than through properties. Let's say you want to create an action filter which will update the user's last activity date. You can create a filter like the following:
    public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter
    {
        public UpdateUserLastActivityAttribute(IUserService userService)
        {
            Check.Argument.IsNotNull(userService, "userService");
            UserService = userService;
        }

        public IUserService UserService { get; private set; }

        public void OnResultExecuting(ResultExecutingContext filterContext)
        {
            // Do nothing, just sleep.
        }

        public void OnResultExecuted(ResultExecutedContext filterContext)
        {
            Check.Argument.IsNotNull(filterContext, "filterContext");

            string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ?
                              filterContext.HttpContext.User.Identity.Name :
                              null;

            if (!string.IsNullOrEmpty(userName))
            {
                UserService.UpdateLastActivity(userName);
            }
        }
    }
    As you can see, it is no different from a regular filter except that we are passing the dependency in the constructor. Next, we have to configure for which controllers/action methods this filter will execute:
    public class ConfigureFilters : ConfigureFiltersBase
    {
        protected override void Configure(IFilterRegistry registry)
        {
            registry.Register<HomeController, UpdateUserLastActivityAttribute>();
        }
    }
    You can register more than one filter for the same controller/action methods:
    registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>();
    You can register the filters for a specific action method instead of the whole controller:
    registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index());
    You can even set various properties of the filter:
    registry.Register<ControlPanelController, CustomAuthorizeAttribute>(attribute => { attribute.AllowedRole = Role.Administrator; });
    The fluent filter registration also reduces the number of base controllers in your application. It is very common that we create a base controller, decorate it with action filters, and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers. You can do the same with a single-line statement with the fluent filter registration.
    Registering the filters for all controllers:
    registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type)));
    Registering filters for selected controllers:
    registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type) && (type.Name.StartsWith("Home") || type.Name.StartsWith("Post"))));
    You can also use the built-in filters in the fluent registration, for example:
    registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });
    With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That's it for today. In the next post we will discuss the model binding support in MvcExtensions. So stay tuned.

    Read the article

  • CSV Anyone?

    - by Tim Dexter
    CSV, a dead format? With today's abilities to generate Excel output (I'm working on some posts for the shiny new Excel templates) and the sophistication of the eText outputs, do you really need CSV? I guess so, seeing as it was added as an enhancement after the base product was released. But I'd be interested in hearing use cases. I used to think the same of text output, but now that I'm working in the public sector, where older technologies abound, I can see a use for text output. It's also used as a machine-readable format, etc. BIP still does not support a text format from its RTF templates, but it's very doable using XSL templates.
    Back to CSV. I noticed an enhancement in the big list of new stuff in the 10.1.3.4.1 rollup: you can now generate CSV from structured data, i.e. a data template or a SQL XML query. This piqued my interest, so I ran off a data template to CSV. It's interesting, in a geeky kinda way; I guess I'm that way inclined. Here's my data structure:
    <EMPLOYEES>
     <LIST_G_DEPT>
      <G_DEPT>
       <DEPARTMENT_ID>10</DEPARTMENT_ID>
       <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
       <LIST_G_EMP>
        <G_EMP>
         <EMPLOYEE_ID>200</EMPLOYEE_ID>
         <EMP_NAME>Jennifer Whalen</EMP_NAME>
         <EMAIL>JWHALEN</EMAIL>
         <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
         <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
         <SALARY>4400</SALARY>
        </G_EMP>
       </LIST_G_EMP>
       <TOTAL_EMPS>1</TOTAL_EMPS>
       <TOTAL_SALARY>4400</TOTAL_SALARY>
       <AVG_SALARY>4400</AVG_SALARY>
       <MAX_SALARY>4400</MAX_SALARY>
       <MIN_SALARY>4400</MIN_SALARY>
      </G_DEPT>
    Poor ol' Jennifer, she needs some help in the Admin department :0)
    Pushing this to CSV, BIP completely flattens the data out, so you get a completely denormalized data output in the CSV:
    TOTAL_SALARY,DEPARTMENT_ID,DEPARTMENT_NAME,MIN_SALARY,AVG_SALARY,MAX_SALARY,EMPLOYEE_ID,SALARY,PHONE_NUMBER,HIRE_DATE,EMP_NAME,EMAIL,TOTAL_EMPS
    4400,10,Administration,4400,4400,4400,200,4400,515.123.4444,1987-09-17T00:00:00.000-06:00,"Jennifer Whalen",JWHALEN,1
    19000,20,Marketing,6000,9500,13000,201,13000,515.123.5555,1996-02-17T00:00:00.000-07:00,"Michael Hartstein",MHARTSTE,2
    19000,20,Marketing,6000,9500,13000,202,6000,603.123.6666,1997-08-17T00:00:00.000-06:00,"Pat Fay",PFAY,2
    Useful? It's a little confusing to start with, but it is completely de-normalized so there is lots of repetition. But if it floats your boat, you got it!
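    To make the flattening a bit more concrete, here is a rough, hypothetical Java sketch that walks an XML document shaped like the one above and prints one denormalized CSV row per employee, repeating the department-level values on every row. It is only an illustration of the idea, not how BI Publisher generates its CSV internally, and the file name and column selection are assumptions:
        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class FlattenToCsv {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File("employees.xml"));
                System.out.println("DEPARTMENT_ID,DEPARTMENT_NAME,TOTAL_SALARY,EMPLOYEE_ID,EMP_NAME,SALARY");
                NodeList depts = doc.getElementsByTagName("G_DEPT");
                for (int i = 0; i < depts.getLength(); i++) {
                    Element dept = (Element) depts.item(i);
                    String deptId = text(dept, "DEPARTMENT_ID");
                    String deptName = text(dept, "DEPARTMENT_NAME");
                    String totalSalary = text(dept, "TOTAL_SALARY");
                    NodeList emps = dept.getElementsByTagName("G_EMP");
                    for (int j = 0; j < emps.getLength(); j++) {
                        Element emp = (Element) emps.item(j);
                        // Department-level values are repeated on every employee row,
                        // which is exactly what produces the denormalized output shown above.
                        System.out.println(String.join(",",
                                deptId, deptName, totalSalary,
                                text(emp, "EMPLOYEE_ID"),
                                "\"" + text(emp, "EMP_NAME") + "\"",
                                text(emp, "SALARY")));
                    }
                }
            }

            // Returns the text content of the first descendant element with the given tag name.
            private static String text(Element parent, String tag) {
                return parent.getElementsByTagName(tag).item(0).getTextContent();
            }
        }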

    Read the article

  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010 MIX10 was absolutely fantastic. Read my report of MIX10 to see why.   The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net became an even bigger success than I dared to dream of.   There was a huge discrepancy between the efforts by Microsoft to support .NET user groups to organize public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug  Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh” and even worse I seriously doubt we’ll ever get that number of people to such an event (which on top of that suffered from technical difficulties beyond our control).   What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “burned earth” left by the PDC keynote. Anyway, the stuff shown at the firestarter keynote was the topic of conversations among colleagues days later (“did you see that? oh yeah, that was seriously cool”). Things that I have learned/observed/noticed in 2010 In the long run, there’s a huge difference between “It works pretty well” and “it just works and I never have to think about it”. I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reasons, at least one game I couldn’t get run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: The combination of a NVIDIA GTX 460 and a GT 220 is running in “don’t make me think” mode for a couple of months now.   PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.   Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes, they just happen when every couple of days on different occasions (only thing in common: I clicked something in the main window – yeah, very helpful observation)   Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path for a beta to the RTM sounds to me like “please, please use it NOW!”).   I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability. Things I’m looking forward to in 2011 Obviously, the dotnet Cologne 2011. We already have been able to score some big name sponsors (Microsoft, Intel), but we’re still looking for more sponsors. And be assured that we’ll make sure that our partners get the most out of their contribution, regardless of how big or small.   MIX11, period.    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether C# next version’s async/await will be in it. 
Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).

    Read the article

  • Cox Communications' Strategic Approach to Enterprise User Experience: How Change Management and Usab

    - by Applications User Experience
    Author: Anna Wichansky, Senior Director, Applications User Experience, and Chair, Oracle Usability Advisory Board As part of our work in the User Experience group, our teams often go to Customer events such as the Higher Education User Group (HEUG) conference, Alliance 2010. This year's event was held in San Antonio, Texas, and was attended by hundreds of higher education, government, and public sector users of Oracle applications. The User Assistance team used this opportunity to reach out to customers in the Educational and Government sectors to better understand how their organizations are currently approaching help, messages, and other forms of user assistance. What is User Assistance? For us, user assistance is more than the old books of users' manuals and documentation. User assistance is anything that helps users get their jobs done quickly and efficiently. Instead of expecting users to stop and look through a guide or manual, we have been developing solutions that are embedded within the interface. We know that when people are having difficulty with a task, they want to be able to search efficiently for solutions and collaborate with coworkers. We know that they want to find their answers right there, right then, so that they can get on with their work. In our interviews at Alliance, we wanted to learn what the participants could tell us about what was happening on their campuses and in their institutions. Figure 1. For Oracle User Assistance, it's not just about books any more. So what did we do? Off to Texas, we recruited 10 people from nine different government and education organizations to come to our Oracle User Experience Onsite Usability Labs. We conducted one-hour interviews with these folks and asked them all about User Assistance--what people are doing, what they would like to do, what technologies they are using, what they would like to use, and ultimately what should we as a company be planning for our future products. We used this as an opportunity also to show them some of our design concepts for Fusion User Assistance, our next generation of user assistance based on the best of our user assistance in other products. Figure 2. Interviewing a technical user at Alliance. What we learned... People are not using paper or online manuals anymore. They don't want to see a manual that is written for technical users and that doesn't make sense to the ordinary end user. They really don't want to have to flip through a manual trying to find an answer to their question. Even when the answer might be tailored to their organization, they don't want to dig through documentation. When they need an answer now, they don't have the patience to dig for something that might or might not be clearly written. What does it mean to an organization when users don't want to deal with documentation? In many cases, it means that frustrated users make phone calls to try to find the answers that they need immediately. Phone calls are expensive to an organization and frustrating to the technical support staff who have provided documentation that no one wants to read anymore. If they don't call, they email for help often, and many users are asking for the same information. The bottom line is that if they could get that help immediately in the interface, they wouldn't have to make those calls or send those emails -- and that saves time and money. 
Our Fusion User Assistance options to customize help and get help for the task immediately were seen as an opportunity by these technical users to build the solutions that their users need and want. Figure 3. Joyce Ohgi and Laurie Pattison of Applications UX. Chicken Fried Steak. That was huge. But then, this was Texas, where we discovered a lot of things come very big. Drinks are served in quart-size glasses and dishes like Chicken Fried Steaks are served on platters not plates. We saw three-pound cinnamon rolls that you down with tea sweet enough to curl your hair. Deep in the heart of Texas, we learned a lot, and we ate even more.

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities – your data. Today's businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, real-time analytics? Join us in this three-part blog series where we'll look at each component in more detail. Meet us online on October 24th, where we'll take your questions about the issues you are facing in this brave new world of integration.
    Let's start first with Cloud. What happens to your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role in migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step to take to receive the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems to allow you to perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for companies moving to a private cloud. There is also a need to manage data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations.
    But what about the public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service – a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Messaging Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com – now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle's Data Integration products for Cloud, join us live: Data Integration Breakthroughs Webcast on October 24th 10 AM PST.
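    As a small illustration of the Java Messaging Service piece mentioned above, here is a minimal, generic JMS producer sketch. It uses only the standard javax.jms API; the JNDI names and queue are assumptions, and this is not the Oracle Messaging Cloud Service REST API itself:
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.InitialContext;

        public class JmsSendSketch {
            public static void main(String[] args) throws Exception {
                // Look up provider-specific resources; these JNDI names are assumptions.
                InitialContext ctx = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/DataIntegrationQueue");

                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);

                    // Send a simple text payload; real integrations would carry structured data.
                    TextMessage message = session.createTextMessage("hello from on-premise");
                    producer.send(message);
                } finally {
                    connection.close();
                }
            }
        }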

    Read the article

  • Get to Know a Candidate (9 of 25): Gary Johnson&ndash;Libertarian Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced for Wikipedia. Johnson served as the 29th Governor of New Mexico from 1995 to 2003, as a member of the Republican Party, and is known for his low-tax libertarian views and his strong emphasis on personal health and fitness. While a student at the University of New Mexico in 1974, Johnson sustained himself financially by working as a door-to-door handyman. In 1976 he founded Big J Enterprises, which grew from this one-person venture to become one of New Mexico's largest construction companies. He entered politics for the first time by running for Governor of New Mexico in 1994 on a fiscally conservative, low-tax, anti-crime platform. Johnson won the Republican Party of New Mexico's gubernatorial nomination, and defeated incumbent Democratic governor Bruce King by 50% to 40%. He cut the 10% annual growth in the budget: in part, due to his use of the gubernatorial veto 200 times during his first six months in office, which gained him the nickname "Governor Veto". Johnson sought re-election in 1998, winning by 55% to 45%. In his second term, he concentrated on the issue of school voucher reforms, as well as campaigning for marijuana decriminalization and opposition to the War on Drugs. During his tenure as governor, Johnson adhered to a stringent anti-tax and anti-bureaucracy policy driven by a cost–benefit analysis rationale, setting state and national records for his use of veto powers: more than the other 49 contemporary governors put together. Term-limited, Johnson could not run for re-election at the end of his second term. As a fitness enthusiast, Johnson has taken part in several Ironman Triathlons, and he climbed Mount Everest in May 2003. After leaving office, Johnson founded the non-profit Our America Initiative in 2009, a political advocacy committee seeking to promote policies such as free enterprise, foreign non-interventionism, limited government and privatization. The Libertarian Party is the third largest political party in the United States. It is also identified by many as the fastest growing political party in the United States. The political platform of the Libertarian Party reflects the ideas of libertarianism, favoring minimally regulated markets, a less powerful state, strong civil liberties (including support for Same-sex marriage and other LGBT rights), cannabis legalization and regulation, separation of church and state, open immigration, non-interventionism and neutrality in diplomatic relations (i.e., avoiding foreign military or economic entanglements with other nations), freedom of trade and travel to all foreign countries, and a more responsive and direct democracy. Members of the Libertarian Party have also supported the repeal of NAFTA, CAFTA, and similar trade agreements, as well as the United States' exit from the United Nations, WTO, and NATO. Although there is not an officially labeled political position of the party, it is considered by many to be more right-wing than the Democratic Party but more left-wing than the Republican Party when comparing the parties' positions to each other, placing it at or above the center. In the 30 states where voters can register by party, there are over 282,000 voters registered as Libertarians. Hundreds of Libertarian candidates have been elected or appointed to public office, and thousands have run for office under the Libertarian banner. 
The Libertarian Party has many firsts in its credit, such as being the first party to get an electoral vote for a woman in a United States presidential election. Learn more about Gary Johnson and Libertarian Party on Wikipedia.

    Read the article

  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those that care passionately about the customer of the product.  Note that I didn’t say about the product itself.  Actually, people that only care about the product, generally do not make good product owners.  Products only matter in relationship to their customers.  If a product doesn’t provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside of the product. A good product owner is also a good negotiator.  They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from.  A good product owner recognizes that its their job to help others around them prioritize (perhaps with a Product Council), but also understand that they alone have the final say about priorities and are willing to make the tough decisions required.  Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments! A good product owner is deeply interested in helping the team be successful.  They don’t seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the Customer.  A good product owner is never denigrating to team members, ever.  They recognize that such behavior would damage the trust that needs to be present between team members and product owners and will avoid it at all costs. In general, technical people (i.e. former or current developers) make poor product owners.  In their minds, they can’t separate implementation details from user functionality, so their stories end up sounding like implementation details.  For example, “The user enters their username on the password screen” is something that a technical product owner would write.  The proper wording for that story is “A user supplies the system with their credentials.”  Because technical people think different from the rest of the population, they are generally not a good fit. A good product owner is also a good writer.  Writing good stories demands good writing.  The art of persuasion, descriptiveness and just general good grammar are all required.  A good Product Owner must also be well spoken, since most of what will be conveyed will be conveyed with the spoken word, not just written word. A good product owner is a “People Person.”  They like talking to people and are very patient.  They don’t mind having questions repeated or fielding many questions, because they want to make sure that the ideas they’re conveying are properly understood so the customer gets the best product possible.  They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product.  They really have little ego that gets in the way of building a great product. All of these qualities can be hard to find, but if you look close enough, you’ll find the right person in your organization.  Product owners can be found anywhere, not just in upper management.  Some of the best product owners are those that are very close to the customer.  In fact, check your customer support staff.  I’d bet that several great product owners are lurking there. Final note about what makes a good product owner.  
You’re probably NOT going to find a good product owner in a manager, especially if they consider themselves a “Manager.”  Product owners don’t manage anything but the backlog, so be especially careful if the person you’re selecting for Product Owner is a manager. Up Next, “Messing with the Team.” Technorati Tags: Scrum,Product Owner

    Read the article

  • problem with network-manager-pptp

    - by Riuzaki90
    I've a problema with the VPA CAble connection of my university... on the website of the university there's a .sh file that set all the variables of the connection in ETC/PPP/PEERS and another .sh file that call the connection...I'm on ubuntu 11.10 and when I run the setup.sh I have this error: impossible to find network-manager-pptp these are the two file that I had talk about: #!/bin/bash echo "Creazione della connessione in corso attendere........." apt-get update apt-get install pptp-linux network-manager-pptp echo -n "Digitare la propria Username: " read USERNAME echo -n "Digitare la propria Password: " read PASSWORD pptpsetup --create UNICAL_Campus_Access --server 160.97.73.253 --username $USERNAME --password $PASSWORD echo 'pty "pptp 160.97.73.253 --nolaunchpppd"' >/etc/ppp/peers/UNICAL_Campus_Access echo 'require-mppe-128' >>/etc/ppp/peers/UNICAL_Campus_Access echo 'file /etc/ppp/options.pptp'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'name '$USERNAME''>>/etc/ppp/peers/UNICAL_Campus_Access echo 'remotename PPTP'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'ipparam UNICAL_Campus_Access'>>/etc/ppp/peers/UNICAL_Campus_Access echo $USERNAME' PPTP '$PASSWORD' *'>>/etc/ppp/chap-secrets rm /etc/ppp/options.pptp echo '###############################################################################'>/etc/ppp/options.pptp echo '# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# Sample PPTP PPP options file /etc/ppp/options.pptp'>>/etc/ppp/options.pptp echo '# Options used by PPP when a connection is made by a PPTP client.'>>/etc/ppp/options.pptp echo '# This file can be referred to by an /etc/ppp/peers file for the tunnel.'>>/etc/ppp/options.pptp echo '# Changes are effective on the next connection. See "man pppd".'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# You are expected to change this file to suit your system. As'>>/etc/ppp/options.pptp echo '# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/'>>/etc/ppp/options.pptp echo '# and the kernel MPPE module available from the CVS repository also on'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.'>>/etc/ppp/options.pptp echo '###############################################################################'>>/etc/ppp/options.pptp echo '# Lock the port'>>/etc/ppp/options.pptp echo 'lock'>>/etc/ppp/options.pptp echo '# Authentication'>>/etc/ppp/options.pptp echo '# We do not need the tunnel server to authenticate itself'>>/etc/ppp/options.pptp echo 'noauth'>>/etc/ppp/options.pptp echo '#We won"t do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2'>>/etc/ppp/options.pptp echo '#(you may need to remove these refusals if the server is not using MPPE)'>>/etc/ppp/options.pptp echo 'refuse-pap'>>/etc/ppp/options.pptp echo 'refuse-eap'>>/etc/ppp/options.pptp echo 'refuse-chap'>>/etc/ppp/options.pptp echo 'refuse-mschap'>>/etc/ppp/options.pptp echo '# Compression Turn off compression protocols we know won"t be used'>>/etc/ppp/options.pptp echo 'nobsdcomp'>>/etc/ppp/options.pptp echo 'nodeflate'>>/etc/ppp/options.pptp echo '# Encryption'>>/etc/ppp/options.pptp echo '# (There have been multiple versions of PPP with encryption support,'>>/etc/ppp/options.pptp echo '# choose with of the following sections you will use. 
Note that MPPE'>>/etc/ppp/options.pptp echo '# requires the use of MSCHAP-V2 during authentication)'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras'>>/etc/ppp/options.pptp echo '# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#require-mppe-128'>>/etc/ppp/options.pptp echo '#}}}'>>/etc/ppp/options.pptp echo '# http://polbox.com/h/hs001/ fork from PPP project by Jan Dubiec'>>/etc/ppp/options.pptp echo '#ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#mppe required,stateless'>>/etc/ppp/options.pptp echo '# }}}'>>/etc/ppp/options.pptp echo "setup di 'UNICAL Campus Access' terminato correttamente" echo "per connettersi eseguire lo script 'UNICAL_Campus_Access.sh' " and the second: #!/bin/bash echo "Connessione alla Rete del Centro Residenziale in corso attendere........." modprobe ppp_mppe pppd call UNICAL_Campus_Access sleep 30 tail -n 8 /var/log/messages echo "Connessione Stabilita" echo -n "Per terminare la connessione premere invio (in alternativa eseguire il commando 'killall pppd'):----> " read CONN killall pppd echo "Connessione terminata" I've correctly installed network-manager-pptp to the latest version...help?

    Read the article

  • Oracle Products Reflect Key Trends Shaping Enterprise 2.0

    - by kellsey.ruppel(at)oracle.com
    Following up on his predictions for 2011, we asked Enterprise 2.0 veteran Andy MacMillan to map out the ways Oracle solutions are at the forefront of industry trends--and how Oracle customers can benefit in the coming year. 1. Increase organizational awareness | Oracle WebCenter Suite Oracle WebCenter Suite provides a unique set of capabilities to drive organizational awareness. In particular, the expansive activity graph connects users directly to key enterprise applications, activities, and interests. In this way, applicable and critical business information is automatically and immediately visible--in the context of key tasks--via real-time dashboards and comprehensive reporting. Oracle WebCenter Suite also integrates key E2.0 services, such as blogs, wikis, and RSS feeds, into critical business processes, including back-office systems of records such as ERP and CRM systems. 2. Drive online customer engagement | Oracle Real-Time Decisions With more and more business being conducted on the Web, driving increased online customer engagement becomes a critical key to success. This effort is usually spearheaded by an increasingly important executive role, the Head of Online, who usually reports directly to the CMO. To help manage the Web experience online, Oracle solutions are driving a new kind of intelligent social commerce by combining Oracle Universal Content Management, Oracle WebCenter Services, and Oracle Real-Time Decisions with leading e-commerce and product recommendations. Oracle Real-Time Decisions provides multichannel recommendations for content, products, and services--including seamless integration across Web, mobile, and social channels. The result: happier customers, increased customer acquisition and retention, and improved critical success metrics such as shopping cart abandonment. 3. Easily build composite applications | Oracle Application Development Framework Thanks to the shared user experience strategy across Oracle Fusion Middleware, Oracle Fusion Applications and many other Oracle Applications, customers can easily create real, customer-specific composite applications using Oracle WebCenter Suite and Oracle Application Development Framework. Oracle Application Development Framework components provide modular user interface components that can build rich, social composite applications. In addition, a broad set of components spanning BPM, SOA, ECM, and beyond can be quickly and easily incorporated into composite applications. 4. Integrate records management into a global content platform | Oracle Enterprise Content Management 11g Oracle Enterprise Content Management 11g provides leading records management capabilities as part of a unified ECM platform for managing records, documents, Web content, digital assets, enterprise imaging, and application imaging. This unique strategy provides comprehensive records management in a consistent, cost-effective way, and enables organizations to consolidate ECM repositories and connect ECM to critical business applications. 5. Achieve ECM at extreme scale | Oracle WebLogic Server and Oracle Exadata To support the high-performance demands of a unified and rationalized content platform, Oracle has pioneered highly scalable and high-performing ECM infrastructures. Two innovations in particular helped make this happen. The core ECM platform itself moved to an Enterprise Java architecture, so organizations can now use Oracle WebLogic Server for enhanced scalability and manageability. 
Oracle Enterprise Content Management 11g can leverage Oracle Exadata for extreme performance and scale. Likewise, Oracle Exalogic--Oracle's foundation for cloud computing--enables extreme performance for processor-intensive capabilities such as content conversion or dynamic Web page delivery. Learn more about Oracle's Enterprise 2.0 solutions.

    Read the article

  • Where is the value of OEA

    - by [email protected]
    In a room full of architects, if you were to ask for the definition of enterprise architecture, or the importance thereof,  you are likely to get a number of varying view points ranging from,  a complete analysis of the digital assets of an organization,  to, a strategic alignment of business goals/objectives to IT initiatives.  Similiarily in a room full of senior business executives,  if you asked them how they see their IT groups and their effectiveness to align to business strategy,  you would get a myriad of responses,  ranging from, “a huge drain on our bottom line”, “always more expensive than budgeted”, “lack of agility,  by the time IT is ready,  my business strategy has changed”, and on the rare occurrence, “ a leader of innovation,  that is lock step with my business strategy”. However does this necessarily demonstrate the overall value of enterprise architecture.  Having a framework, and process is of critical importance to help produce a number of the artefacts that ultimately align technology goals and initiatives to business strategy,  however,  is that really where the value is?  I believe that first we need to understand the concept of value.  Value typically is a measure of sorts,  when we purchase a product it’s value is equivalent to the maximum amount that someone is willing to pay for the product,  however,  is the same equation valid in terms of the business value of enterprise architecture? Is the library of artefacts generated through a process/framework, inclusive of a strategic roadmap to realize the enterprise architecture where the value is? If we agree that enterprise architecture is the alignment of IT and IT assets to support business strategy, and by achieving our business strategy, we have we have increased the business value of the enterprise then;  it seems that, in order to really identify the true value of an enterprise architecture,  we need to understand how we measure business value .  A number of formal measurement methodologies exist for this purpose, business models, balanced scorecards, etc   After we have an understanding on how to measure the business value of each of the organizational units within an enterprise, then we understand how the enterprise architecture contributes to the success of business strategy,  and EXECUTE on the roadmap to implement, and deliver the IT initiatives that provide MEASUREABLE returns, As we analyse the value chain of each of the individual organizational units within the enterprise we may identify how that unit has performed by quantitatively measuring it proximity to achieving the goals defined by the business for each unit. However, It would appear that true business value (the aggregate of all of the business units in the value chain), is to some degree subjectively measured  as for public companies this lies in shareholder value,  as the true value, or be it, the maximum amount that someone would pay for shares of an organization.

    Read the article

  • Fake It Easy On Yourself

    - by Lee Brandt
    I have been using Rhino.Mocks pretty much since I started being a mockist-type tester. I have been very happy with it for the most part, but a year or so ago, I got a glimpse of some tests using Moq. I thought the little bit I saw was very compelling. For a long time, I had been using:

var _repository = MockRepository.GenerateMock<IRepository>();
_repository.Expect(repo => repo.SomeCall()).Return(SomeValue);
var _controller = new SomeKindaController(_repository);
... some exercising code
_repository.AssertWasCalled(repo => repo.SomeCall());

I was happy with that syntax. I didn't go looking for something else, but what I saw was:

var _repository = new Mock<IRepository>();

And I thought, "That looks really nice!" The code was very expressive and easier to read than the Rhino.Mocks syntax. I have gotten so used to the Rhino.Mocks syntax that it made complete sense to me, but to developers I was mentoring in mocking, it was sometimes too obtuse. So I thought I would write some tests using Moq as my mocking tool. But I discovered something ugly once I got into it. The way mocks are created makes Moq very easy to read, but that only gives you a Mock, not the object itself, which is what you'll need to pass to the exercising code. So this is what it ends up looking like:

var _repository = new Mock<IRepository>();
_repository.Setup(repo => repo.SomeCall()).Returns(SomeValue);
var _controller = new SomeKindaController(_repository.Object);
... some exercising code
_repository.Verify(repo => repo.SomeCall());

Two things jump out at me: 1) When I set up my mocked calls, do I set them on the Mock or the Mock's "Object"? and 2) What am I verifying on SomeCall? Just that it was called? That it is available to call? Dealing with two objects, a "Mock" and an "Object", made me have to consider naming conventions. Should I always call the mock _repositoryMock and the object _repository? So I went back to Rhino.Mocks. It is the most widely used framework, and showing others how to use it is easier because there is one natural object to use, the _repository. Then I came across a blog post from Patrik Hägne, and that led me to a post about FakeItEasy. I went to the Google Code site and when I saw the syntax, I got very excited. Then I read the wiki page where Patrik stated why he wrote FakeItEasy, and it mirrored my own experience. So I began to play with it a bit. So far, I am sold. The syntax is VERY easy to read and the fluent interface is super discoverable. It basically looks like this:

var _repository = A.Fake<IRepository>();
A.CallTo(() => _repository.SomeMethod()).Returns(SomeValue);
var _controller = new SomeKindaController(_repository);
... some exercising code
A.CallTo(() => _repository.SomeMethod()).MustHaveHappened();

Very nice. But is it mature? It's only been around a couple of years, so will I be giving up something that I use a lot because it hasn't been implemented yet? It doesn't seem so. As I read more examples and posts from Patrik, he has some pretty complex scenarios. He even has support for VB.NET! So if you are looking for a mocking framework that looks and feels very natural, try out FakeItEasy!
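
If it helps to see that FakeItEasy flow inside a complete, compilable test, here is a minimal sketch (assuming the FakeItEasy and NUnit packages are referenced); IRepository, Widget, and SomeKindaController are hypothetical stand-ins rather than code from the post:

// Minimal NUnit test sketch using FakeItEasy; the types below are
// hypothetical stand-ins, not types from the original post.
using FakeItEasy;
using NUnit.Framework;

public interface IRepository
{
    Widget GetWidget(int id);
}

public class Widget
{
    public string Name { get; set; }
}

public class SomeKindaController
{
    private readonly IRepository _repository;

    public SomeKindaController(IRepository repository)
    {
        _repository = repository;
    }

    public string ShowWidget(int id)
    {
        // Exercises the dependency so the fake has an interaction to record.
        return _repository.GetWidget(id).Name;
    }
}

[TestFixture]
public class SomeKindaControllerTests
{
    [Test]
    public void ShowWidget_asks_the_repository_for_the_widget()
    {
        // Arrange: one object to configure and to inject -- no .Object property needed.
        var repository = A.Fake<IRepository>();
        A.CallTo(() => repository.GetWidget(42)).Returns(new Widget { Name = "gear" });
        var controller = new SomeKindaController(repository);

        // Act
        var name = controller.ShowWidget(42);

        // Assert: check the returned value and verify the interaction.
        Assert.AreEqual("gear", name);
        A.CallTo(() => repository.GetWidget(42)).MustHaveHappened();
    }
}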

    Read the article

  • Hidden exceptions

    - by user12617285
    Occasionally you may find yourself in a Java application environment where exceptions in your code are being caught by the application framework and either silently swallowed or converted into a generic exception. Either way, the potentially useful details of your original exception are inaccessible. Wouldn't it be nice if there were a VM option that showed the stack trace for every exception thrown, whether or not it's caught? In fact, HotSpot includes such an option: -XX:+TraceExceptions. However, this option is only available in a debug build of HotSpot (search globals.hpp for TraceExceptions). And based on a quick skim of the HotSpot source code, this option only prints the exception class and message. A more useful capability would be to have the complete stack trace printed as well as the code location catching the exception. This is what the various TraceException* options in Maxine do (and more). That said, there is a way to achieve a limited version of the same thing with a stock standard JVM. It involves the use of the -Xbootclasspath/p non-standard option. The trick is to modify the source of java.lang.Exception by inserting the following:

private static final boolean logging = System.getProperty("TraceExceptions") != null;

private void log() {
    if (logging && sun.misc.VM.isBooted()) {
        printStackTrace();
    }
}

Then every constructor simply needs to be modified to call log() just before returning:

public Exception(String message) {
    super(message);
    log();
}

public Exception(String message, Throwable cause) {
    super(message, cause);
    log();
}

// etc...

You now need to compile the modified Exception.java source and prepend the resulting class to the boot class path, as well as add -DTraceExceptions to your java command line. Here's a console session showing these steps:

% mkdir boot
% javac -d boot Exception.java
% java -DTraceExceptions -Xbootclasspath/p:boot -cp com.oracle.max.vm/bin test.output.HelloWorld
java.util.zip.ZipException: error in opening zip file
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:127)
    at java.util.jar.JarFile.<init>(JarFile.java:135)
    at java.util.jar.JarFile.<init>(JarFile.java:72)
    at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:646)
    at sun.misc.URLClassPath$JarLoader.access$600(URLClassPath.java:540)
    at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:607)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599)
    at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:583)
    at sun.misc.URLClassPath$3.run(URLClassPath.java:333)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.misc.URLClassPath.getLoader(URLClassPath.java:322)
    at sun.misc.URLClassPath.getLoader(URLClassPath.java:299)
    at sun.misc.URLClassPath.getResource(URLClassPath.java:168)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at sun.misc.Launcher$ExtClassLoader.findClass(Launcher.java:229)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:295)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
java.security.PrivilegedActionException
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599)
    at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:583)
    at sun.misc.URLClassPath$3.run(URLClassPath.java:333)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.misc.URLClassPath.getLoader(URLClassPath.java:322)
    ...

It's worth pointing out that this is not as useful as direct VM support for tracing exceptions. It has (at least) the following limitations:
- The trace is shown for every exception, whether it is thrown or not.
- It only applies to subclasses of java.lang.Exception, as there appear to be bootstrap issues when the modification is applied to Throwable.java.
- It does not show you where the exception was caught.
- It involves overriding a class in rt.jar, something that should never be done in a non-development environment.
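
To see the patch in action on your own code, a throwaway program along these lines works as a sanity check; TraceDemo is a hypothetical example, not part of the original post, and it assumes the patched Exception class sits in the boot directory created above:

// Hypothetical demo: with the patched java.lang.Exception prepended to the
// boot class path and -DTraceExceptions set, the stack trace is printed the
// moment the exception object is constructed, even though it is swallowed.
public class TraceDemo {

    static void parse(String s) {
        try {
            Integer.parseInt(s);   // throws NumberFormatException for "abc"
        } catch (NumberFormatException e) {
            // silently swallowed -- normally the details would be lost here
        }
    }

    public static void main(String[] args) {
        parse("abc");
        System.out.println("done");
    }
}

Compile and run it with, for example, % javac TraceDemo.java followed by % java -DTraceExceptions -Xbootclasspath/p:boot TraceDemo, and the swallowed NumberFormatException is traced anyway.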

    Read the article

  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects are disabled. I think I've found the reason why, though: https://wiki.ubuntu.com/X/Config/Multihead/ "As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects." When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings put the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop larger than the maximum 2048 horizontal resolution needed for hardware acceleration on my netbook hardware. So, it seems like if I can modify xrandr's default settings so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore Compiz, will continue to work. Can anyone tell me, what is the easiest way to achieve this?

UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor, starting the Monitors configuration menu (at which point desktop effects are disabled automatically and all of the windows on my workspaces are moved to the first workspace), repositioning the external display so that it is north of the LVDS display and clicking apply, and then navigating to the Appearance menu and telling it to reenable desktop effects. Is there a simpler way to do this?

UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever, and might be disabling desktop effects. So, I just tried using the xrandr command-line client instead, as follows:

xrandr --output VGA1 --above LVDS1

When I do that, desktop effects are still disabled, and I need to manually reenable them. This, despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever, and is turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround.

UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure:

second-monitor.sh
xrandr --output VGA1 --above LVDS1
sleep 3
compiz --replace &

The sleep statement might not be necessary. If there's a better way to do this, please let me know.

UPDATE 4: This is a Dell Mini Inspiron 1012.
Here are my system specifications:

lspci -vv
00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller
    Subsystem: Dell Device 041a
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Interrupt: pin A routed to IRQ 29
    Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K]
    Region 1: I/O ports at 18d0 [size=8]
    Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M]
    Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M]
    Capabilities: <access denied>
    Kernel driver in use: i915
    Kernel modules: i915

00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
    Subsystem: Dell Device 041a
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K]
    Capabilities: <access denied>

lsmod | grep i915
i915            287458  2
drm_kms_helper   29329  1 i915
drm             162409  3 i915,drm_kms_helper
intel_agp        24375  2 i915
i2c_algo_bit      5028  1 i915
video            17375  1 i915
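
If it helps, here is one possible refinement of the script from UPDATE 3 -- a sketch only, not a tested recipe -- that guards the xrandr call so it can be run whether or not the external monitor is plugged in (the output names VGA1 and LVDS1 are taken from the question and may differ on other hardware):

#!/bin/sh
# second-monitor.sh -- position the external output above the laptop panel,
# then restart compiz so desktop effects come back.
# Sketch only; output names (VGA1, LVDS1) come from the question above.

if xrandr --query | grep -q "^VGA1 connected"; then
    xrandr --output VGA1 --above LVDS1
else
    # external monitor unplugged: fall back to the internal panel alone
    xrandr --output VGA1 --off --output LVDS1 --auto
fi

sleep 3            # give X a moment to settle before replacing the window manager
compiz --replace &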

    Read the article

  • Using AdMob with Games that use Open GLES

    - by Vishal Kumar
    Can anyone help me integrate AdMob into my game? I've used the badlogic framework by Mario Zechner, and my game is like the SuperJumper. I am unable to use AdMob after a lot of attempts. I am new to Android development... please help me... I went through a number of tutorials but I am not getting ads displayed. I did the following: get the libraries and place them properly.

My main.xml looks like this:
android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" /

My Activity class onCreate method:

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    RelativeLayout layout = new RelativeLayout(this);
    adView = new AdView(this, AdSize.BANNER, "a1518637fe542a2");
    AdRequest request = new AdRequest();
    request.addTestDevice(AdRequest.TEST_EMULATOR);
    request.addTestDevice("D77E32324019F80A2CECEAAAAAAAAA");
    adView.loadAd(request);
    layout.addView(glView);
    RelativeLayout.LayoutParams adParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.WRAP_CONTENT, RelativeLayout.LayoutParams.WRAP_CONTENT);
    adParams.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM);
    adParams.addRule(RelativeLayout.CENTER_IN_PARENT);
    layout.addView(adView, adParams);
    setContentView(layout);
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN);
    glView = new GLSurfaceView(this);
    glView.setRenderer(this);
    //setContentView(glView);
    glGraphics = new GLGraphics(glView);
    fileIO = new AndroidFileIO(getAssets());
    audio = new AndroidAudio(this);
    input = new AndroidInput(this, glView, 1, 1);
    PowerManager powerManager = (PowerManager) getSystemService(Context.POWER_SERVICE);
    wakeLock = powerManager.newWakeLock(PowerManager.FULL_WAKE_LOCK, "GLGame");
}

My manifest file looks like this:

<activity android:name="com.google.ads.AdActivity" android:configChanges="keyboard|keyboardHidden|orientation|smallestScreenSize"/>
</application>
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-sdk android:minSdkVersion="7" />

When I first decided to use the XML approach for AdMob, it showed no changes; it didn't even log the device id in Logcat. Later, when I wrote the code in the main Activity class and ran it, the application crashed, with 7 errors every time:

The android:configChanges value of the com.google.ads.AdActivity must include screenLayout.
The android:configChanges value of the com.google.ads.AdActivity must include uiMode.
The android:configChanges value of the com.google.ads.AdActivity must include screenSize.
The android:configChanges value of the com.google.ads.AdActivity must include smallestScreenSize.
You must have AdActivity declared in AndroidManifest.xml with configChanges.
You must have INTERNET and ACCESS_NETWORK_STATE permissions in AndroidManifest.xml.

Please help me by telling me what is wrong with the code. Can I write the code only in XML files, without changing the Activity class? I will be very grateful to anyone providing support.
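
For reference, the error messages list exactly what the AdMob SDK expects; a manifest sketch along these lines would satisfy those checks (this is an illustration based on the errors quoted above, not a snippet from the question):

<!-- Illustration only: AdActivity declaration and permissions that the
     quoted SDK errors ask for; the rest of the manifest stays as posted. -->
<activity android:name="com.google.ads.AdActivity"
          android:configChanges="keyboard|keyboardHidden|orientation|screenLayout|uiMode|screenSize|smallestScreenSize"/>

<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>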

    Read the article

  • Great ADF Content at Collaborate 12

    - by Shay Shmeltzer
    If you are attending the Collaborate 12 conference this month in Las Vegas and you are interested in Oracle ADF, you are going to be very busy. There are more than 20 sessions covering ADF and a special Wednesday ADF Enterprise Methodology Group event focused on ADF. Sessions range from how to get started to deep technical dives and real world war stories of customers and their implementations. Also don't forget to drop by the Oracle ADF booth at the Oracle Demoground and say hello. Here is a quick list of sessions that list ADF as a keyword in their content:

Sun. Apr. 22 | 9613 | A Fusion Approach to Building Unified and Scalable Applications With Rich User Experience | 1:00 pm - 2:00 pm
Mon. Apr. 23 | 669 | Fusion DBA Boot Camp: Tailoring Your Application to Customer Needs in an Upgrade-safe Way - Support in ADF and Fusion Apps | 9:45 am - 10:45 am
Mon. Apr. 23 | 438 | Oracle Fusion Applications Security | 9:45 am - 10:45 am
Mon. Apr. 23 | 176 | How to get started with Oracle ADF | 12:15 pm - 12:45 pm
Mon. Apr. 23 | 330 | Fusion DBA Boot Camp: Implementing Self-Service Portals for Partners/Distributors Using EBS/WebCenter/Fusion Technologies | 1:15 pm - 2:15 pm
Mon. Apr. 23 | 288 | Working with Portlets in ADF and Webcenter | 2:30 pm - 3:30 pm
Tue. Apr. 24 | 503 | Who’s Converting My Portal? | 2:00 pm - 3:00 pm
Tue. Apr. 24 | 9370 | Coexistence of Oracle E-Business Suite and Oracle Fusion Applications: Platform Perspective | 2:00 pm - 3:00 pm
Wed. Apr. 25 | 647 | Developing Custom BI Solutions - OBIEE vs. Oracle Application Development Framework (ADF) | 9:30 am - 10:30 am
Wed. Apr. 25 | 173 | ADF: A Path to the Future for Dinosaur Nerds | 11:00 am - 12:00 pm
Wed. Apr. 25 | 581 | How Will You Build Your Next System? | 11:00 am - 12:00 pm
Wed. Apr. 25 | 10351 | Integrating CRM On Demand With the E-Business Suite to Supercharge Your Sales Team | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 9348 | Mobile, ADF, Coherence and Live Data Streaming? A Herbalife Case Study | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 566 | Getting Started with ADF | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 775 | WebCenter Portal Template Design and Development Best Practices | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 791 | Surfacing Oracle Social Network into Your Business Applications | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 9407 | The Latest Oracle E-Business Suite Release User Interface and Usability Enhancements | 1:00 pm - 2:00 pm
Wed. Apr. 25 | 100080 | Extending JD Edwards with Oracle ADF and Oracle SOA Suite | 3:00 pm - 4:00 pm
Wed. Apr. 25 | 172 | JDeveloper ADF and the Oracle database – friends not foes | 3:00 pm - 4:00 pm
Wed. Apr. 25 | 595 | Achieving Real-Time Social Collaboration in WebCenter 11g | 3:00 pm - 4:00 pm
Wed. Apr. 25 | 164 | ADF + Faces: Do I Have to Write ANY Java code? | 4:15 pm - 5:15 pm
Thu. Apr. 26 | 257 | Mobile App Development with Oracle ADF Mobile: develop once and for all | 8:30 am - 9:30 am
Thu. Apr. 26 | 177 | Understanding Oracle ADF and its role in Oracle Fusion Middleware | 9:45 am - 10:45 am
Thu. Apr. 26 | 523 | Making Next-Generation Mobile Apps With The Latest ADF Mobile Tools | 9:45 am - 10:45 am
Thu. Apr. 26 | 356 | ADF Integration with WebCenter Content | 11:00 am - 12:00 pm

    Read the article

  • Playing HTML5 Video with fall back for IE8/IE7 and earlier versions of other browsers using Silverlight

    - by Harish Ranganathan
    One of the popular HTML5 tags is the video tag.  The ability to play videos without depending on a plugin is something that excites web developers to a great extent, and no wonder you end up seeing video demos in all HTML5 conferences. Now, coming to HTML5 video, the tag itself is simply <video id=”ID” src=”FILENAME.mp4/ogv/webm” > in its simplest form.  This also means that the video needs to be an H.264-encoded MP4 or one of the other formats mentioned above.  For a detailed specification on this, check this Wikipedia article. HTML5 video is supported by all the modern browsers, such as IE9 (currently in RC stage), Mozilla Firefox 4 and the latest versions of Chrome.  Here below is a simple example of an HTML5 video tag and a screen shot of how it looks in IE9 RC:

<!DOCTYPE html>
<head></head>
<body>
<h1>This is a sample of an HTML5 Video</h1>
<video src="video.mp4" id="myvideo">Your browser doesn’t support this currently</video>
</body>
</html>

You can add attributes to the video tag such as “autoplay”, which will automatically start playing the video.  Also, you can specify “poster” to display an initial picture before the video starts playing, etc., but I am not going into those for now. This would play well in the modern browsers mentioned above.  However, if the end users are viewing this page from an earlier version of browsers such as IE8/IE7 or IE6, this video wouldn’t play.  Whatever text is specified between the video tags would just show up. Note: for demo purposes, I went to the IE9 developer toolbar and chose IE8 as Browser Mode to exhibit this legacy behaviour.  However, in the interest of serving the larger community of users who visit the site, we would like to have a fall back mechanism for playing videos on older versions of browsers. Now, Silverlight is supported in IE6/7 & 8 and other browsers too.  If we can have the same video encoded for Silverlight, we can put in the fall-back code as follows:

<video src="videos/video.mp4" id="myvideo">
    <object height="252" type="application/x-silverlight-2" width="448">
        <param name="source" value="resources/player.xap">
        <param name="initParams" value="deferredLoad=true, duration=0, m=http://localhost/DemoSite/videos/video.mp4, autostart=false, autohide=true, showembed=true, postid=0" />
        <param name="background" value="#00FFFFFF" />
    </object>
</video>

Note, this sample uses a Silverlight XAP file with the same video and uses the object tag to embed it instead of the HTML5 video tag. So, when I now run this sample and switch to IE8 (using the IE9 Developer toolbar’s Browser Mode), I get the Silverlight fall-back, and the video plays when clicking on the “Play” icon. Note, there are multiple ways to play videos in Silverlight and this is one of the ways.  For a complete list of Silverlight samples, visit http://www.silverlight.net/learn/  Also, we can use Flash for the fall-back mechanism as well. Thus, we can create a fall-back mechanism for playing HTML5 videos on older browsers and hence ensure that the end users get the same experience. Cheers !!!
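
As a side note -- a sketch based on the standard HTML5 pattern rather than on the original post -- the same fall-back chain can also be expressed declaratively with multiple source elements, so the browser picks the first format it can play before ever reaching the Silverlight object (the file names here are placeholders):

<!-- Sketch only: video.mp4, video.webm and player.xap are placeholder file names. -->
<video id="myvideo" controls>
    <source src="video.mp4"  type="video/mp4">
    <source src="video.webm" type="video/webm">
    <!-- Browsers without HTML5 video fall through to the Silverlight object -->
    <object height="252" width="448" type="application/x-silverlight-2">
        <param name="source" value="player.xap">
        <param name="initParams" value="m=video.mp4, autostart=false">
        Your browser supports neither HTML5 video nor Silverlight.
    </object>
</video>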

    Read the article

  • Microsoft hosting free Hyper-V training for VMware Pros

    - by Ryan Roussel
    Microsoft will be hosting free training for virtualization professionals focused on Hyper-V, System Center, and virtualization architecture.  Details are below:

Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros.  Registration for “Microsoft Virtualization for VMware Professionals” is open now and the course will be delivered as an online class on March 29-31, 2011 from 10:00am-4:00pm PDT.    The course is COMPLETELY FREE and OPEN TO ANYONE!  Please share with your customers, blog, Tweet, etc. – help us get the word out to strengthen support for Microsoft’s virtualization offerings.

What’s the high-level overview? This cutting edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS).  Register Now!

Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering)
10:00am – 10:30am PDT:  Virtualization 360 Overview
10:30am – 12:00pm:  Microsoft Hyper-V Deployment Options & Architecture
1:00pm – 2:00pm:  Differentiating Microsoft and VMware (terminology, etc.)
2:00pm – 4:00pm:  High Availability & Clustering

Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions)
10:00am – 11:00am PDT:  System Center Suite Overview w/ focus on DPM
11:00am – 12:00pm:  Virtual Machine Manager 2012 | Part 1
1:00pm – 1:30pm:  Virtual Machine Manager 2012 | Part 2
1:30pm – 2:30pm:  Automation with System Center Opalis & PowerShell
2:30pm – 4:00pm:  Private Cloud Solutions, Architecture & VMM SSP 2.0

Day 3 will focus on “VDI” (VDI infrastructure/architecture, v-Alliance, application delivery via VDI)
10:00am – 11:00am PDT:  Virtual Desktop Infrastructure (VDI) Architecture | Part 1
11:00am – 12:00pm:  Virtual Desktop Infrastructure (VDI) Architecture | Part 2
1:00pm – 2:30pm:  v-Alliance Solution Overview
2:30pm – 4:00pm:  Application Delivery for VDI

Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant Corey Hynes.

Who is the target audience for this training? Suggested prerequisite skills include real-world experience with Windows Server 2008 R2, virtualization and datacenter management. The course is tailored to these types of roles:
· IT Professional
· IT Decision Maker
· Network Administrators & Architects
· Storage/Infrastructure Administrators & Architects

How do I register and learn more about this great training opportunity?
· Register: Visit the Registration Page and sign up for all three sessions
· Blog: Learn more from the Microsoft Learning Blog
· Twitter: Here are a few posts you can retweet:
o Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V @MSLearning #Hyper-V
o @SysCtrOpalis Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V #Hyper-V
o Learn all the cool new features in Hyper-V & System Center 2012! SCVMM, Self-Service Portal 2.0, http://bit.ly/JS-Hyper-V #Hyper-V #Opalis

What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format. The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications.  All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.

    Read the article

  • Knowledge Management Feedback

    - by Robert Schweighardt
    Did you know that you can provide feedback on Knowledge Management (KM) articles? It's nice to read a technical article that is well-written: the grammar and spelling are correct, the information is up to date, concise, to the point, easy to understand, and it flows from one paragraph to another.  And though we always strive for a well-written article, it doesn't always come out that way. Knowledge Management articles are written by Oracle Support Engineers and we welcome your feedback.  Providing feedback helps to improve Oracle's Knowledge Base.  If you're reading a KM article and you have a comment, please let us know about it.  Maybe it's just to fix a spelling or grammatical error.  Maybe there's a broken link that needs to be fixed.  Maybe it's a suggestion to provide additional information.  Maybe the article contains incorrect information.  Maybe some information in the article is outdated.  Maybe something is not clear in the article.  Whatever it is, we want to hear about it.  We value your input!

When you provide feedback it goes directly to the owner of the article.  The owner carefully reviews the comment and decides whether or not to implement it.  Most comments are implemented and we strive to implement them within a week!  For those comments that are not implemented, there is normally a good reason.  It may not be feasible to implement the suggestion or the suggestion may not be correct.  We don't take the decision lightly!

So how do you provide feedback? Providing feedback on a KM article depends on whether you're a customer or an Oracle employee.

Customer
1. In the upper right hand corner of the article, click on the little +/- Rate this document icon. Note: The grayed out Comments (0) link will only show a number when there are open comments that are still being evaluated.
2. In the Article Rating window, complete as many of the following optional fields as you like and then click the Send Rating button:
- Rate the article as Excellent, Good or Poor
- Specify whether the article helped you or not
- Specify the ease of finding the article
- Provide whatever comments you have

Employee
The interface for Oracle employees is a little bit different; there are more options.
1. The +/- Rate this document icon is also available to employees and is identical to what the customers have.  Please see the Customer section above.
2. The Show document comments link shows all comments that have ever been submitted for the article.
3. Employees have an additional way to submit a comment.  Click on the little + Add Comment icon.
4. Fill out the Add Comment fields and click the Add Comment button.

We look forward to your feedback!

    Read the article

  • i don't receive mail notification...Nagios Core 4

    - by alessio
    I have a problem with automatic mail notification in Nagios Core 4 installed on an Ubuntu 12.04 LTS server... I have tried to send mail as the nagios user and the root user with the command:

echo "test" | mail -s "test mail" [email protected]

and I received the mail correctly... but I don't receive any automatic mail notifications... I don't know what I can do to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log):

commands.cfg (the path /usr/bin/mail is the right path):

# 'notify-host-by-email' command definition
define command{
    command_name    notify-host-by-email
    command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
    }

# 'notify-service-by-email' command definition
define command{
    command_name    notify-service-by-email
    command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
    }

# 'process-host-perfdata' command definition
define command{
    command_name    process-host-perfdata
    command_line    /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out
    }

# 'process-service-perfdata' command definition
define command{
    command_name    process-service-perfdata
    command_line    /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out
    }

contacts.cfg:

define contact{
    contact_name                    supporto
    alias                           Supporto Clienti DEA
    service_notification_period     24x7
    host_notification_period        24x7
    service_notification_options    w,u,c,r
    host_notification_options       d,r
    service_notification_commands   notify-service-by-email
    host_notification_commands      notify-host-by-email
    email                           [email protected]
    }

define contactgroup{
    contactgroup_name   admins
    alias               Nagios Administrators
    members             supporto
    }

nagios.log:

[1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
[1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago
[1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
[1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
[1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
[1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
[1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
[1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB)
[1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK
[1401872931] Auto-save of retention data completed successfully.
[1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago
[1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago

mail.log (I think the problem is here but I don't know how to resolve it):

Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry
Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name
Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry
Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name
Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry
Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name
Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry
Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name
Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry
Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name

I'm at the last step and I want to finish this Nagios Core! :) Any help is appreciated! :)

Host definition (this host has its disk almost full and it is in a hard state, but no notification):

define host{
    use             generic-host    ; Name of host template to use
    host_name       posta
    alias           Server Posta ESA
    address         10.10.2.102
    parents         xen1, xen2
    icon_image      redhat.png
    statusmap_image redhat.gd2
    }

Service definition:

define service{
    use                     generic-service
    host_name               xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi
    service_description     Disk Space
    check_command           ssh_all_disks!10%!5%
    }

Notification is allowed for the contact definition you gave, but is it also allowed at the service level? Sorry, but I don't understand this thing! :(
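
For what it's worth, the closing comment is pointing at the host and service objects themselves: notifications only fire if those objects (or the generic-host/generic-service templates they inherit from) carry contact and notification directives. A sketch of what that could look like for the posta host and its Disk Space service is below; the notification values are illustrative, not taken from the question:

# Illustrative only: notification-related directives that need to be present
# either here or in the generic-host/generic-service templates.
define host{
    use                     generic-host
    host_name               posta
    alias                   Server Posta ESA
    address                 10.10.2.102
    contact_groups          admins        ; who receives host alerts
    notifications_enabled   1
    notification_period     24x7
    notification_interval   30            ; re-notify every 30 minutes
    notification_options    d,u,r
    }

define service{
    use                     generic-service
    host_name               posta
    service_description     Disk Space
    check_command           ssh_all_disks!10%!5%
    contact_groups          admins        ; who receives service alerts
    notifications_enabled   1
    notification_period     24x7
    notification_interval   30
    notification_options    w,u,c,r
    }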

    Read the article

< Previous Page | 650 651 652 653 654 655 656 657 658 659 660 661  | Next Page >