Search Results

Search found 15139 results on 606 pages for 'scripting interface'.


  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article Dublin's Frothing with Tech Fever. Game development is big business in Ireland too.) Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users’ problems such as: Becoming familiar or expert with an application or process Building loyalty, customer satisfaction, and branding relationships Collaborating effectively and populating content in the community Completing tasks or solving problems on time Encouraging teamwork to achieve goals Improving data accuracy and completeness of entry Locating and managing the correct resources or information Managing changes and exceptions Setting and reaching targets, quotas, or objectives Games’ Incentives, Motivation, and Behavior I asked Julian Orr, Senior Usability Engineer, in the Oracle Fusion Applications CRM User Experience (UX) team for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: “The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation.” For example, in CRM service applications, we might look at the motivations of typical service applications users (see figure 1) and then determine how we can 'gamify' these motivations with techniques to optimize the desired work behavior for the role (see figure 2). Involving Our Users Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. Read more... However, let’s be clear: gamifying a user interface that’s poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them. So, Let the Games Begin! Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you’re interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Please share them in the comments.

    Read the article

  • What's the best way to create a static utility class in python? Is using metaclasses code smell?

    - by rsimp
    Ok, so I need to create a bunch of utility classes in python. Normally I would just use a simple module for this, but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it, so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for python properties. One pattern I see used a lot is creating an internal python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example:

    #util.py
    class MetaFooBase(type):
        @property
        def file_path(cls):
            raise NotImplementedError

        def inherited_method(cls):
            print cls.file_path

    #foo.py
    from util import *
    import env

    class MetaFoo(MetaFooBase):
        @property
        def file_path(cls):
            return env.base_path + "relative/path"

        def another_class_method(cls):
            pass

    class Foo(object):
        __metaclass__ = MetaFoo

    #client.py
    from foo import Foo
    file_path = Foo.file_path

    I like this approach better than the first pattern for a few reasons: First, instantiating Foo would be meaningless as it has no attributes or methods, which ensures this class acts like a true single-interface utility, unlike the first pattern which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, sub-classing MetaFoo in a different module wouldn't be as awkward because I wouldn't be importing a class with an underscore, which is inherently going against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object, which would prevent users from trying to use it as a base for other non-static classes. Its implementation as a static class is also apparent when using it by the naming convention Foo, as opposed to foo, which denotes a static class method is being used. As much as I think this is a good fit, I feel that others might feel it's not pythonic because it's not a sanctioned use for metaclasses, which should be avoided 99% of the time. I also find most python devs tend to shy away from metaclasses, which might affect code reuse/maintainability. Is this code considered code smell in the python community? I ask because I'm creating a PyPI package, and would like to do everything I can to increase adoption.
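
    For comparison, here is a minimal sketch of the underscore-prefixed class / single-instance pattern mentioned above (the fabric.api.env style). The names _Paths and paths are hypothetical, and env.base_path is assumed to exist as in the metaclass example:

    # paths.py -- a minimal sketch of the "single shared instance" utility pattern
    # (hypothetical names; assumes an env module with a base_path attribute)
    import env

    class _Paths(object):
        """Internal class; the leading underscore discourages extra instances."""
        @property
        def file_path(self):
            return env.base_path + "relative/path"

        def inherited_method(self):
            return self.file_path

    # The module exposes one shared instance instead of the class itself.
    paths = _Paths()

    # client code:
    #   from paths import paths
    #   print paths.file_path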

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, or as part of an OS Provisioning plan or combinations of those and other "Install Software" capabilities of Deployment Plans.  This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles. Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that.  There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, and other scripts, etc. are needed to perform the action, and sometimes the user would like to leave such content on the system for later use. For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into Solaris 10 SVR4 package, Solaris 11 IPS package, and/or a Linux RPM's might be seen as three times the work, for little appreciable gain.   That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful.  The approach is so powerful, that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large, and it is not necessary to convert the content into UNIX variant-specific formats. The basic formula for deploying content with an Operational Profile is as follows: Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content. Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run.  Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but also useful with the generic target type of "Operating System" where the admin wants the script to "do the right thing," whatever the UNIX variant. Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution.  Such messages will be shown in the job execution log. Include necessary "clean up" steps for normal and error exit conditions Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script. That first bullet deserves some explanation.  If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how does the actual content, packages, binaries, and other scripts get delivered along with the script?  More specifically, how does one include such content without needing to first create some kind of traditional package?   All that is required is to simply encode the content and append it to the end of the Operational Profile.  The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.  
The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content. One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile.   The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses.  For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you will uuencode and paste at the end of the Operational Profile.  You can have as many "begin" / "end" clauses as you need -- just separate each embedded file by an empty line between "begin" and "end" clauses. Example:  Install SUNWsneep and set the system serial number. Script:  deploySUNWsneep.sh ( <- right-click / save to download) Highlights:

#!/bin/sh
# Required variables:
OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
...

Above is a good practice, showing right up front what kind of input the Operational Profile will require.   The right-hand side where $OC_SERIAL appears in this example will be filled in by Ops Center based on the user input at deployment time. The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older, in this example, but other content might be suitable for Solaris 11, or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):

# Pass myself through uudecode, which will extract content to the current dir
uudecode $0

At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory.  In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip, and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL" which is the command that stores the system serial for future use by other programs such as Explorer.   Don't get hung up on the example having used a pkgadd command.  The content started as a zip file and it could have been a tar.gz, or any other file.  This approach simply decodes the file.  The header portion of the script has to make sense of the file and do the right thing (i.e. it's up to you). The script goes on to clean up after itself, whether or not the above was successful.  Errors are echo'd by the script and a non-zero exit code is set where appropriate. Second to last, we have:

# just in case, exit explicitly, so that uuencoded content will not cause error
OPCleanUP
exit

# The rest of the script is ignored, except by uudecode
#
# UUencoded content follows
#
# e.g. for each file needed,
#  $ uuencode -m {source} {source} > {target}.uu5
# then paste the {target}.uu5 files below
# they will be extracted into the working dir at $TDIR
#

The commentary above also describes how to encode the content. Finally we have the uuencoded content:

begin-base64 444 SUNWsneep.7.0.zip
UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
...
VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
====

That last line of "====" is the base64 uuencode equivalent of a blank line, followed by "end" and, as mentioned, you can have as many begin/end clauses as you need.  Just separate each embedded file by a blank line after each ==== and before each begin-base64. Deploying the example Operational Profile is as simple as pasting the system serial number into the required field. The job succeeded, and the job details show the kind of diagnostic messages that the example script produces and how Ops Center displays them. This same general approach could be used to deploy Explorer, and other useful utilities and scripts. Please let us know what you think!  Until next time... Leon -- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn | Newsletter
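
P.S. If you would rather script the encoding step itself instead of running uuencode by hand, here is a rough, hypothetical Python sketch (not part of the original post) that appends payload files to a header script in the same begin-base64 / ==== layout that uudecode expects; adjust names and modes to suit:

# build_profile.py -- hypothetical sketch: append uuencode-style base64 blocks
# to the end of an Operational Profile header script
import base64

def append_payload(profile_path, files):
    with open(profile_path, "a") as out:
        for name in files:
            with open(name, "rb") as f:
                data = base64.b64encode(f.read()).decode("ascii")
            out.write("\nbegin-base64 644 %s\n" % name)
            # wrap the base64 text into short lines; uudecode does not care
            # about the exact line width
            for i in range(0, len(data), 60):
                out.write(data[i:i + 60] + "\n")
            out.write("====\n")  # terminator that uudecode looks for

# Example: append_payload("deploySUNWsneep.sh", ["SUNWsneep.7.0.zip"])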

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a “test” hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

    C:> winrm quickconfig
    WinRM is not set up to receive requests on this machine.
    The following changes must be made:
    Set the WinRM service type to auto start.
    Start the WinRM service.
    Make these changes [y/n]? y

    Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

    $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, till now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format. Oracle Solaris Cluster 4.2 provides a new option tag argument RG_FAILOVER_LOG for the existing API command scha_cluster_get which can be used to list recent failover / switchover events for resource groups. The command usage is as shown below: # scha_cluster_get -O RG_FAILOVER_LOG number_of_days number_of_days : the number of days to be considered for scanning the historical logs. The command returns a list of events in the following format. Each field is separated by a semi-colon [;]: resource_group_name;source_nodes;target_nodes;time_stamp source_nodes: node_names from which resource group is failed over or was switched manually. target_nodes: node_names to which the resource group failed over or was switched manually. There is a corresponding enhancement in the C API function scha_cluster_get() which uses the SCHA_RG_FAILOVER_LOG query tag. In the example below geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of Geographic Edition setup. # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3 geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014 geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014 oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014 asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014 asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014 oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014 geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014 geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014 asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014 asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014 oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014 oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014 oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014 Note that in the output some of the entries might have an empty string in the source_nodes. Such entries correspond to events in which the resource group is switched online manually or during a cluster boot-up. Similarly, an empty destination_nodes list indicates an event in which the resource group went offline. - Arpit Gupta, Harish Mallya
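
    Because each record in the RG_FAILOVER_LOG output is a simple semicolon-separated line, it is easy to post-process. The following is a small, hypothetical Python sketch (my own, not part of the Oracle documentation) that runs the command and classifies each event according to the empty source_nodes / target_nodes convention described above:

    # parse_failover_log.py -- hypothetical sketch for parsing
    # '/usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG <days>' output
    import subprocess

    def failover_events(days):
        out = subprocess.check_output(
            ["/usr/cluster/bin/scha_cluster_get", "-O", "RG_FAILOVER_LOG", str(days)])
        for line in out.decode().splitlines():
            if not line.strip():
                continue
            # resource_group_name;source_nodes;target_nodes;time_stamp
            rg, source, target, stamp = line.split(";")
            if not source:
                kind = "brought online (manual switch or cluster boot)"
            elif not target:
                kind = "went offline"
            else:
                kind = "moved from %s to %s" % (source, target)
            yield rg, stamp, kind

    for rg, stamp, kind in failover_events(3):
        print("%s: %s at %s" % (rg, kind, stamp))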

    Read the article

  • Telesharp – An Application Repository for .NET applications

    - by cibrax
    A year ago, we released SO-Aware as our first product in Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts like configuration, tests or monitoring data in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services. At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design time artifacts generated during the development like configuration, application description or topology (a high level view of the components that made up a system), logging information or binaries. It took us several months to give a form to that idea and implement it as a product, but it is finally here and I am very proud to announce the release today under the name of “TeleSharp”. Telesharp provides in a nutshell the following features, 1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact each other. For example, you can tell that the CRM system is made up of a couple of WCF services and a ASP.NET MVC front end. 2. Centralize configuration for your applications and components.  You can import existing .NET configuration sections into the repository and associate them to the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of powershell commands for automating the configuration deployment. 3. Browse all the assemblies and types remotely in your application servers in a web browser using an interface similar to any of the existing .NET reflection tools. You can easily determine this way whether the server is running the correct version of your applications. 4. Centralize logging and exception management into the repository. You get different reports and a pivot viewer experience for browsing all the different logging information generated by your applications. In addition, TeleSharp provides different providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.  The central repository itself is implemented as a set of OData services that any application can easily consume using regular Http. You can read more details in this introductory post If you think this product can be a good fit in your organization, you can request a trial version in our Tellago Studios website.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-06

    - by Bob Rhubart
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 3 : Creating the User Interface | Mark Rittman Oracle ACE Director Mark Rittman continues his article series. WebLogic Advisor WebCasts on-demand A series of videos by WebLogic experts, available to those with access to support.oracle.com. Integrating OBIEE 11g into Weblogic’s SAML SSO | Andre Correa A-Team blogger Andre Correa illustrates a transient federation scenario. InfoQ: Cloud 2017: Cloud Architectures in 5 Years Andrew Phillips, Mark Holdsworth, Martijn Verburg, Patrick Debois, and Richard Davies review the evolution of cloud computing so far and look five years into the future. Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco. SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 – Part II | Dawit Lessanu The conclusion of Lessanu's two-part series for Service Technology Magazine. Driving from Business Architecture to Business Process Services | Hariharan V. Ganesarethinam "The perfect mixture of EA, SOA and BPM make enterprise IT highly agile so it can quickly accommodate dynamic business strategies, alignments and directions," says Ganesarethinam. "However, there should be a structured approach to drive enterprise architecture to service-oriented architecture and business processes." Book Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Raheja Rajesh Raheja reviews the new AIA book from Packt Publishing. ODTUG Kscope12 - June 24-28 - San Antonio, TX San Antonio, TX June 24-28, 2012 Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best! Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA) | Mahesh Sharma Mahesh Sharma describes EC HA, looks at the prerequisites, and shares screen shots. The right way to transform your business via the cloud | David Linthicum A couple of quick tests will show you where you need to focus your transition efforts. Thought for the Day "Most software isn't designed. Rather, it emerges from the development team like a zombie emerging from a bubbling vat of Research and Development juice. When a discipline is hugging the ragged edge of technology, this might be expected, but most of today's software is comprised of mostly 'D' and very little 'R'." — Alan Cooper Source: softwarequotes.com

    Read the article

  • Virtualized data centre&ndash;Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a “private cloud design” and what I will make of it. Whilst my own solution is based off the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process driven than just building a virtual infrastructure… The architecture of it all… So imagine for a minute that you have unlimited funds to build this lab of yours… You’d want redundancy on all levels and separation of each network where possible! Unfortunately we don’t have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable! Networking From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part. Storage As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on-board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design. Domain controllers and extra infrastructure Server 2012 completely supports running domain controllers virtualized and has no need to actually have a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and a possible System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). Undecided on if I am to install those boxes separately or as a virtual machine… Which amounts to… something like the pretty design diagram in the original post!
    Sources
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx

    Read the article

  • Getting WLAN on my Laptop to work (Medion MD98300)

    - by Anand Böhmer
    Dear Ubuntu Community, I am having difficulties getting the WLAN adapter on my Medion 98300 laptop to work. The WLAN card seems to be connected through an internal USB interface, and the card itself had shown up as a wireless network device while installing Ubuntu. I have tried a few things earlier, but none of my Google searches have brought me to a working solution... I am quite new to the Linux system and only know a couple of terminal commands so far, so I probably have missed out on a few possible solutions. Maybe you can help me? Thank you very much in advance! A few minimal technical details: AMD Turion 64 X2 Dual-Core Mobile Technology TL-50 NVIDIA GeForce Go 6150 SanDisk 64GB SSD 2GB RAM DDR2 667 nForce Chipset (I forgot the version, but deducible from the GPU I guess) WiFi: ZyDAS ZD1211B 802.11g Thanks a lot again! :) UPDATE: I tried around myself a little and found a guide on the Linux Mint forums that helped! I already had tried to install the linux backport modules etc. What I finally did was update the Linux firmware and run the following command: echo "options acer_wmi wireless=1" | sudo tee /etc/modprobe.d/acer_wmi.conf and rebooted. Now I found and could connect to networks, but unfortunately the link quality was very bad, around 40 to 50, even though my router is running at high power and is only 6 meters away! I then switched a few channels, but that did not improve much. Before, under Windows, I had a very good link quality and had the entire 16 Mbit/s internet connection at my disposal; now I can only get about 3-5 Mbit/s. Better than nothing, but still pretty bad! The "TX power" is fixed at 20dBm and iwconfig says that "Power Management" is off... Maybe the power of the module is set too low?... UPDATE2: I figured out that 20dBm is a normal power output. I even tried to change the power using iwconfig wlan0 txpower INTEGERHERE but obviously my card does not support more than 20. More would probably be illegal as well, so I won't even use more than 20. I guess that I will have to figure out a way, or maybe just switch cards. Are the mainboard USB connectors on a laptop electrically the same as the standard external ones? If so, I could simply solder a micro Wireless-N adapter onto the board :)

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the -directed- edges of the graph) around specifically marked objects. To better explain my question, let me explain my motivation: I currently use a moderately complex system to serialize the data used in my projects: "marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface). Those objects can then be serialized automatically (eg: obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' to which 'a' has a reference, directly s.t. a.b = b, or indirectly s.t. a.c.b = b for some object 'c'. This is very simple and basically assigns specific storage entries to specific objects. I then have "container" type objects that: can be serialized similarly (in fact they are, or can be, "marked"); they don't serialize in their storage entries the "marked" objects (with direct reference): if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad-hoc and there are some structural limitations to this approach: the multi-entry saving can only work through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading. I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in python, and use Pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with the "attach to last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not.
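
    To make the "attach to last marked node" idea concrete, here is a rough Python sketch (my own illustration, not a proposed answer): starting from each marked object it walks the reference graph, assigns every reachable unmarked object to that marked object, stops at other marked objects, and records the ambiguous shared case described above:

    # cluster_sketch.py -- hypothetical illustration of clustering unmarked
    # objects around marked ones by walking the object reference graph
    def cluster(marked, references):
        """marked: set of marked objects; references(obj) -> iterable of referenced objects."""
        owner = {}          # unmarked object -> the marked object it is attached to
        shared = set()      # unmarked objects reachable from more than one marked object
        for root in marked:
            stack = [root]
            seen = {root}
            while stack:
                for child in references(stack.pop()):
                    if child in marked or child in seen:
                        continue            # stop at other marked objects
                    seen.add(child)
                    stack.append(child)
                    if child in owner and owner[child] is not root:
                        shared.add(child)   # the ambiguous case described above
                    else:
                        owner[child] = root
        return owner, shared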

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here. Endeca Information Discovery plays a key role with your big data analysis and complements Oracle Business Intelligence Solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps understand its value proposition. We will walk you through four key components of the product: Oracle Endeca Server—A highly scalable, search-analytical database that derives the data model based on the data presented to it, thereby reducing data modeling requirements. Studio—A highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications. Integration Suite—Provides rapid unification and enrichment of diverse sources of information into a single integrated view. Extensible Value-Added Modules—Add-on modules that provide value quickly through configuration instead of custom coding. Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio. Lab Outline The labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class. Audience Oracle Partners, Big Data Analytics Developer and Architects BI and EPM Application Developers and Implementers, Data Warehouse Developers Equipment Requirements This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements: Hardware 8GB RAM is highly recommended (Windows 64 bit Machine is required) 40 GB free space (includes staging) USB 2.0 port (at least one available) Software One of the following operating systems: 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008) 64-bit host/laptop OS with a Windows VM (Server, or Win 7, BIC2g, etc.) Internet Explorer 8.x , Firefox 3.6 or Firefox 6.0 WINRAR or 7ziputility to unzip workshop files: Download-able from http://www.win-rar.com/download.html Download-able from http://www.7zip.com/ Oracle Endeca Information Discovery Workshop Register here: October 3-5, 2012: Cinisello Balsamo, Milan.  We will confirm with you your place within 2 weeks. Questions?  Send email to: [email protected]  :  Oracle Platform Technologies Enablement Services.

    Read the article

  • Fan not detected by lm-sensors

    - by OrangeTux
    My fan is blowing hard, while my cpu temperature is 32 degrees I tried a lot of things to control my fan. Changed grub file GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi= pci=noacpi" _ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=\"Linux\"" Ran sensors-detect : To load everything that is needed, add this to /etc/modules: #----cut here---- # Chip drivers coretemp #----cut here---- If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO) Unloading i2c-dev... OK Unloading i2c-i801... OK Unloading cpuid... OK Ran sensors: acpitz-virtual-0 Adapter: Virtual device temp1: +34.0°C (crit = +90.0°C) coretemp-isa-0000 Adapter: ISA adapter Core 0: +34.0°C (high = +80.0°C, crit = +90.0°C) Core 2: +34.0°C (high = +80.0°C, crit = +90.0°C) Ran sudo start module-init-tools and sudo start module-init-tools module-init-tools stop/waiting As you can see my fan isn't detected. Running fancontrol gives me this: Loading configuration from /etc/fancontrol ... Error: Can't read configuration file Can you help me, please? I cannot use my laptop now in class. Thanks in advance. My system 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02) 00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) 00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05) 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05) 01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series] 01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series] 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05) 7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05) 7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05) 7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05) 7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05) 7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05)

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspxNow that we’ve talked a little about Puppet. Let’s see how easy it is to get started. Install Puppet Let’s get Puppet Installed. There are two ways to do that: With Chocolatey: Open an administrative/elevated command shell and type: choco install puppet Download and install Puppet manually - http://puppetlabs.com/misc/download-options Run Puppet Let’s make pasting into a console window work with Control + V (like it should): choco install wincommandpaste If you have a cmd.exe command shell open, (and chocolatey installed) type: RefreshEnv The previous command will refresh your environment variables, ala Chocolatey v0.9.8.24+. If you were running PowerShell, there isn’t yet a refreshenv for you (one is coming though!). If you have to restart your CLI (command line interface) session or you installed Puppet manually open an administrative/elevated command shell and type: puppet resource user Output should look similar to a few of these: user { 'Administrator': ensure => 'present', comment => 'Built-in account for administering the computer/domain', groups => ['Administrators'], uid => 'S-1-5-21-some-numbers-yo-500', } Let's create a user: puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created Run the 'puppet resource user' command again. Note the user we created is there! Let’s clean up after ourselves and remove that user we just created: puppet apply -e "user {'bobbytables_123': ensure => absent, }" Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed Run the 'puppet resource user' command one last time. Note we just removed a user! Conclusion You just did some configuration management /system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we’ll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

    Read the article

  • Is Infiniband going to get squeezed by iWARP and external QPI?

    - by andy.grover
    The Inquirer certainly thinks so.However, I'm not so sure it makes sense to compare Infiniband to an as-yet-unannounced optical external QPI. QPI is currently a processor interconnect. CPUs, RAM, and devices connected by it are conceptually part of the same machine -- they run a single OS, for example. They are both "networks" or "fabrics" but they have very different design trade-offs.Another widely-used bus in the system is closer to Infiniband than QPI -- PCI Express. Isn't it more likely that PCIe could take on IB? There are companies already who have solutions that use external PCI Express for cluster interconnect, but these have not gained significant market share. Why would QPI, a technology whose sweet spot is even further from Infiniband's than PCIe, be able to challenge Infiniband? It's hard to speculate without much information, but right now it doesn't seem likely to me.The other prediction made in the article is that Intel's 10GbE iWARP card could squeeze IB on the low end, due to its greater compatibility and lower cost.It's definitely never a good idea to bet against Ethernet when it comes to mass-market, commodity networking. Ethernet will win. 10GbE will win. But, there are now two competing ways to implement the low-latency RDMA Verbs interface on top of Ethernet. iWARP is essentially RDMA over TCP/IP over Ethernet. The new alternative is IBoE (Infiniband over Ethernet, aka RoCEE, aka "Rocky"). This encapsulates the IB packet protocol directly in the Ethernet frame. It loses the layer 3 routability of iWARP, but better maintains software compatibility with existing apps that use IB, and is simpler to implement in both software and hardware. iWARP has a substantial head start, but I believe that IBoE silicon will eventually be cheaper, and more likely to be implemented in commodity Ethernet hardware.I think IBoE is going to take low-end market share from traditional IB, but I think this is a situation IB hardware vendors have no problem accepting. Commoditized IBoE NICs invite greater use of RDMA features, and when higher performance is needed, customers can upgrade to "real" IB, maintaining IB's justification for higher prices. (IB max interconnect speeds have historically been 2-4x higher than Ethernet, and I don't see that changing.)(ObDisclosure: My current employer now sells IB hardware. I previously also worked at Intel. My opinions are my own, duh.)

    Read the article

  • Northwind now available on SQL Azure

    - by jamiet
    Two weeks ago I made available a copy of [AdventureWorks2012] on SQL Azure and published credentials so that anyone from the SQL community could connect up and experience SQL Azure, probably for the first time. One of the (somewhat) popular requests thereafter was to make the venerable Northwind database available too so I am pleased to say that as of right now, Northwind is up there too. You will notice immediately that all of the Northwind tables (and the stored procedures and views too) have been moved into a schema called [Northwind] – this was so that they could be easily differentiated from the existing [AdventureWorks2012] objects. I used an SQL Server Data Tools (SSDT) project to publish the schema and data up to this SQL Azure database; if you are at all interested in poking around that SSDT project then I have made it available on Codeplex for your convenience under the MS-PL license – go and get it from https://northwindssdt.codeplex.com/. Using SSDT proved particularly useful as it alerted me to some aspects of Northwind that were not compatible with SQL Azure, namely that five of the tables did not have clustered indexes: The beauty of using SSDT is that I am alerted to these issues before I even attempt a connection to SQL Azure. Pretty cool, no? Fixing this situation was of course very easy, I simply changed the following primary keys from being nonclustered to clustered: [PK_Region] [PK_CustomerDemographics] [PK_EmployeeTerritories] [PK_Territories] [PK_CustomerCustomerDemo]   If you want to connect up then here are the credentials that you will need: Server mhknbn2kdz.database.windows.net Database AdventureWorks2012 User sqlfamily Password sqlf@m1ly You will need SQL Server Management Studio (SSMS) 2008R2 installed in order to connect or alternatively simply use this handy website: https://mhknbn2kdz.database.windows.net which provides a web interface to a SQL Azure server. Do remember that hosting this database is not free so if you find that you are making use of it please help to keep it available by visiting Paypal and donating any amount at all to [email protected]. To make this easy you can simply hit this link and the details will be completed for you – all you have to do is login and hit the “Send” button. If you are already a PayPal member then it should take you all of about 20 seconds! I hope this is useful to some of you folks out there. Don’t forget that we also have more data up there than in the conventional [AdventureWorks2012], read more at Big AdventureWorks2012. @Jamiet  AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
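
    If you would rather poke at the database from a script than from SSMS or the web interface, a quick connection test is easy enough. The following is a rough sketch of my own (not from the original post) using Python and pyodbc; it assumes a SQL Server ODBC driver such as "ODBC Driver 17 for SQL Server" is installed locally, and it uses the read-only credentials published above:

    # northwind_check.py -- hypothetical sketch using the published credentials
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mhknbn2kdz.database.windows.net;"
        "DATABASE=AdventureWorks2012;"
        "UID=sqlfamily;PWD=sqlf@m1ly;"
        "Encrypt=yes;"
    )
    cursor = conn.cursor()
    # The Northwind objects live in the [Northwind] schema on this server
    cursor.execute("SELECT TOP 5 CompanyName FROM Northwind.Customers ORDER BY CompanyName")
    for row in cursor.fetchall():
        print(row.CompanyName)
    conn.close()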

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like and for any string columns you pass a fixed length value. You then need to specify how many rows in total you require to be generated. This component is used by us to do testing of the pipeline and components downstream. Previously we would have used a script component (as a source) to generate the rows but found ourselves rewriting the code too often so created this component. Screenshots SQL Server 2005 Integration Services SQL Server 2008/2012 Integration Services The component is provided as an MSI file, however to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source from the list. Downloads The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Data Generator Source Adapter for SQL Server 2005 Data Generator Source Adapter for SQL Server 2008 Data Generator Source Adapter for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008) Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008) SQL Server 2005 Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006) Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)

    Read the article

  • Calling functions from different classes

    - by A Ron Hubbard Clevenger
    I'm writing a program and I'm supposed to check and see if a certain object is in the list before I call it. I set up the contains() method which is supposed to use the equals() method of the Comparable interface I implemented on my Golfer class but it doesn't seem to call it (I put print statements in to check). I can't seem to figure out whats wrong with the code, the ArrayUnsortedList class I'm using to go through the list even uses the correct toString() method I defined in my Golfer class but for some reason it won't use the equals() method I implemented. //From "GolfApp.java" public class GolfApp{ ListInterface <Golfer>golfers = new ArraySortedList<Golfer> (20); Golfer golfer; //..*snip*.. if(this.golfers.contains(new Golfer(name,score))) System.out.println("The list already contains this golfer"); else{ this.golfers.add(this.golfer = new Golfer(name,score)); System.out.println("This golfer is already on the list"); } //From "ArrayUnsortedList.java" protected void find(T target){ location = 0; found = false; while (location < numElements){ if (list[location].equals(target)) //Where I think the problem is { found = true; return; } else location++; } } public boolean contains(T element){ find(element); return found; } //From "Golfer.java" public class Golfer implements Comparable<Golfer>{ //..irrelavant code sniped..// public boolean equals(Golfer golfer) { String thisString = score + ":" + name; String otherString = golfer.getScore() + ":" + golfer.getName() ; System.out.println("Golfer.equals() has bee called"); return thisString.equalsIgnoreCase(otherString); } public String toString() { return (score + ":" + name); } My main problem seems to be getting the find function of the ArrayUnsortedList to call my equals function in the find() part of the List but I'm not exactly sure why, like I said when I have it printed out it works with the toString() method I implemented perfectly. I'm almost positive the problem has to do with the find() function in the ArraySortedList not calling my equals() method. I tried using some other functions that relied on the find() method and got the same results.

    Read the article

  • How to recognize an optimus laptop?

    - by kellogs
    kellogs@kellogs-K52Jc ~ $ lspci 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 18) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06) 00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06) 00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 06) 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06) 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) 03:00.0 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 80) 03:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 80) 03:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 80) 03:00.4 System peripheral: JMicron Technology Corp. xD Host Controller (rev 80) 03:00.5 Ethernet controller: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller (rev 03) ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05) ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05) ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05) ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05) ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05) ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05) kellogs@kellogs-K52Jc ~ $ inxi -SGx System: Host: kellogs-K52Jc Kernel: 3.5.0-17-generic x86_64 (64 bit, gcc: 4.7.2) Desktop: KDE 4.9.5 (Qt 4.8.3) Distro: Linux Mint 14 Nadia Graphics: Card: Intel Core Processor Integrated Graphics Controller bus-ID: 00:02.0 X.Org: 1.13.0 drivers: intel (unloaded: fbdev,vesa) Resolution: [email protected] GLX Renderer: Mesa DRI Intel Ironlake Mobile GLX Version: 2.1 Mesa 9.0.3 Direct Rendering: Yes kellogs@kellogs-K52Jc ~ $ lshw [...] *-display description: VGA compatible controller product: Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 18 width: 64 bits clock: 33MHz capabilities: vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:44 memory:d0000000-d03fffff memory:c0000000-cfffffff ioport:e080(size=8) Manufacturer advertises the K52Jc model which I bought as optimus enabled. However, no traces of it in the output above. Of course, Bumblebee would not start on this machine. Should I rest assured that is a defective / un-optimused machine ?

    Read the article

  • Webcast Replay : SANS Institute Product Review of Oracle Identity Manager

    - by B Shashikumar
    Thanks to everyone who attended the SANS Institute webinar covering the product review of Oracle Identity Manager. And a special thanks to our guest speakers from SuperValu - Phillip Black and Patrick Abreo. If you missed the webcast, you can catch a replay here, and here are the slides that were used in the webcast. There were many questions that we could not answer as we ran out of time, so we have captured some of them with responses below. Is Oracle Identity Analytics still offered as a separate product or is it part of Oracle Identity Manager? Oracle Identity Manager and Oracle Identity Analytics are now offered as part of the Oracle Identity Governance Suite. OIA and OIM share a common UI architecture, a common data model and common support for connected and disconnected resources. When requesting new access/entitlements, is there an approval process? Yes. We leverage SOA BPEL-based workflows for approvals. Are the identity self-service capabilities based on Oracle ADF? Yes, they are completely based on Oracle ADF. Can you give some examples of personalization and customization with Oracle Identity Manager 11gR2? With the new UI config framework we can enable different levels of UI customization. Customers now have the ability to customize with point-and-click or drag-and-drop, without any need for coding, so users can easily personalize the interface of their application within the browser. For example, they can change the logo; rearrange or hide Home Page regions; save and re-use regularly searched items; configure searchable and search-result columns; and have sorting preferences remembered, and so on. For more sophisticated customization, customers can also edit the standard JSF within the page to alter business rules, modify page flows, page layouts and other items. Can you explain the role of sandboxes in customization? Customers can make their custom changes within a sandbox so that they don't impact the production environment. They can make their changes, validate them, stage them and then commit them without affecting production users. This is similar to how source code control systems like Perforce work. To watch a replay of the webcast, click here.

    Read the article

  • Register Game Object Components in Game Subsystems? (Component-based Game Object design)

    - by topright
    I'm creating a component-based game object system. Some tips: GameObject is simply a list of Components. There are GameSubsystems, for example rendering, physics etc. Each GameSubsystem contains pointers to some of the Components. A GameSubsystem is a very powerful and flexible abstraction: it represents any slice (or aspect) of the game world. There is a need for a mechanism to register Components in GameSubsystems (when a GameObject is created and composed). There are 4 approaches: 1: Chain of responsibility pattern. Every Component is offered to every GameSubsystem, and each GameSubsystem decides which Components to register (and how to organize them). For example, GameSubsystemRender can register Renderable Components. pro. Components know nothing about how they are used. Low coupling. A. We can add a new GameSubsystem. For example, let's add GameSubsystemTitles that registers all ComponentTitle instances, guarantees that every title is unique, and provides an interface for querying objects by title. Of course, ComponentTitle should not be rewritten or inherited in this case. B. We can reorganize existing GameSubsystems. For example, GameSubsystemAudio, GameSubsystemRender and GameSubsystemParticleEmmiter can be merged into GameSubsystemSpatial (to place all audio, emitter and render Components in the same hierarchy and use parent-relative transforms). con. Every-to-every check. Very inefficient. con. Subsystems know about Components. 2: Each Subsystem searches for Components of specific types. pro. Better performance than in Approach 1. con. Subsystems still know about Components. 3: Each Component registers itself in the GameSubsystem(s). We know at compile time that there is a GameSubsystemRenderer, so let's have ComponentImageRender call something like GameSubsystemRenderer::register(ComponentRenderBase*). pro. Performance. No unnecessary checks as in Approach 1. con. Components are badly coupled with GameSubsystems. 4: Mediator pattern. GameState (which contains the GameSubsystems) can implement registerComponent(Component*). pro. Components and GameSubsystems know nothing about each other. con. In C++ it would look like an ugly and slow typeid switch. Questions: Which approach is better and most used in component-based design? What does practice say? Any suggestions about implementation of Approach 4? Thank you.
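
    One possible answer for Approach 4, offered as a rough sketch rather than the established way: keep the Mediator, but replace the typeid switch with a table of per-type registration callbacks keyed by std::type_index. The class names (GameState, GameSubsystemRender, ComponentRenderBase, etc.) are taken from the question; the handler-map mechanism itself is just one option among several.

        #include <functional>
        #include <typeindex>
        #include <unordered_map>
        #include <vector>

        struct Component { virtual ~Component() = default; };

        // Concrete component types named in the question (bodies are placeholders).
        struct ComponentRenderBase : Component { /* mesh, material, ... */ };
        struct ComponentAudio      : Component { /* sound source, ...   */ };

        struct GameSubsystemRender {
            void add(ComponentRenderBase* c) { renderables.push_back(c); }
            std::vector<ComponentRenderBase*> renderables;
        };

        struct GameSubsystemAudio {
            void add(ComponentAudio* c) { sources.push_back(c); }
            std::vector<ComponentAudio*> sources;
        };

        // Mediator: GameState owns the subsystems and a per-type dispatch table,
        // so Components and GameSubsystems stay ignorant of each other.
        class GameState {
        public:
            GameState() {
                // Registration rules live in exactly one place; adding a new
                // subsystem means adding a line here, not touching any Component.
                on<ComponentRenderBase>([this](ComponentRenderBase& c) { render.add(&c); });
                on<ComponentAudio>([this](ComponentAudio& c) { audio.add(&c); });
            }

            // Called for every Component of a freshly composed GameObject.
            void registerComponent(Component& c) {
                auto it = handlers.find(std::type_index(typeid(c)));
                if (it != handlers.end()) it->second(c); // unknown types are simply ignored
            }

        private:
            template <typename T>
            void on(std::function<void(T&)> handler) {
                handlers[std::type_index(typeid(T))] =
                    [handler](Component& c) { handler(static_cast<T&>(c)); };
            }

            std::unordered_map<std::type_index, std::function<void(Component&)>> handlers;
            GameSubsystemRender render;
            GameSubsystemAudio  audio;
        };

    Caveat: typeid dispatches on the exact dynamic type, so a hypothetical ComponentImageRender derived from ComponentRenderBase would need its own entry (or components could expose a small family-ID enum instead of relying on RTTI). Composing a GameObject then reduces to looping over its Components and calling gameState.registerComponent(*component) for each one.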

    Read the article

  • An Epic Question "How to call a method when the page loads"

    - by Arunkumar Ramamoorthy
    Quite often, there comes a question on OTN, with different subjects, all meaning "How to call a method when my ADF page loads?". More often, people tend to take the approach of an ADF Phase Listener by overriding the before/afterPhase methods. In this blog, we will go through different options for achieving it. 1. Method Call Activity as the default activity in a Taskflow: If the application is built with taskflows, then this is the best-suited approach to take. 1.a. Calling a Data Control Method: To call a Data Control method (e.g. a method in AMImpl exposed as a client interface), simply drag and drop the method as the Default Method Call Activity, then draw a control flow case from the method to your page. After this, drop the taskflow as a region in the main page. When we run the main page, the Method Call Activity will be called first, and then the page will be rendered. 1.b. Calling a Method in a Backing Bean: To call a method in the backing bean before page load, we can follow a similar approach. Instead of binding the Method Call Activity to an action/method binding in the pagedef, we bind it to the method: insert a Method Call Activity (and make it the default) from the Component Palette, then double-click it to select the method to bind. This approach can also be used to perform some action in the backing bean along with calling a Data Control method (we just need to add bindings code in the backing bean to execute the DC method). 2. Using an invokeAction Executable: If the application is built with pages and no taskflows are involved, then this option can be taken into consideration. In the page definition of the page, add an invokeAction executable and bind it to the method that needs to be executed. 3. Using a combination of Server and Client Listeners: If the page does not have any page definition, then this approach can be taken to call a method in the backing bean. Here, a serverListener is added at the document level, which calls the method in the backing bean. Along with this, a clientListener of type "load" is added (i.e. it is triggered when the page loads), which queues a serverEvent to trigger the method. 4. Using a Page Phase Listener: This should be the last resort. Care should be taken with this approach since the Phase Listener will be called for each request sent by the client. Zeeshan Baig's blog covers this scenario.

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past, but they never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children could come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on the 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview of and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity around the island some years ago, but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly turned into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience with Linux on the job or in private use, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I have been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR), as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With an EPG, scheduled recordings and a general multimedia centre it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
    LUGM - our next generation of Linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius: it's because of the time of day when he was scripting down his ideas on how to build, package and provide software applications to various Linux distributions. The main influence came from openSUSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he has had thanks to other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting 'After-party' at the beach of L'Escalier Phew, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Résumé & outlook It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and the preparation of lunch. I'm quite sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    5-Day Training for Partners: 29th October - 2nd November 2012, London (UK): REGISTER Here This FREE 5-day workshop for partners is designed to provide implementation instruction on Oracle Hyperion EPM Planning. The boot-camp is intended for prospective implementers of the Planning and Budgeting functionality of Oracle EPM, or implementers who are currently familiar with the basics of EPM Planning and looking to strengthen their base of knowledge in the product. The class begins with an overview of Essbase, the foundation of Hyperion Planning. It provides a general overview of Planning and Planning terms, the architecture of all the Planning components, and how they are commonly used. The course goes over all the steps to create an application from scratch. This involves some preparation work outside of Planning and leads to developing the application in both the Planning Windows and Web clients. Participants will modify existing dimensions and build out the hierarchies using the Web client. Topics Covered The boot-camp shows developers how to build out dimensions using Classic Planning and by using EPMA. It covers the mechanics and strategies for automating the build process, such as interface tables. It reviews data loads using Load Rules to load the Planning database. The course focuses on tasks that end users must perform during the planning cycle. It walks students through creating and modifying forms, working with forms to enter data, adding annotations, and the rest of the form features such as running business rules and managing task lists. It covers how to use the forms in the Smart View client and finishes up the end-user perspective by going through Workflow Management and the process of submitting a plan for review. The final section of the course covers security and other administration topics such as automation and deployment. Prerequisites Ideal participants are Oracle partners (SIs and resellers) with a background in business information systems and a clientele of customers with ongoing or prospective EPM initiatives. Alternatively, partners with the background described above and an interest in evolving their practice to a similar profile are suitable participants. Further online OPN guided learning path information and webinars are available at: Oracle Hyperion Planning 11 Essentials. Please note that attendees are required to bring a laptop. View the laptop requirements and detailed agenda here. REGISTER Here: acceptance is subject to availability and your place will be confirmed within two weeks (and for help see the Partner Registration Guide). Training Location: Oracle Corporation UK Ltd, Columbus Room, Customer Visit Center, 1 South Place, London EC2M 2RB Training Dates: 29th October - 2nd November, 9:30 am – 5:00 pm BST For more information please contact [email protected].

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
    X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound OCSP validation, Oracle WebLogic Server 12c also supports outbound OCSP validation. Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs "Oracle Scorecard and Strategy Management (OSSM) is built upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinely. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE." Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise. Build and release OSB projects with Maven | Edwin Biemond "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven, called snapshots and releases, can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB servers and deployed." Biemond shows you how in this detailed technical post. ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share. Service-oriented organizations have a head start in the cloud race | ZDNet ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten. Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post. Agile Architecture | David Sprott "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post. Thought for the Day "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy Source: SoftwareQuotes.com

    Read the article

< Previous Page | 471 472 473 474 475 476 477 478 479 480 481 482  | Next Page >