Search Results

Search found 4808 results on 193 pages for 'reserved instances'.


  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain.  We saw how resources were put aside for guest systems and what infrastructure we need for them.  With that, we are now ready to create a first, very simple guest domain.  In this first example, we'll keep things very simple.  Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO as well as security. For now, let's start with this very simple guest.  It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port.  CPU and RAM are easy.  The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain.  This is very much like plugging a cable into a computer system on one end and a network switch on the other.  For the boot disk, we'll need two things: A physical piece of storage to hold the data - this is called the backend device in LDoms speak.  And then a mapping between that storage and the guest domain, giving it access to that virtual disk.  For this example, we'll use a ZFS volume for the backend.  We'll discuss what other options there are for this and how to choose the right one in a later article.  Here we go: root@sun # ldm create mars root@sun # ldm set-vcpu 8 mars root@sun # ldm set-mau 1 mars root@sun # ldm set-memory 8g mars root@sun # zfs create rpool/guests root@sun # zfs create -V 32g rpool/guests/mars.bootdisk root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \ mars.root@primary-vds root@sun # ldm add-vdisk root mars.root@primary-vds mars root@sun # ldm add-vnet net0 switch-primary mars That's all, mars is now ready to power on.  There are just three commands between us and the OK prompt of mars:  We have to "bind" the domain, start it and connect to its console.  Binding is the process where the hypervisor actually puts all the pieces that we've configured together.  If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else).  Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP.  By default, the domain will then try to boot right away.  If we don't want that, we can set "auto-boot?" to false.  Finally, we'll use telnet to connect to the console of our newly created guest.  The output of "ldm list" shows us what port has been assigned to mars.  By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here. root@sun # ldm set-variable auto-boot\?=false mars root@sun # ldm bind mars root@sun # ldm start mars root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 8 7680M 0.5% 1d 4h 30m mars active -t---- 5000 8 8G 12% 1s root@sun # telnet localhost 5000 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. ~Connecting to console "mars" in group "mars" .... Press ~? for control options .. {0} ok banner SPARC T3-4, No Keyboard Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131. Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50. 
{0} ok We're done, mars is ready to install Solaris, preferably using AI, of course ;-)  But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here: {0} ok printenv auto-boot? auto-boot? = false {0} ok printenv boot-device boot-device = disk net {0} ok devalias root /virtual-devices@100/channel-devices@200/disk@0 net0 /virtual-devices@100/channel-devices@200/network@0 net /virtual-devices@100/channel-devices@200/network@0 disk /virtual-devices@100/channel-devices@200/disk@0 virtual-console /virtual-devices/console@1 name aliases We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked.  Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started.  The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image.  The actual devices these aliases point to are shown with the command "devalias".  Here, we have one line for both "disk" and "net".  The device paths speak for themselves.  Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device.  These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk".  Remember this, as it is very useful once you have several dozen disk devices... To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity.  This should be enough to get you going.  I will cover all the more advanced features and a little more theoretical background in several follow-on articles.  For some background reading, I'd recommend the following links: LDoms 2.2 Admin Guide: Setting up Guest Domains Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session. OpenBoot 4.x command reference - All the things you can do at the ok prompt
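
    Not part of the original walkthrough, but before connecting to the console it can be handy to double-check what actually got bound to the new guest. A minimal sketch using the standard ldm listing commands, with the domain name used above:

        root@sun # ldm list-bindings mars    # shows the vdisk, vnet and console port bound to the guest
        root@sun # ldm list -l mars          # long listing: VCPU, MAU, memory and variables such as auto-boot?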

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [ For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run. ] -- jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/. Scratch directories On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new empty scratch directory. Successive directories use suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series. Locking support Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg. "autovm mode" By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short term solution: it is recommended tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly. Info in test result files The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file. Build The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4. Other jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards. Bug fixes jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option. Acknowledgements Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.
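
    Tying the options mentioned above together, a hypothetical invocation might look like the following; the paths, test directory and concurrency level are made up for illustration, so check jtreg -help for the exact spelling on your build:

        # agentvm mode, four concurrent agents, shared lock file for tests that grab fixed ports
        $ jtreg -agentvm -concurrency:4 \
            -lock:/tmp/jtreg.lock \
            -jdk:/path/to/jdk \
            -w:build/jtreg/work -r:build/jtreg/report \
            jdk/test/some/area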

    Read the article

  • Silverlight Cream for February 13, 2011 -- #1046

    - by Dave Campbell
    In this Issue: Loek van den Ouweland, Colin Eberhardt, Rudi Grobler, Joost van Schaik, Mike Taulty(-2-, -3-), Deborah Kurata, David Kelley, Peter Foot, Samuel Jack(-2-), and WindowsPhoneGeek(-2-). Above the Fold: Silverlight: "Silverlight Simple MVVM Commanding" Deborah Kurata WP7: "WP7 CustomInputPrompt control with Cancel button" WindowsPhoneGeek Expression Blend: "Silverlight Templated Image Button with two images" Loek van den Ouweland Shoutouts: Dave Campbell posted a write-up about the project he's on and the use of Sterling: Sterling Object-Oriented Database for ISO 1.0 Released!... Also see Jeremy Likness' post on the 1.0 release: Sterling Object-Oriented Database 1.0 RTM Not necessarily Silverlight, but darn cool, a great control by Sasha Barber: WPF : A Weird 3d based control snoutholder announced new content: Windows Phone 7 QuickStarts Live! From SilverlightCream.com: Silverlight Templated Image Button with two images Loek van den Ouweland has a video tutorial up for creating an ImageButton with a hover state... Expression Blend coolness, and check out the external links he has to their training site. Windows Phone 7 Performance Measurements – Emulator vs. Hardware Colin Eberhardt's latest is a popular post comparing performance metrics between the WP7 emulator and a real device. Mileage may vary, but I'm pretty sure the overall results are conculsive, and should help the way you view your app as you're building in the emulator. WP7: WebClient vs HttpWebRequest Rudi Grobler's latest is a discussion of WebClient and HttpWebRequest, gives coding examples of each plus discussion of why you may choose one over the other... and pay attention to his comment about mobile providers. A Blendable Windows Phone 7 / Silverlight clipping behavior Joost van Schaik posted this WP7/Silverlight clipping behavior he developed because all the other solutions were not blendable. Another really useful piece of code from Joost! Blend Bits 22–Being Stylish Mike Taulty has 3 more episodes in his Blend Bits series... first up is on one Styles... explicit, implicit, inheriting... you name it, he's covering it! Blend Bits 23–Templating Part 1 MIke Taulty then has the beginning of a series within his Blend Bits series on Templating. This is something you just have to either bite the bullet and go with Blend to do, or consume someone else's work. Mike shows us how to do it ourself by tweaking the visual aspects of a checkbox Blend Bits 24–Templating Part 2 In part 2 of the Templating series, Mike Taulty digs deeper into Blend and cracks open the Listbox control to take a bunch of the inner elements out for a spin... fun stuff and great tutorial, Mike! Silverlight Simple MVVM Commanding Deborah Kurata has another great MVVM post up... if you don't have your head wrapped around commanding yet, this is a good place to start that process... VB and C# as always. App Development for Windows Phone 7 101 David Kelley goes through the basics of producing a WP7 app both from the Silverlight and XNA side... good info and good external links to get you going. Copyable TextBlock for Windows Phone Peter Foot takes a look at the Copy/Paste functionality in WP7 and how to apply it to a TextBlock... which is NOT an out-of-the-box solution. How to deploy to, and debug, multiple instances of the Windows Phone 7 emulator Samuel Jack has a couple posts up this week... first is this clever one on running multiple copies of the emulator at once... too cool for debugging a multi-player game! 
Multi-player enabling my Windows Phone 7 game: Day 3 – The Server Side Samuel Jack's latest is a detailed look at his day 3 adventure of taking his multi-player game to WP7... lots of information and external links... what do you say, give him another day? :) WP7 CustomInputPrompt control with Cancel button WindowsPhoneGeek has a couple more posts up... first is this "CustomInputPrompt" control based off the InputPrompt from Coding4Fun. Implementing Windows Phone 7 DataTemplateSelector and CustomDataTemplateSelector In his latest post, WindowsPhoneGeek writes a DataTemplateSelector to allow different data templates for different list elements based on the type of the element. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Windows Azure Mobile Services Updates Keep Coming

    - by Clint Edmonson
    Some exciting new Windows Azure Mobile Services features were delivered to production this week. The highlights include: iPhone and iPad connectivity support via a new iOS SDK Integrated Authentication so developers can configure user authentication via Microsoft Account, Facebook, Twitter, and Google. New server-side Mobile Service script modules Access to Structured Storage, Windows Azure Blob, Table, Queues, and ServiceBus Email services through partnership with SendGrid SMS & voice services through partnership with Twilio Mobile Services hosting expanded to west coast US The iOS SDK I’m excited to share that we've announced the release of an under-development iOS client SDK for Windows Azure Mobile Services. The iOS SDK joins the Windows 8 SDK launched with Windows Azure Mobile Services as well as client SDKs released by Xamarin for MonoTouch and MonoDroid.  The native iOS SDK is for developers programming in Objective-C on the iPhone and iPad platforms. The SDK gives developers the same level of access to data storage using dynamic schematization that is available for Windows 8. Also, iOS applications can use the same authentication options available in Mobile Services. While full iOS support is still in development, the libraries are currently available on GitHub. There’s a great getting started tutorial to walk you through building a simple iOS “Todo List” app that stores data in Windows Azure.  These additional tutorials explore how to use the iOS client libraries to store data and authenticate users: Get Started with data in Mobile Services for iOS Get Started with authentication in Mobile Services for iOS What’s New in Authentication Available to both iOS and Windows 8 developers, Mobile Services has expanded its authentication options.  Developers can now use Microsoft, Facebook, Twitter, and Google authentication. Similar to using Microsoft accounts for authentication, developers must sign up and through Facebook, Twitter, or Google's developer portal in order to authenticate through them.  These tutorials walk through how to register your Mobile Service with an identity provider: How to register your app with Microsoft Account How to register your app with Facebook How to register your app with Twitter How to register your app with Google And these tutorials walk through authenticating against Mobile Services: Get started with authentication in Mobile Services for Windows Store (C#) Get started with authentication in Mobile Services for Windows Store (JavaScript) Get started with authentication in Mobile Services for iOS What’s New in Mobile Service Scripts Some great new functionality is now available in the Mobile Service script layer.  These server side scripts are triggered off of any CRUD operation on a Mobile Service's table and can already handle doing data and query validation, filtering, web requests and more.  Today, the Azure SDK module is now available to these scripts giving them access to blob storage, service bus, table storage.  Check out the new tutorials on the Windows Azure Node.js developer center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module. In addition, SendGrid and Twilio are now available via modules that can be called from the scripts as well.  This gives developers the ability to send emails (SendGrid) or SMS text messages (Twilio) whenever a script is fired.  Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid and 1000 free text messages from Twilio. 
Expanded Data Center Availability In addition to Mobile Services being available in our US East data center, they can now be spun up in US West. The above features are all now live in production and are available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using Mobile Services today. The Windows Azure Mobile Developer Center has been updated with new tutorials that cover these new features in detail. And don’t forget - Windows Azure Mobile Services are still free for your first ten applications running on shared compute instances. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

    Read the article

  • Pfsense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
    I have a small network set up like this: I have a Pfsense for connecting my servers to the WAN, they are using NAT from the LAN - WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface. The LAN interface also serves as the gateway for the servers to get internet connection and has an IP of 10.25.255.254 The OVPN Interface and the LAN interface are bridged in BR0 Server A has an IP of 10.25.255.1 and is able to connect to the internet Client A is connecting through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within the 10.25.0.0/16 for VPN clients) Firewall currently allows any-any connection OVPN towards LAN and vice versa Currently when I connect, all routes seem fine on the client side: Destination Gateway Genmask Flags Metric Ref Use Iface 300.300.300.300 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.25.0.0 10.25.255.254 255.255.0.0 UG 0 0 0 tap0 10.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tap0 0.0.0.0 300.300.300.300 0.0.0.0 UG 0 0 0 eth0 I can ping the LAN interface: root@server:# ping 10.25.255.254 PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data. 64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms 64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms 64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms 64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms 64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms 64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms But I can't ping past the LAN interface: root@server:# ping 10.25.255.1 PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data. From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1) From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1) I ran a tcpdump on my em1 interface (LAN interface which has the IP of 10.25.255.254) tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes 08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64 08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64 08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64 08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64 So traffic is reaching the LAN interface, but it's not getting past it. But no answer from the 10.25.255.1 host. I'm not sure what I'm missing.
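
    Not part of the original question, but one way to narrow this down is to look from Server A's side: do the ARP requests from the VPN client ever arrive there, and does the server have a sensible route back to the 10.25.24.0/24 client range? A rough sketch, assuming Server A is Linux and its LAN NIC is eth0 (the interface name is a guess):

        root@serverA:# tcpdump -n -i eth0 arp or icmp   # do the "who-has 10.25.255.1" requests reach the server, and does it answer?
        root@serverA:# ip neigh show                    # is there any ARP entry for 10.25.24.1?
        root@serverA:# ip route get 10.25.24.1          # which route/interface a reply would take

    If the ARP requests never show up on the server, the problem is likely on the pfSense bridge side; if they arrive but go unanswered, look at the server's ARP and filtering settings.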

    Read the article

  • Problem with AssetManager while loading a Model type

    - by user1204548
    Today I've tried the AssetManager for the first time with .g3db files and I'm having some problems. Exception in thread "LWJGL Application" com.badlogic.gdx.utils.GdxRuntimeException: com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load dependencies of asset: data/data at com.badlogic.gdx.assets.AssetManager.handleTaskError(AssetManager.java:508) at com.badlogic.gdx.assets.AssetManager.update(AssetManager.java:342) at com.lostchg.martagdx3d.MartaGame.render(MartaGame.java:78) at com.badlogic.gdx.Game.render(Game.java:46) at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:207) at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114) Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load dependencies of asset: data/data at com.badlogic.gdx.assets.AssetLoadingTask.handleAsyncLoader(AssetLoadingTask.java:119) at com.badlogic.gdx.assets.AssetLoadingTask.update(AssetLoadingTask.java:89) at com.badlogic.gdx.assets.AssetManager.updateTask(AssetManager.java:445) at com.badlogic.gdx.assets.AssetManager.update(AssetManager.java:340) ... 4 more Caused by: com.badlogic.gdx.utils.GdxRuntimeException: com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load file: data/data at com.badlogic.gdx.utils.async.AsyncResult.get(AsyncResult.java:31) at com.badlogic.gdx.assets.AssetLoadingTask.handleAsyncLoader(AssetLoadingTask.java:117) ... 7 more Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Couldn't load file: data/data at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:140) at com.badlogic.gdx.assets.loaders.TextureLoader.loadAsync(TextureLoader.java:72) at com.badlogic.gdx.assets.loaders.TextureLoader.loadAsync(TextureLoader.java:41) at com.badlogic.gdx.assets.AssetLoadingTask.call(AssetLoadingTask.java:69) at com.badlogic.gdx.assets.AssetLoadingTask.call(AssetLoadingTask.java:34) at com.badlogic.gdx.utils.async.AsyncExecutor$2.call(AsyncExecutor.java:49) at java.util.concurrent.FutureTask.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: com.badlogic.gdx.utils.GdxRuntimeException: File not found: data\data (Internal) at com.badlogic.gdx.files.FileHandle.read(FileHandle.java:132) at com.badlogic.gdx.files.FileHandle.length(FileHandle.java:586) at com.badlogic.gdx.files.FileHandle.readBytes(FileHandle.java:220) at com.badlogic.gdx.graphics.Pixmap.<init>(Pixmap.java:137) ... 9 more Why it tries to load that unexisting file? It seems that the AssetManager manages to load my .g3db file at first, because earlier the java console threw some errors related to the textures associated to the 3D scene having to be a power of 2. Relevant code: public void show() { ... assets = new AssetManager(); assets.load("data/levelprueba2.g3db", Model.class); loading = true; ... } private void doneLoading() { Model model = assets.get("data/levelprueba2.g3db", Model.class); for (int i = 0; i < model.nodes.size; i++) { String id = model.nodes.get(i).id; ModelInstance instance = new ModelInstance(model, id); Node node = instance.getNode(id); instance.transform.set(node.globalTransform); node.translation.set(0,0,0); node.scale.set(1,1,1); node.rotation.idt(); instance.calculateTransforms(); instances.add(instance); } loading = false; } public void render(float delta) { super.render(delta); if (loading && assets.update()) doneLoading(); ... 
} The error points to the line with the assets.update() method. Please, help! Sorry for my bad English and my amateurish doubts.
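
    Not an answer from the thread, but one quick way to see which texture paths the exporter embedded in the model (and whether one of them explains the odd data/data path in the error) is to dump the printable strings from the binary file. A rough sketch, assuming a Unix-like shell is available and the file layout from the code above:

        $ strings data/levelprueba2.g3db | grep -iE '\.(png|jpg|jpeg|tga)'   # texture file names referenced by the model
        $ ls data/                                                           # confirm those files actually sit next to the .g3db

    If a texture reference turns out to be empty or just a directory name, re-exporting the model with plain relative texture file names may be worth trying.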

    Read the article

  • Windows Workflow Foundation (WF) and things I wish were more intuitive

    - by pjohnson
    I've started using Windows Workflow Foundation, and so far ran into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow. Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absence any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and without code separation for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good. Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.). ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). And you need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args, despite ExternalDataEventArgs not being marked as an abstract class, nor a compiler error nor warning nor any other indication that you're doing something wrong, until you run it and find that it always times out and get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event. You need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds. TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error: The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]". ...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it. 
I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.

    Read the article

  • Creating an SMF service for mercurial web server

    - by Chris W Beal
    I'm working on a project at the moment, which has a number of contributers. We're managing the project gate (which is stand alone) with mercurial. We want to have an easy way of seeing the changelog, so we can show management what is going on.  Luckily mercurial provides a basic web server which allows you to see the changes, and drill in to change sets. This can be run as a daemon, but as it was running on our build server, every time it was rebooted, someone needed to remember to start the process again. This is of course a classic usage of SMF. Now I'm not an experienced person at writing SMF services, so it took me 1/2 an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively. Taking a step back, the command to start the mercurial web server is $ hg serve -p <port number> -d So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components. The manifest Usually lives in /var/svc/manifest somewhere Can be imported from any location The method Usually live in /lib/svc/method I simply put the script straight in that directory. Not very repeatable, but it worked Can take an argument of start, stop, or refresh Lets start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be by instance, that is to say I could have multiple hg serve processes handling different mercurial projects, on different ports simultaneously Here is the manifest I wrote. I stole extensively from the examples in the Documentation. So my manifest looks like this $ cat hg-serve.xml <?xml version="1.0"?> <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type='manifest' name='hg-serve'> <service name='application/network/hg-serve' type='service' version='1'> <dependency name='network' grouping='require_all' restart_on='none' type='service'> <service_fmri value='svc:/milestone/network:default' /> </dependency> <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' /> <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'> </exec_method> <instance name='project-gate' enabled='true'> <method_context> <method_credential user='root' group='root' /> </method_context> <property_group name='hg-serve' type='application'> <propval name='path' type='astring' value='/src/project-gate'/> <propval name='port' type='astring' value='9998' /> </property_group> </instance> <stability value='Evolving' /> <template> <common_name> <loctext xml:lang='C'>hg-serve</loctext> </common_name> <documentation> <manpage title='hg' section='1' /> </documentation> </template> </service> </service_bundle> So the only things I had to decide on in this are the service name "application/network/hg-serve" the start and stop methods (more of which later) and the properties. This is the information I need to pass to the start method script. In my case the port I want to start the web server on "9998", and the path to the source gate "/src/project-gate". These can be read in to the start method. So now lets look at the method scripts $ cat /lib/svc/method/hg-serve #!/sbin/sh # # # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. # # Standard prolog # . /lib/svc/share/smf_include.sh if [ -z $SMF_FMRI ]; then echo "SMF framework variables are not initialized." 
exit $SMF_EXIT_ERR fi # # Build the command line flags # # Get the port and directory from the SMF properties port=`svcprop -c -p hg-serve/port $SMF_FMRI` dir=`svcprop -c -p hg-serve/path $SMF_FMRI` echo "$1" case "$1" in 'start') cd $dir /usr/bin/hg serve -d -p $port ;; *) echo "Usage: $0 {start|refresh|stop}" exit 1 ;; esac exit $SMF_EXIT_OK This is all pretty self explanatory, we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use "exec=':kill'for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest # svccfg verify /path/to/hg-serve.xml If that doesn't give an error try importing it # svccfg import /path/to/hg-serve.xml If like me you originally put the hg-serve.xml file in /var/svc/manifest somewhere you'll get an error and told to restart the import service svccfg: Restarting svc:/system/manifest-import The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import # svcadm restart svc:/system/manifest-import and you're nearly done. You can look at the service using svcs -l # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled false state disabled next_state none state_time Thu May 31 16:11:47 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15749 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) And look at the interesting properties # svcprop hg-serve hg-serve/path astring /src/project-gate hg-serve/port astring 9998 ...stuff deleted.... Then simply enable the service and if every things gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity. # svcadm enable hg-serve # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled true state online next_state none state_time Thu May 31 16:18:11 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15858 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) None of this is rocket science, but a bit fiddly. Hence I thought I'd blog it. It might just be you see this in google and it clicks with you more than one of the many other blogs or how tos about it. Plus I can always refer back to it myself in 3 weeks, when I want to add another project to the server, and I've forgotten how to do it.
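
    Since the manifest keeps the path and port as per-instance properties, adding that second project later shouldn't need another manifest, just another instance. A hedged sketch of what that could look like (instance name, path and port are invented and this is untested; the method_credential block from the original instance may also need to be repeated):

        # svccfg -s application/network/hg-serve add other-gate
        # svccfg -s application/network/hg-serve:other-gate addpg hg-serve application
        # svccfg -s application/network/hg-serve:other-gate setprop hg-serve/path = astring: /src/other-gate
        # svccfg -s application/network/hg-serve:other-gate setprop hg-serve/port = astring: 9999
        # svcadm refresh hg-serve:other-gate
        # svcadm enable hg-serve:other-gate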

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users experiencing subsecond response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running PeopleSoft HCM 9.1 application server and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization on the SPARC T4-2 server in the web tier was 17%, on the SPARC T4-4 server in the application tier it was 59%, and on the SPARC T4-4 server in the database tier was 47% (online and batch) leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database files management with I/O performance equivalent to raw devices. Performance Landscape Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time. PeopleSoft HRMS Self-Service and Payroll Benchmark Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-2 (web) SPARC T4-4 (app) SPARC T4-4 (db) 18,000 0.988 0.539 32.4 128 SPARC T4-2 (web) SPARC T4-4 (app) SPARC T4-4 (db) 18,000 0.944 0.503 43.3 64 The following results are for the PeopleSoft HRMS Self-Service benchmark that was previous run. The results are not directly comparable with the combined results because they do not include the payroll component. PeopleSoft HRMS Self-Service 9.1 Benchmark Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-2 (web) SPARC T4-4 (app) 2x SPARC T4-2 (db) 18,000 1.048 0.742 N/A N/A The following results are for the PeopleSoft Payroll benchmark that was previous run. The results are not directly comparable with the combined results because they do not include the self-service component. PeopleSoft Payroll (N.A.) 
9.1 - 500K Employees (7 Million SQL PayCalc, Unicode) Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-4 (db) N/A N/A N/A 30.84 96 Configuration Summary Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 512 GB memory Oracle Solaris 11 11/11 PeopleTools 8.52 PeopleSoft HCM 9.1 Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031 Java Platform, Standard Edition Development Kit 6 Update 32 Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 256 GB memory Oracle Solaris 11 11/11 Oracle Database 11g Release 2 PeopleTools 8.52 Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031 Micro Focus Server Express (COBOL v 5.1.00) Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory Oracle Solaris 11 11/11 PeopleTools 8.52 Oracle WebLogic Server 10.3.4 Java Platform, Standard Edition Development Kit 6 Update 32 Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data 4 x Intel Xeon X7550, 2.0 GHz 128 GB memory 1 x Sun Storage F5100 Flash Array (80 flash modules) 1 x Sun Storage F5100 Flash Array (40 flash modules) 1 x Sun Fire X4275 as a COMSTAR head for redo logs 12 x 2 TB SAS disks with Niwot Raid controller Benchmark Description This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is a Oracle standard benchmark kit run by all platform vendors to measure the performance. It's an OLTP benchmark where DB SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consist of 14 scenarios which emulate users performing typical HCM transactions such as viewing paycheck, promoting and hiring employees, updating employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the weighted average response search/save time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of a ERP environment during a mass update. The benchmark measures five application business process run times for a database representing large organization. They are Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state. 
Key Points and Best Practices Two PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from first and third processor socket, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency by using the physical memory closest to the processors and offload I/O interrupt handling to default set threads, freeing up cpu resources for Application Servers threads and balancing application workload across 240 threads. A total of 128 PeopleSoft streams server processes where used on the database node to complete payroll batch job of 500,000 employees in 32.4 minutes. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN SPARC T4-4 Server oracle.com OTN PeopleSoft Enterprise Human Capital Managementoracle.com OTN PeopleSoft Enterprise Human Capital Management (Payroll) oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.
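
    The result page doesn't spell out how the zones were tied to their processor sets, but purely as an illustration of the technique described above, binding a zone to a dedicated set of CPUs on Solaris can be done with resource pools along these lines (pool, pset and zone names are invented, the 120-thread count comes from the text, and this is not the benchmark team's actual procedure):

        # pooladm -e                             # enable the resource pools facility
        # pooladm -s                             # save the current dynamic configuration to /etc/pooladm.conf
        # poolcfg -c 'create pset pset_app1 (uint pset.min = 120; uint pset.max = 120)'
        # poolcfg -c 'create pool pool_app1'
        # poolcfg -c 'associate pool pool_app1 (pset pset_app1)'
        # pooladm -c                             # activate the edited configuration
        # zonecfg -z appzone1 set pool=pool_app1
        # zoneadm -z appzone1 reboot             # zone picks up the pool binding on reboot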

    Read the article

  • SQL SERVER – Read Only Files and SQL Server Management Studio (SSMS)

    - by pinaldave
    Just like any other Developer or DBA, SQL Server Management Studio is my favorite application. At any moment in time I have multiple instances of the same application open and I am working in them. Recently, I have come across a very interesting feature in SSMS related to “Read Only” files. I believe it is a little-known feature as well, so I decided to write a blog about the same. First create a read only SQL file. You can make any file read only by Right Click >> Properties >> Select Attribute Read Only. Now open the same file in SQL Server Management Studio. You will find that beside the file name there is a small ‘lock’ icon. This small icon indicates that the file is read only. Now let us attempt to edit the read only file. It will let us edit the file any way we want, however when we attempt to save it, it gives the following pop-up. The options in the pop-up are self-explanatory and I liked them. The goal of the read only file is to prevent users from making unintended changes. However, the user should still have complete control over the file. The user should be aware that the file is read only, but if he wants to edit the file or save it as a new file, those choices should be presented to him, and the pop-up menu precisely captures the same. Now let us check the option related to this feature in SSMS. Go to Menu >> Options >> Environment >> Documents. You will find the third option, which is “Allow editing of read-only files; warn when attempt to save”. In the above scenario it was already checked. Let us uncheck the same and do the same exercise which we have done earlier. I closed all the earlier windows to avoid confusion. With the new setting in effect, when I attempt to even modify the Read Only file, it gives me a totally different pop-up screen. It gives me options like “Edit In-Memory”, “Make Writeable” etc. When you select “Edit In-Memory” it allows you to edit the file and later you can save it as a new file – just like the earlier scenario which we have discussed. If you click on Make Writeable, it will remove the Read Only restriction and the file can be edited as you please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains. Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1 Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following screenshot shows what happens (bold font) if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11: root@t2000 psrinfo -vp The physical processor has 4 virtual processors (0-3) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep T SUNW,Sun-Fire-T200 # ldm -V Failed to connect to logical domain manager: Connection refused # pkg info ldomsmanager Name: system/ldoms/ldomsmanager Summary: Logical Domains Manager Description: LDoms Manager - Virtualization for SPARC T-Series Category: System/Virtualization State: Installed Publisher: solaris Version: 2.2.0.0 Build Release: 5.11 Branch: 0.175.0.8.0.3.0 Packaging Date: May 25, 2012 10:20:48 PM Size: 2.86 MB FMRI: pkg://solaris/system/ldoms/[email protected],5.11-0.175.0.8.0.3.0:20120525T222048Z The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain. Preparing to change - create a new boot environment Before doing anything else, lets create a new boot environment: # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris NR / 2.14G static 2012-09-25 10:32 # beadm create solaris-1 # beadm activate solaris-1 # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris N / 4.82M static 2012-09-25 10:32 solaris-1 R - 2.14G static 2012-09-29 11:40 # init 0 Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration. Preparing to change - reset to factory default There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command is't working yet, it can't be done from the control domain, so I did it by logging onto to the service processor: $ ssh -X admin@t2000-sc Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved. 
Oracle Advanced Lights Out Manager CMT v1.7.9 Please login: admin Please Enter password: ******** sc> showhost Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35 Host flash versions: OBP 4.30.4.b 2010/07/09 13:48 Hypervisor 1.7.3.c 2010/07/09 15:14 POST 4.30.4.b 2010/07/09 14:24 sc> bootmode config="factory-default" sc> poweroff Are you sure you want to power off the system [y/n]? y SC Alert: SC Request to Power Off Host. SC Alert: Host system has shut down. sc> poweron SC Alert: Host System has Reset At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005.) # psrinfo -vp The physical processor has 8 cores and 32 virtual processors (0-31) The core has 4 virtual processors (0-3) The core has 4 virtual processors (4-7) The core has 4 virtual processors (8-11) The core has 4 virtual processors (12-15) The core has 4 virtual processors (16-19) The core has 4 virtual processors (20-23) The core has 4 virtual processors (24-27) The core has 4 virtual processors (28-31) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep Mem Memory size: 32640 Megabytes Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core. Remove ldomsmanager 2.2 and install the 1.2 version The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11: # pkg uninstall ldomsmanager Packages to remove: 1 Create boot environment: No Create backup boot environment: No Services to change: 2 PHASE ACTIONS Removal Phase 130/130 PHASE ITEMS Package State Update Phase 1/1 Package Cache Update Phase 1/1 Image State Update Phase 2/2 Finally, LDoms 1.2 installed via its install script, the same way it was done years ago: # unzip LDoms-1_2-Integration-10.zip # cd LDoms-1_2-Integration-10/Install/ # ./install-ldm Welcome to the LDoms installer. You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. ... ... normal install messages omitted ... The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below: You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. Select a security profile from this list: a) Hardened Solaris configuration for LDoms (recommended) b) Standard Solaris configuration c) Your custom-defined Solaris security configuration profile Enter a, b, or c [a]: b ... other install messages omitted for brevity... After install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager: # svcs ldmd STATE STIME FMRI online 22:00:36 svc:/ldoms/ldmd:default # svcs vntsd STATE STIME FMRI disabled Aug_19 svc:/ldoms/vntsd:default # ldm -V Logical Domain Manager (v 1.2-debug) Hypervisor control protocol v 1.3 Using Hypervisor MD v 1.1 System PROM: Hypervisor v. 1.7.3. 
@(#)Hypervisor 1.7.3.c 2010/07/09 15:14\015 OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48 Set up control domain and domain services At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 behavior had 'delayed configuration mode (as expected) during initial configuration before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10. # ldm list ------------------------------------------------------------------------------ Notice: the LDom Manager is running in configuration mode. Configuration and resource information is displayed for the configuration under construction; not the current active configuration. The configuration being constructed will only take effect after it is downloaded to the system controller and the host is reset. ------------------------------------------------------------------------------ NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- SP 32 32640M 3.2% 4d 2h 50m # ldm add-vdiskserver primary-vds0 primary # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary # ldm add-vswitch net-dev=net0 primary-vsw0 primary # ldm set-mau 2 primary # ldm set-vcpu 8 primary # ldm set-memory 4g primary # ldm add-config initial # ldm list-spconfig factory-default initial [current] That's it, really. After reboot, we are ready to install guest domains. Summary - new wine in old bottles This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.
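
    One small follow-up before installing guests: the svcs output earlier showed vntsd disabled, and the guest consoles behind the vcc (ports 5000-5100) won't be reachable until it is running. A quick check along these lines (standard SMF commands, not from the original post):

        # svcadm enable svc:/ldoms/vntsd:default    # start the virtual network terminal server daemon
        # svcs ldmd vntsd                           # both should now report online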

    Read the article

  • Fun with Declarative Components

    - by [email protected]
    Use case background I have been asked on a number of occasions if our selectOneChoice component could allow random text to be entered, as well as having a list of selections available. Unfortunately, the selectOneChoice component only allows entry via the dropdown selection list and doesn't allow text entry. I was thinking of possible solutions and thought that this might make a good example for using a declarative component.My initial idea My first thought was to use an af:inputText to allow the text entry, and an af:selectOneChoice with mode="compact" for the selections. To get it to layout horizontally, we would want to use an af:panelGroupLayout with layout="horizontal". To get the label for this to line up correctly, we'll need to wrap the af:panelGroupLayout with an af:panelLabelAndMessage. This is the basic structure: <af:panelLabelAndMessage> <af:panelGroupLayout layout="horizontal"> <af:inputText/> <af:selectOneChoice mode="compact"/> </af:panelgroupLayout></af:panelLabelAndMessage> Make it into a declarative component One of the steps to making a declarative component is deciding what attributes we want to be able to specify. To keep this example simple, let's just have: 'label' (the label of our declarative component)'value' (what we want to bind to the value of the input text)'items' (the select items in our dropdown) Here is the initial declarative component code (saved as file "inputTextWithChoice.jsff"): <?xml version='1.0' encoding='UTF-8'?><!-- Copyright (c) 2008, Oracle and/or its affiliates. All rights reserved. --><jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich"> <jsp:directive.page contentType="text/html;charset=utf-8"/> <af:componentDef var="attrs" componentVar="comp"> <af:xmlContent> <component xmlns="http://xmlns.oracle.com/adf/faces/rich/component"> <description>Input text with choice component.</description> <attribute> <description>Label</description> <attribute-name>label</attribute-name> <attribute-class>java.lang.String</attribute-class> </attribute> <attribute> <description>Value</description> <attribute-name>value</attribute-name> <attribute-class>java.lang.Object</attribute-class> </attribute> <attribute> <description>Choice Select Items Value</description> <attribute-name>items</attribute-name> <attribute-class>[[Ljavax.faces.model.SelectItem;</attribute-class> </attribute> </component> </af:xmlContent> <af:panelLabelAndMessage id="myPlm" label="#{attrs.label}" for="myIt"> <af:panelGroupLayout id="myPgl" layout="horizontal"> <af:inputText id="myIt" value="#{attrs.value}" partialTriggers="mySoc" label="myIt" simple="true" /> <af:selectOneChoice id="mySoc" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <f:selectItems id="mySIs" value="#{attrs.items}" /> </af:selectOneChoice> </af:panelGroupLayout> </af:panelLabelAndMessage> </af:componentDef></jsp:root> By having af:inputText and af:selectOneChoice both have the same value, then (assuming that this passed in as an EL expression) selecting something in the selectOneChoice will update the value in the af:inputText. To use this declarative component in a jspx page: <af:declarativeComponent id="myItwc" viewId="inputTextWithChoice.jsff" label="InputText with Choice" value="#{demoInput.choiceValue}" items="#{demoInput.selectItems}" /> Some problems arise At first glace, this seems to be functioning like we want it to. 
    However, there is a side effect to having the af:inputText and af:selectOneChoice share a value: if one changes, so does the other. The problem here is that when we update the af:inputText to something that doesn't match one of the selections in the af:selectOneChoice, the af:selectOneChoice will set itself to null (since the value doesn't match one of the selections) and the next time the page is submitted, it will submit the null value and the af:inputText will be empty. Oops, we don't want that. So how about we make sure that the current value is always available in the selection list, but not render it if the value is empty. We also need to add a partialTriggers attribute so that this gets updated when the af:inputText is changed. Plus, we really don't want to select this item, so let's disable it. <af:selectOneChoice id="mySoc" partialTriggers="myIt" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> That seems to work pretty well. One minor issue that we probably can't do anything about is that when you enter something in the inputText and then click on the selectOneChoice, the popup is displayed but then goes away, because it has been replaced via PPR as we requested with partialTriggers="myIt". This is not that big a deal, since if you are entering something manually, you probably don't want to select something from the list right afterwards. Making it look like a single component: now let's play around a bit with the contentStyle of the af:inputText and the af:selectOneChoice so that the compact icon lays out inside the af:inputText, making it look more like an af:selectManyChoice. We need to add some padding-right to the af:inputText so there is space for the icon. These adjustments were for the Fusion FX skin. <af:inputText id="myIt" partialTriggers="mySoc" autoSubmit="true" contentStyle="padding-right: 15px;" value="#{attrs.value}" label="myIt" simple="true" /><af:selectOneChoice id="mySoc" partialTriggers="myIt" contentStyle="position: relative; top: -2px; left: -19px;" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> There you have it: a declarative component that allows for suggested selections, but also allows arbitrary text to be entered. This could be used for a search field, where the 'items' attribute could be populated with popular searches. Lines of Java code written: 0.
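    The #{demoInput.choiceValue} and #{demoInput.selectItems} expressions above assume a backing bean behind the page. A minimal sketch of such a bean follows; the bean name 'demoInput', the property names and the sample items are illustrative assumptions, not code from the original post.

        // Hypothetical backing bean for the #{demoInput.*} bindings above.
        // Bean name, property names and sample items are assumptions.
        import javax.faces.model.SelectItem;

        public class DemoInputBean {
            // bound to the 'value' attribute of the declarative component
            private String choiceValue;

            // bound to the 'items' attribute; a few illustrative entries
            private SelectItem[] selectItems = new SelectItem[] {
                new SelectItem("coffee", "coffee"),
                new SelectItem("tea", "tea"),
                new SelectItem("water", "water")
            };

            public String getChoiceValue() { return choiceValue; }
            public void setChoiceValue(String choiceValue) { this.choiceValue = choiceValue; }
            public SelectItem[] getSelectItems() { return selectItems; }
            public void setSelectItems(SelectItem[] selectItems) { this.selectItems = selectItems; }
        }

    Registered as a managed bean named demoInput (for example in faces-config.xml or adfc-config.xml), this is all the declarative component needs in order to resolve its 'value' and 'items' attributes.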

    Read the article

  • Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos

    - by JuergenKress
    OFM PS6 starter kit is now available from Global Sales Engineering (GSE, formerly DSS).  OFM Starter Kit provides a basic foundation to design and develop middleware demos. It is based on plug and play architecture and designed to use optimal hardware resources.  The starter kit is easily extendable to incorporate more Oracle Fusion Middleware components. New Features Built on the "Build your own demos (POC)" concept Starter Kit comes with core OFM components Oracle Unified Directory (OUD), SOA, WebCenter Content and WebCenter Spaces Starter Kit is available over the Internet and is tuned for optimal performance Portable/Downloadable version of the Starter Kit will be available soon. Please check Demos Corner. For any questions/feedback please contact Chandan Das or Anand Prasad. Call to Action Review the Release Notes and visit the GSE Website and book the “OFM 11.1.1.7.0 Base Platform” customizable instance. Further information about this platform is available on this page. This announcement will appear in the archive as number 412. Customizable Demos We are happy to announce the availability of the SOA 11.1.1.7.0 Platform.  SOA 11g (11.1.1.7) Platform is fully featured, built on Plug and Play architecture, and designed to develop best-of-breed SOA demos. New Features Built on the "Build your own demos" concept Fusion Middleware products SOA, BAM, OSB, OEP, OER, OSR, WebCenter Content and WebCenter Spaces are installed, configured, and tuned for better performance Demo instances are available over the Internet Portable version of the platform will be available soon. Please check Demos Corner For questions/feedback please contact Anvesh Baluguri or Anand Prasad. Call to Action Review the Release Notes and visit the GSE Website and book the “SOA 11.1.1.7.0 Platform” customizable demo. Further information about this platform is available on this page.  This announcement will appear in the archive as number 413. To get access to the demo environment please contact OPN! Support If you need assistance or encounter any issues please submit a GSE Repository ticket or call the GSE Support Hotline for assistance. The GSE Support Hotline is available 24 hours a day, Monday through Friday, at: US/CAN: +1.650.506.8763 & EMEA: +44 118 9240808 & APAC: +65.6436.2150 & LAD: +1.650.506.8763 & Japan: +81-3-6834-6097. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: OFM,demos,sales,marketing,dss,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • How can I keep the cpu temp low?

    - by Newton
    I have an HP Pavilion dv7 and I'm using Ubuntu 12.04, so the overheating problem with the Sandy Bridge CPU is a lot better. However, my laptop is still becoming too hot to keep on my legs. The problem is that the fan waits too long before starting, so the average temperature is too high. When I'm using Windows 7 the laptop is room-temperature cold; I have absolutely no problem. On Windows the fan is always spinning slowly and very silently, so the heat is continuously removed without reaching an uncomfortable temperature. How can I force the computer to act like that on Ubuntu as well? PS: the BIOS doesn't let me control this kind of thing, and this is my experience with lm-sensors and fancontrol: al@notebook:~$ sudo sensors-detect [sudo] password for al: # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200) # System: Hewlett-Packard HP Pavilion dv7 Notebook PC (laptop) # Board: Hewlett-Packard 1800 This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing. Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no): y Module cpuid loaded successfully. Silicon Integrated Systems SIS5595... No VIA VT82C686 Integrated Sensors... No VIA VT8231 Integrated Sensors... No AMD K8 thermal sensors... No AMD Family 10h thermal sensors... No AMD Family 11h thermal sensors... No AMD Family 12h and 14h thermal sensors... No AMD Family 15h thermal sensors... No AMD Family 15h power sensors... No Intel digital thermal sensor... Success! (driver `coretemp') Intel AMB FB-DIMM thermal sensor... No VIA C7 thermal sensor... No VIA Nano thermal sensor... No Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no): y Probing for Super-I/O at 0x2e/0x2f Trying family `National Semiconductor/ITE'... No Trying family `SMSC'... No Trying family `VIA/Winbond/Nuvoton/Fintek'... No Trying family `ITE'... No Probing for Super-I/O at 0x4e/0x4f Trying family `National Semiconductor/ITE'... Yes Found unknown chip with ID 0x8518 Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y Probing for `National Semiconductor LM78' at 0x290... No Probing for `National Semiconductor LM79' at 0x290... No Probing for `Winbond W83781D' at 0x290... No Probing for `Winbond W83782D' at 0x290... No Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y Using driver `i2c-i801' for device 0000:00:1f.3: Intel Cougar Point (PCH) Module i2c-i801 loaded successfully. Module i2c-dev loaded successfully. Next adapter: i915 gmbus disabled (i2c-0) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus ssc (i2c-1) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOB (i2c-2) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus vga (i2c-3) Do you want to scan it?
(YES/no/selectively): y Next adapter: i915 GPIOA (i2c-4) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus panel (i2c-5) Do you want to scan it? (YES/no/selectively): y Client found at address 0x50 Probing for `Analog Devices ADM1033'... No Probing for `Analog Devices ADM1034'... No Probing for `SPD EEPROM'... No Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip) Next adapter: i915 GPIOC (i2c-6) Do you want to scan it? (YES/no/selectively): y Client found at address 0x50 Probing for `Analog Devices ADM1033'... No Probing for `Analog Devices ADM1034'... No Probing for `SPD EEPROM'... No Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip) Next adapter: i915 gmbus dpc (i2c-7) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOD (i2c-8) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus dpb (i2c-9) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOE (i2c-10) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus reserved (i2c-11) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 gmbus dpd (i2c-12) Do you want to scan it? (YES/no/selectively): y Next adapter: i915 GPIOF (i2c-13) Do you want to scan it? (YES/no/selectively): y Next adapter: DPDDC-B (i2c-14) Do you want to scan it? (YES/no/selectively): y Now follows a summary of the probes I have just done. Just press ENTER to continue: Driver `coretemp': * Chip `Intel digital thermal sensor' (confidence: 9) To load everything that is needed, add this to /etc/modules: #----cut here---- # Chip drivers coretemp #----cut here---- If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO)y Successful! Monitoring programs won't work until the needed modules are loaded. You may want to run 'service module-init-tools start' to load them. Unloading i2c-dev... OK Unloading i2c-i801... OK Unloading cpuid... OK al@notebook:~$ sudo /etc/init.d/module-init-tools restart Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service module-init-tools restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop module-init-tools ; start module-init-tools. The restart(8) utility is also available. module-init-tools stop/waiting al@notebook:~$ sudo service module-init-tools restart stop: Unknown instance: module-init-tools stop/waiting al@notebook:~$ sudo service module-init-tools start module-init-tools stop/waiting al@notebook:~$ sudo pwmconfig # pwmconfig revision 5857 (2010-08-22) This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm. We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been to full speed after the program has completed. /usr/sbin/pwmconfig: There are no pwm-capable sensor modules installed Is my case too desperate?
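    One thing the output above does confirm is that the coretemp driver was detected, so the CPU temperatures can at least be read from a terminal once the module is loaded. This is only a monitoring sketch, not a fix for the fan behaviour (pwmconfig reports no PWM-capable sensors on this machine), and it assumes the lm-sensors package from the question is installed.

        sudo modprobe coretemp   # load the Intel digital thermal sensor driver now
        sensors                  # print the current core temperatures
        watch -n 2 sensors       # optional: refresh the readings every 2 seconds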

    Read the article

  • links for 2010-12-08

    - by Bob Rhubart
    What's a data architect? A comic dialog by one who knows: Oracle ACE Director Lewis Cunningham. Webcast: Oracle Business Intelligence Forum - December 15, 2010 at 9:00 am PT "The Oracle Business Intelligence Online Forum is a half-day virtual event that offers you a unique opportunity to see, in one place, the full portfolio of Oracle’s Business Intelligence (BI) offerings, and to learn what sets Oracle apart from the rest. Hear Oracle executives and industry analyst, Howard Dresner, present the current state of Business Intelligence, along with a series of customers who will share their case studies of putting analytics in action." Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance | Forrester Blogs "Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment." -- Richard Fichera, Forrester Seven ways to get things started: Java EE Startup Classes with GlassFish and WebLogic "This is a blog about a topic that I realy don't like. But it comes across my ways over and over again and it's no doubt that you need it from time to time. Enough reasons for me to collect some information about it and publish it for your reference. I am talking about Startup-/Shutdown classes with Java EE applications or servers." -- Oracle ACE Director Markus "@myfear" Eisele." Monitoring Undelivered Messages in BPEL in SOA 10g (Antony Reynolds' Blog) "I am currently working with a client that wants to know how many undelivered messages they have, and if it reaches a certain threshold then they wants to alert the operator. To do this they plan on using the Enterprise Manager alert functions, but first they needs to know how many undelivered instances are out there." SOA author Antony Reynolds VirtualBox Appliances for Developers "Developers can simply download a few files, assemble them with a script , and then import and run the resulting pre-built VM in VirtualBox. This makes starting with these technologies even easier. Each appliance contains some Hands-On-Labs to start learning." -- Peter Paul van de Beek Oracle UCM 11g Remote Intradoc Client (RIDC) Integration with Oracle ADF 11g "It's great we have out of the box WebCenter ADF task flows for document management in UCM. However, for complete business scenario implementations, usually it's not enough and we need to manage Content Repository programmatically. This can be achieved through Remote Intradoc Client (RIDC) API. It's quite hard to find any practical information about this API, but I managed to get code for UCM folder creation/removal and folder information." -- Oracle ACE Director Andrejus Baranovskis Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6 "Matjaz Juric of Slovenia, head of the Cloud Computing and SOA Competence Centre at the University of Maribor, and professor at the University of Ljubljana, shares insights about cloud computing, SOA and Java EE 6." White Paper: Oracle Complex Event Processing High Availability "This whitepaper describes the high availability (HA) solutions available in Oracle CEP 11g Release 1 Patch Set 2 and  presents the results of a benchmark study demonstrating the performance of the Oracle CEP HA solutions."

    Read the article

  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of these games are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that the games could differ in the following: the way in which the result of a move is handled, the way in which the end of the game is determined, and the way in which the winner is determined. The first thing that came to my mind to design this was to use the strategy pattern, since I have a variation in algorithms (the actual rules of the game). The design could look like this: I then thought to myself that in the game of Mancala and Wari the way the winner is determined is exactly the same and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor I got the impression that I should find a different design. I then came up with this: each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just use the Mancala versions. I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely: deposit stones in holes (this is the same for all games, so would be implemented in the template method itself); determine the result of the move; determine if the game has finished because of the previous move; and, if the game has finished, determine who has won. Those last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two. One possible solution I found would be to abandon the strategy pattern and do the following: I don't really see the design difference between the strategy pattern and this. But I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, WariRuleFactory, etc. and they would create the correct instances of the rules and hand me back a RuleSet object. Let's say that I use the strategy + abstract factory pattern and I have a RuleSet object which has algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need my concrete implementations of the TurnTemplate; these classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete; is it then still a template method pattern? To summarize, am I thinking in the right way or am I missing something easy?
    If I'm on the right track, how do I combine a strategy pattern and a template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask.
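    For illustration, here is one possible shape of the combination the question describes: a template method whose variable steps are delegated to an injected RuleSet. This is a hedged sketch only; the class and method names (Board, Move, Player, RuleSet, determineWinner, ...) are assumptions taken from the question's own wording, not a definitive solution to the assignment.

        // Hedged sketch: template method delegating its variable steps to a RuleSet.
        // All class and method names are assumptions taken from the question's wording.
        class Board { void announce(Player winner) { System.out.println("Winner: " + winner); } }
        class Move {}
        class Player {}

        interface MoveResultHandler { void handle(Board board, Move move); }
        interface GameEndChecker { boolean isFinished(Board board); }
        interface WinnerDeterminer { Player determineWinner(Board board); }

        // The family of rules an abstract factory (MancalaRuleFactory, WariRuleFactory, ...) would hand back.
        final class RuleSet {
            final MoveResultHandler resultHandler;
            final GameEndChecker endChecker;
            final WinnerDeterminer winnerDeterminer;
            RuleSet(MoveResultHandler r, GameEndChecker e, WinnerDeterminer w) {
                resultHandler = r; endChecker = e; winnerDeterminer = w;
            }
        }

        // The template method: the turn sequence is fixed here, while the three
        // variable steps come from the RuleSet strategies passed in at construction.
        class Turn {
            private final RuleSet rules;
            Turn(RuleSet rules) { this.rules = rules; }

            final void play(Board board, Move move) {
                depositStones(board, move);               // identical for all games
                rules.resultHandler.handle(board, move);  // game-specific rule
                if (rules.endChecker.isFinished(board)) { // game-specific rule
                    board.announce(rules.winnerDeterminer.determineWinner(board));
                }
            }

            private void depositStones(Board board, Move move) {
                // shared sowing logic would live here
            }
        }

    Whether Turn still counts as a template method once it is concrete is exactly the question's sticking point; the sketch only shows that the two patterns can coexist by treating the RuleSet as the pluggable part and keeping the fixed turn sequence in one place.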

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing in the first part of this article new options in Essbase for general backup and restore, this second part will deal with the relatively new feature of Transaction Logging and Replay, which was released in version 11.1, enhancing existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap and provide you with an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled: In order to activate its usage, corresponding statements can be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE where appname and dbname are optional parameters that, in combination with the ENABLE or DISABLE keyword, let you set Transaction Logging for certain applications or databases or exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If appname and dbname are not defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist, or be created, before log information can be written to it. NATIVE is a reserved keyword that shouldn’t be changed. The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged. TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings. Any replay of logged transactions that may be required after restoring a database can be done only by administrators. The following options are available: In Administration Services, select Replay Transactions on the right-click menu on the database: here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database: these sequence IDs (0, 1, 2 … 7 in the original screenshot) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before having replayed the data load first. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
For example, replaying the transaction with sequence ID of 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater. Tip: After restoring a database from a backup you should always completely replay all logged transactions, which were executed after the backup, before executing new transactions. But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory: ARBORPATH/app/appname/dbname/Replay These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is: TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION] While to the [appname [dbname]] argument the same applies like before for TRANSACTIONLOGLOCATION, the valid values for the OPTION argument are the following: Make the respective setting for which files copies should be logged, considering from which location transactions are usually taking place. Selecting the NONE option prevents Essbase from saving the respective files and the data load cannot be replayed. In this case you must first manually load the data before you can replay the transactions. Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded. You can find more detailed information in the following documents: Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1) Oracle Essbase Online Documentation (rel. 11.1.2.1)) Enterprise Performance Management System Documentation (including previous releases) Or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here; (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected]. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis. Disclaimer: All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. 
As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.
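    As a worked illustration of the two configuration commands described in the article, the following Essbase.cfg lines would enable logging for Sample.Basic and archive both server- and client-side data load files. This is a hedged sketch: the log path reuses the article's own example, and choosing the SERVER_CLIENT option here is an illustrative assumption, not a recommendation.

        ; hedged sketch; the log path and the SERVER_CLIENT choice are illustrative assumptions
        TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE ENABLE
        TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT

    As noted above, the Essbase server must be restarted after editing the file for these settings to take effect.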

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday’s blog post we understood how Big Data evolution happened. Today we will understand the basics of Big Data architecture. Big Data Cycle Just like every other database-related application, a big data project has its own development cycle. The three Vs (link) certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of data capturing, transforming, integrating, analyzing and building actionable reporting on top of the data. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture. Questions to Ask How big is your total database? What is your requirement of the reporting in terms of time: real time, semi real time or at a frequent interval? How important is the data availability and what is the plan for disaster recovery? What are the plans for network and physical security of the data? What platform will be the driving force behind data and what are the different service level agreements for the infrastructure? These are just basic questions, but based on your application and business needs you should come up with a custom list of questions to ask. As I mentioned earlier, these questions may look quite simple but the answers will not be. When we are talking about a Big Data implementation there are many other important aspects which we have to consider when we decide on the architecture. Building Blocks of Big Data Architecture It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of big data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work together. The above image gives a good overview of how, in Big Data architecture, various components are associated with each other. In Big Data, various different data sources are part of the architecture, hence extract, transform and integrate is one of the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, various data are processed as well as converted to proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data architecture. In the big data architecture hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented. NoSQL in Data Management NoSQL is a very famous buzz word and it really means Not Relational SQL or Not Only SQL. This is because in Big Data architecture the data is in any format. It can be unstructured, relational or in any other format or from any other data source. To bring all the data together relational technology is not enough, hence new tools, architectures and other algorithms are invented which take care of all kinds of data. This is collectively called NoSQL. Tomorrow: over the next four days we will answer the buzz word: Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a windows desktop application with a tree view. Due to lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: Populate a tree view with nodes that resemble fields in a data structure. When a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field in the control (3) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each, custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation: //The Tree Node Base Class public abstract class TreeViewNodeBase : TreeNode { public abstract CustomControlBase Control { get; } public TreeViewNodeBase(ExtractionField field) { UpdateControl(field); } public virtual void UpdateControl(ExtractionField field) { Control.UpdateControl(field); UpdateCaption(FormatValueForCaption()); } public virtual void SaveChanges(ExtractionField field) { Control.SaveChanges(field); UpdateCaption(FormatValueForCaption()); } public virtual string FormatValueForCaption() { return Control.FormatValueForCaption(); } public virtual void UpdateCaption(string newValue) { this.Text = Caption; this.LongText = newValue; } } //The tree node implementation class public class ExtractionTypeNode : TreeViewNodeBase { private CustomDropDownControl control; public override CustomControlBase Control { get { if (control == null) { control = new CustomDropDownControl(); control.label1.Text = Caption; control.comboBox1.Items.Clear(); control.comboBox1.Items.AddRange( Enum.GetNames( typeof(ExtractionField.ExtractionType))); } return control; } } public ExtractionTypeNode(ExtractionField field) : base(field) { } } //The custom control base class public abstract class CustomControlBase : UserControl { public abstract void UpdateControl(ExtractionField field); public abstract void SaveChanges(ExtractionField field); public abstract string FormatValueForCaption(); } //The custom control generic implementation (view) public partial class CustomDropDownControl : CustomControlBase { public CustomDropDownControl() { InitializeComponent(); } public override void UpdateControl(ExtractionField field) { //Nothing to do here } public override void SaveChanges(ExtractionField field) { //Nothing to do here } public override string FormatValueForCaption() { //Nothing to do here return string.Empty; } } //The custom control specific implementation public class FieldExtractionTypeControl : CustomDropDownControl { public override void UpdateControl(ExtractionField field) { comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString()); } public override void SaveChanges(ExtractionField field) { field.Extraction = (ExtractionField.ExtractionType) Enum.Parse(typeof(ExtractionField.ExtractionType), comboBox1.SelectedItem.ToString()); } public override string FormatValueForCaption() { return 
string.Empty; } The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?

    Read the article

  • Using JQuery tabs in an HTML 5 page

    - by nikolaosk
    In this post I will show you how to create a simple tabbed interface using JQuery, HTML 5 and CSS. Make sure you have downloaded the latest version of JQuery (minified version) from http://jquery.com/download. Please find here all my posts regarding JQuery. Also have a look at my posts regarding HTML 5. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight Javascript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here. Let me move on to the actual example. This is the sample HTML 5 page: <!DOCTYPE html><html lang="en">  <head>    <title>Liverpool Legends</title>    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >    <link rel="stylesheet" type="text/css" href="style.css">    <script type="text/javascript" src="jquery-1.8.2.min.js"> </script>     <script type="text/javascript" src="tabs.js"></script>       </head>  <body>    <header>        <h1>Liverpool Legends</h1>    </header>     <section id="tabs">        <ul>            <li><a href="#first-tab">Defenders</a></li>            <li><a href="#second-tab">Midfielders</a></li>            <li><a href="#third-tab">Strikers</a></li>        </ul>   <div id="first-tab">     <h3>Liverpool Defenders</h3>     <p> The best defenders that played for Liverpool are Jamie Carragher, Sami Hyypia, Ron Yeats and Alan Hansen.</p>   </div>   <div id="second-tab">     <h3>Liverpool Midfielders</h3>     <p> The best midfielders that played for Liverpool are Kenny Dalglish, John Barnes, Ian Callaghan, Steven Gerrard and Jan Molby.        </p>   </div>   <div id="third-tab">     <h3>Liverpool Strikers</h3>     <p>The best strikers that played for Liverpool are Ian Rush, Roger Hunt, Robbie Fowler and Fernando Torres.<br/>      </p>   </div> </section>            <footer>        <p>All Rights Reserved</p>      </footer>     </body>  </html>  This is very simple HTML markup.
I have styled this markup using CSS.The contents of the style.css file follow* {    margin: 0;    padding: 0;}header{font-family:Tahoma;font-size:1.3em;color:#505050;text-align:center;}#tabs {    font-size: 0.9em;    margin: 20px 0;}#tabs ul {    float: left;    background: #777;    width: 260px;    padding-top: 24px;}#tabs li {    margin-left: 8px;    list-style: none;}* html #tabs li {    display: inline;}#tabs li, #tabs li a {    float: left;}#tabs ul li.active {    border-top:2px red solid;    background: #15ADFF;}#tabs ul li.active a {    color: #333333;}#tabs div {    background: #15ADFF;    clear: both;    padding: 15px;    min-height: 200px;}#tabs div h3 {    margin-bottom: 12px;}#tabs div p {    line-height: 26px;}#tabs ul li a {    text-decoration: none;    padding: 8px;    color:#0b2f20;    font-weight: bold;}footer{background-color:#999;width:100%;text-align:center;font-size:1.1em;color:#002233;}There are some CSS rules that style the various elements in the HTML 5 file. These are straight-forward rules. The JQuery code lives inside the tabs.js file $(document).ready(function(){$('#tabs div').hide();$('#tabs div:first').show();$('#tabs ul li:first').addClass('active'); $('#tabs ul li a').click(function(){$('#tabs ul li').removeClass('active');$(this).parent().addClass('active');var currentTab = $(this).attr('href');$('#tabs div').hide();$(currentTab).show();return false;});}); I am using some of the most commonly used JQuery functions like hide , show, addclass , removeClass I hide and show the tabs when the tab becomes the active tab. When I view my page I get the following result Hope it helps!!!!!

    Read the article

  • Cisco VPNClient from Mac won't connect using iPhone Tethering

    - by Dan Short
    I just set up iPhone tethering from my Snow Leopard Macbook Pro to my iPhone 3GS with the Datapro 4GB plan from AT&T. When attempting to connect to my corporate VPN from the MacBook Pro with Cisco VPNClient 4.9.01 (0100) I get the following log information: Cisco Systems VPN Client Version 4.9.01 (0100) Copyright (C) 1998-2006 Cisco Systems, Inc. All Rights Reserved. Client Type(s): Mac OS X Running on: Darwin 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386 Config file directory: /etc/opt/cisco-vpnclient 1 13:02:50.791 02/22/2011 Sev=Info/4 CM/0x43100002 Begin connection process 2 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0AD337FF, Src Addr: 0x0AD33702 (DRVIFACE:1158). 3 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0A2581FF, Src Addr: 0x0A258102 (DRVIFACE:1158). 4 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100004 Establish secure connection using Ethernet 5 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100024 Attempt connection with server "209.235.253.115" 6 13:02:50.792 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (500). 7 13:02:50.793 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (4500). 8 13:02:50.793 02/22/2011 Sev=Info/6 IKE/0x4300003B Attempting to establish a connection with 209.235.253.115. 9 13:02:51.293 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 10 13:02:51.894 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 11 13:02:52.495 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 12 13:02:53.096 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 13 13:02:53.698 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 14 13:02:54.299 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 16 13:02:54.299 02/22/2011 Sev=Warning/2 IKE/0xC300009A Failed to set up connection data 17 13:02:54.299 02/22/2011 Sev=Info/4 CM/0x4310001C Unable to contact server "209.235.253.115" 18 13:02:54.299 02/22/2011 Sev=Info/5 CM/0x43100025 Initializing CVPNDrv 19 13:02:54.300 02/22/2011 Sev=Info/4 CVPND/0x4340001F Privilege Separation: restoring MTU on primary interface. 20 13:02:54.300 02/22/2011 Sev=Info/4 IKE/0x43000001 IKE received signal to terminate VPN connection 21 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700008 IPSec driver successfully started 22 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 23 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000D Key(s) deleted by Interface (192.168.0.171) 24 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 25 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 26 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 27 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000A IPSec driver successfully stopped The key line is 15: 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 
    I can't find anything online about this. I found a single entry for the error message in Google, and it was a Swedish (or some other Nordic-language) site that didn't have an answer to the question. I've tried connecting through both USB and Bluetooth tethering to the iPhone, and they both return the exact same results. I don't have direct control over the firewall, but if changes are necessary to make it work, I may be able to get the powers-that-be to make adjustments. A solution that doesn't require reconfiguring the firewall would be far better, of course... Does anyone know what I can do to make this behave? Thanks, Dan

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option a flag is set in the transaction log and a row is added to msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in different database. Let’s see how this works with an example. The code comments say what’s going on. USE master GOCREATE DATABASE TestTxMark1GOUSE TestTxMark1GOCREATE TABLE TestTable1( ID INT, VALUE UNIQUEIDENTIFIER) -- insert some data into the table so we can have a starting pointINSERT INTO TestTable1SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULLFROM master..spt_valuesORDER BY RNSELECT *FROM TestTable1GO-- TAKE A FULL BACKUP of the databseBACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'GO USE master GOCREATE DATABASE TestTxMark2GOUSE TestTxMark2GOCREATE TABLE TestTable2( ID INT, VALUE UNIQUEIDENTIFIER)-- insert some data into the table so we can have a starting pointINSERT INTO TestTable2SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()FROM master..spt_valuesORDER BY RNSELECT *FROM TestTable2GO-- TAKE A FULL BACKUP of our databseBACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'GO -- start a marked transaction that modifies both databasesBEGIN TRAN TxDb WITH MARK -- update values from NULL to random value UPDATE TestTable1 SET VALUE = NEWID(); -- update first 100 values from random value -- to NULL in different DB UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;COMMITGO     -- some time goes by here -- with various database activity... -- We see two entries for marks in each database. 
-- This is just informational and has no bearing on the restore itself.SELECT * FROM msdb..logmarkhistory USE masterGO-- create a log backup to restore to mark pointBACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'GO-- drop the database so we can restore it backDROP DATABASE TestTxMark1GO USE masterGO-- create a log backup to restore to mark pointBACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'GO-- drop the database so we can restore it backDROP DATABASE TestTxMark2GO -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION-- restore the full backup RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;-- restore the log backup to the transaction markRESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn' WITH RECOVERY, -- recover to state before the transaction STOPBEFOREMARK = 'TxDb'; -- recover to state after the transaction -- STOPATMARK = 'TxDb';GO -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION-- restore the full backup RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;-- restore the log backup to the transaction markRESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn' WITH RECOVERY, -- recover to state before the transaction STOPBEFOREMARK = 'TxDb'; -- recover to state after the transaction -- STOPATMARK = 'TxDb';GO USE TestTxMark1-- we restored to time before the transaction -- so we have NULL values in our tableSELECT * FROM TestTable1 USE TestTxMark2-- we restored to time before the transaction -- so we DON'T have NULL values in our tableSELECT * FROM TestTable2   Transaction marks can be used like a crude sync mechanism for cross database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.

    Read the article

  • Setup Guide for updating local system and the repository with the incremental Solaris 11.1 SRU

    - by Gurubalan
    This guide covers the steps to implement the following setup. I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5II. Setting up local system as an IPS Repository Server (HTTP interface)III. Updating the local repository with the incremental Solaris 11.1 SRU 16.5I. Updating the local system from Solaris 11.1 to Solaris 11.1 SRU 16.5We assume that the local system is currently installed with Solaris 11.1 GA and the system doesn't have internet connectivity.What I have:1. Two parts of full repo iso files downloaded from http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html. Both files are concatenated to a single file using the following command. $ cat sol-11_1-repo-full.iso-a sol-11_1-repo-full.iso-b > sol-11_1-repo-full.iso I suggest to verify the downloaded file against its md5checksum value [http://download.oracle.com/otn/solaris/11_1/md5sum.txt] using the following command digest -a md5 <file-name>  // the output of this command should match the original checksum value for that file.2. Incremental repo sol-11_1_16_5_0-incr-repo.iso downloaded from MOS [Patch 18269379: ORACLE SOLARIS 11.1.16.5.0 REPO ISO IMAGE (SPARC/X86 (64-BIT)]. You can get the checksum value of incremental repo iso by clicking the check box "show digest details" when you download the file.3. The local system IP is 192.168.10.10 & port 81 is reserved for repo serverPlease note that this repo file (either full or incremental) is common for both SPARC and X86(64BIT).Steps to update the local system: 1. #mounting s11.1 full repo iso to mnt        $ mount -F hsfs /soft/sol-11_1-repo-full.iso /mnt 2. Setting the pkg publisher to full repo source         $ pkg set-publisher -g file:///mnt/repo solaris 3. Perform the update of the packages.        $ pkg updateII. Setting up local system (Oracle Solaris 11.1) as an IPS Repository Server(HTTP interface):Please note that we have already mounted the full repo iso at /mnt    1. # copying /mnt permanently to the disk location at /s11.1        #zfs create -o atime=off -o mountpoint=/s11.1 rpool/s11.1        #rsync -aP /mnt/* /s11.1     2. #unmounting mnt         #umount /mnt3. To allow clients to access the local repository via HTTP, enable the application/pkg/server Service Management Facility (SMF) service.        svccfg -s application/pkg/server setprop pkg/inst_root=<data_source>/repo        eg: $svccfg -s application/pkg/server setprop pkg/inst_root=/s11.1/repo4. Setting port# to 81      svccfg -s application/pkg/server setprop pkg/port=<port_number>      eg: svccfg -s application/pkg/server setprop pkg/port="81"5a. Enable the pkg/server service (if the service is disabled)     $svcs pkg/server     STATE          STIME    FMRI     disabled        19:55:03 svc:/application/pkg/server:default      $svcadm enable pkg/server5b. Refresh/Restart the service, if it is already online       $svcadm refresh application/pkg/server       $svcadm restart application/pkg/server6. Setting pkg publisher on repo server and repo clients:      pkg set-publisher -G '*' -g http://<ip>:<port> solaris      eg: $pkg set-publisher -G '*' -g 'http://192.168.10.10:81' solaris7. Verify the Solaris 11.1 version from the repository         $pkgrepo list -s http://192.168.10.10:81 | grep entire         solaris   entire     0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z You will have multiple row entries if the repository is setup with incremental SRUs.III. Updating the local repository with the incremental Solaris 11.1 SRU 16.51. 
#mounting s11.1 incremental SRU repo iso to mnt        $ mount -F hsfs <full_path_to>/sol-11_1_sruN_bldnum_respinnum-incr-repo.iso  /mnt        $ mount -F hsfs /soft/sol-11_1_16_5_0-incr-repo.iso /mnt2. Updating the local repository        $pkgrecv -s  /mnt/repo -d /s11.1/repo '*'3. Building a Search Index    $pkgrepo -s /s11.1/repo refresh     Initiating repository refresh.4. Refresh/Restart the service       $svcadm refresh svc:/application/pkg/server       $svcadm restart svc:/application/pkg/server5. Verify the repo has the incremental SRU as well.       # pkgrepo list -s http://192.168.10.10:81 | grep entire        solaris   entire      0.5.11,5.11-0.175.1.16.0.5.0:20140218T165248Z       solaris   entire      0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z

    Read the article

  • DAC pack up all your troubles

    - by Tony Davis
    Visual Studio 2010, or perhaps its apparently-forthcoming sister, "SQL Studio", is being geared up to become the natural way for developers to create databases. Central to this drive is the introduction of 'data-tier application components', or DACs. Applications are developed as normal but when it comes to deployment, instead of supplying the DBA with a bunch of scripts to create the required database objects, the developer creates a single DAC Package ("DAC Pack"); a zipped XML file containing all the database objects needed by the application, along with versioning information, policies for deployment, and so on. It's an intriguing prospect. Developers can work on their development database using their existing tools and source control, and then package up the changes into a single DACPAC for deployment and management. DBAs get an "application level view" of how their instances are being used and the ability to collectively, rather than individually, manage the objects. The DBA needing to manage a large number of relatively small databases can use "DAC snapshots" to get a quick overview of what has changed across all the databases they manage. The reason that DAC packs haven't caused more excitement is that they can only be pushed to SQL Server 2008 R2, and they must be developed or inspected using Visual Studio 2010. Furthermore, what we see right now in VS2010 is more of a 'work-in-progress' or 'vision of the future', with serious shortcomings and restrictions that render it unsuitable for anything but small 'non-critical' departmental databases. The first problem is that DAC packs support a limited set of schema objects (corresponding closely to the features available on 'Azure'). This means that Service Broker queues, CLR Objects, and perhaps most critically security (permissions, certificates etc.), are off-limits. Applications that require these objects will need to add them via a post-deployment TSQL script, rather defeating the whole idea. More worrying still is the process for altering a database with a DAC pack. The grand 'collective' philosophy, whereby a single XML file can be used for deploying and managing builds and changes, extends, unfortunately, to database upgrades. Any change to a database object will result in the creation of a new database, copying the data from the old version, nuking the previous one, and then renaming the new one. Simple eh? The problem is that even something as trivial as adding a comment to a stored procedure in a 5GB database will require the server to find at least twice as much space, as well sufficient elbow-room in the transaction log for copying the largest table. Of course, you'll need to take the database offline for the full course of the deployment, which is likely to take a long time if there is a lot of data. This upgrade/rename process breaks the log chain, makes any subsequent full restore operation highly complicated, and will also break log shipping. As with any grand vision, the devil is always in the detail. It's hard to fathom why Microsoft hasn't used a SQL Compare-style approach to the upgrade process, altering a database with a change script, and this will surely be adopted in the near future. Something had to be in place for VS2010, but right now DAC packs only make sense for Azure. For this, they're cute, but hardly compelling. Nevertheless, DBAs would do well to get familiar with VS 2010 and DAC packs. Like it or not, they're both coming. Cheers, Tony.

    Read the article

  • SQL Sentry Truth-Telling and Disk Configuration

    - by AjarnMark
    Recently, SQL Sentry told me something about my SQL Server disk configurations that I just didn’t want to believe, but alas, it was true. Several days ago I posted my First Impressions of the SQL Sentry Power Suite.  Today’s post could fall into the category of, “Hey, as long as you have that fancy tool…”  Unfortunately, it also falls into the category of an overloaded worker taking someone else’s word for the truth, not verifying it with independent fact-checking, and then making decisions based on that.  Here’s my story… I’m not exactly an Accidental DBA (or Involuntary DBA as Paul Randal calls it).  I came to this company five years ago as a lead application developer with extensive experience in database design and development.  I worked my way into management, and along the way, took over the DBA responsibilities.  Fortunately, our systems run pretty smoothly most of the time, but I’m always looking for ways to make them better and to fit into my understanding of best practices.  When I took over as DBA, I inherited a SQL 2000 server with about 30 databases on it supporting our main systems, and a SQL 2005 server with multiple instances.  Both of these servers were configured with the Operating System and Application files on the C drive, data files on a different drive letter, and log files on a third drive letter.  Even before I took over as DBA, I verified that this was true with a previous server administrator, and that these represented actual separate disks.  He stated that they did, and I thought that all was well. Then one day, I’m poking around inside the SQL Sentry Performance Advisor, checking out features as I am evaluating whether to purchase the product, and I come across a Disk Configuration section.  The first thing I notice is that the drives do not have the proper partition offset, which was not at all surprising to me given the age of the installation and the relative newness of that topic.  But what threw me for a loop was that the graphic display appeared to be telling me that I did not in fact have three separate drives (or arrays) but rather had two, and that the log files were merely on a separate volume on the same physical array as the OS.  I figured that I must be reading it wrong so I scanned the Help file, but that just seemed to confirm my interpretation.  Then I thought, “there must be something wrong with the demo version of the software!  This can’t be right!”  But just to double-check, I went to our current server admin to talk it over with him, and sure enough, SQL Sentry was telling the truth! I was stunned!  I quickly went through the grieving process…denial…anger…reconciliation.  Here was something that I thought was such a basic truth that was turned upside down.  OK, granted, this wasn’t disastrous.  Our databases didn’t suddenly grind to a halt.  I didn’t get calls late at night inquiring about the sudden downturn in performance.  But it was a bit of a shock to the system, in a good way, to jolt me out of taking what I had believed as the truth for granted, and instead to Trust, but Verify! Yes, before someone else points it out, I know that there are”free” disk management tools built-in to Windows that would have told me the same thing if I had only looked at them; I did not have to buy a fancy tool to tell me that, but the fact is, until I was evaluating the tool, I had just gone with what I was told, and never bothered to check what was actually there. So, what things do you believe to be true but you actually never verified?

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >