Search Results

Search found 1101 results on 45 pages for 'essential'.

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore or vote to close this question: I consider this a valid question to ask, because code clarity is an important topic of discussion, it's essential to writing maintainable code, and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem: LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query).

    No Formatting

        var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory);

    Elevated Formatting

        var allInventory = system.InventorySources
            .Select(src => new {
                Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                Region = src.Value.Region })
            .GroupBy(
                i => i.Region,
                i => i.Inventory);

    Block Formatting

        var allInventory =
            system.InventorySources
                .Select(
                    src => new
                    {
                        Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                        Region = src.Value.Region
                    })
                .GroupBy(
                    i => i.Region,
                    i => i.Inventory
                );

    List Formatting

        var allInventory = system.InventorySources
            .Select(src => new {
                Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                Region = src.Value.Region })
            .GroupBy(i => i.Region,
                     i => i.Inventory);

    I want to come up with a standard for LINQ formatting that maximizes readability and understanding and looks clean and professional. So far I can't decide, so I turn the question over to the professionals here.

  • Preserving the order of annotations

    - by Ragunath Jawahar
    When obtaining the list of fields using getFields() and getDeclaredFields(), the order of the fields in a Java class is undefined. I need this order in my validation library. Some claim that the order is preserved in JDK 6 and above, but it is not guaranteed across VMs, so I cannot rely on it - and the order of annotations across fields is absolutely essential for the library. One way to get around this is to have an order or an index attribute in my annotation. What worries me is that the code could become a bit cumbersome to maintain in the following case: if the user wants to insert a new annotated field, he might have to renumber all the other annotations in the class. I could make the order or index a floating-point number (float or double), but it wouldn't look good to have orders such as 1, 1.5, 2, etc. What would be an elegant solution for this problem? Here is an example so that you can get an idea of the problem:

        @Required
        @TextRule(minLength = 6, message = "You need at least 6 characters.")
        private EditText usernameEditText;

        @Password
        private EditText passwordEditText;

        @ConfirmPassword
        private EditText confirmPasswordEditText;

        @Email
        private EditText emailEditText;
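
    A minimal sketch of the index-attribute workaround described above, assuming a hypothetical @Order annotation (the name and the sort helper are illustrative, not part of the poster's library):

        import java.lang.annotation.*;
        import java.lang.reflect.Field;
        import java.util.*;

        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.FIELD)
        @interface Order {
            int value();
        }

        class FieldOrdering {
            // Sorts declared fields by their @Order value; unannotated fields sort last.
            static List<Field> orderedFields(Class<?> type) {
                List<Field> fields = new ArrayList<>(Arrays.asList(type.getDeclaredFields()));
                fields.sort(Comparator.comparingInt((Field f) -> {
                    Order o = f.getAnnotation(Order.class);
                    return o != null ? o.value() : Integer.MAX_VALUE;
                }));
                return fields;
            }
        }

    Renumbering pain can also be softened by numbering in gaps (10, 20, 30, ...), the old BASIC trick, which keeps the values integers.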

  • The cross-thread usage of "HttpContext.Current" property and related things

    - by smwikipedia
    I read the following statement in Essential ASP.NET with Examples in C#:

        Another useful property to know about is the static Current property of the HttpContext class. This property always points to the current instance of the HttpContext class for the request being serviced. This can be convenient if you are writing helper classes that will be used from pages or other pipeline classes and may need to access the context for whatever reason. By using the static Current property to retrieve the context, you can avoid passing a reference to it to helper classes. For example, the class shown in Listing 4-1 uses the Current property of the context to access the QueryString and print something to the current response buffer. Note that for this static property to be correctly initialized, the caller must be executing on the original request thread, so if you have spawned additional threads to perform work during a request, you must take care to provide access to the context class yourself.

    I am wondering about the root cause of the bold part (the last sentence above), and one thing leads to another. Here are my thoughts: we know that a process can have multiple threads, and each of these threads has its own stack. These threads also have access to a shared memory area, the heap. The stack, as I understand it, is where all the context for that thread is stored. For a thread to access something in the heap it must use a pointer, and the pointer is stored on its stack. So when we make cross-thread calls, we must make sure that all the necessary context info is passed from the caller thread's stack to the callee thread's stack. But I am not quite sure whether I have made any mistake. Any comments will be deeply appreciated. Thanks.

    ADDED: Here "the stack" refers to the user stack.
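
    A small sketch of the pattern the book warns about (illustrative, not Listing 4-1 from the book): capture the context on the request thread and hand the reference to the worker yourself, rather than reading HttpContext.Current on the spawned thread.

        using System.Threading;
        using System.Web;

        public class ContextCaptureDemo : IHttpHandler
        {
            public bool IsReusable { get { return false; } }

            public void ProcessRequest(HttpContext context)
            {
                // On the request thread, HttpContext.Current == context.
                // A spawned thread does not inherit that, so capture the reference here:
                HttpContext captured = context;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    // Uses the captured reference instead of HttpContext.Current,
                    // which may be null on this thread. (Beware: if the request
                    // completes before this runs, the context is no longer valid.)
                    string url = captured.Request.RawUrl;
                });
            }
        }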

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one of them was the DHCP server (which I realized when somebody contacted me saying they couldn't connect - whups). Because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience. I edited /etc/network/interfaces and /etc/hosts to reflect my changes, then ran:

        $ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces...

    So yay. Then I try to start, and it gives me the mumbo jumbo about using services instead (why didn't it do that for the stop?). So instead I run:

        $ sudo service networking start
        networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit:

        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking stop
        stop: Unknown instance:
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "service" is, right?) isn't working with stop.

        $ cat /etc/hosts
        127.0.0.1       localhost
        127.0.1.1       FOOBAR
        198.3.9.2       FOOBAR   # Added entry July 19 2011

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        $ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp
        # hostname FOOBAR
        auto eth0
        iface eth0 inet static
            address 198.3.9.2
            netmask 255.255.255.0
            network 198.3.9.0
            broadcast 198.3.9.255
            gateway 198.3.9.15

    No, I didn't save backups; it was just a minor change, so I just commented out the old DHCP setting.

    Edit: I set everything back to the original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
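
    A hedged aside (an assumption about Ubuntu releases of this vintage, not from the original thread): the networking job under Upstart mostly fires at boot, so its "stop/waiting" status says little about live interfaces. The per-interface tools give a clearer picture; assuming the interface is eth0:

        sudo ifdown eth0      # deconfigure just this interface
        sudo ifup eth0        # bring it up again with the new static settings
        ip addr show eth0     # confirm the address actually applied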

  • dnsmasq acts as the DHCP server for selected nodes overriding the existing DHCP server on the same LAN?

    - by user183394
    I am trying to set up a small "lab" at home. Like many modern homes, I have a regular DSL service, which comes with a 2Wire 3600HGV router that also acts as a DHCP server. Because:

    1. I would like to PXE boot a few computers in my "lab",
    2. the 2Wire is inflexible to the adjustments that I want to make, and
    3. I have used dnsmasq at work,

    I would like to use dnsmasq as the DHCP server for the few nodes in my "lab", if feasible. In the dnsmasq man page, there is the following:

        -K, --dhcp-authoritative
        (IPv4 only) Should be set when dnsmasq is definitely the only DHCP server on a network. It changes the behaviour from strict RFC compliance so that DHCP requests on unknown leases from unknown hosts are not ignored. This allows new hosts to get a lease without a tedious timeout under all circumstances. It also allows dnsmasq to rebuild its lease database without each client needing to reacquire a lease, if the database is lost.

    As far as I know, the ISC DHCP server can use the following to do what I would like to accomplish:

        authoritative;
        [...]
        subnet 192.168.1.0 netmask 255.255.255.0 {
          host nb0 {
            # only give DHCP information to this computer:
            hardware ethernet e8:9a:8f:17:70:42;
            fixed-address 192.168.1.10;
            option subnet-mask 255.255.255.0;
            option routers 192.168.1.254;
            option domain-name-servers 192.168.1.254;
            # Non-essential DHCP options
            filename "/pxelinux.0";
          }
        [...]

    But I much prefer dnsmasq's "all-in-one-ness". My question: do I have to couple the -K option with something else? As shown in the example above, the ISC DHCP server requires the MAC addresses of managed nodes to be explicitly specified. Does dnsmasq have something similar? FYI, the machine on which I plan to run dnsmasq runs CentOS 6.3 64-bit. It has a statically assigned IP address: 192.168.1.3.
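
    For what it's worth, a hedged sketch of the dnsmasq-side equivalent (the directives are standard dnsmasq configuration, but the values are illustrative and untested here; check the man page of the installed version):

        # /etc/dnsmasq.conf -- act as DHCP/TFTP for the lab only
        dhcp-authoritative
        dhcp-range=192.168.1.100,192.168.1.200,12h      # lease pool
        dhcp-host=e8:9a:8f:17:70:42,nb0,192.168.1.10    # pin nb0 by MAC, like ISC's host block
        dhcp-ignore=tag:!known                          # ignore hosts not listed via dhcp-host
        dhcp-boot=pxelinux.0                            # PXE image, like ISC's filename option
        enable-tftp
        tftp-root=/var/lib/tftpboot

    The dhcp-ignore=tag:!known line is what restricts service to explicitly listed MAC addresses, which is the behaviour the question asks about.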

  • EC2 Filesystem / Files stored on the wrong partition after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. The AMI came from a small instance, and I launched the new instance as a medium. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS, these are the differences:

        Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform

        Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue. When I log into the new, bigger instance, it still reports having only 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configuration, and I see a new, larger partition /mnt which is essentially empty:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.9G  5.9G  1.6G  79% /
        none                  846M  120K  846M   1% /dev
        none                  879M     0  879M   0% /dev/shm
        none                  879M   76K  878M   1% /var/run
        none                  879M     0  879M   0% /var/lock
        none                  879M     0  879M   0% /lib/init/rw
        /dev/sda2             335G  195M  318G   1% /mnt
        /dev/sdf               16G  9.9G  5.1G  67% /var2

    This EC2 instance is a web server, and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is my mount output as well:

        root@myserver:/var# mount -l
        /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        /dev/sda2 on /mnt type ext3 (rw)
        /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically I want my old files on this new partition. Thanks in advance.
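
    A hedged sketch of the move itself (the web server and paths are assumptions; also note that /mnt on EC2 is ephemeral instance storage, so anything placed there is lost when the instance stops - that may argue against this plan for site data):

        sudo service apache2 stop             # assuming Apache; stop writes during the copy
        sudo rsync -a /var2/ /mnt/var2/       # copy the site, preserving ownership/permissions
        # then change the web server's document root from /var2 to /mnt/var2
        # (e.g. the DocumentRoot line in the vhost config) and restart:
        sudo service apache2 start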

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have put together the following all-in-one install script, and it raises an error. Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):

        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66

        # openssl, required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev

        # Compilation stuff
        sudo apt-get install zlib1g-dev

        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1

        # Make sure the default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby

        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger

        # Activate Passenger in nginx; select option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module

    And this is the error message I got:

        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler':
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: 'ngx_http_request_t' has no member named 'zero_in_uri'
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html

    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc

  • Enterprise Tape Backup solutions

    - by Tom O'Connor
    I'm currently attempting to re-architect a backup solution where I work. We've got 2 NAS devices: one in the office, one in the datacentre. The servers in the DC back up to the DC NAS, which is then replicated to the office NAS. The office NAS exports shares as CIFS and NFS; this bit is fine.

    At some point I'll have to expand our storage capacity. Currently we've got about 1.4TB of storage space, which is about 96% full. Previously, the tape backup was a script that ran tar a few times and squirted data onto a tape. It worked, but was by no means a perfect solution: restores are a bit of a pest, and adding new data to the backup requires editing the script as root. It's just all a bit non-ideal.

    I've been evaluating a number of "enterprise"-ready backup solutions, such as Yosemite Backup from Barracuda, Acronis Backup/Restore, and something from Arkeia. In the process of evaluating these, I've found 2 big problems:

    1. Not all of them allow backup of mounted devices (such as an NFS-mounted NAS).
    2. Many of these applications don't like our tape device.

    For the most part, (1) is essential. Our NAS has a feeble processor and can't run applications like backup agents. I suspect that the biggest problem is the tape device, which is an HP C7438A DAT72 connected via USB.

    Questions:

    - Has anyone else got a USB DAT72 device working with similar software?
    - Is there a better way to back up data from an "appliance" NAS device on which you can't run an agent?
    - Would I be totally out of my mind to specify a cheap HP or Dell server with a couple of 1TB hard disks and a SAS card to then talk to an HP Ultrium (or similar) device? The biggest drawback to this would be cost (400ish for the server, 200 for the SAS connectivity and 1700 for an LTO4 device).

    Notes: I'd love to be able to say that I'd get rid of tapes entirely and use some form of hard disk backup. In a previous job we had LaCie USB drives, which were decidedly unreliable.

  • Vagrant-aws not provisioning

    - by SuperCabbage
    I'm trying to spin up and provision an EC2 instance with Vagrant. It successfully creates the instance, and I can then use vagrant ssh to SSH into it, but Puppet doesn't seem to carry out any provisioning. Upon running vagrant up --provider=aws --provision I get the following output:

        Bringing machine 'default' up with 'aws' provider...
        WARNING: Nokogiri was built against LibXML version 2.8.0, but has dynamically loaded 2.9.1
        [default] Warning! The AWS provider doesn't support any of the Vagrant
        high-level network configurations (`config.vm.network`). They will be silently ignored.
        [default] Launching an instance with the following settings...
        [default]  -- Type: m1.small
        [default]  -- AMI: ami-a73264ce
        [default]  -- Region: us-east-1
        [default]  -- Keypair: banderton
        [default]  -- Block Device Mapping: []
        [default]  -- Terminate On Shutdown: false
        [default] Waiting for SSH to become available...
        [default] Machine is booted and ready for use!
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/ => /vagrant
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/manifests/ => /tmp/vagrant-puppet/manifests
        [default] Rsyncing folder: /Users/benanderton/development/projects/my-project/aws/modules/ => /tmp/vagrant-puppet/modules-0
        [default] Running provisioner: puppet...
        An error occurred while executing multiple actions in parallel.
        Any errors that occurred are shown below.

        An error occurred while executing the action on the 'default'
        machine. Please handle this error then try again:

        No error message

    I can then SSH into the instance by using vagrant ssh, but none of my provisioning has taken place, so I'm assuming that errors occurred but I'm not being given any useful information relating to them.

    My Vagrantfile is as follows:

        Vagrant.configure("2") do |config|
          config.vm.box = "ubuntu_aws"
          config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"

          config.vm.provider :aws do |aws, override|
            aws.access_key_id = "REDACTED"
            aws.secret_access_key = "REDACTED"
            aws.keypair_name = "banderton"
            override.ssh.private_key_path = "~/.ssh/banderton.pem"
            override.ssh.username = "ubuntu"
            aws.ami = "ami-a73264ce"
          end

          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "manifests"
            puppet.module_path = "modules"
            puppet.options = ['--verbose']
          end
        end

    My Puppet manifest is as follows:

        package { [
          'build-essential',
          'vim',
          'curl',
          'git-core',
          'nano',
          'freetds-bin' ]:
          ensure => 'installed',
        }

    None of the packages are installed.
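
    One hedged thing to try (an assumption, not from the original post): plain EC2 Ubuntu AMIs ship without Puppet, and a missing puppet binary on the guest can surface as exactly this kind of empty provisioner error. A shell provisioner placed before the puppet block would rule that out:

        # Inside the Vagrant.configure("2") block, before the puppet provisioner:
        config.vm.provision :shell,
          inline: "which puppet || (apt-get update && apt-get install -y puppet)"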

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are:

    1. Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc.
    2. Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device.
    3. Full text of web pages must be stored.

    Bonus features would be:

    - Bookmarks and page content should be full-text searchable
    - Maintain an archive indefinitely
    - Distinguish between what's read vs. unread
    - Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary HTML removed, pages better formatted for reading

    My current process (which addresses most of these requirements) is as follows:

    - I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read".
    - Gmail filters are set up such that, depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately.
    - On each of my browsers/devices, I have an address book entry for [email protected] and [email protected].
    - If I want to clean up the page content, I use the Readability bookmarklet, which does a great job of giving me the essential content only.
    - Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses.
    - Where I don't have Firefox (e.g. iPhone or other mobile device), I use the native ability to send the current link via email (most/all apps have it, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body.

    The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full-text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned-up page due to Readability, the ability to read offline (assuming I use an IMAP client against Gmail), and permanent archiving of content, including what's been read vs. unread. The missing pieces are:

    - The Send Page by Email Firefox extension seems to only send X bytes of a web page, or some portion, so it limits my full-text searchability.
    - Where I don't have Firefox, I can only send the link, so no full-text search at all in those cases.

    Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something more broad like Gmail, which as a generalized service with Google behind it pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as stores the entire page content, unlike my Firefox extension. Thoughts on addressing the gaps and improving this process further?

  • Vista stuck at "Shutting down..." screen. Any way to get verbose logging?

    - by CapBBeard
    Hi all, my home machine has been running fine for about 3 years, no problems at all. Within the last couple of weeks it's had real trouble trying to shut down: it'll get so far and then just sit there at the "Shutting down..." screen for hours. I've left it overnight, I've tried in safe mode, all to no avail. These days I just wait for the disk activity to finish up and then hold the power button to turn it off. It feels dirty, especially because there's a RAID involved!

    The hardware itself is in pretty good shape and of decent spec - Core 2 Quad, 4GB RAM, 1TB RAID 1+0 - so it's not quite like a 7-year-old PC coming to end of life! In the last month, the hardware hasn't changed except for a new monitor. Admittedly I haven't tried unplugging the monitor, but I've never heard of that preventing a shutdown. I might give it a whirl later I guess, as a last resort.

    I've uninstalled old apps, done updates, checked the event log, looked in device manager, uninstalled all non-present devices, disabled various non-critical devices (imaging, audio, etc.), unplugged peripherals, stopped non-essential services, unplugged the network, disabled the network adapter entirely, run chkdsk, verified my RAID - the list goes on. But not a single lead. I'm pretty stumped. It could be hardware, but I have no other evidence to suggest so; when the PC is running, it runs fine. Temperatures are good, gaming is smooth as always, disk performance is fine. The event log even makes it look like the shutdown completed (it gets to the point where the event log service stops). In fact, the PC doesn't appear to realise that I cut the power to it.

    So my question is: does anyone know if there is a way I can get some verbose output (or a log) from shutdown to give me some idea of what is causing the issue? I'm guessing it's stuck unloading some app/driver, but it would be good to get some specifics! Unless anyone has any other ideas? I suspect a reinstall would resolve the issue; however, I'm looking to get a new PC built in the next month or so, and the reinstall would be quite a big job, so I'd rather just wait until then if it comes to that. Would still be nice to get this sorted in the meantime though. Cheers!
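
    One avenue worth checking (an assumption to verify against Vista, not something from this thread): Windows has a policy value that makes the startup/shutdown screen name each step as it runs, which can reveal the service or driver it hangs on:

        :: Run from an elevated command prompt, then reboot to see verbose
        :: status messages during shutdown.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v VerboseStatus /t REG_DWORD /d 1 /f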

  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings, I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it. I also have a 320GB SATA hard drive with the OS/applications/files (the good data I want to keep). The 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive, my computer will not start and says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from "the mix" and have one hard drive. How do I recover? Thank you. I have access to a Linux live CD right now and can make any changes; however, it won't boot into my OS - it fails.

    UPDATE: here's my grub.conf:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /boot/, eg.
        #          root (hd1,0)
        #          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
        #          initrd /initrd-version.img
        #boot=/dev/sda1
        default=0
        timeout=5
        splashimage=(hd1,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
        title Fedora (2.6.30.9-102.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
        title Fedora (2.6.27.21-170.2.56.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
        title Fedora (2.6.27.19-170.2.35.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
        title Upgrade to Fedora 10 (Cambridge)
            kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
            initrd /upgrade/initrd.img
        title Win
            rootnoverify (hd0,0)
            chainloader +1
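
    A hedged reading of that config (worth verifying before acting on it): GRUB legacy counts disks, so with the 120GB drive pulled, the 320GB disk that was (hd1) becomes (hd0), and every (hd1,0) reference above points at a disk that no longer exists. One sketch of a fix from the live CD, with device names as assumptions (confirm with fdisk -l):

        # Mount the /boot partition of the remaining 320GB disk and repoint GRUB:
        sudo mount /dev/sda1 /mnt
        sudo sed -i 's/(hd1,0)/(hd0,0)/g' /mnt/grub/grub.conf
        # Re-embed GRUB legacy into the MBR of the now-first disk:
        sudo grub
        grub> root (hd0,0)
        grub> setup (hd0)
        grub> quit

    Note that the "title Win" entry chainloads (hd0,0), which used to be the 120GB disk; if Windows lived there, that entry goes away with the drive.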

  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF-related sessions at a glance. Note the meet-and-greet session added for Wednesday, October 3rd, from 4:30 pm to 5:30 pm.

    Oracle ADF and Fusion Development General Sessions

    Mon 1 Oct, 2012
    - 10:45 AM - 11:45 AM | General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud | Marriott Marquis - Salon 8
    - 12:15 PM - 1:15 PM | General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology | Moscone West - 3014
    - 1:45 PM - 2:45 PM | General Session: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies | Moscone West - 3002/3004
    - 4:45 PM - 5:45 PM | General Session: Building Mobile Applications with Oracle Cloud | Moscone West - 2002/2004

    Conference Sessions

    Mon 1 Oct, 2012
    - 12:15 PM - 1:15 PM | Understanding Oracle ADF and Its Role in Oracle Fusion | Moscone South - 306
    - 1:45 PM - 2:45 PM | Building Performant Oracle ADF Business Components to Meet Tomorrow's Needs | Marriott Marquis - Golden Gate C3
    - 3:15 PM - 4:15 PM | End-to-End Oracle ADF Development in Eclipse | Marriott Marquis - Golden Gate C3
    - 4:45 PM - 5:45 PM | Classic Mistakes with Oracle Application Development Framework | Marriott Marquis - Salon 7

    Tues 2 Oct, 2012
    - 10:15 AM - 11:15 AM | One Size Doesn't Fit All: Oracle ADF Architecture Fundamentals | Marriott Marquis - Golden Gate C2
    - 10:15 AM - 11:15 AM | Oracle Business Process Management/Oracle ADF Integration Best Practices | Marriott Marquis - Golden Gate C3
    - 11:45 AM - 12:45 PM | Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF | Moscone South - 306
    - 11:45 AM - 12:45 PM | Secrets of Successful Projects with Oracle Application Development Framework | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications | Moscone West - 3003
    - 1:15 PM - 2:15 PM | The Future of Forms Is … Oracle Forms (and Friends) | Moscone South - 306
    - 5:00 PM - 6:00 PM | Best Practices for Integrating SOAP and REST Service into Oracle ADF | Marriott Marquis - Golden Gate C2

    Wed 3 Oct, 2012
    - 10:15 AM - 11:15 AM | Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite | Moscone West - 3001
    - 10:15 AM - 11:15 AM | Visualize This! Best Practices for Data Visualization in Desktop and Mobile Apps | Marriott Marquis - Golden Gate C3
    - 10:15 AM - 11:15 AM | Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips | Marriott Marquis - Golden Gate C2
    - 11:45 AM - 12:45 PM | How to Migrate an Oracle Forms Application to Oracle ADF | Marriott Marquis - Golden Gate C2
    - 1:15 PM - 2:15 PM | Oracle ADF: Lessons Learned in Real-World Implementations | Moscone South - 309
    - 3:30 PM - 4:30 PM | Oracle ADF Implementations Around the Globe: Best Practices | Marriott Marquis - Golden Gate C2
    - 3:30 PM - 4:30 PM | Oracle Developer Cloud Services | Marriott Marquis - Salon 7
    - 4:30 PM - 5:30 PM | Oracle JDeveloper and Oracle ADF: What's New | Hilton San Francisco - Continental Ballroom 5
    - 5:00 PM - 6:00 PM | Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight | Moscone West - 2020
    - 5:00 PM - 6:00 PM | Extending Social into Enterprise Applications and Business Processes | Marriott Marquis - Golden Gate C3
    - 5:00 PM - 6:00 PM | The Tie That Binds: An Introduction to Oracle ADF Bindings | Marriott Marquis - Golden Gate C2

    Thur 4 Oct, 2012
    - 11:15 AM - 12:15 PM | Using Oracle ADF with Oracle E-Business Suite: The Full Integration View | Moscone West - 3003
    - 11:15 AM - 12:15 PM | Deep Dive into Oracle ADF: Advanced Techniques | Marriott Marquis - Golden Gate C2
    - 12:45 PM - 1:45 PM | Monitor, Analyze, and Troubleshoot Your Oracle ADF Application | Marriott Marquis - Golden Gate C2
    - 2:15 PM - 3:15 PM | Oracle WebCenter Portal: Creating and Using Content Presenter Templates | Marriott Marquis - Golden Gate C2

    HOL (Hands-on Labs)

    Mon 1 Oct, 2012
    - 10:45 AM - 11:45 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 1:45 PM - 2:45 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    - 3:15 PM - 4:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 3:15 PM - 4:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 4:45 PM - 5:45 PM | Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab | Marriott Marquis - Salon 3/4

    Tues 2 Oct, 2012
    - 10:15 AM - 11:15 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Wed 3 Oct, 2012
    - 10:15 AM - 11:15 AM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 11:45 AM - 12:45 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 1:15 PM - 2:15 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    - 3:30 PM - 4:30 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Thur 4 Oct, 2012
    - 11:15 AM - 12:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    - 11:15 AM - 12:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    - 12:45 PM - 1:45 PM | Oracle ADF for Java EE Developers with Oracle Enterprise Pack for Eclipse | Marriott Marquis - Salon 3/4

    BOF (Birds-of-a-Feather)

    Mon 1 Oct, 2012
    - 6:15 PM - 7:00 PM | How to Get Started with Oracle ADF | Marriott Marquis - Club Room
    - 7:15 PM - 8:00 PM | Building Next-Generation Applications with Oracle ADF and Oracle BPM | Marriott Marquis - Golden Gate C3
    - 7:15 PM - 8:00 PM | The Future of Oracle Forms: Upgrade, Modernize, or Migrate? | Marriott Marquis - Golden Gate C2
    - 7:15 PM - 8:00 PM | Oracle ADF Faces: One Site for Many Devices | Marriott Marquis - Golden Gate C1

    User Group Forum (Sunday Only)

    Sun 30 Sept, 2012
    - 9:00 AM - 10:00 AM | Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World | Moscone South - 305
    - 10:15 AM - 11:15 AM | Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications | Moscone South - 305
    - 11:30 AM - 12:30 PM | ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications | Moscone South - 305
    - 12:45 PM - 3:45 PM | ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo | Moscone South - 305
    - 3:15 PM - 4:15 PM | Mobile Development with Oracle JDeveloper and Oracle ADF | Moscone West - 2010

    Demos

    - Developer | Moscone North, Upper Lobby - N-002
    - Oracle ADF Mobile Development | Moscone North, Upper Lobby - N-001
    - Oracle Eclipse Projects | Hilton San Francisco, Grand Ballroom - HHJ-008
    - Oracle Enterprise Pack for Eclipse | Moscone South, Right - S-208
    - Oracle JDeveloper and Oracle ADF | Moscone South, Right - S-207

    Exhibits

    - Accenture | Moscone South - 1813 and Moscone South - 2221
    - Infosys | Moscone South - 1701 and Moscone South - SMR-005
    - Innowave Technology | Moscone South - 2309
    - ODTUG | Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion

    Oracle ADF Developers Meet Up

    Wednesday, Oct 3, 4:30 PM - 5:30 PM | OTN Lounge
    Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say 'hi!' and enjoy free beer.

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series for another on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET

    - ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources. Lots of useful pointers.
    - Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here.
    - 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET websites. Highly recommended reading.
    - Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article). You can use this to significantly improve the load time of your pages on the client.
    - Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET. A very useful link to bookmark. Also check out James Michael's DateTime is Packed with Goodies blog post for other DateTime tips.
    - Examining ASP.NET's Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET's built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership.

    ASP.NET with jQuery

    - An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project.
    - Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them.
    - Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated "news ticker" style UI with ASP.NET and jQuery.
    - Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView's header to automatically check/uncheck all checkboxes contained within rows of it.
    - Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx).

    ASP.NET MVC

    - ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps.
    - ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3. This makes it easy to post JSON payloads to MVC action methods.
    - Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications.
    - Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC.

    Silverlight

    - Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter. You can watch all of the talks online. More details on my keynote and Silverlight 5 announcements can be found here.
    - 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).
    - Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit.

    Visual Studio

    - JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript IntelliSense support within Visual Studio.
    - HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010.
    - Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Visual blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.
    - Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010.

    Hope this helps,

    Scott

  • Mobile HCM: It’s not the future, it is right now

    - by Natalia Rachelson
    A guest post by Steve Boese, Director Product Strategy, Oracle

    I'll bet you reached for your iPhone or Android or BlackBerry and took a quick look at email or Facebook or last night's text messages before you even got out of bed this morning. Come on, admit it, it's ok, you are among friends here. See, feel better now? But seriously, the incredible growth and near-ubiquity of increasingly powerful and capable mobile devices - devices that for many of us are essential to our daily lives - has profoundly changed the way we communicate, consume information, socialize, and, more and more, conduct business and get our work done. And if you doubt that profound change has happened, just think for a moment about the last time you misplaced your iPhone. The shivers, the cold sweats, the panic... We have all been there.

    And indeed your personal experience with mobile technology is echoed throughout the world - here are a few data points to consider:

    - Market research firm IDC estimates 1.8 billion mobile phones will be shipped in 2012.
    - A recent Pew study reports 46% of Americans own a smartphone of some kind.
    - And finally, in the USA, ownership of tablets like the iPad has doubled from 10% to 19% in the last year.

    So truly, for the Human Resources leader the question is no longer "Should HR explore ways to exploit mobile devices and their always-on nature to better support and empower the modern workforce?", but rather "How can HR best take advantage of smartphone and tablet capability to provide information, enable transactions, and enhance decision making?". Because even though moving HCM applications to mobile devices seems inherently logical given today's fast-moving and mobile workforces, and promises to deliver incredible value to the organization, HR leaders also have to consider many factors before devising their Mobile HCM strategy and embarking on mobile HR technology projects. Here are just some of the important considerations for HR leaders as you build your strategies and evaluate mobile HCM solutions:

    - Does your organization provide mobile devices to the workforce today, and if so, will the current set of deployed devices have the necessary capability and ecosystems to support your mobile HCM initiatives?
    - Will you allow workers to use or bring their own mobile devices (commonly abbreviated as 'BYOD'), and if so, are your IT and Security organizations in agreement and capable of supporting that strategy?
    - Do you know which workers need access to mobile HCM applications? Often mobile HCM capability flows down in an organization, with executives and other 'road-warrior' types having the most immediate needs, followed by field sales staff, project managers, and even potential job candidates.

    But just as an organization will have to spend time understanding 'who' should have access to mobile HCM technology, the 'what' of the way the solutions should be deployed to these groups will also vary. What works and makes sense for the executive (company-wide dashboards and analytics on an iPad) might not be as relevant for a retail store manager (employee schedules, location-level sales and inventory data, transaction approvals, etc.).

    With Oracle Fusion HCM, we are taking an approach to mobile HR that encompasses not just the mobile solution needs of the various types of worker, but also incorporates the fundamental attributes of great mobile applications - the ability to support end-to-end transactions, apps that respond with lightning-fast speed, functions that are embedded in a worker's daily activities, and features that can be mashed up easily with other business areas like Finance and CRM. Finally, and perhaps most importantly for the Oracle Fusion HCM team, delivering mobile experiences that truly enhance, enable, and empower the mobile workforce, and that deliver on the design mantras of the best-in-class consumer applications, continues to shape and drive design decisions.

    Mobile is no longer the future; it is right now, and the cutting-edge HR leader of today will need to consider how mobile fits her HCM technology strategy from here on out. You can learn more about our ideas and plans for Oracle Fusion HCM mobile solutions at https://fusiontap.oracle.com/.

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting, and they give me a good idea of what other readers might be thinking as well. After my earlier article Simple Example to Configure Resource Governor - Introduction to Resource Governor, I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I blog about it. I asked for permission to publish his name, but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption.

    Jeff: Your script does not work at all. I think there is a bug in SQL Server.
    Pinal: Would you please explain in detail?
    Jeff: Your code does not limit the CPU usage?
    Pinal: How did you measure it?
    Jeff: Well, we have third-party tools for it, but let us say I have limited the resources for Reporting Services and used the script described in your blog. After that I ran only the Reporting Services workload, and the CPU was still used at more than 100%; it was not limited to 30% as described in your script. Clearly something is wrong somewhere.
    Pinal: Did you say you ONLY ran the Reporting Services load?
    Jeff: Yeah. To validate, I ran ONLY the Reporting Services load, and the CPU did not throttle at 30% as per your script.
    Pinal: Oh! I get it. Here is the answer - CAP_CPU_PERCENT = 30. Use it.
    Jeff: What is that? I think your earlier script says it will throttle the Reporting Services workload and the application/OLTP workload and balance them.
    Pinal: Exactly, that is correct.
    Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense!
    Pinal: Hmm... feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how resources are allocated. Let me explain with examples.

    Configuration: [Read Earlier Post]
    Reporting workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30
    Application/OLTP workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100

    Example 1: If there is only a reporting workload on the server, SQL Server will not limit its CPU usage to 30%; the instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU.

    Example 2: If there is a reporting workload and a heavy application/OLTP workload, SQL Server will allocate a maximum of 30% of CPU resources to the reporting workload and allocate the remaining resources to the heavy application/OLTP workload.

    The reason for this enhancement is better utilization of resources. Think about it: if there is only a single workload, which we have limited to a maximum CPU usage of 30%, the unused available CPU is wasted. In this situation SQL Server allows the workload to use more than 30% of the resources, leading to overall improved/optimized performance. However, in the case of multiple workloads where lots of resources are needed, the limits specified in MAX_CPU_PERCENT are honored.

    Example 3: If there is a situation where the max CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive; it will never let CPU usage for the reporting workload go over the cap. You can use the keyword as follows:

        -- Creating Resource Pool for Report Server
        CREATE RESOURCE POOL ReportServerPool
        WITH (
            MIN_CPU_PERCENT=0,
            MAX_CPU_PERCENT=30,
            CAP_CPU_PERCENT=40,
            MIN_MEMORY_PERCENT=0,
            MAX_MEMORY_PERCENT=30)
        GO

    Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. What this means is that when the SQL Server instance is under heavy load from different workloads, this pool will use a maximum of 30% CPU. However, when the instance is not under load, the pool may go over the 30% limit; because CAP_CPU_PERCENT is set to 40, it will never go over 40% in any case. CAP_CPU_PERCENT puts a hard limit on resource usage by the workload.

    Jeff: Nice, Pinal. You should blog about it.
    [A day passes by]
    Pinal: Jeff, it is done! Click here to read it.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
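
    For context, a resource pool only governs anything once a workload group and a classifier function route sessions into it. A hedged sketch of that wiring, following the pattern from the earlier Resource Governor article (the login and object names are illustrative):

        -- Hypothetical names; create the classifier in master.
        CREATE WORKLOAD GROUP ReportServerGroup
        USING ReportServerPool;
        GO
        -- The classifier runs at login time and picks the workload group.
        CREATE FUNCTION dbo.fnRGClassifier() RETURNS SYSNAME
        WITH SCHEMABINDING
        AS
        BEGIN
            IF SUSER_SNAME() = N'ReportUser'   -- assumed reporting login
                RETURN N'ReportServerGroup';
            RETURN N'default';
        END
        GO
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
        ALTER RESOURCE GOVERNOR RECONFIGURE;
        GO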

  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool

    Today on the Oracle internal mailing list for WebLogic Server questions, someone asked how to automate the configuration of the thread model for WebLogic Server, and they were having trouble with the jython scripting syntax. I've previously written about this feature, called Work Managers, and the associated constraints. However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool - the jython scripting automation environment abbreviated as WLST. I've written some very basic introductions to WLST before, and there is also an Oracle By Example on the subject, but this is a bit more advanced. Fear not, because there is a really easy-to-use feature of the WLS console that lets you "Record" user actions, just like a DVR. Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API. I'm a big fan of both DVRs and automation, as can be evidenced by this old Halloween picture taken during simpler times. Obviously the Cast Away and The Big Lebowski references show some age. I was a big Tivo fan-boy back in the day, and I still think it's the best DVR.

    I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment. Even in development environments you can make a case that it makes sense to start the automation for environments downstream. I promise you that once you start using it for any tasks that you do even semi-regularly, you won't go back to clicking through the console. It's simply so much more efficient and less error-prone to run a script.

    Record Console Configurations to a File

    Let's say you need to create a Work Manager and MaxThreadsConstraint - the easy way to do it is to configure it in the WLS console first while capturing the commands with a recording. See the images for the simple steps (click to enlarge).

    Review the Recordings and Make Slight Modifications

    In order to make the recorded .py file directly callable as a stand-alone script, I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end - otherwise the main section of the script was provided by the console recording. Below is the resulting file, saved as d:/temp/wm.py:

        connect('weblogic','welcome1', 't3://localhost:7001')
        edit()
        startEdit()

        cd('/SelfTuning/wl_server')
        cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')

        cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))
        cmo.setCount(5)
        cmo.unSet('ConnectionPoolName')

        cd('/SelfTuning/wl_server')
        cmo.createWorkManager('WorkManager-0')
        cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))

        cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0'))
        cmo.setIgnoreStuckThreads(false)

        activate()
        disconnect()
        exit()

    Run the Script

    If you want to test it, be sure to delete the Work Manager and MaxThreadsConstraint that you had previously created in the console. Then do something like the following - set up the environment and tell WLST to execute the script, which happens in the first 2 lines; the rest doesn't require any user input:

        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd
        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py

        Initializing WebLogic Scripting Tool (WLST) ...

        Welcome to WebLogic Server Administration Scripting Shell

        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.

        Warning: An insecure protocol was used to connect to the server.
        To ensure on-the-wire security, the SSL port or Admin port should be used instead.

        Location changed to edit tree. This is a writable tree with
        DomainMBean as the root. To make changes you will need to start
        an edit session via startEdit().

        For more help, use help(edit)

        Starting an edit session ...
        Started edit session, please be sure to save and activate your changes once you are done.
        Activating all your changes, this may take a while ...
        The edit lock associated with this edit session is released once the activation is completed.
        Activation completed
        Disconnected from weblogic server: examplesServer

        Exiting WebLogic Scripting Tool.

    Now if you go back and look in the console, the changes have been made and we now have a complete script. Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you? Happy scripting.

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation:

        "Free software" is a matter of liberty, not price. To understand the concept, you should think of "free" as in "free speech," not as in "free beer." Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms:

        - The freedom to run the program, for any purpose (freedom 0).
        - The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
        - The freedom to redistribute copies so you can help your neighbor (freedom 2).
        - The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

        A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so.

    The definition of "Open Source Software" from the Open Source Initiative:

        Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

        1. Free Redistribution: The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
        2. Source Code: The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
        3. Derived Works: The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.
        4. Integrity of The Author's Source Code: The license may restrict source code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.
        5. No Discrimination Against Persons or Groups: The license must not discriminate against any person or group of persons.
        6. No Discrimination Against Fields of Endeavor: The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
        7. Distribution of License: The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.
        8. License Must Not Be Specific to a Product: The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.
        9. License Must Not Restrict Other Software: The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.
        10. License Must Be Technology-Neutral: No provision of the license may be predicated on any individual technology or style of interface.

    These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: it is possible for software to be Open Source without being Free, or to be Free without being Open Source.

    Questions:

    - Is my belief correct? Is it possible for software to fall into one camp and not the other?
    - Does any such software actually exist? Please give examples.

    Clarification: I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.
    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.
    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.
    Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.
    Memcached Implementation for InnoDB
    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.
    Figure 1: Memcached API Implementation for InnoDB
    With the Memcached daemon running in the same process space, users get very low latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations.
    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second.
    Figure 2: Over 9x Faster INSERT Operations
    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees.
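    Since the plugin speaks the standard Memcached protocol, existing client libraries can be pointed at it unchanged. As a minimal sketch – my own illustration, not from the article – the following assumes the open-source Enyim.Caching client for .NET (any Memcached client would do), a MySQL server with the Memcached plugin listening on its default port 11211, and a made-up key and value:

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Configuration;
    using Enyim.Caching.Memcached;

    class MemcachedInnoDbDemo
    {
        static void Main()
        {
            // Point a standard Memcached client at the MySQL server running
            // the Memcached daemon plug-in (default port 11211).
            var config = new MemcachedClientConfiguration();
            config.AddServer("127.0.0.1", 11211);

            using (var client = new MemcachedClient(config))
            {
                // The write goes through the plug-in straight to InnoDB,
                // committed with full ACID guarantees -- no SQL parsing.
                client.Store(StoreMode.Set, "session:42", "{\"user\":\"alice\"}");

                // The read likewise bypasses the SQL layer completely.
                string value = client.Get<string>("session:42");
                Console.WriteLine(value);
            }
        }
    }

    Because each key/value pair ends up in an ordinary table, the same data remains fully visible to SQL queries running alongside the NoSQL traffic.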
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.
    Memcached Implementation for MySQL Cluster
    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.
    Figure 3: Memcached API Implementation with MySQL Cluster
    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached Driver for NDB (which is part of the same process).
    3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes.
    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.
    Using Memcached for Schema-less Data
    By default, every Key / Value is written to the same table, with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.
    Conclusion
    - Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.
    - See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs.
    - Don't hesitate to use the comments section below for any questions you may have.

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself, I have been honored to participate as a judge. It is not an easy job, because there is a voluminous number of nominees from all over the world. Each judge has to read through every word of each nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I cannot tell you how many, as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all as if it were yesterday. [Cue fuzzy thought cloud.]
    It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous, which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally.
    It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there.
    It was a rainy day in Seattle and it is 2010, and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes) and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience.
She is a people person who just happens to have 10+ years' experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent, but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high... the competition is fierce... the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: the Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c: Contributing to emerging Cloud standards

    - by Anand Akela
    Contributed by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. As one would expect of an industry leader, Oracle's participation in industry standards bodies is extensive. We participate in dozens of organizations that produce open standards which apply to our products, and our commitment to the success of these organizations is manifest in several ways - we support them financially through our memberships; our senior engineers are active participants, often serving in leadership positions on boards, technical working groups and committees; and when it makes good business sense we contribute our intellectual property. We believe supporting the development of open standards is fundamental to Oracle meeting customer demands for product choice, seamless interoperability, and lowering the cost of ownership. Nowhere is this truer than in the area of cloud standards, and for the most recent release of our flagship management product, Oracle Enterprise Manager Cloud Control 12c (EM Cloud Control 12c). There is a fundamental rule that standards follow architecture. This was true of distributed computing, it was true of service-oriented architecture (SOA), and it's true of cloud. If you are familiar with Enterprise Manager, it is likely to be no surprise that EM Cloud Control 12c is a source of technology that can be considered for adoption within cloud management standards. The reason, quite simply, is that the Oracle integrated stack architecture aligns with the cloud architecture models being adopted by the industry, and EM Cloud Control 12c has been developed to manage this architecture. EM Cloud Control 12c has facilities for managing the various underlying capabilities of the integrated stack in IaaS, PaaS, and SaaS clouds, and enables essential characteristics such as on-demand self-service provisioning, centralized policy-based resource management, integrated chargeback and capacity planning, and complete visibility of the physical and virtual environment from applications to disk. Our most recent contribution in support of cloud management standards to come out of the EM Cloud Control 12c work was the Oracle Cloud Elemental Resource Model API. Oracle contributed the Elemental Resource Model API to the Distributed Management Task Force (DMTF) in 2011, where it was assigned to DMTF's Cloud Management Working Group (CMWG). The CMWG is considering the Oracle specification and those of several other vendors in their effort to produce a best practices specification for managing IaaS clouds.
DMTF's Cloud Infrastructure Management Interface specification, called CIMI for short, is currently out for public review and is expected to be released by DMTF later this year. We are proud to be playing an important role in the development of what is expected to become a major cloud standard. You can find more information on DMTF CIMI at http://dmtf.org/standards/cloud. You can find the work-in-progress release of CIMI at http://dmtf.org/content/cimi-work-progress-specifications-now-available-public-comment. The Oracle Cloud API specification is available on the Oracle Technology Network. You can find more information about the Oracle Cloud Elemental Resource Model API on the Oracle Technology Network (OTN), including a webcast featuring the API engineering manager Jack Yu (see TechCast Live: Inside the Oracle Cloud Resource Model API). If you have not seen this video, we recommend you take the time to view it. If you have a question about the Oracle Cloud API or want to learn more about Oracle's participation in cloud management standards efforts, drop us a line. We'd love to hear from you. The Enterprise Manager Standards Blogs are written by Tony Di Cenzo, Director for Standards Strategy and Architecture, and Mark Carlson, Principal Cloud Architect, for Oracle's Systems Management and Storage Products Groups. They can be reached at Tony.DiCenzo at Oracle.com and Mark.Carlson at Oracle.com respectively. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying the BEA ALBPM product, which became Oracle BPM 10gR3, with the Oracle stack. Some of the highlights of this release are:
    - BPMN 2.0 modeling and simulation
    - Web based Process Composer for BPMN and Rules authoring
    - Zero-code environment with full access to Oracle SOA Suite's rich set of application and other adapters
    - Process Spaces – out-of-box integration with Web Center Suite
    - Process Analytics – native process cubes as well as integration with Oracle BAM
    You can learn more about this release from the documentation.
    Notes about downloading and installing
    Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only an incremental patch). To install:
    1. Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location).
    2. Download and install SOA 11.1.1.3.0.
    3. During the configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite 11g (11.1.1.3.0).
    4. If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0.
    Some early feedback
    We have been receiving very encouraging feedback on this release. Some quotes from partners are included below:
    “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig-Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows you to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things in BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non-functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA
    "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. BPM is just one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini
    “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed by the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for those of us already familiar with SOA Suite 11g. BPMN is just another SCA component within an SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that, all known best practices for making a BPEL process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company: Trivadis
    Some earlier feedback was summarized in this post.

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code – native resources such as open files, streams and window handles – your application may leak memory if these are not properly handled. To handle such resources, the classes that own them in your application should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface.
    When you suspect a memory leak, the immediate impulse would be to start up a memory profiler and start digging into that. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code. If you get any warnings from this, fix them before you go on with the profiling.
    How to use a ruleset
    In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks as shown in the lists below as ruleset files, which can be downloaded – see the bottom of this post. When you get them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one we have specified here.
    How to define your own ruleset (skip this if you just download my predefined rulesets)
    If you want to define the ruleset yourself:
    1. Open the properties on any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box, and press Open.
    2. Clear out all the rules by selecting "Source Rule Sets" in the Group By box, and unselect the box.
    3. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to either warning, error or none, none being the same as unchecking the check.
    4. Go to the properties window and set a new name and description for your ruleset.
    5. Save (File/Save As) the ruleset using the new name as its name, and use it for your projects as detailed above.
    It can also be wise to add the ruleset to your solution as a solution item. That way it's there if you want to enable Code Analysis in some of your TFS builds.
    Running the code analysis
    In Visual Studio 2010 you can either run code analysis project by project, using the context menu in the Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration – call it, for example, Debug (Code Analysis) – and for each project in it enable "Enable Code Analysis on Build". In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click and choose "Run code analysis on solution".
    The ruleset checks
    The following list contains the essential and critical memory checks.
    - CA1001, "Types that own disposable fields should be disposable". Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182172.aspx
    - CA1049, "Types that own native resources should be disposable". Can be ignored: Only if the pointers assumed to point to unmanaged resources point to something else. http://msdn.microsoft.com/en-us/library/ms182173.aspx
    - CA1063, "Implement IDisposable correctly". Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms244737.aspx
    - CA2000, "Dispose objects before losing scope". Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182289.aspx
    - CA2115, "Call GC.KeepAlive when using native resources". Can be ignored: See description. Note: this one does not result in a memory leak, but may cause the application to crash. http://msdn.microsoft.com/en-us/library/ms182300.aspx
    - CA2213, "Disposable fields should be disposed". Can be ignored: If you are not responsible for release, or if Dispose occurs at a deeper level. http://msdn.microsoft.com/en-us/library/ms182328.aspx
    - CA2215, "Dispose methods should call base class dispose". Can be ignored: Only if the call to base happens at a deeper calling level. http://msdn.microsoft.com/en-us/library/ms182330.aspx
    - CA2216, "Disposable types should declare a finalizer". Can be ignored: Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources. http://msdn.microsoft.com/en-us/library/ms182329.aspx
    - CA2220, "Finalizers should call base class finalizers". Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182341.aspx
    The list below is a set of optional checks that may be enabled for your ruleset, because the issues these point to often happen as a result of attempting to fix up the warnings from the first set.
    - CA1060, "Move P/invokes to NativeMethods class". Type of fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182161.aspx
    - CA1816, "Call GC.SuppressFinalize correctly". Type of fault: Performance. Can be ignored: Sometimes, see description. http://msdn.microsoft.com/en-us/library/ms182269.aspx
    - CA1821, "Remove empty finalizers". Type of fault: Performance. Can be ignored: No. http://msdn.microsoft.com/en-us/library/bb264476.aspx
    - CA2004, "Remove calls to GC.KeepAlive". Type of fault: Performance and maintainability. Can be ignored: Only if it is not technically correct to convert to SafeHandle. http://msdn.microsoft.com/en-us/library/ms182293.aspx
    - CA2006, "Use SafeHandle to encapsulate native resources". Type of fault: Security. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182294.aspx
    - CA2202, "Do not dispose of objects multiple times". Type of fault: Exception (System.ObjectDisposedException). Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182334.aspx
    - CA2205, "Use managed equivalents of Win32 API". Type of fault: Maintainability and complexity. Can be ignored: Only if the replacement doesn't provide needed functionality. http://msdn.microsoft.com/en-us/library/ms182365.aspx
    - CA2221, "Finalizers should be protected". Type of fault: Incorrect implementation, only possible in MSIL coding. Can be ignored: No. http://msdn.microsoft.com/en-us/library/ms182340.aspx
    Downloadable ruleset definitions
    I have defined three rulesets: Inmeta.Memorycheck with the rules in the first list above, Inmeta.Memorycheck.Optionals containing the rules in the second list, and Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck", downloadable from here.
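    To make the pattern behind these checks concrete, here is a minimal sketch – my own illustration, not part of the rulesets – of the dispose pattern that CA1063 looks for, using a hypothetical class that owns both a managed disposable field and a raw native handle. A production type should normally wrap the handle in a SafeHandle instead (see CA2006):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    public class LogWriter : IDisposable
    {
        private FileStream _stream;    // managed disposable field (CA1001)
        private IntPtr _nativeHandle;  // unmanaged resource (CA1049)
        private bool _disposed;

        public LogWriter(string path, IntPtr handle)
        {
            _stream = new FileStream(path, FileMode.Append);
            _nativeHandle = handle;
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this); // no finalization needed once disposed (CA1816)
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed)
                return;                // guards against double disposal (CA2202)

            if (disposing)
            {
                // Managed state is only touched when called from Dispose(),
                // never from the finalizer.
                if (_stream != null)
                    _stream.Dispose(); // disposable fields should be disposed (CA2213)
            }

            // Unmanaged resources are released on both paths.
            if (_nativeHandle != IntPtr.Zero)
            {
                NativeMethods.CloseHandle(_nativeHandle);
                _nativeHandle = IntPtr.Zero;
            }

            _disposed = true;
        }

        // The finalizer acts as a safety net for the unmanaged handle (CA2216).
        ~LogWriter()
        {
            Dispose(false);
        }
    }

    // P/Invokes belong in a NativeMethods class (CA1060).
    internal static class NativeMethods
    {
        [DllImport("kernel32.dll")]
        internal static extern bool CloseHandle(IntPtr handle);
    }

    A derived class that adds resources of its own overrides Dispose(bool) and ends it with a call to base.Dispose(disposing), which is what CA2215 and CA2220 check for.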
    Links to some other resources relevant to Static Code Analysis:
    - MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    - MSDN: Analyzing Managed Code Quality by Using Code Analysis, the root of the documentation for this
    - Preventing generated code from being analyzed using attributes
    - Online training course on Using Code Analysis with VS2010
    - Blog post by Tatham Oddie on custom code analysis rules
    - How to write custom rules, from the Microsoft Code Analysis Team Blog
    - Microsoft Code Analysis Team Blog

    Read the article

  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly, thanks to Yoshio Terada for the photos; I didn't bother bringing a camera with me, so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday. We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java. At midday the show really started with the Strategy and Technical Keynotes. This was always going to be a tougher job than some years because there was no big shiny ball to reveal to the audience. With the Java EE 7 spec being finalised a few months ago and Java SE 8, Java ME 8 and JDK8 not due until the start of next year, there was not going to be any big announcement. I thought both keynotes worked really well, each focusing on the things most important to Java developers.
    Strategy
    One of the things that is becoming more and more prominent in many companies' marketing is the Internet of Things (IoT). We've moved from the conventional desktop/laptop environment to much more mobile connected computing with smart phones and tablets. The next wave of the internet is not just billions of people connected, but tens or hundreds of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine. This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges. As Peter Utzschneider explained, it's about three Vs: Volume, Velocity and Value. All these devices will create huge volumes of data at very high speed; to avoid being overloaded, these devices will need some sort of processing capabilities that can filter the useful data from the redundant. The raw data then needs to be turned into useful information that has value. To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated. This is where Java plays a pivotal role: write once, run everywhere becomes essential, and having nine million developers fluent in the language makes it the de facto lingua franca of IoT. There will be lots more information on how this will become a reality, so watch this space.
    Technical
    How do we make the IoT a reality, technically? Using the game of chess, Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do. Using Java EE on the back end, Java SE and JavaFX on the desktop, and Java ME Embedded and JavaFX on devices, they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (that's included with JDK8) to make a 3D animated Duke chess board. Jasper also unveiled the "DukePad", a home-made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth-shattering CPU performance (about the same level as a mid-1990s Pentium), it does have really quite good GPU performance, so the GUI works really well. The plans are all open sourced and available here. One small but very significant announcement was that Java SE will now be included with the NOOBS and Raspbian Linux distros provided by the Raspberry Pi Foundation (these can be found here). No more hassle having to download and install the JDK after you've flashed your SD card OS image. The finale was the Raspberry Pi-powered chess-playing robot.
Really very, very cool. I talked to Jasper about this and he told me each of the chess pieces had been 3D printed and then he had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!). The way the robot arm worked was very impressive, as it did not have any positioning data (like a potentiometer connected to each motor), but relied purely on carefully calibrated timings to get the arm to the right place. Having done things like this myself in the past, I know how easily a small error gets magnified into very big mistakes.
    Here are some pictures from the keynote:
    - The "DukePad" architecture
    - A nice clear perspex case, so you can see the innards
    - The very nice 3D chess set. Maya's obviously a great tool.
    Read the full article here.
    WebLogic Partner Community
    For regular information become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Spolskism or Twitterism: A Doctor writes...

    - by Phil Factor
    "I never realized I had a problem. I just 'twittered' because it was a social thing to do. All my mates were doing it. It made me feel good to have 'followers'; it bolstered my self-esteem. Of course, you don't think of the long-term effects on your work and on the way you think. There's no denying that it impairs your judgment…" Yes, this story is typical. Hundreds of people are waking up to the long term effects of twittering, and seeking help. Dave, who wishes to remain anonymous, told our reporter… "I started using Twitter at work. Just a few minutes now and then, throughout the day. A lot of my colleagues were doing it and I thought 'Well, that's cool; it must be part of what I should be doing at work'. Soon, I was avidly reading every twitter that came my way, and counting the minutes between my own twitters. I tried to kid myself that it was all about professional development and getting other people to help you with work-related problems, but in truth I had become addicted to the buzz of the social network. The worse thing was that it made me seem busy even when I was really just frittering my time away. Inevitably, I started to get behind with my real work." Experts have identified the syndrome and given it a name: 'Twitterism', sometimes referred to as 'Spolskism', after the person who first drew attention to the pernicious damage to well-being that the practice caused, and who had the courage to take the pledge of rejecting it. According to one expert… "The occasional Twitter does little harm to the participant, and can be an adaptive way of dealing with stress. Unfortunately, it rarely stops there. The addictive qualities of the practice have put a strain on the caring professions who are faced with a flood of people making that first bold step to seeking help". Dave is one of those now seeking help for his addiction… "I had lost touch with reality. Even though I twittered my work colleagues constantly, I found I actually spoke to them less and less. Even when out socializing, I would frequently disengage from the conversation, in order to twitter. I stopped blogging. I stopped responding to emails; the only way to reach me was through the world of Twitter. Unfortunately, my denial about the harm that twittering was doing to me, my friends, and my work-colleagues was so strong that I truly couldn't see that I had a problem." Like other addictions, the help and support of others who are 'taking the cure' is important. There is a common bond between those who have 'been through hell and back' and are once more able to experience the joys of actually conversing and socializing, rather than the false comfort of solitary 'twittering'. Complete abstinence is essential to the cure. Most of those who risk even an occasional twitter face a headlong slide back into 'binge' twittering. Tom, another twitterer who has managed to kick the habit explains… "My twittering addiction now seems more like a bad dream. You get to work, and switch on the PC. You say to yourself, just open up the browser, just for a minute, just to see what people are saying on Twitter. The next thing you know, half the day has gone by. The worst thing is that when you're addicted, you get good at covering up the habit; I spent so much time looking at the screen and typing on the keyboard, people just assumed I was working hard.I know that I must never forget what it was like then, and what it's like now that I've kicked the habit. I now have more time for productive work and a real social life." 
Like many addictions, Spolskism has its most detrimental effects on family, friends and workmates, rather than on the addict. So often nowadays we hear the sad stories of Twitter-Widows: tales of long, lonely evenings spent whilst their partners are engrossed in twittering into their 'mobiles', or indulging their solitary spolskistic habits in privacy, under cover of 'having to do work at home'. Workmates suffer too, when the addicts even take their laptops or mobiles into meetings in order to 'twitter' with their fellow obsessives, stooping so low as to complain to their followers how boring the meeting is. No; the best advice is to leave twittering to the birds. You know it makes sense.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >