Search Results

Search found 37931 results on 1518 pages for 'computer case'.

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here.

    I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: if we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

    My Axum-based solution is as follows:

        namespace EulerProjectProblem1
        {
            // http://projecteuler.net/index.php?section=problems&id=1
            //
            // If we list all the natural numbers below 10 that are multiples of 3 or 5,
            // we get 3, 5, 6 and 9. The sum of these multiples is 23.
            // Find the sum of all the multiples of 3 or 5 below 1000.

            channel SumOfMultiples
            {
                input int Multiple1;
                input int Multiple2;
                input int UpperBound;
                output int Sum;
            }

            agent SumOfMultiplesAgent : channel SumOfMultiples
            {
                public SumOfMultiplesAgent()
                {
                    int Multiple1 = receive(PrimaryChannel::Multiple1);
                    int Multiple2 = receive(PrimaryChannel::Multiple2);
                    int UpperBound = receive(PrimaryChannel::UpperBound);
                    int Sum = 0;

                    for(int Index = 1; Index < UpperBound; Index++)
                    {
                        if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                            Sum += Index;
                    }

                    PrimaryChannel::Sum <-- Sum;
                }
            }

            agent MainAgent : channel Microsoft.Axum.Application
            {
                public MainAgent()
                {
                    var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
                    SumOfMultiples::Multiple1 <-- 3;
                    SumOfMultiples::Multiple2 <-- 5;
                    SumOfMultiples::UpperBound <-- 1000;
                    var Sum = receive(SumOfMultiples::Sum);
                    System.Console.WriteLine(Sum);
                    System.Console.ReadLine();
                    PrimaryChannel::ExitCode <-- 0;
                }
            }
        }

    Let's take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs represent the two possible multiples, which are 3 and 5 in this case. The third input represents the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples:

        channel SumOfMultiples
        {
            input int Multiple1;
            input int Multiple2;
            input int UpperBound;
            output int Sum;
        }

    I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, receives the three inputs from the channel, stores them in local variables, and performs the for loop that iterates from 1 to the received upper bound. The agent keeps track of the sum in a local variable and sends the sum to the output portion of the channel:

        agent SumOfMultiplesAgent : channel SumOfMultiples
        {
            public SumOfMultiplesAgent()
            {
                int Multiple1 = receive(PrimaryChannel::Multiple1);
                int Multiple2 = receive(PrimaryChannel::Multiple2);
                int UpperBound = receive(PrimaryChannel::UpperBound);
                int Sum = 0;

                for(int Index = 1; Index < UpperBound; Index++)
                {
                    if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                        Sum += Index;
                }

                PrimaryChannel::Sum <-- Sum;
            }
        }

    The application's main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

        agent MainAgent : channel Microsoft.Axum.Application
        {
            public MainAgent()
            {
                var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
                SumOfMultiples::Multiple1 <-- 3;
                SumOfMultiples::Multiple2 <-- 5;
                SumOfMultiples::UpperBound <-- 1000;
                var Sum = receive(SumOfMultiples::Sum);
                System.Console.WriteLine(Sum);
                System.Console.ReadLine();
                PrimaryChannel::ExitCode <-- 0;
            }
        }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol' Console.WriteLine().
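    For comparison, here is a minimal single-threaded Python equivalent of the same computation (an addition of this edit, not part of the original Axum sample); it confirms the expected result:

        # Project Euler, Problem 1: sum of all multiples of 3 or 5 below 1000.
        total = sum(n for n in range(1, 1000) if n % 3 == 0 or n % 5 == 0)
        print(total)  # prints 233168

    The Axum version is, of course, deliberately over-engineered for such a small problem; its point is to demonstrate channels and agents, not arithmetic.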


  • Using 3G/UMTS in Mauritius

    After some conversations, threads in online forums and mailing lists, I thought about writing this article on how to set up, configure and use 3G/UMTS connections on Linux here in Mauritius. Personally, I can only share my experience with Emtel Ltd., but I will try to give some clues about how to configure Orange as well.

    Emtel 3G/UMTS surf stick

    Emtel provides different surf sticks from Huawei. Back in 2007, I started with an E220 that wouldn't run on Windows Vista either. Nowadays, you just plug in the surf stick (e.g. an E169) and usually the Network Manager will detect the new broadband modem. Nothing to worry about. The Linux Network Manager even provides a connection profile for Emtel here in Mauritius, and establishing the Internet connection is done in less than 2 minutes... even quicker.

    Using wvdial

    Old-fashioned Linux users might not take Network Manager into consideration but feel comfortable with wvdial. Although wvdial is primarily used with serial port attached modems, it can operate on USB ports as well. Following is my configuration from /etc/wvdial.conf:

        [Dialer Defaults]
        Phone = *99#
        Username = emtel
        Password = emtel
        New PPPD = yes
        Stupid Mode = 1
        Dial Command = ATDT

        [Dialer emtel]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","web"
        ISDN = 0
        Modem Type = Analog Modem

    The values of user name and password are optional and can be configured as you like. In case your SIM card is protected by a PIN - which is highly advised - you might add another dialer section to your configuration file like so:

        [Dialer pin]
        Modem = /dev/ttyUSB0
        Init1 = AT+CPIN=0000

    This way you can "daisy-chain" your commands to establish your Internet connection like so:

        wvdial pin emtel

    And it works auto-magically. Depending on your group assignments (dialout), you might have to sudo the wvdial statement like so:

        sudo wvdial pin emtel

    Orange parameters

    As far as I could figure out without really testing it myself, it is also necessary to set the Access Point (AP) manually with Orange. Well, although it is pretty obvious, a lot of people seem to struggle. The AP value is "orange".

        [Dialer orange]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","orange"
        ISDN = 0
        Modem Type = Analog Modem

    And you are done.

    Official Linux support from providers

    It's quite simple: forget it! The people at the Emtel call center are completely focused on the hardware and the Mobile Connect software application provided by Huawei, and are totally lost in case you confront them with other constellations. For example, my wife's netbook has an integrated 3G/UMTS modem from Ericsson. Therefore, there is no need to use the Huawei surf stick at all, and of course we use the existing software, named Wireless Manager, instead. Now, imagine mentioning at the help desk: "Ehm, sorry, but what's Mobile Connect?" And Linux, after all, might give the call operator sleepless nights... Who knows? Anyway, I hope that my article and configuration can give you a helping hand and that you will be able to connect your Linux box with 3G/UMTS surf sticks here in Mauritius.
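    If you are curious what those Init strings actually do, here is a small Python sketch (my own illustration, not from the wvdial documentation, assuming the pyserial package and the same /dev/ttyUSB0 device; adjust the PIN and access point to your provider):

        # Send the same AT commands wvdial issues, one by one, and print the
        # modem's replies. Requires pyserial; the device path is an assumption.
        import serial

        with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as modem:
            for cmd in (b"ATZ\r",                        # reset the modem
                        b"AT+CPIN=0000\r",               # unlock the SIM (your PIN)
                        b'AT+CGDCONT=1,"ip","web"\r'):   # set the Emtel access point
                modem.write(cmd)
                print(cmd, "->", modem.read(64))         # expect an OK in the reply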


  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic. The cardinality estimation logic is responsible for the quality of query plans and is therefore a major factor in query performance. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic incorporates various assumptions and algorithms for OLTP and warehousing workloads.

    "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." ~ Source: MSDN

    Let us see a quick example of how the new cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility level set to 120, which corresponds to SQL Server 2014. If your server instance is SQL Server 2014 but your database compatibility level is set to 110 or any earlier version, your query will perform as it would on the older version of SQL Server. Now we will execute the following query in two different compatibility modes and compare the performance. (Note that my SQL Server instance is version 2014.)

        USE AdventureWorks2014
        GO
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        EXEC [dbo].[uspGetManagerEmployees] 44
        GO

    Result of STATISTICS IO

    Compatibility level 120:

        Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:

        Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from table Person, whereas in the case of compatibility level 120 there are only 6 logical reads from table Person. This drastically improves the performance of the query. If we enable the execution plan, we can see the same there as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • bluez 5.19 PS4 controller

    - by Athanase
    I currently have a problem when pairing my computer with a PS4 remote. On my Ubuntu 14.04 I removed everything related to bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs:

        jean@system ~ hcitool
        hcitool - HCI Tool ver 5.19

        jean@system ~ hcitool dev
        Devices:
            hci0    00:15:83:4C:0C:BB

        jean@system ~ bluetoothctl
        [bluetooth]# version
        Version 5.19

        jean@system ~ bluetoothctl
        [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default]

        jean@system ~ lsusb
        Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)

    So here is what happens. When I try to hard pair the controller with the computer, by holding the share and PS buttons for a while, everything works as expected and the pairing is done properly. After a hard pairing, if I try to pair by pressing the PS button only, nothing happens. In order to do so, I first power up the bluetooth dongle:

        jean@system ~ sudo hciconfig hciX up

    and then I run the bluetooth daemon bluetoothd:

        jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n
        bluetoothd[11270]: Bluetooth daemon 5.19
        bluetoothd[11270]: src/main.c:parse_config() parsing main.conf
        bluetoothd[11270]: src/main.c:parse_config() discovto=0
        bluetoothd[11270]: src/main.c:parse_config() pairto=0
        bluetoothd[11270]: src/main.c:parse_config() auto_to=60
        bluetoothd[11270]: src/main.c:parse_config() name=%h-%d
        bluetoothd[11270]: src/main.c:parse_config() class=0x000100
        bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID'
        bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server
        bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command
        bluetoothd[11270]: Starting SDP server
        bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513
        ...
        bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
        bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0
        bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00
        bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A
        bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A
        bluetoothd[11270]: src/device.c:device_set_bonded()
        bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A
        bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A
        bluetoothd[11270]: src/device.c:device_set_bonded()
        bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0
        bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0
        bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0
        bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0
        bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0
        bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513
        bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered
        bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0
        bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0
        bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
        bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0
        bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled
        bluetoothd[11270]: src/adapter.c:trigger_passive_scanning()
        bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system
        bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname:
        bluetoothd[11270]: plugins/hostname.c:update_name() name: system
        bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system
        bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop
        bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01
        bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0
        bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0
        bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0

    And then I press the PS button of the PS4 controller:

        bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5
        bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17
        bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17
        bluetoothd[11270]: Refusing input device connect: No such file or directory (2)
        bluetoothd[11270]: profiles/input/server.c:confirm_event_cb()
        bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device
        bluetoothd[11270]: src/adapter.c:dev_disconnected() Device A4:15:66:C1:0D:2A disconnected, reason 3
        bluetoothd[11270]: src/adapter.c:adapter_remove_connection()
        bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3
        bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe
        bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e
        bluetoothd[11270]: src/device.c:device_bonding_failed() status 14
        bluetoothd[11270]: src/adapter.c:resume_discovery()

    So I don't know what is happening here, and a bit of help would be appreciated.


  • Dual boot Ubuntu and Windows 7, I Can only boot ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu, however this problem is preventing me. I have/had Windows 7 Professional on my computer. I recently looked into getting Linux. I discovered dual-booting and decided to give it a try. First I created a bootable flash drive with Ubuntu 12.10 64 bit. I then followed the instructions on https://help.ubuntu.com/community/WindowsDualBoot. After I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and Windows 7 (loader). So I chose Windows (honestly I was more concerned that I still had everything on Windows at this point). I then rebooted again and selected Ubuntu. When I selected Ubuntu, the background screen of GRUB (the crimson/burgundy color) stayed for a few seconds, then the screen went black: video here http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp

    I tried again with the same results. So I redid the Ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting, the same thing happened. After that I was stumped, so I figured it couldn't hurt to experiment; after all, I backed up my Windows 7 stuff, and I have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and, sure enough, after selecting continue to normal boot, it worked. So I updated and everything, but when I rebooted it still wouldn't boot under Ubuntu. It would only boot after recovery mode. So I tried installing 12.10 32 bit Ubuntu; the same problem keeps happening, and I can still get to Ubuntu through recovery mode.

    So I went online and tried using the terminal (in the Ubuntu that I booted through recovery mode). When I was using it, I discovered that the following kept showing up:

        Error in sitecustomize; set PYTHONVERBOSE for traceback:
        EOFError: EOF read where not expected

    I also noticed a notification in the top right corner that looked like a do-not-enter sign. It said:

        An error occurred, please run Package Manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected. Traceback (most recent call last): File "/usr/bin/lsb_release" EOFError: EOF read where not expected.' This usually means that your installed packages have unmet dependencies.

    Naturally I assumed this was what was causing my boot problems. I downloaded Synaptic and updated everything, and the error went away, but my boot problem was still a problem. So I went online and found some things that have worked for others, like this:

        Try to do this in your terminal:
            sudo nano /etc/default/grub
        Look for:
            GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        Change it to:
            GRUB_CMDLINE_LINUX_DEFAULT="quiet"
        And update GRUB:
            sudo update-grub
        This should fix stuff.

    I did this and I still have the problem. Sorry for the excessive explanation, please help.


  • BlueTooth not working on my HP Probook 4720s

    - by mtrento
    The bluetooth on my Ubuntu 11.10 does not work. When I try to add a device, it scans indefinitely and never finds anything. Wireless is working perfectly, and with Windows 7 the bluetooth is detected. As I read somewhere, the bluetooth is not listed in the USB devices. Is it supported under Ubuntu? Here are the outputs of the various debug commands I tested:

        hciconfig -a
        hci0:   Type: BR/EDR  Bus: USB
                BD Address: E0:2A:82:7A:8B:04  ACL MTU: 310:10  SCO MTU: 64:8
                UP RUNNING PSCAN ISCAN
                RX bytes:1895 acl:0 sco:0 events:70 errors:0
                TX bytes:1986 acl:0 sco:0 commands:64 errors:0
                Features: 0xff 0xff 0x8f 0xfe 0x9b 0xff 0x59 0x83
                Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
                Link policy: RSWITCH HOLD SNIFF PARK
                Link mode: SLAVE ACCEPT
                Name: 'PC543host-0'
                Class: 0x5a0100
                Service Classes: Networking, Capturing, Object Transfer, Telephony
                Device Class: Computer, Uncategorized
                HCI Version: 2.1 (0x4)  Revision: 0x149c
                LMP Version: 2.1 (0x4)  Subversion: 0x149c
                Manufacturer: Cambridge Silicon Radio (10)

        hcitool scan
        Scanning ...

        lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 04f2:b1ac Chicony Electronics Co., Ltd
        Bus 002 Device 003: ID 413c:3010 Dell Computer Corp. Optical Wheel Mouse
        Bus 002 Device 004: ID 148f:1000 Ralink Technology, Corp.

        lsmod | grep -i bluetooth
        bluetooth             166112  23 bnep,rfcomm,btusb

        dmesg | grep -i bluetooth
        [   18.543947] Bluetooth: Core ver 2.16
        [   18.544017] Bluetooth: HCI device and connection manager initialized
        [   18.544020] Bluetooth: HCI socket layer initialized
        [   18.544021] Bluetooth: L2CAP socket layer initialized
        [   18.545469] Bluetooth: SCO socket layer initialized
        [   18.548890] Bluetooth: Generic Bluetooth USB driver ver 0.6
        [   30.204776] Bluetooth: RFCOMM TTY layer initialized
        [   30.204782] Bluetooth: RFCOMM socket layer initialized
        [   30.204784] Bluetooth: RFCOMM ver 1.11
        [   30.247291] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
        [   30.247295] Bluetooth: BNEP filters: protocol multicast

        lspci
        00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
        00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
        00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
        00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
        00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05)
        00:1c.3 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 (rev 05)
        00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 05)
        00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
        00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 05)
        00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 05)
        01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series]
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
        44:00.0 Network controller: Ralink corp. RT3090 Wireless 802.11n 1T/1R PCIe
        45:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
        ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
        ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
        ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
        ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
        ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
        ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)

        rfkill list
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        3: hp-bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: no


  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotcha's, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thursday (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" - people who previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning.

    As ever, plenty going on physically: we've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancy trying it out, we are always after people to do 1 - 5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these User Group events - just email us at [email protected].

    Want removing from this email list? Then just reply with "remove please" on the subject line.

    Kimberly Tripp and Paul Randal Master Class - Thursday, 17th June - London
    REGISTER NOW AND GET A SECOND REGISTRATION FREE*

    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers!

    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.

    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

    >> Debunk many of the ingrained misconceptions around SQL Server's behaviour
    >> Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    >> Explain how a common application design pattern can wreak havoc in the database
    >> Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql - on the registration form, simply quote discount code BOGOF for both yourself and your colleague and you will save 50% off each registration - that's a 249 GBP saving! This offer is limited, book early to avoid disappointment.

    Wed, 23 Jun - READING - Evening Meeting (more info and register online)
    Introduction to NoSQL (Not Only SQL) - Gavin Payne; T-SQL Gotcha's and how to avoid them - Ashwani Roy; Introduction to Recency Frequency - Tony Rogerson; Reporting Services - Tim Leung

    Thu, 24 Jun - CARDIFF - Evening Meeting (more info and register online)
    Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI related session TBC

    Mon, 28 Jun - EDINBURGH - Evening Meeting (more info and register online)
    Replication (Components, Administration, Performance and Troubleshooting) - Neil Hambly; Server Upgrades (Notes and best practice from the field) - Satya Jayanty

    Wed, 14 Jul - LONDON - Evening Meeting (more info and register online)
    Meeting is being sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software. Physical Join Operators in SQL Server - Ami Levin; Workload Tuning - Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, In-RAM DBs, Fragmentation) - Tony Rogerson; Complex Event Processing - Allan Mitchell

    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com


  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk, so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This DMV gives the number of cached pages in the buffer pool along with the database id that they relate to:

        USE [tempdb]
        GO

        SELECT COUNT(*) AS cached_pages_count ,
               CASE database_id
                    WHEN 32767 THEN 'ResourceDb'
                    ELSE DB_NAME(database_id)
               END AS Database_name
        FROM   sys.dm_os_buffer_descriptors
        GROUP BY DB_NAME(database_id) ,
               database_id
        ORDER BY cached_pages_count DESC;

    This gives you results which are quite useful, but if you add a new column with the code ( COUNT(*) * 8.0 ) / 1024 to convert the pages value to an MB value, then they become more relevant and meaningful.

    To see how your server reacts to queries, start up SSMS and connect to a test server and database - mine is called AdventureWorks2008. Make sure you start from a known position by running:

        -- Only run this on a test server otherwise your production server's
        -- performance may drop off a cliff and your phone will start ringing.
        DBCC DROPCLEANBUFFERS
        GO

    Now we can run a query that would normally turn a DBA's hair white:

        USE [AdventureWorks2008]
        GO

        SELECT *
        FROM   [Sales].[SalesOrderDetail] AS sod
               INNER JOIN [Sales].[SalesOrderHeader] AS soh
                     ON [sod].[SalesOrderID] = [soh].[SalesOrderID]

    ...and then check our cache situation. A nice low figure - not! Almost 2000 pages of data in cache, equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could have been even more dramatic.

    So, let's make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we'll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer:

        SELECT [sod].[OrderQty] ,
               [sod].[ProductID] ,
               [soh].[OrderDate] ,
               [soh].[CustomerID]
        FROM   [Sales].[SalesOrderDetail] AS sod
               INNER JOIN [Sales].[SalesOrderHeader] AS soh
                     ON [sod].[SalesOrderID] = [soh].[SalesOrderID]
        WHERE  [soh].[CustomerID] = 29722

    ...and then check the effect on our cache. Now that is more sympathetic to our server and the other systems sharing its resources.

    I can hear you asking: "What has this got to do with logging, Jonathan?" Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping, and will be ready to investigate anomalies so that no 'disruptive' code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server.

    You could schedule this to run with a SQL Agent job and store the data in your DBA's database by creating a table with:

        IF OBJECT_ID('CachedPages') IS NOT NULL
            DROP TABLE CachedPages

        CREATE TABLE CachedPages
            (
              cached_pages_count INT ,
              MB INT ,
              Database_Name VARCHAR(256) ,
              CollectedOn DATETIME DEFAULT GETDATE()
            )

    ...and then filling it with:

        INSERT INTO [dbo].[CachedPages]
                ( [cached_pages_count] ,
                  [MB] ,
                  [Database_Name]
                )
        SELECT  COUNT(*) AS cached_pages_count ,
                ( COUNT(*) * 8.0 ) / 1024 AS MB ,
                CASE database_id
                     WHEN 32767 THEN 'ResourceDb'
                     ELSE DB_NAME(database_id)
                END AS Database_name
        FROM    sys.dm_os_buffer_descriptors
        GROUP BY database_id

    After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem, you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.
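    If you prefer to collect the same numbers from outside SQL Server, here is a small Python sketch (an addition of this edit, assuming pyodbc and a SQL Server ODBC driver are installed; the server name and driver string are placeholders):

        # Periodically sample sys.dm_os_buffer_descriptors and print the cached
        # pages and MB per database. Connection details are assumptions.
        import time
        import pyodbc

        QUERY = """
        SELECT COUNT(*) AS cached_pages_count,
               (COUNT(*) * 8.0) / 1024 AS MB,
               CASE database_id WHEN 32767 THEN 'ResourceDb'
                    ELSE DB_NAME(database_id) END AS database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY cached_pages_count DESC;
        """

        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;Trusted_Connection=yes")
        while True:
            for pages, mb, name in conn.cursor().execute(QUERY).fetchall():
                print(f"{name:30} {pages:>10} pages {mb:>8.1f} MB")
            time.sleep(60)  # sample once a minute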


  • Chrome Web Browser Messages: Some Observations

    - by ultan o'broin
    I'm always on the lookout for how different apps handle errors and what kind of messages are shown (I probably need to get out more). I use this 'research' to reflect on our own application error message patterns and guidelines, and on how we might make things better for our users in future. Users are influenced by all sorts of things, but their everyday experiences of technology, and especially what they encounter on the internet, increasingly set their expectations for the enterprise user experience too. I recently came across a couple of examples from Google's Chrome web browser that got me thinking.

    In the first case, we have a Chrome error about not being able to find a web page. I like how simple, straightforward messaging language is used, along with an optional ability to explore things a bit further - for those users who want to. The 'more information' option shows the error encountered by the browser (or 'original' error) in technical terms, along with an error number. Contrasting the two messages about essentially the same problem reveals what's useful to users and what's not. Everyone can use the first message, but the technical version of the message has to be explicitly disclosed for any more advanced user to pursue further. More technical users might search for a resolution using that Error 324 number, but I imagine most users who see the message will try again later or check their URL again.

    Seems reasonable that such an approach be adopted in the enterprise space too, right? Maybe. Generally, end users don't go searching for solutions based on those error numbers, and help desk folks generally prefer they don't do so. That's because of the more critical nature of enterprise data, or the fact that end users may not have the necessary privileges to make any fixes anyway. What might be more useful here is a link to a trusted source of additional help provided by the help desk or a reputable community instead.

    This takes me on to the second case, this time more closely related to the language used in messaging situations. Here, I first noticed the use of the "(s)" approach to convey the possibility of there being one or more pages at the heart of the problem. This approach is a no-no in Oracle style terms (the plural would be used) and it can create translation issues (though it is not a show-stopper). I think Google could have gone with the plural too. However, of more interest is the use of the verb "kill", shown in the message text and as an action button label. For many writers, words like "kill" and "abort" are to be avoided as they can give offense. I am not so sure about that judgment, as really their use cannot be separated from the context. Certainly, for more technical users they're fine and have been in use for years, so I see no reason to avoid these terms if the audience has accepted them. Most end users too, I think, would find the idea of "kill" usable and may even use the term in everyday speech. Others might disagree - Apple uses a concept of Force Quit, for example.

    Ultimately, the only way to really know how to proceed is to research these matters by asking users of differing roles and expertise to perform some tasks, encounter these messages, and then make recommendations based on those findings for our designs. Something to do in 2011!


  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04, running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

        void merge_sort(const char **lines, int start, int end);
        void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i: start <= i <= end.

    I have produced some files containing random strings with a length of 4.5 characters on average. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:

    Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings, merge sort takes 1.287 s against 5.233 s for quick sort. For 5000000 strings, merge sort takes 7.582 s against 118.240 s for quick sort. For 10000000 strings, merge sort takes 16.305 s against 1202.918 s for quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic will become evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e.

        merge_sort(lines, start, (start + end) / 2);
        merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers.

    For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

        start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

        quick_sort(lines, start, pivotIndex - 1);
        quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory, therefore it is more memory efficient than the merge sort implementation.

    So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I have found on Wikipedia in the section "In-place version":

        function partition(array, 'left', 'right', 'pivotIndex')

    where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation, I have uploaded the source code to github (in case you would like to take a look at it).

    Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
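    As an aside (an illustration added in this edit, not from the original post): one classic way a last-element-pivot, Lomuto-style partition goes quadratic is duplicate-heavy input, because every key equal to the pivot lands on one side of the partition. Short random strings like those described above contain many duplicates. A small Python sketch of the effect:

        # A Lomuto partition with a last-element pivot degrades to O(n^2) on
        # duplicate-heavy input while staying fast on distinct keys.
        import random
        import time

        def quick_sort(a, lo, hi):
            while lo < hi:
                pivot = a[hi]
                i = lo
                for j in range(lo, hi):          # Lomuto scan; equals go left
                    if a[j] <= pivot:
                        a[i], a[j] = a[j], a[i]
                        i += 1
                a[i], a[hi] = a[hi], a[i]        # place pivot at index i
                # Recurse into the smaller side only, to keep the stack shallow.
                if i - lo < hi - i:
                    quick_sort(a, lo, i - 1)
                    lo = i + 1
                else:
                    quick_sort(a, i + 1, hi)
                    hi = i - 1

        n = 5000
        for label, data in [("all-duplicate", ["abcd"] * n),
                            ("distinct", [format(random.getrandbits(32), "08x")
                                          for _ in range(n)])]:
            start = time.perf_counter()
            quick_sort(data, 0, len(data) - 1)
            assert data == sorted(data)
            print(f"{label:14} {time.perf_counter() - start:.3f} s")

    Even at n = 5000, the duplicate-heavy run takes several seconds while the distinct run finishes almost instantly, mirroring the growing gap in the timings above; a three-way ("fat") partition that groups keys equal to the pivot avoids this.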


  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, you usually go through various installation steps. This could involve operating systems like Linux, followed by a compatible InfiniBand software distribution, associated dependencies, and configuration. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use open source, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools.

    In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use open source Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.

    Prerequisites

    You will need to have a PXE server ready on your ethernet-based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.

    Required Downloads

    Download the Parted Magic small distribution for PXE from the Parted Magic website. Download the InfiniBand PXE add-on package (right-click and download from here). Do not extract the contents of this file; you need to use it as is.

    Prepare PXE Server

    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot, unless you have a different setup. For example:

        cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
        cd /tmp
        unzip pmagic_pxe_2012_2_27_x86_64.zip
        cd pmagic_pxe_2012_2_27_x86_64
        ls -l
        total 12
        drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
        drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
        cp -r pmagic /tftpboot

    As I mentioned earlier, we don't change anything in the default pmagic distro. We simply provide the add-on package via the PXE append options. If you are using a menu-based PXE server, then add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:

        LABEL Diskless Boot With InfiniBand Support
        MENU LABEL Diskless Boot With InfiniBand Support
        KERNEL pmagic/bzImage
        APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
        TEXT HELP
        * A Linux Image which can be used to PXE Boot w/ IB tools
        ENDTEXT

    Note: keep the line starting with "APPEND" as a single line. If you use host-specific files in pxelinux.cfg, then you can add the entry above to that specific file.

    Boot Computer over PXE

    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand; just use your standard ethernet interface and network. If using menus, pick the new entry that you created in the previous section.

    Enable IPoIB

    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:

        ifconfig -a
        ibstat
        ibv_devices
        ibv_devinfo

    If you are connected to an InfiniBand network with an active Subnet Manager, then your IB interfaces should have come online by now. You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer.

    Example InfiniBand Diagnostic Tools

    I have added several InfiniBand diagnostic tools to this add-on. You can use any from the following list:

        ibstat, ibstatus, ibv_devinfo, ibv_devices
        perfquery, smpquery
        ibnetdiscover, iblinkinfo.pl
        ibhosts, ibswitches, ibnodes

    Wrap Up

    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!


  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1].

    FairWarning has used MySQL as their solutions' database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they've seen as a result of those changes, and their reasons for continued reliance on MySQL (from the FairWarning MySQL Case Study).

    Scalability to Handle Terabytes of Data

    FairWarning's customers have a lot of data: on average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months).

    Low or Zero Admin = Few DBAs

    "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business." - Chris Arnold, FairWarning Vice President of Product Management and Engineering

    Performance Schema

    As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, the MySQL 5.6 performance schema's low-level metrics have provided critical insight into how the system is performing and why.

    Support for Multi-CPU Threads

    MySQL 5.6's support for multiple concurrent CPU threads, and FairWarning's custom data loader, allow multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%.

    MySQL Enterprise Hot Backup

    Because hospitals and clinics never stop, FairWarning solutions can't either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%.

    MySQL Enterprise Edition and Product Roadmap Provide Complete Solution

    "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold

    Learn More:
    - FairWarning MySQL Case Study
    - Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation
    - Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript)
    - MyISAM to InnoDB - Why and How on-demand webinar (same formats)
    - Top 10 Reasons to Use MySQL as an Embedded Database white paper

    [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.
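    To make the multi-file loading point concrete, here is a rough Python sketch (an illustration added in this edit, not FairWarning's actual loader; the table, file names, and credentials are hypothetical, and it assumes mysql-connector-python with local_infile enabled on the server):

        # Load several files into one InnoDB table concurrently. InnoDB's
        # row-level locking allows parallel loads that MyISAM's table-level
        # locks would serialize. All names and credentials are placeholders.
        from concurrent.futures import ThreadPoolExecutor
        import mysql.connector

        FILES = ["events_01.csv", "events_02.csv", "events_03.csv"]

        def load_file(path):
            conn = mysql.connector.connect(host="localhost", user="loader",
                                           password="secret", database="audit",
                                           allow_local_infile=True)
            try:
                cur = conn.cursor()
                cur.execute(f"LOAD DATA LOCAL INFILE '{path}' INTO TABLE events "
                            "FIELDS TERMINATED BY ','")
                conn.commit()
            finally:
                conn.close()

        # One connection per thread; each file streams into the table in parallel.
        with ThreadPoolExecutor(max_workers=len(FILES)) as pool:
            list(pool.map(load_file, FILES))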


  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save on both drives two files, named alice_to_peter.key (2GB) and peter_to_alice.key (2GB), which contain randomly generated bits. Then they never meet again and communicate electronically. Alice also maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do (where n is the n-th byte of the message):

        encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    Then alice_pointer is attached to the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and for maximum security, the used part of the key can be erased). Peter receives the encrypted message, reads alice_pointer stored at its beginning, and does:

        message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    And for maximum security, after reading the message he also erases the used part of the key. EDIT: In fact, this step with this simple algorithm (without integrity check and authentication) decreases security; see Paulo Ebermann's post below.

    When Peter needs to send a message to Alice, they do the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, each day for the next 50 years, 2GB / (50 * 365) = circa 115kB of encrypted data in both directions. If they need to send more data, they can simply use larger storage for the keys; for example, with today's 2TB hard disks (1TB of keys each way) it is possible to exchange 60MB per day for the next 50 years! That is practically a lot of data; for example, with compression it is more than an hour of high-quality voice communication per day.

    It seems to me there is no way for an attacker to read the encrypted message without the keys, even with an infinitely fast computer. Even by brute force with an infinitely fast computer, they would get every possible message that fits the length of the message, but this is an astronomical number of messages and the attacker doesn't know which of them is the actual message. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application, using large storage on both communication sides for keys? I humbly expect that this application was invented by someone before me :-) )

    Note: If it is absolutely secure, then that is amazing, because with today's low-cost large storage it is a practically much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!

    EDIT: I think it will become more and more practical in the future, with lower and lower costs of storage. It can solve secure communication forever. Today you have no certainty whether someone will successfully attack existing ciphers a year later, making their often expensive implementations insecure. In many cases, before communication starts, there is a step where the communicating sides meet personally; that is the time to generate large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed while the military center keeps one hard drive for each submarine it owns. It could also be practical in everyday life, for example for controlling your bank account, because you meet with the bank when you create your account, etc.
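    What the question describes is essentially the classic one-time pad. Here is a minimal Python sketch of the scheme as described (an illustration added in this edit; the file name follows the question, and the integrity checking/authentication that the EDIT above notes is still needed is omitted):

        # Minimal sketch of the pad scheme described above. Each party keeps a
        # copy of the pad file and a pointer; key bytes must never be reused.
        import secrets

        def make_pad(path, size):
            with open(path, "wb") as f:
                f.write(secrets.token_bytes(size))  # shared random key material

        def xor_with_pad(data, pad_path, pointer):
            # XOR is its own inverse, so the same routine encrypts and decrypts.
            with open(pad_path, "rb") as f:
                f.seek(pointer)
                key = f.read(len(data))
            if len(key) < len(data):
                raise ValueError("pad exhausted - stop; never reuse key bytes")
            return bytes(d ^ k for d, k in zip(data, key)), pointer + len(data)

        make_pad("alice_to_peter.key", 1 << 20)  # 1 MB demo pad
        cipher, alice_pointer = xor_with_pad(b"hello Peter", "alice_to_peter.key", 0)
        plain, _ = xor_with_pad(cipher, "alice_to_peter.key", 0)
        assert plain == b"hello Peter"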


  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-Talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code.

    Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update":

        SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft

    This form of the UPDATE statement, @variable = column = expression, is documented and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn't have a natural order to its rows and the UPDATE statement has no means of specifying the order. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique have relied on the fact that the UPDATE statement, without the WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn't one. Peter wasn't satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism or partitioning.

    Many argue, with validity, that this reliance on a given order, where none can ever be guaranteed, is an abuse of basic relational principles and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances.

    Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations.

    Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. After all, a table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request that the database engine do the update in the conventional order. Then perhaps everyone would be satisfied.

    Cheers,
    Tony.
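    As a cross-check of what the technique actually computes (an illustration added in this edit, in Python rather than T-SQL, since the article's point concerns the SQL ordering guarantee, not the arithmetic): the quirky update threads a variable through the rows in clustered index order, exactly like this loop.

        # The running total the quirky update produces, row by row:
        # Subscribers[i] = Subscribers[i-1] + PeopleJoined[i] - PeopleLeft[i]
        rows = [(10, 2), (5, 1), (0, 4)]   # hypothetical (PeopleJoined, PeopleLeft)
        subscribers = 0
        for joined, left in rows:
            subscribers += joined - left
            print(subscribers)              # 8, 12, 8

    The debate, of course, is not whether this loop is correct, but whether SQL Server can be relied upon to visit the rows in that order.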


  • Oracle Unified Method (OUM) 6.1

    - by user714714
    ORACLE® UNIFIED METHOD RELEASE 6.1 Oracle’s Full Lifecycle Methodfor Deploying Oracle-Based Business Solutions About | Release | Access | Previous Announcements About Oracle is evolving the Oracle® Unified Method (OUM) to achieve the vision of supporting the entire Enterprise IT Lifecycle, including support for the successful implementation of every Oracle product. OUM replaces Legacy Methods, such as AIM Advantage, AIM for Business Flows, EMM Advantage, PeopleSoft's Compass, and Siebel's Results Roadmap. OUM provides an implementation approach that is rapid, broadly adaptive, and business-focused. OUM includes a comprehensive project and program management framework and materials to support Oracle's growing focus on enterprise-level IT strategy, architecture, and governance. Release OUM release 6.1 provides support for Application Implementation, Cloud Application Services Implementation, and Software Upgrade projects as well as the complete range of technology projects including Business Intelligence (BI), Enterprise Security, WebCenter, Service-Oriented Architecture (SOA), Application Integration Architecture (AIA), Business Process Management (BPM), Enterprise Integration, and Custom Software. Detailed techniques and tool guidance are provided, including a supplemental guide related to Oracle Tutor and UPK. This release features: Project Manager and Consultant views provide quick access to material relevant to each role OUM Cloud Application Services Implementation Approach Solution Delivery Guide 3.0 and Project Workplan Template OUM Microsoft Project Workplan Template and User's Guide updated to facilitate review and removal of out-of-scope Activities and Tasks MC.050 Application Setup Template available in Microsoft Excel format in addition to Microsoft Word format BT.070 Abbreviated Project Management Framework Presentation Template Envision Examples for Enterprise Organization Structures (BA.020), Enterprise Business Context Diagram (BA.045), and High-Level Use Cases (BA.060) Implement Examples for System Context Diagram (RD.005), Business Use Case Model (RA.015), Use Case Model (RA.023), MoSCoW List (RD.045), and Analysis Specification (AN.100) Home Page drop-down menu allows access to the method by Role, Supplemental Guidance, Method Repository, or View For a comprehensive list of features and enhancements, refer to the "What's New" page of the Method Pack. Upcoming releases will provide expanded support for Oracle's Enterprise Application suites including product-suite specific materials and guidance for tailoring OUM to support various engagement types. Access Oracle Customers Oracle customers may obtain copies of the method for their internal use – including guidelines, templates, and tailored work breakdown structure – by contracting with Oracle for a consulting engagement of two weeks or longer and meeting some additional minimum criteria. Customers, who have a signed consulting contract with Oracle and meet the engagement qualification criteria, are permitted to download the current release of OUM for their perpetual use. They may also obtain subsequent releases published during a renewable, three-year access period. Training courses are also available to these customers. Contact your local Oracle Sales Representative about enrolling in the OUM Customer Program. 
    Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold Partners: OPN Diamond, Platinum, and Gold Partners are able to access the OUM method pack, training courses, and collateral from the OPN Portal at no additional cost:
    1. Go to the OPN Portal at partner.oracle.com.
    2. Select "Sign In / Register for Account".
    3. Sign in.
    4. From the Product Resources section, select "Applications".
    5. From the Applications page, locate and select the "Oracle Unified Method" link.
    6. From the Oracle Unified Method Knowledge Zone, locate the "I want to:" section.
    7. From the "I want to:" section, locate and select "Implement Solutions".
    8. From the Implement Solutions page, locate the "Best Practices" section.
    9. Locate and select the "Download Oracle Unified Method (OUM)" link.

    Previous Announcements
    - Oracle Unified Method (OUM) Release 6.1
    - Oracle Unified Method (OUM) Release 6.0
    - Oracle Unified Method (OUM) Release 5.6
    - Oracle Unified Method (OUM) Release 5.5
    - Oracle Unified Method (OUM) Release 5.4
    - Oracle EMM Advantage Retired
    - Retirement of Oracle EMM Advantage Planned for December 01, 2011

    Read the article

  • Acr.ExtDirect – Part 1 – Method Resolvers

    - by Allan Ritchie
    In my opinion, one of the most important qualities of any open source library is to be as open as possible while avoiding becoming invasive to your code and business-model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn't really a favourite with many people these days, since it comes with a startup performance hit and requires runtime compiling. I find that there are a whole ton of communication libraries out there currently requiring one or the other (e.g. WCF, RIA, etc.). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are two other "out-of-the-box" implementations – convention-based & XML configuration.

    Convention – I don't actually recommend using this one, since it opens up all of your public instance methods to remote execution calls.
    XML Configuration – This isn't so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn't the favourite these days.

    So what are your options if you don't like attributes, convention, or XML configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution:

        public interface IDirectMethodResolver {
            bool IsServiceType(ComponentModel model, Type type);
            string GetNamespace(ComponentModel model);
            string[] GetDirectMethodNames(ComponentModel model);
            DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
        }

    Now to implement our own method resolver:

        public class TestResolver : IDirectMethodResolver {

            /// <summary>
            /// Determine whether a type is a remotable service.
            /// </summary>
            public bool IsServiceType(ComponentModel model, Type type) {
                return (type.Namespace == "MyBLL.Data");
            }

            /// <summary>
            /// Return the calling name (namespace) for the client side.
            /// </summary>
            public string GetNamespace(ComponentModel model) {
                return model.Name;
            }

            /// <summary>
            /// List the methods each component makes available for remote execution.
            /// </summary>
            public string[] GetDirectMethodNames(ComponentModel model) {
                switch (model.Name) {
                    case "Products":
                        return new[] { "GetProducts", "LoadProduct", "Save", "Update" };

                    case "Categories":
                        return new[] { "GetProducts" };

                    default:
                        throw new ArgumentException("Invalid type");
                }
            }

            /// <summary>
            /// Map method names to their ExtDirect method types.
            /// </summary>
            public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
                if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                    return DirectMethodType.FormSubmit;
                else if (method.Name.StartsWith("Load"))
                    return DirectMethodType.FormLoad;
                else
                    return DirectMethodType.Direct;
            }
        }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!
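    Since Acr.ExtDirect resolves components through Castle Windsor, the remaining step is presumably to register the resolver with the container. A minimal sketch, assuming the library discovers any registered IDirectMethodResolver; the bootstrap class, and the Products and Categories component types, are illustrative stand-ins for the MyBLL.Data components above, not Acr.ExtDirect's documented startup API:

        // Sketch only: assumes Acr.ExtDirect picks up IDirectMethodResolver
        // implementations registered in the Windsor container.
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public static class ContainerBootstrap {
            public static IWindsorContainer Configure() {
                var container = new WindsorContainer();

                container.Register(
                    // Make the custom resolver available to the library...
                    Component.For<IDirectMethodResolver>().ImplementedBy<TestResolver>(),
                    // ...alongside the business components it exposes remotely
                    // (hypothetical types from the MyBLL.Data namespace).
                    Component.For<Products>().Named("Products"),
                    Component.For<Categories>().Named("Categories"));

                return container;
            }
        }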

    Read the article

  • TechEd Israel 2010 may only accept speakers from sponsors

    - by RoyOsherove
    A month or so ago, Microsoft Israel started sending out emails to its partners and registered event users to "Save the date!" – Microsoft TechEd Israel is coming, and it's going to be this November! "Great news," I thought to myself. I'd been to a couple of the MS TechEd events, as a speaker and as an attendee, and they were lovely and professionally done. Israel is an amazing place for technology and development, and TechEd hosted some big names in the world of MS software.

    A couple of weeks ago, I was shocked to hear from a couple of people that Microsoft Israel plans to accept non-Microsoft TechEd speakers only from sponsors of the event. That means that, according to the amount you have paid, you get to insert one or more of your own selected speakers as part of TechEd. I've spent the past couple of weeks trying to gather more evidence of this, and have gotten some input from within MS about it. It looks like that is indeed the case, though no MS rep was prepared to answer any email of mine publicly. If they approach me now, I'd be happy to print their response.

    What does this mean? If this is true, it means that Microsoft Israel is making a grave mistake – they are diluting the quality of the speakers for pure money factors. That means that, as a TechEd attendee who paid good money, you might be sitting down to watch nothing more than a bunch of infomercials, or sub-standard speakers – since speakers are no longer selected on quality or interest in their topic.
    - They are turning the conference from a learning event into a commercially driven event
    - They are closing off the stage to the community of speakers who may not be associated with any organization willing to be a sponsor
    - They are losing speakers (such as myself) who will not want to be part of such an event. (Yes – even if my company ends up sponsoring the event, I will not take part in it. Sorry, Eli!)
    - They are saying "F&$K you" to the community of MVPs, who should be the people approached first about technical talks (my guess is many MVPs wouldn't want to talk at an event driven that way anyway)

    I do hope this ends up not being true, but it looks like it is. MS Israel had already done such a thing with the Developer Days event previously held in Israel – only sponsors were allowed to insert speakers into the event. If this turns out to be true, I would urge the MS community in Israel to NOT TAKE PART IN THIS EVENT in any form (attendee, speaker, sponsor or otherwise). By taking part, you will be telling MS Israel it's OK to piss all over the community that they are quietly suffocating anyway.

    The MVP case
    MS Israel has managed to screw the MVP program as well. MS MVPs (I'm one) have had a tough time here in Israel the past couple of years. Ever since Yosi Taguri left the blue-badge ranks, there has been no real community leader. Whoever runs things right now has their eyes and minds set elsewhere, with the software MVP community far from mind and heart. No special MVP events (except a couple of small ones this year). No real MVP leadership happens here; the MVP MEA lead (Ruari) being on a remote line is not really what's needed. "MVP? What's that?" I'm sure many MS Israel employees would say. Exactly my point.

    Last word
    I've been disappointed by the MS machine for a while now, but their slowness to realize what real community means over the past couple of years really turns me off. Maybe it's time to move on. Maybe I shouldn't be chasing people at MS Israel begging for a room to host the Agile Israel user group. Maybe it's time to say a big bye-bye and start looking at a life a bit more disconnected.

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we often need to do is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than "Yes") to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

        IF ('$(DeployBucket1Flag)' = 'Yes')
        BEGIN
            :r .\Bucket1.data.sql
        END
        IF ('$(DeployBucket2Flag)' = 'Yes')
        BEGIN
            :r .\Bucket2.data.sql
        END
        IF ('$(DeployBucket3Flag)' = 'Yes')
        BEGIN
            :r .\Bucket3.data.sql
        END

    That works fine and is, I'm sure, a very common technique for doing this. It is however slightly ugly, because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I'm sure many of you know) suggested another technique – bitmasks. I won't go into detail about how this works (James has already done that at Using a Bitmask - a practical example), but I'll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value's binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

        /*
        $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT,
        be declared in the SQLCMD variables section of your project file. It should
        contain a numerical value, defaulted to 0. In this example I have declared it
        using a :setvar statement. Test the effect of different values by changing the
        :setvar statement accordingly. Examples:
            :setvar DeployData 1   will deploy bucket 1
            :setvar DeployData 2   will deploy bucket 2
            :setvar DeployData 3   will deploy buckets 1 & 2
            :setvar DeployData 6   will deploy buckets 2 & 3
            :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
        */
        :setvar DeployData 0

        DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));

        IF (@bitmask & 1 = 1)
        BEGIN
            PRINT 'Bucket 1 insertions';
        END
        IF (@bitmask & 2 = 2)
        BEGIN
            PRINT 'Bucket 2 insertions';
        END
        IF (@bitmask & 4 = 4)
        BEGIN
            PRINT 'Bucket 3 insertions';
        END
        IF (@bitmask & 8 = 8)
        BEGIN
            PRINT 'Bucket 4 insertions';
        END
        IF (@bitmask & 16 = 16)
        BEGIN
            PRINT 'Bucket 5 insertions';
        END

    An example of running this using DeployData=6: the binary representation of 6 is 110, so the second and third bits of that binary number are set to 1 and hence buckets 2 and 3 are "activated". Hope that makes sense and is useful to some of you! @Jamiet

    P.S. I used the awesome HTML Copy feature of Visual Studio's Productivity Power Tools in order to format the T-SQL code above for this blog post.
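    If you publish the resulting dacpac from the command line rather than from Visual Studio, the bucket selection can be supplied per environment without editing the project. A sketch, assuming the standard SqlPackage variable syntax (the file, server, and database names here are illustrative):

        SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /Variables:DeployData=6

    This would deploy buckets 2 and 3, exactly as in the worked example above.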

    Read the article

  • Introducing Oracle User Productivity Kit (UPK) 12.1 Thursday 26th June 2014 – Oracle, Reading, Berkshire

    - by Kathryn Lustenberger
    Join Oracle UPK Product Management and Product Development, in conjunction with Larmer Brown. Register Now.

    UPK Client Event – Introducing v12.1
    Thursday 26th June 2014
    Oracle, Thames Valley Park, Reading, Berkshire

    Agenda
    10.00am – Registration and Coffee
    10.30am – Introductions and Objectives
    10.45am (twin track) – Introduction to UPK (Standard) Version 12.1: overview and demonstration for delegates new to UPK / Upgrading to UPK (Standard) Version 12.1: demonstration of the latest release, for delegates with experience of UPK
    12.25pm (twin track) – Q&A: an opportunity for delegates to raise specific questions about the tool / about the latest release
    12.45pm – Lunch
    1.30pm – Larmer Brown Development Tracker: addresses the challenge of ensuring that a content development project will meet agreed deadlines, identifying risks with sufficient notice to take action
    1.50pm – Case Study: how the Development Tracker addressed this client's requirement to track, monitor and report progress on a large-scale implementation project
    2.10pm – Larmer Brown Library Content for UPK: this session will showcase some of Larmer Brown's content library and consider how pre-built content can be used to your advantage
    2.30pm – Coffee Break
    2.45pm – Making the most of UPK Professional: this presentation and demonstration seeks to unlock the potential of UPK Professional for those that may not be fully utilising the tool
    3.20pm – Case Study: how this client has utilised the tracking and reporting features within UPK Professional
    3.40pm – Summary and Conclusions
    4.00pm – Close

    Read the article

  • Wireless Network Found, can't connect, repeated requests for authentication

    - by Herm Holland
    After trawling through the internet – forums, support websites, and dozens upon dozens of answered questions on this site – I've not found a solution to what seems like a fairly regular problem... I cannot connect to a wireless network, and am continually asked for the network password. I have tried countless suggested solutions in the different locations I've already referred to. None of them have worked. Details of my experience are as follows:

    I have just recently installed Ubuntu 12.04.1 (32-bit). Ubuntu installed on my system seemingly fine, and I even formatted my hard drive during the process. It's as if it were a new desktop computer. During the installation I was asked to connect to a wireless network. I have a USB wireless card connected which I have used to connect desktop PCs, laptops, and a Wii to the internet from approximately the same area of the house (thus the same distance from the wireless router). I chose my network, entered the correct password for it (I double-checked; it's definitely the right password) and proceeded with the installation. Several times before the installation was complete, I was asked to authenticate the connection, and this seemed to do nothing each time. On the repeated screens the password was already entered in the appropriate box.

    When Ubuntu booted up, the first thing I was faced with (other than something about language settings) was another request for authentication. Again, the password was already there, so I clicked connect. It did not connect. Instead, I was once again faced with repeated requests every few minutes. I went onto my laptop, which is connected to this network, checked the details of the network, and entered them manually into my Ubuntu PC (including the IPv4 and IPv6 information), but this didn't work either, so I set it back to finding the settings automatically. Note also that the "Connect automatically" and "Available to all users" boxes are checked, and have been unchecked and rechecked countless times. I have also tried having my user account connect automatically, and needing a password entered at the welcome screen.

    Whilst I've been writing this, it has gone through a spate of connecting successfully to the network for less than a minute, before coming offline again, only to repeat the process. But it has now returned to prompting me for a password every couple of minutes. This computer has already run the Fedora OS and had no trouble connecting to, and maintaining, a connection. I also have a laptop running Windows 7 less than a metre away from this desktop PC, which is connected and has no trouble maintaining a connection at 50%-100% strength (fluctuating). Therefore:
    - I know it's not the wireless card
    - I know it's not the PC itself
    - I know it's not the access point
    - I know it's not the location of my PC or wireless card
    - It is solely because of Ubuntu

    Everything else has worked fine, but the moment Ubuntu was introduced into the equation, it has gone completely wrong. Honestly, I prefer Ubuntu as an OS to Fedora, but if I can't solve the problem it'll be straight back to Fedora that I'll have to go. Can anyone help me at all?

    Read the article

  • Using Oracle Linux iSCSI targets with Oracle VM

    - by wim.coekaerts
    A few days ago I had written a blog entry on how to use Oracle Solaris 10 (in my case), ZFS, and the iSCSI target feature in Oracle Solaris to create a set of devices exported to my Oracle VM server. Oracle Linux can do this as well, and I wanted to make sure I also tried out how to do it on Oracle Linux; here are the results.

    When you install Oracle Linux 5 update 5 (anything newer than update 3), it comes with an rpm called scsi-target-utils. To begin your quest, should you choose to accept it :) make sure this is installed:

        rpm -qa | grep scsi-target

    If it is not installed:

        up2date scsi-target-utils

    The target utils come with a tool, tgtadm, which is similar to iscsitadm on Oracle Solaris. There are again two components on the iSCSI server side: (1) create volumes – we will use LVM with lvcreate; (2) expose a target using tgtadm.

    My server has a simple setup. All the disks are part of a single volume group called vgroot. To export a 50Gb volume I just create a new volume:

        lvcreate -L 50G -n mytest1 vgroot

    This will show up as a new volume in /dev/mapper as /dev/mapper/vgroot-mytest1. Create as many as you want for your environment. Since I already have my blog entry about the 5 volumes, I am not going to repeat the whole thing; you can just go look at the previous entry.

    Now that we have created the volume, we need to use tgtadm to set it up. Make sure the service is running:

        /etc/init.d/tgtd start    (or: service tgtd start)

    If you want to keep it running, you can do chkconfig tgtd on to start it automatically at boot time.

    Next you need a target name to set everything up. My recommendation would be to install iscsi-initiator-utils. This will create an iscsi id and put it in /etc/iscsi/initiatorname.iscsi. For convenience you can do:

        source /etc/iscsi/initiatorname.iscsi
        echo $InitiatorName

    and from here on use $InitiatorName instead of the long, complex iqn.

    Create your target:

        tgtadm --lld iscsi --op new --mode target --tid 1 -T $InitiatorName

    To show the status:

        tgtadm --lld iscsi --op show --mode target

    Add the volume previously created:

        tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vgroot-mytest1

    Re-run status to see it's there:

        tgtadm --lld iscsi --op show --mode target

    and just like on Oracle Solaris you now have to export (bind) it:

        tgtadm --lld iscsi --op bind --mode target --tid 1 -I iqn.1986-03.com.sun:01:2a7526f0ffff

    If you want to export the lun to every iscsi initiator, replace the iqn with ALL. Otherwise you have to add the iqn of each iscsi initiator or client you want to connect. In the case of my 2-node Oracle VM server setup, both Oracle VM servers' initiator names would have to be added.

    Use status again to see that it has this iqn under ACL:

        tgtadm --lld iscsi --op show --mode target

    You can drop the --lld iscsi if you want, or alias it; it just makes the command line more obvious as to what you are doing.

    Oracle VM side: refer back to the previous blog entry for the detailed setup of my Oracle VM server volumes; the exact same commands will be used there.

    Discover:

        iscsiadm --mode discovery --type sendtargets --portal

    Log in:

        iscsiadm --mode node --targetname iscsi targetname --portal --login

    Get devices:

        /etc/init.d/iscsi restart

    and voila, you should be in business. Have fun.
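    One caveat: definitions created with tgtadm live in the daemon's memory, so they are gone after a tgtd restart. Depending on your scsi-target-utils version, you may be able to persist the setup in /etc/tgt/targets.conf instead; a sketch, assuming the stock targets.conf syntax – the target iqn here is an illustrative placeholder, and the backing store reuses the volume from above:

        # Illustrative /etc/tgt/targets.conf entry; adjust the iqn to your own naming.
        <target iqn.2010-05.com.example:mytest1>
            backing-store /dev/mapper/vgroot-mytest1
        </target>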

    Read the article

  • SOA Community Newsletter October 2013

    - by JuergenKress
    Dear SOA & BPM Partner Community member,

    Our October newsletter edition focuses on Oracle OpenWorld 2013 – highlights, keynotes, and all presentations. Thanks to all partners who made the conference a huge success. If you could not come to San Francisco, you will find all the details within this newsletter. As this edition contains a lot of content, we have three sections: SOA; BPM & ACM; and AppAdvantage & UX. Make sure you share your content with the community, best via twitter @soacommunity #soacommunity!

    What is new in SOA Suite 12c? At OOW the product management team demonstrated some of the key features of the upcoming version. The important SOA topics are mobile integration and cloud integration – make sure you re-use your existing SOA platform! Bruce Tierney showcased the Agilent mobile integration, and you can try the new Mobile Order Management for EBS GSE Demo using middleware technology. On cloud integration, the product management team presented several OOW sessions and published two whitepapers. As SOA matures, awareness of SOA Governance continues to rise: see Introducing Oracle Enterprise Repository Express Workflows, and watch Luis Weir: Challenges to Implementing SOA Governance. Thanks to Ronald for the SOA Made Simple | Introduction to SOA series; the next article in the Industrial SOA series is SOA and User Interfaces (UI).

    Have you achieved a successful BPM implementation? Nominate your customer references for the Gartner Business Process Management Excellence Awards 2014. Do you want to showcase the latest BPM Suite? Make sure you use the hosted BPM PS6 (11.1.1.7) demo. Do you want to become an expert in BPM Suite? Attend one of our BPM Bootcamps in Germany, the Netherlands, Spain or the UK! If you cannot make it, we offer plenty of on-demand content: Advanced BPM Scenarios & BPM Architecture Topics & Process Modeling and Life Cycle & Adaptive Case Management & Smart Application Extensibility with Oracle Process Accelerators. I would also recommend the on-demand webcast with Bruce Silver & Ajay Khanna – a great introduction to Adaptive Case Management. Thanks to Mark Foster from the A-team for the ACM article series, and to Leon Smiers for his blog posts.

    If you accomplished a SOA Suite or BPM Suite project and want to become a certified SOA or BPM expert, we are again offering free vouchers to become a certified SOA & BPM expert (limited to partners in Europe, Middle East and Africa). Don't miss this opportunity and become Specialized!

    Best regards,
    Jürgen Kress

    To read the newsletter please visit http://tinyurl.com/soaNewsOctober2013 (OPN account required)
    To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required)
    If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories:
    - Constraining the amount of time that the Console stalls waiting for communication
    - Reducing and streamlining the amount of data required for an update

    A few releases ago, we added support for a configurable, domain-wide mbean "Invocation Timeout" value in the advanced section of the Console's general configuration page for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency.

    In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support it. For example, if a user requests a Servers table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers, and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console.

    To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is:
    1. For each configuration mbean (e.g., Servers), populate rows with configuration attributes from the fast, local mbean server.
    2. Find a WorkManager.
    3. For each server, create a Work instance to obtain runtime mbean attributes for that server, and schedule it in the WorkManager.
    4. Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout.
    5. For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data.
    6. Display the collected data in the table.

    In addition to these changes to constrain how long the Console waits for communication, a number of other changes have been made to reduce the amount and scope of managed-server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
    We decided that supporting that edge case did not warrant the performance impact for everyone, and we now only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment.

    Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this interaction occurred with every display, and any delay or impediment was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:
    - Using Work Management to obtain health information concurrently
    - Applying the operation timeout configuration to constrain how long we will wait
    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
    - Eliminating health collection if this portlet is minimized

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
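    To make the fan-out-with-timeout pattern concrete, here is a minimal sketch of the algorithm above. The Console itself uses WebLogic's Java Work Management API; this sketch substitutes .NET tasks as a stand-in, and the server names, timeout value, and FetchRuntimeAttributes helper are all illustrative assumptions:

        using System;
        using System.Linq;
        using System.Threading.Tasks;

        class PartialResultsDemo
        {
            // Hypothetical stand-in for the per-server runtime mbean query.
            static string FetchRuntimeAttributes(string server)
            {
                Task.Delay(100).Wait(); // simulate a slow remote call
                return server + ": RUNNING";
            }

            static void Main()
            {
                var servers = new[] { "ManagedServer1", "ManagedServer2", "ManagedServer3" };
                var operationTimeout = TimeSpan.FromSeconds(5);

                // Schedule one work item per server, concurrently.
                var work = servers.ToDictionary(
                    s => s,
                    s => Task.Run(() => FetchRuntimeAttributes(s)));

                // Wait for all work items, but no longer than the operation timeout.
                Task.WaitAll(work.Values.Cast<Task>().ToArray(), operationTimeout);

                // Render whatever arrived; flag servers that did not respond in time.
                foreach (var pair in work)
                {
                    Console.WriteLine(pair.Value.Status == TaskStatus.RanToCompletion
                        ? pair.Value.Result
                        : pair.Key + ": configuration only (runtime data incomplete)");
                }
            }
        }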

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    ODP.NET + Visual Studio = simple and safe data connections.

    This guide is for someone wanting to use the latest ODP.NET quickly, without reading the official documentation; it will get you up and running in about 15 minutes. I reviewed the referral traffic to my earlier post, Setting up ODP.net with Win7 x64, and noticed most people were searching for one of the following terms:
    - "how to use odp.net with vs"
    - "setup connection odp.net"
    - "query db using odp and vs"

    While that article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help. So before we get started, you will need the following:
    1. The ODP.NET download from www.oracle.com/technology/software/tech/dotnet/utilsoft.html – install it; it is the first one on the page.
    2. Visual Studio 2008 or 2010.

    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore, as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client.

    First things first: add a reference to Oracle.DataAccess.Client after you install ODP.NET.

    Copy and paste the following C# code into your project, replace the relevant info (including the query string), and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        // Setup DataSource
                        string oradb = "Data Source=(DESCRIPTION ="
                                     + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                     + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                                     + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                        // Value to search for; replace with your own input.
                        string username = "SCOTT";

                        // Open connection to Oracle - this could be moved outside the try.
                        OracleConnection conn = new OracleConnection(oradb);
                        conn.Open();

                        // Create cmd and use parameters to prevent SQL injection attacks.
                        OracleCommand cmd = new OracleCommand();
                        cmd.Connection = conn;
                        cmd.CommandText = "select username from table where username = :username";

                        OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                        p1.Value = username;
                        cmd.Parameters.Add(p1);

                        cmd.CommandType = CommandType.Text;

                        OracleDataReader dr = cmd.ExecuteReader();
                        dr.Read();

                        // Contains the value of the datarow
                        Console.WriteLine(dr["username"].ToString());

                        // Disposes of objects.
                        dr.Dispose();
                        cmd.Dispose();
                        conn.Dispose();
                    }
                    catch (OracleException ex) // Catches only Oracle errors
                    {
                        switch (ex.Number)
                        {
                            case 1:
                                Console.WriteLine("Error attempting to insert duplicate data.");
                                break;
                            case 12545:
                                Console.WriteLine("The database is unavailable.");
                                break;
                            default:
                                Console.WriteLine(ex.Message.ToString());
                                break;
                        }
                    }
                    catch (Exception ex) // Catches any error not previously caught
                    {
                        Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                    }
                }
            }
        }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while, as it is beta, and I wouldn't want any production code using any beta software.
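    One refinement worth considering: OracleConnection, OracleCommand, and OracleDataReader all implement IDisposable, so wrapping them in using blocks guarantees cleanup even if an exception is thrown before the manual Dispose calls run. A minimal sketch of the same query in that style (oradb and username as declared above):

        using (OracleConnection conn = new OracleConnection(oradb))
        using (OracleCommand cmd = conn.CreateCommand())
        {
            conn.Open();
            cmd.CommandText = "select username from table where username = :username";
            cmd.Parameters.Add(new OracleParameter("username", OracleDbType.Varchar2) { Value = username });

            using (OracleDataReader dr = cmd.ExecuteReader())
            {
                if (dr.Read())
                    Console.WriteLine(dr["username"].ToString());
            }
        } // conn, cmd, and dr are all disposed here, even on error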

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is its support for multi-threading. Multi-threading allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads, so that each thread has fine-grained control over a segment of the total data volume at any time. The idea behind threading is the notion that "many hands make light work": each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention.

    The best way to visualize the concept of threading is a "pie" analogy. Imagine the total workset for a batch job is a pie. If you split that pie into equal-sized segments, each segment represents an individual thread.

    The concept of threading has advantages and disadvantages:
    - Smaller elapsed runtimes – Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, jobs with threading will finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
    - Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X, then that is the only thread that needs to be resubmitted.
    - Threading can be somewhat dynamic – The number of threads run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
    - Failure risk due to data issues is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
    - The number of threads is not infinite – As with any resource, there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.

    Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtimes for the threads should all be the same; in other words, all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads take longer than others, for the following reasons:
    - Workloads within the threads are not always the same – Whilst each thread operates on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
    - Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, some jobs use additional factors to select records for processing. If any of those factors exhibit data skew, then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than the other threads executed.

    Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.
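    To make the "pie" analogy concrete, here is a minimal sketch of key-range partitioning across threads. This is purely illustrative – it is not the product's actual thread allocation algorithm – and the key range, thread count, and ProcessObject helper are all assumptions:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class ThreadingDemo
        {
            // Hypothetical per-object work; the real framework applies business processing here.
            static void ProcessObject(long objectId)
            {
                // ... per-object business logic ...
            }

            static void Main()
            {
                const long lowKey = 0;         // illustrative key space
                const long highKey = 1000000;
                const int threadCount = 4;     // the "thread limit" for this run

                long sliceSize = (highKey - lowKey + threadCount - 1) / threadCount;
                var threads = new List<Task>();

                // Each "thread" takes one contiguous slice of the pie.
                for (int t = 0; t < threadCount; t++)
                {
                    long start = lowKey + t * sliceSize;
                    long end = Math.Min(start + sliceSize, highKey);
                    threads.Add(Task.Run(() =>
                    {
                        for (long id = start; id < end; id++)
                            ProcessObject(id);
                    }));
                }

                // Wait for all slices; a failed slice could be resubmitted individually.
                Task.WaitAll(threads.ToArray());
                Console.WriteLine("All threads complete.");
            }
        }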

    Read the article
