Search Results

Search found 1277 results on 52 pages for 'cody smith'.

  • Enabling syntax highlighting for LESS in Programmer's Notepad?

    - by Cody Gray
    When I don't feel like firing up the Visual Studio behemoth, or when I don't have it installed, I always turn to Programmer's Notepad. It's an amazingly light and fast little text editor, with the special advantage that it is completely platform-native and conforms to standard UI conventions. Therefore, please do not suggest that I consider using other text editors. I've already considered and rejected them because they do not use native UI controls. I like Programmer's Notepad, thank you very much. Unfortunately, I've recently begun to learn, use, and love LESS for all of my CSS coding needs, and it appears that Programmer's Notepad is not bundled with a syntax highlighting scheme for LESS. Does anyone know if there is—by chance and good fortune—one already available somewhere on the web that some kind soul has tediously prepared? If not, how can I go about writing one of my own? Is there a way to build on the existing CSS scheme? It's also possible that any code coloring scheme designed for Scintilla-based editors will work, as Programmer's Notepad is based on the Scintilla control. If you know of a LESS highlighting scheme for Scintilla-based editors, and how to use that with Programmer's Notepad, please suggest that as well.

    Read the article

  • hierarchical numbering in Microsoft Word 2003

    - by cody
    I have a level 3 heading in my document and want the document hierarchically numbered, but my level 3 headings restart numbering at 1.1.1 and I have no clue why. It looks like this:

    1. blah
    1.1 blub
    1.2 blub
    2. blah
    2.1 blub
    2.2 blub
    1.1.1 blubb <- shouldn't this be 2.2.1?
    3. blah

    How can I correct this issue?

    Read the article

  • Trying to get to the command prompt through recovery disk

    - by cody
    I'm trying to reach the command prompt through a Vista recovery disk I have. It boots from the disk and gets to the point where it asks which installation I want to repair, then says the disk is not compatible with my version of Windows. I have a dual-boot setup with Vista and Server 2008 R2. Is there another way to run Check Disk? I can't boot in safe mode or normally; I suspect a driver (atipcie) is the problem.
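
    For reference, Check Disk can be run from the recovery environment's command prompt once you reach it. A minimal sketch, assuming the system volume turns out to be D: (recovery environments often remap drive letters, so list the volumes first):

        rem identify the right drive letter
        diskpart
        list volume
        exit

        rem check and repair the volume
        chkdsk D: /f /r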

    Read the article

  • wireless internet in Linux is very, very slow... but in Windows... everything's fine

    - by Cody Acer
    Yesterday, when I was connecting to our neighbor's wifi (the signal strength is below 50%), I couldn't browse anything or even ping the gateway: 100% packet loss. And sometimes I can connect fine and open my Facebook account for 15 minutes, but after 15 min the connection is extremely slow. Not in Windows, though: there I can surf even when the signal strength is very poor. Weird, eh?

        root@Emely:~# lspci -knn
        00:00.0 Host bridge [0600]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx DMI Bridge [8086:a010]
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: agpgart-intel
        00:02.0 VGA compatible controller [0300]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a011]
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: i915
                Kernel modules: i915
        00:02.1 Display controller [0380]: Intel Corporation Atom Processor D4xx/D5xx/N4xx/N5xx Integrated Graphics Controller [8086:a012]
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
        00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: snd_hda_intel
                Kernel modules: snd-hda-intel
        00:1c.0 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 1 [8086:27d0] (rev 02)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1c.1 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 2 [8086:27d2] (rev 02)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1c.2 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 3 [8086:27d4] (rev 02)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1c.3 PCI bridge [0604]: Intel Corporation NM10/ICH7 Family PCI Express Port 4 [8086:27d6] (rev 02)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1d.0 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 [8086:27c8] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: uhci_hcd
        00:1d.1 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 [8086:27c9] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: uhci_hcd
        00:1d.2 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 [8086:27ca] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: uhci_hcd
        00:1d.3 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 [8086:27cb] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: uhci_hcd
        00:1d.7 USB controller [0c03]: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller [8086:27cc] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: ehci-pci
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2)
        00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: lpc_ich
                Kernel modules: lpc_ich
        00:1f.2 SATA controller [0106]: Intel Corporation NM10/ICH7 Family SATA Controller [AHCI mode] [8086:27c1] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: ahci
                Kernel modules: ahci
        00:1f.3 SMBus [0c05]: Intel Corporation NM10/ICH7 Family SMBus Controller [8086:27da] (rev 02)
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel modules: i2c-i801
        05:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)
                Subsystem: Wistron NeWeb Corp. Device [185f:051a]
                Kernel driver in use: bcma-pci-bridge
                Kernel modules: bcma
        09:00.0 Ethernet controller [0200]: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller [11ab:4354]
                Subsystem: Samsung Electronics Co Ltd Notebook N150P [144d:c072]
                Kernel driver in use: sky2
                Kernel modules: sky2

        root@Emely:~# ip addr show
        1: lo: mtu 65536 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
            link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff
        3: wlan0: mtu 1500 qdisc mq state UP qlen 1000
            link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff
            inet 192.168.1.108/24 brd 192.168.1.255 scope global wlan0
            inet6 fe80::21b:b1ff:fea9:ace0/64 scope link
               valid_lft forever preferred_lft forever

        root@Emely:~# ip link show
        1: lo: mtu 65536 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
            link/ether e8:11:32:2e:a6:fd brd ff:ff:ff:ff:ff:ff
        3: wlan0: mtu 1500 qdisc mq state UP qlen 1000
            link/ether 00:1b:b1:a9:ac:e0 brd ff:ff:ff:ff:ff:ff

        root@Emely:~# rfkill list all
        0: samsung-wlan: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: samsung-bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        3: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    Is this a wireless driver issue?
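
    The symptoms (fine in Windows, erratic under Linux on a BCM4313) do point at the driver. A diagnostic sketch, assuming an Ubuntu/Debian-style system (package names differ on other distros), for swapping the in-kernel brcmsmac/bcma stack for Broadcom's proprietary STA (wl) driver to compare:

        # see what is currently driving the card
        lsmod | egrep 'brcmsmac|bcma|wl'

        # install Broadcom's STA driver and swap it in
        sudo apt-get install bcmwl-kernel-source
        sudo modprobe -r brcmsmac bcma
        sudo modprobe wl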

    Read the article

  • Map /dev/bus/usb node to /sys node on Linux

    - by Cody Brocious
    I'm using libusb to find and access a USB device, but once I get the information I need from there, I need to map it to a /sys node. This could be the actual USB bus it's on, the /sys/bus/usb-serial node (which is where I'm going to end up eventually), or effectively anywhere else, since I can walk the tree from there. I can get to a /dev/bus/usb node easily enough, but I'm a bit lost from there. What would be the best route to perform this mapping? Alternatively, a way to get the /dev/ttyUSB device node for a /dev/bus/usb node would work as well, since it gets me the same result.
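
    For reference, libusb exposes the bus number and device address (libusb_get_bus_number / libusb_get_device_address in libusb-1.0), and those same numbers appear as the busnum and devnum attributes of each device directory under /sys/bus/usb/devices. A minimal C sketch of the matching walk (error handling trimmed):

        #include <stdio.h>
        #include <stdlib.h>
        #include <dirent.h>

        /* Read an integer sysfs attribute such as busnum or devnum. */
        static int read_int_attr(const char *dir, const char *attr, int *out)
        {
            char path[512];
            snprintf(path, sizeof(path), "/sys/bus/usb/devices/%s/%s", dir, attr);
            FILE *f = fopen(path, "r");
            if (!f) return -1;
            int ok = (fscanf(f, "%d", out) == 1) ? 0 : -1;
            fclose(f);
            return ok;
        }

        int main(int argc, char **argv)
        {
            if (argc < 3) return 1;
            int bus = atoi(argv[1]), dev = atoi(argv[2]); /* from /dev/bus/usb/BUS/DEV */
            DIR *d = opendir("/sys/bus/usb/devices");
            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                int b, n;
                if (read_int_attr(e->d_name, "busnum", &b) == 0 &&
                    read_int_attr(e->d_name, "devnum", &n) == 0 &&
                    b == bus && n == dev)
                    printf("/sys/bus/usb/devices/%s\n", e->d_name);
            }
            closedir(d);
            return 0;
        }

    From the matched node you can then descend through the interface directories to find the ttyUSB child.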

    Read the article

  • Home ADSL Modem Dropping Packets?

    - by Cody
    I know this is supposed to be a "pro" forum, but I'm hoping someone can help, since my ISP isn't doing much to try and fix things. My ISP has given me a DSL modem / router combo, an ADB / Pirelli P.DG A2100N, and I have a 4096 / 767 kbps connection. I use it purely as a modem and router, and have the wireless AP feature turned off. I run it into a Ubiquiti Networks Toughswitch and use a Ubiquiti UAP as the wireless access point, although I've run tests directly wired to the router with nothing else connected and still see the same issues.

    I've been having issues where latency to google.com suddenly spikes from 8 ms to 250+ ms if someone does anything on the internet. If I run a speedtest or something, I can see latencies above 3000 ms. When downloading something, even when the download is throttled, the speed randomly drops to 0 kbps every few seconds. Online gaming is impossible because I notice the sudden lag-outs in the connection, and video streams or VoIP drop out as well; it's not at all consistent.

    I managed to find the password to my modem, and I don't think I see anything wrong with the settings, but I looked for the logs and found this:

        Jun 6 17:10:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:31 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: __ratelimit: 63 callbacks suppressed
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:10:40 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:22 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:23 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:24 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:25 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:29 user warn kernel: __ratelimit: 15 callbacks suppressed
        Jun 6 17:11:29 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:29 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:11:30 user warn kernel: nf_conntrack: table full, dropping packet.
        Jun 6 17:55:26 user warn kernel: bcmxtmcfg: OAM loopback response not received on VCC 1.1.3
        Jun 6 17:55:27 user warn kernel: bcmxtmcfg: OAM loopback response not received on VCC 1.1.4

    So, as I understand it, it appears the router is dropping packets? If that's the case, is there anything in the config that I can change? Or should I buy a new router, a new modem, or both?
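
    For reference: "nf_conntrack: table full, dropping packet" means the router's Linux kernel has hit its connection-tracking limit, so new flows get dropped. If the firmware exposes a shell, a sketch of the usual check and tweak (the exact /proc paths vary between kernel builds, and the limit value here is illustrative):

        # current limit, and how close the table is to it
        cat /proc/sys/net/netfilter/nf_conntrack_max
        cat /proc/sys/net/netfilter/nf_conntrack_count

        # raise the limit for the current boot
        sysctl -w net.netfilter.nf_conntrack_max=65536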

    Read the article

  • Error 502 in OpenOffice Spreadsheet formula

    - by cody
    The formula failing is the following:

        =IF(TIMEVALUE(C2 & ":00") > TIMEVALUE(B2 & ":00"); 0; C2-B2)

    I previously tried

        =IF(C2 > B2; 0; C2-B2)

    but this also gives me "Error 502". The cells it refers to contain data in the format "12:30" (I formatted the columns with the format "HH:MM"). I just want to calculate how much time lies between two times, respecting the special case where endtime < starttime.
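
    For reference, Error 502 is Calc's "invalid argument" error. One likely cause here: B2 and C2 already hold time values (numbers), so C2 & ":00" concatenates the underlying serial number into a string TIMEVALUE cannot parse. Since the cells are real times, a sketch of the usual midnight-wrapping duration formula, assuming B2 is the start and C2 the end:

        =IF(C2 < B2; C2 + 1 - B2; C2 - B2)

    Adding 1 adds a whole day, so an end time past midnight still yields a positive duration.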

    Read the article

  • how to word wrap, align text like the output of man?

    - by cody
    What is the command that word-wraps and justifies a text file so that the output looks like that of a man page?

    All of these system calls are used to wait for state changes in a child of the calling process, and obtain information about the child whose state has changed. A state change is considered to be: the child terminated; the child was stopped by a signal; or the child was resumed by a signal. In the case of a terminated child, performing a wait allows the system to release the resources associated with the child; if a wait is not performed, then the termi- nated child remains in a "zombie" state (see NOTES below).

    Thanks.
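
    A sketch of two common approaches: fmt wraps to a width but leaves the right edge ragged; nroff (which man itself uses) fills and justifies by default, with .ll setting the line length. The file name is a placeholder, and the sed is only there to strip the blank lines of nroff's page layout:

        # wrap at 60 columns, ragged right
        fmt -w 60 input.txt

        # wrap and justify both margins, man-style
        { echo ".ll 60"; cat input.txt; } | nroff | sed '/^$/d'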

    Read the article

  • Null reading in stream images? Unable to start activity ComponentInfo

    - by lasmith
    I have reviewed a lot of similar questions regarding not being able to launch an activity, but they don't seem to quite match my problem. I am working on a simple blackjack game, but it's force quitting. I suspect there is a problem with loading up the card png images I have. Stepping through the debugger, it crashes right in the resetGame() function. I'm sure I am doing something dumb. My Logcat:

        10-15 20:21:43.309: E/AndroidRuntime(2863): FATAL EXCEPTION: main
        10-15 20:21:43.309: E/AndroidRuntime(2863): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.smith.blackjack/com.smith.blackjack.Main}: java.lang.NullPointerException
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.access$600(ActivityThread.java:130)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Handler.dispatchMessage(Handler.java:99)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Looper.loop(Looper.java:137)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.main(ActivityThread.java:4745)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invokeNative(Native Method)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invoke(Method.java:511)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at dalvik.system.NativeStart.main(Native Method)
        10-15 20:21:43.309: E/AndroidRuntime(2863): Caused by: java.lang.NullPointerException
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.DeckOfCards.<init>(DeckOfCards.java:17)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.resetGame(Main.java:98)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.onCreate(Main.java:67)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Activity.performCreate(Activity.java:5008)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079)
        10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023)
        10-15 20:21:43.309: E/AndroidRuntime(2863): ... 11 more

    My AndroidManifest.xml:

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.smith.blackjack"
            android:versionCode="1"
            android:versionName="1.0" >
            <uses-sdk
                android:minSdkVersion="11"
                android:targetSdkVersion="15" />
            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name"
                android:theme="@style/AppTheme" >
                <activity
                    android:name=".Main"
                    android:label="@string/title_activity_main" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
        </manifest>

    Here is my Main.java:

        package com.smith.blackjack;

        import android.os.Bundle;
        import android.app.Activity;
        import android.content.res.AssetManager;
        import android.graphics.drawable.Drawable;
        import java.io.IOException;
        import java.io.InputStream;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.ImageView;

        public class Main extends Activity {

            private ImageView dealerCard0;
            private ImageView dealerCard1;
            private ImageView dealerCard2;
            private ImageView dealerCard3;
            private ImageView playerCard0;
            private ImageView playerCard1;
            private ImageView playerCard2;
            private ImageView playerCard3;
            private ImageView imgResult;
            private Button btnDeal;
            private Button btnDraw;
            private Button btnHold;
            private DeckOfCards deckOfCards;
            private int[] dealerValues;
            private int dealerSum;
            private int dealerCardNumber;
            private int[] playerValues;
            private int playerSum;
            private int playerCardNumber;
            private InputStream dealerHiddenCard;
            private Card dealerCard;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                dealerCard0 = (ImageView) findViewById(R.id.dealerCard0);
                dealerCard1 = (ImageView) findViewById(R.id.dealerCard1);
                dealerCard2 = (ImageView) findViewById(R.id.dealerCard2);
                dealerCard3 = (ImageView) findViewById(R.id.dealerCard3);
                playerCard0 = (ImageView) findViewById(R.id.playerCard0);
                playerCard1 = (ImageView) findViewById(R.id.playerCard1);
                playerCard2 = (ImageView) findViewById(R.id.playerCard2);
                playerCard3 = (ImageView) findViewById(R.id.playerCard3);
                imgResult = (ImageView) findViewById(R.id.imgResult);
                btnDeal = (Button) findViewById(R.id.deal);
                btnDraw = (Button) findViewById(R.id.draw);
                btnHold = (Button) findViewById(R.id.hold);
                btnDeal.setOnClickListener(btnDealListener);
                btnDraw.setOnClickListener(btnDrawListener);
                btnHold.setOnClickListener(btnHoldListener);
                resetGame();
            }

            private void resetGame() {
                AssetManager assets = getAssets();
                dealerValues = new int[4];
                playerValues = new int[4];
                dealerSum = 0;
                playerSum = 0;
                dealerCardNumber = 0;
                playerCardNumber = 0;
                for (int i = 0; i < 4; i++) {
                    dealerValues[i] = 0;
                    playerValues[i] = 0;
                }
                try {
                    InputStream stream = assets.open("cardback.png");
                    // stream = assets.open("cardback.png");
                    Drawable cardImage = Drawable.createFromStream(stream, null);
                    dealerCard0.setImageDrawable(cardImage);
                    dealerCard1.setImageDrawable(cardImage);
                    dealerCard2.setImageDrawable(cardImage);
                    dealerCard3.setImageDrawable(cardImage);
                    playerCard0.setImageDrawable(cardImage);
                    playerCard1.setImageDrawable(cardImage);
                    playerCard2.setImageDrawable(cardImage);
                    playerCard3.setImageDrawable(cardImage);
                    imgResult.setImageDrawable(cardImage);
                    deckOfCards = new DeckOfCards();
                    deckOfCards.shuffle();
                    assets.close();
                } catch (IOException e) {
                    Log.e("Reset Game", "Error Loading", e);
                }
            }

            public OnClickListener btnDealListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    try {
                        AssetManager assets = getAssets();
                        InputStream stream;
                        // first player card
                        Card newCard;
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        Drawable cardImage = Drawable.createFromStream(stream, newCard.File);
                        playerCard0.setImageDrawable(cardImage);
                        assets.close();
                        // second player card
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        cardImage = Drawable.createFromStream(stream, newCard.File);
                        playerCard1.setImageDrawable(cardImage);
                        assets.close();
                        // first dealer card hidden
                        newCard = deckOfCards.dealCard();
                        dealerCard = newCard;
                        dealerValues[dealerCardNumber] = newCard.faceValue;
                        dealerCardNumber++;
                        dealerHiddenCard = assets.open(newCard.File);
                        stream = assets.open("cardback.png");
                        cardImage = Drawable.createFromStream(stream, "cardback");
                        dealerCard0.setImageDrawable(cardImage);
                        assets.close();
                        // second dealer card open
                        newCard = deckOfCards.dealCard();
                        dealerValues[dealerCardNumber] = newCard.faceValue;
                        dealerCardNumber++;
                        stream = assets.open(newCard.File);
                        cardImage = Drawable.createFromStream(stream, newCard.File);
                        dealerCard1.setImageDrawable(cardImage);
                        assets.close();
                    } catch (IOException e) {
                        Log.e("Deal", "Error Loading", e);
                    }
                }
            };

            public OnClickListener btnDrawListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    try {
                        AssetManager assets = getAssets();
                        InputStream stream;
                        // get next player card
                        Card newCard;
                        newCard = deckOfCards.dealCard();
                        playerValues[playerCardNumber] = newCard.faceValue;
                        playerCardNumber++;
                        stream = assets.open(newCard.File);
                        Drawable cardImage = Drawable.createFromStream(stream, newCard.File);
                        switch (playerCardNumber) {
                            case 3: playerCard2.setImageDrawable(cardImage);
                            case 4: playerCard3.setImageDrawable(cardImage);
                        }
                        assets.close();
                    } catch (IOException e) {
                        Log.e("Draw", "Error Loading", e);
                    }
                }
            };

            public OnClickListener btnHoldListener = new OnClickListener() {
                // @Override
                public void onClick(View v) {
                    Drawable cardImage;
                    // evaluate player hand
                    playerSum = evaluate(playerValues);
                    if (playerSum > 21) {
                        // player losses
                    }
                    // flip over the dealer hidden card
                    cardImage = Drawable.createFromStream(dealerHiddenCard, dealerCard.File);
                    Card newCard;
                    InputStream stream;
                    AssetManager assets = getAssets();
                    for (int i = 2; i < 4; i++) {
                        dealerSum = evaluate(dealerValues);
                        if (dealerSum < 16) {
                            newCard = deckOfCards.dealCard();
                            dealerValues[dealerCardNumber] = newCard.faceValue;
                            dealerCardNumber++;
                            try {
                                stream = assets.open(newCard.File);
                                cardImage = Drawable.createFromStream(stream, newCard.File);
                                switch (dealerCardNumber) {
                                    case 3: dealerCard2.setImageDrawable(cardImage);
                                    case 4: dealerCard3.setImageDrawable(cardImage);
                                }
                                assets.close();
                            } catch (IOException e) {
                                Log.e("Draw", "Error Loading", e);
                            }
                            if (dealerSum < playerSum) {
                                // player wins
                            }
                            if (dealerSum > playerSum) {
                                // dealer wins
                            }
                            if (dealerSum == playerSum) {
                                // it is a draw
                            }
                        }
                    }
                }
            };

            public int evaluate(int[] values) {
                int sumCards = 0;
                for (int i = 0; i < 4; i++) {
                    sumCards += values[i];
                }
                if (sumCards > 21) {
                    for (int i = 0; i < 4; i++) {
                        if (values[i] == 11) {
                            values[i] = 1;
                            sumCards -= 10;
                            continue;
                        }
                    }
                }
                return sumCards;
            }
        }

    My DeckOfCards class:

        package com.smith.blackjack;

        import java.util.Random;

        public class DeckOfCards {

            private Card[] deck;
            private int currentCard;
            private static final int NUMBER_OF_CARDS = 52;
            private static final Random randomNumbers = new Random();

            public DeckOfCards() {
                deck = new Card[NUMBER_OF_CARDS];
                currentCard = 0;
                for (int count = 0; count < deck.length; count++) {
                    deck[count].faceValue = count + 1;
                }
            }

            public void shuffle() {
                currentCard = 0;
                for (int first = 0; first < deck.length; first++) {
                    int second = randomNumbers.nextInt(NUMBER_OF_CARDS);
                    int temp = deck[first].faceValue;
                    deck[first].faceValue = deck[second].faceValue;
                    deck[second].faceValue = temp;
                }
            }

            public Card dealCard() {
                Card temp = new Card();
                temp.faceValue = 0;
                temp.File = "";
                if (currentCard < deck.length) {
                    temp.faceValue = deck[currentCard].faceValue / 4;
                    int suit = deck[currentCard].faceValue % 4;
                    String suitString = "";
                    switch (suit) {
                        case 0: suitString = "c";
                        case 1: suitString = "d";
                        case 2: suitString = "h";
                        case 3: suitString = "s";
                    }
                    Integer face = temp.faceValue / 4;
                    String faceString = face.toString();
                    temp.File = faceString + suitString + ".png";
                    switch (temp.faceValue) {
                        case 11: temp.faceValue = 10;
                        case 12: temp.faceValue = 10;
                        case 13: temp.faceValue = 10;
                    }
                    return temp;
                } else
                    return temp;
            }
        }
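
    For what it's worth, the trace above points at DeckOfCards.<init> (DeckOfCards.java:17), which is the deck[count].faceValue = count + 1; line: new Card[NUMBER_OF_CARDS] allocates an array of null references, so the first field access throws the NullPointerException before any image loading happens. A minimal sketch of the likely fix, using the names from the posted code:

        for (int count = 0; count < deck.length; count++) {
            deck[count] = new Card();          // allocate each Card before touching its fields
            deck[count].faceValue = count + 1;
        }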

    Read the article

  • How to run Repository Creation Utility (RCU) on 64-bit Linux

    - by Kevin Smith
    I was setting up WebCenter Content (WCC) on a new virtual box running 64-bit Linux and ran into a problem when I tried to run the Repository Creation Utility (RCU). I saw this error when trying to start RCU:

        .../rcuHome/jdk/jre/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

    I think I remember running into this before and reading something about RCU only being supported on 32-bit Linux. I decided to try and see if I could get it to run on 64-bit Linux. I saw it was using its own copy of Java (.../rcuHome/jdk/jre/bin/java), so I decided to try and get it to use the 64-bit JRockit I had already installed. I edited the rcu script in rcuHome/bin and replaced

        JRE_DIR=$ORACLE_HOME/jdk/jre

    with

        JRE_DIR=/apps/java/jrockit-jdk1.6.0_29-R28.2.2-4.1.0

    Sure enough, that fixed it. I was able to run RCU and create the WCC schema.
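
    The same edit as a one-liner, for convenience. This is a sketch that assumes JRE_DIR is assigned on its own line in the script, as the post implies; the JRockit path is the one from the post, so substitute your own 64-bit JDK location:

        sed -i 's|^JRE_DIR=.*|JRE_DIR=/apps/java/jrockit-jdk1.6.0_29-R28.2.2-4.1.0|' rcuHome/bin/rcu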

    Read the article

  • Stark Expo Needs You

    - by [email protected]
    Train to Become a Master Cloud Operative

    Can't wait until September to get your Oracle fix? Then come visit us at the Stark Expo now. Marvel Entertainment has turned itself into one of the hottest media companies of the digital age, and at the heart of Marvel's growth and transformation is Oracle technology. Now, this successful collaboration finds its way to the big screen, as Oracle joins forces with Marvel to launch a special showcase Website and movie trailer for the upcoming Iron Man 2. In Iron Man 2, Oracle is a proud sponsor of Stark Expo, a world-class tradeshow that depends on a cloud computing architecture to ensure that systems are free from overload. Starting today, visitors to the showcase Website are invited to become Master Cloud Operatives and keep Stark Expo up and running. Complete your training, test your troubleshooting skills in the Oracle Pavilion, and qualify to receive a free movie poster.

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction

    I’m currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases.

    As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I’ve got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I’ve not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging

    Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope

    In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

        // Create a transactional scope.
        using (TransactionScope scope = new TransactionScope())
        {
            // Do something.

            // Do something else.

            // Commit the transaction.
            scope.Complete();
        }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging

    Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible.

    When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for “all or nothing” delivery of a series of messages to a single queue or topic.

    When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction. The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction

    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

        QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

        Console.Write("Sending");

        // Create a transaction scope.
        using (TransactionScope scope = new TransactionScope())
        {
            for (int i = 0; i < 10; i++)
            {
                // Send a message
                BrokeredMessage msg = new BrokeredMessage("Message: " + i);
                queueClient.Send(msg);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
            Console.WriteLine();

            // Should we commit the transaction?
            Console.WriteLine("Commit send 10 messages? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
        }
        Console.WriteLine();
        messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent, the user has the option to either commit the transaction or abandon the transaction. If the user enters “yes”, the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than “yes”, the transaction will not commit, and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction

    The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related together, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

        using (TransactionScope scope = new TransactionScope())
        {
            while (true)
            {
                // Receive a message.
                BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
                if (msg != null)
                {
                    // Wrote message body and complete message.
                    string text = msg.GetBody<string>();
                    Console.WriteLine("Received: " + text);
                    msg.Complete();
                }
                else
                {
                    break;
                }
            }
            Console.WriteLine();

            // Should we commit?
            Console.WriteLine("Commit receive? (yes or no)");
            string reply = Console.ReadLine();
            if (reply.ToLower().Equals("yes"))
            {
                // Commit the transaction.
                scope.Complete();
            }
            Console.WriteLine();
        }

    Note that if there are a large number of messages to be received, there will be a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller amounts of messages within the transaction.

    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                // Receive one message from each subscription.
                BrokeredMessage msg1 = subscriptionClient1.Receive();
                BrokeredMessage msg2 = subscriptionClient2.Receive();

                // Complete the message receives.
                msg1.Complete();
                msg2.Complete();

                Console.WriteLine("Msg1: " + msg1.GetBody<string>());
                Console.WriteLine("Msg2: " + msg2.GetBody<string>());

                // Commit the transaction.
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    Unsupported Scenarios

    The restriction of only one top-level messaging entity being able to participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases.

    The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

        try
        {
            // Create a transaction scope.
            using (TransactionScope scope = new TransactionScope())
            {
                BrokeredMessage msg1 = new BrokeredMessage("Message1");
                BrokeredMessage msg2 = new BrokeredMessage("Message2");

                // Send a message to Queue1
                Console.WriteLine("Sending Message1");
                queue1Client.Send(msg1);

                // Send a message to Queue2
                Console.WriteLine("Sending Message2");
                queue2Client.Send(msg2);

                // Commit the transaction.
                Console.WriteLine("Committing transaction...");
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }

    The results of running the code are shown below. When attempting to send a message to the second queue, the following exception is thrown:

        No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported.

    Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios

    There are some techniques that developers can use to work around the one top-level entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would use the default filters, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction.

    In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous amount of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken.

    Without a sophisticated guide like the article above, I have since adopted a "defensive" style of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validation and assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)
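
    On the GNU make question (2c): make's value is that it records the dependency graph of your analysis and re-runs only the steps whose inputs changed. A minimal sketch for a MATLAB workflow; the file names are made up, and the -batch flag assumes a recent MATLAB (older versions use -nodisplay -r "analysis; exit" instead). Note that make recipe lines must start with a tab:

        # results depend on the analysis script and the raw data
        results.mat: analysis.m raw_data.csv
        	matlab -batch "analysis"

        # figures depend on the computed results
        fig1.png: plot_results.m results.mat
        	matlab -batch "plot_results"

        all: fig1.png

    Running make after editing only plot_results.m would then redraw the figure without re-crunching the numbers.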

    Read the article

  • What's In Storage?

    - by [email protected]
    Oracle Flies South for Storage Networking Event

    Storage Networking World (now simply called SNW) is the place you'll find the most comprehensive education on storage, infrastructure, and the datacenter in the spring of 2010. It's also the place where you'll see Oracle. During the April 12-15 event in Orlando, Florida, the industry's premier presentations on storage trends and best practices are combined with hands-on labs covering storage management and IP storage. You'll also have the opportunity to learn about Oracle's Sun storage solutions, from Flash and open storage to enterprise disk and tape. Plus, if you stop by booth 207 in the expo hall, you might walk away with a bookish prize: an Amazon Kindle, courtesy of Oracle. Proving, once again, that education can be quite rewarding.

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi, this question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments.

    My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to management's need to meet business requirements while quite a bit of ad hoc-type requests come from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master, and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend on the agile team just for kicks, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to give the agile developers a complete vacuum from any interruptions and have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of Yahoo's agile rollout document and similar ones from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and correct issues as they arise to put them back on track. For starters, it requires training for developers, coaching for managers, etc., etc. The current Scrum master is a manager who took a couple of days of agile training class, paid for by the management, and is now leading this agile team. I have also heard in the meeting that the agile manifesto doesn't dictate anything, that agile is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.

    In conclusion, I always thought agile was supposed to bring harmony to development teams, which results in happy developers. However, I am getting the very opposite feeling when talking to the developers on the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more.

    Tell me, please: is this one of those examples of good practices being used for selfish advantage, for more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where all they breathe is work, simply because they are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap. Cutting out expensive coffee for a cheaper version, emphasis on savings and on being productive while staying as lean as possible. My feeling is that someone in management, behind the door, threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case.

    EDITED: They are having their 5 min daily meeting. But they are not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • Stopping by the Store

    - by [email protected]
    Registrants Get Online Savings on Oracle Products

    Have you heard about the Oracle Store? It's the one-stop online shop for buying Oracle software and support at significant savings. Better yet, when you register for Oracle OpenWorld 2010 by April 30, you can get an additional 10% off your next purchase. The 10% discount applies to a one-time "click and buy" checkout, so load up as many items as you can. To get started, you'll need to visit the Oracle OpenWorld registration page to get more information about the promotion, including the promo code and link. It's another great way to turn your early bird registration into a long-term gain for your organization.

    Read the article

  • CQRS – Questions and Concerns

    - by Dylan Smith
    I've been doing a lot of learning on CQRS and Event Sourcing over the last little while, and I have a number of questions that I haven't been able to answer.

    1. What is the benefit of CQRS when compared to a typical DDD architecture that uses Event Sourcing and properly captures intent and behavior via verb-based commands? (other than scalability)
    2. When using CQRS, what do you do with complex query-based logic?

    I'm going to elaborate on #1 in this blog post and I'll do a follow-up post on #2.

    I watched through Greg Young's video on the business benefits of CQRS + Event Sourcing, and first let me say that I thought it was an excellent presentation that really drives home a lot of the benefits of this approach to architecture (I watched it twice in a row I enjoyed it so much!). But it didn't answer some of my questions fully (I wish I had been there to ask these of Greg in person!). So let me pick apart some of the points he makes and how they relate to my first question above. I'm completely sold on the idea of event sourcing and have a clear understanding of the benefits that it brings to the table, so I'm not going to question that. But you can use event sourcing without going to a CQRS architecture, so my main question is around the benefits of CQRS + Event Sourcing vs Event Sourcing + a typical DDD architecture.

    Architecture with Event Sourcing + Commands on Left, CQRS on Right

    Greg talks about how the stereotypical architecture doesn't support DDD, but is that only because his diagram shows DTOs coming up from the client? If we use the same diagram but allow the client to send commands, doesn't that remove a lot of the arguments that Greg makes against the stereotypical architecture? We can now introduce verbs into the system. We can capture intent now (storing it still requires event sourcing, but you can implement event sourcing without doing CQRS). We can create a rich domain model (as opposed to an anemic domain model).

    Scalability is obviously a benefit that CQRS brings to the table, but like Greg says, very few of the systems we create truly need significant scalability.

    Greg talks about the ability to scale your development efforts. He says CQRS allows you to split the system into 3 parts (Client, Domain/Commands, Reads) and assign 3 teams of developers to work on them in parallel, letting you scale your development efforts by 3x with nearly linear gains. But in the stereotypical architecture don't you already have 2 separate modules that you can split your dev efforts between: the client that sends commands/queries and receives DTOs, and the domain which accepts commands/queries and generates events/DTOs? If this is true, it's not really a 3x scaling you achieve with CQRS but merely a 1.5x scaling, which, while great, doesn't sound nearly as dramatic ("I can do it with 10 devs in 12 months – let me hire 5 more and we can have it done in 8 months").

    Making the query side "stupid simple", such that you can assign junior developers (or even outsource it), sounds like a valid benefit, but I have some concerns over what you do with complex query-based logic/behavior. I'm going to go into more detail on this in a follow-up blog post shortly. He also seemed to focus on how "stupid-simple" it is doing queries against the de-normalized data store, but I imagine there is still significant complexity in the event handlers that interpret the events and apply them to the de-normalized tables.

    It sounds like Greg suggests that doing CQRS is what allows us to apply Event Sourcing when we otherwise wouldn't be able to (~33:30 in the video). I don't believe this is true. I don't see why you wouldn't be able to apply Event Sourcing without separating out the Commands and Queries. The queries would just operate against the domain model instead of the database, but you'd still get the benefits of Event Sourcing. Without CQRS the queries would only be able to operate against the current state rather than the event history, but even in CQRS the domain behaviors can only operate against the current state, and I don't see that being a big limiting factor. If some query needs to operate against something that is not captured by the current state, you would just have to update the domain model to capture that information (no different than if that statement were made about a Command under CQRS).

    Some of the benefits I do see being applicable are that your domain model might end up being simpler/smaller, since it only needs to represent the state needed to process commands and not worry about the reads (like the Deactivate Inventory Item and associated comment example that Greg provides), and also commands that can be handled in a Transaction Script style manner by the command handler simply generating events and not touching the domain model. It also makes it easier for your senior developers to focus on the command behavior and ignore the queries, which is usually going to be a better use of their time. And of course scalability.

    If anybody out there has any thoughts on this and can help educate me further, please either leave a comment or feel free to get in touch with me via email:
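
    To make the comparison concrete, a tiny C# sketch (invented names, not from Greg's material or my own code) of the shape under discussion: a verb-based command captures intent on the write side, the stored event records what happened, and the read side is a separate, de-normalized query surface:

        using System;

        // write side: a verb-based command that captures intent
        public class DeactivateInventoryItemCommand
        {
            public Guid ItemId { get; set; }
            public string Comment { get; set; }
        }

        // the event that gets persisted under event sourcing
        public class InventoryItemDeactivated
        {
            public Guid ItemId { get; set; }
            public string Comment { get; set; }
            public DateTime OccurredAt { get; set; }
        }

        // read side: a "stupid simple" DTO served from a de-normalized store
        public class InventoryItemDto
        {
            public Guid ItemId { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
        }

        public interface IInventoryReadModel
        {
            InventoryItemDto GetItem(Guid itemId);
        }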

    Read the article

  • Michael Stephenson joins CloudCasts

    - by Alan Smith
    Mike Stephenson has recorded a couple of webcasts focusing on build and test in BizTalk Server 2009. These are part of the “BizTalk Light & Easy” series of webcasts created by some of the BizTalk Server MVPs:

    Testing BizTalk Applications
    Implementing an Automated Build Process with BizTalk Server 2009

    Read the article

  • Silent Partner

    - by [email protected]
    The Team Behind the Man Behind the Mask

    As a continuing sponsor of the blockbuster Iron Man franchise, Oracle has been quietly preparing for the explosive sequel blasting its way into theaters this May. Through a series of advertising campaigns, immersive online experiences, and contests, Oracle plans to highlight its backstage efforts to help Marvel Entertainment hone its newfound superpowers. By driving the performance of critical systems, Oracle technologies are helping Marvel transform itself from mild-mannered comic book publisher to film industry power broker. You can learn more about this dynamic duo, and get free movie memorabilia, by visiting our Iron Man 2 showcase site.

    Read the article

  • Missing features from WebGL and OpenGL ES

    - by Chris Smith
    I've started using WebGL and am pleased with how easy it is to leverage my OpenGL (and by extension OpenGL ES) experience. However, my understanding is as follows:

    OpenGL ES is a subset of OpenGL
    WebGL is a subset of OpenGL ES

    Is this correct for both cases? If so, are there resources detailing which features are missing? For example, one notable missing feature is glPushMatrix and glPopMatrix. I don't see those in WebGL, but in my searches I cannot find them referenced in OpenGL ES material either.
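
    On the missing matrix stack: fixed-function matrix state was dropped from OpenGL ES 2.0 and therefore from WebGL, so the usual replacement is a small client-side stack feeding a shader uniform. A rough JavaScript sketch; the mat4 helpers are from the gl-matrix library, which is a common but assumed choice, and uModelViewLoc is a placeholder uniform location:

        // a minimal stand-in for glPushMatrix/glPopMatrix
        const matrixStack = [];
        let modelView = mat4.create();   // starts as the identity matrix

        function pushMatrix() { matrixStack.push(mat4.clone(modelView)); }
        function popMatrix()  { modelView = matrixStack.pop(); }

        // upload the current top of the stack before each draw call
        gl.uniformMatrix4fv(uModelViewLoc, false, modelView);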

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts. You can find an excellent description of how this works at James Bayer’s blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you have Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result, which is then stored in the cache for future requests. I’m thinking this caching functionality would be perfect for cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a Dedicated JVM

    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and on subsequent calls the respective results are retrieved from the cache rather than from the external system.

    Step 1 – Set Up Your Coherence Server

    Via the OSB Administration Server Console, create the Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I’m using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:/app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:/app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

        -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 – Configure Your Business Service

    Under the respective Business Service Message Handling Configuration (Advanced Properties), enable “Result Caching”. Additionally, you need to determine what the cache data will be keyed on. In the example below, I’m keying it on the unique Employee Id.

    The Results

    As this test ran on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id, first with result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.
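    To make the caching behavior concrete, here is a conceptual sketch in TypeScript of a result cache keyed on a single token with a time-to-live. This is purely illustrative: in the real setup the cache lives in the dedicated Coherence JVM and OSB manages it declaratively, so none of these names are OSB or Coherence APIs.

    ```typescript
    // Conceptual TTL result cache, keyed on the unique Employee Id as in the
    // Business Service configuration above. Illustrative only; not an OSB API.
    type CacheEntry<T> = { value: T; expiresAt: number };

    class ResultCache<T> {
      private entries = new Map<string, CacheEntry<T>>();
      constructor(private ttlMillis: number) {}

      async getOrInvoke(key: string, invokeBackend: () => Promise<T>): Promise<T> {
        const hit = this.entries.get(key);
        if (hit !== undefined && hit.expiresAt > Date.now()) {
          return hit.value; // cache hit: the back-end service is never called
        }
        // Cache miss or expired entry: call the back-end service, cache the result.
        const value = await invokeBackend();
        this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMillis });
        return value;
      }
    }

    // With a hypothetical 60-second TTL, 10,000 identical requests would invoke
    // the back end once and serve the rest from cache, mirroring the test above.
    const employeeCache = new ResultCache<string>(60_000);
    async function getEmployee(id: string): Promise<string> {
      return employeeCache.getOrInvoke(id, async () => `employee-data-for-${id}`);
    }
    ```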

    Read the article

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post, I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try to sum it up here, and point out some areas where I could still use some advice.

    CQRS Benefits

    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general, the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior: sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad hoc querying and/or reporting that is common in most business software.

    Of course, CQRS provides some great scalability benefits; not only scalability, but I have to assume extremely low latency for most operations, especially with an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems to provide only 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain, which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this?

    I know from the feedback on my previous posts it was suggested that having one domain model handle both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it were only servicing commands. My question is: do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer, should I model that in my domain model or not?

    I like the idea of using the query side of the architecture as a place to put junior devs, where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge: knowing which events to listen for to update each of the denormalized tables, and what changes to make when each event is processed. In my experience, having outsourced developers do anything that requires significant domain knowledge has never been successful. And if you need to spec out, for each new query, which events to listen to and what to do with each one, that’s probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, and TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD + Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post, I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was worried that doing so would lose the benefits of using CQRS in the first place. I want to elaborate more on this with some example situations in an upcoming post.
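    To ground a couple of these points, here is a hypothetical TypeScript sketch of a pass-through command handler (no domain model involved) next to a query-side projection that maintains a denormalized “Total Sales By Customer” table. All names are illustrative; none of this comes from any particular framework.

    ```typescript
    // An event emitted by a "pass-through" command: no domain object involved.
    interface OrderPlaced {
      kind: "OrderPlaced";
      customerId: string;
      amount: number;
    }

    // Command side: validate minimally and emit an event straight from the handler.
    function handlePlaceOrder(
      customerId: string,
      amount: number,
      publish: (e: OrderPlaced) => void
    ): void {
      if (amount <= 0) throw new Error("amount must be positive");
      publish({ kind: "OrderPlaced", customerId, amount });
    }

    // Query side: this projection holds the domain knowledge of which events feed
    // the denormalized read table and how each event updates it.
    const totalSalesByCustomer = new Map<string, number>();

    function project(e: OrderPlaced): void {
      const current = totalSalesByCustomer.get(e.customerId) ?? 0;
      totalSalesByCustomer.set(e.customerId, current + e.amount);
    }

    // Wiring (synchronous here; an asynchronous event bus in a real system).
    handlePlaceOrder("cust-42", 100, project);
    handlePlaceOrder("cust-42", 50, project);

    // A query DTO now reads straight off the denormalized table; no domain
    // traversal such as Customer.Orders is needed:
    // { customerId: "cust-42", totalSales: 150 }
    ```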

    Read the article

  • What kinds of low level knowledge matter?

    - by Peter Smith
    I realize that this question is similar to "Low level programming - what's in it for me", but the answers didn't really address my question well. Apart from a general understanding, how exactly does low-level knowledge translate into faster and better programs? There's the obvious lack of garbage collection, but what else is an advantage? Do you really outperform your optimizing compiler? Do you pack your data structures as tightly as possible and worry about alignment? There's extra freedom, naturally, but does that really translate into a faster program?
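    As a rough illustration of the packing question, here is a TypeScript sketch contrasting an array of heap objects with a packed typed-array layout. The language is deliberately high-level to show that the idea is about memory layout rather than syntax; in C this would be a struct-layout and alignment decision, and all numbers here are illustrative.

    ```typescript
    // The same 100,000 (x, y) points stored two ways.
    const COUNT = 100_000;

    // Unpacked: each point is a separate heap object with header and pointer
    // overhead, and iteration chases references scattered across the heap.
    const points = Array.from({ length: COUNT }, (_, i) => ({ x: i, y: 2 * i }));

    // Packed: two 32-bit floats per point, 8 bytes per point, laid out
    // contiguously and naturally aligned for Float32Array access.
    const packed = new Float32Array(2 * COUNT);
    for (let i = 0; i < COUNT; i++) {
      packed[2 * i] = i;         // x
      packed[2 * i + 1] = 2 * i; // y
    }

    // Summing the packed buffer touches consecutive memory, which is cache
    // friendly; much of the practical low-level speedup comes from this kind
    // of locality rather than from the language itself.
    let sumPacked = 0;
    for (let i = 0; i < 2 * COUNT; i++) sumPacked += packed[i];

    let sumObjects = 0;
    for (const p of points) sumObjects += p.x + p.y;
    ```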

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go.

    If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, and proper adherence to the SOLID principles are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions done with something like an Access database, an Excel spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won’t bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, IoC containers, an abstracted persistence layer, and so on is complete overkill when a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small, trivial application today, but these things always grow and stick around for many years, and then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure, you can’t always anticipate how an application will be used or grow over its lifetime (can you ever?), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point… maybe even multiple times).

    My thoughts are that we should make a conscious decision around the start of each project about approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance – How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity – How complex is the application functionality?
    3. Life-Expectancy – How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic, expected to be in use for many years to come?

    Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques. Rather, my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed, a large number of projects should probably fall somewhere in the middle, and different projects should adopt different levels of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row.

    Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs, we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning we will just quickly create something with very little attention paid to mature software engineering practices. If we examine this project against the 3 criteria listed above for determining its place within the SEMS, we can see why:

    Importance – If this application were to stop working, the business doesn’t grind to a halt and revenue doesn’t stop; in fact, our customers wouldn’t even notice, since it isn’t a customer-facing application. The impact would simply be more work for our event planning staff as they revert to the previous way of doing things (assuming we don’t have any data loss).
    Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately: precisely the task that Access (and/or Excel) can do with minimal custom development.
    Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well, this may change (see below).

    Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS and make a conscious decision about whether something needs to be done to move it further to the right, based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. We can still probably get away without adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View-style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release, we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, and so on. More likely, in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it were a slow, phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we might be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time, but rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: there is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you were to assess a project at somewhere around the 80% mark on the SEMS today, but didn’t touch the code for 3 years and came back to re-assess its position, it would almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect of this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior to intermediate to senior and beyond, they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (exactly which practices and techniques that translates to is constantly changing, but that’s not the point here).

    A critical skill that I’d love to see more evidence of in our industry is the most senior people not only being able to work at the right end of the SEMS, but, more importantly, being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so incrementally, without having to start from scratch.

    An exercise I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe each project *should* be on the SEMS based on the business needs, and, for those that don’t match up (i.e. most of them), come up with a plan to improve the situation.

    Read the article
